Exotic 7-spheres

November 18, 2010

Since I already gave a talk about Milnor’s paper on constructing exotic 7-spheres (smooth manifolds that are homeomorphic but not diffeomorphic to the standard 7-sphere), I figured it was a reasonable topic for a blog entry. The result is also pretty cool, as it very much disagrees with our intuition of how spheres should behave, based on examples in lower dimensions. Furthermore, it is given by a construction that is reasonably tangible (the manifolds Milnor constructs are 3-sphere bundles over the 4-sphere for which an explicit classifying map is given, as well as an explicit Morse function to show that each is in fact a topological 7-sphere), rather than something arising from invoking the axiom of choice.

Of course, there are nonintuitive results about higher dimensions that can be more simply stated- for example, place 2^n unit spheres in the corners of an n-dimensional cube with all side lengths equal to 4, so that each corner sphere is tangent to the n faces meeting at its corner, and ask how large the sphere in the center tangent to all 2^n of them is. Taking the cube to be [-2, 2]^n, the centers of the corner spheres sit at the points (\pm 1, \ldots, \pm 1), at distance \sqrt n from the center of the cube, so subtracting the radius of a corner sphere, we find that the inner sphere has radius \sqrt n - 1. However, this means that when n = 9, the inner sphere has radius 2, and therefore is tangent to the hypercube at all 2n = 18 of its faces. Even weirder still is the situation when n > 9. In this case, \sqrt n - 1 > 2, so the radius of the sphere is larger than the distance from the center of the cube to its faces, telling us that near the center of each face, the sphere is actually protruding out of the cube.
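Since the claim that the central sphere eventually escapes the cube is so counterintuitive, here is a tiny Python check (just an illustration of the arithmetic above, nothing more) comparing the radius \sqrt n - 1 to the half-side length 2:

    import math

    # Cube [-2, 2]^n (side length 4); 2^n unit spheres centered at (+-1, ..., +-1).
    # The central sphere tangent to all of them has radius sqrt(n) - 1.
    for n in (2, 3, 4, 9, 10, 100):
        r = math.sqrt(n) - 1
        if r > 2:
            status = "protrudes out of the cube"
        elif r == 2:
            status = "tangent to all 2n faces"
        else:
            status = "strictly inside the cube"
        print(f"n = {n:3d}: central radius = {r:.3f} ({status})")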

Now, for X to be an exotic 7-sphere, X must satisfy two properties: (1) X is not diffeomorphic to S^7, and (2) X is homeomorphic to S^7. This at first seems very counterintuitive- how can a manifold be homeomorphic to S^7 via a map that is nice enough to write down, while admitting no diffeomorphism to S^7 at all?

We know that any diffeomorphism of smooth manifolds f: X \to Y naturally induces an isomorphism of tangent bundles Tf: TX \to TY, as f gives us an isomorphism of tangent spaces at each point. So, roughly speaking, the way to rule out a diffeomorphism is to find an invariant built out of the smooth structure that takes different values on X and on S^7; Milnor’s invariant \lambda is of this kind, defined using the signature and the first Pontryagin class of an 8-manifold that X bounds.

It is well known (this is Reeb’s theorem) that if a compact n-dimensional manifold M admits a smooth map f: M \to \mathbb R with exactly two critical points, both of which are nondegenerate, then M is homeomorphic to the n-sphere. This is the criterion that we will use to check that our constructed manifold is homeomorphic to S^7.

We now consider manifolds that are given as 3-sphere bundles over S^4. These are classified by (homotopy classes of) clutching functions S^3 \to SO(4), in other words, by elements of \pi_3(SO(4)). Note that this group is isomorphic to \mathbb Z \oplus \mathbb Z, via the following correspondence: to each (m, n) \in \mathbb Z \oplus \mathbb Z we associate the map u \mapsto (v \mapsto u^m v u^n), where we identify S^3 with the unit quaternions and \mathbb R^4 with the quaternions, so that the multiplication on the right makes sense. Let \xi_{m, n} be the 3-sphere bundle corresponding to a given pair (m, n). If we let \iota be the standard generator of H^4(S^4), it turns out that the first Pontryagin class is p_1(\xi_{m, n}) = \pm 2(m - n) \iota.
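To see these clutching maps a little more concretely, here is a short Python sketch (my own illustration, not something from Milnor’s paper) of the assignment u \mapsto (v \mapsto u^m v u^n) using quaternion multiplication; the check at the end confirms that, for a unit quaternion u, the resulting linear map of \mathbb R^4 preserves the norm, so it does land in the orthogonal group (in fact in SO(4)).

    import numpy as np

    def qmul(a, b):
        """Hamilton product of quaternions a = a0 + a1*i + a2*j + a3*k and b."""
        a0, a1, a2, a3 = a
        b0, b1, b2, b3 = b
        return np.array([
            a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 - a1*b3 + a2*b0 + a3*b1,
            a0*b3 + a1*b2 - a2*b1 + a3*b0,
        ])

    def qconj(a):
        return np.array([a[0], -a[1], -a[2], -a[3]])

    def qpow(u, m):
        """u**m for a unit quaternion u and any integer m (negative powers via the conjugate)."""
        base = u if m >= 0 else qconj(u)
        out = np.array([1.0, 0.0, 0.0, 0.0])
        for _ in range(abs(m)):
            out = qmul(out, base)
        return out

    def clutching_map(u, m, n):
        """The rotation of R^4 associated to (m, n) and the unit quaternion u: v -> u^m v u^n."""
        return lambda v: qmul(qmul(qpow(u, m), v), qpow(u, n))

    rng = np.random.default_rng(0)
    u = rng.normal(size=4); u /= np.linalg.norm(u)      # a random unit quaternion
    v = rng.normal(size=4)                              # a random vector in R^4
    w = clutching_map(u, m=2, n=-1)(v)                  # (m, n) = (2, -1): m + n = 1, k = m - n = 3
    print(np.isclose(np.linalg.norm(w), np.linalg.norm(v)))   # True: the map is orthogonal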

For m, n such that m + n = 1 and m - n = k, define M_k to be the total space of the bundle \xi_{m, n}. It turns out that M_k admits a Morse function satisfying our previous condition, and hence is homeomorphic to S^7. Furthermore, using the Pontryagin class above, one computes \lambda(M_k) \equiv k^2 - 1 \pmod 7, while \lambda(S^7) = 0. So whenever k^2 \not\equiv 1 \pmod 7 (for example, k = 3), the manifold M_k is not diffeomorphic to the standard 7-sphere, as desired.
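Just to spell out which values of k give exotic spheres (a one-line computation, included only for concreteness):

    # lambda(M_k) = k^2 - 1 (mod 7); M_k is exotic whenever this is nonzero,
    # i.e. whenever k is not congruent to +-1 mod 7.  (k = m - n is odd, since m + n = 1.)
    for k in range(1, 16, 2):
        print(k, (k * k - 1) % 7)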

More about sheaves (stalks, sheafification, and morphisms)

July 19, 2010

Wow, I haven’t updated for a while. I guess I got distracted and never got around to coming back to this until now. Hopefully, I’ll be updating more regularly from now on.

While I only defined direct and inverse limits for totally ordered collections of objects, we can do the same for partially ordered collections as well (where we say X_\alpha \le X_\beta if there is a morphism X_\alpha \to X_\beta), by still defining the direct limit to be the universal object receiving morphisms from all of the objects in a way that is compatible with the given morphisms. For example, letting X_n = \mathbb Z[\frac1n], with the inclusion morphism i_{n, m}: X_n \to X_m whenever n | m (and no morphism from X_n to X_m otherwise), we have \displaystyle \lim_\to X_n = \mathbb Q, as we would expect. In a similar manner, we can extend the definition of the inverse limit to partially ordered systems as well (here, we order the collection by X_\alpha \le X_\beta if there is a morphism X_\beta \to X_\alpha).

Let X be some topological space, and let \mathcal F be a sheaf on X. Given a point p \in X, we define the stalk of \mathcal F at p, written \mathcal F_p, to be \displaystyle \lim_{\to \atop{p \in U}} \mathcal F(U), the direct limit over the open sets U containing p. If X is a smooth manifold and \mathcal F is the sheaf of C^\infty functions on X, then \mathcal F_p is the ring of germs of C^\infty functions at p (two functions defining the same germ if they agree on some neighborhood of p); if we instead take the sheaf of real-analytic functions, the stalk is the ring of convergent power series around p. If \mathcal G is the sheaf of continuous real-valued functions on X, then \mathcal G_p is the ring of germs of functions that are continuous in some open neighborhood of p.
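Unwinding the direct limit, in case it feels too abstract: the stalk is the set of pairs (U, s) with p \in U and s \in \mathcal F(U), modulo the relation identifying two pairs when their sections agree near p. That is,

\displaystyle \mathcal F_p = \left( \coprod_{p \in U} \mathcal F(U) \right) \bigg/ \sim, \qquad \text{where } (U, s) \sim (V, t) \text{ if } s|_W = t|_W \text{ for some open } W \text{ with } p \in W \subset U \cap V.

The class of (U, s) is called the germ of s at p.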

A morphism of presheaves \phi: \mathcal F \to \mathcal G on some space X is simply a collection of morphisms \phi(U): \mathcal F(U) \to \mathcal G(U) for each open subset U \subset X that commute with the restriction morphisms. Because all of the morphisms commute with the restriction morphisms, it is clear that this induces morphisms of stalks \phi_p: \mathcal F_p \to \mathcal G_p for each point p.

One may wonder whether, given an arbitrary presheaf, there is a canonical sheaf associated to it. The answer turns out to be yes. Recalling our discussion of presheaves in the last post, it is clear that we can define the notion of a stalk for a presheaf as well, as neither of the two sheaf axioms is involved. In fact, the associated sheaf (defined below) has exactly the same stalks as the presheaf we start with. Moreover, for a sheaf, a section over an open set is completely determined by its germs at the points of that set, so the stalks, together with the knowledge of which families of germs come from sections, contain all of the original information of the sheaf, as we can use this data to reconstruct it.

To define sheafification in a nice manner, we will need morphisms of presheaves, as defined above. Fortunately, the category of sheaves is a full subcategory of the category of presheaves: if \mathcal F, \mathcal G are sheaves, then a morphism of presheaves f \in \hom(\mathcal F, \mathcal G), that is, a collection of morphisms f(U) \in \hom(\mathcal F(U), \mathcal G(U)) for every open subset U \subset X commuting with the restriction maps of \mathcal F and \mathcal G, is the same thing as a morphism of sheaves. Because of this, it will never be ambiguous what we mean by a morphism.

Now, given a presheaf \mathcal F, we define \mathcal F^\dagger, the sheafification of \mathcal F, to be the unique (up to unique isomorphism) sheaf, together with a sheafification morphism \psi: \mathcal F \to \mathcal F^\dagger, satisfying the following universal mapping property: for any sheaf \mathcal G and morphism f: \mathcal F \to \mathcal G, there is a unique morphism g: \mathcal F^\dagger \to \mathcal G such that f = g \circ \psi.
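The universal property pins \mathcal F^\dagger down, but does not show that it exists. One standard construction (stated here without proof) builds it out of the stalks:

\displaystyle \mathcal F^\dagger(U) = \{ s: U \to \coprod_{p \in U} \mathcal F_p \ : \ s(p) \in \mathcal F_p \text{ for all } p \in U, \text{ and } s \text{ is locally given by sections of } \mathcal F \},

where "locally given by sections" means that every p \in U has a neighborhood V \subset U and a section t \in \mathcal F(V) whose germ at each q \in V is s(q). The morphism \psi sends a section to its family of germs, and one can check that \mathcal F and \mathcal F^\dagger have the same stalks.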

For example, suppose that \mathcal F is the constant presheaf that assigns \mathbb Z to each open subset of X. Then \mathcal F^\dagger is the sheaf of locally constant \mathbb Z-valued functions; in particular, it assigns \mathbb Z^{\oplus n} to an open subset U \subset X that is the disjoint union of exactly n connected open subsets.

Luckily, most of our immediate examples of presheaves are already sheaves. However, the presheaf quotient of two sheaves is not necessarily a sheaf, so when we refer to the cokernel of a map of sheaves, we will mean the sheafification of the presheaf cokernel, which is in fact the correct definition of the cokernel in the category of sheaves (you should convince yourself that this is true).

Sheaves

February 27, 2010

I have now reached the point where I can no longer reasonably avoid mentioning sheaves. Seriously, though, they’re not as scary as they sound, and they actually allow you to think about certain aspects of (both analytic and algebraic) geometry in a new way.

First, we will define the category of open sets \underline{X} associated to a topological space X. The objects of \underline X, as you may have guessed, are simply the open sets of X. If U, V \subset X are open subsets, then we set \hom(U, V) = \left\{ \begin{array}{ll} \{ i: U \hookrightarrow V \} & \text{if } U \subset V \\ \emptyset & \text{otherwise} \end{array} \right. so that there is exactly one morphism U \to V (the inclusion) when U \subset V, and none otherwise.

Let \mathcal C be any category. A presheaf of objects of \mathcal C is simply a contravariant functor \mathcal F: \underline X \to \mathcal C. That is, to each open subset U \subset X, we assign an object \mathcal F(U) \in \mathcal C. Additionally, if V \subset U, we have a morphism \mathcal F(U) \xrightarrow{\mathcal F(i)} \mathcal F(V), known as a restriction morphism. We will often write \mathcal F(i)(a) = a |_V.

However, in order for this to be useful geometrically, we’d like to be able to study such an object locally. That is, if U is some open subset of X, and \{ U_{\alpha} \} is an open cover of U, then we would like to be able to determine \mathcal F(U) from \{ \mathcal F(U_{\alpha}) \}. It turns out that if we impose two simple axioms (or alternatively, the single axiom known as the sheaf axiom, although this one is less intuitive), we can do just that.

The first is known as the identity axiom. Let \{U_\alpha\} be an open cover of U. For any s, t \in \mathcal F(U) such that s |_{U_\alpha} = t |_{U_\alpha} for each \alpha, we have s = t.

The second is known as the gluing axiom. Again, let \{U_\alpha\} be an open cover of U, and suppose that we have s_\alpha \in \mathcal F(U_\alpha) such that s_\alpha |_{U_\alpha \cap U_\beta} = s_\beta |_{U_\alpha \cap U_\beta}. Then, there is some s \in \mathcal F(U) such that s |_{U_\alpha} = s_\alpha. Note that by the previous axiom, we know that such an s must be unique.

Let’s look at some examples. Let M be a smooth manifold, and let \mathcal F(U) be the ring of C^{\infty} functions U \to \mathbb R. It is not hard to see that \mathcal F is a presheaf. Additionally, both axioms are satisfied, as a function is determined by the values it takes at each point (you should check these details if you’ve never done so before – the gluing axiom in this case is known as the pasting lemma). Actually, replacing C^{\infty} with C^k for any nonnegative integer k also gives us a sheaf, by exactly the same arguments.
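As a toy sanity check of the gluing axiom in this example (purely my own illustration, with open sets modeled as intervals and sections as Python callables), we can paste together two smooth functions that agree on the overlap of their domains:

    import math

    # Open sets are open intervals; a section of the sheaf of smooth functions over an
    # interval is modeled as a Python callable defined there.
    U1, U2 = (-2.0, 1.0), (0.0, 2.0)          # U1 and U2 overlap on (0, 1)

    def s1(x):                                # the zero section over U1
        return 0.0

    def s2(x):                                # a smooth section over U2, equal to 0 on (0, 1]
        return math.exp(-1.0 / (x - 1.0)) if x > 1.0 else 0.0

    def glue(sec1, V1, sec2, V2):
        """Pasting lemma: sections agreeing on the overlap glue to a section over the union."""
        def s(x):
            if V1[0] < x < V1[1]:
                return sec1(x)
            if V2[0] < x < V2[1]:
                return sec2(x)
            raise ValueError("point not in the union")
        return s

    s = glue(s1, U1, s2, U2)
    print(s(-1.0), s(0.5), s(1.5))   # 0.0, 0.0, exp(-2): one smooth function on (-2, 2)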

Let X be a topological space, and let x_0 \in X. Suppose that \mathcal C has a zero object; that is, an object which is both an initial and a final object (an initial object is an object with exactly one morphism to any other object, and a final object is an object with exactly one morphism from any other object). Then, define the skyscraper sheaf \mathcal S_{x_0} for an object C \in \mathcal C by setting \mathcal S_{x_0}(U) = C when x_0 \in U and \mathcal S_{x_0}(U) = 0 if x_0 \notin U, with restriction maps the identity on C whenever both open sets contain x_0, and the unique morphism to 0 otherwise.

Note that sometimes, the condition that \mathcal F(\emptyset) must be the terminal object of \mathcal C is added to the definition of a sheaf. In all of the cases anyone cares about, this always happens to be true anyways, so we need not concern ourselves with it.

Now, let \mathcal C be the category of abelian groups and let A be some abelian group. Let X be a topological space, and let \mathcal F be the presheaf obtained by setting \mathcal F(U) = A for each nonempty open U \subset X, and \mathcal F(\emptyset) = 0. Whenever X contains two disjoint nonempty open sets (which is most of the time), \mathcal F will not be a sheaf, as the gluing axiom is not satisfied. However, we can construct a sheaf \mathcal F^{\dagger} by taking \mathcal F^{\dagger}(U) to be the group of locally constant functions U \to A; when X is locally connected, this amounts to letting \mathcal F^{\dagger}(U) = A for U nonempty and connected, and \mathcal F^{\dagger}(\bigsqcup U_\alpha) = \prod \mathcal F^{\dagger}(U_\alpha).

There is, in fact, a way to do this for any presheaf, which I will talk about in the next post.

Some category theory

February 18, 2010

While I have mentioned the idea of categories and functors briefly in previous posts, I wanted to take the time to define some other categorical notions that I’d like to be able to reference in future posts. This post summarizes basic category theory, and introduces the most commonly used constructions.

A category \mathcal C consists of a class of objects \text{Ob}(\mathcal C), as well as a set \hom(X, Y) of morphisms between each pair of objects X, Y \in \text{Ob}(\mathcal C). Given f \in \hom(X, Y), g \in \hom(Y, Z) we have a composite morphism g \circ f \in \hom(X, Z), such that composition is associative. Additionally, for each object X, we have an identity morphism \text{id}_X \in \hom(X, X), such that \text{id}_X \circ f = f \in \hom(Y, X) and g \circ \text{id}_X = g \in \hom(X, Y) for any object Y and morphisms f, g. We will often get lazy and write X \in \mathcal C when we mean X \in \text{Ob}(\mathcal C).

Given two categories \mathcal C, \mathcal D, a (covariant) functor F: \mathcal C \to \mathcal D is an assignment of an object F(X) \in \mathcal D for each object X \in \mathcal C, and a function \hom(X, Y) \to \hom(F(X), F(Y)) for each pair of objects X, Y \in \mathcal C that commutes with composition (i.e. F(f \circ g) = F(f) \circ F(g)). We also require that F(\text{id}_X) = \text{id}_{F(X)} for each X \in \mathcal C.

A contravariant functor is defined similarly, with the only difference being that we now have maps \hom(X, Y) \to \hom(F(Y), F(X)) for each pair of objects X, Y \in \mathcal C. In other words, it reverses the direction that the arrows go.

Given two or more objects in a category, we sometimes want to form a new object from them. For example, in the category of groups, we might want to form the group that is the direct sum of several groups. The two most natural ways of doing this, that are well-defined (although not guaranteed to exist) for any category, are the product and coproduct.

Let X, Y \in \mathcal C. We define their product to be an object Z, along with projection morphisms \pi_X: Z \to X, \pi_Y: Z \to Y such that for any object W along with morphisms f: W \to X, g: W \to Y, there is a unique morphism \phi: W \to Z such that f = \pi_X \circ \phi, g = \pi_Y \circ \phi. In the category of sets or topological spaces, this is the usual notion of product. For groups, rings, modules, or vector spaces, it is the direct product that you are probably familiar with.
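In the category of sets, the universal property is easy to see concretely; here is a tiny Python sketch (just my illustration of the definition, with sets modeled as iterables and morphisms as functions) of the unique map into a product determined by a pair of maps:

    # Product of sets X and Y: Z = X x Y, with the two projections.
    def pi_X(z): return z[0]
    def pi_Y(z): return z[1]

    # Given f: W -> X and g: W -> Y, the unique phi: W -> Z with
    # pi_X . phi = f and pi_Y . phi = g is w -> (f(w), g(w)).
    def pairing(f, g):
        return lambda w: (f(w), g(w))

    W = range(5)
    f = lambda w: w * w          # f: W -> X
    g = lambda w: -w             # g: W -> Y
    phi = pairing(f, g)
    print(all(pi_X(phi(w)) == f(w) and pi_Y(phi(w)) == g(w) for w in W))  # True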

After seeing the above definition, it is natural to wonder about the dual notion, where we have an object Z equipped with morphisms to it from X and Y, rather than the other way around, which we shall refer to as the coproduct. We find (conveniently) that this is also a useful notion, and is a familiar construction in the categories that we are used to working with. More formally, the coproduct Z is an object together with inclusion morphisms \iota_X: X \to Z, \iota_Y: Y \to Z, such that for any object W with morphisms f: X \to W, g: Y \to W, there is a unique morphism \phi: Z \to W such that f = \phi \circ \iota_X, g = \phi \circ \iota_Y. In the category of sets or topological spaces, this is simply the disjoint union. For abelian groups, modules, and vector spaces, it is the direct sum (note that this is why the infinite direct sum differs from the infinite direct product, and furthermore, why they are so named); for not-necessarily-abelian groups, it is instead the free product. In the category of commutative rings, the coproduct is the tensor product, which is a bit less intuitive than the other examples, so you should think about why this is true.

We now define direct and inverse limits. To motivate this, let \mathcal C be the category of rings, and let X_n = \mathbb Z[\frac{1}{n!}]. If m \le n, then we have an inclusion morphism X_m \hookrightarrow X_n. As n becomes larger and larger, the ring X_n becomes “closer” to \mathbb Q, so we’d like to be able to say that \lim X_n = \mathbb Q in some sense. It turns out that once we define the direct limit \displaystyle \lim_\to in the correct manner, we do in fact have \displaystyle \lim_\to X_n = \mathbb Q.

Suppose that we have some collection of objects A_n \in \mathcal C, for n \in \mathbb N, with maps i_n: A_n \to A_{n+1}. Then, we define the direct limit \displaystyle \lim_\to A_n, if it exists, to be the universal object A, along with maps j_n: A_n \to A, such that j_{n+1} \circ i_n = j_n \in \hom(A_n, A). By universal, we mean that for any object B and morphisms k_n: A_n \to B, such that k_{n+1} \circ i_n = k_n, there is a unique morphism \phi: A \to B such that \phi \circ j_n = k_n for all n.

Letting X_n = \mathbb Z[\frac{1}{n!}], and i_n the inclusion \mathbb Z[\frac{1}{n!}] \hookrightarrow \mathbb Z[\frac{1}{(n+1)!}], we find that \mathbb Q is in fact the direct limit of the system (X_n, i_n).

There is another kind of limit, which, as you may have guessed, is simply the dual notion of the direct limit, and is known as the inverse limit. Suppose that we have a collection \{(X_n, p_n)\} of objects and morphisms between them p_n: X_{n+1} \to X_n. The inverse limit is the universal object X, with morphisms q_n: X \to X_n such that p_n \circ q_{n+1} = q_n. The examples here are a little less elementary. Let X_n = \mathbb Z / p^n \mathbb Z, and let p_n: X_{n+1} \to X_n be the natural quotient map. Then, the inverse limit is X = \mathbb Z_p, the ring of p-adic integers.
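Concretely, an element of this inverse limit is a compatible sequence (x_n) with x_n \in \mathbb Z/p^n\mathbb Z and x_{n+1} \equiv x_n \pmod{p^n}. Here is a short Python sketch (my own aside; it uses Hensel lifting, which is not discussed in this post) producing such a compatible sequence, namely a square root of 2 in \mathbb Z_7:

    p = 7
    x = 3                                 # 3**2 = 9, which is 2 mod 7
    seq = []
    for n in range(1, 8):
        modulus = p ** n
        # Newton/Hensel step: adjust x so that x**2 = 2 (mod p**n).
        x = (x - (x * x - 2) * pow(2 * x, -1, modulus)) % modulus
        seq.append(x)

    print(seq)                            # [3, 10, 108, ...]
    # Compatibility: each term reduces to the previous one modulo the previous power of p.
    print(all(seq[i + 1] % p ** (i + 1) == seq[i] for i in range(len(seq) - 1)))   # True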

To conclude, inverse limits and products are both examples of what is called a limit in the more general case, which is a universal object with maps to every object in some diagram. Similarly, direct limits and coproducts are both examples of colimits, a universal object receiving morphisms from every object in some diagram.

Maps between spectra

February 7, 2010

In the previous post, I defined the prime spectrum of a ring. This time we will discuss morphisms between these objects. It turns out that the category of prime spectra of commutative rings, with the correct notion of morphisms between them, is equivalent to the opposite of the category of commutative rings; in other words, the natural functor giving the equivalence is contravariant.

Let A, B be commutative rings, and let X = \text{Spec} A, Y = \text{Spec} B. Suppose that we have a unital ring homomorphism \varphi: A \to B. Then, since prime ideals pull back to prime ideals under unital ring homomorphisms, \varphi defines a map f: Y \to X by f(p) = \varphi^{-1}(p), for prime ideals p \in Y. Certainly, we want to allow such maps as morphisms Y \to X.

Since X and Y are topological spaces, we can also look at the set of all continuous maps from Y to X. One natural question to ask then is the following: does every continuous map f: Y \to X arise from a ring homomorphism \varphi: A \to B?

The answer, unsurprisingly, is no. Only a small subset of all topological spaces arise as the prime spectrum of a ring, so we should not expect an arbitrary continuous map to respect this structure, especially as it is an algebraic condition. Consider A = \mathbb Z, B = \mathbb Q, and the map f: Y \to X given by (0) \mapsto (p), for some nonzero prime ideal (p) of \mathbb Z. This is clearly continuous, as Y consists of a single point. Assume for the sake of contradiction that it comes from some ring homomorphism \varphi: A \to B. We must have \varphi^{-1}((0)) = (p), so \varphi(p) = 0. But p \varphi(1) = \varphi(p) = 0, so \varphi(1) = 0 as well, since \mathbb Q is a field. Thus, \varphi must be the zero map. But then \varphi^{-1}((0)) = \mathbb Z \ne (p), which is a contradiction. Therefore, this map does not arise from a ring homomorphism.

The reason we choose schemes, and in particular affine schemes, as an object of study is that we can convert geometric statements to algebraic statements and vice versa. Thus, we want our morphisms of spectra to be related to the rings themselves, and so we will only consider morphisms which arise from ring homomorphisms; more precisely, we define a morphism Y \to X to be a ring homomorphism A \to B. Given this definition of \hom(Y, X), it is essentially by construction that the contravariant functor \text{Spec}, from the category of commutative rings to the category of prime spectra with these morphisms, is an equivalence of categories (while I won’t give the formal definition of this here, you should have some idea of what it should mean intuitively).

Let’s look at some examples. Let A be any ring and let B = A / I, where I \subset A is some ideal. Consider the quotient map \varphi: A \to A/I. What exactly is the map f: Y \to X induced by this? Recall that we have a natural bijection between the (prime) ideals of A / I and the (prime) ideals of A that contain I. Furthermore, if J \supset I is an ideal of A, then \varphi^{-1}(J / I) = J, so J/I \mapsto J. This is exactly the natural bijection just mentioned, giving us a natural identification of \text{Spec} A/I with V(I) \subset \text{Spec} A, the set of all prime ideals of A containing I. We can then think of f as an inclusion V(I) \hookrightarrow \text{Spec} A, which is a closed immersion, since V(I) is closed.

You may have seen from classical algebraic geometry that any affine variety, that is, the vanishing set in k^n of some ideal of polynomials I in k[x_1, \ldots, x_n], where k is some algebraically closed field, can be identified with the set of maximal ideals of k[x_1, \ldots, x_n] / I. From the previous article and the above discussion, we know that this is just the set of closed points of \text{Spec} k[x_1, \ldots, x_n] / I. Furthermore, the map of spectra induced by any k-homomorphism between finitely generated k-algebras takes maximal ideals to maximal ideals, so in this case, we can literally ignore the non-closed points.

On first inspection, it seems like using this construction is nicer- all of the points in our topological space are now closed, and we still have just as much information as before (we can certainly recover the original ring, and the non-closed points simply correspond to the irreducible closed subsets of our space). However, using just the maximal ideals seriously limits us when we want to move to more exotic rings. For a general homomorphism of rings, maximal ideals do not necessarily pull back to maximal ideals. As a counterexample, consider the inclusion \mathbb Z \hookrightarrow \mathbb Q. Here, (0) pulls back to (0), which is not maximal in \mathbb Z. The actual issue here is related to the fact that all of the residue fields of a finitely generated k-algebra obtained by modding out by a maximal ideal are isomorphic to k, whereas for a general ring the residue field depends on the maximal ideal chosen (I won’t go into this in detail here, but Qiaochu discusses this in his post “Max-Spec is not a functor”).

The prime spectrum of a ring

February 5, 2010

This will be the first in a series of posts with the goal of providing a brief introduction to schemes. I’m writing this partly at the request of a friend, but it was also something I had planned on doing at some point anyway, so that I can talk about scheme theory in this blog. Here I will discuss the prime spectrum of a ring, which is the topological space associated to it.

While I was tempted to write this in the more general context of noncommutative rings, it turns out that prime spectra of noncommutative rings are pretty tricky, as a prime ideal of a noncommutative ring R must be defined as a (two-sided) ideal P \ne R such that for all a, b \in R, if a r b \in P for all r \in R, then either a \in P or b \in P. This is certainly much harder to work with than the definition of a prime ideal of a commutative ring (although it can easily be seen to be equivalent in this case).

Let A be a commutative ring. Let X be the set of all prime ideals of A. We will now give X a topology, known as the Zariski topology. For any subset S \subset A, define V(S) to be the set of all prime ideals P such that S \subset P. Note that if I is the ideal generated by S, then we have V(S) = V(I), so we need not consider all subsets of A.

Now, we claim that declaring such sets to be closed gives us a topology on X. It is not hard to see that arbitrary intersections of sets of the form V(I) can be written in this form, as can finite unions. Additionally, V(0) = X and V(A) = \emptyset, so this does in fact define a topology. We will refer to the topological space X as \text{Spec} A. This seems rather strange, and indeed it should, as it turns out that this space is not even Hausdorff (although it is still compact- it is not hard to show this, and is a good exercise if you haven’t done it before). The closed points of X are precisely the maximal ideals of A. It is clear that for any maximal ideal m, V(m) = \{m\}. Conversely, if p is prime but not maximal, then p \subset m for some maximal ideal m; so if p lies in some closed set V(I), then I \subset p \subset m, and therefore m \in V(I) as well, which shows that \{p\} is not closed. Thus, not only are the maximal ideals exactly the closed points, but every nonempty closed subset of X contains at least one maximal ideal. (It should also be obvious that the closure of \{p\} is simply V(p).)

To see what’s actually going on, let’s look at some examples. First, let A = \mathbb Z, our usual starting block for thinking about rings. We have \text{Spec} \mathbb Z = \{ (p) \ | \ p \text{ is prime} \} \cup \{ (0) \}. Each ideal (p) is a closed point of \text{Spec} \mathbb Z, while the closure of the ideal (0) is all of \text{Spec} \mathbb Z; for this reason, (0) is known as the generic point.
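For A = \mathbb Z we can even compute closed sets mechanically: V(S) consists of the primes dividing every element of S (and is all of \text{Spec} \mathbb Z exactly when every element of S is zero). A quick sketch using sympy (my own illustration, not part of the original post):

    from math import gcd
    from sympy import primefactors

    def V(S):
        """The closed set V(S) in Spec Z: all prime ideals containing every element of S."""
        d = 0
        for a in S:
            d = gcd(d, a)
        if d == 0:
            return "all of Spec Z"          # S consists only of zeros, so every prime ideal contains S
        if d == 1:
            return "the empty set"          # S generates the unit ideal
        return "{ " + ", ".join(f"({q})" for q in primefactors(d)) + " }"

    print(V([12, 18]))   # the ideal (12, 18) = (6), so V = { (2), (3) }
    print(V([4, 9]))     # (4, 9) = (1), so V is empty
    print(V([0]))        # the whole space; the closure of {(0)} is all of Spec Z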

Next, we look at the ring A = \mathbb C[x]. We have \text{Spec} A = \{ (x - a) \ | \ a \in \mathbb C \} \cup \{ (0) \} \cong \mathbb C \cup \{ (0) \}. We can think of this as the complex plane, together with a generic point (0), whose closure is again the whole space. However, the topology here is different from the standard topology on \mathbb C. Giving \mathbb C the subspace topology (by removing the generic point), we see that the only nonempty open sets are exactly those with finite complement. However, this is at least comparable to the standard topology on \mathbb C, as it is easily seen to be coarser (have fewer open sets) than \mathbb C in the usual topology.

Both of these cases were pretty straightforward, so we will now look at one last example that is a bit more complicated. It gives us a first look at the natural question to ask: how much more complicated do things get when we work with \mathbb C[x_1, \ldots, x_n] for n > 1? It turns out that looking at X = \text{Spec} \mathbb C[x, y] gives us enough insight to be able to correctly guess what happens in all higher dimensions. Recall that there are three types of prime ideals in A = \mathbb C[x, y]: the maximal ideals, the principal ideals generated by irreducible polynomials, and the zero ideal. In general, for any algebraically closed field k, the maximal ideals of k[x_1, \ldots, x_n] can all be expressed in the form (x_1 - a_1, \ldots, x_n - a_n), where a_1, \ldots, a_n \in k.

Here, the maximal ideals are the ideals of the form (x - a, y - b), which are in a natural bijection with the points of \mathbb C^2 (and in general, it is straightforward to see that the maximal ideals of k[x_1, \ldots, x_n] are in a natural bijection with the points of k^n for any algebraically closed field k). So X can be thought of as \mathbb C^2 \cup \{\text{principal prime ideals}\} \cup \{(0)\}. Again, (0) is simply the generic point- its closure is the whole space (this will be true whenever (0) is a prime ideal, that is, whenever A is an integral domain). But what exactly are the principal prime ideals here? Well, as we said before, each of these can be expressed in the form (f(x, y)), where f(x, y) is an irreducible polynomial. This suggests that they may be related to the curve given by the zero set of the polynomial. Suppose that f(a, b) = 0. This occurs if and only if f \mapsto 0 under the quotient map A \to A / (x - a, y - b), which is equivalent to saying that f \in (x- a, y - b). Thus, (f(x, y)) \subset (x - a, y - b) if and only if f(a, b) = 0, so the closure of the singleton set \{(f)\} is just (f) along with all of the points (a, b) \in \mathbb C^2 at which f vanishes.

So, we conclude that X = \mathbb C^2 \cup \{\text{irreducible curves}\} \cup \{(0)\}. While the formal definition of a curve is more complicated in general, a curve in \mathbb C^2 is simply the vanishing set of some nonzero polynomial in \mathbb C[x, y]. A curve C is said to be irreducible if we cannot write it as the union of two distinct curves, neither of which is all of C. It turns out that a curve defined by a polynomial is irreducible if and only if the polynomial is irreducible (which is good, since this agrees with our intuition). We see that the three types of prime ideals became three types of points- the maximal ideals became the “zero-dimensional” closed points, the non-zero principal prime ideals became “one-dimensional” curves, and the zero ideal became the “two-dimensional” generic point.

Woah

January 12, 2010

Here

I guess we can no longer use “contains chlorophyll” as the distinction between plants and animals.

Apparently daydreaming is good for your brain

January 11, 2010

Link

I guess it makes sense- I often end up solving the problems I get stuck on for a while when I’m just letting my mind wander.

The delta function is not a function

January 8, 2010

We’re all familiar with the Dirac delta “function”, \delta (x), defined to be 0 if x \ne 0 and \infty when x = 0, such that \displaystyle \int_{-\infty}^{\infty} \delta(x) dx = 1, or, equivalently \displaystyle \int_{-\infty}^{\infty} \delta(x) f(x) dx = f(0), for any C^{\infty} function f.  However, we know that no such function exists, as its support is a set of measure zero, and therefore its integral should be zero as well.  Still, it turns out that we can make a completely rigorous definition of \delta, by considering it as a distribution, which is in a way a generalization of a function.

Let \mathcal D be the set of all compactly supported C^{\infty} functions on \mathbb R, and let \mathcal D' be the set of continuous linear functionals on \mathcal D, where \mathcal D is topologized as follows: a sequence \{ \varphi_n \} converges to \varphi if (1) there is a compact set K containing the supports of all of the \varphi_n, and (2) for each k, D^k \varphi_n converges uniformly to D^k \varphi.  Elements of \mathcal D' are referred to as distributions.

Given g \in L^1(\mathbb R), we can define T_g \in \mathcal D' by f \mapsto \int f g.  It is not hard to show that if T_{g}(f) = T_{h}(f) for all f \in \mathcal D, then g = h as elements of L^1.  In this way, we can think of \mathcal D' as containing L^1, and therefore of distributions as a generalization of Lebesgue-integrable functions.

The linear functional T given by f \mapsto f(0) is clearly linear, and it is continuous with respect to the notion of convergence above (if \varphi_n \to \varphi in \mathcal D, then in particular \varphi_n(0) \to \varphi(0)), so it is an element of \mathcal D'.  Note that we also have T(f) = \int f(x) \delta(x) dx, according to our earlier informal definition of the delta function.  Thus, while we cannot really think of the delta function as a function, it gives a perfectly well-defined distribution.

You may have seen mention of \delta'(x) before (possibly in the context of a physics textbook), and wondered what this meant.  After all, even when we pretend that the delta function is a function, it is not even continuous, and certainly not differentiable.  However, given any distribution, we can make a perfectly well-defined definition of its derivative.  Let g \in \mathcal D, and consider the distribution T_g given by f \mapsto \int f g.  Here, the obvious definition for the derivative of T_g is T_{g'}.  Then, we observe that T_{g'}(f) = \int f g' = - \int f' g = - T_g(f'), using integration by parts.  Since we know that f' \in \mathcal D whenever f \in \mathcal D, we can simply define the derivative of any distribution T to be the linear functional f \mapsto - T(f').  In this way, we can now talk about the derivative of the Dirac delta function as the linear functional which takes f to - f'(0).  Physicists may say that this is the “function” \delta'(x) such that \displaystyle \int_{- \infty}^{\infty} f(x) \delta'(x) dx = - f'(0), agreeing with our definition.  Continuing in this way, we can define the n-th derivative of the distribution T as the linear functional f \mapsto (-1)^n T(f^{(n)}).
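As a quick numerical sanity check (my own aside, not part of the original discussion), we can approximate \delta by narrow Gaussians and verify that integrating a test function against the Gaussian picks out f(0), while integrating against the Gaussian’s derivative picks out -f'(0):

    import numpy as np

    eps = 1e-3                                    # width of the approximating Gaussian
    x = np.linspace(-1.0, 1.0, 400_001)
    dx = x[1] - x[0]

    # A smooth test function with f(0) = 1 and f'(0) = 1.  (Strictly, test functions
    # should be compactly supported; the tails here are negligible for this check.)
    f = np.exp(x - 4 * x**2)
    g = np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))   # approximates delta
    dg = -x / eps**2 * g                          # its derivative, approximating delta'

    print((f * g).sum() * dx)                     # approximately  f(0)  =  1
    print((f * dg).sum() * dx)                    # approximately -f'(0) = -1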

We can generalize all of the above constructions to \mathbb R^n, or any open subset U \subset \mathbb R^n.  In this case, if \alpha = (\alpha_1, \ldots, \alpha_n) is a multi-index of nonnegative integers and |\alpha| = \alpha_1 + \ldots + \alpha_n, we define D^{\alpha} T(f) = (-1)^{|\alpha|} T(D^{\alpha} f).

You have probably come across a partial differential equation of the form L u = f, where L is a linear differential operator.  One strategy for solving such an equation is to first find a solution to L u = \delta, known as a fundamental solution of the operator L, and then use it to obtain an actual solution (the Green’s function method).  While we can do this while still thinking of the delta function as a function (say, as the derivative of the unit step function), this requires a good deal of hand-waving.  Fortunately, when we think of this as an equation whose solution is some distribution, we can be completely rigorous.  Because of this, distribution theory allows us to make certain methods of solving PDEs, such as this one, mathematically rigorous.

Not (necessarily) commutative rings

January 2, 2010

A great deal of attention is always given to commutative rings.  In many introductory algebra courses, immediately after rings are introduced, they are henceforth assumed to be commutative.  With only a few exceptions, such as certain cohomology rings, one may never really deal with noncommutative rings in later courses.  Despite this, there are some interesting results in the theory of noncommutative rings.  Note that while we will not assume our rings to be commutative here, we will still assume that they have a multiplicative identity element.

A division ring is defined to be a ring R such that every nonzero element of R has an inverse, that is, for each nonzero a \in R there is a b \in R such that ab = ba = 1.  Note that a commutative division ring is simply a field.

Exercise: Show that every finite division ring is a field.

This is actually a well-known result (Wedderburn’s little theorem), but it is still a good problem.

Let A be a ring, and let I \subset A be an additive subgroup.  We call I a left ideal if AI \subset I, a right ideal if IA \subset I, and a (two-sided) ideal if both hold.  If A is commutative, then these three conditions are equivalent.  In this case, we also know that A is a field iff every ideal of A is trivial, that is, the only nonzero ideal of A is all of A.  However, we have to be careful when formulating the noncommutative analogue of this statement.  While it is clear that if A is a division ring, then every ideal of A is trivial (a nonzero ring in which this is true is known as a simple ring), the converse does not hold (why?).  It turns out that the necessary and sufficient condition for A to be a division ring is that every left ideal of A is trivial (equivalently, every right ideal of A is trivial).  You should convince yourself of this.

This alone motivates the study of simple rings, which by necessity requires understanding noncommutative rings to be able to say anything at all (as the only commutative simple rings are fields).

As in the commutative case, we call an element e \in A idempotent if e^2 = e.  If e is idempotent, then we again have (1 - e)^2 = 1 - 2e + e^2 = 1 - e, so 1 - e is idempotent as well.  Another familiar relation that still holds is e (1 - e) = 0.  It is clear from this that A \cong e A \oplus (1 - e) A \cong A e \oplus A (1 - e) (as right and left A-modules, respectively).  Applying this relation a second time yields the following decomposition of A as an additive group: A \cong e A e \oplus e A (1 - e) \oplus (1 - e) A e \oplus (1 - e) A (1 - e).

We can also write this in the form A \cong \begin{pmatrix} e A e & e A (1 - e) \\ (1 - e) A e & (1 - e) A (1 - e) \end{pmatrix}.  It is clear that the left and right hand sides are isomorphic as additive groups.  Recalling how matrix multiplication is defined, we see that the ring multiplication of both sides agrees as well.  This is known as the Peirce decomposition.
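Here is a small numpy sketch (my own illustration) of the Peirce decomposition for A = M_3(\mathbb R) and the idempotent e = \text{diag}(1, 1, 0): the four pieces e A e, e A (1 - e), (1 - e) A e, (1 - e) A (1 - e) are exactly the four blocks of the matrix, and their sum recovers the original element.

    import numpy as np

    e = np.diag([1.0, 1.0, 0.0])          # an idempotent in A = M_3(R)
    f = np.eye(3) - e                     # the complementary idempotent 1 - e

    assert np.allclose(e @ e, e) and np.allclose(e @ f, 0)

    rng = np.random.default_rng(1)
    a = rng.integers(1, 10, size=(3, 3)).astype(float)   # a "random" element of A

    # The four Peirce pieces; each sits in one block, and their sum recovers a.
    pieces = [e @ a @ e, e @ a @ f, f @ a @ e, f @ a @ f]
    print(np.allclose(sum(pieces), a))    # True: a = eae + ea(1-e) + (1-e)ae + (1-e)a(1-e)
    print(pieces[0])                      # nonzero only in the upper-left 2x2 block
    print(pieces[3])                      # nonzero only in the lower-right 1x1 block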

More generally, if e_1, \ldots, e_n \in A are a complete orthogonal set of idempotents, that is, e_i e_j = 0 if i \ne j and e_1 + \ldots + e_n = 1, then we have A \cong M, where M is a generalized matrix ring with M_{ij} = e_i A e_j.  Note that e (1 - e) = 0 and e + (1 - e) = 1, so this is in fact a generalization of our previous result.  Additionally, if A is commutative, then e_i A e_j = e_i e_j A = 0 if i \ne j, so M becomes the ring of diagonal matrices with entries in the rings e_i A, trivializing the matrix structure: A is simply the product ring e_1 A \times \cdots \times e_n A.  Thus, the Peirce decomposition is only interesting in the case where A is not commutative.