## Friday, 4 November 2016

### TERENCE TAO's 246A, Notes 0: the complex numbers | What's new

Kronecker is famously reported to have said, “God created the natural
numbers; all else is the work of man”. The truth of this statement
(literal or otherwise) is debatable; but one can certainly view the
other standard number systems ${{\bf Z}, {\bf Q}, {\bf R}, {\bf C}}$ as (iterated) completions of the natural numbers ${{\bf N}}$ in various senses. For instance:

• The integers ${{\bf Z}}$ are the additive completion of the natural numbers ${{\bf N}}$ (the minimal additive group that contains a copy of ${{\bf N}}$).
• The rationals ${{\bf Q}}$ are the multiplicative completion of the integers ${{\bf Z}}$ (the minimal field that contains a copy of ${{\bf Z}}$).
• The reals ${{\bf R}}$ are the metric completion of the rationals ${{\bf Q}}$ (the minimal complete metric space that contains a copy of ${{\bf Q}}$).
• The complex numbers ${{\bf C}}$ are the algebraic completion of the reals ${{\bf R}}$ (the minimal algebraically closed field that contains a copy of ${{\bf R}}$).
These descriptions of the standard number systems are elegant and conceptual, but not entirely suitable for constructing the number systems in a non-circular manner from more primitive foundations. For instance, one cannot quite define the reals ${{\bf R}}$ from scratch as the metric completion of the rationals ${{\bf Q}}$, because the definition of a metric space itself requires the notion of the reals! (One can of course construct ${{\bf R}}$ by other means, for instance by using Dedekind cuts or by using uniform spaces
in place of metric spaces.) The definition of the complex numbers as
the algebraic completion of the reals does not suffer from such a
non-circularity issue, but a certain amount of field theory is required
to work with this definition initially. For the purposes of quickly
constructing the complex numbers, it is thus more traditional to first
define ${{\bf C}}$ as a quadratic extension of the reals ${{\bf R}}$, and more precisely as the extension ${{\bf C} = {\bf R}(i)}$ formed by adjoining a square root ${i}$ of ${-1}$ to the reals, that is to say a solution to the equation ${i^2+1=0}$. It is not immediately obvious that this extension is in fact algebraically closed; this is the content of the famous fundamental theorem of algebra, which we will prove later in this course.

The two equivalent definitions of ${{\bf C}}$
– as the algebraic closure, and as a quadratic extension, of the reals
respectively – each reveal important features of the complex numbers in
applications. Because ${{\bf C}}$
is algebraically closed, all polynomials over the complex numbers split
completely, which leads to a good spectral theory for both
finite-dimensional matrices and infinite-dimensional operators; in
particular, one expects to be able to diagonalise most matrices and
operators. Applying this theory to constant coefficient ordinary
differential equations leads to a unified theory of such solutions, in
which real-variable ODE behaviour such as exponential growth or decay,
polynomial growth, and sinusoidal oscillation all become aspects of a
single object, the complex exponential ${z \mapsto e^z}$ (or more generally, the matrix exponential ${A \mapsto \exp(A)}$).
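This unification can be seen numerically. The following sketch (an illustration using Python's `cmath` module; the variable names `sigma`, `omega`, `t` are ours) splits a complex exponential into its exponential and sinusoidal components via Euler's formula:

```python
import cmath
import math

# For a constant-coefficient ODE y' = (sigma + i*omega) y, the solution
# exp((sigma + i*omega) t) combines exponential growth/decay (from sigma)
# with sinusoidal oscillation (from omega) in a single object.
sigma, omega, t = -0.5, 2.0, 1.3
z = cmath.exp((sigma + 1j * omega) * t)

# Separating real and imaginary parts recovers Euler's formula:
# exp((sigma + i*omega) t) = e^(sigma t) (cos(omega t) + i sin(omega t)).
expected = math.exp(sigma * t) * complex(math.cos(omega * t), math.sin(omega * t))
assert cmath.isclose(z, expected)
```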
Applying this theory more generally to diagonalise arbitrary
translation-invariant operators over some locally compact abelian group,
one arrives at Fourier analysis,
which is thus most naturally phrased in terms of complex-valued
functions rather than real-valued ones. If one drops the assumption that
the underlying group is abelian, one instead discovers the
representation theory of unitary representations,
which is simpler to study than the real-valued counterpart of
orthogonal representations. For closely related reasons, the theory of
complex Lie groups is simpler than that of real Lie groups.

Meanwhile, the fact that the complex numbers are a quadratic extension
of the reals lets one view the complex numbers geometrically as a
two-dimensional plane over the reals (the Argand plane).
Whereas a point singularity in the real line disconnects that line, a
point singularity in the Argand plane leaves the rest of the plane
connected (although, importantly, the punctured plane is no longer simply connected).
As we shall see, this fact causes singularities in complex analytic
functions to be better behaved than singularities of real analytic
functions, ultimately leading to the powerful residue calculus
for computing complex integrals. Remarkably, this calculus, when
combined with the quintessentially complex-variable technique of contour shifting, can also be used to compute some (though certainly not all) definite integrals of real-valued
functions that would be much more difficult to compute by purely
real-variable methods; this is a prime example of Hadamard’s famous
dictum that “the shortest path between two truths in the real domain
passes through the complex domain”.

Another important geometric feature of the Argand plane is the angle
between two tangent vectors to a point in the plane. As it turns out,
the operation of multiplication by a complex scalar preserves the
magnitude and orientation of such angles; the same fact is true for any
non-degenerate complex analytic mapping, as can be seen by performing a
Taylor expansion to first order. This fact ties the study of complex
mappings closely to that of the conformal geometry
of the plane (and more generally, of two-dimensional surfaces and
domains). In particular, one can use complex analytic maps to
conformally transform one two-dimensional domain to another, leading
among other things to the famous Riemann mapping theorem, and to the classification of Riemann surfaces.

If one Taylor expands complex analytic maps to second order rather than
first order, one discovers a further important property of these maps,
namely that they are harmonic.
This fact makes the class of complex analytic maps extremely rigid and
well behaved analytically; indeed, the entire theory of elliptic PDE now
comes into play, giving useful properties such as elliptic regularity and the maximum principle.
In fact, due to the magic of residue calculus and contour shifting, we
already obtain these properties for maps that are merely complex
differentiable rather than complex analytic, which leads to the striking
fact that complex differentiable functions are automatically analytic
(in contrast to the real-variable case, in which real differentiable
functions can be very far from being analytic).

The geometric structure of the complex numbers (and more generally of
complex manifolds and complex varieties), when combined with the
algebraic closure of the complex numbers, leads to the beautiful subject
of complex algebraic geometry, which motivates the much more
general theory developed in modern algebraic geometry. However, we will
not develop the algebraic geometry aspects of complex analysis here.

Last, but not least, because of the good behaviour of Taylor series in
the complex plane, complex analysis is an excellent setting in which to
manipulate various generating functions, particularly Fourier series ${\sum_n a_n e^{2\pi i n \theta}}$ (which can be viewed as boundary values of power (or Laurent) series ${\sum_n a_n z^n}$), as well as Dirichlet series ${\sum_n \frac{a_n}{n^s}}$. The theory of contour integration provides a very useful dictionary between the asymptotic behaviour of the sequence ${a_n}$,
and the complex analytic behaviour of the Dirichlet or Fourier series,
particularly with regard to its poles and other singularities. This
turns out to be a particularly handy dictionary in analytic number theory, for instance relating the distribution of the primes to the Riemann zeta function. Nowadays, many of the analytic number theory results first obtained through complex analysis (such as the prime number theorem)
can also be obtained by more “real-variable” methods; however the
complex-analytic viewpoint is still extremely valuable and illuminating.

We will frequently touch upon many of these connections to other fields
of mathematics in these lecture notes. However, these are mostly side
remarks intended to provide context, and it is certainly possible to
skip most of these tangents and focus purely on the complex analysis
material in these notes if desired.

Note: complex analysis is a very visual subject, and one should draw
plenty of pictures while learning it. I am however not planning to put
too many pictures in these notes, partly as it is somewhat inconvenient
to do so on this blog from a technical perspective, but also because
pictures that one draws on one’s own are likely to be far more useful to
you than pictures that were supplied by someone else.

— 1. The construction and algebra of the complex numbers —
Note: this section will be far more algebraic in nature than the rest of
the course; we are concentrating almost all of the algebraic
preliminaries in this section in order to get them out of the way and
focus subsequently on the analytic aspects of the complex numbers.

Thanks to the laws of high-school algebra, we know that the real numbers ${{\bf R}}$ are a field:
it is endowed with the arithmetic operations of addition, subtraction,
multiplication, and division, as well as the additive identity ${0}$ and multiplicative identity ${1}$, that obey the usual laws of algebra (i.e. the field axioms).

The algebraic structure of the reals does have one drawback though – not
all (non-trivial) polynomials have roots! Most famously, the polynomial
equation ${x^2+1=0}$ has no solutions over the reals, because ${x^2}$ is always non-negative, and hence ${x^2+1}$ is always strictly positive, whenever ${x}$ is a real number.

As mentioned in the introduction, one traditional way to define the complex numbers ${{\bf C}}$ is as the smallest possible extension of the reals ${{\bf R}}$ that fixes this one specific problem:

Definition 1 (The complex numbers) A field of complex numbers is a field ${{\bf C}}$ that contains the real numbers ${{\bf R}}$ as a subfield, as well as a root ${i}$ of the equation ${i^2+1=0}$. (Thus, strictly speaking, a field of complex numbers is a pair ${({\bf C},i)}$, but we will almost always abuse notation and use ${{\bf C}}$ as a metonym for the pair ${({\bf C},i)}$.) Furthermore, ${{\bf C}}$ is generated by ${{\bf R}}$ and ${i}$, in the sense that there is no subfield of ${{\bf C}}$, other than ${{\bf C}}$ itself, that contains both ${{\bf R}}$ and ${i}$; thus, in the language of field extensions, we have ${{\bf C} = {\bf R}(i)}$.
(We will take the existence of the real numbers ${{\bf R}}$
as a given in this course; constructions of the real number system can
of course be found in many real analysis texts, including my own.)

Definition 1 is short, but proposing it as a definition of the complex numbers raises some immediate questions:

• (Existence) Does such a field ${{\bf C}}$ even exist?
• (Uniqueness) Is such a field ${{\bf C}}$ unique (up to isomorphism)?
• (Non-arbitrariness) Why the square root of ${-1}$? Why not adjoin instead, say, a fourth root of ${-42}$, or the solution to some other algebraic equation? Also, could one iterate the process, extending ${{\bf C}}$ further by adding more roots of equations?
The third set of questions can be answered satisfactorily once we possess the fundamental theorem of algebra. For now, we focus on the first two questions.

We begin with existence. One can construct the complex numbers quite
explicitly and quickly using the Argand plane construction; see Remark 7
below. However, from the perspective of higher mathematics, it is more
natural to view the construction of the complex numbers as a special
case of the more general algebraic construction that can extend any
field ${k}$ by the root ${\alpha}$ of an irreducible nonlinear polynomial ${P \in k[\mathrm{x}]}$ over that field; this produces a field of complex numbers ${{\bf C}}$ when specialising to the case where ${k={\bf R}}$ and ${P = \mathrm{x}^2+1}$. We will just describe this construction in that special case, leaving the general case as an exercise.

Starting with the real numbers ${{\bf R}}$, we can form the space ${{\bf R}[\mathrm{x}]}$ of (formal) polynomials

$\displaystyle P(\mathrm{x}) = a_d \mathrm{x}^d + a_{d-1} \mathrm{x}^{d-1} + \dots + a_0$
with real coefficients ${a_0,\dots,a_d \in {\bf R}}$ and arbitrary non-negative integer ${d}$ in one indeterminate variable ${\mathrm{x}}$. (A small technical point: we do not view this indeterminate ${\mathrm{x}}$ as belonging to any particular domain such as ${{\bf R}}$, so we do not view these polynomials ${P}$ as functions but merely as formal expressions involving a placeholder symbol ${\mathrm{x}}$ (which we have rendered in Roman type to indicate its formal character). In this particular characteristic zero setting of working over the reals, it turns out to be harmless to identify each polynomial ${P}$ with the corresponding function ${P: {\bf R} \rightarrow {\bf R}}$ formed by interpreting the indeterminate ${\mathrm{x}}$
as a real variable; but if one were to generalise this construction to
positive characteristic fields, and particularly finite fields, then one
can run into difficulties if polynomials are not treated formally, due
to the fact that two distinct formal polynomials might agree on all
inputs in a given finite field (e.g. the polynomials ${\mathrm{x}}$ and ${\mathrm{x}^p}$ agree for all ${x}$ in the finite field ${{\mathbf F}_p}$). However, this subtlety can be ignored for the purposes of this course.) This space ${{\bf R}[\mathrm{x}]}$
of polynomials has a pretty good algebraic structure, in particular the
usual operations of addition, subtraction, and multiplication on
polynomials, together with the zero polynomial ${0}$ and the unit polynomial ${1}$, give ${{\bf R}[\mathrm{x}]}$ the structure of a (unital) commutative ring. This commutative ring also contains ${{\bf R}}$ as a subring (identifying each real number ${a}$ with the degree zero polynomial ${a x^0}$). The ring ${{\bf R}[\mathrm{x}]}$ is however not a field, because many non-zero elements of ${{\bf R}[\mathrm{x}]}$ do not have multiplicative inverses. (In fact, no non-constant polynomial in ${{\bf R}[\mathrm{x}]}$ has an inverse in ${{\bf R}[\mathrm{x}]}$, because the product of two non-constant polynomials has a degree that is the sum of the degrees of the factors.)

If a unital commutative ring fails to be a field, then it will instead possess a number of non-trivial ideals. The only ideal we will need to consider here is the principal ideal

$\displaystyle \langle \mathrm{x}^2+1 \rangle := \{ (\mathrm{x}^2+1) P(\mathrm{x}): P(\mathrm{x}) \in {\bf R}[\mathrm{x}] \}.$
This is clearly an ideal of ${{\bf R}[\mathrm{x}]}$ – it is closed under addition and subtraction, and the product of any element of the ideal ${\langle \mathrm{x}^2 + 1 \rangle}$ with an element of the full ring ${{\bf R}[\mathrm{x}]}$ remains in the ideal ${\langle \mathrm{x}^2 + 1 \rangle}$.

We now define ${{\bf C}}$ to be the quotient space

$\displaystyle {\bf C} := {\bf R}[\mathrm{x}] / \langle \mathrm{x}^2+1 \rangle$
of the commutative ring ${{\bf R}[\mathrm{x}]}$ by the ideal ${\langle \mathrm{x}^2+1 \rangle}$; this is the space of cosets ${P(\mathrm{x}) + \langle \mathrm{x}^2+1 \rangle = \{ P(\mathrm{x}) + Q(\mathrm{x}): Q(\mathrm{x}) \in \langle \mathrm{x}^2+1 \rangle \}}$ of ${\langle \mathrm{x}^2+1 \rangle}$ in ${{\bf R}[\mathrm{x}]}$. Because ${\langle \mathrm{x}^2+1 \rangle}$ is an ideal, there is an obvious way to define addition, subtraction, and multiplication in ${{\bf C}}$, namely by setting

$\displaystyle (P(\mathrm{x}) + \langle \mathrm{x}^2+1 \rangle) + (Q(\mathrm{x}) + \langle \mathrm{x}^2+1 \rangle) := (P(\mathrm{x}) + Q(\mathrm{x}) + \langle \mathrm{x}^2+1 \rangle),$
$\displaystyle (P(\mathrm{x}) + \langle \mathrm{x}^2+1 \rangle) - (Q(\mathrm{x}) + \langle \mathrm{x}^2+1 \rangle) := (P(\mathrm{x}) - Q(\mathrm{x}) + \langle \mathrm{x}^2+1 \rangle)$
and

$\displaystyle (P(\mathrm{x}) + \langle \mathrm{x}^2+1 \rangle) \cdot (Q(\mathrm{x}) + \langle \mathrm{x}^2+1 \rangle) := (P(\mathrm{x}) Q(\mathrm{x}) + \langle \mathrm{x}^2+1 \rangle)$
for all ${P(\mathrm{x}), Q(\mathrm{x}) \in {\bf R}[\mathrm{x}]}$; these operations, together with the additive identity ${0 = 0 + \langle \mathrm{x}^2+1 \rangle}$ and the multiplicative identity ${1 = 1 + \langle \mathrm{x}^2+1 \rangle}$, can be easily verified to give ${{\bf C}}$ the structure of a commutative ring. Also, the real line ${{\bf R}}$ embeds into ${{\bf C}}$ by identifying each real number ${a}$ with the coset ${a + \langle \mathrm{x}^2+1 \rangle}$; note that this identification is injective, as no real number is a multiple of the polynomial ${\mathrm{x}^2+1}$.
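The coset arithmetic above can be sketched concretely by storing each coset via its unique representative ${a + b\mathrm{x}}$ of degree less than two. The `Coset` class below is an illustrative toy of ours, not part of the text's formal development:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Coset:
    """An element of R[x]/<x^2+1>, stored as the unique representative
    a + b*x of degree < 2 in its coset."""
    a: float  # coefficient of x^0
    b: float  # coefficient of x^1

    def __add__(self, other):
        return Coset(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + b x)(c + d x) = ac + (ad + bc) x + bd x^2; reducing
        # modulo <x^2 + 1> sets x^2 = -1, giving (ac - bd) + (ad + bc) x.
        return Coset(self.a * other.a - self.b * other.b,
                     self.a * other.b + self.b * other.a)

i = Coset(0, 1)    # the coset x + <x^2+1>
one = Coset(1, 0)
assert i * i + one == Coset(0, 0)   # i^2 + 1 = 0 in the quotient
```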

If we define ${i \in {\bf C}}$ to be the coset

$\displaystyle i := \mathrm{x} + \langle \mathrm{x}^2 + 1 \rangle,$
then it is clear from construction that ${i^2+1=0}$. Thus ${{\bf C}}$ contains both ${{\bf R}}$ and a solution of the equation ${i^2+1=0}$. Also, since every element of ${{\bf C}}$ is of the form ${P(\mathrm{x}) + \langle \mathrm{x}^2+1 \rangle}$ for some polynomial ${P \in {\bf R}[\mathrm{x}]}$, we see that every element of ${{\bf C}}$ is a polynomial combination ${P(i)}$ of ${i}$ with real coefficients; in particular, any subring of ${{\bf C}}$ that contains ${{\bf R}}$ and ${i}$ will necessarily have to contain every element of ${{\bf C}}$. Thus ${{\bf C}}$ is generated by ${{\bf R}}$ and ${i}$.

The only remaining thing to verify is that ${{\bf C}}$ is a field and not just a commutative ring. In other words, we need to show that every non-zero element of ${{\bf C}}$ has a multiplicative inverse. This stems from a particular property of the polynomial ${\mathrm{x}^2 + 1}$, namely that it is irreducible in ${{\bf R}[\mathrm{x}]}$. That is to say, we cannot factor ${\mathrm{x}^2+1}$ into non-constant polynomials

$\displaystyle \mathrm{x}^2 + 1 = P(\mathrm{x}) Q(\mathrm{x})$
with ${P(\mathrm{x}), Q(\mathrm{x}) \in {\bf R}[\mathrm{x}]}$. Indeed, as ${\mathrm{x}^2+1}$ has degree two, the only possible way such a factorisation could occur is if ${P(\mathrm{x}), Q(\mathrm{x})}$ both have degree one, which would imply that the polynomial ${\mathrm{x}^2+1}$ has a root in the reals ${{\bf R}}$, which of course it does not.

Because the polynomial ${\mathrm{x}^2+1}$ is irreducible, it is also prime: if ${\mathrm{x}^2+1}$ divides a product ${P(\mathrm{x}) Q(\mathrm{x})}$ of two polynomials in ${{\bf R}[\mathrm{x}]}$, then it must also divide at least one of the factors ${P(\mathrm{x})}$, ${Q(\mathrm{x})}$. Indeed, if ${\mathrm{x}^2 + 1}$ does not divide ${P(\mathrm{x})}$, then by irreducibility the greatest common divisor of ${\mathrm{x}^2+1}$ and ${P(\mathrm{x})}$ is ${1}$. Applying the Euclidean algorithm for polynomials, we then obtain a representation of ${1}$ as

$\displaystyle 1 = R(\mathrm{x}) (\mathrm{x}^2+1) + S(\mathrm{x}) P(\mathrm{x})$
for some polynomials ${R(\mathrm{x}), S(\mathrm{x})}$; multiplying both sides by ${Q(\mathrm{x})}$, we conclude that ${Q(\mathrm{x})}$ is a multiple of ${\mathrm{x}^2+1}$.
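The Euclidean algorithm argument can be carried out mechanically. Below is a minimal sketch over the rationals, with polynomials stored as coefficient lists (lowest degree first); the helper names `poly_divmod` and `ext_gcd` are our own. It produces a Bezout identity ${1 = R(\mathrm{x})(\mathrm{x}^2+1) + S(\mathrm{x}) P(\mathrm{x})}$ for the sample polynomial ${P(\mathrm{x}) = 1 + \mathrm{x}}$:

```python
from fractions import Fraction

def trim(p):
    """Drop trailing zero coefficients."""
    while p and p[-1] == 0:
        p = p[:-1]
    return p

def poly_add(p, q):
    n = max(len(p), len(q))
    return trim([(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
                 for i in range(n)])

def poly_mul(p, q):
    if not p or not q:
        return []
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return trim(out)

def poly_divmod(p, q):
    """Long division: p = quot * q + rem with deg(rem) < deg(q)."""
    r = [Fraction(c) for c in trim(p)]
    quot = [Fraction(0)] * max(len(r) - len(q) + 1, 1)
    while r and len(r) >= len(q):
        shift = len(r) - len(q)
        c = r[-1] / q[-1]
        quot[shift] = c
        for k, qk in enumerate(q):
            r[shift + k] -= c * qk
        r = trim(r)
    return trim(quot), r

def ext_gcd(p, q):
    """Extended Euclidean algorithm: return (g, s, t) with s*p + t*q = g."""
    r0, r1 = trim(p), trim(q)
    s0, s1 = [Fraction(1)], []
    t0, t1 = [], [Fraction(1)]
    while r1:
        quo, rem = poly_divmod(r0, r1)
        r0, r1 = r1, rem
        s0, s1 = s1, poly_add(s0, [-c for c in poly_mul(quo, s1)])
        t0, t1 = t1, poly_add(t0, [-c for c in poly_mul(quo, t1)])
    return r0, s0, t0

# P(x) = 1 + x is not divisible by x^2 + 1, so their gcd is a constant.
modulus = [Fraction(1), Fraction(0), Fraction(1)]   # x^2 + 1
P = [Fraction(1), Fraction(1)]                       # 1 + x
g, s, t = ext_gcd(modulus, P)
assert len(g) == 1 and g[0] != 0                     # gcd is a nonzero constant
R = [c / g[0] for c in s]
S = [c / g[0] for c in t]
# The Bezout identity 1 = R(x)(x^2 + 1) + S(x)P(x):
assert poly_add(poly_mul(R, modulus), poly_mul(S, P)) == [Fraction(1)]
```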

Since ${\mathrm{x}^2+1}$ is prime, the quotient space ${{\bf C} = {\bf R}[\mathrm{x}] / \langle \mathrm{x}^2+1 \rangle}$ is an integral domain: there are no zero-divisors in ${{\bf C}}$ other than zero. This brings us closer to the task of showing that ${{\bf C}}$ is a field, but we are not quite there yet; note for instance that ${{\bf R}[\mathrm{x}]}$ is an integral domain, but not a field. But one can finish up by using finite dimensionality. As ${{\bf C}}$ is a ring containing the field ${{\bf R}}$, it is certainly a vector space over ${{\bf R}}$; as ${{\bf C}}$ is generated by ${{\bf R}}$ and ${i}$, and ${i^2=-1}$, we see that it is in fact a two-dimensional vector space over ${{\bf R}}$, spanned by ${1}$ and ${i}$ (which are linearly independent, as ${i}$ clearly cannot be real). In particular, it is finite dimensional. For any non-zero ${z \in {\bf C}}$, the multiplication map ${w \mapsto zw}$ is an ${{\bf R}}$-linear map from this finite-dimensional vector space to itself. As ${{\bf C}}$ is an integral domain, this map is injective; by finite-dimensionality, it is therefore surjective (by the rank-nullity theorem). In particular, there exists ${w}$ such that ${zw = 1}$, and hence ${z}$ is invertible and ${{\bf C}}$ is a field. This concludes the construction of a complex field ${{\bf C}}$.
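The finite-dimensionality argument is constructive in coordinates: multiplication by ${z = a+bi}$ has matrix ${\begin{pmatrix} a & -b \\ b & a \end{pmatrix}}$ in the basis ${\{1, i\}}$, and inverting ${z}$ amounts to solving the ${2 \times 2}$ linear system ${Mw = (1,0)}$. A sketch, with an illustrative `inverse` helper of our own:

```python
def inverse(a, b):
    """Invert z = a + bi by solving M w = (1, 0), where M = [[a, -b], [b, a]]
    is the matrix of the R-linear map w -> z w in the basis {1, i}."""
    det = a * a + b * b          # determinant of the multiplication-by-z matrix
    if det == 0:
        raise ZeroDivisionError("0 has no inverse")
    # Cramer's rule for [[a, -b], [b, a]] w = (1, 0):
    return (a / det, -b / det)

a, b = 3.0, 4.0
c, d = inverse(a, b)
# (a + bi)(c + di) should equal 1, i.e. the pair (1, 0).
assert abs(a * c - b * d - 1) < 1e-12 and abs(a * d + b * c) < 1e-12
```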

Remark 2 One can think of the action of passing from a ring ${R}$ to a quotient ${R/I}$ by some ideal ${I}$ as the action of forcing some relations to hold between the various elements of ${R}$, by requiring all the elements of the ideal ${I}$ (or equivalently, all the generators of ${I}$) to vanish. Thus one can think of ${{\bf R}[\mathrm{x}] / \langle \mathrm{x}^2 + 1 \rangle}$ as the ring formed by adjoining a new element ${i}$ to the existing ring ${{\bf R}}$ and then demanding the constraint ${i^2+1=0}$.
With this perspective, the main issues to check in order to obtain a
complex field are firstly that these relations do not collapse the ring
so much that two previously distinct elements of ${{\bf R}}$
become equal, and secondly that all the non-zero elements become
invertible once the relations are imposed, so that we obtain a field
rather than merely a ring or integral domain.
Remark 3 It is instructive to compare the complex field ${{\bf R}[\mathrm{x}] / \langle \mathrm{x}^2 + 1 \rangle}$, formed by adjoining the square root of ${-1}$ to the reals, with other commutative rings such as the dual numbers ${{\bf R}[\mathrm{x}] / \langle \mathrm{x}^2 \rangle}$ (which adjoins an additional square root of ${0}$ to the reals) or the split complex numbers ${{\bf R}[\mathrm{x}] / \langle \mathrm{x}^2 - 1 \rangle}$ (which adjoins a new root of ${+1}$
to the reals). The latter two objects are perfectly good rings, but are
not fields (they contain zero divisors, and the first ring even
contains a nilpotent). This is ultimately due to the reducible nature of
the polynomials ${\mathrm{x}^2}$ and ${\mathrm{x}^2-1}$ in ${{\bf R}[\mathrm{x}]}$.
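The contrast between these three quotients can be tested directly, since in each case multiplication reduces ${\mathrm{x}^2}$ to a constant ${c}$. A small sketch (the `mul` helper and the encoding of elements as coefficient pairs are our own illustrative choices):

```python
def mul(z, w, c):
    """Multiply a + b*x and d + e*x in R[x]/<x^2 - c>, reducing x^2 = c.
    c = -1 gives the complex numbers, c = 0 the dual numbers,
    c = 1 the split-complex numbers."""
    (a, b), (d, e) = z, w
    return (a * d + c * b * e, a * e + b * d)

# Split-complex numbers: (1 + x)(1 - x) = 1 - x^2 = 0, a pair of zero divisors.
assert mul((1, 1), (1, -1), c=1) == (0, 0)
# Dual numbers: x itself is nilpotent, x * x = 0.
assert mul((0, 1), (0, 1), c=0) == (0, 0)
# Complex numbers: no such collapse; (1 + i)(1 - i) = 2.
assert mul((1, 1), (1, -1), c=-1) == (2, 0)
```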
Uniqueness of ${{\bf C}}$ up to isomorphism is a straightforward exercise:

Exercise 4 (Uniqueness up to isomorphism) Suppose that one has two complex fields ${{\bf C} = ({\bf C},i)}$ and ${{\bf C}' = ({\bf C}',i')}$. Show that there is a unique field isomorphism ${\iota: {\bf C} \rightarrow {\bf C}'}$ that maps ${i}$ to ${i'}$ and is the identity on ${{\bf R}}$.
Now that we have existence and uniqueness up to isomorphism, it is safe to designate one of the complex fields ${{\bf C} = ({\bf C},i)}$ as the
complex field; the other complex fields out there will no longer be of
much importance in this course (or indeed, in most of mathematics), with
one small exception that we will get to later in this section. One can,
if one wishes, use the above abstract algebraic construction ${( {\bf R}[\mathrm{x}] / \langle \mathrm{x}^2+1 \rangle, \mathrm{x} + \langle \mathrm{x}^2 + 1 \rangle)}$ as the choice for “the” complex field ${{\bf C}}$, but one can certainly pick other choices if desired (e.g. the Argand plane construction in Remark 7 below). But in view of Exercise 4, the precise construction of ${{\bf C}}$
is not terribly relevant for the purposes of actually doing complex
analysis, much as the question of whether to construct the real numbers ${{\bf R}}$
using Dedekind cuts, equivalence classes of Cauchy sequences, or some
other construction is not terribly relevant for the purposes of actually
doing real analysis. So, from here on out, we will no longer refer to
the precise construction of ${{\bf C}}$ used; the reader may certainly substitute his or her own favourite construction of ${{\bf C}}$ in place of ${{\bf R}[\mathrm{x}] / \langle \mathrm{x}^2 + 1 \rangle}$ if desired, with essentially no change to the rest of the lecture notes.

Exercise 5 Let ${k}$ be an arbitrary field, let ${k[\mathrm{x}]}$ be the ring of polynomials with coefficients in ${k}$, and let ${P(\mathrm{x})}$ be an irreducible polynomial in ${k[\mathrm{x}]}$ of degree at least two. Show that ${k[\mathrm{x}] / \langle P(\mathrm{x}) \rangle}$ is a field containing an embedded copy of ${k}$, as well as a root ${\alpha}$ of the equation ${P(\alpha)=0}$, and that this field is generated by ${k}$ and ${\alpha}$. Also show that all such fields are unique up to isomorphism. (This field ${k[\mathrm{x}] / \langle P(\mathrm{x}) \rangle}$ is an example of a field extension of ${k}$,
the further study of which can be found in any advanced undergraduate
or early graduate text on algebra, and is the starting point in
particular for the beautiful topic of Galois theory, which we will not discuss here.)
Exercise 6 Let ${k}$ be an arbitrary field. Show that every non-constant polynomial ${P(\mathrm{x})}$ in ${k[\mathrm{x}]}$ can be factored as the product ${P_1(\mathrm{x}) \dots P_r(\mathrm{x})}$ of irreducible non-constant polynomials. Furthermore show that this factorisation is unique up to permutation of the factors ${P_1(\mathrm{x}),\dots,P_r(\mathrm{x})}$,
and multiplication of each of the factors by a constant (with the
product of all such constants being one). In other words: the
polynomial ring ${k[\mathrm{x}]}$ is a unique factorisation domain.
Remark 7 (Real and imaginary coordinates) As a complex field ${{\bf C}}$ is spanned (over ${{\bf R}}$) by the linearly independent elements ${1}$ and ${i}$, we can write

$\displaystyle {\bf C} = \{ a + b i: a,b \in {\bf R} \}$
with each element ${z}$ of ${{\bf C}}$ having a unique representation of the form ${a+bi}$, thus

$\displaystyle a+bi = c+di \iff a=c \hbox{ and } b=d$
for real ${a,b,c,d}$. The addition, subtraction, and multiplication operations can then be written down explicitly in these coordinates as

$\displaystyle (a+bi) + (c+di) = (a+c) + (b+d)i$
$\displaystyle (a+bi) - (c+di) = (a-c) + (b-d)i$
$\displaystyle (a+bi) (c+di) = (ac-bd) + (ad+bc)i$
and with a bit more work one can compute the division operation as

$\displaystyle \frac{a+bi}{c+di} = \frac{ac+bd}{c^2+d^2} + \frac{bc-ad}{c^2+d^2} i$
if ${c+di \neq 0}$. One could take these coordinate representations as the definition of the complex field ${{\bf C}}$
and its basic arithmetic operations, and this is indeed done in many
texts introducing the complex numbers. In particular, one could take the
Argand plane ${({\bf R}^2, (0,1))}$ as the choice of complex field, where we identify each point ${(a,b)}$ in ${{\bf R}^2}$ with ${a+bi}$ (so for instance ${{\bf R}^2}$ becomes endowed with the multiplication operation ${(a,b) (c,d) = (ac-bd,ad+bc)}$).
This is a very concrete and direct way to construct the complex
numbers; the main drawback is that it is not immediately obvious that
the field axioms are all satisfied. For instance, the associativity of
multiplication is rather tedious to verify in the coordinates of the
Argand plane. In contrast, the more abstract algebraic construction of
the complex numbers given above makes it more evident what the source of
the field structure on ${{\bf C}}$ is, namely the irreducibility of the polynomial ${\mathrm{x}^2+1}$.
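Both the division formula and the tedious associativity axiom can be machine-checked in the Argand-plane model. The sketch below uses illustrative helper names of our own, with exact integer arithmetic for the associativity test:

```python
import random

def mul(z, w):
    """Argand-plane multiplication (a, b)(c, d) = (ac - bd, ad + bc)."""
    (a, b), (c, d) = z, w
    return (a * c - b * d, a * d + b * c)

def div(z, w):
    """The rationalisation formula for (a + bi)/(c + di)."""
    (a, b), (c, d) = z, w
    if c == 0 and d == 0:
        raise ZeroDivisionError
    denom = c * c + d * d
    return ((a * c + b * d) / denom, (b * c - a * d) / denom)

# The division formula inverts multiplication: (z / w) * w == z.
z, w = (1.0, 2.0), (3.0, -4.0)
prod = mul(div(z, w), w)
assert abs(prod[0] - z[0]) < 1e-12 and abs(prod[1] - z[1]) < 1e-12

# Associativity of multiplication, checked exactly on random integer triples.
random.seed(0)
for _ in range(100):
    u, v, x = [(random.randint(-9, 9), random.randint(-9, 9)) for _ in range(3)]
    assert mul(mul(u, v), x) == mul(u, mul(v, x))
```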
Remark 8 Because of the Argand plane construction, we will sometimes refer to the space ${{\bf C}}$ of complex numbers as the complex plane. We should warn, though, that in some areas of mathematics, particularly in algebraic geometry, ${{\bf C}}$ is viewed as a one-dimensional complex vector space (or a one-dimensional complex manifold or complex variety), and so ${{\bf C}}$ is sometimes referred to in those cases as a complex line. (Similarly, Riemann surfaces, which from a real point of view are two-dimensional surfaces, can sometimes be referred to as complex curves in the literature; the modular curve
is a famous instance of this.) In this current course, though, the
topological notion of dimension turns out to be more important than the
algebraic notions of dimension, and as such we shall generally refer to ${{\bf C}}$ as a plane rather than a line.
Elements of ${{\bf C}}$ of the form ${bi}$ for ${b}$ real are known as purely imaginary numbers;
the terminology is colourful, but despite the name, imaginary numbers
have precisely the same first-class mathematical object status as real
numbers. If ${z=a+bi}$ is a complex number, the real components ${a,b}$ of ${z}$ are known as the real part ${\mathrm{Re}(z)}$ and imaginary part ${\mathrm{Im}(z)}$ of ${z}$ respectively. Complex numbers that are not real are occasionally referred to as strictly complex numbers. In the complex plane, the set ${{\bf R}}$ of real numbers forms the real axis, and the set ${i{\bf R}}$ of imaginary numbers forms the imaginary axis. Traditionally, elements of ${{\bf C}}$ are denoted with symbols such as ${z}$, ${w}$, or ${\zeta}$, while symbols such as ${a,b,c,d,x,y}$ are typically intended to represent real numbers instead.

Remark 9 We noted earlier that the equation ${x^2+1=0}$ had no solutions in the reals because ${x^2+1}$ was always positive. In other words, the properties of the order relation ${<}$ on ${{\bf R}}$ prevented the existence of a root for the equation ${x^2+1=0}$. As ${{\bf C}}$ does have a root for ${x^2+1=0}$, this means that the complex numbers ${{\bf C}}$
cannot be ordered in the same way that the reals are ordered (that is
to say, being totally ordered, with the positive numbers closed under
both addition and multiplication). Indeed, one usually refrains from
putting any order structure on the complex numbers, so that statements
such as ${z < w}$ for complex numbers ${z,w}$ are left undefined (unless ${z,w}$ are real, in which case one can of course use the real ordering). In particular, the complex number ${i}$ is considered to be neither positive nor negative, and an assertion such as ${z < w}$ is understood to implicitly carry with it the claim that ${z,w}$
are real numbers and not just complex numbers. (Of course, if one
really wants to, one can find some total orderings to place on ${{\bf C}}$, e.g. lexicographical ordering on the real and imaginary parts. However, such orderings do not interact too well with the algebraic structure of ${{\bf C}}$ and are rarely useful in practice.)
As with any other field, we can raise a complex number ${z}$ to a non-negative integer ${n}$ by declaring inductively ${z^0 := 1}$ and ${z^{n+1} := z \times z^n}$ for ${n \geq 0}$; in particular we adopt the usual convention that ${0^0=1}$ (when thinking of the base ${0}$ as a complex number, and the exponent ${0}$ as a non-negative integer). For negative integers ${n = -m}$, we define ${z^n = 1/z^m}$ for non-zero ${z}$; we leave ${z^n}$ undefined when ${z}$ is zero and ${n}$ is negative. At the present time we do not attempt to define ${z^\alpha}$ for any exponent ${\alpha}$
other than an integer; we will return to such exponentiation operations
later in the course, though we will at least define the complex
exponential ${e^z}$ for any complex ${z}$ later in this set of notes.
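These conventions agree with Python's built-in complex power operator, which can serve as a quick sanity check (an illustration, not part of the text's formal development):

```python
# Integer powers of a complex number, following the conventions in the text:
# z^0 = 1 (even for z = 0), and z^(-m) = 1/z^m for nonzero z.
z = complex(0, 1)            # z = i
assert z ** 0 == 1
assert z ** 2 == -1
assert z ** (-1) == -z       # 1/i = -i
assert complex(0, 0) ** 0 == 1
```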

By definition, a complex field ${({\bf C},i)}$ is a field ${{\bf C}}$ together with a root ${z=i}$ of the equation ${z^2+1=0}$. But if ${z=i}$ is a root of the equation ${z^2+1=0}$, then so is ${z=-i}$ (indeed, from the factorisation ${z^2+1 = (z-i) (z+i)}$ we see that these are the only two roots of this quadratic equation). Thus we have another complex field ${\overline{{\bf C}} := ({\bf C},-i)}$ which differs from ${{\bf C}}$ only in the choice of root ${i}$. By Exercise 4, there is a unique field isomorphism from ${{\bf C}}$ to ${{\bf C}}$ that maps ${i}$ to ${-i}$ (i.e. a complex field isomorphism from ${{\bf C}}$ to ${\overline{{\bf C}}}$); this operation is known as complex conjugation and is denoted ${z \mapsto \overline{z}}$. In coordinates, we have

$\displaystyle \overline{a+bi} = a-bi.$
Being a field isomorphism, we have in particular that

$\displaystyle \overline{z+w} = \overline{z} + \overline{w}$
and

$\displaystyle \overline{zw} = \overline{z} \overline{w}$
for all complex numbers ${z,w}$. It is also clear that complex conjugation fixes the real numbers, and only the real numbers: ${z = \overline{z}}$ if and only if ${z}$
is real. Geometrically, complex conjugation is the operation of
reflection in the complex plane across the real axis. It is clearly an involution in the sense that it is its own inverse:

$\displaystyle \overline{\overline{z}} = z.$
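These properties of conjugation are easy to sanity-check with Python's built-in complex type (a minimal illustration; the identities ${\mathrm{Re}(z) = (z+\overline{z})/2}$ and ${\mathrm{Im}(z) = (z-\overline{z})/2i}$ discussed just below are also included):

```python
# Sanity checks for complex conjugation: it is a field homomorphism, an
# involution, fixes the reals, and recovers Re and Im via
# Re(z) = (z + conj(z))/2 and Im(z) = (z - conj(z))/(2i).
z, w = 3 + 4j, -1 + 2j

assert (z + w).conjugate() == z.conjugate() + w.conjugate()
assert (z * w).conjugate() == z.conjugate() * w.conjugate()
assert z.conjugate().conjugate() == z       # involution
assert (5 + 0j).conjugate() == 5 + 0j       # reals are fixed
assert (z + z.conjugate()) / 2 == z.real    # Re(z) = (z + conj(z))/2
assert (z - z.conjugate()) / (2j) == z.imag # Im(z) = (z - conj(z))/(2i)
```

The integer coordinates are chosen so that all the checks hold exactly in floating point.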
One can also relate the real and imaginary parts to complex conjugation via the identities

$\displaystyle \mathrm{Re}(z) = \frac{z + \overline{z}}{2}; \quad \mathrm{Im}(z) = \frac{z - \overline{z}}{2i}. \ \ \ \ \ (1)$

Remark 10 Any field automorphism of ${{\bf C}}$ has to map ${i}$ to a root of ${z^2+1=0}$, and so the only field automorphisms of ${{\bf C}}$ that preserve the real line are the identity map and the conjugation map; conversely, the real line is the subfield of ${{\bf C}}$ fixed by both of these automorphisms. In the language of Galois theory, this means that ${{\bf C}}$ is a Galois extension of ${{\bf R}}$, with Galois group ${\mathrm{Gal}({\bf C}/{\bf R})}$ consisting of two elements. There is a certain sense in which one can think of the complex numbers (or more precisely, the scheme ${{\mathcal C}}$ of complex numbers) as a double cover of the real numbers (or more precisely, the scheme ${{\mathcal R}}$
of real numbers), analogous to how the boundary of a Möbius strip can
be viewed as a double cover of the unit circle formed by shrinking the
width of the strip to zero. (In this analogy, points on the unit circle
correspond to specific models of the real number system ${{\bf R}}$, and lying above each such point are two specific models ${({\bf C}, i)}$, ${({\bf C},-i)}$
of the complex number system; this analogy can be made precise using
Grothendieck’s “functor of points” interpretation of schemes.) The
operation of complex conjugation is then analogous to the operation of monodromy
caused by looping once around the base unit circle, causing the two
complex fields sitting above a real field to swap places with each
other. (This analogy is not quite perfect, by the way, because the
boundary of a Möbius strip is not simply connected and can in turn be
finitely covered by other curves, whereas the complex numbers are
algebraically complete and admit no further finite extensions; one
should really replace the unit circle here by something with a
two-element fundamental group, such as the projective plane ${\mathbf{RP}^2}$ that is double covered by the sphere ${S^2}$, but this is harder to visualize.) The analogy between (absolute)
Galois groups and fundamental groups suggested by this picture can be
made precise in scheme theory by introducing the concept of an étale fundamental group, which unifies the two concepts, but developing this further is well beyond the scope of this course; see this book of Szamuely for further discussion.
Observe that if we multiply a complex number ${z}$ by its complex conjugate ${\overline{z}}$, we obtain a quantity ${N(z) := z \overline{z}}$ which is invariant with respect to conjugation (i.e. ${\overline{N(z)} = N(z)}$) and is therefore real. The map ${N: {\bf C} \rightarrow {\bf R}}$ produced this way is known in field theory as the norm form of ${{\bf C}}$ over ${{\bf R}}$; it is clearly multiplicative in the sense that ${N(zw) = N(z) N(w)}$, and is only zero when ${z}$ is zero. It can be used to link multiplicative inversion with complex conjugation, in that we clearly have

$\displaystyle \frac{1}{z} = \frac{\overline{z}}{N(z)} \ \ \ \ \ (2)$

for any non-zero complex number ${z}$. In coordinates, we have

$\displaystyle N(a+bi) = (a+bi) (a-bi) = a^2+b^2$
(thus recovering, by the way, the inversion formula ${\frac{1}{a+bi} = \frac{a}{a^2+b^2} - \frac{b}{a^2+b^2} i}$ implicit in Remark 7). In coordinates, the multiplicativity ${N(zw) = N(z) N(w)}$ takes the form of Lagrange’s identity

$\displaystyle (ac-bd)^2 + (ad+bc)^2 = (a^2+b^2) (c^2+d^2)$
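Both the multiplicativity of the norm form and Lagrange's identity are easy to check numerically; integer coordinates keep the arithmetic exact (a quick sketch):

```python
# The norm form N(z) = z * conj(z) is real and multiplicative; in coordinates
# the multiplicativity N(zw) = N(z) N(w) is Lagrange's identity
# (ac - bd)^2 + (ad + bc)^2 = (a^2 + b^2)(c^2 + d^2).
def N(z: complex) -> float:
    return (z * z.conjugate()).real

a, b, c, d = 2, 3, 5, 7
z, w = complex(a, b), complex(c, d)

assert N(z) == a**2 + b**2
assert N(z * w) == N(z) * N(w)   # exact, since all coordinates are small integers
assert (a*c - b*d)**2 + (a*d + b*c)**2 == (a**2 + b**2) * (c**2 + d**2)
print(N(z), N(w), N(z * w))      # 13.0 74.0 962.0
```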
— 2. The geometry of the complex numbers —
The norm form ${N}$ of the complex numbers has the feature of being positive definite: ${N(z)}$ is always non-negative (and strictly positive when ${z}$ is non-zero). This is a feature that is somewhat special to the complex numbers; for instance, the quadratic extension ${{\bf Q}(\sqrt{2})}$ of the rationals ${{\bf Q}}$ by ${\sqrt{2}}$ has the norm form ${N(n+m\sqrt{2}) = (n+m\sqrt{2}) (n-m\sqrt{2}) = n^2-2m^2}$, which is indefinite. One can view this positive definiteness of the norm form as the one remaining vestige in ${{\bf C}}$ of the order structure ${<}$
on the reals, which as remarked previously is no longer present
directly in the complex numbers. (One can also view the positive
definiteness of the norm form as a consequence of the topological
connectedness of the punctured complex plane ${{\bf C} \backslash \{0\}}$: the norm form is positive at ${z=1}$, and cannot change sign anywhere in ${{\bf C} \backslash \{0\}}$, so is forced to be positive on the rest of this connected region.)

One consequence of positive definiteness is that the bilinear form

$\displaystyle \langle z, w \rangle := \mathrm{Re}( z \overline{w} )$
becomes a positive definite inner product on ${{\bf C}}$ (viewed as a vector space over ${{\bf R}}$). In particular, this turns the complex numbers into an inner product space over the reals. From the usual theory of inner product spaces, we can then construct a norm

$\displaystyle |z| := \langle z, z \rangle^{1/2} = N(z)^{1/2}$
(thus, the norm is the square root of the norm form) which obeys the triangle inequality

$\displaystyle |z+w| \leq |z| + |w| \ \ \ \ \ (3)$

(which implies the usual permutations of this inequality, such as ${||z|-|w|| \leq |z-w| \leq |z| + |w|}$), and from the multiplicativity of the norm form we also have

$\displaystyle |zw| = |z| |w| \ \ \ \ \ (4)$

(and hence also ${|z/w| = |z|/|w|}$ if ${w}$ is non-zero) and from the involutive nature of complex conjugation we have

$\displaystyle |\overline{z}| = |z|. \ \ \ \ \ (5)$

The norm ${|\cdot|}$ clearly extends the absolute value operation ${x \mapsto |x|}$ on the real numbers, and so we also refer to the norm ${|z|}$ of a complex number ${z}$ as its absolute value or magnitude. In coordinates, we have

$\displaystyle |a+bi| = \sqrt{a^2+b^2}, \ \ \ \ \ (6)$

thus for instance ${|i|=1}$, and from (6) we also immediately have the useful inequalities

$\displaystyle |\mathrm{Re}(z)|, |\mathrm{Im}(z)| \leq |z| \leq |\mathrm{Re}(z)| + |\mathrm{Im}(z)|. \ \ \ \ \ (7)$

As with any other normed vector space, the norm ${z \mapsto |z|}$ defines a metric on the complex numbers via the definition

$\displaystyle d(z,w) := |z-w|.$
Note that, using the Argand plane representation of ${{\bf C}}$ as ${{\bf R}^2}$, this metric coincides with the usual Euclidean metric on ${{\bf R}^2}$. This metric in turn defines a topology on ${{\bf C}}$ (generated in the usual manner by the open disks ${D(z,r) := \{ w \in {\bf C}: |z-w| < r \}}$),
which in turn generates all the usual topological notions such as the
concept of an open set, closed set, compact set, connected set, and
boundary of a set; the notion of a limit of a sequence ${z_n}$; the notion of a continuous map, and so forth. For instance, a sequence ${z_n}$ of complex numbers converges to a limit ${z \in {\bf C}}$ if ${|z_n -z| \rightarrow 0}$ as ${n \rightarrow \infty}$, and a map ${f: {\bf C} \rightarrow {\bf C}}$ is continuous if one has ${f(z_n) \rightarrow f(z)}$ whenever ${z_n \rightarrow z}$,
or equivalently if the inverse image of any open set is open. Again,
using the Argand plane representation, these notions coincide with their
counterparts on the Euclidean plane ${{\bf R}^2}$.

As usual, if a sequence ${z_n}$ of complex numbers converges to a limit ${z}$, we write ${z = \lim_{n \rightarrow \infty} z_n}$. From the triangle inequality (3) and the multiplicativity (4) we see that the addition operation ${+: {\bf C} \times {\bf C} \rightarrow {\bf C}}$, subtraction operation ${-: {\bf C} \times {\bf C} \rightarrow {\bf C}}$, and multiplication operation ${\times: {\bf C} \times {\bf C} \rightarrow {\bf C}}$ are all continuous; thus we have the familiar limit laws

$\displaystyle \lim_{n \rightarrow \infty} (z_n + w_n) = \lim_{n \rightarrow \infty} z_n + \lim_{n \rightarrow \infty} w_n,$
$\displaystyle \lim_{n \rightarrow \infty} (z_n - w_n) = \lim_{n \rightarrow \infty} z_n - \lim_{n \rightarrow \infty} w_n$
and

$\displaystyle \lim_{n \rightarrow \infty} (z_n \cdot w_n) = \lim_{n \rightarrow \infty} z_n \cdot \lim_{n \rightarrow \infty} w_n$
whenever the limits on the right-hand side exist. Similarly, from (5) we see that complex conjugation is an isometry of the complex numbers, thus

$\displaystyle \lim_{n \rightarrow \infty} \overline{z_n} = \overline{\lim_{n \rightarrow \infty} z_n}$
when the limit on the right-hand side exists. As a consequence, the norm form ${N: {\bf C} \rightarrow {\bf R}}$ and the absolute value ${|\cdot|: {\bf C} \rightarrow {\bf R}}$ are also continuous, thus

$\displaystyle \lim_{n \rightarrow \infty} |z_n| = |\lim_{n \rightarrow \infty} z_n|$
whenever the limit on the right-hand side exists. Using the formula (2)
for the reciprocal of a complex number, we also see that division is a
continuous operation as long as the denominator is non-zero, thus

$\displaystyle \lim_{n \rightarrow \infty} \frac{z_n}{w_n} = \frac{\lim_{n \rightarrow \infty} z_n}{\lim_{n \rightarrow \infty} w_n}$
as long as the limits on the right-hand side exist, and the limit in the denominator is non-zero.

From (7) we see that

$\displaystyle z_n \rightarrow z \iff \mathrm{Re}(z_n) \rightarrow \mathrm{Re}(z) \hbox{ and } \mathrm{Im}(z_n) \rightarrow \mathrm{Im}(z);$
in particular

$\displaystyle \lim_{n \rightarrow \infty} \mathrm{Re}(z_n) = \mathrm{Re} \lim_{n \rightarrow \infty} z_n$
and

$\displaystyle \lim_{n \rightarrow \infty} \mathrm{Im}(z_n) = \mathrm{Im} \lim_{n \rightarrow \infty} z_n$
whenever the limit on the right-hand side exists. One consequence of this is that ${{\bf C}}$ is complete: every sequence ${z_n}$ of complex numbers that is a Cauchy sequence (thus ${|z_n-z_m| \rightarrow 0}$ as ${n,m \rightarrow \infty}$) converges to a unique complex limit ${z}$. (As such, one can view the complex numbers as a (very small) example of a Hilbert space.)

As with the reals, we have the fundamental fact that any formal series ${\sum_{n=0}^\infty z_n}$ of complex numbers which is absolutely convergent, in the sense that the non-negative series ${\sum_{n=0}^\infty |z_n|}$ is finite, is necessarily convergent to some complex number ${S}$, in the sense that the partial sums ${\sum_{n=0}^N z_n}$ converge to ${S}$ as ${N \rightarrow \infty}$. This is because the triangle inequality ensures that the partial sums are a Cauchy sequence. As usual we write ${S = \sum_{n=0}^\infty z_n}$ to denote the assertion that ${S}$ is the limit of the partial sums ${\sum_{n=0}^N z_n}$.
We will occasionally have need to deal with series that are only
conditionally convergent rather than absolutely convergent, but in most
of our applications the only series we will actually evaluate are the
absolutely convergent ones. Many of the limit laws imply analogues for
series, thus for instance

$\displaystyle \sum_{n=0}^\infty \mathrm{Re}(z_n) = \mathrm{Re} \sum_{n=0}^\infty z_n$
whenever the series on the right-hand side is absolutely convergent
(or even just convergent). We will not write down an exhaustive list of
such series laws here.
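As a concrete instance, the geometric series ${\sum_{n=0}^\infty z^n}$ with ${|z| < 1}$ is absolutely convergent (since ${\sum_n |z|^n = \frac{1}{1-|z|}}$ is finite), and its partial sums converge to ${\frac{1}{1-z}}$; a quick numerical sketch:

```python
# Partial sums of the absolutely convergent geometric series
# sum_{n>=0} z^n = 1/(1 - z), valid for |z| < 1.
z = 0.3 + 0.4j                 # |z| = 0.5 < 1
partial, term = 0j, 1 + 0j
for _ in range(200):           # sum the first 200 terms
    partial += term
    term *= z

assert abs(z) < 1
assert abs(partial - 1 / (1 - z)) < 1e-12
```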

An important role in complex analysis is played by the unit circle

$\displaystyle S^1 := \{ z \in {\bf C}: |z|=1 \}.$
In coordinates, this is the set of points ${a+bi}$ for which ${a^2+b^2=1}$,
and so this indeed has the geometric structure of a unit circle.
Elements of the unit circle will be referred to in these notes as phases. Every non-zero complex number ${z}$ has a unique polar decomposition as ${z = r \omega}$ where ${r>0}$ is a positive real and ${\omega}$ lies on the unit circle ${S^1}$. Indeed, it is easy to see that this decomposition is given by ${r = |z|}$ and ${\omega = \frac{z}{|z|}}$, and that this is the only polar decomposition of ${z}$. We refer to the polar components ${r=|z|}$ and ${\omega = z/|z|}$ of a non-zero complex number ${z}$ as the magnitude and phase of ${z}$ respectively.

From (4) we see that the unit circle ${S^1}$ is a multiplicative group; it contains the multiplicative identity ${1}$, and if ${z, w}$ lie in ${S^1}$, then so do ${zw}$ and ${1/z}$. From (2) we see that reciprocation and complex conjugation agree on the unit circle, thus

$\displaystyle \frac{1}{z} = \overline{z}$
for ${z \in S^1}$. It is worth emphasising that this useful identity does not hold as soon as one leaves the unit circle, in which case one must use the more general formula (2) instead! If ${z_1,z_2}$ are non-zero complex numbers with polar decompositions ${z_1 = r_1 \omega_1}$ and ${z_2 = r_2 \omega_2}$ respectively, then clearly the polar decompositions of ${z_1 z_2}$ and ${z_1/z_2}$ are given by ${z_1 z_2 = (r_1 r_2) (\omega_1 \omega_2)}$ and ${z_1/z_2 = (r_1/r_2) (\omega_1/\omega_2)}$
respectively. Thus polar coordinates are very convenient for performing
complex multiplication, although they turn out to be atrocious for
performing complex addition. (This can be contrasted with the usual
Cartesian coordinates ${z=a+bi}$,
which are very convenient for performing complex addition and mildly
inconvenient for performing complex multiplication.) In the language of
group theory, the polar decomposition splits the multiplicative complex
group ${{\bf C}^\times = ({\bf C} \backslash \{0\}, \times)}$ as the direct product of the positive reals ${(0,+\infty)}$ and the unit circle ${S^1}$: ${{\bf C}^\times \equiv (0,+\infty) \times S^1}$.
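This polar decomposition is exactly what Python's standard `cmath.polar` computes (a quick sketch):

```python
import cmath

# Polar decomposition z = r * omega, with r = |z| > 0 and omega = z/|z| in S^1.
z = 1 + 1j
r, theta = cmath.polar(z)      # cmath.polar returns (|z|, an argument of z)
omega = z / abs(z)

assert abs(r - abs(z)) < 1e-12
assert abs(abs(omega) - 1) < 1e-12            # omega lies on the unit circle
assert abs(r * omega - z) < 1e-12             # the decomposition recovers z
assert abs(cmath.rect(r, theta) - z) < 1e-12  # equivalently via cmath.rect
```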

If ${\omega}$ is an element of the unit circle ${S^1}$, then from (4) we see that the operation ${z \mapsto \omega z}$ of multiplication by ${\omega}$ is an isometry of ${{\bf C}}$, in the sense that

$\displaystyle |\omega z - \omega w| = |z-w|$
for all complex numbers ${z, w}$. This isometry also preserves the origin ${0}$. As such, it is geometrically obvious (see Exercise 11 below) that the map ${z \mapsto \omega z}$
must either be a rotation around the origin, or a reflection around a
line. The former operation is orientation preserving, and the latter is
orientation reversing. Since the map ${z \mapsto \omega z}$ is clearly orientation preserving when ${\omega = 1}$, and the unit circle ${S^1}$ is connected, a continuity argument shows that ${z \mapsto \omega z}$ must be orientation preserving for all ${\omega \in S^1}$, and so must be a rotation around the origin by some angle. Of course, by trigonometry, we may write

$\displaystyle \omega = \cos \theta + i \sin \theta$
for some real number ${\theta}$. The rotation ${z \mapsto \omega z}$ clearly maps the number ${1}$ to the number ${\cos \theta + i \sin \theta}$, and so the rotation must be a counter-clockwise rotation by ${\theta}$ (adopting the usual convention of placing ${1}$ to the right of the origin and ${i}$ above it). In particular, when applying this rotation ${z \mapsto \omega z}$ to another point ${\cos \phi + i \sin \phi}$ on the unit circle, this point must get rotated to ${\cos(\theta+\phi) + i \sin(\theta+\phi)}$. We have thus given a geometric proof of the multiplication formula

$\displaystyle (\cos \theta + i \sin \theta) (\cos \phi + i \sin \phi) = \cos(\theta+\phi) + i \sin(\theta+\phi); \ \ \ \ \ (8)$

taking real and imaginary parts, we recover the familiar trigonometric addition formulae

$\displaystyle \cos(\theta+\phi) = \cos \theta \cos \phi - \sin \theta \sin \phi$
$\displaystyle \sin(\theta+\phi) = \sin \theta \cos \phi + \cos \theta \sin \phi.$
We can also iterate the multiplication formula to give de Moivre’s formula

$\displaystyle \cos( n \theta ) + i \sin(n \theta) = (\cos \theta + i \sin \theta)^n$
for any natural number ${n}$ (or indeed for any integer ${n}$), which can in turn be used to recover familiar identities such as the double angle formulae

$\displaystyle \cos(2\theta) = \cos^2 \theta - \sin^2 \theta$
$\displaystyle \sin(2\theta) = 2 \sin \theta \cos \theta$
or triple angle formulae

$\displaystyle \cos(3\theta) = \cos^3 \theta - 3 \sin^2 \theta \cos \theta$
$\displaystyle \sin(3\theta) = 3 \sin \theta \cos^2 \theta - \sin^3 \theta$
after expanding out de Moivre’s formula for ${n=2}$ or ${n=3}$ and taking real and imaginary parts.
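De Moivre's formula and the triple angle formulae it yields are straightforward to verify numerically (a sketch, up to floating-point error):

```python
import math

# Check de Moivre's formula (cos t + i sin t)^n = cos(nt) + i sin(nt) at n = 3,
# together with the triple angle formulae from its real and imaginary parts.
t, n = 0.7, 3
lhs = complex(math.cos(t), math.sin(t)) ** n
rhs = complex(math.cos(n * t), math.sin(n * t))
assert abs(lhs - rhs) < 1e-12

# Triple angle formulae.
assert abs(math.cos(3*t) - (math.cos(t)**3 - 3*math.sin(t)**2*math.cos(t))) < 1e-12
assert abs(math.sin(3*t) - (3*math.sin(t)*math.cos(t)**2 - math.sin(t)**3)) < 1e-12
```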

Exercise 11

• (i) Let ${T: {\bf R}^2 \rightarrow {\bf R}^2}$ be an isometry of the Euclidean plane that fixes the origin ${(0,0)}$. Show that ${T}$ is either a rotation around the origin by some angle ${\theta \in {\bf R}}$, or the reflection around some line through the origin. (Hint: try to compose ${T}$ with rotations or reflections to achieve some normalisation of ${T}$, e.g. that ${T}$ fixes ${(1,0)}$. Then consider what ${T}$ must do to other points in the plane, such as ${(0,1)}$. Alternatively, one can use various formulae relating distances to angle, such as the sine rule or cosine rule, or the formula ${\langle z, w \rangle = |z| |w| \cos \theta}$ for the inner product.) For this question, you may use any result you already know from Euclidean geometry or trigonometry.
• (ii) Show that all isometries ${T: {\bf C} \rightarrow {\bf C}}$ of the complex numbers take the form
$\displaystyle T(z) = z_0 + \omega z$
or

$\displaystyle T(z) = z_0 + \omega \overline{z}$
for some complex number ${z_0}$ and phase ${\omega \in S^1}$.
Every non-zero complex number ${z}$ can now be written in polar form as

$\displaystyle z = r ( \cos \theta + i \sin \theta ) \ \ \ \ \ (9)$

with ${r>0}$ and ${\theta \in {\bf R}}$; we refer to ${\theta}$ as an argument of ${z}$, which can be interpreted as an angle of counterclockwise rotation needed to rotate the positive real axis to a position that contains ${z}$. The argument is not quite unique, due to the periodicity of sine and cosine: if ${\theta}$ is an argument of ${z}$, then so is ${\theta + 2\pi k}$ for any integer ${k}$, and conversely these are all the possible arguments that ${z}$ can have. The set of all such arguments will be denoted ${\mathrm{arg}(z)}$; it is a coset of the discrete group ${2\pi {\bf Z} := \{ 2\pi k: k \in {\bf Z}\}}$, and can thus be viewed as an element of the ${1}$-torus ${{\bf R}/2\pi{\bf Z}}$.

The operation ${w \mapsto zw}$ of multiplying a complex number ${w}$ by a given non-zero complex number ${z}$ now has a very appealing geometric interpretation when expressing ${z}$ in polar coordinates (9): this operation is the composition of the operation of dilation by ${r}$ around the origin, and counterclockwise rotation by ${\theta}$ around the origin. For instance, multiplication by ${i}$ performs a counter-clockwise rotation by ${\pi/2}$ around the origin, while multiplication by ${-i}$ performs instead a clockwise rotation by ${\pi/2}$.
As complex multiplication is commutative and associative, it does not
matter in which order one performs the dilation and rotation operations.
Similarly, using Cartesian coordinates, we see that the operation ${w \mapsto z+w}$ of adding a complex number ${w}$ by a given complex number ${z}$ is simply a spatial translation by a displacement of ${z}$. The multiplication operation need not be isometric (due to the presence of the dilation ${r}$), but observe that both the addition and multiplication operations are conformal
(angle-preserving) and also orientation-preserving (a counterclockwise
loop will transform to another counterclockwise loop, and similarly for
clockwise loops). As we shall see later, these conformal and
orientation-preserving properties of the addition and multiplication
maps will extend to the larger class of complex differentiable maps (at least outside of critical points of the map), and are an important aspect of the geometry of such maps.

Remark 12 One can also interpret the operations
of complex arithmetic geometrically on the Argand plane as follows. As
the addition law on ${{\bf C}}$ coincides with the vector addition law on ${{\bf R}^2}$,
addition and subtraction of complex numbers is given by the usual
parallelogram rule for vector addition; thus, to add a complex number ${z}$ to another ${w}$, we can translate the complex plane until the origin ${0}$ gets mapped to ${z}$, and then ${w}$ gets mapped to ${z+w}$; conversely, subtraction by ${z}$ corresponds to translating ${z}$ back to ${0}$. Similarly, to multiply a complex number ${z}$ with another ${w}$, we can dilate and rotate the complex plane around the origin until ${1}$ gets mapped to ${z}$, and then ${w}$ will be mapped to ${zw}$; conversely, division by ${z}$ corresponds to dilating and rotating ${z}$ back to ${1}$.
When performing computations, it is convenient to restrict the argument ${\theta}$ of a non-zero complex number ${z}$ to lie in a fundamental domain of the ${1}$-torus ${{\bf R}/2\pi{\bf Z}}$, such as the half-open interval ${\{ \theta: 0 \leq \theta < 2\pi \}}$ or ${\{ \theta: -\pi < \theta \leq \pi \}}$, in order to recover a unique parameterisation (at the cost of creating a branch cut at one point of the unit circle). Traditionally, the fundamental domain that is most often used is the half-open interval ${\{ \theta: -\pi < \theta \leq \pi \}}$. The unique argument of ${z}$ that lies in this interval is called the standard argument of ${z}$ and is denoted ${\mathrm{Arg}(z)}$, and ${\mathrm{Arg}}$ is called the standard branch of the argument function. Thus for instance ${\mathrm{Arg}(1)=0}$, ${\mathrm{Arg}(i) = \pi/2}$, ${\mathrm{Arg}(-1) = \pi}$, and ${\mathrm{Arg}(-i) = -\pi/2}$. Observe that the standard branch of the argument has a discontinuity on the negative real axis ${\{ x \in {\bf R}: x \leq 0\}}$, which is the branch cut
of this branch. Changing the fundamental domain used to define a branch
of the argument can move the branch cut around, but cannot eliminate it
completely, due to non-trivial monodromy
(if one continuously loops once counterclockwise around the origin, and
varies the argument continuously as one does so, the argument will
increment by ${2\pi}$, and so no branch of the argument function can be continuous at every point on the loop).
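The standard branch ${\mathrm{Arg}}$ and its branch cut can be observed directly in Python, whose `cmath.phase` returns the argument in ${(-\pi, \pi]}$ for non-zero inputs (an illustration):

```python
import cmath, math

# cmath.phase computes the standard argument Arg(z), with the branch cut
# along the negative real axis.
assert cmath.phase(1) == 0.0
assert abs(cmath.phase(1j) - math.pi / 2) < 1e-12
assert abs(cmath.phase(-1) - math.pi) < 1e-12
assert abs(cmath.phase(-1j) + math.pi / 2) < 1e-12

# Crossing the branch cut: the argument jumps by (almost) 2*pi.
above = cmath.phase(complex(-1.0, 1e-12))    # just above the negative real axis
below = cmath.phase(complex(-1.0, -1e-12))   # just below it
assert abs(above - below) > 6.0              # close to 2*pi
```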

The multiplication formula (8) resembles the multiplication formula

$\displaystyle \exp(x+y) = \exp(x) \exp(y) \ \ \ \ \ (10)$

for the real exponential function ${\exp: {\bf R} \rightarrow {\bf R}}$. The two formulae can be unified through the famous Euler formula involving the complex exponential ${\exp: {\bf C} \rightarrow {\bf C}}$. There are many ways to define the complex exponential. Perhaps the most natural is through the ordinary differential equation ${\frac{d}{dz} \exp(z) = \exp(z)}$ with boundary condition ${\exp(0)=1}$.
However, as we have not yet set up a theory of complex differentiation,
we will proceed (at least temporarily) through the device of Taylor series. Recalling that the real exponential function ${\exp: {\bf R} \rightarrow {\bf R}}$ has the Taylor expansion

$\displaystyle \exp(x) = \sum_{n=0}^\infty \frac{x^n}{n!}$
$\displaystyle = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \dots$
which is absolutely convergent for any real ${x}$, one is led to define the complex exponential function ${\exp: {\bf C} \rightarrow {\bf C}}$ by the analogous expansion

$\displaystyle \exp(z) := \sum_{n=0}^\infty \frac{z^n}{n!} \ \ \ \ \ (11)$

$\displaystyle = 1 + z + \frac{z^2}{2!} + \frac{z^3}{3!} + \dots$
noting from (4) that the absolute convergence of the real exponential ${\exp(x)}$ for any ${x \in {\bf R}}$ implies the absolute convergence of the complex exponential for any ${z \in {\bf C}}$. We also frequently write ${e^z}$ for ${\exp(z)}$. The multiplication formula (10) for the real exponential extends to the complex exponential:

Exercise 13 Use the binomial theorem and Fubini’s theorem for (complex) doubly infinite series to conclude that

$\displaystyle \exp(z+w) = \exp(z) \exp(w) \ \ \ \ \ (12)$

for any complex numbers ${z,w}$.
If one compares the Taylor series for ${\exp(z)}$ with the familiar Taylor expansions

$\displaystyle \sin(x) = \sum_{n=0}^\infty (-1)^n \frac{x^{2n+1}}{(2n+1)!}$
$\displaystyle = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \dots$
and

$\displaystyle \cos(x) = \sum_{n=0}^\infty (-1)^n \frac{x^{2n}}{(2n)!}$
$\displaystyle = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \dots$
for the (real) sine and cosine functions, one obtains Euler’s formula

$\displaystyle e^{ix} = \cos x + i \sin x \ \ \ \ \ (13)$

for any real number ${x}$; in particular we have the famous identities

$\displaystyle e^{i\pi} = -1 \ \ \ \ \ (14)$

and

$\displaystyle e^{2\pi i} = 1. \ \ \ \ \ (15)$

We now see that the multiplication formula (8) can be written as a special form

$\displaystyle e^{i(\theta + \phi)} = e^{i\theta} e^{i\phi}$
of (12); similarly, de Moivre’s formula takes the simple and intuitive form

$\displaystyle e^{i n \theta} = (e^{i\theta})^n.$
From (12) and (13) we also see that the exponential function basically transforms Cartesian coordinates to polar coordinates:

$\displaystyle \exp( x+iy ) = e^x ( \cos y + i \sin y ).$
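This Cartesian-to-polar property of the exponential is straightforward to confirm numerically with `cmath` (sketch):

```python
import cmath, math

# exp(x + iy) = e^x (cos y + i sin y): the exponential carries the Cartesian
# coordinates (x, y) to the polar coordinates (e^x, y).
x, y = 0.5, 2.0
z = cmath.exp(complex(x, y))

assert abs(z - math.exp(x) * complex(math.cos(y), math.sin(y))) < 1e-12
assert abs(abs(z) - math.exp(x)) < 1e-12   # magnitude is e^x
assert abs(cmath.phase(z) - y) < 1e-12     # argument is y (valid since y lies in (-pi, pi])
```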
Later on in the course we will study (the various branches of) the
logarithm function that inverts the complex exponential, thus converting
polar coordinates back to Cartesian ones.

From (13) and (1), together with the easily verified identity

$\displaystyle \overline{e^{ix}} = e^{-ix},$
we see that we can recover the trigonometric functions ${\sin(x), \cos(x)}$ from the complex exponential by the formulae

$\displaystyle \sin(x) = \frac{e^{ix} - e^{-ix}}{2i}; \quad \cos(x) = \frac{e^{ix} + e^{-ix}}{2}. \ \ \ \ \ (16)$

(Indeed, if one wished, one could take these identities as the definition
of the sine and cosine functions, giving a purely analytic way to
construct these trigonometric functions.) From these identities one can
derive all the usual trigonometric identities from the basic properties
of the exponential (and in particular (12)). For instance, using a little bit of high school algebra we can prove the familiar identity

$\displaystyle \sin^2(x) + \cos^2(x) = 1$
from (16):

$\displaystyle \sin^2(x) + \cos^2(x) = \frac{(e^{ix}-e^{-ix})^2}{(2i)^2} + \frac{(e^{ix} + e^{-ix})^2}{2^2}$
$\displaystyle = \frac{e^{2ix} - 2 + e^{-2ix}}{-4} + \frac{e^{2ix} + 2 + e^{-2ix}}{4}$
$\displaystyle = 1.$
Thus, in principle at least, one no longer has a need to memorize
all the different trigonometric identities out there, since they can now
all be unified as consequences of just a handful of basic identities
for the complex exponential, such as (12), (14), and (15).

In view of (16), it is now natural to introduce the complex sine and cosine functions ${\sin: {\bf C} \rightarrow {\bf C}}$ and ${\cos: {\bf C} \rightarrow {\bf C}}$ by the formula

$\displaystyle \sin(z) := \frac{e^{iz} - e^{-iz}}{2i}; \quad \cos(z) := \frac{e^{iz} + e^{-iz}}{2}. \ \ \ \ \ (17)$

These complex trigonometric functions no longer
have a direct trigonometric interpretation (as one cannot easily
develop a theory of complex angles), but they still inherit almost all
of the algebraic properties of their real-variable counterparts. For
instance, one can repeat the above high school algebra computations verbatim to conclude that

$\displaystyle \sin^2(z) + \cos^2(z) = 1 \ \ \ \ \ (18)$

for all ${z}$. (We caution however that this does not imply that ${\sin(z)}$ and ${\cos(z)}$ are bounded in magnitude by ${1}$ – note carefully the lack of absolute value signs outside of ${\sin(z)}$ and ${\cos(z)}$ in the above formula! See also Exercise 16
below.) Similarly for all of the other trigonometric identities. (Later
on in this series of lecture notes, we will develop the concept of analytic continuation, which can explain why so many real-variable algebraic identities naturally extend to their complex counterparts.) From (11)
we see that the complex sine and cosine functions have the same Taylor
series expansion as their real-variable counterparts, namely

$\displaystyle \sin(z) = \sum_{n=0}^\infty (-1)^n \frac{z^{2n+1}}{(2n+1)!}$
$\displaystyle = z - \frac{z^3}{3!} + \frac{z^5}{5!} - \dots$
and

$\displaystyle \cos(z) = \sum_{n=0}^\infty (-1)^n \frac{z^{2n}}{(2n)!}$
$\displaystyle = 1 - \frac{z^2}{2!} + \frac{z^4}{4!} - \dots.$
The formulae (17) for the complex sine and cosine functions greatly resemble those of the hyperbolic trigonometric functions ${\sinh, \cosh: {\bf R} \rightarrow {\bf R}}$, defined by the formulae

$\displaystyle \sinh(x) := \frac{e^x - e^{-x}}{2}; \quad \cosh(x) := \frac{e^x + e^{-x}}{2}.$
Indeed, if we extend these functions to the complex domain by defining ${\sinh, \cosh: {\bf C} \rightarrow {\bf C}}$ to be the functions

$\displaystyle \sinh(z) := \frac{e^z - e^{-z}}{2}; \quad \cosh(z) := \frac{e^z + e^{-z}}{2},$
then on comparison with (17) we obtain the complex identities

$\displaystyle \sinh(z) = -i \sin(iz); \quad \cosh(z) = \cos(iz) \ \ \ \ \ (19)$

or equivalently

$\displaystyle \sin(z) = -i \sinh(iz); \quad \cos(z) = \cosh(iz) \ \ \ \ \ (20)$

for all complex ${z}$.
Thus we see that once we adopt the perspective of working over the
complex numbers, the hyperbolic trigonometric functions are “rotations
by 90 degrees” of the ordinary trigonometric functions; this is a simple
example of what physicists call a Wick rotation.
In particular, we see from these identities that any trigonometric
identity will have a hyperbolic counterpart, though due to the presence
of various factors of ${i}$, the signs may change as one passes from trigonometric to hyperbolic functions or vice versa (a fact quantified by Osborne’s rule). For instance, by substituting (19) or (20) into (18) (and replacing ${z}$ by ${iz}$ or ${-iz}$ as appropriate), we end up with the analogous identity

$\displaystyle \cosh^2(z) - \sinh^2(z) = 1$
for the hyperbolic trigonometric functions. Similarly for all other
trigonometric identities. Thus we see that the complex exponential
single-handedly unites the trigonometry, hyperbolic trigonometry, and
the real exponential function into a single coherent theory!
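The Wick rotation identities and the hyperbolic analogue of ${\sin^2 + \cos^2 = 1}$ can be confirmed numerically with `cmath` (sketch):

```python
import cmath

# Wick rotation: sinh(z) = -i sin(iz) and cosh(z) = cos(iz), from which the
# hyperbolic identity cosh^2(z) - sinh^2(z) = 1 follows.
z = 0.8 - 0.3j

assert abs(cmath.sinh(z) - (-1j) * cmath.sin(1j * z)) < 1e-12
assert abs(cmath.cosh(z) - cmath.cos(1j * z)) < 1e-12
assert abs(cmath.cosh(z)**2 - cmath.sinh(z)**2 - 1) < 1e-12
```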

Exercise 14

• (i) If ${n}$ is a positive integer, show that the only complex number solutions to the equation ${z^n = 1}$ are given by the ${n}$ complex numbers ${e^{2\pi i k/n}}$ for ${k=0,\dots,n-1}$; these numbers are thus known as the ${n^{th}}$ roots of unity. Conclude the identity ${z^n - 1 = \prod_{k=0}^{n-1} (z - e^{2\pi i k/n})}$ for any complex number ${z}$.
• (ii) Show that the only compact subgroups ${G}$ of the multiplicative complex numbers ${{\bf C}^\times}$ are the unit circle ${S^1}$ and the ${n^{th}}$ roots of unity
$\displaystyle C_n := \{ e^{2\pi i k/n}: k=0,1,\dots,n-1\}$
for ${n=1,2,\dots}$. (Hint: there are two cases, depending on whether ${1}$ is a limit point of ${G}$ or not.)
• (iii) Give an example of a non-compact subgroup of ${S^1}$.
• (iv) (Warning: this one is tricky.) Show that the only connected closed subgroups of ${{\bf C}^\times}$ are the whole group ${{\bf C}^\times}$, the trivial group ${\{1\}}$, and the one-parameter groups of the form ${\{ \exp( tz ): t \in {\bf R} \}}$ for some non-zero complex number ${z}$.
The next exercise gives a special case of the fundamental theorem of algebra, when considering the roots of polynomials of the specific form ${P(z) = z^n - w}$.

Exercise 15 Show that if ${w}$ is a non-zero complex number and ${n}$ is a positive integer, then there are exactly ${n}$ distinct solutions to the equation ${z^n = w}$, and any two such solutions differ (multiplicatively) by an ${n^{th}}$ root of unity. In particular, a non-zero complex number ${w}$ has two square roots, each of which is the negative of the other. What happens when ${w=0}$?
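One can illustrate (though of course not prove) the statement of Exercise 15 numerically; the helper `nth_roots` below is a hypothetical construction, assuming one root is taken via the standard branch of the argument:

```python
import cmath, math

# Illustration: the n solutions of z^n = w are one n-th root of w multiplied
# by each of the n-th roots of unity e^{2*pi*i*k/n}.
def nth_roots(w: complex, n: int) -> list:
    r, theta = cmath.polar(w)                  # standard polar decomposition of w
    base = r ** (1.0 / n) * cmath.exp(1j * theta / n)
    return [base * cmath.exp(2j * math.pi * k / n) for k in range(n)]

roots = nth_roots(-8 + 0j, 3)                  # cube roots of -8
assert len(roots) == 3
assert all(abs(c**3 - (-8)) < 1e-9 for c in roots)
assert any(abs(c - (-2)) < 1e-9 for c in roots)  # -2 is among them
```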
Exercise 16 Let ${z_n}$ be a sequence of complex numbers. Show that ${\sin(z_n)}$ is bounded if and only if the imaginary part of ${z_n}$ is bounded, and similarly with ${\sin(z_n)}$ replaced by ${\cos(z_n)}$.
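The phenomenon behind Exercise 16 is easy to observe numerically: from the exponential formula for sine, ${\sin(iy) = i \sinh(y)}$ grows without bound as ${y \rightarrow \infty}$ (an illustration, not a solution to the exercise):

```python
import cmath, math

# sin(iy) = i*sinh(y), which is unbounded in y; so the complex sine is
# unbounded once the imaginary part of the argument is unbounded.
for y in [1.0, 5.0, 10.0]:
    print(y, abs(cmath.sin(1j * y)))    # grows roughly like e^y / 2

assert abs(cmath.sin(10j) - 1j * math.sinh(10)) < 1e-6
assert abs(cmath.sin(10j)) > 1000       # far larger than the real-variable bound of 1
```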
Exercise 17 (This question was drawn from a previous version of this course taught by Rowan Killip.) Let ${w_1, w_2}$ be distinct complex numbers, and let ${\lambda}$ be a positive real that is not equal to ${1}$.

• (i) Show that the set ${\{ z \in {\bf C}: |\frac{z-w_1}{z-w_2}| = \lambda \}}$
defines a circle in the complex plane. (Ideally, you should be able to
do this without breaking everything up into real and imaginary parts.)
• (ii) Conversely, show that every circle in the complex plane arises in such a fashion (for suitable choices of ${w_1,w_2,\lambda}$, of course).
• (iii) What happens if ${\lambda=1}$?
• (iv) Let ${\gamma}$ be a circle that does not pass through the origin. Show that the image of ${\gamma}$ under the inversion map ${z \mapsto 1/z}$ is a circle. What happens if ${\gamma}$ is a line? What happens if ${\gamma}$ passes through the origin (and one then deletes the origin from ${\gamma}$ before applying the inversion map)?
Exercise 18 If ${z}$ is a complex number, show that ${\exp(z) = \lim_{n \rightarrow \infty} (1 + \frac{z}{n})^n}$.
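The limit in Exercise 18 is easy to examine numerically (an illustration, not a proof):

```python
import cmath

# Numerical illustration of exp(z) = lim_{n -> infinity} (1 + z/n)^n.
z = 1 + 1j
for n in [10, 1000, 100000]:
    print(n, abs((1 + z / n) ** n - cmath.exp(z)))   # error shrinks roughly like 1/n

approx = (1 + z / 10**6) ** 10**6
assert abs(approx - cmath.exp(z)) < 1e-4
```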