An introduction to the calculus, with an excellent balance between theory and technique. Integration is treated before differentiation.
English, 699 pages, 1969
Table of contents :
Preface
Contents
PART 1. LINEAR ANALYSIS
1. LINEAR SPACES
1.1 Introduction
1.2 The definition of a linear space
1.3 Examples of linear spaces
1.4 Elementary consequences of the axioms
1.5 Exercises
1.6 Subspaces of a linear space
1.7 Dependent and independent sets in a linear space
1.8 Bases and dimension
1.9 Components
1.10 Exercises
1.11 Inner products, Euclidean spaces. Norms
1.12 Orthogonality in a Euclidean space
1.13 Exercises
1.14 Construction of orthogonal sets. The Gram-Schmidt process
1.15 Orthogonal complements. Projections
1.16 Best approximation of elements in a Euclidean space by elements in a finite-dimensional subspace
1.17 Exercises
2. LINEAR TRANSFORMATIONS AND MATRICES
2.1 Linear transformations
2.2 Null space and range
2.3 Nullity and rank
2.4 Exercises
2.5 Algebraic operations on linear transformations
2.6 Inverses
2.7 One-to-one linear transformations
2.8 Exercises
2.9 Linear transformations with prescribed values
2.10 Matrix representations of linear transformations
2.11 Construction of a matrix representation in diagonal form
2.12 Exercises
2.13 Linear spaces of matrices
2.14 Isomorphism between linear transformations and matrices
2.15 Multiplication of matrices
2.16 Exercises
2.17 Systems of linear equations
2.18 Computation techniques
2.19 Inverses of square matrices
2.20 Exercises
2.21 Miscellaneous exercises on matrices
3. DETERMINANTS
3.1 Introduction
3.2 Motivation for the choice of axioms for a determinant function
3.3 A set of axioms for a determinant function
3.4 Computation of determinants
3.5 The uniqueness theorem
3.6 Exercises
3.7 The product formula for determinants
3.8 The determinant of the inverse of a nonsingular matrix
3.9 Determinants and independence of vectors
3.10 The determinant of a block-diagonal matrix
3.11 Exercises
3.12 Expansion formulas for determinants. Minors and cofactors
3.13 Existence of the determinant function
3.14 The determinant of a transpose
3.15 The cofactor matrix
3.16 Cramer's rule
3.17 Exercises
4. EIGENVALUES AND EIGENVECTORS
4.1 Linear transformations with diagonal matrix representations
4.2 Eigenvectors and eigenvalues of a linear transformation
4.3 Linear independence of eigenvectors corresponding to distinct eigenvalues
4.4 Exercises
4.5 The finite-dimensional case. Characteristic polynomials
4.6 Calculation of eigenvalues and eigenvectors in the finite-dimensional case
4.7 Trace of a matrix
4.8 Exercises
4.9 Matrices representing the same linear transformation. Similar matrices
4.10 Exercises
5. EIGENVALUES OF OPERATORS ACTING ON EUCLIDEAN SPACES
5.1 Eigenvalues and inner products
5.2 Hermitian and skew-Hermitian transformations
5.3 Eigenvalues and eigenvectors of Hermitian and skew-Hermitian operators
5.4 Orthogonality of eigenvectors corresponding to distinct eigenvalues
5.5 Exercises
5.6 Existence of an orthonormal set of eigenvectors for Hermitian and skew-Hermitian operators acting on finite-dimensional spaces
5.7 Matrix representations for Hermitian and skew-Hermitian operators
5.8 Hermitian and skew-Hermitian matrices. The adjoint of a matrix
5.9 Diagonalization of a Hermitian or skew-Hermitian matrix
5.10 Unitary matrices. Orthogonal matrices
5.11 Exercises
5.12 Quadratic forms
5.13 Reduction of a real quadratic form to a diagonal form
5.14 Applications to analytic geometry
5.15 Exercises
*5.16 Eigenvalues of a symmetric transformation obtained as values of its quadratic form
*5.17 Extremal properties of eigenvalues of a symmetric transformation
*5.18 The finite-dimensional case
5.19 Unitary transformations
5.20 Exercises
6. LINEAR DIFFERENTIAL EQUATIONS
6.1 Historical introduction
6.2 Review of results concerning linear equations of first and second orders
6.3 Exercises
6.4 Linear differential equations of order n
6.5 The existence-uniqueness theorem
6.6 The dimension of the solution space of a homogeneous linear equation
6.7 The algebra of constant-coefficient operators
6.8 Determination of a basis of solutions for linear equations with constant coefficients by factorization of operators
6.9 Exercises
6.10 The relation between the homogeneous and nonhomogeneous equations
6.11 Determination of a particular solution of the nonhomogeneous equation. The method of variation of parameters
6.12 Nonsingularity of the Wronskian matrix of n independent solutions of a homogeneous linear equation
6.13 Special methods for determining a particular solution of the nonhomogeneous equation. Reduction to a system of first-order linear equations
6.14 The annihilator method for determining a particular solution of the nonhomogeneous equation
6.15 Exercises
6.16 Miscellaneous exercises on linear differential equations
6.17 Linear equations of second order with analytic coefficients
6.18 The Legendre equation
6.19 The Legendre polynomials
6.20 Rodrigues' formula for the Legendre polynomials
6.21 Exercises
6.22 The method of Frobenius
6.23 The Bessel equation
6.24 Exercises
7. SYSTEMS OF DIFFERENTIAL EQUATIONS
7.1 Introduction
7.2 Calculus of matrix functions
7.3 Infinite series of matrices. Norms of matrices
7.4 Exercises
7.5 The exponential matrix
7.6 The differential equation satisfied by e^{tA}
7.7 Uniqueness theorem for the matrix differential equation F'(t) = AF(t)
7.8 The law of exponents for exponential matrices
7.9 Existence and uniqueness theorems for homogeneous linear systems with constant coefficients
7.10 The problem of calculating e^{tA}
7.11 The CayleyHamilton theorem
7.12 Exercises
7.13 Putzer's method for calculating e^{tA}
7.14 Alternate methods for calculating e^{tA} in special cases
7.15 Exercises
7.16 Nonhomogeneous linear systems with constant coefficients
7.17 Exercises
7.18 The general linear system Y'(t) = P(t) Y(t) + Q(t)
7.19 A power-series method for solving homogeneous linear systems
7.20 Exercises
7.21 Proof of the existence theorem by the method of successive approximations
7.22 The method of successive approximations applied to first-order nonlinear systems
7.23 Proof of an existence-uniqueness theorem for first-order nonlinear systems
7.24 Exercises
*7.25 Successive approximations and fixed points of operators
*7.26 Normed linear spaces
*7.27 Contraction operators
*7.28 Fixed-point theorem for contraction operators
*7.29 Applications of the fixed-point theorem
PART 2. NONLINEAR ANALYSIS
8. DIFFERENTIAL CALCULUS OF SCALAR AND VECTOR FIELDS
8.1 Functions from R^n to R^m. Scalar and vector fields
8.2 Open balls and open sets
8.3 Exercises
8.4 Limits and continuity
8.5 Exercises
8.6 The derivative of a scalar field with respect to a vector
8.7 Directional derivatives and partial derivatives
8.8 Partial derivatives of higher order
8.9 Exercises
8.10 Directional derivatives and continuity
8.11 The total derivative
8.12 The gradient of a scalar field
8.13 A sufficient condition for differentiability
8.14 Exercises
8.15 A chain rule for derivatives of scalar fields
8.16 Applications to geometry. Level sets. Tangent planes
8.17 Exercises
8.18 Derivatives of vector fields
8.19 Differentiability implies continuity
8.20 The chain rule for derivatives of vector fields
8.21 Matrix form of the chain rule
8.22 Exercises
*8.23 Sufficient conditions for the equality of mixed partial derivatives
8.24 Miscellaneous exercises
9. APPLICATIONS OF THE DIFFERENTIAL CALCULUS
9.1 Partial differential equations
9.2 A first-order partial differential equation with constant coefficients
9.3 Exercises
9.4 The one-dimensional wave equation
9.5 Exercises
9.6 Derivatives of functions defined implicitly
9.7 Worked examples
9.8 Exercises
9.9 Maxima, minima, and saddle points
9.10 Second-order Taylor formula for scalar fields
9.11 The nature of a stationary point determined by the eigenvalues of the Hessian matrix
9.12 Second-derivative test for extrema of functions of two variables
9.13 Exercises
9.14 Extrema with constraints. Lagrange's multipliers
9.15 Exercises
9.16 The extreme-value theorem for continuous scalar fields
9.17 The small-span theorem for continuous scalar fields (uniform continuity)
10. LINE INTEGRALS
10.1 Introduction
10.2 Paths and line integrals
10.3 Other notations for line integrals
10.4 Basic properties of line integrals
10.5 Exercises
10.6 The concept of work as a line integral
10.7 Line integrals with respect to arc length
10.8 Further applications of line integrals
10.9 Exercises
10.10 Open connected sets. Independence of the path
10.11 The second fundamental theorem of calculus for line integrals
10.12 Applications to mechanics
10.13 Exercises
10.14 The first fundamental theorem of calculus for line integrals
10.15 Necessary and sufficient conditions for a vector field to be a gradient
10.16 Necessary conditions for a vector field to be a gradient
10.17 Special methods for constructing potential functions
10.18 Exercises
10.19 Applications to exact differential equations of first order
10.20 Exercises
10.21 Potential functions on convex sets
11. MULTIPLE INTEGRALS
11.1 Introduction
11.2 Partitions of rectangles. Step functions
11.3 The double integral of a step function
11.4 The definition of the double integral of a function defined and bounded on a rectangle
11.5 Upper and lower double integrals
11.6 Evaluation of a double integral by repeated onedimensional integration
11.7 Geometric interpretation of the double integral as a volume
11.8 Worked examples
11.9 Exercises
11.10 Integrability of continuous functions
11.11 Integrability of bounded functions with discontinuities
11.12 Double integrals extended over more general regions
11.13 Applications to area and volume
11.14 Worked examples
11.15 Exercises
11.16 Further applications of double integrals
11.17 Two theorems of Pappus
11.18 Exercises
11.19 Green's theorem in the plane
11.20 Some applications of Green's theorem
11.21 A necessary and sufficient condition for a two-dimensional vector field to be a gradient
11.22 Exercises
*11.23 Green's theorem for multiply connected regions
*11.24 The winding number
*11.25 Exercises
11.26 Change of variables in a double integral
11.27 Special cases of the transformation formula
11.28 Exercises
11.29 Proof of the transformation formula in a special case
11.30 Proof of the transformation formula in the general case
11.31 Extensions to higher dimensions
11.32 Change of variables in an n-fold integral
11.33 Worked examples
11.34 Exercises
12. SURFACE INTEGRALS
12.1 Parametric representation of a surface
12.2 The fundamental vector product
12.3 The fundamental vector product as a normal to the surface
12.4 Exercises
12.5 Area of a parametric surface
12.6 Exercises
12.7 Surface integrals
12.8 Change of parametric representation
12.9 Other notations for surface integrals
12.10 Exercises
12.11 The theorem of Stokes
12.12 The curl and divergence of a vector field
12.13 Exercises
12.14 Further properties of the curl and divergence
12.15 Exercises
*12.16 Reconstruction of a vector field from its curl
*12.17 Exercises
12.18 Extensions of Stokes' theorem
12.19 The divergence theorem (Gauss' theorem)
12.20 Applications of the divergence theorem
12.21 Exercises
PART 3. SPECIAL TOPICS
13. SET FUNCTIONS AND ELEMENTARY PROBABILITY
13.1 Historical introduction
13.2 Finitely additive set functions
13.3 Finitely additive measures
13.4 Exercises
13.5 The definition of probability for finite sample spaces
13.6 Special terminology peculiar to probability theory
13.7 Exercises
13.8 Worked examples
13.9 Exercises
13.10 Some basic principles of combinatorial analysis
13.11 Exercises
13.12 Conditional probability
13.13 Independence
13.14 Exercises
13.15 Compound experiments
13.16 Bernoulli trials
13.17 The most probable number of successes in n Bernoulli trials
13.18 Exercises
13.19 Countable and uncountable sets
13.20 Exercises
13.21 The definition of probability for countably infinite sample spaces
13.22 Exercises
13.23 Miscellaneous exercises on probability
14. CALCULUS OF PROBABILITIES
14.1 The definition of probability for uncountable sample spaces
14.2 Countability of the set of points with positive probability
14.3 Random variables
14.4 Exercises
14.5 Distribution functions
14.6 Discontinuities of distribution functions
14.7 Discrete distributions. Probability mass functions
14.8 Exercises
14.9 Continuous distributions. Density functions
14.10 Uniform distribution over an interval
14.11 Cauchy's distribution
14.12 Exercises
14.13 Exponential distributions
14.14 Normal distributions
14.15 Remarks on more general distributions
14.16 Exercises
14.17 Distributions of functions of random variables
14.18 Exercises
14.19 Distributions of two-dimensional random variables
14.20 Two-dimensional discrete distributions
14.21 Two-dimensional continuous distributions. Density functions
14.22 Exercises
14.23 Distributions of functions of two random variables
14.24 Exercises
14.25 Expectation and variance
14.26 Expectation of a function of a random variable
14.27 Exercises
14.28 Chebyshev's inequality
14.29 Laws of large numbers
14.30 The central limit theorem of the calculus of probabilities
14.31 Exercises
Suggested References
15. INTRODUCTION TO NUMERICAL ANALYSIS
15.1 Historical introduction
15.2 Approximations by polynomials
15.3 Polynomial approximation and normed linear spaces
15.4 Fundamental problems in polynomial approximation
15.5 Exercises
15.6 Interpolating polynomials
15.7 Equally spaced interpolation points
15.8 Error analysis in polynomial interpolation
15.9 Exercises
15.10 Newton's interpolation formula
15.11 Equally spaced interpolation points. The forward difference operator
15.12 Factorial polynomials
15.13 Exercises
15.14 A minimum problem relative to the max norm
15.15 Chebyshev polynomials
15.16 A minimal property of Chebyshev polynomials
15.17 Application to the error formula for interpolation
15.18 Exercises
15.19 Approximate integration. The trapezoidal rule
15.20 Simpson's rule
15.21 Exercises
15.22 The Euler summation formula
15.23 Exercises
Suggested References
Answers to exercises
1.5–1.13
1.17–2.8
2.12
2.16
2.20
2.21–3.17
4.4–4.8
4.10
5.5–5.11
5.15
5.20–6.3
6.9–6.15
6.16–6.24
7.4–7.12
7.15
7.17–7.24
8.3
8.5–8.9
8.14–8.17
8.22
8.24–9.8
9.13
9.15–10.5
10.9–10.20
11.9–11.15
11.18–11.22
11.25–11.28
11.34–12.4
12.6–12.10
12.13–13.4
13.7–13.11
13.14–13.18
13.20–13.23
14.4–14.8
14.12–14.16
14.18–14.24
14.27–15.5
15.9
15.13–15.21
Index
Tom M. Apostol

CALCULUS
VOLUME II

Multi-Variable Calculus and Linear Algebra, with Applications to Differential Equations and Probability

SECOND EDITION

John Wiley & Sons
New York · Chichester · Brisbane · Toronto · Singapore
CONSULTING EDITOR: George Springer, Indiana University

Copyright © 1969 by John Wiley & Sons, Inc. All rights reserved.

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, E-Mail: [email protected].

To order books or for customer service please call 1-(800)-CALL-WILEY (225-5945).

Printed and bound by the Hamilton Printing Company.

32 33 34 35 36 37 38 39 40
To Jane and Stephen
PREFACE

This book is a continuation of the author's Calculus, Volume I, Second Edition. The present volume has been written with the same underlying philosophy that prevailed in the first. Sound training in technique is combined with a strong theoretical development. Every effort has been made to convey the spirit of modern mathematics without undue emphasis on formalization. As in Volume I, historical remarks are included to give the student a sense of participation in the evolution of ideas.

The second volume is divided into three parts, entitled Linear Analysis, Nonlinear Analysis, and Special Topics. The last two chapters of Volume I have been repeated as the first two chapters of Volume II so that all the material on linear algebra will be complete in one volume.

Part 1 contains an introduction to linear algebra, including linear transformations, matrices, determinants, eigenvalues, and quadratic forms. Applications are given to analysis, in particular to the study of linear differential equations. Systems of differential equations are treated with the help of matrix calculus. Existence and uniqueness theorems are proved by Picard's method of successive approximations, which is also cast in the language of contraction operators.

Part 2 discusses the calculus of functions of several variables. Differential calculus is unified and simplified with the aid of linear algebra. It includes chain rules for scalar and vector fields, and applications to partial differential equations and extremum problems. Integral calculus includes line integrals, multiple integrals, and surface integrals, with applications to vector analysis. Here the treatment is along more or less classical lines and does not include a formal development of differential forms.

The special topics treated in Part 3 are Probability and Numerical Analysis. The material on probability is divided into two chapters, one dealing with finite or countably infinite sample spaces, the other with uncountable sample spaces. The use of the calculus is illustrated in the study of both one- and two-dimensional random variables. The last chapter contains an introduction to numerical analysis, the chief emphasis being on different kinds of polynomial approximation. Here again the ideas are unified by the notation and terminology of linear algebra. The book concludes with a treatment of approximate integration formulas, such as Simpson's rule, and a discussion of Euler's summation formula.

There is ample material in this volume for a full year's course meeting three or four times per week. It presupposes a knowledge of one-variable calculus as covered in most first-year calculus courses. The author has taught this material in a course with two lectures and two recitation periods per week, allowing about ten weeks for each part and omitting the starred sections.

This second volume has been planned so that many chapters can be omitted for a variety of shorter courses. For example, the last chapter of each of the three parts can be skipped without disrupting the continuity of the presentation. Part 1 by itself provides material for a combined course in linear algebra and ordinary differential equations. The individual instructor can choose topics to suit his needs and preferences by consulting the diagram on the next page, which shows the logical interdependence of the chapters.

Once again I acknowledge with pleasure the assistance of many friends and colleagues. In preparing the second edition I received valuable help from Professors Herbert S. Zuckerman of the University of Washington, and Basil Gordon of the University of California, Los Angeles, each of whom suggested a number of improvements. Thanks are also due to the staff of Blaisdell Publishing Company for their assistance and cooperation. As before, it gives me special pleasure to express my gratitude to my wife for the many ways in which she has contributed. In grateful acknowledgement I happily dedicate this book to her.

T. M. A.
Pasadena, California
September 16, 1968
Logical Interdependence of the Chapters

[Diagram: a chart showing which chapters depend on which. The chapters shown are: 1 Linear Spaces; 2 Linear Transformations and Matrices; 3 Determinants; 4 Eigenvalues and Eigenvectors; 5 Eigenvalues of Operators Acting on Euclidean Spaces; 6 Linear Differential Equations; 7 Systems of Differential Equations; 8 Differential Calculus of Scalar and Vector Fields; 9 Applications of the Differential Calculus; 10 Line Integrals; 11 Multiple Integrals; 12 Surface Integrals; 13 Set Functions and Elementary Probability; 14 Calculus of Probabilities; 15 Introduction to Numerical Analysis.]
14.17 Distributions 14.18
535
on more
541
variables
542
Exercises
of
14.19 Distributions
545
discrete distributions
14.20 Twodimensional
14.21 Twodimensional 14.22 Exercises
distributions.
continuous
Density
functions
546
548
14.23Distributions 14.24Exercises 14.25
543
variables
random
twodimensional
of two
functions
of
random variables
550
553
556
and variance
Expectation
14.26 Expectation of
of a
a function
random variable
559
14.27Exercises 14.28
562
inequality
Chebyshev's
14.29 Laws 14.30
560 of large
The central
564
numbers
of the
theorem
limit
calculus of
probabilities
566
14.31Exercises
568
569)
References)
Suggested
TO NUMERICAL
15. INTRODUCTION
15.1
Historical
15.2
Approximations
571
introduction by
572
polynomials
approximation and normed Fundamental problems in polynomial 15.5 Exercises
15.3
Equally spaced
15.8
Error
15.9
Exercises
15.12
analysis
Factorial
577 579
interpolation points
in polynomial
582
583
interpolation
interpolation spaced
588
formula
interpolation
points.
The forward difference
A minimum
operator
590
592
polynomials
15.13 Exercises 15.14
575
approximation
585
Newton's
15.11Equally
574
spaces
polynomials
Interpolating
15.7
15.10
linear
Polynomial
15.4
15.6
ANALYSIS)
593
problem relative
to the
max
norm
595)))
. Contents)
15.15
596
polynomials
Chebyshev
15.16 A minimal 15.17
XXi)
property Chebyshev Application to the error formula of
598
polynomials for
interpolation
15.18 Exercises
15.19 Approximate integration. 15.20 Simpson's rule
The
The
610
summation
Euler
15.23 Exercises
Index)
formula
613
618
Suggested References Answers
rule
605
15.21 Exercises
15.22
trapezoidal
599 600 602
to
exercises
621
622 665)))
Calculus)))
PART
LINEAR
1
ANALYSIS)))
1
LINEAR SPACES

1.1 Introduction

Throughout mathematics we encounter many examples of mathematical objects that can be added to each other and multiplied by real numbers. First of all, the real numbers themselves are such objects. Other examples are real-valued functions, the complex numbers, infinite series, vectors in n-space, and vector-valued functions. In this chapter we discuss a general mathematical concept, called a linear space, which includes all these examples and many others as special cases.

Briefly, a linear space is a set of elements of any kind on which certain operations (called addition and multiplication by numbers) can be performed. In defining a linear space, we do not specify the nature of the elements nor do we tell how the operations are to be performed on them. Instead, we require that the operations have certain properties which we take as axioms for a linear space. We turn now to a detailed description of these axioms.

1.2 The definition of a linear space

Let V denote a nonempty set of objects, called elements. The set V is called a linear space if it satisfies the following ten axioms, which we list in three groups.

Closure axioms

AXIOM 1. CLOSURE UNDER ADDITION. For every pair of elements x and y in V there corresponds a unique element in V called the sum of x and y, denoted by x + y.

AXIOM 2. CLOSURE UNDER MULTIPLICATION BY REAL NUMBERS. For every x in V and every real number a there corresponds an element in V called the product of a and x, denoted by ax.

Axioms for addition

AXIOM 3. COMMUTATIVE LAW. For all x and y in V, we have x + y = y + x.

AXIOM 4. ASSOCIATIVE LAW. For all x, y, and z in V, we have (x + y) + z = x + (y + z).
AXIOM 5. EXISTENCE OF ZERO ELEMENT. There is an element in V, denoted by 0, such that x + 0 = x for all x in V.

AXIOM 6. EXISTENCE OF NEGATIVES. For every x in V, the element (-1)x has the property x + (-1)x = 0.

Axioms for multiplication by numbers

AXIOM 7. ASSOCIATIVE LAW. For every x in V and all real numbers a and b, we have a(bx) = (ab)x.

AXIOM 8. DISTRIBUTIVE LAW FOR ADDITION IN V. For all x and y in V and all real a, we have a(x + y) = ax + ay.

AXIOM 9. DISTRIBUTIVE LAW FOR ADDITION OF NUMBERS. For all x in V and all real a and b, we have (a + b)x = ax + bx.

AXIOM 10. EXISTENCE OF IDENTITY. For every x in V, we have 1x = x.

Linear spaces, as defined above, are sometimes called real linear spaces to emphasize the fact that we are multiplying the elements of V by real numbers. If "real number" is replaced by "complex number" in Axioms 2, 7, 8, and 9, the resulting structure is called a complex linear space. Sometimes a linear space is referred to as a linear vector space or simply a vector space; the numbers used as multipliers are also called scalars. A real linear space has real numbers as scalars; a complex linear space has complex numbers as scalars. Although we shall deal primarily with examples of real linear spaces, all the theorems are valid for complex linear spaces as well. When we use the term linear space without further designation, it is to be understood that the space can be real or complex.

1.3 Examples of linear spaces

If we specify the set V and tell how to add its elements and how to multiply them by numbers, we get a concrete example of a linear space. The reader can easily verify that each of the following examples satisfies all the axioms for a real linear space.

EXAMPLE 1. Let V = R, the set of all real numbers, and let x + y and ax be ordinary addition and multiplication of real numbers.

EXAMPLE 2. Let V = C, the set of all complex numbers, define x + y to be ordinary addition of complex numbers, and define ax to be multiplication of the complex number x
by the real number a. Even though the elements of V are complex numbers, this is a real linear space because the scalars are real.

EXAMPLE 3. Let V = Vn, the vector space of all n-tuples of real numbers, with addition and multiplication by real numbers defined in the usual way in terms of components.

The next examples are function spaces. In each case the elements of V are real-valued functions, with addition and multiplication by real numbers defined in the usual way: (f + g)(x) = f(x) + g(x) and (af)(x) = af(x). The reader can verify that each set is a linear space.

EXAMPLE 6. The set of all polynomials.

EXAMPLE 7. The set of all polynomials of degree ≤ n, where n is fixed. (Whenever we consider this set it is understood that n is fixed.) The set of all polynomials of degree equal to n is not a linear space because the closure axioms are not satisfied; the sum of two polynomials of degree n need not have degree n.

EXAMPLE 8. The set of all functions continuous on a given interval. If the interval is [a, b], we denote this space by C(a, b).

EXAMPLE 9. The set of all functions differentiable at a given point.

EXAMPLE 10. The set of all functions integrable on a given interval.

EXAMPLE 11. The set of all functions f defined at 1 with f(1) = 0. The number 0 is essential in this example; if we replace 0 by a nonzero number, we violate the closure axioms.
1.13 Exercises

14. Let V be the set of all real functions f continuous on [0, +∞) and such that the integral ∫₀^∞ e⁻ᵗf²(t) dt converges. Define (f, g) = ∫₀^∞ e⁻ᵗf(t)g(t) dt.
(a) Prove that the integral for (f, g) converges absolutely for each pair of functions f and g in V. [Hint: Use the Cauchy-Schwarz inequality to estimate the integral ∫₀^∞ e⁻ᵗ|f(t)g(t)| dt.]
(b) Prove that V is a linear space with (f, g) as an inner product.
(c) Compute (f, g) if f(t) = e⁻ᵗ and g(t) = tⁿ, where n = 0, 1, 2, . . . .

15. In a complex Euclidean space, prove that the inner product has the following properties for all elements x, y, z and all complex a and b.
(a) (ax, by) = a b̄ (x, y).
(b) (x, ay + bz) = ā(x, y) + b̄(x, z).

16. Prove that the following identities are valid in every Euclidean space.
(a) ‖x + y‖² = ‖x‖² + ‖y‖² + (x, y) + (y, x).
(b) ‖x + y‖² − ‖x − y‖² = 2(x, y) + 2(y, x).
(c) ‖x + y‖² + ‖x − y‖² = 2‖x‖² + 2‖y‖².

17. Prove that the space of all complex-valued functions continuous on an interval [a, b] becomes a unitary space if we define an inner product by the formula

(f, g) = ∫ₐᵇ w(t)f(t)ḡ(t) dt,

where w is a fixed positive function, continuous on [a, b].

1.14 Construction of orthogonal sets. The Gram-Schmidt process

Every finite-dimensional linear space has a finite basis. If the space is Euclidean, we can always construct an orthogonal basis. This result will be deduced as a consequence of a general theorem whose proof shows how to construct orthogonal sets in any Euclidean space, finite or infinite dimensional. The construction is called the Gram-Schmidt orthogonalization process, in honor of J. P. Gram (1850-1916) and E. Schmidt (1876-1959).

THEOREM 1.13. ORTHOGONALIZATION THEOREM. Let x₁, x₂, . . . , be a finite or infinite sequence of elements in a Euclidean space V, and let L(x₁, . . . , x_k) denote the subspace spanned by the first k of these elements. Then there is a corresponding sequence of elements y₁, y₂, . . . , in V which has the following properties for each integer k:
(a) The element y_k is orthogonal to every element in the subspace L(y₁, . . . , y_{k-1}).
(b) The subspace spanned by y₁, . . . , y_k is the same as that spanned by x₁, . . . , x_k:

L(y₁, . . . , y_k) = L(x₁, . . . , x_k).

(c) The sequence y₁, y₂, . . . , is unique, except for scalar factors. That is, if y₁′, y₂′, . . . , is another sequence of elements in V satisfying properties (a) and (b) for all k, then for each k there is a scalar c_k such that y_k′ = c_k y_k.

Proof. We construct the elements y₁, y₂, . . . , by induction. To start the process, we take y₁ = x₁. Now assume we have constructed y₁, . . . , y_r so that (a) and (b) are satisfied when k = r. Then we define y_{r+1} by the equation

(1.14)  y_{r+1} = x_{r+1} − Σ_{i=1}^r aᵢyᵢ,
where the scalars a₁, . . . , a_r are to be determined. For j ≤ r, the inner product of y_{r+1} with y_j is given by

(y_{r+1}, y_j) = (x_{r+1}, y_j) − Σ_{i=1}^r aᵢ(yᵢ, y_j) = (x_{r+1}, y_j) − a_j(y_j, y_j),

since (yᵢ, y_j) = 0 if i ≠ j. If y_j ≠ 0, we can make y_{r+1} orthogonal to y_j by taking

(1.15)  a_j = (x_{r+1}, y_j) / (y_j, y_j).

If y_j = 0, then y_{r+1} is orthogonal to y_j for any choice of a_j, and in this case we choose a_j = 0. Thus, the element y_{r+1} is well defined and is orthogonal to each of the earlier elements y₁, . . . , y_r. Therefore, it is orthogonal to every element in the subspace

L(y₁, . . . , y_r).

This proves (a) when k = r + 1.

To prove (b) when k = r + 1, we must show that L(y₁, . . . , y_{r+1}) = L(x₁, . . . , x_{r+1}), given that L(y₁, . . . , y_r) = L(x₁, . . . , x_r). The first r elements y₁, . . . , y_r are in L(x₁, . . . , x_r), and hence they are in the larger subspace L(x₁, . . . , x_{r+1}). The new element y_{r+1} given by (1.14) is the difference of two elements in L(x₁, . . . , x_{r+1}), so it, too, is in L(x₁, . . . , x_{r+1}). This proves that

L(y₁, . . . , y_{r+1}) ⊆ L(x₁, . . . , x_{r+1}).

Equation (1.14) shows that x_{r+1} is the sum of two elements in L(y₁, . . . , y_{r+1}), so a similar argument gives the inclusion in the other direction:

L(x₁, . . . , x_{r+1}) ⊆ L(y₁, . . . , y_{r+1}).

This proves (b) when k = r + 1. Therefore both (a) and (b) are proved by induction on k.

Finally we prove (c) by induction on k. The case k = 1 is trivial. Therefore, assume (c) is true for k = r and consider the element y′_{r+1}. Because of (b), this element is in L(y₁, . . . , y_{r+1}), so we can write

y′_{r+1} = Σ_{i=1}^{r+1} cᵢyᵢ = z_r + c_{r+1}y_{r+1},

where z_r ∈ L(y₁, . . . , y_r). We wish to prove that z_r = 0. By property (a), both y′_{r+1} and c_{r+1}y_{r+1} are orthogonal to z_r. Therefore, their difference, z_r, is orthogonal to z_r. In other words, z_r is orthogonal to itself, so z_r = 0. This completes the proof of the orthogonalization theorem.
In the foregoing construction, suppose we have y_{r+1} = 0 for some r. Then (1.14) shows that x_{r+1} is a linear combination of y₁, . . . , y_r, and hence of x₁, . . . , x_r, so the elements x₁, . . . , x_{r+1} are dependent. In other words, if the first k elements x₁, . . . , x_k are independent, then the corresponding elements y₁, . . . , y_k are nonzero. In this case the coefficients aᵢ in (1.14) are given by (1.15), and the formulas defining y₁, . . . , y_k become

(1.16)  y₁ = x₁,  y_{r+1} = x_{r+1} − Σ_{i=1}^r ((x_{r+1}, yᵢ)/(yᵢ, yᵢ)) yᵢ  for r = 1, 2, . . . , k − 1.

These formulas describe the Gram-Schmidt process for constructing an orthogonal set of nonzero elements y₁, . . . , y_k which spans the same subspace as a given independent set x₁, . . . , x_k. In particular, if x₁, . . . , x_k is a basis for a finite-dimensional Euclidean space, then y₁, . . . , y_k is an orthogonal basis for the same space. We can also convert this to an orthonormal basis by normalizing each element yᵢ, that is, by dividing it by its norm. Therefore, as a corollary of Theorem 1.13 we have the following.

THEOREM 1.14. Every finite-dimensional Euclidean space has an orthonormal basis.

If x and y are elements in a Euclidean space, with y ≠ 0, the element

((x, y)/(y, y)) y

is called the projection of x along y. In the Gram-Schmidt process (1.16), we construct the element y_{r+1} by subtracting from x_{r+1} the projection of x_{r+1} along each of the earlier elements y₁, . . . , y_r. Figure 1.1 illustrates the construction geometrically in the vector space V₃.

[FIGURE 1.1 The Gram-Schmidt process in V₃. An orthogonal set {y₁, y₂, y₃} is constructed from a given independent set {x₁, x₂, x₃}; in the figure, y₂ = x₂ − cy₁ with c = (x₂, y₁)/(y₁, y₁), and y₃ = x₃ − a₁y₁ − a₂y₂ with aᵢ = (x₃, yᵢ)/(yᵢ, yᵢ).]
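The recursion (1.16) is easy to carry out mechanically. The following sketch (an illustration added here, not part of the text) applies it to vectors in V₄ with the usual dot product, using the convention a_j = 0 from the proof whenever y_j = 0; the two input vectors are the x₁, x₂ of Example 1 below.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(xs):
    # Formula (1.16): y_1 = x_1,
    # y_{r+1} = x_{r+1} - sum_i ((x_{r+1}, y_i)/(y_i, y_i)) y_i
    ys = []
    for x in xs:
        y = list(x)
        for prev in ys:
            d = dot(prev, prev)
            c = dot(x, prev) / d if d != 0 else 0  # a_j = 0 when y_j = 0
            y = [yi - c * pi for yi, pi in zip(y, prev)]
        ys.append(y)
    return ys

ys = gram_schmidt([(1, -1, 1, -1), (5, 1, 1, 1)])
print(ys[1])              # [4.0, 2.0, 0.0, 2.0]
print(dot(ys[0], ys[1]))  # 0.0
```

The second output confirms that the constructed vectors are orthogonal.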
EXAMPLE 1. In V₄, find an orthonormal basis for the subspace spanned by the three vectors x₁ = (1, -1, 1, -1), x₂ = (5, 1, 1, 1), and x₃ = (-3, -3, 1, -3).

Solution. Applying the Gram-Schmidt process, we find

y₁ = x₁ = (1, -1, 1, -1),

y₂ = x₂ − ((x₂, y₁)/(y₁, y₁)) y₁ = x₂ − y₁ = (4, 2, 0, 2),

y₃ = x₃ − ((x₃, y₁)/(y₁, y₁)) y₁ − ((x₃, y₂)/(y₂, y₂)) y₂ = x₃ − y₁ + y₂ = (0, 0, 0, 0).

Since y₃ = 0, the three vectors x₁, x₂, x₃ must be dependent. But since y₁ and y₂ are nonzero, the vectors x₁ and x₂ are independent. Therefore L(x₁, x₂, x₃) is a subspace of dimension 2. The set {y₁, y₂} is an orthogonal basis for this subspace. Dividing each of y₁ and y₂ by its norm we get an orthonormal basis consisting of the two vectors

y₁/‖y₁‖ = ½(1, -1, 1, -1)  and  y₂/‖y₂‖ = (1/√6)(2, 1, 0, 1).

EXAMPLE 2. The Legendre polynomials. In the linear space of all polynomials, with the inner product (x, y) = ∫₋₁¹ x(t)y(t) dt, consider the infinite sequence x₀, x₁, x₂, . . . , where xₙ(t) = tⁿ. When the orthogonalization theorem is applied to this sequence it yields another sequence of polynomials y₀, y₁, y₂, . . . , first encountered by the French mathematician A. M. Legendre (1752-1833) in his work on potential theory. The first few polynomials are easily calculated by the Gram-Schmidt process. First of all, we have y₀(t) = x₀(t) = 1. Since

(y₀, y₀) = ∫₋₁¹ dt = 2  and  (x₁, y₀) = ∫₋₁¹ t dt = 0,

we find that

y₁(t) = x₁(t) − ((x₁, y₀)/(y₀, y₀)) y₀(t) = x₁(t) = t.

Next, we use the relations

(x₂, y₀) = ∫₋₁¹ t² dt = 2/3,  (x₂, y₁) = ∫₋₁¹ t³ dt = 0,  (y₁, y₁) = ∫₋₁¹ t² dt = 2/3,

to obtain

y₂(t) = x₂(t) − ((x₂, y₀)/(y₀, y₀)) y₀(t) − ((x₂, y₁)/(y₁, y₁)) y₁(t) = t² − 1/3.

Similarly, we find that

y₃(t) = t³ − (3/5)t,  y₄(t) = t⁴ − (6/7)t² + 3/35,  y₅(t) = t⁵ − (10/9)t³ + (5/21)t.
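The polynomials y₂ and y₃ computed above can be verified mechanically. The following sketch (an added illustration, not part of the text) represents a polynomial by its list of coefficients and evaluates the inner product (x, y) = ∫₋₁¹ x(t)y(t) dt exactly with rational arithmetic.

```python
from fractions import Fraction

def inner(p, q):
    # (p, q) = integral from -1 to 1 of p(t) q(t) dt,
    # where p[i] holds the coefficient of t^i; odd powers integrate to 0.
    s = Fraction(0)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if (i + j) % 2 == 0:
                s += Fraction(2, i + j + 1) * a * b
    return s

def subtract(p, q, c):
    # p - c*q, padding the shorter coefficient list with zeros
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [a - c * b for a, b in zip(p, q)]

# Gram-Schmidt applied to 1, t, t^2, t^3, as in Example 2
xs = [[Fraction(int(i == k)) for i in range(k + 1)] for k in range(4)]
ys = []
for x in xs:
    y = x
    for prev in ys:
        y = subtract(y, prev, inner(x, prev) / inner(prev, prev))
    ys.append(y)

print(ys[2])  # coefficients of y2(t) = t^2 - 1/3
print(ys[3])  # coefficients of y3(t) = t^3 - (3/5)t
```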
We shall encounter these polynomials again in Chapter 6 in our further study of differential equations, where we shall prove that

yₙ(t) = (n!/(2n)!) dⁿ/dtⁿ (t² − 1)ⁿ.

The polynomials Pₙ given by

Pₙ(t) = ((2n)!/(2ⁿ(n!)²)) yₙ(t) = (1/(2ⁿn!)) dⁿ/dtⁿ (t² − 1)ⁿ

are known as the Legendre polynomials. The polynomials in the corresponding orthonormal sequence φ₀, φ₁, φ₂, . . . , given by φₙ = yₙ/‖yₙ‖, are called the normalized Legendre polynomials. From the formulas for y₀, . . . , y₅ given above, we find that

φ₀(t) = √(1/2),  φ₁(t) = √(3/2) t,  φ₂(t) = ½√(5/2)(3t² − 1),  φ₃(t) = ½√(7/2)(5t³ − 3t),

φ₄(t) = ⅛√(9/2)(35t⁴ − 30t² + 3),  φ₅(t) = ⅛√(11/2)(63t⁵ − 70t³ + 15t).

1.15 Orthogonal complements. Projections

Let V be a Euclidean space and let S be a finite-dimensional subspace. We wish to consider the following type of approximation problem: Given an element x in V, to determine an element in S whose distance from x is as small as possible. The distance between two elements x and y is defined to be the norm ‖x − y‖.

Before discussing this problem in its general form, we consider a special case, illustrated in Figure 1.2. Here V is the vector space V₃ and S is a two-dimensional subspace, a plane through the origin. Given x in V, the problem is to find, in the plane S, that point s nearest to x.

If x ∈ S, then clearly s = x is the solution. If x is not in S, then the nearest point s is obtained by dropping a perpendicular from x to the plane. This simple example suggests an approach to the general approximation problem and motivates the discussion that follows.

DEFINITION. Let S be a subset of a Euclidean space V. An element in V is said to be orthogonal to S if it is orthogonal to every element of S. The set of all elements orthogonal to S is denoted by S⊥ and is called "S perpendicular."

It is a simple exercise to verify that S⊥ is a subspace of V, whether or not S itself is one. In case S is a subspace, then S⊥ is called the orthogonal complement of S.

EXAMPLE. If S is a plane through the origin, as shown in Figure 1.2, then S⊥ is a line through the origin perpendicular to this plane. This example also gives a geometric interpretation for the next theorem.
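For the example above, the line S⊥ can be computed concretely in V₃: if S is spanned by two independent vectors a and b, their cross product is orthogonal to both and hence spans S⊥. A small sketch (the particular vectors are arbitrary illustrations, not from the text):

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def cross(u, v):
    # a vector orthogonal to both u and v in V3
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

a, b = (1, 0, 2), (0, 1, -1)    # S = plane through O spanned by a and b
n = cross(a, b)                 # spans the orthogonal complement of S
print(n, dot(n, a), dot(n, b))  # (-2, 1, 1) 0 0
```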
[FIGURE 1.2 Geometric interpretation of the orthogonal decomposition theorem in V₃: x is the sum of an element s in the plane S and an element s⊥ on the line S⊥.]

THEOREM 1.15. ORTHOGONAL DECOMPOSITION THEOREM. Let V be a Euclidean space and let S be a finite-dimensional subspace of V. Then every element x in V can be represented uniquely as a sum of two elements, one in S and one in S⊥. That is, we have

(1.17)  x = s + s⊥,  where s ∈ S and s⊥ ∈ S⊥.

Moreover, the norm of x is given by the Pythagorean formula

(1.18)  ‖x‖² = ‖s‖² + ‖s⊥‖².

Proof. First we prove that an orthogonal decomposition (1.17) actually exists. Since S is finite-dimensional, it has a finite orthonormal basis, say {e₁, . . . , eₙ}. Given x, define the elements s and s⊥ as follows:

(1.19)  s = Σ_{i=1}^n (x, eᵢ)eᵢ,  s⊥ = x − s.

Note that each term (x, eᵢ)eᵢ is the projection of x along eᵢ. The element s is the sum of the projections of x along each basis element. Since s is a linear combination of the basis elements, s lies in S. The definition of s⊥ shows that Equation (1.17) holds. To prove that s⊥ lies in S⊥, we consider the inner product of s⊥ and any basis element e_j. We have

(s⊥, e_j) = (x − s, e_j) = (x, e_j) − (s, e_j).

But from (1.19), we find that (s, e_j) = (x, e_j), so s⊥ is orthogonal to e_j. Therefore s⊥ is orthogonal to every element in S, which means that s⊥ ∈ S⊥.

Next we prove that the orthogonal decomposition (1.17) is unique. Suppose that x has two such representations, say

(1.20)  x = s + s⊥  and  x = t + t⊥,
where s and t are in S, and s⊥ and t⊥ are in S⊥. We wish to prove that s = t and s⊥ = t⊥. From (1.20), we have s − t = t⊥ − s⊥, so we need only prove that s − t = 0. But s − t ∈ S and t⊥ − s⊥ ∈ S⊥. Since s − t is both orthogonal to t⊥ − s⊥ and equal to t⊥ − s⊥, this element is orthogonal to itself, so we must have s − t = 0.

Finally, we prove that the norm of x is given by the Pythagorean formula. We have

‖x‖² = (x, x) = (s + s⊥, s + s⊥) = (s, s) + (s⊥, s⊥),

the remaining terms being zero since s and s⊥ are orthogonal. This proves (1.18).

DEFINITION. Let S be a finite-dimensional subspace of a Euclidean space V, and let {e₁, . . . , eₙ} be an orthonormal basis for S. If x ∈ V, the element s defined by the equation

s = Σ_{i=1}^n (x, eᵢ)eᵢ

is called the projection of x on the subspace S.

1.16 Best approximation of elements in a Euclidean space by elements in a finite-dimensional subspace

We prove next that the projection of x on S solves the approximation problem stated at the beginning of Section 1.15.

THEOREM 1.16. APPROXIMATION THEOREM. Let S be a finite-dimensional subspace of a Euclidean space V, and let x be any element of V. Then the projection of x on S is nearer to x than any other element of S. That is, if s is the projection of x on S, we have

‖x − s‖ ≤ ‖x − t‖

for all t in S; the equality sign holds if and only if t = s.

Proof. For any t in S, we can write

x − t = (x − s) + (s − t).

Since x − s ∈ S⊥ and s − t ∈ S, this is an orthogonal decomposition of x − t, so its norm is given by the Pythagorean formula

‖x − t‖² = ‖x − s‖² + ‖s − t‖².

But ‖s − t‖² ≥ 0, so we have ‖x − t‖² ≥ ‖x − s‖², with equality holding if and only if s = t. This completes the proof.
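The approximation theorem can be observed numerically. In the sketch below (an added illustration; the particular vectors and the grid of competitors are arbitrary), x is projected on the subspace S spanned by two orthonormal vectors via formula (1.19), and a grid of other elements of S is checked to be no closer to x:

```python
import itertools
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

e1, e2 = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]  # orthonormal basis of S
x = [3.0, -2.0, 5.0]

# Projection s = (x, e1) e1 + (x, e2) e2, formula (1.19)
s = [dot(x, e1) * a + dot(x, e2) * b for a, b in zip(e1, e2)]
best = dist(x, s)

for c1, c2 in itertools.product([-2.0, -1.0, 0.0, 1.0, 2.0, 3.0], repeat=2):
    t = [c1 * a + c2 * b for a, b in zip(e1, e2)]
    assert dist(x, t) >= best   # no element of S on the grid is closer than s
print(s, best)                  # [3.0, -2.0, 0.0] 5.0
```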
EXAMPLE 1. Approximation of continuous functions on [0, 2π] by trigonometric polynomials. Let V = C(0, 2π), the linear space of all real functions continuous on the interval [0, 2π], and define an inner product by the equation (f, g) = ∫₀^{2π} f(x)g(x) dx. In Section 1.12 we exhibited an orthonormal set of trigonometric functions φ₀, φ₁, φ₂, . . . , where

(1.21)  φ₀(x) = 1/√(2π),  φ_{2k-1}(x) = (cos kx)/√π,  φ_{2k}(x) = (sin kx)/√π,  for k ≥ 1.

The 2n + 1 elements φ₀, φ₁, . . . , φ_{2n} span a subspace S of dimension 2n + 1. The elements of S are called trigonometric polynomials.

If f ∈ C(0, 2π), let fₙ denote the projection of f on the subspace S. Then we have

(1.22)  fₙ = Σ_{k=0}^{2n} (f, φₖ)φₖ,  where (f, φₖ) = ∫₀^{2π} f(x)φₖ(x) dx.

The numbers (f, φₖ) are called Fourier coefficients of f. Using the formulas in (1.21), we can rewrite (1.22) in the form

(1.23)  fₙ(x) = ½a₀ + Σ_{k=1}^n (aₖ cos kx + bₖ sin kx),

where

aₖ = (1/π) ∫₀^{2π} f(x) cos kx dx,  bₖ = (1/π) ∫₀^{2π} f(x) sin kx dx,

for k = 0, 1, 2, . . . , n. The approximation theorem tells us that the trigonometric polynomial in (1.23) approximates f better than any other trigonometric polynomial in S, in the sense that the norm ‖f − fₙ‖ is as small as possible.
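The coefficients in (1.23) are easily evaluated numerically. The sketch below (an added illustration; the quadrature step count is an arbitrary choice) computes aₖ and bₖ for f(x) = x, for which integration by parts gives a₀ = 2π, aₖ = 0 and bₖ = −2/k for k ≥ 1:

```python
import math

def simpson(g, a, b, n=2000):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

f = lambda x: x
L = 2 * math.pi

def a_coef(k):
    return simpson(lambda x: f(x) * math.cos(k * x), 0.0, L) / math.pi

def b_coef(k):
    return simpson(lambda x: f(x) * math.sin(k * x), 0.0, L) / math.pi

print(a_coef(0))             # approximately 2*pi
print(a_coef(1), b_coef(1))  # approximately 0 and -2
print(b_coef(3))             # approximately -2/3
```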
EXAMPLE 2. Approximation of continuous functions on [-1, 1] by polynomials of degree ≤ n. Let V = C(-1, 1), the space of real continuous functions on [-1, 1], and let (f, g) = ∫₋₁¹ f(x)g(x) dx. The n + 1 normalized Legendre polynomials φ₀, φ₁, . . . , φₙ, introduced in Section 1.14, span a subspace S of dimension n + 1 consisting of all polynomials of degree ≤ n. If f ∈ C(-1, 1), let fₙ denote the projection of f on S. Then we have

fₙ = Σ_{k=0}^n (f, φₖ)φₖ,  where (f, φₖ) = ∫₋₁¹ f(t)φₖ(t) dt.

This is the polynomial of degree ≤ n for which the norm ‖f − fₙ‖ is smallest.

1.17 Exercises

4. In the linear space of all real polynomials, with inner product (x, y) = ∫₀¹ x(t)y(t) dt, let xₙ(t) = tⁿ. Prove that the functions y₀(t) = 1, y₁(t) = √3(2t − 1), y₂(t) = √5(6t² − 6t + 1) form an orthonormal set spanning the same subspace as {x₀, x₁, x₂}.

5. Let V be the linear space of all real functions f continuous on [0, +∞) and such that the integral ∫₀^∞ e⁻ᵗf²(t) dt converges. Define (f, g) = ∫₀^∞ e⁻ᵗf(t)g(t) dt, and let y₀, y₁, y₂, . . . , be the set obtained by applying the Gram-Schmidt process to x₀, x₁, x₂, . . . , where xₙ(t) = tⁿ for n ≥ 0. Prove that y₀(t) = 1, y₁(t) = t − 1, y₂(t) = t² − 4t + 2, y₃(t) = t³ − 9t² + 18t − 6.

6. In the real linear space C(1, 3) with inner product (f, g) = ∫₁³ f(x)g(x) dx, let f(x) = 1/x and show that the constant polynomial g nearest to f is g = ½ log 3. Compute ‖g − f‖² for this g.

7. In the real linear space C(0, 2) with inner product (f, g) = ∫₀² f(x)g(x) dx, let f(x) = eˣ and show that the constant polynomial g nearest to f is g = ½(e² − 1). Compute ‖g − f‖² for this g.

8. In the real linear space C(-1, 1) with inner product (f, g) = ∫₋₁¹ f(x)g(x) dx, let f(x) = eˣ and find the linear polynomial g nearest to f. Compute ‖g − f‖² for this g.

9. In the real linear space C(0, 2π) with inner product (f, g) = ∫₀^{2π} f(x)g(x) dx, let f(x) = x. In the subspace spanned by u₀(x) = 1, u₁(x) = cos x, u₂(x) = sin x, find the trigonometric polynomial nearest to f.

10. In the linear space V of Exercise 5, let f(x) = e⁻ˣ and find the linear polynomial that is nearest to f.
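In Exercises 6 and 7 the nearest constant can be found by minimizing ‖f − c‖² = ∫(f − c)² dx; setting the derivative with respect to c equal to zero gives c = (mean value of f over the interval). A quick numerical check of the answer stated in Exercise 7 (an added sketch, not part of the text):

```python
import math

def simpson(g, a, b, n=1000):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

# Best constant approximation to f(x) = e^x on [0, 2] in the L2 norm:
# c = (1/2) * integral of e^x over [0, 2] = (e^2 - 1)/2
c = simpson(math.exp, 0.0, 2.0) / 2.0
print(c, (math.exp(2) - 1) / 2)  # both approximately 3.1945
```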
2
LINEAR TRANSFORMATIONS AND MATRICES

2.1 Linear transformations
One of the ultimate goals of domains and ranges are subsetsof
linear
study of functions
a comprehensive
is
analysis
spaces.
This chapter treats the simplest examples, branches of mathematics. Propertiesof more general tions, which them tions are often obtained by approximating by linear transformations. or operators. in all occur
mappings,
First we introduce somenotation
Vand
W
two
be
and
used to
each x
For
image of A
that
indicate
V, the
in
x onto
maps
element
T(x). If A T and
under
T is
a
T(x)
is any
Now
we
W is
in
of
is denoted
T, and
set of all
the
The
T(A).
by
whose valuesare in
the image of x under images T(x) for x in image of the domain V, T(
called
V,
is Vand
domain
that Vand Ware linear spaceshaving as follows.) transformation
assume
a linear
functions. Let
W)
whose
function
subset
+
V
of T.
define
transforma
The symbol)
sets.
T: will be
arbitrary
concerning
terminology
whose
are called transformations, called linear transforma
Such functions
the
same
set of
T: V + W is If V and Ware linear spaces, a function two W it the V into has if of formation properties: following for all x and y in V, (a) T(x + y) = T(x) + T(y) for all in V and all scalars c.) (b) T(cx) = cT(x) DEFINITION.
W.
T we say that A is called the V), is the range
scalars, and
called a
linear
we
trans
x
These properties are verbalized by saying The two properties can be combined scalars. T(ax for
all x,y
in Vand all
+
scalars a and b.
into
by) = By
aT(x) +
=
any
n elements
Xl' . . . , X n
in
V and
one
formula
aiT(x
multiplication
which states
by
that)
bT(y))
we also
induction,
have the more
general relation)
i ))
i\037l
TC\037aiXi) for
addition and
T preserves
that
any
n scalars
aI, .
. . , an
.)
31)))
Linear
32)
The reader can easily
The identity transfornlation. is called the identity
EXAMPLE 1. each
x in V,
EXAMPLE
2.
for
element of EXAMPLE
3. V.
T:
transformation
The
V
is denoted
and
transformation
transformations.)
+ V,
where
I or by
by
The transformation T: V + V which =ero transformation. and is denoted transformation the zero is called 0 by o.)
The
V onto
for all x in
examples are linear
the following
that
verify
and matrices)
transformations
by afixed
Multiplication When c =
1,
scalar c.
Herewe
identity
transformation.
is the
this
V +
T:
have
T(x) = Iv.)
maps each
T(x) = cx is the zero
V, where
When c
x
= 0, it
transformation.)
i = =
x
vector
Linear
4.
EXAMPLE
where
W =
Let V = Vn and 1, 2, . . . , n, define
equations.
1, 2, . . . , m and (Xl' . . . , xn ) in
k =
Vn
the
onto
y =
vector
Given
V m. Vn
T:
+
Vm
(YI, . . . , Ym)
,
T maps each to the
V m according
in
a ik
numbers
real
mn
as follows:
equations) n
=
Yi
5. Inner
EXAMPLE
z in
element
fixed
of x
product
'with
T:
V
a fixed + R
i =
for
k
aikx
1, 2, . . . , m
Let element. as follows: If
be
V
xE
.)
a real
V,
then
Euclidean space. T(x) = (x, z), the
For a inner
z.)
with
subspace. Let
Projection on a
EXAMPLE 6. dimensional
product V, define
! k=l)
subspace
T:
Define
V.
of
V
a Euclidean
be
V
as follows:
+ S
space and let
If x E
then
V,
S be a finiteT(x)
is the
projection of x on S.) operator. Let V be the linear space of all real functions each The linear transformation which on an open interval (a, b). maps and is denoted by V onto its derivativef' is called the differentiation functionfin operator of D. we have D: V + W, where D (I) = f' for each f in V. The space W consists Thus, all derivatives f'.) EXAMPLE
f
7.
The differentiation
differentiable
8.
EXAMPLE
The
on an
continuous
interval [a,
b]. Iff
g(x)
This transformation 2.2
Null
In
space
this
section,
THEOREM the
zero
2.1.
element
T is
and
Let
operator.
integration
called
=
E
V,
f' J(t)
V
define
be
if
dt)
the integration
linear
the
g = T(f) to
a
1. Prove that Tis linear and describe the null space and range of T. 29. Let V denote the linear space of all real functions continuous on the interval [ 7T,7T]. Let S be that subset of V consisting of all f satisfying the three equations) Let
27.
f(t)
f\"
dt = 0,)
f\" f(t)
that S is a subspaceof V. Prove that S contains the functions f(x) Prove that S is infinitedimensional. Let T: V + V be the linear transformation
dt
=
0,)
f\"
sin t dt
f(t)
=
O.)
Prove
(a)
(b) (c)
g(x)
30.
cost
=
f\" {1
= cos
nx
and
= sin
f(x)
as follows
defined
+ cos (x

t)}f(t)
: Iff
for
nx
E
V,g
each =
n =
2, 3, . . . .
T(f) means
that)
dt \302\267)
that T( V), the range of T, is finitedimensional and find a basis for T( V). the null space of T. that such an f (f) Find all real c \037 0 and all nonzero f in V such that T(f) = cf. (Note lies in the range of T.) of a linear space V into a linear space W. If V is Let T: V + W be a linear transformation infinitedimensional, prove that at least one of T( V) or N(T) is infinitedimensional. (d)
Prove
(e)
Determine
dim N(T) = k, dim T( V) = r, let e 1 , . . . , e k be a basis for N(T) and , el'...' ek ek+l' . . . , ek + n be independent elements in V, where n > r. The elements Use this fact to obtain a T(ek+l)'...' T(ek+n) are dependent since n > r. Assume
[Hint:
let
contradiction.
2.5
])
operations
Algebraic
by
DEFINITION.
and
values
with
product
cT by
the
x
in
V.)))
in a given linear space W can be addedto eachother in Waccording to the following definition.)
Let
S: V +
in a
linear space
the
transformations
lie
scalars
T:
Wand W.
If
V
+
be
W
c is any
two functions
scalar
in
W,
a
with
we define
T)(x) =
Sex) +
T(x),)
(cT)(x)
=
common
the sum S
equations)
(S +
(2.4))
for all
values
whose
Functions
be multiplied
on linear
cT(x))
and
can
domain
+ T and the
V
are especially
We
scalars as into W.
this
In
W.
on linear
operations
Algebraic
interested in the case we denote
also a linear set of all linear
where V is 2 (V, W) the
case by
37)
transformations)
the same
having transformations
space
of V
in 2( V, W), it is an easy exerciseto verify If Sand Tare two linear transformations that in 2(V, With T and cT are also linear transformations this is true. W). More than the operations just defined, the set 2( V, W) itself becomesa new linear The zero space. T transformation serves as the zero element of this space, and the transformation (1) that all ten axioms for a linear is the negative of T. It is a straightforward matter to verify are satisfied. Therefore, we have the following.) space S +
THEOREM 2.4. The set ℒ(V, W) of all linear transformations of V into W is a linear space with the operations of addition and multiplication by scalars defined as in (2.4).

A more interesting algebraic operation on linear transformations is composition or multiplication of transformations. This operation makes no use of the algebraic structure of a linear space and can be defined quite generally as follows.

DEFINITION. Given sets U, V, W. Let T: U → V be a function with domain U and values in V, and let S: V → W be another function with domain V and values in W. Then the composition ST is the function ST: U → W defined by the equation

    (ST)(x) = S[T(x)]    for every x in U.

Thus, to map x by ST, we first map x by T and then map T(x) by S. This is illustrated in Figure 2.1.

FIGURE 2.1  Illustrating the composition of two transformations.
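The definition above can be sketched directly in code. This is an illustrative sketch, not from the text; the two sample maps are chosen arbitrarily.

```python
# Illustrative sketch (not from the text): transformations as Python functions,
# with composition (ST)(x) = S[T(x)] realized by nested application.
def compose(S, T):
    """Return the composition ST defined by (ST)(x) = S[T(x)]."""
    return lambda x: S(T(x))

# Two sample maps on pairs (x, y), chosen arbitrarily for illustration:
T = lambda p: (p[0] + p[1], p[1])    # T(x, y) = (x + y, y)
S = lambda p: (2 * p[0], -p[1])      # S(x, y) = (2x, -y)

ST = compose(S, T)   # first apply T, then S
TS = compose(T, S)   # first apply S, then T

# ST(1, 1) = S(2, 1) = (4, -1), while TS(1, 1) = T(2, -1) = (1, -1),
# so the two orders of composition disagree in general.
```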
Composition of real-valued functions has been encountered repeatedly in our study of calculus, and we have seen that the operation is, in general, not commutative. However, as in the case of real-valued functions, composition does satisfy an associative law.

THEOREM 2.5. If T: U → V, S: V → W, and R: W → X are three functions, then we have

    R(ST) = (RS)T.
Proof. Both functions R(ST) and (RS)T have domain U and values in X. For each x in U, we have

    [R(ST)](x) = R[(ST)(x)] = R[S[T(x)]]    and    [(RS)T](x) = (RS)[T(x)] = R[S[T(x)]],

which proves that R(ST) = (RS)T.

DEFINITION. Let T: V → V be a function which maps V into itself. We define integral powers of T inductively as follows:

    T^0 = I,    T^n = TT^{n-1}    for n ≥ 1.

Here I is the identity transformation. The reader may verify that the associative law implies the law of exponents T^m T^n = T^{m+n} for all nonnegative integers m and n.

The next theorem shows that the composition of linear transformations is again linear.

THEOREM 2.6. If U, V, W are linear spaces with the same scalars, and if T: U → V and S: V → W are linear transformations, then the composition ST: U → W is linear.

Proof. For all x, y in U and all scalars a and b, we have

    (ST)(ax + by) = S[T(ax + by)] = S[aT(x) + bT(y)] = aST(x) + bST(y).

Composition can be combined with the algebraic operations of addition and multiplication by scalars in ℒ(V, W) to give us the following.

THEOREM 2.7. Let U, V, W be linear spaces with the same scalars, assume S and T are in ℒ(V, W), and let c be any scalar.
(a) For any function R with values in V, we have

    (S + T)R = SR + TR    and    (cS)R = c(SR).

(b) For any linear transformation R: W → U, we have

    R(S + T) = RS + RT    and    R(cS) = c(RS).

The proof is a straightforward application of the definition of composition and is left as an exercise.
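The inductive definition of powers can be checked numerically. This is a sketch under our own conventions (maps as Python functions, vectors as tuples), not part of the text.

```python
# Sketch of integral powers: T^0 = I, T^n = T(T^{n-1}).
def power(T, n):
    """Return T^n under composition, with T^0 the identity."""
    if n == 0:
        return lambda x: x
    return lambda x: T(power(T, n - 1)(x))

# A sample map on pairs, chosen arbitrarily: T(x, y) = (x + y, y).
T = lambda p: (p[0] + p[1], p[1])

# Law of exponents T^m T^n = T^{m+n}, checked at one point:
m, n, p = 2, 3, (1, 2)
lhs = power(T, m)(power(T, n)(p))   # apply T^n, then T^m
rhs = power(T, m + n)(p)            # apply T^{m+n} directly
# Each application of T adds y to x, so both sides equal (1 + 5*2, 2) = (11, 2).
```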
2.6 Inverses

In our study of real-valued functions we learned how to construct new functions by inversion of monotonic functions. Now we wish to extend the process of inversion to a more general class of functions.

Given a function T, our goal is to find, if possible, another function whose composition with T is the identity transformation. Since composition is in general not commutative, we have to distinguish between ST and TS. Therefore we introduce two kinds of inverses which we call left and right inverses.

DEFINITION. Given two sets V and W and a function T: V → W. A function S: T(V) → V is called a left inverse of T if S[T(x)] = x for all x in V, that is, if

    ST = I_V,

where I_V is the identity transformation on V. A function R: T(V) → V is called a right inverse of T if T[R(y)] = y for all y in T(V), that is, if

    TR = I_{T(V)},

where I_{T(V)} is the identity transformation on T(V).

EXAMPLE. A function with no left inverse but with two right inverses. Let V = {1, 2} and let W = {0}. Define T: V → W as follows: T(1) = T(2) = 0. This function has two right inverses R: W → V and R': W → V given by

    R(0) = 1,    R'(0) = 2.

It cannot have a left inverse S since this would require

    1 = S[T(1)] = S(0)    and    2 = S[T(2)] = S(0).

This simple example shows that left inverses need not exist and that right inverses need not be unique.
Every function T: V → W has at least one right inverse. In fact, each y in T(V) has the form y = T(x) for at least one x in V. If we select one such x and define R(y) = x, then T[R(y)] = T(x) = y for each y in T(V), so R is a right inverse. Nonuniqueness may occur because there may be more than one x in V which maps onto a given y in T(V). We shall prove presently (in Theorem 2.9) that if each y in T(V) is the image of exactly one x in V, then right inverses are unique.

First we prove that if a left inverse exists it is unique and, at the same time, is a right inverse.
THEOREM 2.8. A function T: V → W can have at most one left inverse. If T has a left inverse S, then S is also a right inverse.

Proof. Assume T has two left inverses, S: T(V) → V and S': T(V) → V. Choose any y in T(V). We shall prove that S(y) = S'(y). Now y = T(x) for some x in V, so we have

    S[T(x)] = x    and    S'[T(x)] = x,

since both S and S' are left inverses. Therefore S(y) = x and S'(y) = x, so S(y) = S'(y) for all y in T(V). Therefore S = S', which proves that left inverses are unique.

Now we prove that every left inverse S is also a right inverse. Choose any element y in T(V). We shall prove that T[S(y)] = y. Since y ∈ T(V), we have y = T(x) for some x in V. But S is a left inverse, so

    x = S[T(x)] = S(y).

Applying T, we get T[S(y)] = T(x) = y, which completes the proof.
The next theorem characterizes all functions having left inverses.

THEOREM 2.9. A function T: V → W has a left inverse if and only if T maps distinct elements of V onto distinct elements of W; that is, if and only if, for all x and y in V,

(2.5)    x ≠ y    implies    T(x) ≠ T(y).

Note: Condition (2.5) is equivalent to the statement

(2.6)    T(x) = T(y)    implies    x = y.

A function T satisfying (2.5) or (2.6) for all x and y in V is said to be one-to-one on V.

Proof. Assume T has a left inverse S, and assume that T(x) = T(y). We wish to prove that x = y. Applying S, we find S[T(x)] = S[T(y)]. Since S[T(x)] = x and S[T(y)] = y, this implies x = y. This proves that a function with a left inverse is one-to-one on its domain.

Now we prove the converse. Assume T is one-to-one on V. We shall exhibit a function S: T(V) → V which is a left inverse of T. If y ∈ T(V), then y = T(x) for some x in V. By (2.6), there is exactly one x in V for which y = T(x). Define S(y) to be this x. That is, we define S on T(V) as follows:

    S(y) = x    means that    T(x) = y.

Then we have S[T(x)] = x for each x in V, so ST = I_V. Therefore, the function S so defined is a left inverse of T.

DEFINITION. Let T: V → W be one-to-one on V. The unique left inverse of T (which we know is also a right inverse) is denoted by T^{-1}. We say that T is invertible, and we call T^{-1} the inverse of T.

The results of this section refer to arbitrary functions. Now we apply these ideas to linear transformations.

2.7 One-to-one linear transformations
In this section, V and W denote linear spaces with the same scalars, and T: V → W denotes a linear transformation in ℒ(V, W). The linearity of T enables us to express the one-to-one property in several equivalent forms.

THEOREM 2.10. Let T: V → W be a linear transformation in ℒ(V, W). Then the following statements are equivalent.
(a) T is one-to-one on V.
(b) T is invertible and its inverse T^{-1}: T(V) → V is linear.
(c) For all x in V, T(x) = 0 implies x = 0. That is, the null space N(T) contains only the zero element of V.

Proof. We shall prove that (a) implies (b), (b) implies (c), and (c) implies (a). First assume (a) holds. Then T has an inverse (by Theorem 2.9), and we must show that T^{-1} is linear. Take any two elements u and v in T(V). Then u = T(x) and v = T(y) for some x and y in V. For any scalars a and b, we have

    au + bv = aT(x) + bT(y) = T(ax + by),

since T is linear. Hence, applying T^{-1}, we have

    T^{-1}(au + bv) = ax + by = aT^{-1}(u) + bT^{-1}(v),

so T^{-1} is linear. Therefore (a) implies (b).

Next assume that (b) holds. Take any x in V for which T(x) = 0. Applying T^{-1}, we find that x = T^{-1}(0) = 0, since T^{-1} is linear. Therefore, (b) implies (c).

Finally, assume (c) holds. Take any two elements u and v in V with T(u) = T(v). By linearity, we have T(u - v) = T(u) - T(v) = 0, so u - v = 0. Therefore, T is one-to-one on V, and the proof of the theorem is complete.

When V is finite-dimensional, the one-to-one property can be formulated in terms of independence and dimensionality, as indicated in the next theorem.

THEOREM 2.11. Let T: V → W be a linear transformation in ℒ(V, W) and assume that V is finite-dimensional, say dim V = n. Then the following statements are equivalent.
(a) T is one-to-one on V.
(b) If e_1, ..., e_p are independent elements in V, then T(e_1), ..., T(e_p) are independent elements in T(V).
(c) dim T(V) = n.
(d) If {e_1, ..., e_n} is a basis for V, then {T(e_1), ..., T(e_n)} is a basis for T(V).

Proof. We shall prove that (a) implies (b), (b) implies (c), (c) implies (d), and (d) implies (a). Assume (a) holds. Let e_1, ..., e_p be independent elements of V and consider the
elements T(e_1), ..., T(e_p) in T(V). Suppose that

    Σ_{i=1}^{p} c_i T(e_i) = 0

for certain scalars c_1, ..., c_p. By linearity, we find that

    T(Σ_{i=1}^{p} c_i e_i) = 0,    and hence    Σ_{i=1}^{p} c_i e_i = 0,

since T is one-to-one. But e_1, ..., e_p are independent, so c_1 = ··· = c_p = 0. Therefore (a) implies (b).

Now assume (b) holds. Let {e_1, ..., e_n} be a basis for V. By (b), the n elements T(e_1), ..., T(e_n) are independent. Therefore dim T(V) ≥ n. But, by Theorem 2.3, dim T(V) ≤ n. Therefore dim T(V) = n, so (b) implies (c).

Next, assume (c) holds and let {e_1, ..., e_n} be a basis for V. Take any element y in T(V). Then y = T(x) for some x in V, so we may write

    x = Σ_{i=1}^{n} c_i e_i,    and hence    y = T(x) = Σ_{i=1}^{n} c_i T(e_i).

Therefore the n elements T(e_1), ..., T(e_n) span T(V). But we are assuming dim T(V) = n, so {T(e_1), ..., T(e_n)} is a basis for T(V). Therefore (c) implies (d).

Finally, assume (d) holds. We will prove that T(x) = 0 implies x = 0. Let {e_1, ..., e_n} be a basis for V. If x ∈ V, we may write

    x = Σ_{i=1}^{n} c_i e_i,    and hence    T(x) = Σ_{i=1}^{n} c_i T(e_i).

If T(x) = 0, then c_1 = ··· = c_n = 0, since the elements T(e_1), ..., T(e_n) are independent. Therefore x = 0, so T is one-to-one on V by Theorem 2.10. Thus, (d) implies (a) and the proof is complete.

2.8 Exercises
17. T(x, y, z) = (x + 1, y + 2, z + 3).

In Exercises 22 through 25, S and T denote functions with domain V and values in V. Powers are defined inductively by T^0 = I and T^n = TT^{n-1}; the associative law implies the law of exponents T^m T^n = T^{m+n}. If ST = TS, we say that S and T commute.

22. If S and T commute, prove that (ST)^n = S^n T^n for all integers n ≥ 0.
23. If S and T are invertible, prove that ST is also invertible and that (ST)^{-1} = T^{-1} S^{-1}. In other words, the inverse of ST is the composition of inverses, taken in reverse order.
24. If S and T are invertible and commute, prove that their inverses also commute.
25. Let V be a linear space. If S and T commute, prove that

    (S + T)^2 = S^2 + 2ST + T^2    and    (S + T)^3 = S^3 + 3S^2 T + 3ST^2 + T^3.

Indicate how these formulas must be altered if ST ≠ TS.
26. Let S and T be the linear transformations of V_3 into V_3 defined by the formulas S(x, y, z) = (z, y, x) and T(x, y, z) = (x, x + y, x + y + z), where (x, y, z) is an arbitrary point of V_3.
(a) Determine the image of (x, y, z) under each of the following transformations: ST, TS, ST - TS, S^2, T^2, (ST)^2, (TS)^2, (ST - TS)^2.
(b) Prove that S and T are one-to-one on V_3 and find the image of (u, v, w) under each of the following transformations: S^{-1}, T^{-1}, (ST)^{-1}, (TS)^{-1}.
(c) Find the image of (x, y, z) under (T - I)^n for each n ≥ 1.
27. Let V be the linear space of all real polynomials p(x). Let D denote the differentiation operator and let T denote the integration operator which maps each polynomial p onto the polynomial q given by q(x) = ∫_0^x p(t) dt. Prove that DT = I_V but that TD ≠ I_V. Describe the null space and range of TD.
28. Let V be the linear space of all real polynomials p(x). Let D denote the differentiation operator and let T be the linear transformation that maps p(x) onto xp'(x).
(a) Let p(x) = 2 + 3x - x^2 + 4x^3 and determine the image of p under each of the following transformations: D, T, DT, TD, DT - TD, T^2 D^2 - D^2 T^2.
(b) Determine those p in V for which T(p) = p.
(c) Determine those p in V for which (DT - 2D)(p) = 0.
(d) Determine those p in V for which (DT - TD)^n(p) = D^n(p) for all n ≥ 1.
29. Let V and D be as in Exercise 28 but let T be the linear transformation that maps p(x) onto xp(x). Prove that DT - TD = I and that DT^n - T^n D = nT^{n-1} for n ≥ 2.
30. Let S and T be in ℒ(V, V) and assume that ST - TS = I. Prove that ST^n - T^n S = nT^{n-1} for n ≥ 2.
31. Let V be the linear space of all real polynomials p(x). Let R, S, T be the functions which map an arbitrary polynomial p(x) = c_0 + c_1 x + ··· + c_n x^n in V onto the polynomials r(x), s(x), and t(x), respectively, where

    r(x) = p(0),    s(x) = Σ_{k=1}^{n} c_k x^{k-1},    t(x) = Σ_{k=0}^{n} c_k x^{k+1}.

(a) Let p(x) = 2 + 3x - x^2 + x^3 and determine the image of p under each of the following transformations: R, S, T, ST, TS, (TS)^2, T^2 S^2, S^2 T^2, TRS, RST.
(b) Prove that R, S, and T are linear and determine the null space and range of each.
(c) Prove that T is one-to-one on V and determine its inverse.
(d) If n ≥ 1, express (TS)^n and S^n T^n in terms of I and R.
32. Refer to Exercise 28 of Section 2.4. Determine whether T is one-to-one on V. If it is, describe its inverse.
2.9 Linear transformations with prescribed values

If V is finite-dimensional, we can always construct a linear transformation T: V → W with prescribed values at the basis elements of V, as described in the next theorem.

THEOREM 2.12. Let e_1, ..., e_n be a basis for an n-dimensional linear space V. Let u_1, ..., u_n be n arbitrary elements in a linear space W. Then there is one and only one linear transformation T: V → W such that

(2.7)    T(e_k) = u_k    for k = 1, 2, ..., n.

This T maps an arbitrary element x in V as follows: If

(2.8)    x = Σ_{k=1}^{n} x_k e_k,    then    T(x) = Σ_{k=1}^{n} x_k u_k.

Proof. Every x in V can be expressed uniquely as a linear combination of e_1, ..., e_n, the multipliers x_1, ..., x_n being the components of x relative to the ordered basis (e_1, ..., e_n). If we define T by (2.8), it is a straightforward matter to verify that T is linear. If x = e_k for some k, then all components of x are 0 except the kth, which is 1, so (2.8) gives T(e_k) = u_k, as required by (2.7).

To prove that there is only one linear transformation satisfying (2.7), let T' be another and compute T'(x). We find that

    T'(x) = T'(Σ_{k=1}^{n} x_k e_k) = Σ_{k=1}^{n} x_k T'(e_k) = Σ_{k=1}^{n} x_k u_k = T(x).

Since T'(x) = T(x) for all x in V, we have T' = T, which completes the proof.

EXAMPLE. Determine the linear transformation T: V_2 → V_2 which maps the basis elements i = (1, 0) and j = (0, 1) as follows:

    T(i) = i + j,    T(j) = 2i - j.
Solution. If x = x_1 i + x_2 j is an arbitrary element of V_2, then T(x) is given by

    T(x) = x_1 T(i) + x_2 T(j) = x_1(i + j) + x_2(2i - j) = (x_1 + 2x_2)i + (x_1 - x_2)j.

2.10 Matrix representations of linear transformations

Theorem 2.12 shows that a linear transformation T: V → W of a finite-dimensional linear space V is completely determined by its action on a given set of basis elements e_1, ..., e_n. Now, suppose the space W is also finite-dimensional, say dim W = m, and let w_1, ..., w_m be a basis for W. (The dimensions n and m may or may not be equal.) Since T has values in W, each element T(e_k) can be expressed uniquely as a linear combination of the basis elements w_1, ..., w_m, say

    T(e_k) = Σ_{i=1}^{m} t_{ik} w_i,

where t_{1k}, ..., t_{mk} are the components of T(e_k) relative to the ordered basis (w_1, ..., w_m). We shall display the m-tuple (t_{1k}, ..., t_{mk}) vertically, as follows:

(2.9)    | t_{1k} |
         | t_{2k} |
         |  ...   |
         | t_{mk} |

This array is called a column vector or a column matrix. We have such a column vector for each of the n elements T(e_1), ..., T(e_n). We place them side by side and enclose them in one pair of brackets to obtain the following rectangular array:

    | t_{11}  t_{12}  ...  t_{1n} |
    | t_{21}  t_{22}  ...  t_{2n} |
    |  ...                        |
    | t_{m1}  t_{m2}  ...  t_{mn} |

This array is called a matrix consisting of m rows and n columns. We call it an m by n matrix, or an m × n matrix. The first row is the 1 × n matrix (t_{11}, t_{12}, ..., t_{1n}). The m × 1 matrix displayed in (2.9) is the kth column. The scalars t_{ik} are indexed so the first subscript i indicates the row, and the second subscript k indicates the column in which t_{ik} occurs. We call t_{ik} the ik-entry or the ik-element of the matrix. The more compact notations

    (t_{ik})    or    (t_{ik})_{i,k=1}^{m,n}

are also used to denote the matrix whose ik-entry is t_{ik}.
Thus, every linear transformation T of an n-dimensional space V into an m-dimensional space W gives rise to an m × n matrix (t_{ik}) whose columns consist of the components of T(e_1), ..., T(e_n) relative to the basis (w_1, ..., w_m). We call this the matrix representation of T relative to the given choice of ordered bases (e_1, ..., e_n) for V and (w_1, ..., w_m) for W. Once we know the matrix (t_{ik}), the components of any element T(x) relative to the basis (w_1, ..., w_m) can be determined as described in the next theorem.

THEOREM 2.13. Let T be a linear transformation in ℒ(V, W), where dim V = n and dim W = m. Let (e_1, ..., e_n) and (w_1, ..., w_m) be ordered bases for V and W, respectively, and let (t_{ik}) be the m × n matrix whose entries are determined by the equations

(2.10)    T(e_k) = Σ_{i=1}^{m} t_{ik} w_i,    for k = 1, 2, ..., n.

Then an arbitrary element

(2.11)    x = Σ_{k=1}^{n} x_k e_k

in V with components (x_1, ..., x_n) relative to (e_1, ..., e_n) is mapped by T onto the element

(2.12)    T(x) = Σ_{i=1}^{m} y_i w_i

in W with components (y_1, ..., y_m) relative to (w_1, ..., w_m). The y_i are related to the components of x by the linear equations

(2.13)    y_i = Σ_{k=1}^{n} t_{ik} x_k    for i = 1, 2, ..., m.

Proof. Applying T to each member of (2.11) and using (2.10), we obtain

    T(x) = Σ_{k=1}^{n} x_k T(e_k) = Σ_{k=1}^{n} x_k Σ_{i=1}^{m} t_{ik} w_i = Σ_{i=1}^{m} (Σ_{k=1}^{n} t_{ik} x_k) w_i = Σ_{i=1}^{m} y_i w_i,

where each y_i is given by (2.13). This completes the proof.

Having chosen a pair of ordered bases (e_1, ..., e_n) and (w_1, ..., w_m) for V and W, respectively, every linear transformation T: V → W has a matrix representation (t_{ik}). Conversely, if we start with any mn scalars arranged as a rectangular m × n matrix (t_{ik}) and choose a pair of ordered bases for V and W, then it is easy to prove that there is exactly one linear transformation T: V → W having this matrix representation. We simply define T at the basis elements of V by the equations in (2.10). Then, by Theorem 2.12, there is one and only one linear transformation T: V → W with these prescribed values. The image T(x) of an arbitrary point x in V is then given by Equations (2.12) and (2.13).
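Equation (2.13) is ordinary matrix-times-column-vector multiplication on components. The following sketch illustrates it with an arbitrarily chosen matrix; none of the numbers come from the text.

```python
# Sketch of Equation (2.13): y_i = sum_k t_ik * x_k, i.e. the component vector
# of T(x) is obtained by applying the matrix (t_ik) to the components of x.
def apply_matrix(t, x):
    """Apply an m x n matrix t (given as a list of m rows) to an n-vector x."""
    return [sum(t_ik * x_k for t_ik, x_k in zip(row, x)) for row in t]

# An arbitrary 2 x 3 matrix, chosen for illustration only:
t = [[2, -1, 0],
     [1,  3, 5]]

y = apply_matrix(t, [1, 1, 1])   # y_1 = 2 - 1 + 0 = 1, y_2 = 1 + 3 + 5 = 9
```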
EXAMPLE 1. Construction of a linear transformation from a given matrix. Suppose we start with the 2 × 3 matrix

    | 3  1  -2 |
    | 1  0   4 |

Choose the usual bases of unit coordinate vectors for V_3 and V_2. Then the given matrix represents a linear transformation T: V_3 → V_2 which maps an arbitrary vector (x_1, x_2, x_3) in V_3 onto the vector (y_1, y_2) in V_2 according to the linear equations

    y_1 = 3x_1 + x_2 - 2x_3,
    y_2 = x_1 + 0x_2 + 4x_3.

EXAMPLE 2. Construction of a matrix representation of a given linear transformation. Let V be the linear space of all real polynomials p(x) of degree ≤ 3. This space has dimension 4, and we choose the basis (1, x, x^2, x^3). Let D be the differentiation operator which maps each polynomial p(x) in V onto its derivative p'(x). We can regard D as a linear transformation of V into W, where W is the 3-dimensional space of all real polynomials of degree ≤ 2. In W we choose the basis (1, x, x^2). To find the matrix representation of D relative to this choice of bases, we transform (differentiate) each basis element of V and express it as a linear combination of the basis elements of W. Thus, we find that

    D(1) = 0 = 0 + 0x + 0x^2,        D(x) = 1 = 1 + 0x + 0x^2,
    D(x^2) = 2x = 0 + 2x + 0x^2,     D(x^3) = 3x^2 = 0 + 0x + 3x^2.

The coefficients of these polynomials determine the columns of the matrix representation of D. Therefore, the required representation is given by the following 3 × 4 matrix:

    | 0  1  0  0 |
    | 0  0  2  0 |
    | 0  0  0  3 |

To emphasize that the matrix representation depends not only on the basis elements but also on their order, let us reverse the order of the basis elements in W and use, instead, the ordered basis (x^2, x, 1). Then the basis elements of V are transformed into the same polynomials obtained above, but the components of these polynomials relative to the new basis (x^2, x, 1) appear in reversed order. Therefore, the matrix representation of D now becomes

    | 0  0  0  3 |
    | 0  0  2  0 |
    | 0  1  0  0 |
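The first matrix of Example 2 can be checked numerically: applied to the coefficient vector of p relative to (1, x, x^2, x^3), it must produce the coefficient vector of p' relative to (1, x, x^2). A sketch, with polynomials represented by coefficient lists (a convention of this illustration, not of the text):

```python
# Sketch: the matrix of D relative to the bases (1, x, x^2, x^3) and (1, x, x^2)
# sends the coefficient vector of p to the coefficient vector of p'.
D = [[0, 1, 0, 0],
     [0, 0, 2, 0],
     [0, 0, 0, 3]]

def apply_matrix(t, x):
    return [sum(a * b for a, b in zip(row, x)) for row in t]

def derivative_coeffs(p):
    """Differentiate a cubic given by coefficients [c0, c1, c2, c3]."""
    return [k * c for k, c in enumerate(p)][1:]

p = [2, 3, -1, 4]             # p(x) = 2 + 3x - x^2 + 4x^3
y = apply_matrix(D, p)        # coefficients of p'(x) = 3 - 2x + 12x^2
```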
Let us compute a third matrix representation for D, using the basis (1, 1 + x, 1 + x + x^2, 1 + x + x^2 + x^3) for V, and the basis (1, x, x^2) for W. The basis elements of V are transformed as follows:

    D(1) = 0,    D(1 + x) = 1,    D(1 + x + x^2) = 1 + 2x,    D(1 + x + x^2 + x^3) = 1 + 2x + 3x^2,

so the matrix representation in this case is

    | 0  1  1  1 |
    | 0  0  2  2 |
    | 0  0  0  3 |

2.11 Construction of a matrix representation in diagonal form

Since it is possible to obtain different matrix representations of a given linear transformation by different choices of bases, it is natural to try to choose the bases so that the resulting matrix will have a particularly simple form. The next theorem shows that we can make all the entries 0 except possibly along the diagonal starting from the upper left-hand corner of the matrix. Along this diagonal there will be a string of ones followed by zeros, the number of ones being equal to the rank of the transformation. A matrix (t_{ik}) with all entries t_{ik} = 0 when i ≠ k is said to be a diagonal matrix.

THEOREM 2.14. Let V and W be finite-dimensional linear spaces, with dim V = n and dim W = m. Assume T ∈ ℒ(V, W) and let r = dim T(V) denote the rank of T. Then there exists a basis (e_1, ..., e_n) for V and a basis (w_1, ..., w_m) for W such that

(2.14)    T(e_i) = w_i    for i = 1, 2, ..., r,

and

(2.15)    T(e_i) = 0    for i = r + 1, ..., n.

Therefore, the matrix (t_{ik}) of T relative to these bases has all entries zero except for the r diagonal entries

    t_{11} = t_{22} = ··· = t_{rr} = 1.

Proof. First we construct a basis for W. Since T(V) is a subspace of W with dim T(V) = r, the space T(V) has a basis of r elements in W, say w_1, ..., w_r. By Theorem 1.7, these elements form a subset of some basis for W. Therefore we can adjoin elements w_{r+1}, ..., w_m so that

(2.16)    (w_1, ..., w_r, w_{r+1}, ..., w_m)

is a basis for W.

Now we construct a basis for V. Each of the first r elements w_i in (2.16) is the image of at least one element in V. Choose one such element in V and call it e_i. Then T(e_i) = w_i for i = 1, 2, ..., r, so (2.14) is satisfied. Now let k be the dimension of the null space N(T). By Theorem 2.3 we have n = k + r. Since dim N(T) = k, the space N(T) has a basis consisting of k elements in V which we designate as e_{r+1}, ..., e_{r+k}. For each of these elements, Equation (2.15) is satisfied. Therefore, to complete the proof, we must show that the ordered set

(2.17)    (e_1, ..., e_r, e_{r+1}, ..., e_{r+k})

is a basis for V. Since dim V = n = r + k, we need only show that these elements are independent. Suppose that some linear combination of them is zero, say

(2.18)    Σ_{i=1}^{r+k} c_i e_i = 0.

Applying T and using Equations (2.14) and (2.15), we find that

    Σ_{i=1}^{r+k} c_i T(e_i) = Σ_{i=1}^{r} c_i w_i = 0.

But w_1, ..., w_r are independent, and hence c_1 = ··· = c_r = 0. Therefore, the first r terms in (2.18) are zero, so (2.18) reduces to

    Σ_{i=r+1}^{r+k} c_i e_i = 0.

But e_{r+1}, ..., e_{r+k} are independent since they form a basis for N(T), and hence c_{r+1} = ··· = c_{r+k} = 0. Therefore, all the c_i in (2.18) are zero, so the elements in (2.17) form a basis for V. This completes the proof.
EXAMPLE.
refer
1 the
affected,
involve
A 2
of each minor we have)
If the
each kth
multiplied
affectedbut row.)))
row
A 2
by t, some
so fk row
is
homogeneous
of Ail
gets
in by
Ail)
= tfi(Al'
t, where
in the multiplied
I ,  .,
tfl(A
A 2
\302\267 \302\267 , An) , \302\267
by
is not
All
t, so
we
An).)
by t and the
multiplied
first
the
=
tall detAIl gets
Ail
all is multiplied
coefficient
The
row.
first
\302\267 \302\267 , \302\267 , An)
fi is homogeneous of A is multiplied
det
1 and 2 the same will for f. be true first row of A by a scalar t. The minor
An) =
,...,
fi(tAl' Therefore
the
row
first
so again
= (I)i+lail
\302\267 \302\267 \302\267 , An)
we verify that eachh satisfiesAxioms Consider the effect of multiplying the
affected have)
kth
I,
coefficient ail
is not
\302\267)
row.
minor
k > I , the row.
kth
by
t.
If j
Hence
\037
Akl
k,
is not
the
every fi is
affected
coefficient
but a kl is ail is not
homogeneous in
the
The
A similar
2.
shows
argument
We prove next
that
transpose) in every
is additive
eachh
Axiom 3', the
I satisfies
that
of a
determinant
weak version
91)
row, soI
1 and
Axioms
satisfies
3.
Axiom
of
From Theorem
3. that I satisfiesAxiom Axiom 3', assume two adjacent rows of A are equal, I satisfies say Ak = has two equal rows so A k+l . Then, except for minors Akl and A k+1,1, each minor Ail det Ail = o. Therefore the sum in (3.26) consists only of the two terms corresponding to
3.1, it
follows
then
verify that
To
j = k andj = k
+
1,)
I(A l ,
(3.27))
=
\302\267 \302\267 \302\267 , An)
Akl +
(1 )k+lak 1 det
(1
k + l , 1 det Ak+1
)k+2a
,)1
\302\267
ak + l 1 since Ak = A k + l . Therefore the two terms in (3.27) so I(A l , . . . , An) = o. Thus, I satisfies Axiom 3'. = 1 and a,l = 0 for = Iwehaveall Axiom 4. A When Finally, weverifythat/satisfies in each term I. is the matrix n so of order > Also, All 1, (3.26) is zero except identity j 4.) the first, which is equal to 1. Hence1(11, . . . , In) = 1 so I satisfiesAxiom and akl =
A k1 = Ak+l 1
But
only in sign,
differ
have In the foregoing proof we could just as well of the firstcolumn kthcolumn minors Aik instead
defined
a functionl
used
minors Ajl. In
in
if we
fact,
of the
terms
let)
n
(3.28))
= !(l)i+k
, An)
f(Al'...
A ik
det
aik
,
i=l)
this I satisfies all four same type of proof shows that functions are unique, the expansion determinant
the
exactly
those in The but
also
are
(3.21)
expansion reveal
3.14 The transpose of a matrix

Associated with each matrix A is another matrix called the transpose of A and denoted by A^t. The rows of A^t are the columns of A. For example, if

    A = | 1  2  3 |        then        A^t = | 1  4 |
        | 4  5  6 |,                         | 2  5 |
                                             | 3  6 |.

A formal definition may be given as follows.

DEFINITION OF TRANSPOSE. The transpose of an m × n matrix A = (a_{ij})_{i,j=1}^{m,n} is the n × m matrix A^t whose i, j entry is a_{ji}.

Although transposition can be applied to any rectangular matrix, we shall be concerned primarily with square matrices. We prove next that transposition of a square matrix does not alter its determinant.

THEOREM 3.11. For any n × n matrix A we have det A = det A^t.

Proof. The proof is by induction on n. For n = 1 and n = 2 the result is easily verified. Assume, then, that the theorem is true for matrices of order n - 1. Let A = (a_{ij}) and let B = A^t = (b_{ij}). Expanding det A by its first-column minors and det B by its first-row minors, we have

    det A = Σ_{j=1}^{n} (-1)^{j+1} a_{j1} det A_{j1},        det B = Σ_{j=1}^{n} (-1)^{j+1} b_{1j} det B_{1j}.

But from the definition of transpose we have b_{1j} = a_{j1} and B_{1j} = (A_{j1})^t. Since we are assuming the theorem is true for matrices of order n - 1, we have det B_{1j} = det A_{j1}. Hence the foregoing sums are equal term by term, so det A = det B.

3.15 The cofactor matrix
=
cof aii
OF THE COFACTOR
DEFINITION
cofactor matrixt
cof
The apart
next theorem from a scalar
THEOREM
A =
shows that
the
factor, the
identity
For any n
3.12.
X
if det
In particular,
the
much of the older
\037 a.l1,) 't,''
literature
an entirely
different
A
matrix
\037
0 the
literature
calls object,
it
I =
of product matrix
A
n matrix
A (cof
(3.29))
t In
( cof
by
A.
cof
inverse
of
AI
=
A
with
n >
2 we
aij is
called the
?2
) 1 . \"_ 't,.) \037,
transpose
of its
cofactor
matrix
is,
have)
(det A)/.)
exists
1
det A)
the transpose of the
the
and
is given
hy)
(cof A)t.
cofactor
adjoint of A. However, current discussed in Section 5.8.)))
the
i+i det A,
with
is cof
entry
have)
I.)
A)t = A
we
Thus,
(( _l )
ii .)
whose i, j
matrix
The
MATRIX.
is denoted
of A and
(1 )i+i det A
matrix
is called the
nomenclature
of A. Some of the term adjoint for
adjugate
reserves
rule)
Cramers'
3.9 we
Using Theorem
Proof.
of its kthrow cofactors by
A in terms
det
express
93)
the
formula) n
(3.30
!
A =
det
))
a
k;
cof a k ;
.
;=1)
k fixed
Keep
row of
A
and apply i
some
for
\037
the ith det B = 0 because ithrow cofactors we have)
whose
and
B whose ith
new matrix
to a
relation
this
k,
rows are
remaining
rows of B are equal.
and kth
to the kth of A. Then in terms of its
is equal
row
same
the
as those B
det
Expressing
n
=\037b..cofb.. k 0
detB
(3.31))
;=1)
since
But
the ith row boo'l:1
Hence (3.31)
of B is equal to =
a k1).
cof
and)
boo t1
=
o.
of A we have)
row
kth
the
0=
cof a..t1)
for
every
j.)
states that) n \037
k
(3.32))
a k . cof
j=l)
Equations
(3.30) and
1
a..'l:1 =
0
k
if
(3.32) together can be written
(3.33))
But fore
\037 j=l)
appearing on the
the sum (3.33)
implies
3.16
3.13.
Cramer's
left
of (3.33)
det
A
{0
if
i =
k
if
i
k.)
is the k, i entry
\037
of the
product
A(cof A)t.
There
(3.29).)
As a direct corollary sufficient condition for
THEOREM
a..t1 =
a k 1' cof
i.)
follows:)
as
n \037
\037
of
3.5 and
Theorems
a squarematrix
A square
matrix
to
A
3.12 we have
the
necessary
following
and
be nonsingular.)
is nonsingular
if and
only
if det
A
\037
O.)
rule
Theorem 3.12 can also be used to give explicit formulas for the solutions of a system of linear equations with a nonsingular coefficient matrix. The formulas are called Cramer's rule, in honor of the Swiss mathematician Gabriel Cramer (1704-1752).

THEOREM 3.14. CRAMER'S RULE. If a system of n linear equations in n unknowns x_1, ..., x_n,

(3.34)  Σ_{j=1}^n a_ij x_j = b_i  (i = 1, 2, ..., n),

has a nonsingular coefficient matrix A = (a_ij), then there is a unique solution for the system, given by the formulas

(3.35)  x_j = (1/det A) Σ_{k=1}^n b_k cof a_kj,  for j = 1, 2, ..., n.

Proof. The system can be written as a matrix equation,

AX = B,

where X and B are column matrices, X = (x_1, ..., x_n)^t and B = (b_1, ..., b_n)^t. Since A is nonsingular there is a unique solution X given by

X = A^{-1}B = (1/det A)(cof A)^t B.

The formulas in (3.35) follow by equating components in this equation.

It should be noted that the formula for x_j in (3.35) can be expressed as the quotient of two determinants,

x_j = det C_j / det A,

where C_j is the matrix obtained from A by replacing the jth column of A by the column matrix B.
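The quotient form x_j = det C_j / det A translates directly into code. Here is a small sketch using exact rational arithmetic (the 2 × 2 system at the end is an illustrative choice of mine, not an example from the text):

```python
from fractions import Fraction

def det(M):
    # determinant by cofactor expansion along the first row (Theorem 3.9)
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def cramer_solve(A, b):
    # x_j = det(C_j) / det(A), where C_j replaces the jth column of A by b
    d = det(A)
    n = len(A)
    xs = []
    for j in range(n):
        Cj = [row[:j] + [b[i]] + row[j + 1:] for i, row in enumerate(A)]
        xs.append(Fraction(det(Cj), d))
    return xs

# solves 2x - y = 1, x + 3y = 10
print(cramer_solve([[2, -1], [1, 3]], [1, 10]))  # [Fraction(13, 7), Fraction(19, 7)]
```

Cramer's rule is convenient for small hand computations like this; for large n the cost of the n + 1 determinants grows quickly, which is why elimination methods are preferred in practice.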
3.17 Exercises

1. Determine the cofactor matrix of each of the following matrices:
(a) …  (b) …  (c) …

2. Determine the inverse of each of the nonsingular matrices in Exercise 1.

3. Find all values of the scalar λ for which the matrix λI − A is singular, if A is equal to
(a) …  (b) …  (c) …
4. If A is an n × n matrix with n > 1, prove each of the following properties of its cofactor matrix:
(a) cof (A^t) = (cof A)^t.
(b) (cof A)^t A = (det A) I.
(c) A(cof A)^t = (cof A)^t A  (A commutes with the transpose of its cofactor matrix).

5. Use Cramer's rule to solve each of the following systems:
(a) x + 2y + 3z = 8,  2x − y + 4z = 7,  −y + z = 1.
(b) x + y + 2z = 0,  3x − y − z = 3,  2x + 5y + 3z = 4.

6. (a) Explain why each of the following is a Cartesian equation for a straight line in the xy-plane passing through two distinct points (x_1, y_1) and (x_2, y_2):

det [x − x_1, y − y_1; x_2 − x_1, y_2 − y_1] = 0;   det [x, y, 1; x_1, y_1, 1; x_2, y_2, 1] = 0.

(b) State and prove corresponding relations for a plane in 3-space passing through three distinct points.
(c) State and prove corresponding relations for a circle in the xy-plane passing through three noncolinear points.

7. Given n² functions f_ij, each differentiable on an interval (a, b), define F(x) = det [f_ij(x)] for each x in (a, b). Prove that the derivative F′(x) is a sum of n determinants,

F′(x) = Σ_{i=1}^n det A_i(x),

where A_i(x) is the matrix obtained by differentiating the functions in the ith row of [f_ij(x)].

8. An n × n matrix of functions of the form W(x) = [u_j^{(i−1)}(x)], in which each row after the first is the derivative of the previous row, is called a Wronskian matrix, in honor of the Polish mathematician J. M. H. Wronski (1778-1853). Prove that the derivative of the determinant of W(x) is the determinant of the matrix obtained by differentiating each entry in the last row of W(x). [Hint: Use Exercise 7.]
4
EIGENVALUES AND EIGENVECTORS

4.1 Linear transformations with diagonal matrix representations

Let T: V → V be a linear transformation on a finite-dimensional linear space V. Those properties of T which are independent of any coordinate system (basis) for V are called intrinsic properties of T. They are shared by all the matrix representations of T. If a basis can be chosen so that the resulting matrix has a particularly simple form it may be possible to detect some of the intrinsic properties directly from the matrix representation.

Among the simplest types of matrices are the diagonal matrices. Therefore we might ask whether every linear transformation has a diagonal matrix representation. In Chapter 2 we treated the problem of finding a diagonal matrix representation of a linear transformation T: V → W, where dim V = n and dim W = m. In Theorem 2.14 we proved that there always exists a basis (e_1, ..., e_n) for V and a basis (w_1, ..., w_m) for W such that the matrix of T relative to this pair of bases is a diagonal matrix. In particular, if W = V the matrix will be a square diagonal matrix. The new feature now is that we want to use the same basis for both V and W. With this restriction it is not always possible to find a diagonal matrix representation for T. We turn, then, to the problem of determining which transformations do have a diagonal matrix representation.

Notation: If A = (a_ij) is a diagonal matrix, we write A = diag (a_11, a_22, ..., a_nn).

It is easy to give a necessary and sufficient condition for a linear transformation to have a diagonal matrix representation.

THEOREM 4.1. Given a linear transformation T: V → V, where dim V = n. If T has a diagonal matrix representation, then there exists an independent set of elements u_1, ..., u_n in V and a corresponding set of scalars λ_1, ..., λ_n such that

(4.1)  T(u_k) = λ_k u_k  for k = 1, 2, ..., n.

Conversely, if there is an independent set u_1, ..., u_n in V and a corresponding set of scalars λ_1, ..., λ_n satisfying (4.1), then the matrix A = diag (λ_1, ..., λ_n) is a representation of T relative to the basis (u_1, ..., u_n).

Proof. Assume first that T has a diagonal matrix representation A = (a_ik) relative to some basis (e_1, ..., e_n). The action of T on the basis elements is given by the formula

T(e_k) = Σ_{i=1}^n a_ik e_i = a_kk e_k,

since a_ik = 0 for i ≠ k. This proves (4.1) with u_k = e_k and λ_k = a_kk.

Now suppose independent elements u_1, ..., u_n and scalars λ_1, ..., λ_n exist satisfying (4.1). Since u_1, ..., u_n are independent they form a basis for V. If we define a_kk = λ_k and a_ik = 0 for i ≠ k, then the matrix A = (a_ik) is a diagonal matrix which represents T relative to the basis (u_1, ..., u_n).

Thus the problem of finding a diagonal matrix representation of a linear transformation has been transformed to another problem, that of finding independent elements u_1, ..., u_n and scalars λ_1, ..., λ_n to satisfy (4.1). Elements u_k and scalars λ_k satisfying (4.1) are called eigenvectors and eigenvalues of T, respectively.† In the next section we study eigenvectors and eigenvalues in a more general setting.
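Relation (4.1) is easy to check concretely. The sketch below verifies it for T(x) = Ax with an illustrative 2 × 2 matrix (the matrix and its eigenvectors are my own choices, not taken from the text):

```python
# Checking T(u_k) = lambda_k u_k for T(x) = Ax on a 2x2 example.
A = [[4, 1],
     [2, 3]]          # eigenvalues of A are 5 and 2

def apply(M, v):
    # matrix-vector product, i.e. the action of T on v
    return [sum(M[i][k] * v[k] for k in range(len(v))) for i in range(len(M))]

u1, lam1 = [1, 1], 5   # A(1, 1)  = (5, 5)  = 5 (1, 1)
u2, lam2 = [1, -2], 2  # A(1, -2) = (2, -4) = 2 (1, -2)

assert apply(A, u1) == [lam1 * c for c in u1]
assert apply(A, u2) == [lam2 * c for c in u2]
print("relative to the basis (u1, u2), T is represented by diag(5, 2)")
```

Since u1 and u2 are independent, Theorem 4.1 says the matrix of T relative to the basis (u1, u2) is diag(5, 2).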
4.2 Eigenvectors and eigenvalues of a linear transformation

In this discussion V denotes a linear space and S denotes a subspace of V. The spaces S and V are not required to be finite dimensional.

DEFINITION. Let T: S → V be a linear transformation of S into V. A scalar λ is called an eigenvalue of T if there is a nonzero element x in S such that

(4.2)  T(x) = λx.

The element x is called an eigenvector of T belonging to λ. The scalar λ is called an eigenvalue corresponding to x.

Note: There is exactly one eigenvalue corresponding to a given eigenvector x. In fact, if we have T(x) = λx and T(x) = μx for some x ≠ 0, then λx = μx, so λ = μ. Although Equation (4.2) always holds for x = 0 and any λ, the definition excludes 0 as an eigenvector. One reason for this prejudice against 0 is to have exactly one scalar λ associated with a given eigenvector x.

The following examples illustrate the meaning of these concepts.

EXAMPLE 1. Multiplication by a fixed scalar. Let T: S → V be the linear transformation defined by T(x) = cx for each x in S, where c is a fixed scalar. In this example every nonzero element of S is an eigenvector belonging to the scalar c.
† The words eigenvector and eigenvalue are partial translations of the German words Eigenvektor and Eigenwert, respectively. Some authors use the terms characteristic vector or proper vector as synonyms for eigenvector. Eigenvalues are also called characteristic values, proper values, or latent roots.

EXAMPLE 2. The eigenspace E(λ). Let T: S → V be a linear transformation having an eigenvalue λ, and let E(λ) be the set of all elements x in S such that T(x) = λx. This set contains the zero element 0 and all eigenvectors belonging to λ. It is easy to prove that E(λ) is a subspace of S, because if x and y are in E(λ) we have

T(ax + by) = aT(x) + bT(y) = aλx + bλy = λ(ax + by)

for all scalars a and b. Hence (ax + by) ∈ E(λ), so E(λ) is a subspace, called the eigenspace corresponding to λ. The space E(λ) may be finite- or infinite-dimensional. If E(λ) is finite-dimensional then dim E(λ) ≥ 1, since E(λ) contains at least one nonzero element x.

EXAMPLE 3. Existence of zero eigenvalues. An eigenvector cannot be zero, by definition. However, the zero scalar can be an eigenvalue. In fact, if 0 is an eigenvalue for x then T(x) = 0x = 0, so x is in the null space of T. Conversely, if the null space of T contains any nonzero element x, then each such x is an eigenvector with eigenvalue 0. Thus the eigenvectors belonging to the eigenvalue 0 are the nonzero elements of the null space of T.

EXAMPLE 4. Reflection in the xy-plane. Let S = V = V_3(R) and let T act on the basis vectors i, j, k as follows: T(i) = i, T(j) = j, T(k) = −k. That is, T is a reflection in the xy-plane. Every nonzero vector in the xy-plane is an eigenvector with eigenvalue 1. The remaining eigenvectors are those of the form ck, with c ≠ 0; each of them has eigenvalue −1.

EXAMPLE 5. Rotation of the plane through a fixed angle α. This example is of special interest because it shows that the existence of eigenvectors may depend on the underlying field of scalars. The plane can be regarded as a linear space in two different ways: (1) as a 2-dimensional real linear space, V = V_2(R), with two basis elements (1, 0) and (0, 1), and with real numbers as scalars; or (2) as a 1-dimensional complex linear space, V = V_1(C), with one basis element 1, and with complex numbers as scalars.

Consider the second interpretation first. Each element z ≠ 0 of V_1(C) can be expressed in polar form, z = re^{iθ}. If T rotates z through an angle α, then T(z) = re^{i(θ+α)} = e^{iα}z. Thus, each z ≠ 0 is an eigenvector with eigenvalue λ = e^{iα}. Note that this eigenvalue is not real unless α is an integer multiple of π.

Now consider the plane as a real linear space, V_2(R). Since the scalars of V_2(R) are the real numbers, the rotation T has real eigenvalues only if α is an integer multiple of π. In other words, if α is not an integer multiple of π then T has no real eigenvalues and hence no eigenvectors. Thus the existence of eigenvectors and eigenvalues may depend on the choice of scalars for V.
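Example 5 can be checked numerically. The sketch below uses an arbitrary angle of my own choosing; it confirms that the planar rotation equals multiplication by e^{iα} in the complex interpretation, while the real characteristic equation has negative discriminant and hence no real roots:

```python
import cmath
import math

alpha = 0.7  # a fixed angle, chosen (by me) not to be an integer multiple of pi

def rotate(x, y):
    # rotation of the real plane V_2(R) through the angle alpha
    c, s = math.cos(alpha), math.sin(alpha)
    return (c * x - s * y, s * x + c * y)

# Interpretation (2): z = x + iy in V_1(C); rotation is multiplication by
# e^{i*alpha}, so every nonzero z is an eigenvector with eigenvalue e^{i*alpha}.
for z in [1 + 0j, 2 - 3j, -0.5 + 4j]:
    x2, y2 = rotate(z.real, z.imag)
    assert abs(complex(x2, y2) - cmath.exp(1j * alpha) * z) < 1e-12

# Interpretation (1): over the reals, the characteristic equation of the
# rotation matrix [[c, -s], [s, c]] is lambda^2 - 2c*lambda + 1 = 0, whose
# discriminant 4(c^2 - 1) is negative whenever sin(alpha) != 0.
disc = 4 * (math.cos(alpha) ** 2 - 1)
assert disc < 0
print("complex eigenvalue:", cmath.exp(1j * alpha), "; no real eigenvalues")
```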
EXAMPLE 6. The differentiation operator. Let V be the linear space of all real functions f having derivatives of every order on a given open interval. Let D be the linear transformation which maps each f onto its derivative, D(f) = f′. The eigenvectors of D are those nonzero functions f satisfying an equation of the form

f′ = λf

for some real λ. These are the functions f(x) = ce^{λx}, where c ≠ 0. Thus every real number λ is an eigenvalue of D. In examples like this one, where V is a space of functions, the eigenvectors are called eigenfunctions.

EXAMPLE 7. The integration operator. Let V be the space of functions continuous on an interval [a, b]. If f is in V, define g = T(f) to be the function given by

g(x) = ∫_a^x f(t) dt.

The eigenfunctions of T (if any exist) are the nonzero f which satisfy the relation

∫_a^x f(t) dt = λf(x)

for some real λ. If an eigenfunction exists we may differentiate this relation to obtain f(x) = λf′(x), from which we see that the only candidates are the functions f(x) = ce^{x/λ} with λ ≠ 0 and c ≠ 0. However, putting x = a in the foregoing relation we obtain 0 = λf(a), so f(a) = ce^{a/λ} = 0. Since e^{a/λ} is never zero, we see that c = 0, a contradiction. Therefore T has no eigenfunctions.

… Therefore the equation represents the same ellipse in the original coordinate system, corresponding to c = 9.
The reduction of a quadratic form to a diagonal form can be used to identify the set of all points (x, y) in the plane which satisfy a Cartesian equation of the form

(5.10)  ax² + bxy + cy² + dx + ey + f = 0.

We shall find that this set is always a conic section, that is, an ellipse, hyperbola, parabola, or one of the degenerate cases (the empty set, a single point, or one or two straight lines). The type of conic is governed by the second-degree terms, that is, by the quadratic form ax² + bxy + cy². To conform with the notation used earlier, we write x₁ for x, x₂ for y, and express this quadratic form as a matrix product,

ax₁² + bx₁x₂ + cx₂² = XAX^t,

where X = [x₁, x₂] and A = [a, b/2; b/2, c]. By a rotation Y = XC we reduce the form XAX^t to a diagonal form λ₁y₁² + λ₂y₂², where λ₁ and λ₂ are the eigenvalues of A. An orthonormal set of eigenvectors u₁, u₂ determines a new set of coordinate axes, relative to which the Cartesian equation (5.10) becomes

(5.11)  λ₁y₁² + λ₂y₂² + d′y₁ + e′y₂ + f = 0,

with new coefficients d′ and e′ in the linear terms. In this equation there is no mixed product term y₁y₂, so the type of conic is easily identified by examining the eigenvalues λ₁ and λ₂. If the conic is not degenerate, Equation (5.11) represents an ellipse if λ₁ and λ₂ have the same sign, a hyperbola if λ₁ and λ₂ have opposite signs, and a parabola if either λ₁ or λ₂ is zero. The three cases correspond to λ₁λ₂ > 0, λ₁λ₂ < 0, and λ₁λ₂ = 0. We illustrate with some specific examples.

EXAMPLE 1. 2x² + 4xy + 5y² + 4x + 13y − ¼ = 0. We rewrite this as

(5.12)  2x₁² + 4x₁x₂ + 5x₂² + 4x₁ + 13x₂ − ¼ = 0.

The quadratic part is the form 2x₁² + 4x₁x₂ + 5x₂² treated in the foregoing section. Its matrix is [2, 2; 2, 5]; the eigenvalues are λ₁ = 1, λ₂ = 6, and an orthonormal set of eigenvectors is u₁ = t(2, −1), u₂ = t(1, 2), where t = 1/√5. An orthogonal diagonalizing matrix is

C = t [2, 1; −1, 2],

which reduces the quadratic part to y₁² + 6y₂². To determine the effect on the linear part we write the equation of rotation X = YC^t and obtain

x₁ = (1/√5)(2y₁ + y₂),  x₂ = (1/√5)(−y₁ + 2y₂).

Therefore the linear part 4x₁ + 13x₂ is transformed to

(4/√5)(2y₁ + y₂) + (13/√5)(−y₁ + 2y₂) = −√5 y₁ + 6√5 y₂,

and the transformed Cartesian equation becomes

y₁² + 6y₂² − √5 y₁ + 6√5 y₂ − ¼ = 0.

By completing the squares in y₁ and y₂ we rewrite this as follows:

(y₁ − ½√5)² + 6(y₂ + ½√5)² = 9.

This is the equation of an ellipse with its center at the point (½√5, −½√5) in the y₁y₂-system. The positive directions of the y₁ and y₂ axes are determined by the eigenvectors u₁ and u₂, as indicated in Figure 5.2. We can simplify the equation further by writing

z₁ = y₁ − ½√5,  z₂ = y₂ + ½√5.

Geometrically, this is the same as introducing a new system of coordinate axes parallel to the y₁y₂ axes but with the new origin at the center of the ellipse. In the z₁z₂-system the equation of the ellipse is simply

z₁² + 6z₂² = 9,  or  z₁²/9 + z₂²/(3/2) = 1.

The ellipse and all three coordinate systems are shown in Figure 5.2.

[FIGURE 5.2  Rotation followed by a translation.]

EXAMPLE 2. 2x² − 4xy − y² − 4x + 10y − 13 = 0. We rewrite this as

2x₁² − 4x₁x₂ − x₂² − 4x₁ + 10x₂ − 13 = 0.

The quadratic part is XAX^t, where A = [2, −2; −2, −1]. The eigenvalues are λ₁ = 3, λ₂ = −2. An orthonormal set of eigenvectors is u₁ = t(2, −1), u₂ = t(1, 2), where t = 1/√5. An orthogonal diagonalizing matrix is C = t [2, 1; −1, 2]. The equation of rotation X = YC^t gives us

x₁ = (1/√5)(2y₁ + y₂),  x₂ = (1/√5)(−y₁ + 2y₂).

Therefore the transformed equation becomes

3y₁² − 2y₂² − (18/√5)y₁ + (16/√5)y₂ − 13 = 0.

By completing the squares in y₁ and y₂ we obtain the equation

3(y₁ − 3/√5)² − 2(y₂ − 4/√5)² = 12,

which represents a hyperbola with its center at (3/√5, 4/√5) in the y₁y₂-system. The translation z₁ = y₁ − 3/√5, z₂ = y₂ − 4/√5 simplifies this equation further to

3z₁² − 2z₂² = 12,  or  z₁²/4 − z₂²/6 = 1.

The hyperbola is shown in Figure 5.3(a). The eigenvectors u₁ and u₂ determine the directions of the positive y₁ and y₂ axes.

EXAMPLE 3. 9x² + 24xy + 16y² − 20x + 15y = 0. We rewrite this as

9x₁² + 24x₁x₂ + 16x₂² − 20x₁ + 15x₂ = 0.

The symmetric matrix for the quadratic part is A = [9, 12; 12, 16]. Its eigenvalues are λ₁ = 25, λ₂ = 0. An orthonormal set of eigenvectors is u₁ = ⅕(3, 4), u₂ = ⅕(−4, 3). An orthogonal diagonalizing matrix is C = ⅕[3, −4; 4, 3]. The equation of rotation X = YC^t gives us

x₁ = ⅕(3y₁ − 4y₂),  x₂ = ⅕(4y₁ + 3y₂).

Therefore the transformed Cartesian equation becomes

25y₁² − 4(3y₁ − 4y₂) + 3(4y₁ + 3y₂) = 0.

This simplifies to y₁² + y₂ = 0, the Cartesian equation of a parabola with its vertex at the origin. The parabola is shown in Figure 5.3(b).

[FIGURE 5.3  The curves in Examples 2 and 3: (a) Hyperbola: 3z₁² − 2z₂² = 12; (b) Parabola: y₁² + y₂ = 0.]

EXAMPLE 4. Degenerate cases. A knowledge of the eigenvalues alone does not reveal whether the Cartesian equation represents a degenerate conic section. For example, the three equations x² + 2y² = 1, x² + 2y² = 0, and x² + 2y² = −1 all have the same eigenvalues; the first represents a nondegenerate ellipse, the second is satisfied only by (x, y) = (0, 0), and the third represents the empty set. The last two can be regarded as degenerate cases of the ellipse. The graph of the equation y² − 1 = 0 consists of the two parallel lines y = 1 and y = −1. These can be regarded as degenerate cases of the parabola. The equation x² − 4y² = 0 represents two intersecting lines, since it is satisfied if either x − 2y = 0 or x + 2y = 0. This can be regarded as a degenerate case of the hyperbola.

However, if the Cartesian equation ax² + bxy + cy² + dx + ey + f = 0 represents a nondegenerate conic section, then the type of conic can be determined quite easily. The characteristic polynomial of the matrix A of the quadratic form ax² + bxy + cy² is

det [λ − a, −b/2; −b/2, λ − c] = λ² − (a + c)λ + (ac − ¼b²) = (λ − λ₁)(λ − λ₂).

Therefore the product of the eigenvalues is

λ₁λ₂ = ac − ¼b² = ¼(4ac − b²).

Since the type of conic is determined by the algebraic sign of the product λ₁λ₂, we see that the conic is an ellipse, hyperbola, or parabola, according as 4ac − b² is positive, negative, or zero. The number 4ac − b² is called the discriminant of the quadratic form ax² + bxy + cy². In Examples 1, 2 and 3 the discriminant has the values 24, −24, and 0, respectively.
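The discriminant test is a one-line classifier. Here is a minimal sketch (the function name is mine), checked against the three worked examples above:

```python
def conic_type(a, b, c):
    # Classify the nondegenerate conic a x^2 + b xy + c y^2 + ... = 0
    # by the sign of the discriminant 4ac - b^2 (i.e. of the eigenvalue product).
    disc = 4 * a * c - b * b
    if disc > 0:
        return "ellipse"
    if disc < 0:
        return "hyperbola"
    return "parabola"

print(conic_type(2, 4, 5))    # Example 1: discriminant 24  -> ellipse
print(conic_type(2, -4, -1))  # Example 2: discriminant -24 -> hyperbola
print(conic_type(9, 24, 16))  # Example 3: discriminant 0   -> parabola
```

Note that, as Example 4 warns, the classifier assumes the conic is nondegenerate; a degenerate case must be detected separately.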
5.15 Exercises

In each of Exercises 1 through 7, find (a) a symmetric matrix A for the quadratic form; (b) the eigenvalues of A; (c) an orthonormal set of eigenvectors; (d) an orthogonal diagonalizing matrix C.

1. x₁x₂.
2. 4x₁² + 4x₁x₂ + x₂².
3. x₁² + 2x₁x₂ − x₂².
4. 34x₁² − 24x₁x₂ + 41x₂².
5. x₁² + x₁x₂ + x₁x₃ + x₂x₃.
6. 2x₁² + 4x₁x₃ + x₂² − x₃².
7. 3x₁² + 4x₁x₂ + 8x₁x₃ + 4x₂x₃ + 3x₃².

In each of Exercises 8 through 18, identify and make a sketch of the conic section represented by the Cartesian equation.

8. y² − 2xy + 2x² = 9.
9. y² + 2xy + 5x = 0.
10. y² − 2xy + x² − 5x = 0.
11. 5x² − 4xy + 2y² − 6 = 0.
12. 19x² + 4xy + 16y² − 212x + 104y = 356.
13. 9x² + 24xy + 16y² − 52x + 14y = 6.
14. 5x² + 6xy + 5y² − 2 = 0.
15. x² + 2xy + y² − 2x + 2y + 3 = 0.
16. 2x² + 4xy + 5y² − 2x − y − 4 = 0.
17. x² + 4xy − 2y² − 12 = 0.
18. xy + y − 2 = 0.

19. For what value (or values) of c will the graph of the Cartesian equation 2xy − 4x + 7y + c = 0 be a pair of lines?
20. If the equation ax² + bxy + cy² = 1 represents an ellipse, prove that the area of the region it bounds is 2π/√(4ac − b²). This gives a geometric meaning to the discriminant 4ac − b².
★5.16 Eigenvalues of a symmetric transformation obtained as values of its quadratic form

Now we drop the requirement that V be finite-dimensional, and we find a relation between the eigenvalues of a symmetric operator and the values of its quadratic form Q(x) = (T(x), x). Suppose x is an eigenvector with norm 1 belonging to an eigenvalue λ. Then T(x) = λx, so we have

Q(x) = (T(x), x) = (λx, x) = λ(x, x) = λ,

since (x, x) = 1. The set of all x in V satisfying (x, x) = 1 is called the unit sphere of V. Thus the eigenvalues of a symmetric transformation on a real Euclidean space V are to be found among the values that Q takes on the unit sphere in V.

EXAMPLE. Let V = V₂(R) with the usual dot product as inner product, and let T be the symmetric transformation with matrix

A = [4, 0; 0, 8].

Then the quadratic form is

Q(x) = Σ_{i=1}^2 Σ_{j=1}^2 a_ij x_i x_j = 4x₁² + 8x₂².

The eigenvalues of T are λ₁ = 4, λ₂ = 8. It is easy to see that these eigenvalues are, respectively, the minimum and maximum values which Q takes on the unit circle x₁² + x₂² = 1. In fact, on this circle we have

Q(x) = 4x₁² + 8x₂² = 4 + 4x₂²,  where x₁² + x₂² = 1.

This has its smallest value, 4, at the points (±1, 0), and its largest value, 8, at (0, ±1). The points on the unit circle satisfying Q(x) = 4 or Q(x) = 8 are eigenvectors of T.
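The example above can be checked by sampling Q on the unit circle; the sampled minimum and maximum reproduce the eigenvalues 4 and 8 (a small sketch of mine, not from the text):

```python
import math

def Q(x1, x2):
    # quadratic form of the matrix A = [[4, 0], [0, 8]]
    return 4 * x1 * x1 + 8 * x2 * x2

# sample Q at 1000 equally spaced points of the unit circle
values = [Q(math.cos(t), math.sin(t))
          for t in (2 * math.pi * k / 1000 for k in range(1000))]

print(min(values), max(values))  # 4.0 and 8.0, the eigenvalues of A
```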
… Assume the quadratic form Q does not change sign on V, and suppose that Q(x) = 0 for some x in V; we prove that T(x) = 0. Choose any element y in V and any real t. Using the symmetry of T and the linearity of the inner product we have

Q(x + ty) = (T(x) + tT(y), x + ty) = Q(x) + t(T(x), y) + t(T(y), x) + t²Q(y) = at + bt²,

where a = 2(T(x), y) and b = Q(y). Since Q does not change sign on V, the polynomial p(t) = at + bt² is nonnegative (or nonpositive) for all real t, so it has an extremum at t = 0. Hence p′(0) = a = 2(T(x), y) = 0, so (T(x), y) = 0. Since y was arbitrary, T(x) = 0.
THEOREM 5.14. Let T: V → V be a symmetric transformation on a real Euclidean space V, with quadratic form Q(x) = (T(x), x). Assume Q has a minimum (or maximum) on the unit sphere at a point u, with (u, u) = 1. Then u is an eigenvector of T; the corresponding eigenvalue is Q(u), the extreme value of Q on the unit sphere.

Proof. Assume Q has a minimum at u. Then we have

(5.14)  Q(x) ≥ Q(u)  for all x with (x, x) = 1.

Let λ = Q(u). If (x, x) = 1 we have Q(u) = λ = λ(x, x) = (λx, x), so inequality (5.14) can be written as

(5.15)  (T(x), x) ≥ (λx, x),  provided (x, x) = 1.

Now we prove that (5.15) is valid for all x in V. Suppose ‖x‖ = a. If a = 0 both members of (5.15) are zero. If a ≠ 0, then x = ay, where ‖y‖ = 1. But (T(y), y) ≥ (λy, y) since (y, y) = 1. Multiplying both members of this inequality by a² we get (5.15) for x = ay, since (T(x), x) = (T(ay), ay) = a²(T(y), y) and (λx, x) = a²(λy, y).

Since (T(x), x) − (λx, x) = (T(x) − λx, x) ≥ 0, we can rewrite inequality (5.15) in the form

(5.16)  (S(x), x) ≥ 0,  where S = T − λI.

Inequality (5.16) is valid for all x in V; when x = u we have equality in (5.15) and hence also in (5.16). The linear transformation S is symmetric. Inequality (5.16) states that the quadratic form Q₁ given by Q₁(x) = (S(x), x) is nonnegative on V. When x = u we have Q₁(u) = 0. Therefore, by Theorem 5.13 we must have S(u) = 0. In other words, T(u) = λu, so u is an eigenvector for T, and λ = Q(u) is the corresponding eigenvalue. This completes the proof if Q has a minimum at u. If there is a maximum at u, all the inequalities in the foregoing proof are reversed and we apply Theorem 5.13 to the nonnegative quadratic form −Q₁.

★5.18 The finite-dimensional case

Suppose now that dim V = n. Then T has n real eigenvalues which can be arranged in increasing order, say λ₁ ≤ λ₂ ≤ ⋯ ≤ λₙ. …

THEOREM 5.16. Let T: V → V be a unitary transformation. Then:
(a) Every eigenvalue λ of T satisfies |λ| = 1.
(b) Eigenvectors belonging to distinct eigenvalues are orthogonal.
(c) If dim V = n, there exist n eigenvectors u₁, ..., uₙ of T which form an orthonormal basis for V; the matrix of T relative to this basis is the diagonal matrix Λ = diag (λ₁, ..., λₙ), where λₖ is the eigenvalue belonging to uₖ.

Proof. To prove (a), let x be an eigenvector belonging to λ. Then x ≠ 0 and T(x) = λx. Since T preserves inner products we have

(x, x) = (T(x), T(x)) = (λx, λx) = λλ̄(x, x) = |λ|²(x, x).

Since (x, x) > 0, this implies |λ| = 1.

To prove (b), write T(x) = λx and T(y) = μy, where x and y are eigenvectors belonging to distinct eigenvalues λ and μ, and compute the inner product (T(x), T(y)) in two ways. Since T is unitary we have

(T(x), T(y)) = (x, y).

We also have

(T(x), T(y)) = (λx, μy) = λμ̄(x, y).

Therefore λμ̄(x, y) = (x, y), so (x, y) = 0 unless λμ̄ = 1. But λλ̄ = 1 by (a), so if we had λμ̄ = 1 we would also have λμ̄ = λλ̄, μ = λ, which contradicts the assumption that λ and μ are distinct. Therefore λμ̄ ≠ 1 and (x, y) = 0.

Part (c) is proved by induction on n in much the same way that we proved Theorem 5.4, the corresponding result for Hermitian operators. The only change required is in that part of the proof which shows that T maps S⊥ into itself, where

S⊥ = {x | x ∈ V, (x, u₁) = 0}.

Here u₁ is an eigenvector of T with eigenvalue λ₁. From the equation T(u₁) = λ₁u₁ we find

u₁ = λ₁⁻¹ T(u₁).

Now choose any x in S⊥ and note that

(T(x), u₁) = (T(x), λ₁⁻¹T(u₁)) = λ₁(T(x), T(u₁)) = λ₁(x, u₁) = 0,

since λ₁λ̄₁ = |λ₁|² = 1 implies λ̄₁⁻¹ = λ₁. Hence T(x) ∈ S⊥ if x ∈ S⊥, so T maps S⊥ into itself. The rest of the proof is identical with that of Theorem 5.4, so we shall not repeat the details.
The next two theorems describe properties of unitary transformations on a finite-dimensional space. We give only a brief outline of the proofs.

THEOREM 5.17. Assume dim V = n and let E = (e₁, ..., eₙ) be a fixed basis for V. Then a linear transformation T: V → V is unitary if and only if

(5.18)  (T(eᵢ), T(eⱼ)) = (eᵢ, eⱼ)  for all i and j.

In particular, if E is orthonormal, then T is unitary if and only if T maps E onto an orthonormal basis.

Sketch of proof. Write x = Σ_{i=1}^n xᵢeᵢ and y = Σ_{j=1}^n yⱼeⱼ. Then we have

(x, y) = (Σ_{i=1}^n xᵢeᵢ, Σ_{j=1}^n yⱼeⱼ)  and  (T(x), T(y)) = (Σ_{i=1}^n xᵢT(eᵢ), Σ_{j=1}^n yⱼT(eⱼ)).

Now compare (x, y) with (T(x), T(y)).

THEOREM 5.18. Assume dim V = n, and let (e₁, ..., eₙ) be an orthonormal basis for V. Let A = (aᵢⱼ) be the matrix representation of a linear transformation T: V → V relative to this basis. Then T is unitary if and only if A is unitary, that is, if and only if

(5.20)  A*A = I.

Sketch of proof. Since (eᵢ, eⱼ) is the ij-entry of the identity matrix, and since A is the matrix of T, we have T(eᵢ) = Σ_{k=1}^n aₖᵢeₖ and T(eⱼ) = Σ_{r=1}^n aᵣⱼeᵣ, so

(T(eᵢ), T(eⱼ)) = (Σ_{k=1}^n aₖᵢeₖ, Σ_{r=1}^n aᵣⱼeᵣ) = Σ_{k=1}^n Σ_{r=1}^n aₖᵢāᵣⱼ(eₖ, eᵣ) = Σ_{k=1}^n aₖᵢāₖⱼ.

Now compare this with the ij-entry of the product in (5.20) and use Theorem 5.17.
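Theorem 5.18 and the inner-product-preserving property are easy to verify for a concrete matrix. The sketch below uses a 2 × 2 unitary matrix of my own choosing, (1/√2)[1, i; i, 1]:

```python
import math

s = 1 / math.sqrt(2)
A = [[s, s * 1j], [s * 1j, s]]  # a unitary 2x2 matrix (illustrative choice)

def conj_transpose(M):
    return [[M[j][i].conjugate() for j in range(2)] for i in range(2)]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

prod = matmul(conj_transpose(A), A)  # should equal I, per (5.20)
assert all(abs(prod[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))

def apply(M, v):
    return [sum(M[i][k] * v[k] for k in range(2)) for i in range(2)]

def inner(u, v):
    # complex inner product (u, v) = sum of u_k * conjugate(v_k)
    return sum(a * b.conjugate() for a, b in zip(u, v))

x, y = [1, 2j], [3 - 1j, 0.5]
assert abs(inner(apply(A, x), apply(A, y)) - inner(x, y)) < 1e-12
print("A*A = I, and (T(x), T(y)) = (x, y)")
```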
THEOREM 5.19. Every unitary matrix A has the following properties:
(a) A is nonsingular and A⁻¹ = A*.
(b) Each of A^t, Ā, and A* is a unitary matrix.
(c) The eigenvalues of A are complex numbers of absolute value 1.
(d) |det A| = 1; if A is real, then det A = ±1.

The proof of Theorem 5.19 is left as an exercise for the reader.

5.20 Exercises

1. (a) Let T: V → V be the transformation given by T(x) = cx, where c is a fixed scalar. Prove that T is unitary if and only if |c| = 1.
(b) If V is one-dimensional, prove that the only unitary transformations on V are those described in (a). In particular, if V is a real one-dimensional space, there are only two orthogonal transformations, T(x) = x and T(x) = −x.
2. Prove each of the following statements about a real orthogonal n × n matrix A.
(a) If λ is a real eigenvalue of A, then λ = 1 or λ = −1.
(b) If λ is a complex eigenvalue of A, then the complex conjugate λ̄ is also an eigenvalue of A. In other words, the nonreal eigenvalues of A occur in conjugate pairs.
(c) If n is odd, then A has at least one real eigenvalue.
3. Let V be a real Euclidean space of dimension n. An orthogonal transformation T: V → V with determinant 1 is called a rotation. If n is odd, prove that 1 is an eigenvalue for T. This shows that every rotation of an odd-dimensional space has a fixed axis. [Hint: Use Exercise 2.]
4. Given a real orthogonal matrix A with −1 as an eigenvalue of multiplicity k. Prove that det A = (−1)^k.
5. If T is linear and norm-preserving, prove that T is unitary.
6. If T: V → V is both unitary and Hermitian, prove that T² = I.
7. Let (e₁, ..., eₙ) and (u₁, ..., uₙ) be two orthonormal bases for a Euclidean space V. Prove that there is a unitary transformation T which maps one of these bases onto the other.
8. Find a real a such that the following matrix is unitary: …
9. If A is a skew-Hermitian matrix, prove that both I − A and I + A are nonsingular and that (I − A)(I + A)⁻¹ is unitary.
10. If A is a unitary matrix and if I + A is nonsingular, prove that (I − A)(I + A)⁻¹ is skew-Hermitian.
11. If A is Hermitian, prove that A − iI is nonsingular and that (A − iI)⁻¹(A + iI) is unitary.
12. Prove that any unitary matrix can be diagonalized by a unitary matrix.
13. A square matrix A is called normal if AA* = A*A. Determine which of the following types of matrices are normal.
(a) Hermitian matrices. (b) Skew-Hermitian matrices. (c) Symmetric matrices. (d) Skew-symmetric matrices. (e) Unitary matrices. (f) Orthogonal matrices.
14. If A is a normal matrix (AA* = A*A) and if U is a unitary matrix, prove that U*AU is normal.
6
LINEAR DIFFERENTIAL EQUATIONS

6.1 Historical introduction

The history of differential equations began in the 17th century when Newton, Leibniz, and the Bernoullis solved some simple differential equations of the first and second order arising from problems in geometry and mechanics. These early discoveries, beginning about 1690, seemed to suggest that the solutions of all differential equations based on geometric and physical problems could be expressed in terms of the familiar functions of elementary calculus. Therefore, much of the early work was aimed at developing ingenious techniques for solving differential equations by elementary means, that is to say, by addition, subtraction, multiplication, division, composition, and integration, applied only a finite number of times to the familiar functions of calculus.

Special methods such as separation of variables and the use of integrating factors were devised more or less haphazardly before the end of the 17th century. During the 18th century, more systematic procedures were developed, primarily by Euler, Lagrange, and Laplace. It soon became apparent that relatively few differential equations could be solved by elementary means. Little by little, mathematicians began to realize that it was hopeless to try to discover methods for solving all differential equations. Instead, they found it more fruitful to ask whether or not a given differential equation has any solution at all and, when it has, to try to deduce properties of the solution from the differential equation itself. Within this framework, mathematicians began to think of differential equations as new sources of functions.

An important phase in the theory developed early in the 19th century, paralleling the general trend toward a more rigorous approach to the calculus. In the 1820's, Cauchy obtained the first "existence theorem" for differential equations. He proved that every first-order equation of the form

y′ = f(x, y)

has a solution whenever the right member, f(x, y), satisfies certain general conditions. One important example is the Ricatti equation

y′ = P(x)y² + Q(x)y + R(x),

where P, Q, and R are given functions. Cauchy's work implies the existence of a solution of the Ricatti equation in any open interval (−r, r) about the origin, provided P, Q, and R have power-series expansions in (−r, r). In 1841 Joseph Liouville (1809-1882) showed that in some cases this solution cannot be obtained by elementary means.

Experience has shown that it is difficult to obtain results of much generality about solutions of differential equations, except for a few types. Among these are the so-called linear differential equations, which occur in a great variety of scientific problems. Some of the principal results concerning these equations were discussed in Volume I. The next section gives a review of results concerning linear equations of first order and simple types of second order with constant coefficients.

6.2 Review of results concerning linear equations of first and second orders

A linear differential equation of first order is one of the form

(6.1)  y′ + P(x)y = Q(x),

where P and Q are given functions. In Volume I we proved an existence-uniqueness theorem for this equation (Theorem 8.3) which we restate here.

THEOREM 6.1. Assume P and Q are continuous on an open interval J. Choose any point a in J and let b be any real number. Then there is one and only one function y = f(x) which satisfies the differential equation (6.1) and the initial condition f(a) = b. This function is given by the explicit formula

(6.2)  f(x) = b e^{−A(x)} + e^{−A(x)} ∫_a^x Q(t)e^{A(t)} dt,

where A(x) = ∫_a^x P(t) dt.

Linear equations of second order are those of the form

P₀(x)y″ + P₁(x)y′ + P₂(x)y = R(x).
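The explicit formula (6.2) can be exercised numerically. The sketch below uses an illustrative choice of mine, P(x) = Q(x) = 1 with a = 0 and b = 0, for which A(x) = x and the closed-form solution is f(x) = 1 − e^{−x}:

```python
import math

def f(x, steps=20000):
    # Formula (6.2) with P = Q = 1, a = 0, b = 0, so A(x) = x.
    A = x
    h = x / steps
    # trapezoidal approximation of the integral of Q(t) e^{A(t)} from 0 to x
    integral = sum((math.exp(i * h) + math.exp((i + 1) * h)) / 2 * h
                   for i in range(steps))
    return 0 * math.exp(-A) + math.exp(-A) * integral  # b e^{-A} + e^{-A} * integral

x = 1.5
assert abs(f(x) - (1 - math.exp(-x))) < 1e-8
print("formula (6.2) reproduces the known solution 1 - e^{-x}")
```

Substituting y = 1 − e^{−x} into y′ + y = 1 confirms directly that this is the solution with y(0) = 0.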
If the coefficients Po, PI' P2 and the righthand member R are continuous on some interval if on and is never zero an theorem existence J, J, (discussed in Section 6.5) guarantees Po J. Nevertheless, there is no general formula that solutions always exist over the interval to (6.2) for expressing these solutions in terms of Po, PI' P 2 , and R. Thus, even analogous is far from complete, except in in this of the (6.1), theory relatively simple generalization if If R is and cases. the coefficients are constants zero, all the solutions can be special and in of functions determined terms explicitly polynomials, exponential trigonometric by I in was Volume the following theorem which (Theorem 8.7).) proved
THEOREM
(6.3)
6.2.
Consider
the differential y\" +
equation)
ay' +
by
=
0,)))
Linear
144)
 00, (
interval
constants. Let d = a2
given real
b are
a and
where
+ (0) has
equations)
differential

y = eaX/2[CIUl(X) +
(6.4))
where C1 and C2 are algebraic sign of d
(a) If d = (b) If d > (c) If d
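Formula (6.2) can be checked numerically. The sketch below (my own illustration, not from the text) evaluates (6.2) by the trapezoidal rule for the hypothetical choice P(x) = 1, Q(x) = x, a = 0, b = 1, whose exact solution is y = x - 1 + 2e^{-x}.

```python
import math

def solve_linear_first_order(P, Q, a, b, x, steps=4000):
    """Evaluate formula (6.2): f(x) = b e^{-A(x)} + e^{-A(x)} * integral_a^x Q(t) e^{A(t)} dt,
    where A(x) = integral_a^x P(t) dt, using the trapezoidal rule for both integrals."""
    h = (x - a) / steps
    A = 0.0          # running value of A(t)
    integral = 0.0   # running value of integral of Q(s) e^{A(s)}
    prev_P, prev_g = P(a), Q(a) * math.exp(0.0)
    for i in range(1, steps + 1):
        t = a + i * h
        A += 0.5 * h * (prev_P + P(t))
        g = Q(t) * math.exp(A)
        integral += 0.5 * h * (prev_g + g)
        prev_P, prev_g = P(t), g
    return b * math.exp(-A) + math.exp(-A) * integral

# Hypothetical test problem: y' + y = x, y(0) = 1, with exact solution y = x - 1 + 2e^{-x}.
approx = solve_linear_first_order(lambda t: 1.0, lambda t: t, 0.0, 1.0, 2.0)
exact = 2.0 - 1.0 + 2.0 * math.exp(-2.0)
```

The quadrature error here is O(h²), so a few thousand steps already match the closed form to several digits.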
We seek a solution of the homogeneous linear equation given by a power series

f(x) = Σ_{n=0}^∞ a_n (x - x0)^n,

convergent for |x - x0| < r. If the coefficients P1 and P2 in the differential equation are given by power series in (x - x0), the coefficients a_n must satisfy a recursion relation, which we now derive. Differentiating term by term we obtain

y' = Σ_{n=1}^∞ n a_n (x - x0)^{n-1} = Σ_{n=0}^∞ (n + 1) a_{n+1} (x - x0)^n

and

y'' = Σ_{n=2}^∞ n(n - 1) a_n (x - x0)^{n-2} = Σ_{n=0}^∞ (n + 2)(n + 1) a_{n+2} (x - x0)^n.
The products P1(x)y' and P2(x)y are given by the power series

P1(x)y' = Σ_{n=0}^∞ ( Σ_{k=0}^n (k + 1) a_{k+1} b_{n-k} ) (x - x0)^n

and

P2(x)y = Σ_{n=0}^∞ ( Σ_{k=0}^n a_k c_{n-k} ) (x - x0)^n,

where b_n and c_n denote the coefficients in the power series for P1 and P2. When these series are substituted in the differential equation, it will be satisfied if the coefficients a_n obey the recursion formula

(n + 2)(n + 1) a_{n+2} = - Σ_{k=0}^n [(k + 1) a_{k+1} b_{n-k} + a_k c_{n-k}].

To prove convergence, choose M > 0 and t with 0 < t < r such that |b_k| ≤ M t^{-k} and |c_k| ≤ M t^{-k} for all k ≥ 0, and define a majorant sequence A_n by A_0 = |a_0|, A_1 = |a_1|, and the recursion formula

(6.34)  (n + 2)(n + 1) A_{n+2} = M(1 + t) Σ_{k=0}^{n+1} (k + 1) A_k t^{k-n-1};

an induction on n shows that |a_n| ≤ A_n for all n ≥ 0. Replacing n by n - 1 in (6.34) and subtracting t^{-1} times the resulting equation from (6.34) we find that

(n + 2)(n + 1) A_{n+2} = [t^{-1}(n + 1)n + M(1 + t)(n + 2)] A_{n+1}.

Therefore A_{n+2}/A_{n+1} → 1/t as n → ∞. Now we use the ratio test: since

A_{n+2}|x - x0|^{n+2} / (A_{n+1}|x - x0|^{n+1}) → |x - x0|/t,

the series Σ A_n |x - x0|^n converges if |x - x0| < t. The series Σ a_n (x - x0)^n is dominated by the series Σ A_n |x - x0|^n, so it also converges if |x - x0| < t.
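The recursion just derived is directly computable. A minimal sketch (my own, not from the text): with P1 = 0 and P2 = 1 the equation is y'' + y = 0, and with a0 = 1, a1 = 0 the recursion should reproduce the coefficients of cos x.

```python
import math

def series_coeffs(b, c, a0, a1, N):
    """Coefficients a_n of a power-series solution of y'' + P1(x)y' + P2(x)y = 0
    about x0, where P1(x) = sum b_n (x-x0)^n and P2(x) = sum c_n (x-x0)^n, via the
    recursion (n+2)(n+1) a_{n+2} = -sum_{k=0}^{n} [(k+1) a_{k+1} b_{n-k} + a_k c_{n-k}]."""
    a = [0.0] * (N + 1)
    a[0], a[1] = a0, a1
    for n in range(N - 1):
        s = sum((k + 1) * a[k + 1] * b[n - k] + a[k] * c[n - k] for k in range(n + 1))
        a[n + 2] = -s / ((n + 2) * (n + 1))
    return a

# y'' + y = 0 (P1 = 0, P2 = 1), y(0) = 1, y'(0) = 0: the solution is cos x.
N = 20
b = [0.0] * N
c = [1.0] + [0.0] * (N - 1)
a = series_coeffs(b, c, 1.0, 0.0, N)
approx = sum(a[n] * 1.0**n for n in range(N + 1))   # partial sum at x = 1
```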
For each n ≥ 0 we have the Legendre polynomial P_n(x) of degree n, which satisfies

∫_{-1}^{1} [P_n(x)]² dx = 2/(2n + 1).

If f is a polynomial of degree n, it can be expressed as a linear combination of the Legendre polynomials P0, P1, ..., P_n. In fact, we have

f(x) = Σ_{k=0}^{n} c_k P_k(x),  where  c_k = (2k + 1)/2 ∫_{-1}^{1} f(x) P_k(x) dx.

From the orthogonality relation it follows that

∫_{-1}^{1} g(x) P_n(x) dx = 0

for every polynomial g of degree less than n. This property can be used to prove that the Legendre polynomial P_n has n distinct real zeros and that they all lie in the open interval (-1, 1).
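The normalization P_n(1) = 1 and the norm formula above are easy to verify numerically. A short sketch (my own, not from the text), using Bonnet's standard recursion for P_n:

```python
import math

def legendre(n, x):
    """P_n(x) via Bonnet's recursion (k+1)P_{k+1} = (2k+1)x P_k - k P_{k-1}."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def norm_sq(n, steps=2000):
    """Trapezoidal estimate of the integral of P_n(x)^2 over [-1, 1];
    it should equal 2/(2n+1)."""
    h = 2.0 / steps
    total = 0.5 * (legendre(n, -1.0) ** 2 + legendre(n, 1.0) ** 2)
    for i in range(1, steps):
        total += legendre(n, -1.0 + i * h) ** 2
    return total * h
```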
6.21 Exercises

1. The Legendre equation (6.35) with α = 0 has the polynomial solution u1(x) = 1 and a solution u2, not a polynomial, given by the series in Equation (6.41).
(a) Show that the sum of the series for u2 is given by

u2(x) = ½ log ((1 + x)/(1 - x))  for |x| < 1.

(b) Verify directly that the function u2 in part (a) is a solution of the Legendre equation when α = 0.

2. Show that the function f defined by the equation

f(x) = 1 - (x/2) log ((1 + x)/(1 - x))

for |x| < 1 satisfies the Legendre equation (6.35) with α = 1. Express this function as a linear combination of the solutions u1 and u2 given in Equations (6.38) and (6.39).

3. The Legendre equation (6.35) can be written in the form

[(1 - x²)y']' + α(α + 1)y = 0.

(a) If a, b, c are constants with a > b and 4c + 1 > 0, show that a differential equation of the type

[(x - a)(x - b)y']' + cy = 0

can be transformed to a Legendre equation by a change of variable of the form x = At + B, with A > 0. Determine A and B in terms of a and b.
(b) Use the method suggested in part (a) to transform the equation

(x² - x)y'' + (2x - 1)y' + 2y = 0.

4. Find two independent power-series solutions of the Hermite equation

y'' - 2xy' + 2αy = 0

on an interval of the form (-r, r). Show that one of these solutions is a polynomial when α is a nonnegative integer.

5. Find a power-series solution of the differential equation

xy'' + (3 + x³)y' + 3x²y = 0

valid on an interval of the form (-r, r).

6. The differential equation ... has a power-series solution valid for all x. Find a second solution of the form y = x^c Σ_{n=0}^∞ a_n x^n for a suitable exponent c.

7. Given functions A and B analytic on an interval (x0 - r, x0 + r), say

A(x) = Σ_{n=0}^∞ a_n (x - x0)^n,  B(x) = Σ_{n=0}^∞ b_n (x - x0)^n.

It can be shown that the product C(x) = A(x)B(x) is also analytic on (x0 - r, x0 + r). This exercise shows that C has the power-series expansion

C(x) = Σ_{n=0}^∞ c_n (x - x0)^n,  where  c_n = Σ_{k=0}^n a_k b_{n-k}.
(a) Use Leibniz's rule for the nth derivative of a product to show that the nth derivative of C is given by

C^{(n)}(x) = Σ_{k=0}^n \binom{n}{k} A^{(k)}(x) B^{(n-k)}(x).

(b) Now use the fact that A^{(k)}(x0) = k! a_k and B^{(n-k)}(x0) = (n - k)! b_{n-k} to obtain

C^{(n)}(x0) = n! Σ_{k=0}^n a_k b_{n-k}.

Since C^{(n)}(x0) = n! c_n, this proves the required formula for c_n.

In Exercises 8 through 14, P_n(x) denotes the Legendre polynomial of degree n. These exercises outline proofs of the properties of the Legendre polynomials described in Section 6.20.

8. (a) Use Rodrigues' formula to show that

P_n(x) = (1/2^n) Σ_{k=0}^n \binom{n}{k}² (x - 1)^{n-k}(x + 1)^k,

and deduce that P_n(1) = 1 and P_n(-1) = (-1)^n.
(b) Prove that P_n(x) is the only polynomial solution of Legendre's equation (with α = n) having the value 1 when x = 1.

9. (a) Use the differential equations satisfied by P_n and P_m to show that

[(1 - x²)(P_n P'_m - P'_n P_m)]' = [n(n + 1) - m(m + 1)] P_n P_m.

(b) If m ≠ n, integrate the relation in (a) from -1 to 1 to give an alternate proof of the orthogonality relation

∫_{-1}^{1} P_n(x) P_m(x) dx = 0.

10. (a) Let f(x) = (x² - 1)^n. Apply integration by parts repeatedly to the integral ∫_{-1}^{1} f^{(n)}(x) f^{(n)}(x) dx, using the fact that f^{(k)}(1) = f^{(k)}(-1) = 0 for k < n, to deduce from Rodrigues' formula that

∫_{-1}^{1} [P_n(x)]² dx = (2n)!/(2^{2n}(n!)²) ∫_{-1}^{1} (1 - x²)^n dx.

(b) The substitution x = cos t transforms the integral ∫_{-1}^{1} (1 - x²)^n dx to 2 ∫_0^{π/2} sin^{2n+1} t dt. Use this and the formula

∫_0^{π/2} sin^{2n+1} t dt = (2n(2n - 2) ··· 2) / ((2n + 1)(2n - 1) ··· 3)

to give an alternate proof of the formula ∫_{-1}^{1} [P_n(x)]² dx = 2/(2n + 1).

11. (a) Show that

P_n(x) = ((2n)!/(2^n (n!)²)) x^n + Q_n(x),

where Q_n(x) is a polynomial of degree less than n.
(b) Express the polynomial f(x) = x⁴ as a linear combination of P0, P1, P2, P3, and P4.
(c) Show that every polynomial f of degree n can be expressed as a linear combination of the Legendre polynomials P0, P1, ..., P_n.

12. (a) If f is a polynomial of degree n, write

f(x) = Σ_{k=0}^n c_k P_k(x).

[This is possible because of Exercise 11(c).] For a fixed m, 0 ≤ m ≤ n, multiply both sides of this equation by P_m(x) and integrate from -1 to 1. Use Exercises 9(b) and 10(b) to deduce the relation

c_m = (2m + 1)/2 ∫_{-1}^{1} f(x) P_m(x) dx.

13. Use Exercises 9 and 11 to show that

∫_{-1}^{1} g(x) P_n(x) dx = 0

for every polynomial g of degree less than n.

14. (a) Use Rolle's theorem to show that P_n cannot have any multiple zeros in the open interval (-1, 1). In other words, any zeros of P_n which lie in (-1, 1) must be simple zeros.
(b) Assume P_n has m zeros in the interval (-1, 1). If m = 0, let Q0(x) = 1. If m ≥ 1, let

Q_m(x) = (x - x1)(x - x2) ··· (x - x_m),

where x1, x2, ..., x_m are the m zeros of P_n in (-1, 1). Show that, at each point x in (-1, 1), Q_m(x) has the same sign as P_n(x).
(c) Use part (b), along with Exercise 13, to show that the inequality m < n leads to a contradiction. This shows that P_n has n distinct real zeros, all of which lie in the open interval (-1, 1).

15. (a) Show that the value of the integral ∫_{-1}^{1} P_n(x) P'_{n+1}(x) dx is independent of n.
(b) Evaluate the integral ∫_{-1}^{1} x P_n(x) P_{n-1}(x) dx.

6.22 The method of Frobenius
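The expansion coefficients c_k = (2k+1)/2 ∫ f P_k dx in Exercise 12 can be computed exactly for a polynomial f. A sketch (my own, not from the text) expanding f(x) = x⁴ as in Exercise 11(b), using exact rational arithmetic:

```python
from fractions import Fraction as F

def legendre_coeffs(n):
    """Coefficient list (lowest degree first) of P_n, via Bonnet's recursion
    (k+1)P_{k+1} = (2k+1)x P_k - k P_{k-1}, with exact rational arithmetic."""
    p_prev, p = [F(1)], [F(0), F(1)]
    if n == 0:
        return p_prev
    for k in range(1, n):
        shifted = [F(0)] + p                      # x * P_k
        nxt = [F(2 * k + 1) * c for c in shifted]
        for i, c in enumerate(p_prev):
            nxt[i] -= F(k) * c
        p_prev, p = p, [c / (k + 1) for c in nxt]
    return p

def integrate_poly(coeffs):
    """Integral over [-1, 1] of a polynomial given by its coefficient list."""
    total = F(0)
    for i, c in enumerate(coeffs):
        if i % 2 == 0:                            # odd powers integrate to 0
            total += c * F(2, i + 1)
    return total

def legendre_coefficient(f_coeffs, k):
    """c_k = (2k+1)/2 * integral_{-1}^{1} f(x) P_k(x) dx for a polynomial f."""
    pk = legendre_coeffs(k)
    prod = [F(0)] * (len(f_coeffs) + len(pk) - 1)
    for i, a in enumerate(f_coeffs):
        for j, b in enumerate(pk):
            prod[i + j] += a * b
    return F(2 * k + 1, 2) * integrate_poly(prod)

x4 = [F(0), F(0), F(0), F(0), F(1)]               # f(x) = x^4
coeffs = [legendre_coefficient(x4, k) for k in range(5)]
```

The result is x⁴ = (1/5)P0 + (4/7)P2 + (8/35)P4, with the odd coefficients zero.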
In Section 6.17 we learned how to find power-series solutions of the differential equation

(6.43)  y'' + P1(x)y' + P2(x)y = 0

valid near a point x0 where the coefficients P1 and P2 are analytic. If either P1 or P2 is not analytic near x0, power-series solutions valid near x0 may or may not exist. For example, suppose we try to find a power-series solution of the differential equation

(6.44)  x²y'' - y' - y = 0

near x0 = 0. If we assume that a solution y = Σ a_k x^k exists and substitute this series in the differential equation, we are led to the recursion formula

a_{n+1} = ((n² - n - 1)/(n + 1)) a_n.

Although this gives us a power series y = Σ a_k x^k which formally satisfies (6.44), the ratio test shows that this power series converges only for x = 0. Thus, there is no power-series solution of (6.44) valid in any open interval about x = 0. This example does not violate Theorem 6.13 because when Equation (6.44) is put in the form (6.43) we find that the coefficients P1 and P2 are given by

P1(x) = -1/x²  and  P2(x) = -1/x².

These functions do not have power-series expansions about the origin. The difficulty here is that the coefficient of y'' in (6.44) has the value 0 when x = 0; in other words, the differential equation has a singular point at x = 0.

A knowledge of the theory of functions of a complex variable is needed to appreciate the difficulties encountered in the investigation of differential equations near a singular point. However, some important special cases of equations with singular points can be treated by elementary methods. For example, suppose the differential equation in (6.43) is equivalent to an equation of the form
(6.45)  (x - x0)² y'' + (x - x0) P(x) y' + Q(x) y = 0,

where P and Q have power-series expansions in some open interval (x0 - r, x0 + r). In this case we say that x0 is a regular singular point of the equation. If we divide both sides of (6.45) by (x - x0)² the equation becomes

y'' + (P(x)/(x - x0)) y' + (Q(x)/(x - x0)²) y = 0

for x ≠ x0. If P(x0) ≠ 0, or Q(x0) ≠ 0, or if Q(x0) = 0 and Q'(x0) ≠ 0, either the coefficient of y' or the coefficient of y will not have a power-series expansion about the point x0, so Theorem 6.13 will not be applicable.

In 1873 the German mathematician Georg Frobenius (1849-1917) developed a useful method for treating such equations. We shall describe the theorem of Frobenius but we shall not present its proof.† In the next section we give the details of the proof for an important special case, the Bessel equation.

Frobenius' theorem splits into two parts, depending on the nature of the roots of the quadratic equation

(6.46)  t(t - 1) + P(x0) t + Q(x0) = 0.

This quadratic equation is called the indicial equation of the differential equation (6.45). The coefficients P(x0) and Q(x0) are the constant terms in the power-series expansions of P and Q. Let α1 and α2 denote the roots of the indicial equation. These roots may be real or complex, equal or distinct. The type of solution obtained by the Frobenius method depends on whether or not these roots differ by an integer.

† For a proof see E. Hille, Analysis, Vol. II, Blaisdell Publishing Co., 1966, or E. A. Coddington, An Introduction to Ordinary Differential Equations, Prentice-Hall, 1961.
THEOREM 6.14. FIRST CASE OF FROBENIUS' THEOREM. Let α1 and α2 be the roots of the indicial equation, and assume that α1 - α2 is not an integer. Then the differential equation (6.45) has two independent solutions u1 and u2 of the form

(6.47)  u1(x) = |x - x0|^{α1} Σ_{n=0}^∞ a_n (x - x0)^n,  with a0 = 1,

and

(6.48)  u2(x) = |x - x0|^{α2} Σ_{n=0}^∞ b_n (x - x0)^n,  with b0 = 1.

Both series converge in the interval |x - x0| < r, and the differential equation is satisfied for 0 < |x - x0| < r.

6.23 The Bessel equation

In this section we use the method suggested by Frobenius to solve the Bessel equation

x²y'' + xy' + (x² - α²)y = 0,

where α is a nonnegative constant. This equation is used in problems concerning vibrations of membranes, heat flow in cylinders, and propagation of electric currents in cylindrical conductors. Some of its solutions are known as Bessel functions. The equation is named after the German astronomer F. W. Bessel (1784-1846), although it appeared earlier in the researches of Daniel Bernoulli (1732) and Euler (1764).

The Bessel equation has the form (6.45) with x0 = 0, P(x) = 1, and Q(x) = x² - α². Both P and Q are analytic on the entire real line, so x0 = 0 is a regular singular point, and the series solutions supplied by Frobenius' method will converge for all x. We try to find a solution of the form

(6.50)  y = x^t Σ_{n=0}^∞ a_n x^n

valid for x > 0, with a0 ≠ 0. Differentiation of (6.50) gives us

y' = t x^{t-1} Σ_{n=0}^∞ a_n x^n + x^t Σ_{n=0}^∞ n a_n x^{n-1} = x^{t-1} Σ_{n=0}^∞ (n + t) a_n x^n.
Similarly, we obtain

y'' = x^{t-2} Σ_{n=0}^∞ (n + t)(n + t - 1) a_n x^n.

If L(y) = x²y'' + xy' + (x² - α²)y, we find

(6.51)  L(y) = x^t Σ_{n=0}^∞ [(n + t)(n + t - 1) + (n + t) - α²] a_n x^n + x^t Σ_{n=0}^∞ a_n x^{n+2}
             = x^t Σ_{n=0}^∞ [(n + t)² - α²] a_n x^n + x^t Σ_{n=0}^∞ a_n x^{n+2}.

Since we seek a solution with L(y) = 0, we cancel x^t and try to determine the a_n so that the coefficient of each power of x will vanish. For the constant term we need (t² - α²)a0 = 0. Since a0 ≠ 0, this requires that t² - α² = 0. This is the indicial equation. Its roots α and -α are the only possible values of t. Consider first the choice t = α. For this t the remaining equations for determining the coefficients become

(6.52)  [(1 + α)² - α²] a1 = 0  and  [(n + α)² - α²] a_n + a_{n-2} = 0  for n ≥ 2.

Since α ≥ 0, we have (1 + α)² - α² = 1 + 2α ≠ 0, so the first of these equations implies that a1 = 0. The second formula can be written as

(6.53)  a_n = -a_{n-2} / ((n + α)² - α²) = -a_{n-2} / (n(n + 2α)),

so a3 = a5 = a7 = ··· = 0, and for the coefficients with even subscripts we have

a2 = -a0/(2(2 + 2α)) = -a0/(2²(1 + α)),
a4 = -a2/(4(4 + 2α)) = a0/(2⁴ 2!(1 + α)(2 + α)),
a6 = -a4/(6(6 + 2α)) = -a0/(2⁶ 3!(1 + α)(2 + α)(3 + α)),

and, in general,

a_{2n} = (-1)^n a0 / (2^{2n} n!(1 + α)(2 + α) ··· (n + α)).

Therefore the choice t = α gives us the solution

y = a0 x^α (1 + Σ_{n=1}^∞ ((-1)^n x^{2n}) / (2^{2n} n!(1 + α)(2 + α) ··· (n + α))).

The ratio test shows that the power series appearing in this formula converges for all real x.
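The recursion a_n = -a_{n-2}/(n(n + 2α)) above is easy to run numerically. A sketch (my own, not from the text): for α = 0 and a0 = 1 it must reproduce the familiar series Σ (-1)^n (x/2)^{2n}/(n!)², so the two computations should agree to machine precision.

```python
import math

def bessel_series(alpha, x, a0=1.0, terms=30):
    """x^alpha * (a0 + sum of a_{2n} x^{2n}) built from the recursion
    a_n = -a_{n-2} / (n (n + 2*alpha)) derived above."""
    total = a0
    a = a0
    for n in range(2, 2 * terms + 1, 2):
        a = -a / (n * (n + 2 * alpha))
        total += a * x**n
    return x**alpha * total

def j0_direct(x, terms=30):
    """Sum of (-1)^n (x/2)^{2n} / (n!)^2, for comparison with alpha = 0, a0 = 1."""
    return sum((-1) ** n * (x / 2) ** (2 * n) / math.factorial(n) ** 2
               for n in range(terms))
```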
In this discussion we assumed that x > 0. If x < 0 we can repeat the discussion with x^t replaced by (-x)^t. We again find that t must satisfy the equation t² - α² = 0. Taking t = α we obtain the same solution, except that the outside factor x^α is replaced by (-x)^α. Therefore the function f_α given by the equation

(6.54)  f_α(x) = a0 |x|^α (1 + Σ_{n=1}^∞ ((-1)^n x^{2n}) / (2^{2n} n!(1 + α)(2 + α) ··· (n + α)))

is a solution of the Bessel equation valid for all real x ≠ 0. For those values of α for which f_α(0) exists, the solution is also valid for x = 0.

Now consider the root t = -α of the indicial equation. In place of (6.52), we are led to the equations

(1 - 2α) a1 = 0  and  n(n - 2α) a_n + a_{n-2} = 0,

which become, for those values of α for which 2α is not an integer,

a1 = 0  and  a_n = -a_{n-2} / (n(n - 2α))  for n ≥ 2.

Since this recursion formula is the same as (6.53), with α replaced by -α, these equations give us the solution

(6.55)  f_{-α}(x) = a0 |x|^{-α} (1 + Σ_{n=1}^∞ ((-1)^n x^{2n}) / (2^{2n} n!(1 - α)(2 - α) ··· (n - α))),

valid for all real x ≠ 0. The solution f_{-α} was obtained under the hypothesis that 2α is not a positive integer. However, the formula for f_{-α} is meaningful even if 2α is a positive integer, so long as α is not a positive integer. It can be verified that f_{-α} satisfies the Bessel equation for such α. Therefore, for each α ≥ 0, we have the series solution f_α given by Equation (6.54); and if α is not a nonnegative integer we have found another solution f_{-α} given by Equation (6.55). The two solutions f_α and f_{-α} are independent, since one of them → ∞ as x → 0, and the other does not. Next we shall simplify the form of the solutions. To do this we need some properties of Euler's gamma function, and we digress briefly to recall these properties.
For each real s > 0 we define Γ(s) by the improper integral

Γ(s) = ∫_{0+}^∞ t^{s-1} e^{-t} dt.

This integral converges if s > 0 and diverges if s ≤ 0. Integration by parts leads to the functional equation

(6.56)  Γ(s + 1) = s Γ(s).

This implies that

Γ(s + 2) = (s + 1)Γ(s + 1) = (s + 1)s Γ(s),
Γ(s + 3) = (s + 2)Γ(s + 2) = (s + 2)(s + 1)s Γ(s),

and, in general,

(6.57)  Γ(s + n) = (s + n - 1) ··· (s + 1)s Γ(s)

for every positive integer n. Since Γ(1) = ∫_0^∞ e^{-t} dt = 1, when we put s = 1 in (6.57) we find

Γ(n + 1) = n!.

Thus, the gamma function is an extension of the factorial function from positive integers to positive real numbers. The functional equation (6.56) can be used to extend the definition of Γ(s) to negative values of s that are not integers. We write (6.56) in the form

(6.58)  Γ(s) = Γ(s + 1)/s.

The right-hand member is meaningful if s + 1 > 0 and s ≠ 0. Therefore, we can use this equation to define Γ(s) for -1 < s < 0. The right-hand member of (6.58) is then meaningful if s + 2 > 0, s ≠ -1, s ≠ 0, and we can use this equation to define Γ(s) for -2 < s < -1. Continuing in this manner, we can extend the definition of Γ(s) by induction to every open interval of the form -n < s < -n + 1, where n is a positive integer. The functional equations (6.56) and (6.57) are now valid for all real s for which both sides are meaningful.

Now we use these properties to simplify the solution f_α. Taking a0 = 2^{-α}/Γ(1 + α) and using (6.57), which gives

Γ(n + 1 + α) = (n + α) ··· (2 + α)(1 + α)Γ(1 + α),

we can express the product (1 + α)(2 + α) ··· (n + α) in the series for f_α in terms of the gamma function. The solution (6.54) for x > 0 can then be written as

(6.59)  J_α(x) = (x/2)^α Σ_{n=0}^∞ ((-1)^n / (n! Γ(n + 1 + α))) (x/2)^{2n}.
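The two properties of Γ used above, the factorial values Γ(n + 1) = n! and the extension Γ(s) = Γ(s + 1)/s to negative non-integer s, can be observed directly with the standard-library gamma function (a sketch of my own, not from the text):

```python
import math

# Equation (6.57) with s = 1: Gamma(n + 1) = n! for positive integers n.
factorial_checks = [abs(math.gamma(n + 1) - math.factorial(n)) < 1e-9 * math.factorial(n)
                    for n in range(1, 8)]

# Equation (6.56), Gamma(s + 1) = s*Gamma(s), at a non-integer point.
s = 2.5
functional_eq_gap = abs(math.gamma(s + 1) - s * math.gamma(s))

# Equation (6.58) extends Gamma to a negative non-integer argument.
s_neg = -0.5
extension_gap = abs(math.gamma(s_neg) - math.gamma(s_neg + 1) / s_neg)
```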
The function J_α defined by this equation for x > 0 is called the Bessel function of the first kind of order α. When α is a nonnegative integer, say α = p, the Bessel function J_p is given by the power series

J_p(x) = Σ_{n=0}^∞ ((-1)^n / (n!(n + p)!)) (x/2)^{2n+p}  (p = 0, 1, 2, ...).

This series converges for all real x, and extensive tables of Bessel functions have been constructed. The graphs of the two functions J0 and J1 are shown in Figure 6.2.

FIGURE 6.2  Graphs of the Bessel functions J0 and J1.

We can define a new function J_{-α} by replacing α by -α in Equation (6.59), provided α is such that Γ(n + 1 - α) is meaningful; that is, if α is not a positive integer. Therefore, for x > 0, we define

J_{-α}(x) = (x/2)^{-α} Σ_{n=0}^∞ ((-1)^n / (n! Γ(n + 1 - α))) (x/2)^{2n}.

Taking s = 1 - α in (6.57) we obtain

Γ(n + 1 - α) = (n - α) ··· (2 - α)(1 - α)Γ(1 - α),

and we see that the series for J_{-α} is the same as the series for f_{-α} in Equation (6.55), with a0 = 2^α/Γ(1 - α). Therefore, if α is not a positive integer, J_{-α} is a solution of the Bessel equation for x > 0.

If α is not an integer, the two solutions J_α(x) and J_{-α}(x) are linearly independent on the positive real axis (since their ratio is not constant), and the general solution of the Bessel equation for x > 0 is

y = c1 J_α(x) + c2 J_{-α}(x).
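The series (6.59) converges fast enough that partial sums are usable directly. A sketch (my own, not from the text) evaluating it and comparing J_{1/2}(x) with the known elementary closed form √(2/(πx)) sin x:

```python
import math

def bessel_j(alpha, x, terms=40):
    """Partial sum of the series (6.59):
    J_alpha(x) = (x/2)^alpha * sum (-1)^n / (n! Gamma(n+1+alpha)) (x/2)^{2n}."""
    return (x / 2) ** alpha * sum(
        (-1) ** n / (math.factorial(n) * math.gamma(n + 1 + alpha)) * (x / 2) ** (2 * n)
        for n in range(terms))

# Known closed form for order 1/2: J_{1/2}(x) = sqrt(2/(pi x)) sin x.
x = 2.0
closed_form = math.sqrt(2 / (math.pi * x)) * math.sin(x)
```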
If α is a nonnegative integer, say α = p, we have found only the solution J_p and its constant multiples valid for x > 0. Another solution, independent of this one, can be found by the method described in Exercise 4 of Section 6.16. This states that if u1 is a solution of y'' + P1 y' + P2 y = 0 that never vanishes on an interval I, a second solution u2 independent of u1 is given by the integral

u2(x) = u1(x) ∫_c^x Q(t)/[u1(t)]² dt,

where Q(x) = e^{-∫ P1(x) dx}. For the Bessel equation we have P1(x) = 1/x, so we may take Q(x) = 1/x, and a second solution u2 is given by the formula

(6.60)  u2(x) = J_p(x) ∫_c^x 1/(t [J_p(t)]²) dt,

if c and x lie in an interval I in which J_p does not vanish.

This second solution can be put in other forms. For example, from Equation (6.59) we may write

1/[J_p(t)]² = (1/t^{2p}) g_p(t),

where g_p(0) ≠ 0. In the interval I the function g_p has a power-series expansion

g_p(t) = Σ_{n=0}^∞ A_n t^n,

which could be determined by equating coefficients if we assume the existence of such an expansion. The integrand in (6.60) then takes the form

1/(t [J_p(t)]²) = (1/t^{2p+1}) g_p(t) = Σ_{n=0}^∞ A_n t^{n-2p-1}.

Integrating this formula term by term from c to x we obtain a logarithmic term A_{2p} log x (from the power t^{-1}), plus a series of the form x^{-2p} Σ B_n x^n. Therefore Equation (6.60) takes the form

u2(x) = A_{2p} J_p(x) log x + J_p(x) x^{-2p} Σ_{n=0}^∞ B_n x^n.

It can be shown that the coefficient A_{2p} ≠ 0. If we multiply u2(x) by 1/A_{2p} the resulting solution is denoted by K_p(x) and has the form

K_p(x) = J_p(x) log x + x^{-p} Σ_{n=0}^∞ C_n x^n.

This is the form of the solution promised by the second case of Frobenius' theorem. Having arrived at this formula, we can verify that a solution of this form actually exists by substituting the right-hand member in the Bessel equation and determining the coefficients C_n so as to satisfy the equation. The details of this calculation are lengthy and will be omitted. The final result can be expressed as

K_p(x) = J_p(x) log x - ½ Σ_{n=0}^{p-1} ((p - n - 1)!/n!) (x/2)^{2n-p} - ½ Σ_{n=0}^∞ ((-1)^n (h_n + h_{n+p})/(n!(n + p)!)) (x/2)^{2n+p},

where h0 = 0 and h_n = 1 + ½ + ··· + 1/n for n ≥ 1. The series on the right converges for all real x. The function K_p defined by this formula is called the Bessel function of the second kind of order p. Since K_p is not a constant multiple of J_p, the general solution of the Bessel equation in this case for x > 0 is

y = c1 J_p(x) + c2 K_p(x).

Further properties of the Bessel functions are discussed in the next set of exercises.
6.24 Exercises

1. Let f be any solution of the Bessel equation of order α and let g(x) = x^{1/2} f(x) for x > 0.
(a) Show that g satisfies the differential equation

y'' + (1 + (1 - 4α²)/(4x²)) y = 0.

(b) When 4α² = 1 the differential equation in (a) becomes y'' + y = 0; its general solution is y = A cos x + B sin x. Use this information and the equation† Γ(½) = √π to show that, for x > 0,

J_{1/2}(x) = √(2/(πx)) sin x  and  J_{-1/2}(x) = √(2/(πx)) cos x.

(c) Deduce the formulas in part (b) directly from the series for J_{1/2}(x) and J_{-1/2}(x).

2. Use the series representation of the Bessel functions to show that

(a) d/dx (x^α J_α(x)) = x^α J_{α-1}(x),
(b) d/dx (x^{-α} J_α(x)) = -x^{-α} J_{α+1}(x).

3. Let F_α(x) = x^α J_α(x) and G_α(x) = x^{-α} J_α(x) for x > 0. Note that each positive zero of J_α is a zero of F_α and is also a zero of G_α. Use Rolle's theorem and Exercise 2 to prove that the positive zeros of J_α and J_{α+1} interlace. That is, there is a zero of J_α between each pair of positive zeros of J_{α+1}, and a zero of J_{α+1} between each pair of positive zeros of J_α. (See Figure 6.2.)

† The change of variable t = u² gives us

Γ(½) = ∫_{0+}^∞ t^{-1/2} e^{-t} dt = 2 ∫_0^∞ e^{-u²} du = √π.

(See Exercise 16 of Section 11.28 for a proof that ∫_0^∞ e^{-u²} du = ½√π.)

4. (a) From the relations in Exercise 2 deduce the recurrence relations

J'_α(x) = J_{α-1}(x) - (α/x) J_α(x)  and  J'_α(x) = (α/x) J_α(x) - J_{α+1}(x).

(b) Use the relations in part (a) to deduce the formulas

J_{α-1}(x) + J_{α+1}(x) = (2α/x) J_α(x)  and  J_{α-1}(x) - J_{α+1}(x) = 2J'_α(x).

5. Use Exercise 1(b) and a suitable recurrence formula to show that

J_{3/2}(x) = √(2/(πx)) (sin x / x - cos x).

Find a similar formula for J_{-3/2}(x). Note: J_α(x) is an elementary function for every α which is half an odd integer.

6. Prove that

d/dx (J_α²(x) + J_{α+1}²(x)) = 2 ((α/x) J_α²(x) - ((α + 1)/x) J_{α+1}²(x))

and

d/dx (x J_α(x) J_{α+1}(x)) = x (J_α²(x) - J_{α+1}²(x)).

7. (a) Use the identities in Exercise 6 to show that

J_0²(x) + 2 Σ_{n=1}^∞ J_n²(x) = 1.

(b) From part (a), deduce that |J_0(x)| ≤ 1 and |J_n(x)| ≤ 1/√2 for n = 1, 2, 3, ... and all real x.

8. Let g_α(x) = x^{1/2} f_α(a x^b) for x > 0, where a and b are nonzero constants and f_α is a solution of the Bessel equation of order α. Show that g_α satisfies the differential equation

x²y'' + (a²b²x^{2b} + ¼ - α²b²) y = 0.

9. Use Exercise 8 to express the general solution of each of the following differential equations in terms of Bessel functions for x > 0.
(a) y'' + xy = 0.
(b) y'' + x²y = 0.

10. Generalize Exercise 8 when f_α and g_α are related by the equation g_α(x) = x^c f_α(a x^b) for x > 0. Show that g_α satisfies the differential equation

x²y'' + (1 - 2c) x y' + (a²b²x^{2b} + c² - α²b²) y = 0.

Then express the general solution of each of the following equations in terms of Bessel functions for x > 0.
(a) xy'' + 6y' + y = 0.
(b) xy'' + 6y' + xy = 0.
(c) xy'' + 6y' + x⁴y = 0.
(d) x²y'' - xy' + (x⁴ + 1)y = 0.

11. A Bessel function identity exists of the form

J_2(x) = a J_0(x) + c J_0''(x),

where a and c are constants. Determine a and c.
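The recurrence of Exercise 4(b), J_{α-1}(x) + J_{α+1}(x) = (2α/x) J_α(x), can be checked against partial sums of the series (6.59). A sketch (parameter values chosen by me, not from the text):

```python
import math

def bessel_j(alpha, x, terms=40):
    """Partial sum of the series (6.59) for J_alpha(x), x > 0."""
    return (x / 2) ** alpha * sum(
        (-1) ** n / (math.factorial(n) * math.gamma(n + 1 + alpha)) * (x / 2) ** (2 * n)
        for n in range(terms))

# Recurrence from Exercise 4(b): J_{alpha-1}(x) + J_{alpha+1}(x) = (2 alpha / x) J_alpha(x).
alpha, x = 1.5, 2.0
lhs = bessel_j(alpha - 1, x) + bessel_j(alpha + 1, x)
rhs = (2 * alpha / x) * bessel_j(alpha, x)
```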
12. Find a power-series solution of the differential equation ... valid for 0 < x < +∞. Show that for x > 0 it can be expressed in terms of Bessel functions.

13. A linear second-order differential equation ...

Since every power A^k with k ≥ n is expressible as a linear combination of I, A, A², ..., A^{n-1}, each term t^k A^k/k! in the infinite series defining e^{tA} is expressible as a linear combination of t^k I, t^k A, t^k A², ..., t^k A^{n-1}. Hence we can expect that e^{tA} is expressible as a polynomial in A of the form

(7.27)  e^{tA} = Σ_{k=0}^{n-1} q_k(t) A^k,

where the scalar coefficients q_k(t) depend on t. Putzer developed two useful methods for expressing e^{tA} as a polynomial in A. The next theorem describes the simpler of the two methods.
methods.) AI, .
Let
7.9.
THEOREM
sequenceof polynomials (7.28))
. . , An
in A
Pk(A) =
=1,)
Po(A)
the
be
of an
eigenvalues
X n
n
A, and define
matrix
a
as follows:) k

TT (A m=l)
AmI),
k =
for
...,n
1, 2,
.)
we have)
Then
nl (7.29))
=
etA
!
rk+l(t)Pk(A),
k=O)
coefficients r l (t),
scalar
where
the
linear
differential r\037(t)
(7.30)) r\037+l(t)
Note: (7.27),
. . . , rnet)
are
determined
from
recursively
the
system
rl(O) = 1,)
=
Alrl(t),)
=
Ak+lrk+l(t)
rk+l(O)=
+ rk(t),)
(k
0,)
=
1, 2,
. ..,n

1).)
in powers of A as indicated does not express etA directly in (7.29) linear combination of the polynomials Po(A), PI (A), . . . , Pn1 (A). These are easily calculated once the eigenvAlues of A are determined. Also the
Equation
but
polynomials
as a
this requires 'l(t), . . . , 'n(t) in (7.30) are easily calculated. Although multipliers differential this particular system has a triangular a system of linear equations, in succession.) and the solutions can be determined
Proof. Let r_1(t), ..., r_n(t) be the scalar functions determined by (7.30) and define a matrix function F by the equation

(7.31)  F(t) = Σ_{k=0}^{n-1} r_{k+1}(t) P_k(A).

Note that F(0) = r_1(0) P_0(A) = I. We will prove that F(t) = e^{tA} by showing that F satisfies the same differential equation as e^{tA}, namely, F'(t) = AF(t). Differentiating (7.31) and using the recursion formulas (7.30) we obtain

F'(t) = Σ_{k=0}^{n-1} r'_{k+1}(t) P_k(A) = Σ_{k=0}^{n-1} {r_k(t) + λ_{k+1} r_{k+1}(t)} P_k(A),

where r_0(t) is defined to be 0. We rewrite this in the form

F'(t) = Σ_{k=0}^{n-2} r_{k+1}(t) P_{k+1}(A) + Σ_{k=0}^{n-1} λ_{k+1} r_{k+1}(t) P_k(A),

then subtract λ_n F(t) = Σ_{k=0}^{n-1} λ_n r_{k+1}(t) P_k(A) to obtain the relation

(7.32)  F'(t) - λ_n F(t) = Σ_{k=0}^{n-2} r_{k+1}(t) {P_{k+1}(A) + (λ_{k+1} - λ_n) P_k(A)}.

But from (7.28) we see that P_{k+1}(A) = (A - λ_{k+1} I) P_k(A), so

P_{k+1}(A) + (λ_{k+1} - λ_n) P_k(A) = (A - λ_{k+1} I) P_k(A) + (λ_{k+1} - λ_n) P_k(A) = (A - λ_n I) P_k(A).

Therefore Equation (7.32) becomes

F'(t) - λ_n F(t) = (A - λ_n I) Σ_{k=0}^{n-2} r_{k+1}(t) P_k(A) = (A - λ_n I) {F(t) - r_n(t) P_{n-1}(A)}
                 = (A - λ_n I) F(t) - r_n(t) P_n(A).

The Cayley-Hamilton theorem implies that P_n(A) = O, so the last equation becomes

F'(t) - λ_n F(t) = (A - λ_n I) F(t),

which shows that F'(t) = AF(t). Since F(0) = I, the uniqueness theorem (Theorem 7.7) shows that F(t) = e^{tA}.

EXAMPLE 1. Express e^{tA} as a linear combination of I and A if A is a 2 × 2 matrix with both its eigenvalues equal to λ.

Solution. Writing λ1 = λ2 = λ, we are to solve the system of differential equations

r'_1(t) = λ r_1(t),  r_1(0) = 1,
r'_2(t) = λ r_2(t) + r_1(t),  r_2(0) = 0.

Solving these first-order linear equations in succession, we find

r_1(t) = e^{λt},  r_2(t) = t e^{λt}.

Since P_0(A) = I and P_1(A) = A - λI, the required formula for e^{tA} is

(7.33)  e^{tA} = e^{λt} I + t e^{λt}(A - λI) = e^{λt}(1 - λt) I + t e^{λt} A.
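Putzer's method is easy to carry out in code. A minimal sketch (the 2 × 2 matrix and helper functions are my own illustration, not from the text) for the case of two distinct eigenvalues, where the triangular system (7.30) has the closed-form solutions r_1(t) = e^{λ1 t} and r_2(t) = (e^{λ2 t} - e^{λ1 t})/(λ2 - λ1); the result is compared against a truncated exponential series.

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def exp_putzer_2x2(A, lam1, lam2, t):
    """Theorem 7.9 for a 2x2 matrix with distinct eigenvalues lam1, lam2:
    e^{tA} = r1(t) P0(A) + r2(t) P1(A), with P0 = I and P1 = A - lam1*I."""
    r1 = math.exp(lam1 * t)
    r2 = (math.exp(lam2 * t) - math.exp(lam1 * t)) / (lam2 - lam1)
    P1 = [[A[i][j] - (lam1 if i == j else 0.0) for j in range(2)] for i in range(2)]
    return [[r1 * (1.0 if i == j else 0.0) + r2 * P1[i][j] for j in range(2)]
            for i in range(2)]

def exp_series(A, t, terms=40):
    """Truncated series e^{tA} = sum t^k A^k / k!, for comparison."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(1, terms):
        term = mat_mul(term, A)
        term = [[t / k * term[i][j] for j in range(n)] for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

A = [[2.0, 1.0], [0.0, 3.0]]   # triangular, so its eigenvalues 2 and 3 are visible
P = exp_putzer_2x2(A, 2.0, 3.0, 0.5)
S = exp_series(A, 0.5)
```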
EXAMPLE 2. Solve Example 1 if the eigenvalues of A are λ and μ, where λ ≠ μ.

Solution. In this case the system of differential equations is

r'_1(t) = λ r_1(t),  r_1(0) = 1,
r'_2(t) = μ r_2(t) + r_1(t),  r_2(0) = 0.

Its solutions are given by

r_1(t) = e^{λt},  r_2(t) = (e^{λt} - e^{μt})/(λ - μ).

Since P_0(A) = I and P_1(A) = A - λI, the required formula for e^{tA} is

(7.34)  e^{tA} = e^{λt} I + ((e^{λt} - e^{μt})/(λ - μ))(A - λI) = ((λe^{μt} - μe^{λt})/(λ - μ)) I + ((e^{λt} - e^{μt})/(λ - μ)) A.

If λ and μ are complex numbers, the exponentials e^{λt} and e^{μt} will also be complex numbers. But if λ and μ are complex conjugates, the scalars multiplying I and A in (7.34) will be real. For example, suppose

λ = α + iβ,  μ = α - iβ,  β ≠ 0.

Then λ - μ = 2iβ, so Equation (7.34) becomes

e^{tA} = e^{(α+iβ)t} I + ((e^{(α+iβ)t} - e^{(α-iβ)t})/(2iβ)) {A - (α + iβ)I}
       = e^{αt} {(cos βt + i sin βt) I + (sin βt / β)(A - αI - iβI)}.

The terms involving i cancel and we get

(7.35)  e^{tA} = (e^{αt}/β) {(β cos βt - α sin βt) I + sin βt · A}.
7.14 Alternate methods for calculating e^{tA} in special cases

Putzer's method for expressing e^{tA} as a polynomial in A is completely general because it is valid for all square matrices A. A general method is not always the simplest method to use in certain special cases. In this section we give simpler methods for computing e^{tA} in three special cases: (a) when all the eigenvalues of A are equal, (b) when all the eigenvalues of A are distinct, and (c) when A has two distinct eigenvalues, exactly one of which has multiplicity 1.

THEOREM 7.10. If A is an n × n matrix with all its eigenvalues equal to λ, then we have

(7.36)  e^{tA} = e^{λt} Σ_{k=0}^{n-1} (t^k/k!)(A - λI)^k.

Proof. Since the matrices λtI and t(A - λI) commute, we have

e^{tA} = e^{λtI} e^{t(A-λI)} = (e^{λt} I) Σ_{k=0}^∞ (t^k/k!)(A - λI)^k.

The Cayley-Hamilton theorem implies that (A - λI)^k = O for every k ≥ n, so the theorem is proved.
THEOREM 7.11. If A is an n × n matrix with n distinct eigenvalues λ1, λ2, ..., λn, then we have

e^{tA} = Σ_{k=1}^n e^{tλ_k} L_k(A),

where L_k(A) is a polynomial in A of degree n - 1 given by the formula

L_k(A) = Π_{j=1, j≠k}^{n} (A - λ_j I)/(λ_k - λ_j)  for k = 1, 2, ..., n.

Note: The polynomials L_k(A) are called Lagrange interpolation coefficients.

Proof. We define a matrix function F by the equation

(7.37)  F(t) = Σ_{k=1}^n e^{tλ_k} L_k(A),

and verify that F satisfies the differential equation F'(t) = AF(t) and the initial condition F(0) = I. From (7.37) we see that

AF(t) - F'(t) = Σ_{k=1}^n e^{tλ_k} (A - λ_k I) L_k(A).

By the Cayley-Hamilton theorem we have (A - λ_k I) L_k(A) = O for each k, so F satisfies the differential equation F'(t) = AF(t). To complete the proof we need to show that F satisfies the initial condition F(0) = I, which becomes

(7.38)  Σ_{k=1}^n L_k(A) = I.

A proof of (7.38) is outlined in Exercise 16 of Section 7.15. The next theorem treats the case when A has two distinct eigenvalues, exactly one of which has multiplicity 1.
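Theorem 7.11's interpolation formula is also straightforward to implement. A sketch (the 2 × 2 test matrix is my own, not from the text) that builds Σ e^{tλ_k} L_k(A) and compares it with a truncated exponential series:

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def exp_lagrange(A, eigenvalues, t):
    """Theorem 7.11: e^{tA} = sum_k e^{t lambda_k} L_k(A), with
    L_k(A) = prod_{j != k} (A - lambda_j I)/(lambda_k - lambda_j)."""
    n = len(A)
    result = [[0.0] * n for _ in range(n)]
    for k, lk in enumerate(eigenvalues):
        L = [[float(i == j) for j in range(n)] for i in range(n)]   # start with I
        for j, lj in enumerate(eigenvalues):
            if j != k:
                factor = [[(A[r][c] - (lj if r == c else 0.0)) / (lk - lj)
                           for c in range(n)] for r in range(n)]
                L = mat_mul(L, factor)
        coeff = math.exp(t * lk)
        for r in range(n):
            for c in range(n):
                result[r][c] += coeff * L[r][c]
    return result

def exp_series(A, t, terms=40):
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(1, terms):
        term = mat_mul(term, A)
        term = [[t / k * term[i][j] for j in range(n)] for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

A = [[1.0, 2.0], [0.0, 4.0]]   # triangular, distinct eigenvalues 1 and 4
E1 = exp_lagrange(A, [1.0, 4.0], 0.3)
E2 = exp_series(A, 0.3)
```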
THEOREM 7.12. Let A be an n × n matrix (n ≥ 3) with two distinct eigenvalues λ and μ, where λ has multiplicity n - 1 and μ has multiplicity 1. Then we have

e^{tA} = e^{λt} Σ_{k=0}^{n-2} (t^k/k!)(A - λI)^k + ((e^{μt} - e^{λt} Σ_{k=0}^{n-2} (t^k/k!)(μ - λ)^k) / (μ - λ)^{n-1}) (A - λI)^{n-1}.

Proof. As in the proof of Theorem 7.10 we begin by writing

e^{tA} = e^{λt} Σ_{k=0}^∞ (t^k/k!)(A - λI)^k = e^{λt} { Σ_{k=0}^{n-2} (t^k/k!)(A - λI)^k + Σ_{k=n-1}^∞ (t^k/k!)(A - λI)^k }.

Now we evaluate the series over k ≥ n - 1 in closed form. By the Cayley-Hamilton theorem we have (A - λI)^{n-1}(A - μI) = O. Since A - λI = (A - μI) + (μ - λ)I, this gives

(A - λI)^n = (A - λI)^{n-1}(A - μI) + (μ - λ)(A - λI)^{n-1} = (μ - λ)(A - λI)^{n-1},

and, using this relation repeatedly, we find

(A - λI)^{n-1+r} = (μ - λ)^r (A - λI)^{n-1}  for every r ≥ 0.

Therefore the series over k ≥ n - 1 becomes, writing k = n - 1 + r,

Σ_{r=0}^∞ (t^{n-1+r}/(n - 1 + r)!) (μ - λ)^r (A - λI)^{n-1}
= ((A - λI)^{n-1}/(μ - λ)^{n-1}) Σ_{k=n-1}^∞ (t^k/k!)(μ - λ)^k
= ((A - λI)^{n-1}/(μ - λ)^{n-1}) { e^{t(μ-λ)} - Σ_{k=0}^{n-2} (t^k/k!)(μ - λ)^k }.

Multiplying by e^{λt} we obtain the closed form stated in the theorem, which completes the proof.

The explicit formula in Theorem 7.12 can also be deduced by applying Putzer's method, but the details are more complicated. Theorems 7.10, 7.11 and 7.12 cover all matrices of order n ≤ 3. The 3 × 3 case often arises in practice, so the explicit formulas are listed below for reference.

CASE 1. If a 3 × 3 matrix A has eigenvalues λ, λ, λ, then

e^{tA} = e^{λt} { I + t(A - λI) + (t²/2!)(A - λI)² }.
2. If
CASE
a3
 /l1)(A
;'t (A
etA =e

(A
3.
CASE
EXAMPLE.
/l)(A

vI)
+
eAt{I
has
A
+
AI)}
(/l
=
A
etA =
(7.39)
(7.40)
=
etA
stage we or (7.40) to write this
At
et{1 +
powers of
By collecting
=
2t e )1
+

o)
1
t
2e 2t
2(t
+ l)e t
+
2(t
+ 2)et
+ 4e
2t)
of the
For each
A
=

2t
(e
 1)2
et)(A
A)
3 gives 
tet(A
us)
1)2.)
as follows,)
this
2)e t
and
2e 2t }A  {(t
+
2)e
(3t
+
5)e
(3t
+
8)e

+ l)et
the indicated
perform
(3t
1 through
in Exercises
matrices
A
e2t }A
2
.)
operations
in

2e 2t
(t
+ l)e
t

4e 2t
(t
+ 2)et
+
2e 2t .
t

8e 2t)
(t
+ 4)et
+
4e 2t)
t
(7.39)
e 2t
t
6, expressetA
=
+
0
0
0
1 .
A is
Find
a corresponding
A.)
in
polynomial
2
0
0 1 3 .
0 0 1 1
3
5.
A
=
1
0
2
1 1
11 6 matrix
as a
1 1 0 0
1
A 3 x 3
=
\037J.
[\037
etA = (b)
3. A
=
0 6 (a)
2. A
1 2J.
[:
7.
 AI)2.
(A

of Case
formula
1
4.
;'t
Exercises
7.15
1.
te
.
then)
A. =;c p\"

Ill) Il))
.)
2, so the
write
2t + e
with
/l,

4)
5
1, 1,

evt (A  AI)(A (v A)(v
+
/l
0)
1)2 or A2 x 3 matrix,)
as a 3
result
the

v, then)
/l,
 U)2
(A
o
l)} + also
;'t \0372 Ii)
1
+ {(3t +
can calculate (A
2te etA

teA
we can
A
t
(2te
A are
of

o)
2
Solution. The eigenvalues
A, A,
eigenvalues e Jlt

t(A
eigenvaluesA,
 Al)(A  vI)  A)(/l  v)
(A
Jlt
(/l
etA when
Compute
+e
v)
x 3 matrix
If a 3
etA =
A has distinct
3 matrix
x
known to have
!e At { (A 2t 2 formula

all
2At +
if A
is a
its
6.
1 .
2 equal to
eigenvalues
2)1 + ( 
2At
4 x 4 matrix
2
+
2t)A
with
0
0
2
1
0
0
3 0
=
A
0 0 0
4
Prove
that)
A.
+ t 2A2}.)
all its
eigenvalues equal to
)..)))
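The Case 3 formula (7.40) can be exercised numerically. The sketch below (assuming NumPy; the triangular test matrix is illustrative sample data with eigenvalues λ, λ, μ) compares the closed form against a truncated series for e^{tA}.

```python
import numpy as np

def expm_series(M, terms=60):
    # Truncated Taylor series for e^M; adequate for these small arguments.
    S, T = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        T = T @ M / k
        S = S + T
    return S

# Upper-triangular 3x3 sample with eigenvalues lam, lam, mu (lam != mu).
lam, mu = 1.0, 2.0
A = np.array([[lam, 1.0, 0.0],
              [0.0, lam, 1.0],
              [0.0, 0.0, mu ]])
I, t = np.eye(3), 0.5
N = A - lam * I

# Formula (7.40) of Case 3.
case3 = (np.exp(lam*t) * (I + t*N)
         + ((np.exp(mu*t) - np.exp(lam*t)) / (mu - lam)**2
            - t * np.exp(lam*t) / (mu - lam)) * (N @ N))
assert np.allclose(case3, expm_series(t * A))
```

Note that the Cayley–Hamilton relation (A − λI)²(A − μI) = O, which drives the derivation, holds for this matrix even though it is not diagonalizable.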
In each of Exercises 8 through 15, a particular constant matrix A and an initial vector Y(0) are given; solve the system Y′ = AY subject to the given initial condition.
16. This exercise outlines a proof of Equation (7.38) used in the proof of Theorem 7.11. Let L_k(λ) be the polynomial of degree n − 1 defined by

    L_k(λ) = ∏_{j=1, j≠k}^{n} (λ − λ_j)/(λ_k − λ_j),

where λ_1, …, λ_n are n distinct scalars.
(a) Prove that

    L_k(λ_i) = 0 if λ_i ≠ λ_k,    L_k(λ_i) = 1 if λ_i = λ_k.

(b) Let y_1, …, y_n be n arbitrary scalars, and let

    p(λ) = ∑_{k=1}^{n} y_k L_k(λ).

Prove that p(λ) is the only polynomial of degree ≤ n − 1 which satisfies the n equations

    p(λ_k) = y_k    for k = 1, 2, …, n.

(c) Prove that ∑_{k=1}^{n} L_k(λ) = 1 for every λ, and deduce that for every square matrix A we have

    ∑_{k=1}^{n} L_k(A) = I,

where I is the identity matrix.
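The interpolation properties in parts (a), (b), and (c) of Exercise 16 are easy to check numerically. The sketch below (assuming NumPy; the scalars λ_k and y_k are arbitrary sample data) verifies that p interpolates the prescribed values and that the coefficients sum to 1.

```python
import numpy as np

lams = np.array([0.5, 1.0, 2.0, 3.5])    # n distinct scalars
ys   = np.array([2.0, -1.0, 0.0, 4.0])   # arbitrary prescribed values

def Lk(k, x):
    # L_k(x) = prod over j != k of (x - lams[j]) / (lams[k] - lams[j])
    p = 1.0
    for j, lj in enumerate(lams):
        if j != k:
            p *= (x - lj) / (lams[k] - lj)
    return p

def p(x):
    # The interpolating polynomial p(x) = sum_k y_k L_k(x)
    return sum(ys[k] * Lk(k, x) for k in range(len(lams)))

# (a)/(b): p takes the prescribed value at each node.
assert all(abs(p(l) - y) < 1e-12 for l, y in zip(lams, ys))
# (c): the coefficients sum to 1 at an arbitrary point.
assert abs(sum(Lk(k, 1.234) for k in range(len(lams))) - 1.0) < 1e-12
```

Part (c) for matrices follows because a polynomial identity in λ remains valid when a square matrix A is substituted for λ.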
7.16 Nonhomogeneous linear systems with constant coefficients

We consider next the nonhomogeneous initial-value problem

(7.41)    Y′(t) = AY(t) + Q(t),    Y(a) = B,

on an interval J. Here A is an n × n constant matrix, Q is an n-dimensional vector function (regarded as an n × 1 column matrix) continuous on J, and a is a given point in J. We can obtain an explicit formula for the solution of this problem by the same process used to treat the scalar case.

First we multiply both members of (7.41) by the exponential matrix e^{−tA} and rewrite the differential equation in the form

(7.42)    e^{−tA}{ Y′(t) − AY(t) } = e^{−tA}Q(t).

The left member of (7.42) is the derivative of the product e^{−tA}Y(t). Therefore, if we integrate both members of (7.42) from a to x, where x ∈ J, we obtain

    e^{−xA}Y(x) − e^{−aA}Y(a) = ∫_a^x e^{−tA}Q(t) dt.

Multiplying by e^{xA}, we obtain the explicit formula (7.43) which appears in the following theorem.

THEOREM 7.13. Let A be an n × n constant matrix and let Q be an n-dimensional vector function continuous on an interval J. Then the initial-value problem

    Y′(t) = AY(t) + Q(t),    Y(a) = B,

has a unique solution on J, given by the explicit formula

(7.43)    Y(x) = e^{(x−a)A}B + e^{xA} ∫_a^x e^{−tA}Q(t) dt.

As in the homogeneous case, the difficulty in applying this formula in practice lies in the calculation of the exponential matrices. Note that the first term, e^{(x−a)A}B, is the solution of the homogeneous problem Y′(t) = AY(t), Y(a) = B, and that the second term is the solution of the nonhomogeneous problem Y′(t) = AY(t) + Q(t), Y(a) = O.

We illustrate Theorem 7.13 with an example.
EXAMPLE. Solve the initial-value problem Y′(t) = AY(t) + Q(t), Y(0) = B, on the interval (−∞, +∞), where A is a 3 × 3 constant matrix with eigenvalues 2, 2, and 4, and where B = O.

Solution. According to Theorem 7.13, the solution is given by

(7.44)    Y(x) = e^{xA} ∫_0^x e^{−tA}Q(t) dt = ∫_0^x e^{(x−t)A}Q(t) dt.

The eigenvalues of A are 2, 2, and 4, so to calculate e^{xA} we use the formula of Case 3, Section 7.14, to obtain

    e^{xA} = e^{2x}{ I + x(A − 2I) } + { ¼(e^{4x} − e^{2x}) − ½xe^{2x} }(A − 2I)².

We replace x by x − t in this formula to obtain the integrand e^{(x−t)A}Q(t) in (7.44), and then integrate each entry of this column matrix from 0 to x. The rows of the resulting 3 × 1 matrix are the required functions y₁, y₂, y₃.
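Formula (7.43) can be checked against a direct numerical integration of the differential equation. The sketch below (assuming NumPy; the 2 × 2 system, forcing term, and initial vector are arbitrary sample data, not the example above) evaluates the integral in (7.43) by the trapezoidal rule and compares the result with a Runge–Kutta solve.

```python
import numpy as np

def expm_series(M, terms=60):
    # Truncated Taylor series for e^M; adequate for these small arguments.
    S, T = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        T = T @ M / k
        S = S + T
    return S

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([1.0, 0.0])
Q = lambda t: np.array([np.sin(t), 1.0])

def Y(x, n=800):
    # Formula (7.43) with a = 0: Y(x) = e^{xA} B + int_0^x e^{(x-t)A} Q(t) dt
    ts = np.linspace(0.0, x, n + 1)
    vals = np.array([expm_series((x - t) * A) @ Q(t) for t in ts])
    integral = ((vals[1:] + vals[:-1]) / 2 * np.diff(ts)[:, None]).sum(axis=0)
    return expm_series(x * A) @ B + integral

def rk4(x_end, n=2000):
    # Direct numerical integration of Y' = A Y + Q, Y(0) = B, for comparison.
    y, t, h = B.copy(), 0.0, x_end / n
    f = lambda t, y: A @ y + Q(t)
    for _ in range(n):
        k1 = f(t, y); k2 = f(t + h/2, y + h/2 * k1)
        k3 = f(t + h/2, y + h/2 * k2); k4 = f(t + h, y + h * k3)
        y = y + h/6 * (k1 + 2*k2 + 2*k3 + k4)
        t += h
    return y

assert np.allclose(Y(0.8), rk4(0.8), atol=1e-5)
```

The agreement illustrates both halves of the remark after Theorem 7.13: the e^{xA}B term carries the initial data, and the integral term carries the forcing.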
7.17 Exercises
1. Let Z be a solution of the nonhomogeneous system

    Z′(t) = AZ(t) + Q(t)

on an interval J with initial value Z(a). Prove that there is only one solution of the nonhomogeneous system

    Y′(t) = AY(t) + Q(t)

on J with initial value Y(a), and that it is given by the formula

    Y(t) = Z(t) + e^{(t−a)A}{ Y(a) − Z(a) }.

Special methods are often available for determining a particular solution Z(t) which resembles the given function Q(t). Exercises 2, 3, 5, and 7 indicate such methods for Q(t) = C, Q(t) = e^{αt}C, Q(t) = t^mC, and Q(t) = (cos αt)C + (sin αt)D, where C and D are constant vectors. If the particular solution Z(t) so obtained does not have the required initial value, we modify Z(t) as indicated in Exercise 1 to obtain another solution Y(t) with the required initial value.

2. (a) Let A be an n × n constant matrix and let B and C be constant n-dimensional vectors. Prove that the solution of the system Y′(t) = AY(t) + C with Y(a) = B is given by the formula

    Y(x) = e^{(x−a)A}B + ( ∫_0^{x−a} e^{uA} du ) C.

(b) If A is nonsingular, show that the integral in part (a) has the value { e^{(x−a)A} − I }A^{−1}.
(c) Compute Y(x) explicitly when a = 0 for given numerical A, B, and C.

3. Let A be an n × n constant matrix, let B and C be n-dimensional constant vectors, and let α be a scalar.
(a) Prove that the nonhomogeneous system Z′(t) = AZ(t) + e^{αt}C has a solution of the form Z(t) = e^{αt}B if, and only if, (αI − A)B = C.
(b) If α is not an eigenvalue of A, prove that the vector B can always be chosen so that the system in (a) has a solution of the form Z(t) = e^{αt}B.
(c) If α is not an eigenvalue of A, prove that every solution of the system Y′(t) = AY(t) + e^{αt}C has the form Y(t) = e^{tA}(Y(0) − B) + e^{αt}B, where B = (αI − A)^{−1}C.

4. Use the method suggested by Exercise 3 to find a solution of the nonhomogeneous system Y′(t) = AY(t) + e^{2t}C for given numerical A, C, and Y(0).

5. Let A be an n × n constant matrix, let C be an n-dimensional constant vector, and let m be a positive integer.
(a) Prove that the nonhomogeneous system Y′(t) = AY(t) + t^mC has a solution of the form

    Y(t) = B₀ + tB₁ + ⋯ + t^mB_m,

where B₀, B₁, …, B_m are constant vectors, if and only if A^{m+1}B₀ = −m!C; in that case the coefficients are determined by (k + 1)B_{k+1} = AB_k for k = 0, 1, …, m − 1, so that B_k = A^kB₀/k!.
(b) If A is nonsingular, prove that the vector B₀ can always be chosen so that the system has a solution of the specified form.

6. Consider the system of equations

    y₁′ = 3y₁ + y₂ + t³,    y₂′ = 2y₁ + 2y₂ + t³.

(a) Find a particular solution of the form Y(t) = B₀ + tB₁ + t²B₂ + t³B₃.
(b) Find the solution of the system with y₁(0) = y₂(0) = 1.

7. Let A be an n × n constant matrix, let C and D be n-dimensional constant vectors, and let α be a nonzero real number.
(a) Prove that the nonhomogeneous system

    Y′(t) = AY(t) + (cos αt)C + (sin αt)D

has a particular solution of the form

    Y(t) = (cos αt)E + (sin αt)F,

where E and F are constant vectors, if and only if E and F satisfy

    αF − AE = C,    −αE − AF = D,

which requires (A² + α²I)E = −(AC + αD). Determine E and F in terms of A, C, and D.
(b) Prove that if A² + α²I is nonsingular, the vectors E and F can always be chosen so that the system has a particular solution of the specified form.

8. (a) Find a particular solution of the given two-dimensional system, whose forcing term is 4 sin 2t.
(b) Find the solution of that system with y₁(0) = y₂(0) = 1.

In each of Exercises 9 through 12, a constant matrix A, a forcing function Q(t), and an initial vector Y(0) are given; solve the nonhomogeneous system Y′(t) = AY(t) + Q(t) subject to the given initial condition.
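The method of Exercise 3 is easy to demonstrate numerically. The sketch below (assuming NumPy; the matrix, vector, and α are arbitrary sample data with α not an eigenvalue of A) constructs B = (αI − A)^{−1}C and checks that Z(t) = e^{αt}B solves Z′ = AZ + e^{αt}C.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, -1.0]])
C = np.array([1.0, 1.0])
alpha = 3.0  # not an eigenvalue of A (the eigenvalues are 1 and -1)

# Solve (alpha I - A) B = C for the coefficient vector B.
B = np.linalg.solve(alpha * np.eye(2) - A, C)
Z = lambda t: np.exp(alpha * t) * B

# Z'(t) = alpha e^{alpha t} B must equal A Z(t) + e^{alpha t} C.
for t in (0.0, 0.3, 1.0):
    assert np.allclose(alpha * np.exp(alpha * t) * B,
                       A @ Z(t) + np.exp(alpha * t) * C)
```

Dividing out e^{αt} shows the check reduces to the algebraic condition αB = AB + C, i.e. (αI − A)B = C.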
7.18 The general linear system Y′(t) = P(t)Y(t) + Q(t)

Theorem 7.13 gives an explicit formula for the solution of the linear system

    Y′(t) = AY(t) + Q(t),    Y(a) = B,

where A is a constant n × n matrix and Q(t), Y(t) are n × 1 column matrices. We turn now to the more general case

(7.45)    Y′(t) = P(t)Y(t) + Q(t),    Y(a) = B,

where the n × n matrix P(t) is not necessarily constant. If P and Q are continuous on an open interval J, a general existence-uniqueness theorem which we shall prove in a later section tells us that for each a in J and each initial vector B there is exactly one solution to the initial-value problem (7.45). In this section we use this result to obtain a formula for the solution, generalizing Theorem 7.13.

In the scalar case (n = 1) the differential equation (7.45) can be solved as follows. We let A(x) = ∫_a^x P(t) dt, then multiply both members of (7.45) by e^{−A(t)} to rewrite the differential equation in the form

(7.46)    e^{−A(t)}{ Y′(t) − P(t)Y(t) } = e^{−A(t)}Q(t).

Now the left member is the derivative of the product e^{−A(t)}Y(t). Therefore we may integrate both members from a to x, where a and x are points in J, to obtain

    e^{−A(x)}Y(x) − e^{−A(a)}Y(a) = ∫_a^x e^{−A(t)}Q(t) dt.

Multiplying by e^{A(x)}, we obtain the explicit formula

(7.47)    Y(x) = e^{A(x)}e^{−A(a)}Y(a) + e^{A(x)} ∫_a^x e^{−A(t)}Q(t) dt.
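The scalar case of (7.47) is straightforward to verify numerically. The sketch below (assuming NumPy; p, q, and the initial data are arbitrary sample choices with an explicit antiderivative for p) evaluates the integrating-factor formula and compares it with a direct Runge–Kutta integration of y′ = p(t)y + q(t).

```python
import numpy as np

# Scalar case (n = 1) of (7.45): y' = p(t) y + q(t), y(a) = b.
p = lambda t: np.cos(t)
q = lambda t: t
a, b = 0.0, 1.0
Aof = lambda x: np.sin(x) - np.sin(a)   # A(x) = int_a^x p(t) dt

def y(x, n=2000):
    # Formula (7.47): y(x) = e^{A(x)} ( b + int_a^x e^{-A(t)} q(t) dt )
    ts = np.linspace(a, x, n + 1)
    g = np.exp(-Aof(ts)) * q(ts)
    integral = ((g[1:] + g[:-1]) / 2 * np.diff(ts)).sum()   # trapezoidal rule
    return np.exp(Aof(x)) * (b + integral)

def rk4(x_end, n=2000):
    # Direct numerical integration for comparison.
    yv, t, h = b, a, (x_end - a) / n
    f = lambda t, yv: p(t) * yv + q(t)
    for _ in range(n):
        k1 = f(t, yv); k2 = f(t + h/2, yv + h/2 * k1)
        k3 = f(t + h/2, yv + h/2 * k2); k4 = f(t + h, yv + h * k3)
        yv = yv + h/6 * (k1 + 2*k2 + 2*k3 + k4)
        t += h
    return yv

assert abs(y(1.2) - rk4(1.2)) < 1e-6
```

In the matrix case the same formula requires the exponentials e^{±A(t)}, which is where the practical difficulty lies when P(t) is not constant.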
a in
To prove that (7.75) holds for k + 1 if it holds for k, we note that for each x in J we have

    |φ_{k+1}(x) − φ_k(x)| = |T(φ_k)(x) − T(φ_{k−1})(x)| ≤ α‖φ_k − φ_{k−1}‖ ≤ Mα^k,

since α is a contraction constant for T. Therefore we must also have

    ‖φ_{k+1} − φ_k‖ ≤ Mα^k,

which is the same inequality with k replaced by k + 1. This proves (7.75) by induction. Therefore the series in (7.73) converges. If we let φ(x) denote its sum we have

(7.76)    φ(x) = lim_{n→∞} φ_n(x) = φ₀(x) + ∑_{k=0}^{∞} { φ_{k+1}(x) − φ_k(x) }.

Differential calculus of scalar and vector fields

8.1 Functions from Rⁿ to Rᵐ. Scalar and vector fields

We consider functions f: S → Rᵐ, where S is a subset of Rⁿ. When m = 1 the function is called a real-valued function of a vector variable or, more briefly, a scalar field. When m > 1 it is called a vector-valued function of a vector variable, or simply a vector field. This chapter extends the concepts of limit, continuity, and derivative, studied extensively in Volume I, to scalar and vector fields. Chapters 10 and 11 extend the concept of the integral.

Scalars will be denoted by light-faced type, and vectors by bold-faced type. If f is a scalar field defined at a point x = (x₁, …, xₙ) in Rⁿ, the notations f(x) and f(x₁, …, xₙ) are both used to denote the value of f at that particular point; if f is a vector field, its value at x is a vector. We shall use the inner product

    x · y = ∑_{k=1}^{n} x_k y_k

and the corresponding norm ‖x‖ = (x · x)^{1/2}, where x = (x₁, …, xₙ) and y = (y₁, …, yₙ). Points in R² are usually denoted by (x, y) instead of (x₁, x₂); points in R³ by (x, y, z) instead of (x₁, x₂, x₃).

Scalar and vector fields defined on subsets of R² and R³ occur frequently in the applications of mathematics to science and engineering. For example, if at each point x of the atmosphere we assign a real number f(x) which represents the temperature at x, the function f so defined is a scalar field. If instead we assign a vector which represents the wind velocity at that point, we obtain an example of a vector field.

In physical problems dealing with either scalar or vector fields it is important to know how the field changes as we move from one point to another. In the one-dimensional case the derivative is the mathematical tool used to study such changes. Derivative theory in the one-dimensional case deals with functions defined on open intervals. To extend the theory to Rⁿ we consider generalizations of open intervals called open sets.

8.2 Open balls and open sets

Let a be a given point in Rⁿ and let r be a given positive number. The set of all points x in Rⁿ such that

    ‖x − a‖ < r

is called an open n-ball of radius r and center a.

A point a of a set S in Rⁿ is called an interior point of S if some open n-ball with center a lies entirely in S; the set of interior points is the interior of S, denoted int S. A set S is called open if all its points are interior points. A point x is exterior to S if some ball with center x contains no points of S; the set of such points is the exterior of S, denoted ext S. A point which is neither interior nor exterior to S is a boundary point of S; the set of boundary points is denoted ∂S.

EXAMPLE. If S is the open unit ball B(O; 1), every point of S is an interior point of S. The boundary of S is the set of all x with ‖x‖ = 1, and the exterior of S is the set of all x with ‖x‖ > 1.

8.3 Exercises

1. Let f be a scalar field defined on a set S and let c be a given real number. The set of all points x in S such that f(x) = c is called a level set of f. (Geometric and physical problems dealing with level sets will be discussed later in this chapter.) For each of the following scalar fields, S is the whole space Rⁿ. Make a sketch to describe the level sets corresponding to the given values of c.
(a) f(x, y) = x² + y²,    c = 0, 1, 4, 9.
(b) f(x, y) = e^{xy},    c = e^{−2}, e^{−1}, 1, e, e², e³.
(c) f(x, y) = cos (x + y),    c = −1, 0, ½, ½√2, 1.
(d) f(x, y, z) = x + y + z,    c = −1, 0, 1.
(e) f(x, y, z) = x² + 2y² + 3z²,    c = 0, 6, 12.
(f) f(x, y, z) = sin (x² + y² + z²),    c = −1, −½, 0, ½, ½√2, 1.
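The ball-membership test ‖x − a‖ < r translates directly into code. The sketch below (assuming NumPy; the center, radius, and test points are arbitrary sample data) also illustrates why every point of an open ball is an interior point: around any x in B(a; r), the ball of radius r − ‖x − a‖ stays inside B(a; r) by the triangle inequality.

```python
import numpy as np

def in_ball(x, a, r):
    # Open n-ball B(a; r): all x with ||x - a|| < r
    return np.linalg.norm(np.asarray(x) - np.asarray(a)) < r

a, r = np.array([1.0, 2.0]), 1.5
x = np.array([1.9, 2.3])
assert in_ball(x, a, r)

# Points within radius r - ||x - a|| of x remain inside B(a; r).
r2 = r - np.linalg.norm(x - a)
rng = np.random.default_rng(0)
for _ in range(1000):
    v = rng.normal(size=2)
    y = x + 0.999 * r2 * v / np.linalg.norm(v)   # just inside the smaller ball
    assert in_ball(y, a, r)
```

The factor 0.999 keeps the sampled points strictly inside the smaller ball; the assertions confirm they never escape B(a; r).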
2. In each of the following cases, let S be the set of all points (x, y) in the plane satisfying the given inequalities. Make a sketch showing the set S and determine whether or not S is open.
(a) x² + y² < 1.
(b) 3x² + 2y² < 6.
(c) |x| < 1 and |y| < 1.
(d) x > 0 and y > 0.
(e) |x| ≤ 1 and |y| < 1.
(f) x > 0 and y < 0.
(g) xy < 1.
(h) |x| ≤ 1 and |y| ≤ 1.
(i) |x| < 1 and |y| ≤ 1.
(j) x > y.
(k) y > x².

3. In each of the following cases, let S be the set of all points (x, y, z) in 3-space satisfying the given inequalities, and determine whether or not S is open.
(a) z² − x² − y² − 1 > 0.
(b) |x| < 1, |y| < 1, and |z| < 1.
(c) x + y + z > 0.
(d) |x| ≤ 1, |y| < 1, and |z| < 1.

4. (a) If A is an open interval on the real line and x is a point of A, show that the set A − {x}, obtained by removing x from A, is open.†
(b) If A is an open interval on the real line and B is a closed subinterval of A, show that A − B is open.
(c) If A and B are open intervals on the real line, show that A ∪ B and A ∩ B are open.
(d) If A is a closed interval on the real line, show that its complement (relative to the whole real line) is open.

† If A and B are sets, the difference A − B (called the complement of B relative to A) is the set of all points of A which are not in B.

5. Prove the following properties of open sets in Rⁿ:
(a) The empty set ∅ is open.
(b) Rⁿ is open.
(c) The union of any collection of open sets is open.
(d) The intersection of a finite collection of open sets is open.
(e) Give an example to show that the intersection of an infinite collection of open sets is not necessarily open.

6. Closed sets. A set S in Rⁿ is called closed if its complement Rⁿ − S is open. The following exercises discuss properties of closed sets. In each of the following cases, let S be the set of all points (x, y) in the plane satisfying the given conditions. Make a sketch showing the set S, and determine whether S is open, closed, both open and closed, or neither.
(a) x² + y² > 0.
(b) x² + y² < 0.
(c) x² + y² ≤ 1.
(d) 1 ≤ x² + y² ≤ 2.
(e) 1 < x² + y² < 2.
(f) 1 < x² + y² ≤ 2.
(g) 1 ≤ x ≤ 2 and 3 < y < 4.
(h) 1 ≤ x ≤ 2 and 3 ≤ y ≤ 4.
(i) x ≥ y.
(j) x > y.
(k) y > x² and |x| < 2.
(l) y > x² and |x| ≤ 2.

7. (a) If A is a closed set on the real line and x is a point not in A, show that A ∪ {x} is also closed.
(b) Prove that a closed interval [a, b] on the real line is a closed set.
(c) If A and B are closed sets on the real line, show that A ∪ B and A ∩ B are closed.

8. Prove the following properties of closed sets in Rⁿ. You may use the results of Exercise 5.
(a) The empty set ∅ is closed.
(b) Rⁿ is closed.
(c) The intersection of any collection of closed sets is closed.
(d) The union of a finite number of closed sets is closed.
(e) Give an example to show that the union of an infinite collection of closed sets is not necessarily closed.

9. Let S be a subset of Rⁿ.
(a) Prove that both int S and ext S are open sets.
(b) Prove that Rⁿ = (int S) ∪ (ext S) ∪ ∂S, a union of disjoint sets, and use this to deduce that the boundary ∂S is always a closed set.

10. Given a set S in Rⁿ and a point x with the property that every ball B(x) contains both interior points of S and points exterior to S. Prove that x is a boundary point of S. Is the converse statement true? That is, does every boundary point of S necessarily have this property?

11. Let S be a subset of Rⁿ. Prove that ext S = int(Rⁿ − S).

12. Prove that a set S in Rⁿ is closed if and only if S = (int S) ∪ ∂S.

8.4 Limits and continuity

The concepts of limit and continuity are easily extended to scalar and vector fields. We shall formulate the definitions for vector fields; they apply also to scalar fields.

We consider a function f: S → Rᵐ, where S is a subset of Rⁿ. If a ∈ Rⁿ and b ∈ Rᵐ we write

(8.1)    lim_{x→a} f(x) = b    (or, f(x) → b as x → a)

to mean that

(8.2)    lim_{‖x−a‖→0} ‖f(x) − b‖ = 0.

The limit symbol in equation (8.2) is the usual limit of elementary calculus. In this definition it is not required that f be defined at the point a itself.

If we write h = x − a, Equation (8.2) becomes

    lim_{‖h‖→0} ‖f(a + h) − b‖ = 0.

For points in R² we write (x, y) for x and (a, b) for a and express the limit relation (8.1) as follows:

    lim_{(x,y)→(a,b)} f(x, y) = b.

For points in R³ we put x = (x, y, z) and a = (a, b, c) and write

    lim_{(x,y,z)→(a,b,c)} f(x, y, z) = b.

A function f is said to be continuous at a if f is defined at a and if

    lim_{x→a} f(x) = f(a).

We say f is continuous on a set S if f is continuous at each point of S.

Since these definitions are straightforward extensions of those in the one-dimensional case, it is not surprising to learn that many familiar properties of limits and continuity can also be extended. For example, the usual theorems concerning limits and continuity of sums, products, and quotients also hold for scalar fields. For vector fields, quotients are not defined, but we have the following theorem concerning sums, multiplication by scalars, inner products, and norms.

THEOREM 8.1. If lim_{x→a} f(x) = b and lim_{x→a} g(x) = c, then we also have:
(a) lim_{x→a} [f(x) + g(x)] = b + c.
(b) lim_{x→a} λf(x) = λb for every scalar λ.
(c) lim_{x→a} f(x) · g(x) = b · c.
(d) lim_{x→a} ‖f(x)‖ = ‖b‖.

Proof. We prove only parts (c) and (d); proofs of (a) and (b) are left as exercises for the reader.

To prove (c) we write

    f(x) · g(x) − b · c = [f(x) − b] · [g(x) − c] + b · [g(x) − c] + c · [f(x) − b].

Now we use the triangle inequality and the Cauchy–Schwarz inequality to obtain

    |f(x) · g(x) − b · c| ≤ ‖f(x) − b‖ ‖g(x) − c‖ + ‖b‖ ‖g(x) − c‖ + ‖c‖ ‖f(x) − b‖.

Each term on the right tends to 0 as x → a, so |f(x) · g(x) − b · c| → 0, which proves (c). Taking g = f in (c) we find ‖f(x)‖² → ‖b‖², from which (d) follows.
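The algebraic identity at the heart of part (c) can be spot-checked numerically. The sketch below (assuming NumPy; the vectors are random sample data) verifies that f·g − b·c = (f − b)·(g − c) + b·(g − c) + c·(f − b) holds term for term.

```python
import numpy as np

# Check the identity used in the proof of Theorem 8.1(c):
#   f.g - b.c = (f - b).(g - c) + b.(g - c) + c.(f - b)
rng = np.random.default_rng(1)
for _ in range(100):
    f, g, b, c = rng.normal(size=(4, 3))   # four random vectors in R^3
    lhs = f @ g - b @ c
    rhs = (f - b) @ (g - c) + b @ (g - c) + c @ (f - b)
    assert abs(lhs - rhs) < 1e-12
```

Expanding the right member by bilinearity of the inner product shows the cross terms cancel exactly, which is why the numerical check agrees to rounding error.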