In our study of Fourier series, we can summarize the development of the theory as follows:
For every smooth periodic function $f$ on $\mathbb{R}$ with period $2\pi$, the Fourier transform is $\mathcal{F}f=\sum_{k\in\mathbb{Z}}\hat f(k)\delta_k$, where $\hat f(k)=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(x)e^{-ikx}\,dx$.
The sequence $(\hat f(k))$ satisfies the rapidly decreasing condition: for every $r$ there exists a constant $C_r$ such that $|\hat f(k)|\le C_r|k|^{-r}$. We also call such a sequence smooth.
The inversion theorem: $f(x)=\mathcal{F}^{-1}\mathcal{F}f(x)=\mathcal{F}^{-1}\sum_{k\in\mathbb{Z}}\hat f(k)\delta_k=\sum_{k\in\mathbb{Z}}\hat f(k)\mathcal{F}^{-1}\delta_k=\sum_{k\in\mathbb{Z}}\hat f(k)e^{ikx}$.
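As a quick numerical illustration of the rapid decay and the inversion formula (a sketch, not from the notes; the test function $e^{\sin x}$ and the grid size are arbitrary choices):

```python
import numpy as np

# Sketch: approximate hat f(k) = (1/2π)∫ f(x) e^{-ikx} dx by the
# trapezoidal rule on a uniform grid (spectrally accurate for smooth
# periodic f), then check rapid decay and the inversion formula.
N = 256
x = 2 * np.pi * np.arange(N) / N              # grid on [0, 2π)
f = np.exp(np.sin(x))                          # a smooth 2π-periodic function

fhat = np.fft.fft(f) / N                       # ≈ hat f(k)
k = np.fft.fftfreq(N, d=1.0 / N).astype(int)   # integer frequencies

# Rapid decay |hat f(k)| ≤ C_r |k|^{-r}: even with r = 5 the products
# |hat f(k)| |k|^5 stay small at moderate k.
decay = max(abs(fhat[k == m][0]) * m**5 for m in (20, 40))

# Inversion: f(x) = Σ_k hat f(k) e^{ikx} recovers f.
f_rec = np.real(sum(fhat[j] * np.exp(1j * k[j] * x) for j in range(N)))
err = np.max(np.abs(f - f_rec))
```

The same discrete sums also compute the truncated Fourier series of any sampled periodic function.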
The theory generalizes to the $n$-dimensional case as well. We restrict our discussion to the smooth case.
Let $P$ be an elliptic operator on $\mathbb{T}^n$. Then $P$ is right-invertible modulo smoothing operators, $PQ=I+\text{smoothing}$. Precisely, there exist an operator $Q:C^\infty(\mathbb{T}^n)\to C^\infty(\mathbb{T}^n)$ and a smoothing operator $T_K$ such that $PQ=I-T_K$.
We know that after the Fourier transform, a differential operator becomes an algebraic object. To construct such a $Q$, we take the Fourier transform, do some algebra in the "frequency domain" to construct the corresponding algebraic object of $Q$, and then we are done by inversion. The algebraic objects corresponding to differential operators in the frequency domain are symbols.
Let $p(x,\xi)=\sum_{|\alpha|\le m}a_\alpha(x)\xi^\alpha$ be the polynomial on the right-hand side; then we can write $$Pf=\sum_{k\in\mathbb{Z}^n}p(x,k)\hat f(k)e^{ikx}.$$
Therefore, via the Fourier transform, we can identify $P$ with its polynomial $p(x,\xi)$.
Conversely, given a polynomial $\sum_{|\alpha|\le m}a_\alpha(x)\xi^\alpha$ we can define a differential operator by replacing $\xi^\alpha$ by $D^\alpha$.
Observe that for a general function $a(x,\xi)$, not necessarily a polynomial, we can still define an operation $T_af=\sum_{k\in\mathbb{Z}^n}a(x,k)\hat f(k)e^{ikx}$ for $f\in C^\infty(\mathbb{T}^n)$, and $T_af$ will be in $C^\infty(\mathbb{T}^n)$ when $a$ satisfies suitable decay conditions in $\xi$. Such a function $a(x,\xi)$ is called a symbol, and the corresponding $T_a$ is called a pseudodifferential operator. We say $a\in S^m$ (a symbol of order $m$) if for all multi-indices $\alpha,\beta$ there is a constant $C_{\alpha\beta}$ with $|D_x^\alpha D_\xi^\beta a(x,\xi)|\le C_{\alpha\beta}\langle\xi\rangle^{m-|\beta|}$.
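The defining sum can be computed directly on $\mathbb{T}^1$; here is a minimal numerical sketch (the discretization and the test symbol $a(x,k)=k$ are illustrative choices, not from the notes):

```python
import numpy as np

# Sketch: T_a f(x) = Σ_k a(x, k) hat f(k) e^{ikx}, summed bin by bin
# over the discrete frequencies of an N-point grid on [0, 2π).
def apply_Ta(a, f):
    """a(x, k): symbol, vectorized in x; f: samples of f on x_j = 2πj/N."""
    N = len(f)
    x = 2 * np.pi * np.arange(N) / N
    fhat = np.fft.fft(f) / N                       # Fourier coefficients
    k = np.fft.fftfreq(N, d=1.0 / N).astype(int)
    out = np.zeros(N, dtype=complex)
    for j in range(N):
        out += a(x, k[j]) * fhat[j] * np.exp(1j * k[j] * x)
    return out

# Sanity check: the polynomial symbol a(x, k) = k gives T_a = D = -i d/dx,
# so T_a applied to e^{i3x} should return 3 e^{i3x}.
N = 64
x = 2 * np.pi * np.arange(N) / N
g = apply_Ta(lambda x, k: k, np.exp(3j * x))
check = np.max(np.abs(g - 3 * np.exp(3j * x)))
```

An $x$-dependent symbol works the same way, since `a(x, k[j])` is evaluated on the whole grid for each frequency bin.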
Note that $m$ can take negative values. Actually this is the main interesting case. If $a\in S^m$ for positive $m$, then $D_\xi^\alpha a\in S^0$ for $|\alpha|$ big enough. Also, $(S^m)_{m\in\mathbb{Z}}$ is a decreasing family: $S^m\supset S^{m-1}\supset\cdots$. Let $S^{-\infty}$ be the decreasing limit $\bigcap_{m\in\mathbb{Z}}S^m$.
A pseudodifferential operator is an operator $T:C^\infty(\mathbb{T}^n)\to C^\infty(\mathbb{T}^n)$ given by some $a\in S^m$ for some $m$, so that $Tf=\sum_{k\in\mathbb{Z}^n}a(x,k)\hat f(k)e^{ikx}$. We call $a$ the symbol of $T$, and we usually write $T_a$ to denote a pseudodifferential operator with specified symbol $a$.
□
It follows that a differential operator is a pseudodifferential operator of nonnegative degree. But what about negative degree? Consider the example $a(x,k)=k^{-2}$ (for $k\ne0$): then $\widehat{T_af}(k)=k^{-2}\hat f(k)$, so $\hat f(k)=\widehat{D^2T_af}(k)$ by the derivative theorem. This indicates that if $a$ has negative degree, then $T_a$ makes functions more regular. In particular, if $a\in S^{-\infty}$, then $T_a$ will be a "smoothing operator". We'll make these intuitions precise later. Now let's study a first algebraic property of pseudodifferential operators.
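A numerical sanity check of this intuition (a hypothetical example; we set $a(0):=0$ since $k^{-2}$ is undefined at $k=0$, so $T_a$ inverts $D^2$ only up to the constant mode):

```python
import numpy as np

# Sketch: on T^1 the symbol a(k) = 1/k^2 (with a(0) := 0) satisfies
# T_a(D^2 f) = f - mean(f): the negative-order symbol undoes the
# positive-order operator D^2 = -d^2/dx^2, whose symbol is k^2.
N = 128
x = 2 * np.pi * np.arange(N) / N
f = np.cos(2 * x) + np.sin(5 * x) + 7.0       # smooth, mean 7

k = np.fft.fftfreq(N, d=1.0 / N).astype(int)
fhat = np.fft.fft(f)

d2f_hat = (k ** 2) * fhat                     # Fourier side of D^2 f
a = np.zeros(N)
a[k != 0] = 1.0 / k[k != 0] ** 2              # a(k) = k^{-2}, a(0) = 0
g = np.real(np.fft.ifft(a * d2f_hat))         # g = T_a D^2 f

err = np.max(np.abs(g - (f - f.mean())))      # should vanish
```

The multiplier $k^{-2}$ also damps high frequencies, which is the smoothing effect described above.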
The idea for constructing the right inverse $Q$ is to find the symbol of $Q$. We already know the symbol of $P$ is $p(x,\xi)$, so the first question is to understand what $PQ$ is in the symbol world.
It follows from the definition that the symbol of $PT_a$ is $e^{-i\xi x}\cdot P[a(x,\xi)e^{i\xi x}]$. It suffices to show that the latter is given by equation (cmp) above.
We make a more general computation which will be used later. Consider $P(e^{itf}u)$ for smooth $f$ and $u$. First we look at $D_j$: we define $\delta_j$ by $D_j(e^{itf}u)=e^{itf}\delta_ju$. This allows us to simply move $e^{itf}$ out without picking up extra terms. It's straightforward to find that $\delta_ju=D_ju+t\frac{\partial f}{\partial x_j}u$, and we just write $\delta_j=D_j+t\frac{\partial f}{\partial x_j}$.
Similarly, $D_jD_r(e^{itf}u)=D_j(e^{itf}\delta_ru)=e^{itf}\delta_j\delta_ru$.
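The formula for $\delta_j$ is easy to verify symbolically; a one-variable sketch (assuming the convention $D=\frac{1}{i}\frac{d}{dx}$, consistent with the computation above):

```python
import sympy as sp

# Symbolic check that D(e^{itf} u) = e^{itf} (D u + t f' u),
# i.e. δ = D + t ∂f/∂x, with the convention D = -i d/dx.
x, t = sp.symbols('x t', real=True)
f = sp.Function('f')(x)
u = sp.Function('u')(x)

D = lambda w: -sp.I * sp.diff(w, x)            # D = (1/i) d/dx

lhs = D(sp.exp(sp.I * t * f) * u)
rhs = sp.exp(sp.I * t * f) * (D(u) + t * sp.diff(f, x) * u)
residual = sp.simplify(lhs - rhs)              # equals 0 identically
```

Iterating the same substitution symbolically reproduces the higher-order identity $D_jD_r(e^{itf}u)=e^{itf}\delta_j\delta_ru$.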
It follows that $P(e^{itf}u)=e^{itf}\,p(x,D+t\,\nabla f)u$; in particular, taking $f(x)=\xi\cdot x$ and $t=1$ gives $P(e^{i\xi x}u)=e^{i\xi x}\,p(x,D+\xi)u$.
Recall that $p$ is a polynomial in $\xi$, so the (totally algebraic) global Taylor expansion gives $p(x,\eta+\xi)=\sum_{|\beta|\le m}\frac{1}{\beta_1!\cdots\beta_n!}\partial_\xi^\beta p(x,\xi)\eta^\beta$ (as polynomials). Noting that $D$ acts on the $x$ variable, we get $$p(x,D+\xi)a(x,\xi)=\sum_{|\beta|\le m}\frac{1}{\beta!}\partial_\xi^\beta p(x,\xi)\,D_x^\beta a(x,\xi),$$ which is exactly formula (cmp).
Let's think about pseudodifferential operators again. The action of $T_a$ is given by "modifying Fourier coefficients" by multiplication, and by the convolution theorem such an operation is essentially a convolution. In particular, if $a$ doesn't depend on $x$, then $T_a$ is precisely convolution with the inverse Fourier transform of $(a(k))_{k\in\mathbb{Z}^n}$; in other words, $$T_af=\int_{\mathbb{T}^n}\Big[\sum_{k\in\mathbb{Z}^n}a(k)e^{ik(x-y)}\Big]f(y)\,d\mu(y).$$
The series $\sum_{k\in\mathbb{Z}^n}\langle k\rangle^{-m}$ converges if $m>n$ (note that $>$ is required, not $\ge$).
Proof. By comparison with integrals it suffices to show that $\int_{\mathbb{R}^n}\langle\xi\rangle^{-m}\,dm(\xi)$ converges. Using polar coordinates and the change of variables formula, $dm(\xi)=r^{n-1}\sin(\varphi_1)\cdots\sin(\varphi_{n-2})\,dr\,d\varphi$ (see Lecture 20 for the notations). Then $$\int_{\mathbb{R}^n}\langle\xi\rangle^{-m}\,dm(\xi)\le C\int_0^{+\infty}\langle r\rangle^{-m}r^{n-1}\,dr<\infty$$ when $m>n$.
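The dichotomy $m>n$ versus $m\le n$ is visible numerically; a sketch in dimension $n=2$ (cutoff radii chosen arbitrarily):

```python
import numpy as np

# Sketch: partial sums of Σ_{k∈Z^2} <k>^{-m}, <k> = (1+|k|^2)^{1/2},
# over the square |k_i| ≤ R. For m = 3 > n = 2 the increments between
# successive cutoffs die out; for m = 2 = n they stay bounded away
# from 0 (the sum grows like log R).
def partial_sum(m, R):
    r = np.arange(-R, R + 1)
    kx, ky = np.meshgrid(r, r)
    bracket = np.sqrt(1.0 + kx ** 2 + ky ** 2)   # <k>
    return np.sum(bracket ** (-float(m)))

inc_m3 = partial_sum(3, 200) - partial_sum(3, 100)   # tiny tail
inc_m2 = partial_sum(2, 200) - partial_sum(2, 100)   # order of 2π log 2
```

This matches the integral comparison above: the radial integrand $\langle r\rangle^{-m}r^{n-1}$ is integrable exactly when $m>n$.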
It follows that when $|\alpha|+|\beta|\le m$ we have $l+|\alpha|+|\beta|<-n$, so the series defining $D_x^\alpha D_y^\beta K_a(x,y)$ converges absolutely and uniformly. This implies $D_x^\alpha D_y^\beta K_a(x,y)$ is continuous on $\mathbb{T}^n\times\mathbb{T}^n$ for every $|\alpha|+|\beta|\le m$, i.e. $K_a(x,y)$ is in $C^m(\mathbb{T}^n\times\mathbb{T}^n)$.
We want to prove that for an elliptic differential operator $P$ of order $m$, there exists a pseudodifferential operator $T_a$ which is an almost right inverse of $P$ (i.e. $PT_a=I-\text{smoothing}$). Since $P$ has positive order $m$, to be an inverse $T_a$ should have symbol $a\in S^{-m}$.
Let $p_m(x,\xi)$ be the top homogeneous term of $p$: $p_m(x,\xi)=\sum_{|\alpha|=m}a_\alpha(x)\xi^\alpha$. That $P$ is elliptic means $p_m(x,\xi)\ne0$ when $\xi\ne0$. Remember we want a symbol of order $-m$; the raw ingredient we have is $\frac{1}{p_m(x,\xi)}$, which has the growth $\langle\xi\rangle^{-m}$. To make it a symbol we need to deal with the singularity at zero, which we can fix with a smooth "high-pass" filter. Let $\rho$ be a smooth cutoff function on $\mathbb{R}$ with $\rho(t)\equiv0$ for $t<1$ and $\rho(t)\equiv1$ for $t>2$. Then $a_0:=\frac{\rho(|\xi|)}{p_m(x,\xi)}$ is a symbol in $S^{-m}$.
Let $T_{r_1}=I-PT_{a_0}$, i.e. $r_1=1-p\circ a_0$. Then $r_1$ has order $-1$, because the degree-$m$ parts of $T_{a_0}$ and $P$ are inverse to each other. One can also use the composition formula to see it: by (cmp), the symbol of $PT_{a_0}$ is
$$p\circ a_0:=p(x,D+\xi)a_0(x,\xi)=\sum_{|\alpha|\le m}a_\alpha(x)(D+\xi)^\alpha\Big(\frac{\rho(|\xi|)}{p_m(x,\xi)}\Big)=\sum_{|\alpha|=m}a_\alpha(x)\xi^\alpha\cdot\frac{\rho(|\xi|)}{\sum_{|\alpha|=m}a_\alpha(x)\xi^\alpha}+\text{order}\le-1=\rho(|\xi|)+\text{order}\le-1.$$
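For a concrete instance (a worked special case, not from the notes): take $P=D^2$ on $\mathbb{T}^1$, so $p(\xi)=p_m(\xi)=\xi^2$ and $a_0(\xi)=\rho(|\xi|)/\xi^2$. Since the coefficients are constant and $a_0$ does not depend on $x$, every $D$-term in $p(D+\xi)a_0$ vanishes, and $$p\circ a_0=(D+\xi)^2\,\frac{\rho(|\xi|)}{\xi^2}=\xi^2\cdot\frac{\rho(|\xi|)}{\xi^2}=\rho(|\xi|),$$ so $r_1=1-\rho(|\xi|)$ is supported in $|\xi|\le2$ and hence already lies in $S^{-\infty}$: for constant-coefficient operators, a single step of the construction produces a parametrix.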
The proof of this lemma is the same as in the case of $p\circ a_0$. Here we need $P$ to be elliptic to make sure the order-$m$ term of $P$ cancels with $a_0$.
Continuing this process we get sequences $r_i\in S^{-i}$ and $a_i\in S^{-m-i}$ with $r_i=1-[p\circ a_0+\cdots+p\circ a_{i-1}]$. Eventually we expect to arrive at some $a=\sum_{i=0}^\infty a_i$ and $r\in S^{-\infty}$ such that $r=1-p\circ a$. Then we are done if we can show such an $a$ is in $S^{-m}$. The lemma below is slightly different from the ideal case but satisfies our need.
Let $(b_i)_{i=0}^{+\infty}$ be a sequence of symbols with $b_i\in S^{m-i}$. Then there exists $b\in S^m$ such that for every $N$, $b-S_N\in S^{m-(N+1)}$, where $S_N=\sum_{i=0}^Nb_i$ is the $N$-th partial sum.
Proof. Fix $0<\epsilon<1$. Since $b_i\in S^{m-i}$, by definition $b_i$ satisfies the growth condition $|b_i(x,\xi)|\le C_i\langle\xi\rangle^{m-i}$. Then we can choose $\lambda_i$ big enough such that $|\xi|>\lambda_i$ implies $\langle\xi\rangle^\epsilon>2^iC_i$, so $|b_i(x,\xi)|<\frac{1}{2^i}\langle\xi\rangle^{m+\epsilon-i}$.
The idea is to keep, for each $i$, only the part where $b_i$ is small, using a smooth high-pass filter to cut off the rest, and then sum up. Let $\rho\in C^\infty(\mathbb{R})$ be a smooth function with $\rho(t)\equiv0$ for $t<1$ and $\rho(t)\equiv1$ for $t>2$, and set $$b:=\sum_{i=0}^{+\infty}\rho\Big(\frac{|\xi|}{\lambda_i}\Big)b_i$$ (a locally finite sum in $\xi$). Then $b-S_{N-1}=r_{N-1}$, where $$r_{N-1}=\sum_{i=0}^{N-1}\Big(\rho\Big(\frac{|\xi|}{\lambda_i}\Big)-1\Big)b_i+\rho\Big(\frac{|\xi|}{\lambda_N}\Big)b_N+\sum_{i=N+1}^{+\infty}\rho\Big(\frac{|\xi|}{\lambda_i}\Big)b_i.$$
Next we show that the error term $r_{N-1}$ is in $S^{m-N}$.
The first term is compactly supported in $\xi$, hence in $S^{-\infty}$; the second term is in $S^{m-N}$; the third term is dominated by $\sum_{i=N+1}^{+\infty}\frac{1}{2^i}\langle\xi\rangle^{m+\epsilon-i}\le\frac{1}{2^N}\langle\xi\rangle^{m+\epsilon-(N+1)}$, so it is $O(\langle\xi\rangle^{m-N})$ because $\epsilon<1$.
To show that the third term is in $S^{m-N}$ we need to look at derivatives as well.
Fix $M>0$; we can choose $\lambda_{i,M}>0$ such that $|D_x^\alpha D_\xi^\beta b_i(x,\xi)|\le\frac{1}{2^i}\langle\xi\rangle^{m+\epsilon-|\beta|-i}$ for every $|\alpha|+|\beta|\le M$ and $|\xi|>\lambda_{i,M}$. The previous case is the $M=0$ case. So by replacing $\lambda_i$ with $\lambda_{i,M}$ we get a new $b$ and $r_N$ with these bounds holding for all $|\alpha|+|\beta|\le M$.
Next we need to eliminate the dependence of $b$ on $M$. Running the argument for every $M$ we get an infinite matrix of positive numbers $\lambda_{i,M}$. Now choose its diagonal: let $\lambda_i=\lambda_{i,i}$. Then the $b$ created with these $\lambda_i$ has the desired decay property for every $\alpha,\beta$, hence the $r_N$ of this $b$ satisfies $r_N\in S^{m-N}$.
□
There exists $a\in S^{-m}$ such that $a=\sum_{i=0}^{+\infty}a_i+S^{-\infty}$. This means $a=\sum_{i=0}^{+\infty}a_i+r$ for some $r\in S^{-\infty}$.
It follows that $PT_a=I-T_r$ for some $r\in S^{-\infty}$.
Since $T_r$ is a smoothing operator, we have finished the proof.
Let $W$ be an inner product space, $T:W\to W$ a linear operator, and $T^*$ its adjoint, given by requiring $(Tf,g)=(f,T^*g)$ for all $f,g\in W$.
Let $\operatorname{Ker}(T):=\{w\in W:Tw=0\}$ be the kernel of $T$ and $\operatorname{Ran}(T):=\{w\in W:w=Tw'\text{ for some }w'\in W\}$ the range of $T$.
Then we automatically have the $*$kernel-range relation $$\operatorname{Ker}(T^*)=\operatorname{Ran}(T)^\perp;$$ this is pure linear algebra, with no Hilbert space theory involved.
Taking $\perp$ on both sides we get $\operatorname{Ker}(T^*)^\perp=(\operatorname{Ran}(T)^\perp)^\perp$, where the RHS equals $\overline{\operatorname{Ran}(T)}$ when $W$ is a Hilbert space. But we are looking for a condition implying $\operatorname{Ker}(T^*)^\perp=\operatorname{Ran}(T)$, with no closure allowed, since $W$ will be a space of smooth functions and taking closures in $L^2$ would destroy regularity. Here is a handy lemma which satisfies our needs.
If there exists a finite dimensional subspace $V\subset W$ such that $V^\perp\subset\operatorname{Ran}(T)$, then
(1) $\operatorname{Ker}(T^*)=(\operatorname{Ran}(T)\cap V)^{\perp_V}$, where $\perp_V$ means taking the orthogonal complement inside $V$; it is equal to $(\operatorname{Ran}(T)\cap V)^\perp\cap V$. By the direct sum decomposition of the finite dimensional $V$ we get $V=\operatorname{Ker}(T^*)\oplus(\operatorname{Ran}(T)\cap V)$.
(2) Consequently, $\operatorname{Ker}(T^*)^\perp=\operatorname{Ran}(T)$.
Proof.
(1)
First we show that $\operatorname{Ker}(T^*)=\operatorname{Ran}(T)^\perp\cap V$.
Let $h_1,\dots,h_r$ be an orthonormal basis of $V$. For any $g\in\operatorname{Ker}(T^*)=\operatorname{Ran}(T)^\perp$, write $g=P_Vg+g_2$, where $P_Vg=\sum_{i=1}^r(g,h_i)h_i$ is the projection of $g$ onto $V$. Then $g_2\perp V\implies g_2\in\operatorname{Ran}(T)$, so $(g_2,g_2)=(g-P_Vg,g_2)=0$ since $g\perp\operatorname{Ran}(T)$ and $g_2\perp V$. So $g_2=0$. It follows that $g=P_Vg$, so $g\in V$, hence $g\in\operatorname{Ran}(T)^\perp\cap V$; this proves the $\subset$ direction.
By the $*$kernel-range relation, $\operatorname{Ker}(T^*)=\operatorname{Ran}(T)^\perp\supset\operatorname{Ran}(T)^\perp\cap V$. So $\operatorname{Ker}(T^*)=\operatorname{Ran}(T)^\perp\cap V$ is proved.
Next we show that $\operatorname{Ran}(T)^\perp\cap V=(\operatorname{Ran}(T)\cap V)^{\perp_V}$.
For $f\in\operatorname{Ran}(T)$, write $f=P_Vf+f_2$. Then $f_2\perp V\implies f_2\in\operatorname{Ran}(T)$, so $P_Vf=f-f_2\in\operatorname{Ran}(T)\cap V$. If $g\in V$ and $g\perp\operatorname{Ran}(T)\cap V$, then $g\perp P_Vf$; and since $f_2\perp V$, also $g\perp f_2$. So $g\perp f$. We have shown that $g\in(\operatorname{Ran}(T)\cap V)^{\perp_V}\implies g\in\operatorname{Ran}(T)^\perp\cap V$, the $\supset$ direction.
The converse inclusion is automatic because $(\operatorname{Ran}(T)\cap V)^\perp$ is bigger than $\operatorname{Ran}(T)^\perp$.
(2) The $*$kernel-range relation implies $\operatorname{Ran}(T)\subset\operatorname{Ker}(T^*)^\perp$.
So we just need to prove $\operatorname{Ker}(T^*)^\perp\subset\operatorname{Ran}(T)$.
For $f\in W$ such that $f\perp\operatorname{Ker}(T^*)$, write $f=P_Vf+f_2$, so $f_2\in V^\perp\subset\operatorname{Ran}(T)$. Since $\operatorname{Ker}(T^*)\subset V$ by (1), we have $f_2\perp\operatorname{Ker}(T^*)$, hence $P_Vf=f-f_2\perp\operatorname{Ker}(T^*)$. By the decomposition in (1), $P_Vf\in\operatorname{Ran}(T)\cap V\subset\operatorname{Ran}(T)$. This implies $f=P_Vf+f_2\in\operatorname{Ran}(T)$.
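In finite dimensions the $*$kernel-range relation underlying this lemma can be checked directly; a sketch with a random rank-deficient matrix (sizes and seed arbitrary):

```python
import numpy as np

# Sketch: for a matrix T, vectors in Ker(T*) (null space of the
# conjugate transpose) are orthogonal to Ran(T) (the column space),
# and dim Ker(T*) + dim Ran(T) = dim W.
rng = np.random.default_rng(0)
T = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 5))  # rank ≤ 2

# Orthonormal basis of Ker(T*) from the SVD of T*: the right singular
# vectors belonging to (numerically) zero singular values.
_, s, Vh = np.linalg.svd(T.conj().T)
rank = int(np.sum(s > 1e-10))
null_Tstar = Vh[rank:].conj().T               # columns span Ker(T*)

# Inner products (g, Tw) for g in Ker(T*) and all columns Tw of T:
overlap = np.max(np.abs(null_Tstar.conj().T @ T))   # ≈ 0
dim_check = null_Tstar.shape[1] + rank        # = 5 = dim W
```

In finite dimensions $\operatorname{Ran}(T)$ is automatically closed, which is exactly the point the lemma recovers for spaces of smooth functions.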
Let $L,F_1,F_2$ be the ones in the approximation-by-finite-rank lemma above.
Proof of (1):
If $f\in\operatorname{Ker}(I-T_K)$, then $(I-T_{F_1})f=(I-T_L)(I-T_K)f=(I-T_L)0=0$, so $f\in\operatorname{Ker}(I-T_{F_1})$. By the finite rank case, $\operatorname{Ker}(I-T_{F_1})$ is finite dimensional, so $\operatorname{Ker}(I-T_K)\subset\operatorname{Ker}(I-T_{F_1})$ is also finite dimensional.
Proof of (2):
It's sufficient to prove $f\perp\operatorname{Ker}(I-T_K^*)\implies f\in\operatorname{Ran}(I-T_K)$; the other direction is automatic by the $*$kernel-range relation.
Write $F_2=\sum_{i=1}^Nf_i(x)g_i(y)$. It's sufficient to prove that $f\perp\operatorname{Ker}(I-T_K^*)$ implies $f\in\operatorname{Ran}(I-T_{F_2})$: indeed, $f\in\operatorname{Ran}(I-T_{F_2})\implies f=(I-T_K)(I-T_L)g$ for some $g$, so $f\in\operatorname{Ran}(I-T_K)$.
Take $V=\operatorname{span}\{g_1,\dots,g_N\}$. By the lemma in the finite rank case, $f\perp V$ implies $f\in\operatorname{Ran}(I-T_{F_2})$, so $V^\perp\subset\operatorname{Ran}(I-T_{F_2})\subset\operatorname{Ran}(I-T_K)$. Then applying the key linear algebra lemma above to $T=I-T_K$ and this $V$, we conclude that $f\in\operatorname{Ran}(I-T_K)$.