# Green's functions

Some of the most important equations in physics can be solved by constructing a beast with a curious set of properties, called a Green's function. This post contains some interesting nuggets from a lecture I gave on St. Patrick's Day about Green's functions to the course I assist, Mathematical Methods in the Physical Sciences II. I'll give some historical background about the life of George Green, the functions' namesake, introduce what a Green's function actually is (and what exactly it's good for) in layperson's terms, and in the final section go through the physics and the math to develop a deeper understanding of what is going on, as well as to truly convince ourselves that the function is holding up its side of the bargain.

The lecture closely follows material in Mathematical Methods in The Physical Sciences, Mathematical Methods for Physicists, and "The Green of Green Functions", and the interested reader is referred to these sources for further information.

# George Green

The namesake of Green's theorem and Green's functions, George Green, led an atypical life, first blossoming as a mathematician in his 30s after a career as a miller with little formal education to speak of. Born in 1793 to a baker in Nottingham, Green managed to learn to read and write during his 18 months of private school as a child before joining the family business, so to speak: working at the nearby mill and having 7 children over the years with the miller's daughter, whom he never married. In 1823 Green's life took a turn when, at age 30, he joined the Nottingham subscription library, which gave him access to the leading scientific journals of the age and a peer group of like-minded individuals from the surrounding countryside.

Just 5 years later, at age 35, Green self-published "An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism" [1], almost apologizing for wasting mathematicians' time in the introductory paragraphs:

…it is hoped the difficulty of the subject will incline mathematicians to read this work with indulgence, more particularly when they are informed that it was written by a young man, who has been obliged to obtain the little knowledge he possesses, at such intervals and by such means, as other indispensable avocations which offer but few opportunities of mental improvement, afforded.

This 70-page essay contained the derivations of both Green's theorem and Green's functions. Most of its buyers, however, were local merchants, seemingly looking for an erudite bookshelf decoration. At some point a successful businessman with a background in mathematics, Edward Bromhead, was sufficiently impressed by Green's work to write him a letter offering mentorship and support. Green apparently asked his local friends their opinion of Bromhead's sincerity and was told that Bromhead was just being polite and that it would be inappropriate for someone of such a different social standing to accept such an offer. Twenty months later, Green reconsidered and wrote Bromhead accepting his offer.

With Bromhead's support, Green published several papers, and eventually attended Cambridge, in spite of the fact that he possessed, in his own words, "little Latin, less Greek" and had "seen too many winters". His seminal essay, which took him from miller to mathematician, remained largely unappreciated until shortly after Green's ignoble death from the flu at age 43, when Lord Kelvin arranged for its republication.

# Green’s functions

Poisson's equation $\nabla^2 \psi = -\frac{\rho}{\epsilon}$ and Laplace's equation $\nabla^2 \psi = 0$ model the electrostatic potential $\psi$ in the presence and absence of charge, respectively. The equation in the presence of charge is clearly more complicated, and can be solved by invoking the machinery of Green's functions, which was originally directed towards electrostatic problems of this sort. In this example, the Green's function $G(r,r')$ physically represents the potential at the point $r$ produced by a unit charge at $r'$. Before moving to more complicated examples, it's easy to believe, although more complicated to prove, that $G$ is symmetric in its arguments: the universe does not often play directional favorites, so we would expect the potential to be the same were the source point $r'$ and observation point $r$ swapped.
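To make the electrostatic picture concrete, here is a small numerical sketch (the function names are my own) using the standard free-space form $G(r,r') = \frac{1}{4\pi\epsilon_0 |r-r'|}$, showing both superposition over charges and the symmetry of $G$ in its arguments:

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, in F/m

def greens_free_space(r, r_prime):
    """Free-space Green's function for Poisson's equation:
    the potential at r produced by a unit point charge at r'."""
    d = np.linalg.norm(np.asarray(r, float) - np.asarray(r_prime, float))
    return 1.0 / (4.0 * np.pi * EPS0 * d)

def potential(r, charges):
    """Superpose the Green's function over (position, charge) pairs."""
    return sum(q * greens_free_space(r, rp) for rp, q in charges)

# Symmetry: swapping source and observation point leaves G unchanged.
assert np.isclose(greens_free_space([0, 0, 0], [1, 2, 2]),
                  greens_free_space([1, 2, 2], [0, 0, 0]))

# Potential of a 1 nC charge at 1 m: q / (4 pi eps0 r), roughly 9 V.
print(potential([0.0, 0.0, 0.0], [([1.0, 0.0, 0.0], 1e-9)]))
```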

Green's functions quickly found other applications to electromagnetic, thermal, and mechanical problems. Moreover, Green's functions can be used to formulate a theory of classical wave scattering, which leads us into quantum mechanical applications once we notice that the Schrödinger equation of quantum mechanics is itself a wave equation: the technique extends to the non-relativistic scattering of a single particle by an external potential. Once we've used Green's functions to treat scattering we can be very excited, because in particle physics scattering is how we investigate the properties of the elementary particles. Particle interactions are multiple scattering processes, and forces are transmitted via quantum fields. The propagation of fields between points is exactly what Green's functions were invented for, and indeed Green's functions appear in modern quantum field theories: known as Feynman propagators in this context, they are a standard tool in modern particle physics.

But going back to Poisson's equation above, we can generalize the equation and its solution, the Green's function, with a modicum of additional mathematical machinery. This procedure and its associated notation are well covered in Mathematical Methods in The Physical Sciences, Section 9.4. Once generalized, it's easy to spot candidate applications for the Green's function methodology: physical situations with equations of a similar form.

Once we have found an equation of the right form that we'd like to solve, the increased abstraction lets us show that if we construct a function $G(x,t)$ which is:

1. piecewise over a defined interval
2. whose pieces satisfy the boundary conditions on the interval
3. whose pieces patch together nicely (i.e. fit together continuously at some location $t$)
4. but not too nicely (i.e. their derivative is discontinuous at $t$)

then we can use $G(x,t)$ to solve the original equation. More precisely, we take a weighted sum of $G$'s values over our defined interval. In our electrostatics terminology, $G$ corresponds to a weighting function that enhances or reduces the effect of a charge element according to its distance from the source. Everything we need from a solution has now been built into the construction of $G$: it sounds like black magic, but it's not. Convincing oneself mathematically is a conceptually straightforward procedure, although a bit involved with the full generalized machinery: simply plug the constructed solution back into the original equation and see that it is satisfied.
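As a concrete illustration of properties 1–4 (a toy example of my own, not one worked in the sources above), take the boundary-value problem $y'' = f(x)$ on $[0,1]$ with $y(0)=y(1)=0$, whose Green's function is the classic piecewise form $G(x,t) = x(t-1)$ for $x \le t$ and $t(x-1)$ for $x \ge t$:

```python
import numpy as np

def G(x, t):
    """Green's function for y'' = f on [0, 1] with y(0) = y(1) = 0.
    It is piecewise (property 1), vanishes at both endpoints (property 2),
    is continuous at x = t (property 3), and its x-derivative jumps by
    exactly 1 across x = t (property 4)."""
    return np.where(x <= t, x * (t - 1.0), t * (x - 1.0))

# Property 4: the slope is t0 - 1 below the patch point and t0 above it.
t0, h = 0.3, 1e-6
jump = (G(t0 + h, t0) - G(t0, t0)) / h - (G(t0, t0) - G(t0 - h, t0)) / h
assert np.isclose(jump, 1.0, atol=1e-4)

# The solution is the weighted sum y(x) = integral of G(x, t) f(t) dt,
# here evaluated by a simple quadrature on a grid.
t = np.linspace(0.0, 1.0, 2001)
f = np.ones_like(t)                          # constant unit forcing
y = (G(t[:, None], t[None, :]) * f).sum(axis=1) * (t[1] - t[0])

# For f = 1 the exact solution of y'' = 1, y(0) = y(1) = 0 is x(x - 1)/2.
assert np.allclose(y, t * (t - 1.0) / 2.0, atol=1e-6)
```

Note that this $G$ is also symmetric in its arguments, just as in the electrostatic example.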

# Physical Intuition

Here I'll consider a simpler, less general example to develop some physical intuition about what the Green's function actually represents: I'll write down the mathematical form of what I have up to now simply referred to as the solution, and then concisely go through the calculation checking that our constructed solution actually satisfies the desired equation.

Consider the differential equation

(1) $y'' + \omega^2 y = f(t)$,

where $f(t)$ is some given forcing function. Moreover, let's impose some simple initial conditions, namely $y(0)=y'(0)=0$.

Next we're going to do something a bit strange, the reason for which will become clear. Namely, we're going to rewrite $f(t)$ as an integral over a delta function. Recall from the definition of the delta function that $\int_a^b \phi(t) \delta(t-t_0)\,dt = \phi(t_0)$ if $t_0$ lies within the integration limits, and $0$ otherwise. In this formulation we can think of the force $f(t)$ as the limiting case of a whole sequence of impulses.

(2) $f(t)=\int_a^b f(t') \delta(t'-t)\,dt'$.
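Numerically, the sifting property behind (2) is easy to see by replacing $\delta$ with a narrow normalized Gaussian, a standard delta sequence (the code below is just an illustrative sketch of my own):

```python
import numpy as np

def delta_approx(t, t0, eps):
    """A narrow normalized Gaussian: tends to delta(t - t0) as eps -> 0."""
    return np.exp(-0.5 * ((t - t0) / eps) ** 2) / (eps * np.sqrt(2 * np.pi))

def sift(phi, t0, eps, n=200001):
    """Numerically integrate phi(t) * delta_approx(t, t0, eps) over t."""
    t = np.linspace(t0 - 10.0, t0 + 10.0, n)
    return np.sum(phi(t) * delta_approx(t, t0, eps)) * (t[1] - t[0])

# As eps shrinks, the integral approaches phi(t0) = cos(1), about 0.5403:
for eps in (1.0, 0.1, 0.01):
    print(eps, sift(np.cos, 1.0, eps))
```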

Now that we have formulated this new, strange way of thinking of the forcing function, acquiring many more delta functions than we might have initially bargained for, let's simplify things again by considering a single delta function: what if $f(t)=\delta(t'-t)$? We now solve our differential equation (1) for this $f(t)$, which corresponds to a unit impulse at $t'$. Solving this equation is rather easy with a sleight of hand: we simply define the solution to be a function which we call $G(t,t')$. That is, $G(t,t')$ is the solution to

(3) $\frac{d^2}{dt^2} G(t,t') + \omega^2 G(t,t') = \delta(t'-t)$.

Finally, given some forcing function $f(t)$, we try to find the solution of (1) by simply adding up the responses to many such impulses, guessing that the final solution is of the form

(4) $y(t)=\int_0^\infty G(t,t')f(t')\,dt'$.

For now this is just a guess, but we can show it's correct by following the strategy of plugging it into the original equation (1) and showing that it is indeed satisfied:

• Substitute (4) into (1): $y'' + \omega^2 y = (\frac{d^2}{dt^2}+\omega^2)y=(\frac{d^2}{dt^2}+\omega^2)\int_0^\infty G(t,t')f(t') dt'=\int_0^\infty (\frac{d^2}{dt^2}+\omega^2)G(t,t')f(t') dt'$
• Use (3) to simplify: $y'' + \omega^2 y=\int_0^\infty \delta(t'-t) f(t')dt'$
• Use (2) to finish the proof: $y'' + \omega^2y=f(t)$

Thus (4) is the solution of (1). This finally gives us an easier way to imagine the role the Green's function is playing in the solution: it is the response of the system to a unit impulse at $t=t'$, and we have shown that the final solution is a sum of such responses.
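For the initial conditions $y(0)=y'(0)=0$, the Green's function of (3) can be written down explicitly: it is the standard causal form $G(t,t') = \sin(\omega(t-t'))/\omega$ for $t > t'$ and $0$ for $t < t'$ (the oscillator is at rest until the impulse arrives, then rings at its natural frequency). A quick numerical check of (4) against the exact solution, in a sketch of my own:

```python
import numpy as np

OMEGA = 2.0

def G(t, tp):
    """Causal Green's function of (3): the oscillator rests until the
    unit impulse at t', then rings as sin(omega (t - t')) / omega."""
    return np.where(t >= tp, np.sin(OMEGA * (t - tp)) / OMEGA, 0.0)

def y(t, f, n=20001):
    """Equation (4): superpose the impulse responses, weighted by f."""
    tp = np.linspace(0.0, t, n)
    return np.sum(G(t, tp) * f(tp)) * (tp[1] - tp[0])

# For f(t') = 1, the exact solution of (1) with y(0) = y'(0) = 0 is
# (1 - cos(omega t)) / omega^2; the Green's-function integral matches it.
for t in (0.5, 1.0, 3.0):
    exact = (1.0 - np.cos(OMEGA * t)) / OMEGA ** 2
    assert abs(y(t, np.ones_like) - exact) < 1e-3
```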

# References

1. George Green (1828). "An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism". Reprinted in Crelle's Journal. arXiv: 0807.0088v1
