Calculating hard answers.
Differential equations arise in many places in physics, so much so that they have been referred to as the language of the universe, but many of them have no analytical solution. The simplest example is the pendulum: without the small angle approximation, the equation of motion is \[ \ddot\theta = -\frac{g}{l}\sin(\theta) .\] Finding where the pendulum will be at a given future (or past) time is solved by predicting its position a small instant of time ahead of some starting conditions, and repeating. Other problems from classical and quantum mechanics yield similar ordinary differential equations with a single boundary condition. The easiest way of finding how these systems evolve is the Euler method.
The Euler method uses the first derivative of an equation to solve for the next position. It is derived from the definition of the derivative, \[ f'(x) \approx \frac{ f(x+h)-f(x)}{h} \] and solving for \(f(x+h)\), \[ f(x+h) = f(x) + f'(x)h \] where \(h\) is the step size.
To solve the pendulum above, we have to use the Euler method twice: first using the second derivative to step the first derivative, \[ \begin{align} \dot\theta_{i+1} &= \dot\theta_{i} + \ddot\theta_i h \\ &= \dot\theta_{i} - \frac{g}{l}\sin(\theta_i) h \end{align}\] Here, when \(i = 0\), the value of \(\dot\theta_0\) must be given as an initial condition. For a single boundary condition the value is whatever we choose it to be. Then, to solve for the position, we use the Euler method with the newly found first derivative, \[ \begin{align} \theta_{i+1} &= \theta_i + \dot\theta_{i+1} h \\ &= \theta_i + \dot\theta_{i}h - \frac{g}{l}\sin(\theta_i) h^2 \end{align}\] Again, the value of \(\theta_0\) is freely chosen.
Summarised, \[ \begin{array}{rl} \dot\theta_{i+1} &= \dot\theta_{i} - \frac{g}{l}\sin(\theta_i) h\\ \theta_{i+1} &= \theta_i + \dot\theta_{i}h - \frac{g}{l}\sin(\theta_i) h^2 \end{array}\]
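The summarised update rule translates into a few lines of Python. This is a minimal sketch; the step size \(h\), the number of steps, and the pendulum parameters are arbitrary illustration values. Because \(\theta_{i+1}\) uses the already-updated \(\dot\theta_{i+1}\), the velocity update comes first in the loop.

```python
import math

def euler_pendulum(theta0, omega0, g=9.81, l=1.0, h=1e-3, steps=10000):
    """Evolve the pendulum with the Euler update summarised above.

    theta0 and omega0 are the freely chosen initial angle and angular
    velocity. Returns the list of angles at each step.
    """
    theta, omega = theta0, omega0
    angles = [theta]
    for _ in range(steps):
        omega = omega - (g / l) * math.sin(theta) * h  # update velocity first
        theta = theta + omega * h                      # then position
        angles.append(theta)
    return angles
```

Updating the velocity before the position (as the summarised equations do) keeps the oscillation amplitude bounded, which the naive ordering does not.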
The Runge-Kutta method (RK) uses the same principles as the Euler method, but instead evaluates the derivative at several intermediate points and takes a weighted average of those values to achieve a better and more stable prediction. Given \(\dot y(x) = f(x, y) \text{ and } y(x_0) = y_0\), the fourth-order update is \[ \begin{array}{cl} y_{n+1} &= y_n + \frac{1}{6}h(k_1 + 2k_2 + 2k_3 + k_4) \\ k_1 &= f(x_n, \; y_n ) \\ k_2 &= f( x_n + \frac{1}{2}h, \; y_n + \frac{h}{2} k_1) \\ k_3 &= f( x_n + \frac{1}{2}h, \; y_n + \frac{h}{2}k_2) \\ k_4 &= f( x_n + h, \; y_n + h k_3 ) \end{array} \]
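A single step of the method above maps directly onto code. In this sketch, \(f\) is assumed to be any Python callable `f(x, y)`:

```python
def rk4_step(f, x, y, h):
    """One fourth-order Runge-Kutta step for dy/dx = f(x, y)."""
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h / 2 * k1)
    k3 = f(x + h / 2, y + h / 2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

For example, stepping \(\dot y = y\), \(y(0)=1\) from \(x=0\) to \(x=1\) with \(h=0.01\) reproduces \(e\) to many digits, far better than the Euler method at the same step size.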
The methods developed for ODEs with a single boundary condition can be extended to the case with two boundary conditions. With two boundary conditions, the initial values must be chosen so that both sides match. This gives rise to what is known as the Shooting and Matching method (S+M). Using RK or a similar algorithm, the computer chooses a set of initial conditions, then computes the values at the second boundary (this is the shooting part); if the values don't match (the matching part), the computer adjusts the original initial values and tries again.
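A minimal sketch of this idea, using bisection on the unknown initial slope for the matching part. It assumes the miss distance at the far boundary changes sign between the two trial slopes, and uses a simple Euler integrator rather than RK just to keep the sketch short:

```python
import math

def shoot(slope, f, x0, y0, x1, h=1e-3):
    """Integrate y'' = f(x, y) from x0 to x1 starting at y(x0) = y0
    with the guessed slope; return the value reached at x1."""
    n = round((x1 - x0) / h)
    x, y, v = x0, y0, slope
    for _ in range(n):
        v += f(x, y) * h  # update velocity, then position (Euler)
        y += v * h
        x += h
    return y

def shoot_and_match(f, x0, y0, x1, y1, lo, hi, tol=1e-8):
    """Bisect on the initial slope until the far boundary matches y1.
    Assumes the miss distance changes sign between slopes lo and hi."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        miss_mid = shoot(mid, f, x0, y0, x1) - y1
        miss_lo = shoot(lo, f, x0, y0, x1) - y1
        if miss_mid * miss_lo > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For \(y'' = -y\) with \(y(0)=0\) and \(y(1)=1\), the exact solution is \(y = \sin(x)/\sin(1)\), so the matched slope converges to \(1/\sin(1)\).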
During class we discussed using the S+M method to solve the Schrödinger equation, varying the energy of the unknown wave function until a reasonable guess for the energy was found.
The Fourier Transform (FT) is the conversion of a function \(f(t)\) into an infinite linear combination of cosines and imaginary sines in a new variable. When the domain of the function is time, the transform variable represents frequency; in position space, the transform variable represents the wave number. Since many, but not all, uses of the FT are on time-domain functions, I assume the functions I talk about are in the time domain. The FT can also be used on heat equations, but I didn't cover that during the class so I won't talk about it. The Fourier Transform of \(f(t)\) is written: \[ F(\xi) = \int^{\infty}_{-\infty} f(t) e^{-2i\pi t \xi} dt\]
Since computers cannot compute an infinite number of values, a suitable discretization needs to be found, which is called the discrete Fourier transform (DFT). Thankfully, the solution is apparent from the problem at hand. Considering the slowest and the fastest frequencies that an array of \(N\) values with a given time step can represent leads naturally to two limits: the slowest frequency completes one cycle over the whole array, and the fastest independently resolvable frequency is \(\frac{N}{2}\) cycles per record (the Nyquist frequency). Starting from the slowest frequency we can multiply by an integer \(n\), \[ \frac{1}{N} \le \frac{n}{N} \le 1 ,\] but consider plugging \(n = N\) and \(n = 2N\) into the exponential, \[ e^{-2i\pi \xi \frac{N}{N}} = e^{-2i\pi \xi \frac{2N}{N}} .\] This demonstrates that the exponential is periodic in \(N\), and therefore \(N\) frequencies are sufficient, \[ 1 \le n \le N, \] which can be rewritten as \[ 0 \le n \le N-1 .\] The DFT can then be formalized as: \[ X_\xi = \sum_{n=0}^{N-1} x_n e^{-2i\pi \xi \frac{n}{N}} \quad (1) \] where \(x_n\) is the \(n\)th value of the array and \(X_\xi\) is the coefficient of the frequency \(\xi\).
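Equation (1) can be evaluated directly in \(O(N^2)\) time; a minimal sketch:

```python
import cmath

def dft(x):
    """Direct evaluation of equation (1):
    X_xi = sum_n x_n * exp(-2*pi*i * xi * n / N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * xi * n / N)
                for n in range(N))
            for xi in range(N)]
```

As a check, a pure cosine of frequency \(k\) sampled over \(N\) points produces coefficients of magnitude \(N/2\) at \(\xi = k\) and \(\xi = N-k\), and (near) zero everywhere else.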
I note here that these cosines and imaginary sines of different frequencies form an orthogonal set, but I don't prove that here.
Direct application of the above formula leads to the easiest, but not the fastest, implementation of the FT.
The Fourier Transform finds use in image processing, signal processing, function convolution, and partial differential equations. During the class, I looked at using it to break a periodic function into its frequency components and at how to extend the one-dimensional FT to the multidimensional case.
For the two-dimensional case, the DFT applies the one-dimensional transform along each axis: \[ X_{\xi\chi} = \sum_{m=0}^{M-1}\sum_{n=0}^{N-1} x_{nm}\, e^{-2i\pi\left(\frac{\xi n}{N} + \frac{\chi m}{M}\right)} = \sum_{n=0}^{N-1}\sum_{m=0}^{M-1} x_{nm}\, e^{-2i\pi\left(\frac{\xi n}{N} + \frac{\chi m}{M}\right)}\] where \(M\) is the size of the second axis and \(\chi\) is the frequency along it, defined by the bounds \[ 0 \le \chi \le M-1 .\]
The two-dimensional case was discussed for its use in image processing: by transforming an image to the frequency domain, removing the highest frequencies, and transforming back, noise can be removed from the image at the cost of blurring it slightly. Its use for convolving two functions was also discussed but never implemented.
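The denoising idea can be sketched with NumPy's FFT routines (a sketch assuming NumPy is available; the fraction of frequencies kept is an arbitrary illustration parameter):

```python
import numpy as np

def lowpass_denoise(image, keep=0.1):
    """Remove the highest spatial frequencies from a 2-D array.

    keep is the fraction of the frequency range retained along each axis.
    fftshift puts the zero frequency at the centre, so a centred square
    mask keeps only the low frequencies.
    """
    F = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = F.shape
    mask = np.zeros_like(F)
    r, c = int(rows * keep), int(cols * keep)
    mask[rows // 2 - r: rows // 2 + r + 1,
         cols // 2 - c: cols // 2 + c + 1] = 1
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```

A smooth image (for example, a constant one) passes through unchanged, since all of its content sits in the lowest frequencies; sharp noise is attenuated, at the cost of blurring edges.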
The improvement on the DFT is known as the Fast Fourier Transform (FFT). Without getting into too much detail, the trick for performing the DFT quickly is to split the DFT in half, forming two smaller versions of the original DFT. By continuing this recursion, the running time is reduced from \(O(N^2)\) to \(O(N\log N)\). The implementation of the FFT requires that the array to be transformed have length \(2^n\), but this requirement is easy to satisfy, since padding the array with \(0\)s adds no new frequency content.
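A minimal recursive sketch of the radix-2 splitting described above (real implementations avoid the recursion and list copies, but the structure is the same):

```python
import cmath

def fft(x):
    """Cooley-Tukey radix-2 FFT; len(x) must be a power of two."""
    N = len(x)
    if N == 1:
        return list(x)
    even = fft(x[0::2])  # DFT of the even-indexed half
    odd = fft(x[1::2])   # DFT of the odd-indexed half
    twiddle = [cmath.exp(-2j * cmath.pi * k / N) for k in range(N // 2)]
    return ([even[k] + twiddle[k] * odd[k] for k in range(N // 2)] +
            [even[k] - twiddle[k] * odd[k] for k in range(N // 2)])
```

Each level of the recursion does \(O(N)\) work and there are \(\log_2 N\) levels, which is where the \(O(N\log N)\) running time comes from.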
There are many uses of matrices, the biggest being to solve the eigenvalue problem. Any system of linear equations can be solved with matrix methods, and so can the problem of projecting a set of functions onto the basis of another set of functions. One such example, in quantum mechanics, is using a linear combination of known orthonormal wave functions to solve a perturbation problem. This was discussed in the book Computational Physics by J.M. Thijssen and then implemented.
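As a sketch of the eigenvalue problem in practice, a small symmetric matrix standing in for a Hamiltonian can be diagonalized with NumPy (the matrix here is a made-up illustration, not one from the book):

```python
import numpy as np

# Hypothetical 2x2 symmetric "Hamiltonian" in an orthonormal basis.
H = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh is for symmetric/Hermitian matrices: it returns the eigenvalues
# in ascending order and the orthonormal eigenvectors as columns.
energies, states = np.linalg.eigh(H)
# For this matrix the eigenvalues are 1 and 3, with eigenvectors
# (1, -1)/sqrt(2) and (1, 1)/sqrt(2).
```

The columns of `states` give the coefficients of each eigenstate in the chosen basis, which is exactly the projection of one set of functions onto another mentioned above.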
I feel that the course could have gone better, and that was mostly due to my participation, or lack thereof. Still, it was informative, and the work I got done and the algorithms I implemented have created for me a foundation on which I can explore these topics further. My goal for next semester is to finish the implementation of the Hartree-Fock program which was begun during this semester. The beginning of the theory was