Interplay of Theory Development with Physical Reality

What is the relation of physical theories to the actual physical reality they describe? A naive answer would be something like: “Theories are formed by making enough experiments and deducing a common law from the results. As more experiments are made, the theory and its laws become a more correct description of reality.” This, however, is a simplistic picture of the situation: no amount of supporting special cases amounts to a logical proof of a general statement, and usually one already needs at least some degree of a “pre-theory” before any experiments can be made at all.

Suppose someone wants to study the behavior of matter at different temperatures. The quantities measured could involve the thermal expansion of a metal rod, the pressure change upon heating a gas in a rigid container, or something similar. Actually, to even be able to measure a temperature, one has to make several assumptions about the physical laws related to it. When we immerse one end of a thermometer in a kettleful of hot water, we assume that only the temperature of the thermometer changes significantly, and that the hot water stays at a nearly constant temperature as the thermometer and the water reach thermal equilibrium. So there we already need some kind of common-sense notion of heat capacity, whatever we decide to call that quantity at that point of constructing a theory of thermodynamics. In particular, we assume that if an object has either a very small mass or a small volume, it takes only a small amount of “thermal energy” to change its temperature.

Also, the concept of thermal equilibrium itself requires some assumptions that are not obvious in principle. The Zeroth Law of Thermodynamics says that if Object 1 and Object 2 are in thermal equilibrium with each other, and Object 2 is in equilibrium with Object 3, then Objects 1 and 3 are also in mutual thermal equilibrium. This is a kind of equivalence relation, but this time in physics instead of pure mathematics (another example of an equivalence relation in physics is found when forming the set of all scalar-vector potential combinations that produce the same electric and magnetic fields). We need to make this assumption before we can measure temperatures based on the thermal expansion of some material, because the equilibration has to be transmitted forward through the material that we are trying to expand by heating it. So, to measure temperature we need some kind of intuitive notion of thermal equilibrium, and we must also assume that the length of a metal rod or a column of mercury or ethanol is a single-valued function of temperature in the range of conditions we are doing the experiments in. This may seem like a chicken-and-egg problem, where we need to define temperature to study thermal expansion and also need to know a law of thermal expansion to be able to define temperature. The same situation appears in the SI definition of the unit of length as an exact fraction of the distance travelled by light in one second, despite the fact that people measured distances for a long time before anyone even got the idea of the constancy of the speed of light. This circularity doesn’t cause as much trouble as one might think: we can try different kinds of theories that combine assumptions about temperature and thermal expansion, test each of them extensively while looking for a contradiction between theory and experimental fact, and trust that in the case of wrong assumptions it will become apparent that something fishy is going on. And of course, if a theory contains a logical contradiction with itself, it must be wrong no matter what the experiments tell us.

Some things that are usually assumed about thermal equilibration include that it proceeds monotonically towards its conclusion, and doesn’t do strange things like making the thermometer reading oscillate around some value before settling to equilibrium (in other words, the thermal impact given to the thermometer by the hot liquid doesn’t cause an effect similar to how a mechanical impact affects a physical pendulum in the absence of supercritical damping). This assumption is not always true in thermodynamics, as oscillating chemical reactions like the Belousov-Zhabotinsky (BZ) reaction show (the first academic papers about oscillating reactions were initially rejected from publication because they went so much against the common-sense notions about chemical kinetics and thermodynamics of the time).

The temperature/thermal equilibrium example was a rather simple one. A much harder problem is the difference between classical Newtonian mechanics and the theory of relativity or quantum mechanics, where one finds it necessary to accept physical laws that are very difficult to form a mental image of and that go against conventional common sense (although it must be admitted that even the classical Newton’s first law is not intuitively obvious on the surface of the Earth, where frictional forces damp almost every form of motion).

Philosophical theories about the nature of successive scientific theories include Karl Popper’s, in which the most important feature of a scientific theory is its falsifiability: the theory must be “vulnerable” to experimental evidence, in the sense that it can’t be cleverly bent to fit any possible experimental result (in the same way that illogical features in a religious belief system can be justified by saying “Inscrutable are the Lord’s ways”). Other philosophers who have investigated the subject include Thomas Kuhn and Paul Feyerabend. In Kuhn’s theory in particular, the very concept of “objective physical reality” is questioned, and it is said that to talk about physical reality one always has to do so in reference to some particular theoretical framework.

Actually I recently got (for free) a book related to this subject, C. Dilworth’s “Scientific Progress – A Study Concerning the Nature of the Relation Between Successive Scientific Theories”, and it seems to contain a lot of interesting material but I haven’t managed to read very far through it. I hope the links in this blog post help someone who’s trying to find info about this subject.


The problematic radial momentum operator

In quantum mechanics, the position coordinates x,y,z of a particle are replaced with position operators \large \hat{x},\hat{y},\hat{z}, and the components of the momentum vector \mathbf{p} are replaced with the operators \large \hat{p_x},\hat{p_y},\hat{p_z}. The most important property of these operators is the commutation relation between a coordinate and the corresponding momentum component:

\large [\hat{x},\hat{p_x}] = i\hbar

In the position representation, the momentum operator has the form of a differential operator

\large \hat{p_x} = -i\hbar \frac{\partial}{\partial x}

and the \hat{x} operator is just a multiplication of the wavefunction with the variable x.

Can these results be extended to coordinate systems that are not Cartesian? Consider a plane polar coordinate system where the position is given as the pair \large (r,\theta), with

\large x = r\cos \theta
\large y = r\sin \theta

The “momentum” corresponding to the variable \large \theta is obviously the angular momentum operator

\large \hat{L}_z = -i\hbar \frac{\partial}{\partial \theta}.

But what about the radial momentum operator? It would have to be

\large \hat{p}_r = -i\hbar \frac{\partial}{\partial r}

in the position representation, but is it an acceptable quantum operator?

Actually, it is not. An operator corresponding to a measurable quantity has to be Hermitian, and it’s easy to see that if we have a wavefunction similar to a hydrogen atom ground state

\large \psi (r) = Ae^{-ar}

where a is a real constant (positive, so that the state is normalizable), the state is an eigenstate of \hat{p}_r with the imaginary eigenvalue \large ia\hbar. Since a Hermitian operator can only have real eigenvalues, \hat{p}_r can’t be Hermitian, and therefore it’s not possible to use the radial momentum operator as an observable.
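Written out, the eigenvalue calculation is just one line:

\large \hat{p}_r \psi (r) = -i\hbar \frac{\partial}{\partial r}\left(Ae^{-ar}\right) = ia\hbar Ae^{-ar} = ia\hbar\,\psi (r)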

Another fun way to see this is to consider a “radial translation operator”, which is generated by the radial momentum:

\large \hat{T}_r (\Delta r) = \exp \left(\frac{i\Delta r \hat{p}_r}{\hbar}\right)

and operate with it on a rotation symmetric function \psi (r). The result is

\large \hat{T}_r (\Delta r)\psi (r) = \psi (r + \Delta r)

However, because a radius coordinate always has to be a nonnegative number, we have irreversibly lost all information about the values of the function \large \psi (r) for \large r<\Delta r here, which means that this radial translation operator does not have an inverse, and therefore can’t be unitary, as it would have to be if \large \hat{p}_r were Hermitian!
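As a quick numerical sanity check, one can also compare \langle f|\hat{p}_r g\rangle and \langle \hat{p}_r f|g\rangle for two well-behaved test functions on the half-line; the difference is a boundary term at r = 0, so the two are not equal. A minimal sketch in R, assuming the simple integration measure dr and \hbar = 1 (the test functions and the discretization are my own choices for illustration):

# Numerical check that p_r = -i d/dr is not Hermitian on 0 < r < infinity
# (simple measure dr, hbar = 1, test functions f = exp(-r), g = exp(-2r))
dr <- 0.001
r <- seq(0.0005, 50, by=dr) #discretized radial coordinate (midpoint rule)
f <- exp(-r)
g <- exp(-2*r)
df <- -exp(-r) #derivative of f
dg <- -2*exp(-2*r) #derivative of g
lhs <- sum(Conj(f)*(-1i)*dg)*dr # <f | p_r g>
rhs <- sum(Conj((-1i)*df)*g)*dr # <p_r f | g>
print(lhs - rhs) #approximately 1i = i*f(0)*g(0), so p_r is not Hermitian here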

Numerical solution of PDE:s, Part 7: 2D Schrödinger equation

Haven’t been posting for a while, but here’s something new… Earlier I showed how to solve the 1D Schrödinger equation numerically in different situations. Now I’m going to show how to calculate the evolution of a 2D wavepacket in a potential energy field that has been constructed to mimic the classical “two-slit experiment”, which shows how low-mass particles like electrons can exhibit interference similar to that of classical waves (sound, light, waves on a water surface, and so on).

A 2D Schrödinger equation for a single particle in a time-independent background potential V(x,y) is

\large i\hbar \frac{\partial \Psi (x,y,t)}{\partial t} = -\frac{\hbar^2}{2m}\left(\frac{\partial^2 \Psi (x,y,t)}{\partial x^2} + \frac{\partial^2 \Psi (x,y,t)}{\partial y^2}\right) + V(x,y)\Psi (x,y,t)

where the particle mass is set to 1 and Planck’s constant h to 2\pi, i.e. \hbar = 1.

To solve this numerically, we need the Crank-Nicolson method, as was the case when solving the 1D problem. More specifically, the linear system to be solved is

\large (1 + 4\kappa_1 + \kappa_{2;ij})\Psi_{i,j;k+1} - \kappa_1 \left(\Psi_{i+1,j;k+1} + \Psi_{i-1,j;k+1} + \Psi_{i,j+1;k+1} + \Psi_{i,j-1;k+1}\right) = (1 - 4\kappa_1 - \kappa_{2;ij})\Psi_{i,j;k} + \kappa_1 \left(\Psi_{i+1,j;k} + \Psi_{i-1,j;k} + \Psi_{i,j+1;k} + \Psi_{i,j-1;k}\right)

with

\large \kappa_1 = \frac{i\Delta t}{2\Delta x^2}, \qquad \kappa_{2;ij} = i\Delta t V_{ij}

where the wavefunction now has two position indices and one time index, and the potential energy has only two position indices.
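In the code below, the two position indices are flattened into a single vector index so that the time stepping can be written as an ordinary matrix equation; with n_x lattice points in each direction, the grid point (j,k) is stored at position

\large n = (j-1)\,n_x + k

which is why the wavefunction appears in the code as a vector of length n_x^2, indexed as psi[(j-1)*nx+k].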

To form a model of the two-slit experiment, we choose a domain 0 < x < 6; 0 < y < 6 and make a potential energy function defined by

IF (x < 2.2 OR x > 3.8 OR (x > 2.7 AND x < 3.3)) THEN IF (3.7 < y < 4) THEN V(x,y) = 30

IF (x < 0.5 OR x > 5.5 OR y < 0.5 OR y > 5.5) THEN V(x,y) = 30

Otherwise V(x,y) = 0.

which corresponds to having hard walls surrounding the domain and a barrier with two holes around the line y = 3.85.

For an initial condition, we choose a Gaussian wavepacket that has a nonzero expectation value of the momentum in y-direction:

\large \Psi (x,y,0) = A\exp\left(-a\left((x-x_0)^2 + (y-y_0)^2\right) + ik_y y\right)

An R-Code that solves this problem for a time interval 0 < t < 1 is

library(graphics) #load the graphics library needed for plotting

lx <- 6.0 #length of the computational domain in both x and y directions
lt <- 1.0 #length of the simulation time interval
nx <- 50 #number of discrete lattice points in each direction (value assumed here)
nt <- 100 #number of timesteps (value assumed here)
dx <- lx/nx #length of one discrete lattice cell
dt <- lt/nt #length of timestep

V = matrix(0+0i, nrow=nx, ncol=nx) #potential energies at the discrete points

for(j in c(1:nx)) {
for(k in c(1:nx)) {
if((j*dx < 2.2)||(j*dx > 3.8)||((j*dx>2.7) && (j*dx<3.3))) {
if((k*dx>3.7) && (k*dx<4.0)) {
V[j,k] = 30+0i #No significant density is going to go through these barriers
}
}
if((j*dx>5.5) || (j*dx<0.5) || (k*dx>5.5) || (k*dx<0.5)) {
V[j,k] = 30+0i
}
}
}

kappa1 = (1i)*dt/(2*dx*dx) #an element needed for the matrices
kappa2 = matrix(0+0i, nrow=nx, ncol=nx) #another element

for(j in c(1:nx)) {
for(k in c(1:nx)) {
kappa2[j,k] <- as.complex(kappa1*2*dx*dx*V[j,k])
}
}

psi = as.complex(c(1:(nx*nx))) #array for the wave function values, one element per grid point

for(j in c(1:nx)) {
for(k in c(1:nx)) {
psi[(j-1)*nx+k] = as.complex(exp(-(j*dx-3)*(j*dx-3)-(k*dx-2)*(k*dx-2)+(5i)*k*dx)) #Gaussian wavepacket moving in the y-direction (center, width and momentum are assumed values)
if((j*dx > 5.5)||(j*dx < 0.5)||(k*dx > 5.5)||(k*dx < 0.5)) {
psi[(j-1)*nx+k] = as.complex(0)
}
}
}

xaxis <- c(1:nx)*dx #the x values corresponding to the discrete lattice points

A = matrix(0+0i, nrow=nx*nx, ncol=nx*nx) #matrix for forward time evolution
B = matrix(0+0i, nrow=nx*nx, ncol=nx*nx) #matrix for backward time evolution

for(j in c(1:nx)) { #construct the matrices (same pattern as in the 1D code, now with four nearest neighbours)
for(k in c(1:nx)) {
n = (j-1)*nx+k
A[n,n] = 1 + 4*kappa1 + kappa2[j,k]
B[n,n] = 1 - 4*kappa1 - kappa2[j,k]
if(k < nx) { A[n,n+1] = -kappa1; B[n,n+1] = kappa1 } #neighbour in the y-direction
if(k > 1) { A[n,n-1] = -kappa1; B[n,n-1] = kappa1 }
if(j < nx) { A[n,n+nx] = -kappa1; B[n,n+nx] = kappa1 } #neighbour in the x-direction
if(j > 1) { A[n,n-nx] = -kappa1; B[n,n-nx] = kappa1 }
}
}

P = matrix(0, nrow=nx, ncol=nx) #probability density on the 2D grid
IP = matrix(0, nrow=4*nx, ncol=4*nx) #interpolated density with 4 times more pixels in each direction

for (k in c(1:nt)) { #main time stepping loop

sol <- solve(A,B%*%psi) #solve the system of equations

for (l in c(1:(nx*nx))) {
psi[l] <- sol[l]
}

for(l in c(1:nx)) {
for(m in c(1:nx)) {
P[l,m] = abs(psi[(l-1)*nx+m])^2 #probability density at grid point (l,m)
if(P[l,m] > 5) P[l,m] = 2
}
}

for(l in c(1:(nx-1))) {
for(m in c(1:(nx-1))) { #make a bitmap with 4 times more pixels, using linear interpolation
IP[4*l-3,4*m-3] = P[l,m]
IP[4*l-2,4*m-3] = P[l,m]+0.25*(P[l+1,m]-P[l,m])
IP[4*l-1,4*m-3] = P[l,m]+0.5*(P[l+1,m]-P[l,m])
IP[4*l,4*m-3] = P[l,m]+0.75*(P[l+1,m]-P[l,m])
}
}

for(l in c(1:(4*nx))) {
for(m in c(1:(nx-1))) {
IP[l,4*m-2] = IP[l,4*m-3]+0.25*(IP[l,4*m+1]-IP[l,4*m-3])
IP[l,4*m-1] = IP[l,4*m-3]+0.5*(IP[l,4*m+1]-IP[l,4*m-3])
IP[l,4*m] = IP[l,4*m-3]+0.75*(IP[l,4*m+1]-IP[l,4*m-3])
}
}

jpeg(file = paste("plot_abs_",k,".jpg",sep="")) #save the image
image(IP, zlim = c(0,0.15))

dev.off()

}

The code produces, as its output, a sequence of image files where the probability density is plotted with colors. Some representative images from this sequence (converted to grayscale) are shown below:

A video of the time evolution is shown below:

The threshold for maximum white color has been chosen to be quite low, to make the small amount of probability density that crosses the barrier visible.

The discrete grid of points has been made quite coarse here to keep the computation time reasonable, and the resolution has been increased artificially by using linear interpolation between the discrete points.

So, now we’ve seen how to simulate the motion of 2D wavepackets moving around obstacles. In the next numerical methods post, I’ll go through the numerical solution of a nonlinear PDE.


Numerical solution of PDE:s, Part 6: Adiabatic approximation for quantum dynamics

Having solved the time-dependent Schrödinger equation both in real and imaginary time, we can move forward to investigate systems where the potential energy function V has an explicit time dependence in it:

\large i\frac{\partial \Psi (x,t)}{\partial t} = -\frac{1}{2}\frac{\partial^2 \Psi (x,t)}{\partial x^2} + V(x,t)\Psi (x,t)

In systems like this, the expectation value of the Hamiltonian operator doesn’t have to stay constant.

Time-dependent perturbation theory is one method for finding approximate solutions to this kind of problem, but here I will handle a simpler approach, which is called the adiabatic approximation.

Suppose that the potential energy function V(x,t) is known. Now, let’s say that we also know the solutions of the time-independent Schrödinger equation

\large -\frac{1}{2}\frac{\partial^2 \psi_n (x;t)}{\partial x^2} + V(x,t)\psi_n (x;t) = E_n (t)\psi_n (x;t)

for any value of t. I denote the solutions as \psi_n (x;t), where it is understood that x is a variable and t is a parameter. Now, if the function V(x,t) changes very slowly as a function of time, i.e. its partial derivative with respect to t is small at all points of the domain, we can use the adiabatic approximation, which says that if the initial state \Psi (x,0) is the ground state of the potential V(x,0), then the state at time t is (up to a phase factor) the ground state of the potential V(x,t).

\large \Psi (x,t) \approx \psi_0 (x;t)

So, we can change the ground state of one potential V_1 (x) into the ground state of another potential V_2 (x) by making a continuous change from V_1 (x) to V_2 (x) slowly enough.

Let’s test this by choosing the function V as

\large V(x,t) = 2(x - 3 - vt)^2

i.e. a Hookean potential whose minimum moves in the positive x-direction with constant speed v. If we set the wavefunction at t=0 to

\large \Psi (x,0) = \left(\frac{2}{\pi}\right)^{1/4} e^{-(x-3)^2}

which is the ground state corresponding to V(x,0), the time development of the wavepacket should be

\large \Psi (x,t) \approx \left(\frac{2}{\pi}\right)^{1/4} e^{-(x-3-vt)^2}

which means that it moves with the same constant speed as the bottom of the potential V. This can be calculated with the R code below:

library(graphics) #load the graphics library needed for plotting

lx <- 6.0 #length of the computational domain
lt <- 15.0 #length of the simulation time interval
nx <- 100 #number of discrete lattice points
nt <- 300 #number of timesteps
dx <- lx/nx #length of one discrete lattice cell
dt <- lt/nt #length of timestep

V = c(1:nx) #potential energies at discrete points

for(j in c(1:nx)) {
V[j] = as.complex(2*(j*dx-3)*(j*dx-3)) #harmonic potential
}

kappa1 = (1i)*dt/(2*dx*dx) #an element needed for the matrices
kappa2 <- c(1:nx) #another element

for(j in c(1:nx)) {
kappa2[j] <- as.complex(kappa1*2*dx*dx*V[j])
}

psi = as.complex(c(1:nx)) #array for the wave function values

for(j in c(1:nx)) {
psi[j] = as.complex(exp(-(j*dx-3)*(j*dx-3))) #Gaussian initial wavefunction
}

xaxis <- c(1:nx)*dx #the x values corresponding to the discrete lattice points

A = matrix(nrow=nx,ncol=nx) #matrix for forward time evolution
B = matrix(nrow=nx,ncol=nx) #matrix for backward time evolution

for(j in c(1:nx)) {
for(k in c(1:nx)) {
A[j,k]=0
B[j,k]=0
if(j==k) {
A[j,k] = 1 + 2*kappa1 + kappa2[j]
B[j,k] = 1 - 2*kappa1 - kappa2[j]
}
if((j==k+1) || (j==k-1)) {
A[j,k] = -kappa1 #off-diagonal elements
B[j,k] = kappa1
}
}
}

for (k in c(1:nt)) { #main time stepping loop

for(j in c(1:nx)) {
V[j] = as.complex(2*(j*dx-3-k*dt*0.05)*(j*dx-3-k*dt*0.05)) #time dependent potential
}

for(j in c(1:nx)) {
kappa2[j] <- as.complex(kappa1*2*dx*dx*V[j])
}

for(l in c(1:nx)) {
for(m in c(1:nx)) {
A[l,m]=0
B[l,m]=0
if(l==m) {
A[l,m] = 1 + 2*kappa1 + kappa2[m]
B[l,m] = 1 - 2*kappa1 - kappa2[m]
}
if((l==m+1) || (l==m-1)) {
A[l,m] = -kappa1
B[l,m] = kappa1
}
}
}
sol <- solve(A,B%*%psi) #solve the system of equations

for (l in c(1:nx)) {
psi[l] <- sol[l]
}

if(k %% 3 == 1) { #make plots of psi(x) on every third timestep
jpeg(file = paste("plot_",k,".jpg",sep=""))
plot(xaxis,abs(psi)^2,xlab="position (x)", ylab="Abs(Psi)^2",ylim=c(0,2))
title(paste("|psi(x,t)|^2 at t =",k*dt))
lines(xaxis, abs(psi)^2)
lines(xaxis, V)
dev.off()
}
}

and in the following sequence of images you can see that the approximation is quite good for v = 0.05

As an animation, this process looks like shown below:

By doing the same calculation again, but this time with v = 3, the image sequence looks like this:

where it is obvious that the approximation doesn’t work anymore.
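The faster-moving case needs only one change in the code: the speed 0.05 in the line that updates the potential inside the time stepping loop is replaced with 3, i.e. something like

V[j] = as.complex(2*(j*dx-3-k*dt*3)*(j*dx-3-k*dt*3)) #time dependent potential moving with v = 3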


Numerical solution of PDE:s, Part 5: Schrödinger equation in imaginary time

In the last post, I mentioned that the solution of the time dependent 1D Schrödinger equation

\large i\frac{\partial \Psi (x,t)}{\partial t} = -\frac{1}{2}\frac{\partial^2 \Psi (x,t)}{\partial x^2} + V(x)\Psi (x,t)

can be written by expanding the initial state \Psi (x,0) in the basis of the solutions of time-independent Schrödinger equation

\large -\frac{1}{2}\frac{d^2 \psi_n (x)}{dx^2} + V(x)\psi_n (x) = E_n \psi_n (x)

and multiplying each term in the expansion by a complex-valued time dependent phase factor:

\large \Psi (x,t) = \sum_n c_n e^{-iE_n t}\psi_n (x)

Now, assuming that the energies E_n are all positive and are ordered so that E_0 is the smallest of them, we get an interesting result by replacing the time variable t with an imaginary time variable s = it. The function \Psi (x,s) is then

\large \Psi (x,s) = \sum_n c_n e^{-E_n s}\psi_n (x)

which is a sum of exponentially decaying terms, of which the first one, c_0 \exp(-E_0 s)\psi_0 (x), decays the slowest. So, in the limit of large s, the wave function is

\large \Psi (x,s) \approx c_0 e^{-E_0 s}\psi_0 (x)

i.e. after normalizing it, it’s approximately the same as the ground state \psi_0. Also, if s is a large number, we have

\large \frac{\partial \Psi (x,s)}{\partial s} \approx -E_0 \Psi (x,s)

or

\large E_0 \approx -\frac{1}{\Psi (x,s)}\frac{\partial \Psi (x,s)}{\partial s}

So, here we have a way to use the TDSE to numerically calculate the ground state wavefunction and the corresponding energy eigenvalue for any potential energy function V(x). This is very useful, because the ground state usually can’t be solved analytically, except for some very simple potential energy functions such as the harmonic oscillator potential.

To test this imaginary time method, I will approximately calculate the ground state of an anharmonic oscillator, described by a Schrödinger equation

\large i\frac{\partial \Psi (x,t)}{\partial t} = -\frac{1}{2}\frac{\partial^2 \Psi (x,t)}{\partial x^2} + \left[0.25(x-3)^2 + 0.15(x-3)^4\right]\Psi (x,t)

As an initial state, I will choose a Gaussian function

\large \Psi (x,0) = Ce^{-2(x-3)^2}

and the computational domain is defined by 0 < x < 6, 0 < t < 3, \Delta x = 0.05, \Delta t = 0.01.

An R-Code that performs this calculation is shown below:

library(graphics) #load the graphics library needed for plotting

lx <- 6.0 #length of the computational domain
lt <- 3.0 #length of the simulation time interval
nx <- 120 #number of discrete lattice points
nt <- 300 #number of timesteps
dx <- lx/nx #length of one discrete lattice cell
dt <- (-1i)*lt/nt #length of imaginary timestep

V = c(1:nx) #potential energies at discrete points

for(j in c(1:nx)) {
V[j] = as.complex(0.25*(j*dx-3)*(j*dx-3)+0.15*(j*dx-3)*(j*dx-3)*(j*dx-3)*(j*dx-3)) #anharmonic potential
}

kappa1 = (1i)*dt/(2*dx*dx) #an element needed for the matrices
kappa2 <- c(1:nx) #another element

for(j in c(1:nx)) {
kappa2[j] <- as.complex(kappa1*2*dx*dx*V[j])
}

psi = as.complex(c(1:nx)) #array for the wave function values

for(j in c(1:nx)) {
psi[j] = as.complex(exp(-2*(j*dx-3)*(j*dx-3))) #Gaussian initial wavefunction
}

xaxis <- c(1:nx)*dx #the x values corresponding to the discrete lattice points

A = matrix(nrow=nx,ncol=nx) #matrix for forward time evolution
B = matrix(nrow=nx,ncol=nx) #matrix for backward time evolution

for(j in c(1:nx)) {
for(k in c(1:nx)) {
A[j,k]=0
B[j,k]=0
if(j==k) {
A[j,k] = 1 + 2*kappa1 + kappa2[j]
B[j,k] = 1 - 2*kappa1 - kappa2[j]
}
if((j==k+1) || (j==k-1)) {
A[j,k] = -kappa1
B[j,k] = kappa1
}
}
}

for (k in c(1:nt)) { #main time stepping loop

sol <- solve(A,B%*%psi) #solve the system of equations

for (l in c(1:nx)) {
psi[l] <- sol[l]
}

nrm = 0 #squared norm of the wave function, which decays in imaginary time
for (l in c(1:nx)) nrm <- nrm + dx*abs(psi[l])*abs(psi[l])
for (l in c(1:nx)) psi[l] <- psi[l]/sqrt(nrm) #renormalize the wave function after every imaginary timestep

if(k %% 3 == 1) { #make plots of psi(x) on every third timestep
jpeg(file = paste("plot_",k,".jpg",sep=""))
plot(xaxis, abs(psi)^2,xlab="position (x)", ylab="Abs(Psi)^2")
title(paste("|psi(x,t)|^2 at t =",k*dt))
lines(xaxis, abs(psi)^2)
lines(xaxis, V*max(abs(psi)^2))
dev.off()
}
}
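The energy eigenvalue estimate discussed above can be read off from the same run: when the wavefunction is renormalized on every step, the factor by which its squared norm decays during one imaginary timestep is approximately \exp(-2E_0 \Delta s). A minimal sketch of the corresponding estimate, using the variable nrm computed in the loop above:

ds <- lt/nt #length of one imaginary timestep
E0_estimate <- -log(nrm)/(2*ds) #ground state energy estimate from the norm decay of the last step
print(E0_estimate)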

And a set of plots of |\Psi (x,t)|^2 for several values of t look like this:

Here the shape of the anharmonic potential has been plotted to the same images.

The problem with this method for finding a ground state is that if the system has more degrees of freedom than a single coordinate x, the matrices in the linear systems needed in the time stepping quickly become very large and the calculation becomes prohibitively slow. To make things worse, the Crank-Nicolson time stepping is not easy to parallelize for multiple processors. However, there is another way to compute the evolution of a wave function in imaginary time, called Diffusion Monte Carlo (DMC), which is easily parallelizable. DMC is a practical way to calculate ground state wave functions of multi-particle systems such as a helium or a lithium atom.


Numerical solution of PDE:s, Part 4: Schrödinger equation

In the earlier posts, I showed how to numerically solve a 1D or 2D diffusion or heat conduction problem using either explicit or implicit finite differencing. In the 1D example, the relevant equation for diffusion was

\large \frac{\partial c(x,t)}{\partial t} = D\frac{\partial^2 c(x,t)}{\partial x^2}

and an important property of the solution was the conservation of mass,

\large \frac{d}{dt}\int_0^L c(x,t)dx = 0

i.e. the integral of the concentration field over the whole domain stays constant.

Next, I will show how to integrate the 1D time-dependent Schrödinger equation, which in a nondimensional form where we set \hbar = 1 and m = 1 reads:

\large i\frac{\partial \Psi (x,t)}{\partial t} = -\frac{1}{2}\frac{\partial^2 \Psi (x,t)}{\partial x^2} + V(x)\Psi (x,t)

Here i is the imaginary unit and V(x) is the potential energy as a function of x. The solutions of this equation must obey a conservation law similar to the mass conservation in the diffusion equation, the conservation of norm:

\large \frac{d}{dt}\int_0^L |\Psi (x,t)|^2 dx = 0

where the quantity |\Psi (x,t)| is the modulus of the complex-valued function \Psi (x,t) . This property of the solution is also called unitarity of the time evolution.

Apart from the TDSE, another way to represent the time development of this system is to find the normalized solutions \psi_0 (x), \psi_1 (x), \psi_2 (x) \dots of the time-independent Schrödinger equation

\large -\frac{1}{2}\frac{d^2 \psi_n (x)}{dx^2} + V(x)\psi_n (x) = E_n \psi_n (x)

and write the initial state \Psi (x,0) as a linear combination of those basis functions:

\large \Psi (x,0) = \sum_n c_n \psi_n (x)

This is possible because the solutions of the time-independent equation form a basis for the set of acceptable wave functions \psi (x). Then, every term in that eigenfunction expansion is multiplied by a time dependent phase factor \exp(-iE_n t):

\large \Psi (x,t) = \sum_n c_n e^{-iE_n t}\psi_n (x)

The numbers E_n are the energy eigenvalues corresponding to the solutions \psi_n (x). The function \psi_0 (x) is called the ground state of the potential V(x), the function \psi_1 (x) is the first excited state, and so on.

The Schrödinger equation can’t be discretized with either the explicit or the implicit method that we used for the diffusion equation: the result is either numerically unstable or doesn’t conserve the normalization of the wave function (or both). The correct way to discretize the Schrödinger equation is to replace the wave function with a discrete equivalent

\large \Psi (x,t) \rightarrow \Psi_{i;j} = \Psi (i\Delta x, j\Delta t)

and the potential energy function V(x) with V_{i;j} (or V_i in the case of time-independent potential), and write an equation that basically tells that propagating the state \Psi_{i;j} forward by half a time step gives the same result as propagating the state \Psi_{i;j+1} backwards by half a time step:

\large (1 + 2\kappa_1 + \kappa_{2;i})\Psi_{i;j+1} - \kappa_1 \left(\Psi_{i+1;j+1} + \Psi_{i-1;j+1}\right) = (1 - 2\kappa_1 - \kappa_{2;i})\Psi_{i;j} + \kappa_1 \left(\Psi_{i+1;j} + \Psi_{i-1;j}\right)

Here we have

\large \kappa_1 = \frac{i\Delta t}{2\Delta x^2}

and

\large \kappa_{2;i} = i\Delta t V_i
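Collecting the values \Psi_{i;j} at a fixed timestep j into a vector \Psi^{(j)}, these equations form a tridiagonal linear system that has to be solved on every timestep; this is exactly the system that the matrices A and B in the code below represent:

\large A\Psi^{(j+1)} = B\Psi^{(j)}, \quad A_{ii} = 1 + 2\kappa_1 + \kappa_{2;i}, \quad A_{i,i\pm 1} = -\kappa_1, \quad B_{ii} = 1 - 2\kappa_1 - \kappa_{2;i}, \quad B_{i,i\pm 1} = \kappa_1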

This kind of discretization is called the Crank-Nicolson method. As boundary conditions, we usually demand that the wavefunction stays at zero at the edges of the computational domain: \Psi (0,t) = \Psi (L,t) = 0 for any value of t. In the diffusion problem, this kind of BC corresponded to infinite sinks at the boundaries that annihilate anything diffusing through them. In the Schrödinger problem, which is a kind of complex diffusion equation, the equivalent condition makes the boundaries impenetrable walls that elastically deflect anything colliding with them.

An R-Code that calculates the time evolution of a Gaussian initial wavefunction

\large \Psi (x,0) = Ce^{-2(x-3)^2}

in an area of zero potential:

\large V(x) = 0

for a domain 0 < x < 6, a lattice spacing \Delta x = 0.05, time interval 0 < t < 2 and time step \Delta t = 0.01, is given below:

library(graphics) #load the graphics library needed for plotting

lx <- 6.0 #length of the computational domain
lt <- 2.0 #length of the simulation time interval
nx <- 120 #number of discrete lattice points
nt <- 200 #number of timesteps
dx <- lx/nx #length of one discrete lattice cell
dt <- lt/nt #length of timestep

V = c(1:nx) #potential energies at discrete points

for(j in c(1:nx)) {
V[j] = 0 #zero potential
}

kappa1 = (1i)*dt/(2*dx*dx) #an element needed for the matrices
kappa2 <- c(1:nx) #another element

for(j in c(1:nx)) {
kappa2[j] <- as.complex(kappa1*2*dx*dx*V[j])
}

psi = as.complex(c(1:nx)) #array for the wave function values

for(j in c(1:nx)) {
psi[j] = as.complex(exp(-2*(j*dx-3)*(j*dx-3))) #Gaussian initial wavefunction
}

xaxis <- c(1:nx)*dx #the x values corresponding to the discrete lattice points

A = matrix(nrow=nx,ncol=nx) #matrix for forward time evolution
B = matrix(nrow=nx,ncol=nx) #matrix for backward time evolution

for(j in c(1:nx)) {
for(k in c(1:nx)) {
A[j,k]=0
B[j,k]=0
if(j==k) {
A[j,k] = 1 + 2*kappa1 + kappa2[j]
B[j,k] = 1 - 2*kappa1 - kappa2[j]
}
if((j==k+1) || (j==k-1)) {
A[j,k] = -kappa1
B[j,k] = kappa1
}
}
}

for (k in c(1:nt)) { #main time stepping loop

sol <- solve(A,B%*%psi) #solve the system of equations

for (l in c(1:nx)) {
psi[l] <- sol[l]
}

if(k %% 3 == 1) { #make plots of |psi(x)|^2 on every third timestep
jpeg(file = paste("plot_",k,".jpg",sep=""))
plot(xaxis,abs(psi)^2,xlab="position (x)", ylab="Abs(Psi)^2",ylim=c(0,2))
title(paste("|psi(x,t)|^2 at t =",k*dt))
lines(xaxis,abs(psi)^2)
dev.off()
}
}
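As a quick check of the norm conservation (unitarity) mentioned at the beginning of the post, one can also compute the total probability on every timestep and verify that it stays practically constant; a minimal addition inside the main loop would be something like:

nrm <- 0 #total probability, should stay very close to its initial value on every timestep
for (l in c(1:nx)) nrm <- nrm + dx*abs(psi[l])*abs(psi[l])
print(nrm)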

The output files are plots of the absolute squares of the wavefunction, and a few of them are shown below.

In the next simulation, I set the domain and discrete step sizes the same as above, but the initial state is:

\large \Psi (x,0) = Ce^{-2(x-3)^2 + ix}

which is a Gaussian wave packet with a nonzero momentum expectation value in the positive x-direction. This is done by changing the line

for(j in c(1:nx)) {
psi[j] = as.complex(exp(-2*(j*dx-3)*(j*dx-3))) #Gaussian initial wavefunction
}

into

for(j in c(1:nx)) {
psi[j] = as.complex(exp(-2*(j*dx-3)*(j*dx-3)+(1i)*j*dx)) #Gaussian initial wavefunction
}

The plots of |\Psi (x,t)|^2 for several values of t are shown below

and there you can see how the wave packet collides with the right boundary of the domain and bounces back.

In the last simulation, I will set the potential function to be

\large V(x) = 2(x-3)^2

which is a harmonic oscillator potential, and with the nondimensional mass m =1 and Planck constant \hbar = 1 the ground state \psi _0 (x) of this system is

\large \psi_0 (x) = \left(\frac{2}{\pi}\right)^{1/4} e^{-(x-3)^2}

If I’d set the initial state to be \Psi (x,0) = \psi_0 (x), or any other solution of the time-independent SE, the modulus of the wavefunction would not change at all. To get something interesting to happen, I instead set an initial state that is a displaced version of the ground state:

\large \Psi (x,0) = Ce^{-(x-2)^2}

The solution can be obtained with the code shown below:

library(graphics) #load the graphics library needed for plotting

lx <- 6.0 #length of the computational domain
lt <- 3.0 #length of the simulation time interval
nx <- 360 #number of discrete lattice points
nt <- 300 #number of timesteps
dx <- lx/nx #length of one discrete lattice cell
dt <- lt/nt #length of timestep

V = c(1:nx) #potential energies at discrete points

for(j in c(1:nx)) {
V[j] = as.complex(2*(j*dx-3)*(j*dx-3)) #Harmonic oscillator potential with k=4
}

kappa1 = (1i)*dt/(2*dx*dx) #an element needed for the matrices
kappa2 <- c(1:nx) #another element

for(j in c(1:nx)) {
kappa2[j] <- as.complex(kappa1*2*dx*dx*V[j])
}

psi = as.complex(c(1:nx)) #array for the wave function values

for(j in c(1:nx)) {
psi[j] = as.complex(exp(-(j*dx-2)*(j*dx-2))) #Gaussian initial wavefunction, displaced from equilibrium
}

xaxis <- c(1:nx)*dx #the x values corresponding to the discrete lattice points

A = matrix(nrow=nx,ncol=nx) #matrix for forward time evolution
B = matrix(nrow=nx,ncol=nx) #matrix for backward time evolution

for(j in c(1:nx)) {
for(k in c(1:nx)) {
A[j,k]=0
B[j,k]=0
if(j==k) {
A[j,k] = 1 + 2*kappa1 + kappa2[j]
B[j,k] = 1 - 2*kappa1 - kappa2[j]
}
if((j==k+1) || (j==k-1)) {
A[j,k] = -kappa1
B[j,k] = kappa1
}
}
}

for (k in c(1:nt)) { #main time stepping loop

sol <- solve(A,B%*%psi) #solve the system of equations

for (l in c(1:nx)) {
psi[l] <- sol[l]
}

if(k %% 3 == 1) { #make plots of Abs(psi(x))^2 on every third timestep
jpeg(file = paste("plot_",k,".jpg",sep=""))
plot(xaxis,abs(psi)^2, xlab="position (x)", ylab="Abs(Psi)^2",ylim=c(0,2))
title(paste("|psi(x,t)|^2 at t =",k*dt))
lines(xaxis,abs(psi)^2)
lines(xaxis,V)
dev.off()
}
}

and the solution at different values of t looks like this (images and video):

Here the shape of the Hookean potential energy is plotted in the same images. So, here you see how the center of the Gaussian wavefunction oscillates around the point x = 3, just like a classical mechanical harmonic oscillator does when set free from a position that is displaced from equilibrium.

By changing the code that produces the output images, we can also get a sequence of plots of the imaginary part of the wavefunction:

if(k %% 3 == 1) { #make plots of Im(psi(x)) on every third timestep
jpeg(file = paste("plot_",k,".jpg",sep=""))
plot(xaxis,Im(psi), xlab="position (x)", ylab="Im(Psi)",ylim=c(-1.5,1.5))
title(paste("Im(psi(x,t)) at t =",k*dt))
lines(xaxis,Im(psi))
lines(xaxis,V)
dev.off()
}

and the resulting plots look like this:


Why does quantum mechanics need an infinite dimensional vector space?

When undergraduate physics students are trying to learn quantum mechanics, one of the most difficult obstacles is that abstract infinite-dimensional vector spaces are needed, which makes it impossible to mentally “see” the geometry in the way a three-dimensional space can be imagined. The fact that the coordinates of points in that vector space are complex numbers certainly doesn’t make it any easier. When I was studying this myself in my early undergraduate studies, it took me over a year to get hold of the concept that any self-adjoint operator acting on vectors of the quantum state space has a set of eigenvectors that forms a basis of that space.

Sometimes, however, a quantum mechanics problem can be handled by approximating the state space with a finite-dimensional vector space. In fact, several physics Nobel prizes have been awarded for the study of two-state quantum systems. One example of a situation where a quantum system is effectively a two-state system is one where the ground state (n = 0) and the first excited state (n = 1) have energies close to each other, and the energy gap between the first and second (n = 2) excited states is unusually wide. Then, if the system is at a low temperature, the Boltzmann factors of the two lowest-lying states,

\large e^{-E_0 / k_B T}

and

\large e^{-E_1 / k_B T}

are much larger than the Boltzmann factors of any other states, and it is unlikely that a measurement of the total energy will give a result other than one of the two lowest energy eigenvalues.
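As a rough numerical illustration (the energy values here are invented purely for the sake of the example), suppose the gap between the two lowest states is much smaller than k_B T while the gap up to the second excited state is much larger:

kB <- 1.380649e-23 #Boltzmann constant in J/K
T <- 1 #temperature in kelvin (assumed value for this illustration)
dE01 <- 1e-24 #assumed energy gap between the two lowest states, in joules
dE02 <- 1e-22 #assumed much larger gap up to the second excited state, in joules
print(exp(-dE01/(kB*T))) #about 0.93, so the first excited state is well populated
print(exp(-dE02/(kB*T))) #about 0.0007, so the higher states are effectively frozen out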

How, then, does one know that a quantum state space actually has to be infinite-dimensional? I once saw, in a mathematical physics textbook, a practice problem where this had to be proved, and I was proud for almost immediately finding the correct answer.

A cornerstone of nonrelativistic quantum mechanics is the position-momentum commutation relation

\large [\hat{x},\hat{p}] = i\hbar

Now, if the quantum state space were finite-dimensional with dimension N, then the operators \hat{x} and \hat{p} would be N\times N square matrices, and the constant i\hbar on the right-hand side would be i\hbar times the N\times N identity matrix, i.e. a diagonal matrix with all diagonal elements equal to i\hbar.

Actually, this equation can’t hold in the finite-dimensional case. To see this, consider the calculation rules for taking the trace of a matrix sum or a matrix product:

\large \mathrm{Tr}(A + B) = \mathrm{Tr}(A) + \mathrm{Tr}(B)

\large \mathrm{Tr}(AB) = \mathrm{Tr}(BA)

Now, if you take the trace of both sides of the commutation relation equation, and use these rules, you will get the result

\large 0 = \mathrm{Tr}(\hat{x}\hat{p}) - \mathrm{Tr}(\hat{p}\hat{x}) = \mathrm{Tr}([\hat{x},\hat{p}]) = \mathrm{Tr}(i\hbar I) = i\hbar N

which clearly isn’t possible. So, we have made a false assumption somewhere in the derivation of the contradictory equation above, and the false assumption must be the finite-dimensionality of the space where \hat{x} and \hat{p} are defined. An operator on an infinite-dimensional space doesn’t always have a finite trace at all, and this is the case with the operator \hat{x}\hat{p} - \hat{p}\hat{x}.

An interested reader may try to form finite-dimensional matrix representations for the raising and lowering operators of a quantum harmonic oscillator, use those to form matrices for the operators \hat{x} and \hat{p}, and see what happens when their commutator is calculated.
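A quick way to try this exercise numerically, using the standard ladder-operator matrix elements a|n\rangle = \sqrt{n}|n-1\rangle and the units \hbar = m = \omega = 1, might look like the following R snippet:

N <- 6 #dimension of the truncated state space (any small value shows the same effect)
a <- matrix(0+0i, nrow=N, ncol=N) #lowering operator
for (n in c(1:(N-1))) a[n, n+1] <- sqrt(n) #matrix elements <n-1|a|n> = sqrt(n)
adag <- Conj(t(a)) #raising operator
x <- (a + adag)/sqrt(2) #position operator in units where hbar = m = omega = 1
p <- (1i)*(adag - a)/sqrt(2) #momentum operator
comm <- x %*% p - p %*% x #the commutator [x,p]
print(diag(comm)) #equals i*hbar on the diagonal except for the last element, which is -i*hbar*(N-1)
print(sum(diag(comm))) #the trace is 0, not i*hbar*N as the commutation relation would require

The last diagonal element spoils the relation [\hat{x},\hat{p}] = i\hbar no matter how large N is chosen, in agreement with the trace argument above.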