In the earlier posts, I showed how to numerically solve a 1D or 2D diffusion or heat conduction problem using either explicit or implicit finite differencing. In the 1D example, the relevant equation for diffusion was

$$\frac{\partial C(x,t)}{\partial t} = D\frac{\partial^2 C(x,t)}{\partial x^2},$$
and an important property of the solution was the conservation of mass,

$$\frac{d}{dt}\int C(x,t)\,dx = 0,$$
i.e. the integral of the concentration field over the whole domain stays constant.

Next, I will show how to integrate the 1D time-dependent Schrödinger equation (TDSE), which in a nondimensional form, where we set $\hbar = 1$ and $2m = 1$, reads:

$$i\frac{\partial \Psi(x,t)}{\partial t} = -\frac{\partial^2 \Psi(x,t)}{\partial x^2} + V(x)\Psi(x,t)$$
Here $i$ is the imaginary unit and $V(x)$ is the potential energy as a function of position $x$. The solutions of this equation must obey a conservation law similar to the mass conservation in the diffusion equation, the conservation of norm:

$$\frac{d}{dt}\int |\Psi(x,t)|^2\,dx = 0,$$
where the quantity $|\Psi(x,t)|$ is the modulus of the complex-valued wave function $\Psi(x,t)$. This property of the solution is also called unitarity of the time evolution.
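On a discrete lattice, this conserved integral becomes a Riemann sum, which is a convenient quantity to monitor when testing a solver. A minimal sketch (the helper `norm_of` is my own invention, not part of the scripts below):

```r
# Approximate the integral of |psi|^2 by a Riemann sum over the lattice.
# psi: complex (or real) vector of wavefunction values, dx: lattice spacing.
norm_of <- function(psi, dx) {
  sum(abs(psi)^2) * dx
}

# Example: the unnormalized Gaussian used later in this post
dx <- 0.05
x <- (1:120) * dx
psi <- complex(real = exp(-2 * (x - 3)^2), imaginary = 0)
norm_of(psi, dx) # about sqrt(pi)/2 for this unnormalized Gaussian
```

During time stepping with a unitary scheme, this number should stay constant to machine precision.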

Apart from the TDSE, another way to represent the time development of this system is to find the normalized solutions $\psi_n(x)$, $n = 0, 1, 2, \dots$, of the time-independent Schrödinger equation

$$-\frac{d^2 \psi_n(x)}{dx^2} + V(x)\psi_n(x) = E_n \psi_n(x),$$
and write the initial state as a linear combination of those basis functions:

$$\Psi(x,0) = \sum_n c_n \psi_n(x).$$
This is possible because the solutions of the time-independent equation form a basis for the set of acceptable wave functions $\Psi(x)$. Then, every term in that eigenfunction expansion is multiplied by a time-dependent phase factor $e^{-iE_n t}$:

$$\Psi(x,t) = \sum_n c_n e^{-iE_n t} \psi_n(x).$$
The numbers $E_n$ are the eigenvalues corresponding to the solutions $\psi_n(x)$. The function $\psi_0(x)$ is called the ground state corresponding to potential $V(x)$, the function $\psi_1(x)$ is the first excited state, and so on.
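As a quick sanity check of this expansion, substituting a single term into the TDSE and using the time-independent equation shows that each term solves the TDSE on its own:

```latex
i\frac{\partial}{\partial t}\left[c_n e^{-iE_n t}\psi_n(x)\right]
  = E_n\, c_n e^{-iE_n t}\psi_n(x)
  = c_n e^{-iE_n t}\left[-\frac{d^2\psi_n(x)}{dx^2} + V(x)\psi_n(x)\right],
```

and by linearity the sum over $n$ is then also a solution, with the correct initial value $\Psi(x,0) = \sum_n c_n \psi_n(x)$.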

The Schrödinger equation can’t be discretized by using either the explicit or the implicit method that we used when solving the diffusion equation: if you try, the resulting scheme is either numerically unstable or doesn’t conserve the normalization of the wave function (or both). The correct way to discretize the Schrödinger equation is to replace the wave function with a discrete equivalent

$$\Psi(x,t) \longrightarrow \Psi_j^n = \Psi(j\Delta x, n\Delta t),$$
and the potential energy function with $V_j^n$ (or $V_j$ in the case of a time-independent potential), and write an equation which essentially states that propagating the state forward by half a time step gives the same result as propagating the state at the next time point backwards by half a time step:

$$(1 + 2\kappa_1 + \kappa_2^j)\Psi_j^{n+1} - \kappa_1\left(\Psi_{j+1}^{n+1} + \Psi_{j-1}^{n+1}\right) = (1 - 2\kappa_1 - \kappa_2^j)\Psi_j^n + \kappa_1\left(\Psi_{j+1}^n + \Psi_{j-1}^n\right)$$
Here we have

$$\kappa_1 = \frac{i\Delta t}{2\Delta x^2}$$
and

$$\kappa_2^j = 2\kappa_1 \Delta x^2 V_j = i\Delta t V_j.$$
This kind of discretization is called the Crank-Nicolson method. As boundary conditions, we usually demand that the wavefunction stays at value zero at the boundaries of the computational domain: $\Psi(0,t) = \Psi(L,t) = 0$ for any value of $t$, where $L$ is the length of the domain. In the diffusion problem, this kind of BC corresponded to infinite sinks at the boundaries, which annihilated anything that diffused through them. In the Schrödinger equation problem, which is mathematically a diffusion equation with an imaginary diffusion coefficient, the equivalent condition instead makes the boundaries impenetrable walls that elastically reflect anything colliding with them.
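The unitarity of the scheme is easy to verify numerically on a small test lattice (the lattice sizes here are my own choices for the test): one Crank-Nicolson step $A\psi' = B\psi$ leaves the discrete norm unchanged up to rounding error.

```r
# Build the Crank-Nicolson matrices A (forward) and B (backward) on a small
# lattice and check that one step A psi' = B psi preserves the discrete norm.
nx <- 50; dx <- 0.1; dt <- 0.01
V <- rep(0, nx)                        # zero potential for this check
kappa1 <- 1i * dt / (2 * dx * dx)
kappa2 <- 1i * dt * V
A <- diag(1 + 2 * kappa1 + kappa2)     # diagonal part of A
B <- diag(1 - 2 * kappa1 - kappa2)     # diagonal part of B
for (j in 1:(nx - 1)) {                # nearest-neighbour off-diagonal parts
  A[j, j + 1] <- A[j + 1, j] <- -kappa1
  B[j, j + 1] <- B[j + 1, j] <- kappa1
}
x <- (1:nx) * dx
psi <- complex(real = exp(-2 * (x - 2.5)^2), imaginary = 0) # test state
psi_next <- solve(A, B %*% psi)        # one time step
sum(abs(psi)^2) * dx                   # norm before the step
sum(abs(psi_next)^2) * dx              # norm after: equal, up to rounding
```

This works because $A = I + iM$ and $B = I - iM$ with $M$ real and symmetric, so $A^{-1}B$ is a unitary matrix regardless of the step sizes.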

An R code that calculates the time evolution of a Gaussian initial wavefunction

$$\Psi(x,0) = \exp\left(-2(x-3)^2\right)$$

in an area of zero potential,

$$V(x) = 0,$$

for a domain $0 < x < 6$, a lattice spacing $\Delta x = 0.05$, a time interval $0 < t < 2$ and a time step $\Delta t = 0.01$, is given below:

```r
library(graphics) #load the graphics library needed for plotting

lx <- 6.0 #length of the computational domain
lt <- 2.0 #length of the simulation time interval
nx <- 120 #number of discrete lattice points
nt <- 200 #number of timesteps
dx <- lx/nx #length of one discrete lattice cell
dt <- lt/nt #length of timestep

V <- c(1:nx) #potential energies at discrete points
for(j in c(1:nx)) {
  V[j] <- 0 #zero potential
}

kappa1 <- (1i)*dt/(2*dx*dx) #an element needed for the matrices
kappa2 <- c(1:nx) #another element
for(j in c(1:nx)) {
  kappa2[j] <- as.complex(kappa1*2*dx*dx*V[j])
}

psi <- as.complex(c(1:nx)) #array for the wave function values
for(j in c(1:nx)) {
  psi[j] <- as.complex(exp(-2*(j*dx-3)*(j*dx-3))) #Gaussian initial wavefunction
}

xaxis <- c(1:nx)*dx #the x values corresponding to the discrete lattice points

A <- matrix(nrow=nx, ncol=nx) #matrix for forward time evolution
B <- matrix(nrow=nx, ncol=nx) #matrix for backward time evolution
for(j in c(1:nx)) {
  for(k in c(1:nx)) {
    A[j,k] <- 0
    B[j,k] <- 0
    if(j==k) {
      A[j,k] <- 1 + 2*kappa1 + kappa2[j]
      B[j,k] <- 1 - 2*kappa1 - kappa2[j]
    }
    if((j==k+1) || (j==k-1)) {
      A[j,k] <- -kappa1
      B[j,k] <- kappa1
    }
  }
}

for (k in c(1:nt)) { #main time stepping loop
  sol <- solve(A, B%*%psi) #solve the system of equations
  for (l in c(1:nx)) {
    psi[l] <- sol[l]
  }
  if(k %% 3 == 1) { #make plots of |psi(x)|^2 on every third timestep
    jpeg(file = paste("plot_", k, ".jpg", sep=""))
    plot(xaxis, abs(psi)^2, xlab="position (x)", ylab="Abs(Psi)^2", ylim=c(0,2))
    title(paste("|psi(x,t)|^2 at t =", k*dt))
    lines(xaxis, abs(psi)^2)
    dev.off()
  }
}
```

The output files are plots of the absolute squares of the wavefunction, and a few of them are shown below.

In the next simulation, I set the domain and discrete step sizes the same as above, but the initial state is

$$\Psi(x,0) = \exp\left(-2(x-3)^2 + ix\right),$$

which is a Gaussian wave packet that has a nonzero momentum in the positive $x$-direction. This is done by changing the line

```r
for(j in c(1:nx)) {
  psi[j] <- as.complex(exp(-2*(j*dx-3)*(j*dx-3))) #Gaussian initial wavefunction
}
```

into

```r
for(j in c(1:nx)) {
  psi[j] <- as.complex(exp(-2*(j*dx-3)*(j*dx-3)+(1i)*j*dx)) #Gaussian initial wavefunction with momentum
}
```

The plots of $|\Psi(x,t)|^2$ for several values of $t$ are shown below

and there you can see how the wave packet collides with the right boundary of the domain and bounces back.
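The initial mean momentum of the packet can be checked directly from the lattice data with the standard expectation value $\langle p \rangle = \int \Psi^* \left(-i\,\partial\Psi/\partial x\right) dx$ (in units with $\hbar = 1$). The helper `mean_momentum` and its central-difference derivative are my own sketch, not part of the simulation code:

```r
# Mean momentum <p> of a discrete wavefunction, using a central-difference
# derivative; the Riemann-sum dx factors cancel in the normalized ratio.
mean_momentum <- function(psi, dx) {
  n <- length(psi)
  dpsi <- (psi[c(2:n, n)] - psi[c(1, 1:(n - 1))]) / (2 * dx)
  Im(sum(Conj(psi) * dpsi)) / sum(abs(psi)^2)
}

dx <- 0.05
x <- (1:120) * dx
psi <- exp(-2 * (x - 3)^2 + 1i * x)  # the boosted Gaussian packet
mean_momentum(psi, dx)               # close to 1, the wavenumber of the boost
```

The one-sided differences at the two endpoints are harmless here because the packet is vanishingly small there.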

In the last simulation, I will set the potential function to be

$$V(x) = 2(x-3)^2,$$

which is a harmonic oscillator potential, and with the nondimensional mass and Planck constant the ground state of this system is

$$\psi_0(x) \propto \exp\left(-(x-3)^2\right).$$
If I’d set the initial state to be $\psi_0(x)$, or any other solution of the time-independent SE, the modulus of the wavefunction would not change at all. To get something interesting to happen, I instead set an initial state that is a displaced version of the ground state:

$$\Psi(x,0) = \exp\left(-(x-2)^2\right).$$
The solution can be obtained with the code shown below:

```r
library(graphics) #load the graphics library needed for plotting

lx <- 6.0 #length of the computational domain
lt <- 3.0 #length of the simulation time interval
nx <- 360 #number of discrete lattice points
nt <- 300 #number of timesteps
dx <- lx/nx #length of one discrete lattice cell
dt <- lt/nt #length of timestep

V <- c(1:nx) #potential energies at discrete points
for(j in c(1:nx)) {
  V[j] <- 2*(j*dx-3)*(j*dx-3) #harmonic oscillator potential with k=4
}

kappa1 <- (1i)*dt/(2*dx*dx) #an element needed for the matrices
kappa2 <- c(1:nx) #another element
for(j in c(1:nx)) {
  kappa2[j] <- as.complex(kappa1*2*dx*dx*V[j])
}

psi <- as.complex(c(1:nx)) #array for the wave function values
for(j in c(1:nx)) {
  psi[j] <- as.complex(exp(-(j*dx-2)*(j*dx-2))) #Gaussian initial wavefunction, displaced from equilibrium
}

xaxis <- c(1:nx)*dx #the x values corresponding to the discrete lattice points

A <- matrix(nrow=nx, ncol=nx) #matrix for forward time evolution
B <- matrix(nrow=nx, ncol=nx) #matrix for backward time evolution
for(j in c(1:nx)) {
  for(k in c(1:nx)) {
    A[j,k] <- 0
    B[j,k] <- 0
    if(j==k) {
      A[j,k] <- 1 + 2*kappa1 + kappa2[j]
      B[j,k] <- 1 - 2*kappa1 - kappa2[j]
    }
    if((j==k+1) || (j==k-1)) {
      A[j,k] <- -kappa1
      B[j,k] <- kappa1
    }
  }
}

for (k in c(1:nt)) { #main time stepping loop
  sol <- solve(A, B%*%psi) #solve the system of equations
  for (l in c(1:nx)) {
    psi[l] <- sol[l]
  }
  if(k %% 3 == 1) { #make plots of |psi(x)|^2 on every third timestep
    jpeg(file = paste("plot_", k, ".jpg", sep=""))
    plot(xaxis, abs(psi)^2, xlab="position (x)", ylab="Abs(Psi)^2", ylim=c(0,2))
    title(paste("|psi(x,t)|^2 at t =", k*dt))
    lines(xaxis, abs(psi)^2)
    lines(xaxis, V) #overlay the potential energy curve
    dev.off()
  }
}
```

and the solution $|\Psi(x,t)|^2$ at different values of $t$ looks like this (images and video):

Here the shape of the Hookean potential energy is plotted in the same images. You can see how the center of the Gaussian wavefunction oscillates around the point $x = 3$, just like a classical mechanical harmonic oscillator does when set free from a position that is displaced from equilibrium.
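This classical-looking motion can be made quantitative by tracking the expectation value $\langle x \rangle$ during the time stepping. The helper below is again my own sketch, not part of the script above:

```r
# Mean position <x> of a discrete wavefunction; the Riemann-sum dx factors
# cancel in the normalized ratio, so only the x values are needed.
mean_position <- function(psi, xaxis) {
  sum(xaxis * abs(psi)^2) / sum(abs(psi)^2)
}

dx <- 6/360
x <- (1:360) * dx
psi0 <- exp(-(x - 2)^2)   # the displaced initial state used above
mean_position(psi0, x)    # about 2 at t = 0; oscillates around x = 3 later
```

Calling `mean_position(psi, xaxis)` inside the main loop and storing the results would give the oscillation curve directly.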

By changing the code that produces the output images, we can also get a sequence of plots of the imaginary part of the wavefunction:

```r
if(k %% 3 == 1) { #make plots of Im(psi(x)) on every third timestep
  jpeg(file = paste("plot_", k, ".jpg", sep=""))
  plot(xaxis, Im(psi), xlab="position (x)", ylab="Im(Psi)", ylim=c(-1.5,1.5))
  title(paste("Im(psi(x,t)) at t =", k*dt))
  lines(xaxis, Im(psi))
  lines(xaxis, V)
  dev.off()
}
```

and the resulting plots look like this:
