Numerical solution of PDEs, Part 10: The thin-film equation

Earlier, I showed how to solve the 1D and 2D versions of the complex Ginzburg-Landau equation, an example of a nonlinear partial differential equation that had to be linearized for solution with implicit differencing, meaning that the matrix of the linear system was different on each timestep.

Another nonlinear PDE is the so-called thin-film equation, which in its 2D form reads

\frac{\partial h}{\partial t} = -\nabla \cdot \left( \frac{\gamma h^3}{3\mu} \nabla \nabla^2 h \right)

Here the function h(x,y,t) describes the local thickness of a film of viscous liquid located on top of a solid surface described by the xy-plane. The parameter \gamma is the surface tension of the liquid-gas interface and \mu is the viscosity of the liquid.

An unusual thing about this equation is that it’s fourth order in the spatial coordinates, while most equations in physics are second-order differential equations. Some equations of continuum mechanics describing the elastic bending of cylinders and plates are also fourth order, so this is not the only example.

In many cases, a corresponding equation with only one spatial coordinate is enough for describing thin film physics, and then the graph of the solution can be thought of as depicting a cross-section of the film at a single value of the y-coordinate.

\frac{\partial h}{\partial t} = -\frac{\partial}{\partial x} \left( \frac{\gamma h^3}{3\mu} \frac{\partial^3 h}{\partial x^3} \right)

When discretizing this equation, we must note that the factor h^3 has to be treated explicitly to obtain a linear system of equations, just like the factor |A|^2 in the CGLE. Also, we will set \gamma/(3\mu ) = 1 to make the equation dimensionless. One correct way to discretize this equation leads to the system

\alpha_{j+1}^i h_{j-3}^{i+1} - 2\alpha_{j+1}^i h_{j-2}^{i+1} - \alpha_{j-1}^i h_{j-1}^{i+1} + \left( 1 + 2\alpha_{j+1}^i + 2\alpha_{j-1}^i \right) h_j^{i+1} - \alpha_{j+1}^i h_{j+1}^{i+1} - 2\alpha_{j-1}^i h_{j+2}^{i+1} + \alpha_{j-1}^i h_{j+3}^{i+1} = h_j^i

where the two-index object \alpha_{j}^{i} is

\alpha_j^i = \frac{\Delta t}{4\Delta x^4} \left( h_j^i \right)^3

Note that the linear system is now not tridiagonal but heptadiagonal, due to the higher-order derivatives. The following R code solves the equation with this differencing scheme, using a Gaussian initial condition h(x,0).

library(graphics) #load the graphics library needed for plotting

lx <- 10 #length of the computational domain
lt <- 10 #length of the simulation time interval
nx <- 150 #number of discrete lattice points
nt <- 150 #number of timesteps
dx <- lx/nx #length of one discrete lattice cell
dt <- lt/nt #length of timestep

psi = c(1:nx) #array for the function h values
sol = c(1:nx)

kappa = dt/(4*dx*dx*dx*dx)

for(j in c(1:nx)) {
psi[j] = exp(-(j*dx-5)*(j*dx-5))
sol[j] = psi[j]
}

xaxis <- c(1:nx)*dx #the x values corresponding to the discrete lattice points
alpha <- c(1:nx)

A = matrix(nrow=nx,ncol=nx) #matrix for forward time evolution

for (m in c(1:nt)) { #main time stepping loop

for(j in c(1:nx)) {
alpha[j] = kappa*psi[j]*psi[j]*psi[j]
}

for(j in c(1:nx)) {
for(k in c(1:nx)) {
A[j,k]=0
if(j==k) {
if(j!=nx && j!=1) {
A[j,k] = 1 + 2*alpha[j+1] + 2*alpha[j-1] #diagonal elements
}
if(j==1) {
A[j,k] = 1 + 2*alpha[j+1]
}
if(j==nx) {
A[j,k] = 1 + 2*alpha[j-1]
}
}

if(j==k+1 && j!=1) {
A[j,k] = -alpha[j-1] #off-diagonal elements
}

if(j==k-1 && j!=nx) {
A[j,k] = -alpha[j+1]
}

if(j==k+2 && j<nx) {
A[j,k] = -2*alpha[j+1]
}

if(j==k-2 && j>1) {
A[j,k] = -2*alpha[j-1]
}

if(j==k+3 && j<nx) {
A[j,k] = alpha[j+1]
}

if(j==k-3 && j>1) {
A[j,k] = alpha[j-1]
}

}
}

for(l in c(1:nx)) {
psi[l] = sol[l]
}
sol <- solve(A,psi) #solve the system of equations

jpeg(file = paste("plot_",m,".jpg",sep=""))
plot(xaxis,sol,xlab = "position (x)",ylab="h(x,t)",ylim=c(0,1),pch='.')
title(paste("h(x,t) at t = ",round(m*dt,digits=2)))
lines(xaxis,sol)
dev.off()

}

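Since the row sums of the matrix equal 1 in the interior of the domain, the scheme approximately conserves the total volume of the liquid. A quick check of this, run after the time stepping loop, is sketched below (volume is a name I’m introducing):

volume <- sum(sol)*dx #rectangle-rule estimate of the integral of h(x) over the domain
print(volume) #should stay close to its initial value (about sqrt(pi) for this Gaussian)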

An animation of the solution looks like this.

The solutions of this equation have the property that the graph settles into the shape of a downward-opening parabola as time proceeds. A problem with this is that the contact angle, at which the liquid surface meets the solid surface (the x-axis) in the final equilibrium, can be anything between 0 and 90 degrees, depending on the relative width and height of the initial Gaussian. In a real liquid-solid system, the final contact angle depends on the surface tensions of both the liquid-solid and the liquid-gas interfaces, as described by the Young equation.

To create an equation that can model equilibrium contact angles appropriately, we add a disjoining pressure term \Pi (h) to the equation, as follows:

\frac{\partial h}{\partial t} = -\frac{\partial}{\partial x} \left[ h^3 \frac{\partial}{\partial x} \left( \frac{\partial^2 h}{\partial x^2} + \Pi (h) \right) \right]

One form of the \Pi -term that works is

\Pi (h) = B \left[ \left( \frac{h_*}{h} \right)^n - \left( \frac{h_*}{h} \right)^m \right]

where h_* is the precursor film thickness.

contact-angle.jpg

The idea behind the precursor film is that even when we have a liquid drop or puddle surrounded by an apparently dry solid surface, there is actually a very thin adsorbed layer of liquid molecules on that dry area (the molecules get there by evaporating from the liquid surface and reattaching to the solid). So, in a simulation where we include the disjoining pressure, we need to use an initial condition that is a Gaussian with an added constant equal to the precursor thickness:

h(x,0) = \exp \left[-b(x-x_0 )^2 \right] + h_*

Here we will set the values n = 5 and m = 2 in the disjoining pressure term, and set the precursor film thickness to the value 0.01. The disjoining pressure term can be treated explicitly at the same time as we use implicit differencing for the rest of the equation: we subtract a term D_{j}^{i} on the RHS of the discretized equation,

\alpha_{j+1}^i h_{j-3}^{i+1} - 2\alpha_{j+1}^i h_{j-2}^{i+1} - \alpha_{j-1}^i h_{j-1}^{i+1} + \left( 1 + 2\alpha_{j+1}^i + 2\alpha_{j-1}^i \right) h_j^{i+1} - \alpha_{j+1}^i h_{j+1}^{i+1} - 2\alpha_{j-1}^i h_{j+2}^{i+1} + \alpha_{j-1}^i h_{j+3}^{i+1} = h_j^i - D_j^i

defined by

D_j^i = \frac{C_{j+1}^i - C_{j-1}^i}{2\Delta x}

where

C_j^i = \left( h_j^i \right)^3 \frac{\Pi_{j+1}^i - \Pi_{j-1}^i}{2\Delta x}

and \Pi_{j}^i is the disjoining pressure evaluated at the discrete points. Note the use of central finite differences instead of one-sided differences when calculating the derivatives; doing otherwise is likely to make the simulation crash. The term D_{j}^{i} only affects the right-hand side vector of the linear system \mathbf{Ax} = \mathbf{b} that we solve on each timestep. A code that solves the new equation for the pre-factor value B = 0.1 is shown next.

library(graphics) #load the graphics library needed for plotting

lx <- 10 #length of the computational domain
lt <- 5 #length of the simulation time interval
nx <- 150 #number of discrete lattice points
nt <- 5000 #number of timesteps
dx <- lx/nx #length of one discrete lattice cell
dt <- lt/nt #length of timestep

prec = 0.01 #precursor film thickness

psi = c(1:nx) #array for the function h values
sol = c(1:nx)

disjoin1 = c(1:nx)
disjoin2 = c(1:nx)
disjoin3 = c(1:nx)

kappa = dt/(4*dx*dx*dx*dx)

for(j in c(1:nx)) {
psi[j] = exp(-(j*dx-5)*(j*dx-5))+prec
sol[j] = psi[j]
}

xaxis <- c(1:nx)*dx #the x values corresponding to the discrete lattice points
alpha <- c(1:nx)

A = matrix(nrow=nx,ncol=nx) #matrix for forward time evolution

for (m in c(1:nt)) { #main time stepping loop

for(j in c(1:nx)) {
disjoin1[j] = 0.1*((prec/psi[j])^5 - (prec/psi[j])^2) #the disjoining pressure Pi(h) with B = 0.1, n = 5, m = 2
}

for(j in c(2:(nx-2))) {
disjoin2[j] = psi[j]*psi[j]*psi[j]*(disjoin1[j+1] - disjoin1[j-1])/(2*dx)
}

for(j in c(2:(nx-2))) {
disjoin3[j] = (disjoin2[j+1]-disjoin2[j-1])/(2*dx)
}

disjoin3[1] = 0
disjoin3[2] = 0
disjoin3[nx-1] = 0
disjoin3[nx] = 0

for(j in c(1:nx)) {
alpha[j] = kappa*psi[j]*psi[j]*psi[j]
}

for(j in c(1:nx)) {
for(k in c(1:nx)) {
A[j,k]=0
if(j==k) {
if(j!=nx && j!=1) {
A[j,k] = 1 + 2*alpha[j+1] + 2*alpha[j-1] #diagonal elements
}
if(j==1) {
A[j,k] = 1 + 2*alpha[j+1]
}
if(j==nx) {
A[j,k] = 1 + 2*alpha[j-1]
}
}

if(j==k+1 && j!=1) {
A[j,k] = -alpha[j-1] #off-diagonal elements
}

if(j==k-1 && j!=nx) {
A[j,k] = -alpha[j+1]
}

if(j==k+2 && j<nx) {
A[j,k] = -2*alpha[j+1]
}

if(j==k-2 && j>1) {
A[j,k] = -2*alpha[j-1]
}

if(j==k+3 && j<nx) {
A[j,k] = alpha[j+1]
}

if(j==k-3 && j>1) {
A[j,k] = alpha[j-1]
}

}
}

for(l in c(1:nx)) {
psi[l] = sol[l]
}
sol <- solve(A,psi-disjoin3) #solve the system of equations
for(l in c((nx-20):nx)) { # remove the risk of boundary effects on the right end of the domain
psi[l]=prec
sol[l]=prec
}

if(m%%10 == 0) {
jpeg(file = paste("plot_",m,".jpg",sep=""))
plot(xaxis,sol,xlab = "position (x)",ylab="h(x,t)",ylim=c(0,1),pch='.')
title(paste("h(x,t) at t = ",round(m*dt,digits=2)))
lines(xaxis,sol)
dev.off()
}

}

In the next animation, the solution curves for pre-factor values B=0.1 (red curve) and B=0.01 (black curve) are shown in the same graph.

In the animation, it is apparent that a larger value of B leads to a larger contact angle at equilibrium. Actually, it can be shown that the contact angle \theta depends on the values of the parameters as

disjoin-contact-small.gif
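One way to check such relations numerically is to read an approximate contact angle off the final profile; a sketch (slope and theta are names I’m introducing, and the steepest part of the profile is assumed to lie at the contact line):

slope <- max(abs(diff(sol)))/dx #steepest slope of the final film profile
theta <- atan(slope)*180/pi #approximate contact angle in degrees
print(theta)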

More information about the thin-film equation and its solutions can be found on the Wikipedia page here, and on the NJIT department of applied mathematics homepage. When the effects of gravitation or surface tension gradients are added to the TFE, many kinds of interesting pattern formation effects can happen, just as in our previous example of a nonlinear PDE, the Ginzburg-Landau equation. If the liquid film consists of a mixture of several, possibly volatile liquids, one obtains complicated multiphysics problems involving fluid dynamics, evaporation, heat transfer and chemical kinetics all at the same time.

 

Numerical solution of PDEs, Part 9: 2D Ginzburg-Landau equation

In an earlier post, I described the 1-dimensional Ginzburg-Landau equation and showed how it can be linearized and solved with an implicit differencing scheme. The most interesting feature of the solutions was the appearance of seemingly random oscillations. A similar solution method is possible for the 2D version of the equation:

\frac{\partial A}{\partial t} = A + (1 + i\alpha ) \left( \frac{\partial^2 A}{\partial x^2} + \frac{\partial^2 A}{\partial y^2} \right) - (1 + i\beta )|A|^2 A

where again \alpha and \beta are real-valued constants.

An R code for solving this with parameter values \alpha = 0 , \beta = 1.5 and an initial state A(x,y,0) which is a mixture of 2D plane waves is shown below.

library(graphics) #load the graphics library needed for plotting

lx <- 80.0 #length of the computational domain in x-direction
ly <- 80.0 #length of the computational domain in y-direction
lt <- 60.0 #length of the simulation time interval
nx <- 50 #number of discrete lattice points in x-direction
ny <- 50 #number of discrete lattice points in y-direction
nt <- 240 #number of timesteps
dx <- lx/nx #length of one discrete lattice cell in x-direction
dy <- ly/ny #length of one discrete lattice cell in y-direction
dt <- lt/nt #length of timestep

a <- 0
b <- 1.5

kappa1 = dt*(1+1i*a)/dx/dx
kappa2 = dt*(1+1i*b)

C = c(1:(nx*ny))
Cu = c(1:(nx*ny))
A2d = matrix(nrow=ny,ncol=nx)
xaxis <- c(0:(nx-1))*dx #the x values corresponding to the discrete lattice points
yaxis <- c(0:(ny-1))*dy #the y values corresponding to the discrete lattice points

A = matrix(nrow=(nx*ny),ncol=(nx*ny))
IP = matrix(nrow=4*nx,ncol=4*nx)

for (i in c(1:ny)) {
for (j in c(1:nx)) {
A2d[i,j] <- 0.01*exp(1i*5.21*i+1i*10.331*j)+0.01*exp(1i*15.71*i+1i*17.831*j)
}
}

for (k in c(1:nt)) { #main time stepping loop

for(i in c(1:ny)) {
for(j in c(1:nx)) {
C[(i-1)*nx+j] <- A2d[i,j]

}
}

for(i in c(1:(nx*ny))) {
for(j in c(1:(nx*ny))) {
A[i,j] <- 0
if(i==j && j!=1 && j!=nx && i!=1 && i!=ny) A[i,j] <- 1+2*kappa1+kappa2*abs(C[j])*abs(C[j]) - dt
if(i==j && (j==1 || j==nx) && i!=1 && i!=ny) A[i,j] <- 1+2*kappa1+kappa2*abs(C[j])*abs(C[j]) - dt
if(i==j && j!=1 && j!=nx && (i==1 || i==ny)) A[i,j] <- 1+2*kappa1+kappa2*abs(C[j])*abs(C[j]) - dt
if(i==j && (j==1 || j==nx) && (i==1 || i==ny)) A[i,j] <- 1+2*kappa1+kappa2*abs(C[j])*abs(C[j]) - dt
if(j==i+1 && (i%%nx != 0)) A[i,j] <- -kappa1
if(j==i-1 && (i%%nx != 1)) A[i,j] <- -kappa1
if(j==i+nx) A[i,j] <- -kappa1
if(j==i-nx) A[i,j] <- -kappa1
}
}

Cu <- solve(A,C)

for(i in c(1:ny)) {
for(j in c(1:nx)) {
if(i==1) Cu[(i-1)*nx+j]=Cu[i*nx+j]
if(i==ny) Cu[(i-1)*nx+j]=Cu[(i-2)*nx+j]
if(j==1) Cu[(i-1)*nx+j]=Cu[(i-1)*nx+j+1]
if(j==nx) Cu[(i-1)*nx+j]=Cu[(i-1)*nx+j-1]
}
}

for(i in c(1:ny)) {
for(j in c(1:nx)) {
A2d[i,j] <- Cu[(i-1)*nx+j]
}
}

for(l in c(1:(nx-1))) {
for(m in c(1:(nx-1))) { #make a bitmap with 4 times more pixels, using linear interpolation
IP[4*l-3,4*m-3] = A2d[l,m]
IP[4*l-2,4*m-3] = A2d[l,m]+0.25*(A2d[l+1,m]-A2d[l,m])
IP[4*l-1,4*m-3] = A2d[l,m]+0.5*(A2d[l+1,m]-A2d[l,m])
IP[4*l,4*m-3] = A2d[l,m]+0.75*(A2d[l+1,m]-A2d[l,m])
}
}

for(l in c(1:(4*nx))) {
for(m in c(1:(nx-1))) {
IP[l,4*m-2] = IP[l,4*m-3]+0.25*(IP[l,4*m+1]-IP[l,4*m-3])
IP[l,4*m-1] = IP[l,4*m-3]+0.5*(IP[l,4*m+1]-IP[l,4*m-3])
IP[l,4*m] = IP[l,4*m-3]+0.75*(IP[l,4*m+1]-IP[l,4*m-3])
}
}

#make a plot of Re(A(x,y)) on each timestep
jpeg(file = paste("plot_",k,".jpg",sep=""))
image(Re(IP),zlim=c(-3,3))
title(paste("Real part of solution A(x,y,t)",k*dt))
dev.off()

}

The code produces a 2D plot of the real part of the solution on each timestep, and in the video shown below they have been combined into an animation.
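If you want to do the combining from within R instead of an external tool, one option is the av package (an assumption on my part: that the package is installed; av_encode_video is its encoding function, and the output filename is my choice):

library(av) #assumes the av package is installed
frames <- paste("plot_",c(1:nt),".jpg",sep="") #the image files written by the loop above
av_encode_video(frames, output = "cgle2d.mp4", framerate = 12)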

In the animation we see the appearance of spiral patterns typical for these values of the parameters \alpha, \beta. Other values of the parameters produce different kinds of patterns, as described in this link.

Numerical solution of PDEs, Part 8: Complex Ginzburg-Landau Equation

In the previous numerical solution posts, I described linear equations like the diffusion equation and the Schrödinger equation, and how they can be solved by (implicit or explicit) finite differencing. The idea of the implicit methods was to convert the equation into a linear system of equations, from which the function values on a discrete mesh could be calculated.

Saying that these equations were linear means that they can be written as

\hat{L} u(x,t) = f(x,t)

where the linear differential operator \hat{L}, containing space and time derivatives, acts on the function and produces “something” (usually zero, but something nonzero in the case of source terms/inhomogeneity).

As a first example of a nonlinear PDE, let’s consider the complex Ginzburg-Landau equation (CGLE), which reads:

\frac{\partial A}{\partial t} = A + (1 + i\alpha ) \frac{\partial^2 A}{\partial x^2} - (1 + i\beta )|A|^2 A

Here \alpha and \beta are real parameters and i is the imaginary unit. Applying implicit differencing to this may seem to result in a system of equations

A_i^j = \left( 1 - \Delta t + 2\kappa_1 + \kappa_2 |A_i^{j+1}|^2 \right) A_i^{j+1} - \kappa_1 \left( A_{i+1}^{j+1} + A_{i-1}^{j+1} \right) , \qquad \kappa_1 = \frac{(1 + i\alpha )\Delta t}{\Delta x^2} , \quad \kappa_2 = (1 + i\beta )\Delta t

but this is not a linear system because of the |A_{i}^{j+1}|^2, so we cannot solve the problem in this way by using linear algebra.

The trick to solve this is to linearize the system, by evaluating the |A|^2 at timestep j and the rest of the quantities at timestep j+1, producing the system

A_i^j = \left( 1 - \Delta t + 2\kappa_1 + \kappa_2 |A_i^j|^2 \right) A_i^{j+1} - \kappa_1 \left( A_{i+1}^{j+1} + A_{i-1}^{j+1} \right)

which is now a linear system with respect to the variables evaluated at timestep j+1 (the matrix for solving the A^{j+1} values has diagonal elements that depend on the A^j values). A more sophisticated method would do several iterations to approximate the values of A(x,t) between the timesteps j and j+1.

An R code that solves the equation for a domain x\in [0,100], t\in [0,150], using discrete steps \Delta x = 0.66, \Delta t = 0.33 , initial state A(x,0) = 0.1e^{2ix} and values \alpha=3 and \beta = -2, is shown here.

library(graphics) #load the graphics library needed for plotting

lx <- 100 #length of the computational domain
lt <- 150 #length of the simulation time interval
nx <- 150 #number of discrete lattice points
nt <- 450 #number of timesteps
dx <- lx/nx #length of one discrete lattice cell
dt <- lt/nt #length of timestep

a <- 3
b <- -2

kappa1 = dt*(1+1i*a)/dx/dx #an element needed for the matrices
kappa2 = dt*(1+1i*b)

psi = as.complex(c(1:nx)) #array for the function A values
sol = as.complex(c(1:nx))

for(j in c(1:nx)) {
psi[j] = 0.1*exp(2i*j*dx)
sol[j] = psi[j]
}

xaxis <- c(1:nx)*dx #the x values corresponding to the discrete lattice points

IPxaxis <- c(1:(4*nx))*dx/4
IPtaxis <- c(1:(4*nt))*dt/4

sol_plot = matrix(nrow=nt,ncol=nx)

A = matrix(nrow=nx,ncol=nx) #matrix for forward time evolution
IP = matrix(nrow = 4*nt, ncol=4*nx)

for (m in c(1:nt)) { #main time stepping loop

for(j in c(1:nx)) {
for(k in c(1:nx)) {
A[j,k]=0
if(j==k) {
A[j,k] = 1 + 2*kappa1 + kappa2*abs(sol[j])*abs(sol[j]) - dt #diagonal elements
}
if((j==k+1) || (j==k-1)) {
A[j,k] = -kappa1 #off-diagonal elements
}
}
}

for(l in c(1:nx)) {
psi[l] = sol[l]
}
sol <- solve(A,psi) #solve the system of equations

for (l in c(1:nx)) {
sol_plot[m,l] <- Re(sol[l])
}

jpeg(file = paste("plot_",m,".jpg",sep=""))
plot(xaxis,Im(sol),xlab = "position (x)",ylab="Im[A(x,t)]",ylim=c(-4,4),pch='.')
title(paste("Im[A(x,t)] at t = ",round(m*dt,digits=2)))
lines(xaxis,Im(sol))
dev.off()

}

for(l in c(1:(nt-1))) {
for(m in c(1:(nx-1))) { #make a bitmap with 4 times more pixels, using linear interpolation
IP[4*l-3,4*m-3] = sol_plot[l,m]
IP[4*l-2,4*m-3] = sol_plot[l,m]+0.25*(sol_plot[l+1,m]-sol_plot[l,m])
IP[4*l-1,4*m-3] = sol_plot[l,m]+0.5*(sol_plot[l+1,m]-sol_plot[l,m])
IP[4*l,4*m-3] = sol_plot[l,m]+0.75*(sol_plot[l+1,m]-sol_plot[l,m])
}
}

for(l in c(1:(4*nt))) {
for(m in c(1:(nx-1))) {
IP[l,4*m-2] = IP[l,4*m-3]+0.25*(IP[l,4*m+1]-IP[l,4*m-3])
IP[l,4*m-1] = IP[l,4*m-3]+0.5*(IP[l,4*m+1]-IP[l,4*m-3])
IP[l,4*m] = IP[l,4*m-3]+0.75*(IP[l,4*m+1]-IP[l,4*m-3])
}
}

jpeg(file = “2dplot.jpg”)
image(IPtaxis,IPxaxis,IP,xlab = "t-axis",ylab="x-axis",zlim=c(-2,2))
dev.off()

Plotting the real part of the resulting function A(x,t) at several values of t, we see that the solution initially doesn’t do much of anything, but at some point a “phase turbulence” sets in, starting from the ends of the x-domain, and after that the function evolves in a very random way, without following any clear pattern (unlike the spreading mass/temperature distributions, traveling waves or scattering wavepackets seen with the common linear PDEs).
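A crude way to see the onset of this turbulence in numbers rather than plots is to record the spatial maximum of |A| on each timestep and plot it against time; a sketch to be combined with the code above (amax is a name I’m introducing):

amax <- c(1:nt)*0 #before the main loop: storage for max|A| at each timestep
#inside the main loop, after "sol <- solve(A,psi)", add the line: amax[m] <- max(abs(sol))
plot(c(1:nt)*dt, amax, type="l", xlab="time (t)", ylab="max|A(x,t)|") #after the loop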

An animation of the solution is shown below.

This kind of chaos is typical of nonlinear systems, be they point-mass systems with nonlinear forces between the mass points or field systems with nonlinear field equations such as the CGLE here. Note that solving this equation is a bit too heavy a calculation to do just for the purpose of creating random numbers, so for that end other methods, such as Perlin noise, are used.

The 2D color plot of the real part of the solution, plotted in the xt-plane, looks like this:

2dplot

More plots of the solutions for different values of parameters can be found in this article.

It should be noted that Wolfram Mathematica’s “NDSolve” function usually can’t solve nonlinear PDEs correctly, despite usually working properly in the case of linear PDEs. Some other commercial math programs, such as Comsol Multiphysics, may work better when solving nonlinear problems, at least in my experience.

So, here was the basic idea of how nonlinear PDEs are solved by linearization, and of what kinds of behavior are possible in their solutions. In the next PDE post I will show how to solve the thin-film equation, about which I actually wrote my master’s thesis in 2013, and which, unlike the CGLE, doesn’t usually behave chaotically (but can be made to do so by adding suitable terms).

A Random Bubbles Picture with ImageJ

In the last post, I described how to create an image with random B&W patterns by using random noise and bandpass filtering. This time I will show an even simpler trick with ImageJ, the creation of an image with random bubbles.

Start as before by opening ImageJ and making a new image.

bubble1.jpg

Fill the originally black image with specified random noise of standard deviation 3.0 (Process > Noise > Add Specified Noise).

bubble2.jpg

Next, choose Process > Filters > Minimum and set the “radius” to 20 pixels. This will replace the brightness value of every pixel in the original image with the smallest brightness value that can be found in its circular neighborhood of that radius.

bubble3.jpg

Next do an edge detection, from the menu Process > Find Edges. The resulting image looks like this.

bubble4.jpg

Finally, modify the brightness of the image and add contrast (both done in Image > Adjust > Brightness/Contrast).

bubble5

Now we have a quite good bubbles image, but it has a rather poor resolution, as can be seen by zooming in:

 

The solution to this is to use the Inkscape vector graphics program to “trace” the bitmap and then export it as a PNG file of larger resolution. Choose Path > Trace Bitmap from the Inkscape menu and use the “Brightness Cutoff” option to do this.

Screenshot from 2017-07-29 13-48-06

If this is done by producing a 2-color vector image (not by multiple brightness steps), the result and the zoom look like this:

bubble-vector.jpg

bubble-vector-zoom.jpg

So, no more pixelization or blurring. The Inkscape program is easy to use and shouldn’t cause trouble. Another way to do the same thing is to use some online image vectorization service.

More image processing stuff coming soon, have fun!


Creating Bitmaps with Random Patterns

This time I’m going to write about image processing and computer graphics. Many of you may have seen procedurally generated textures – bitmaps that look like something from the real world such as rusted steel, marble or stone, but are not photos, being completely computer generated. It may initially seem strange that it’s possible to produce patterns with such natural-looking randomness with a mathematical algorithm. The trick is to use random noise (similar to static on an old television screen) and apply cleverly chosen filters on that.

Let me show how to produce an image with random black spots/stripes on a white background, similar to the pattern of a zebra or some breeds of dog. First, we need a free image processing program called ImageJ, downloadable for Windows or Linux from this address.

First, we will choose New > Image from the File menu, and set the resolution of a new blank image to 1000 x 1000 pixels.

spots-1

The result is an all-black bitmap as you see below.

spots-2

Next, we will add gray random noise with a specified standard deviation of pixel brightness. Choose Noise > Add Specified Noise from the Process menu, and set the standard deviation to value 3.00.

spots-3.jpg

The resulting image looks like that below, where you see the familiar “TV static” appearance.

spots-4

Next we will do a so-called bandpass filtering on the image. The idea here is that the program first converts the image to a Fourier transform, which is a function that tells how much of different “wavelengths” are present in the image. An image that consists of large features (with a characteristic diameter of a large number of pixels) has a high amount of large-wavelength Fourier components, while an image that contains small features has a high amount of short-wavelength components. After converting the image to the Fourier transform the bandpass filtering algorithm removes all components with wavelengths that don’t lie on a desired interval and then does an inverse Fourier transform to produce a filtered image.

Choose FFT > Bandpass Filter from the Process menu and set the lower limit to 30 pixels and the upper limit to 100 pixels. This will remove all features with characteristic lengths that are not in that interval. The result looks like this:

spots-5

This resulting image is pretty much the same as Perlin’s fractal noise, which is very important in computer generated graphics.

Next, we have to do a “color quantization” that converts the image to a maximum-contrast one, having only all-black or all-white pixels. To do this, choose Adjust > Brightness/Contrast from the Image menu.

Dragging the Contrast bar to maximum value, we obtain a two-color image that has random spots/stripes as wanted. If the contrast adjustment doesn’t seem to work, try changing the image type to “RGB Color” from the Image menu first.

spots-7.jpg

Spots of different sizes can be created by changing the upper and lower limits in the bandpass filtering stage. If an image with a larger resolution is desired, the image can be converted to a vector image with a program such as Inkscape, and printed back into a .PNG bitmap file of a higher resolution. This will produce a larger image without blurring or pixelization.
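The same pipeline can also be scripted in R with base functions, if you want to experiment without the GUI. Here is a minimal sketch (all the variable names are mine) of the noise, bandpass and quantization steps:

n <- 500
img <- matrix(rnorm(n*n, sd=3), n, n) #the "TV static"
Fw <- fft(img) #2D Fourier transform
f <- pmin(0:(n-1), n:1) #symmetric frequency index along one axis
r <- sqrt(outer(f^2, f^2, "+")) #radial frequency of each Fourier component
mask <- (r >= n/100) & (r <= n/30) #keep wavelengths between 30 and 100 pixels
imgf <- Re(fft(Fw*mask, inverse=TRUE))/(n*n) #inverse transform of the filtered spectrum
image(1*(imgf > 0), col=c("black","white")) #maximum-contrast quantization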

More complicated filtering and coloring procedures can be done to produce images like this one here, but I will write more about it later.

03_preview3.jpg

An example of fractal generating code

Fractals are structures that contain features at all scales, which means that they always reveal more features when zoomed into. Images of computer generated fractals, like the featured image, are something that almost everyone has seen. They can be made by different kinds of iterations of complex valued functions.

Having just quickly (in about half an hour) written an R code that generates one kind of fractal, I’ll share it here:

n <- 500 # Number of pixels in x and y directions
L <- 1 # Length of the sides of square in complex plane
dx <- L/n # Spatial step size
flg <- 0 # A flag that tells if divergence has been observed
tres <- 0 # Variable for number of iterations before divergence

iter <- 50 # Number of iterations per pixel
bmap = matrix(nrow = n, ncol = n) # Bitmap for the fractal image

for (j in c(1:n)) # Loops over the real and imaginary axis
{
for (k in c(1:n))
{
z = dx*j-1.5 + (1i)*(k*dx-0.5) # Point z is chosen from the square
flg <- 0 # Initially, the flag is zero
tres <- 0 # Zero the threshold variable
x <- 1 # Initial value used in the iteration
for (l in c(1:iter)) { # Iterate the map x -> x^2 + z
x = x^2 + z

if ((flg == 0) && (Mod(x) > 8)) # If divergence seems to have taken place, record the number of iterations done and set flag to 1
{
flg <- 1
tres <- l
}

}
bmap[j,k] <- tres # Decide the color of the pixel based on the threshold number of iterations
}
}

jpeg(file = "fractal.jpg") # Save the bitmap
image(bmap, zlim = c(1,30))

The image that the code creates looks like this.

fractal

Now, if you want, try to modify the values of the parameters in the code and see how this affects the output image file. Examples of modifications include changing the exponent 2 in the iterated map x \to x^2 + z to some other number (1.9, 2.1, 2.4 or something) or multiplying the x^2 term by something other than 1.
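For example (sketches; each commented line would replace the line x = x^2 + z inside the iteration loop):

#x = x^2.1 + z #a non-integer exponent (R computes the principal branch for complex x)
#x = 0.8*x^2 + z #scale the quadratic term by something other than 1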


Numerical solution of PDEs, Part 7: 2D Schrödinger equation

Haven’t been posting for a while, but here’s something new… Earlier, I showed how to solve the 1D Schrödinger equation numerically in different situations. Now I’m going to show how to calculate the evolution of a 2D wavepacket in a potential energy field constructed to mimic the classical “two-slit experiment”, which shows how the mechanics of low-mass particles like electrons can exhibit interference similar to that of classical waves (sound, light, water surface waves, and so on).

A 2D Schrödinger equation for a single particle in a time-independent background potential V(x,y) is

i\frac{\partial \Psi (x,y,t)}{\partial t} = -\frac{1}{2} \left( \frac{\partial^2 \Psi (x,y,t)}{\partial x^2} + \frac{\partial^2 \Psi (x,y,t)}{\partial y^2} \right) + V(x,y)\Psi (x,y,t)

where the particle mass has been set to 1 and Planck’s constant to 2\pi.

To solve this numerically, we need the Crank-Nicolson method, as was the case when solving the 1D problem. More specifically, the linear system to be solved is

\left( 1 + 4\kappa_1 + \kappa_{2;k,l} \right) \Psi_{k,l}^{j+1} - \kappa_1 \left( \Psi_{k+1,l}^{j+1} + \Psi_{k-1,l}^{j+1} + \Psi_{k,l+1}^{j+1} + \Psi_{k,l-1}^{j+1} \right) = \left( 1 - 4\kappa_1 - \kappa_{2;k,l} \right) \Psi_{k,l}^{j} + \kappa_1 \left( \Psi_{k+1,l}^{j} + \Psi_{k-1,l}^{j} + \Psi_{k,l+1}^{j} + \Psi_{k,l-1}^{j} \right)

with

\kappa_1 = \frac{i\Delta t}{2\Delta x^2} , \qquad \kappa_{2;k,l} = i\Delta t V_{k,l}

where the wavefunction now has two position indices and one time index, and the potential energy has only two position indices.

To form a model of the two-slit experiment, we choose a domain 0 < x < 6; 0 < y < 6 and make a potential energy function defined by

IF (x < 2.2 OR x > 3.8 OR (x > 2.7 AND x < 3.3)) THEN IF (3.7 < y < 4) THEN V(x,y) = 30

IF (x < 0.5 OR x > 5.5 OR y < 0.5 OR y > 5.5) THEN V(x,y) = 30

Otherwise V(x,y) = 0.

which corresponds to having hard walls surrounding the domain and a barrier with two holes, centered on the line y = 3.85.
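As a sketch, the same potential can be written as an R function (V_fun is a name I’m introducing; the simulation code below builds the same values directly into an array):

V_fun <- function(x, y) {
if((x < 0.5) || (x > 5.5) || (y < 0.5) || (y > 5.5)) return(30) #hard walls around the domain
if(((x < 2.2) || (x > 3.8) || ((x > 2.7) && (x < 3.3))) && ((y > 3.7) && (y < 4.0))) return(30) #barrier with two slits
return(0)
}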

For an initial condition, we choose a Gaussian wavepacket that has a nonzero expectation value of the momentum in y-direction:

\Psi (x,y,0) = \exp \left[ -2(x-3)^2 - 2(y-1.7)^2 + 2iy \right]

An R code that solves this problem for the time interval 0 < t < 1 is

library(graphics) #load the graphics library needed for plotting

lx <- 6.0 #length of the computational domain (both x and y)
lt <- 1.0 #length of the simulation time interval
nx <- 47 #number of discrete lattice points in x and y directions
nt <- 60 #number of timesteps
dx <- lx/nx #length of one discrete lattice cell, same in x and y directions
dt <- lt/nt #length of timestep

V = matrix(nrow=nx,ncol=nx) #potential energies at discrete xy points

for(j in c(1:nx)) { #construct the potential field with a double-slit barrier
for(k in c(1:nx)) {
V[j,k] = 0+0i
if((j*dx<2.2)||(j*dx>3.8)||((j*dx>2.7) && (j*dx<3.3))) {
if((k*dx>3.7) && (k*dx<4.0)) {
V[j,k] = 30+0i #No significant density is going to go through these barriers
}
}
if((j*dx>5.5) || (j*dx<0.5) || (k*dx>5.5) || (k*dx<0.5)) {
V[j,k] = 30+0i
}
}
}

kappa1 = (1i)*dt/(2*dx*dx) #an element needed for the matrices
kappa2 <- c(1:(nx*nx)) #another element

for(j in c(1:nx)) {
for(k in c(1:nx)) {
kappa2[(j-1)*nx+k] <- kappa1*2*dx*dx*V[j,k]
}
}

psi = c(1:(nx*nx)) #array for the wave function values

for(j in c(1:nx)) {
for(k in c(1:nx)) {
psi[(j-1)*nx+k] = exp(-2*(k*dx-1.7)*(k*dx-1.7)-2*(j*dx-3)*(j*dx-3)+(2i)*k*dx) #Gaussian initial wavefunction
if((j*dx > 5.5)||(j*dx < 0.5)||(k*dx > 5.5)||(k*dx < 0.5)) {
psi[(j-1)*nx+k] = as.complex(0)
}
}
}

xaxis <- c(1:nx)*dx #the x values corresponding to the discrete lattice points
yaxis <- c(1:nx)*dx #the y values

A = matrix(nrow=nx*nx,ncol=nx*nx) #matrix for forward time evolution
B = matrix(nrow=nx*nx,ncol=nx*nx) #matrix for backward time evolution
P = matrix(nrow=nx,ncol=nx) #matrix for the solution after time stepping
IP = matrix(nrow=4*nx,ncol=4*nx) #matrix for the higher resolution image obtained by interpolation

for(j in c(1:(nx*nx))) { #Set the values for time evolution matrix elements
for(k in c(1:(nx*nx))) {
A[j,k]=0
B[j,k]=0
if(j==k) {
A[j,k] = 1 + 4*kappa1 + kappa2[j]
B[j,k] = 1 - 4*kappa1 - kappa2[j]
}
if((k==j+1) || (k==j-1)) {
A[j,k] = -kappa1
B[j,k] = kappa1
}
if((k==j+nx)||(k==j-nx)) {
A[j,k] = -kappa1
B[j,k] = kappa1
}
}
}

for(k in c(1:nt)) { #main time stepping loop

sol <- solve(A,B%*%psi) #solve the system of equations

for(l in c(1:(nx*nx))) {
psi[l] <- sol[l]
}

for(l in c(1:nx)) {
for(m in c(1:nx)) {
P[l,m] = abs(psi[(l-1)*nx + m])*abs(psi[(l-1)*nx + m]) #square of the absolute value of wave function
if(abs(V[l,m]) > 5) P[l,m] = 2
}
}
for(l in c(1:(nx-1))) {
for(m in c(1:(nx-1))) { #make a bitmap with 4 times more pixels, using linear interpolation
IP[4*l-3,4*m-3] = P[l,m]
IP[4*l-2,4*m-3] = P[l,m]+0.25*(P[l+1,m]-P[l,m])
IP[4*l-1,4*m-3] = P[l,m]+0.5*(P[l+1,m]-P[l,m])
IP[4*l,4*m-3] = P[l,m]+0.75*(P[l+1,m]-P[l,m])
}
}

for(l in c(1:(4*nx))) {
for(m in c(1:(nx-1))) {
IP[l,4*m-2] = IP[l,4*m-3]+0.25*(IP[l,4*m+1]-IP[l,4*m-3])
IP[l,4*m-1] = IP[l,4*m-3]+0.5*(IP[l,4*m+1]-IP[l,4*m-3])
IP[l,4*m] = IP[l,4*m-3]+0.75*(IP[l,4*m+1]-IP[l,4*m-3])
}
}

jpeg(file = paste("plot_abs_",k,".jpg",sep="")) #save the image
image(IP, zlim = c(0,0.15))
dev.off()

}


The code produces a sequence of image files, where the probability density is plotted with colors, as an output. Some representative images from this sequence (converted to grayscale) are shown below:

A video of the time evolution is shown below:

The threshold for maximum white color has been chosen to be quite low, to make the small amount of probability density that crosses the barrier visible.

The discrete grid of points has been made quite coarse here to keep the computation time reasonable, and the resolution has been increased artificially by using linear interpolation between the discrete points.

So, now we’ve seen how to solve the motion of 2D wavepackets moving around obstacles. In the next numerical methods post, I’ll go through the numerical solution of a nonlinear PDE.


Numerical solution of PDEs, Part 6: Adiabatic approximation for quantum dynamics

Having solved the time-dependent Schrödinger equation both in real and imaginary time, we can move forward to investigate systems where the potential energy function V has an explicit time dependence in it:

i\frac{\partial \Psi (x,t)}{\partial t} = -\frac{1}{2}\frac{\partial^2 \Psi (x,t)}{\partial x^2} + V(x,t)\Psi (x,t)

In this kind of system, the expectation value of the Hamiltonian operator doesn’t have to stay constant.

Time-dependent perturbation theory is one method for finding approximate solutions to this kind of problem, but here I will present a simpler approach, called the adiabatic approximation.

Suppose that the potential energy function V(x,t) is known. Now, let’s say that we also know the solutions of the time-independent Schrödinger equation

-\frac{1}{2}\frac{\partial^2 \psi_n (x;t)}{\partial x^2} + V(x,t)\psi_n (x;t) = E_n (t)\psi_n (x;t)

for any value of t. I denote the solutions as \psi_n (x;t), where it is understood that x is a variable and t is a parameter. Now, if the function V(x,t) changes very slowly as a function of time, i.e. its partial derivative with respect to t is small at all points of the domain, we can use the adiabatic approximation, which says that if the initial state \Psi (x,0) is the ground state for the potential V(x,0), then the state at time t is the ground state for the potential V(x,t).

\Psi (x,t) \approx e^{i\phi (t)} \psi_0 (x;t)

So, we can change the ground state of one potential V_1 (x) into the ground state of another potential V_2 (x) by making a continuous change from V_1 (x) to V_2 (x) slowly enough.

Let’s test this by choosing the function V as

V(x,t) = 2\left( x - 3 - vt \right)^2

i.e. a Hookean potential that moves in the positive x-direction with constant speed v. If we set the wavefunction at t = 0 to

\Psi (x,0) = e^{-(x-3)^2}

which is the ground state corresponding to V(x,0), the time development of the wavepacket should be

\Psi (x,t) \approx e^{i\phi (t)} e^{-(x - 3 - vt)^2}

which means that it moves with the same constant speed as the bottom of the potential V. This can be calculated with the R code below:

library(graphics) #load the graphics library needed for plotting

lx <- 6.0 #length of the computational domain
lt <- 15.0 #length of the simulation time interval
nx <- 100 #number of discrete lattice points
nt <- 300 #number of timesteps
dx <- lx/nx #length of one discrete lattice cell
dt <- lt/nt #length of timestep

V = c(1:nx) #potential energies at discrete points

for(j in c(1:nx)) {
V[j] = as.complex(2*(j*dx-3)*(j*dx-3)) #harmonic potential
}

kappa1 = (1i)*dt/(2*dx*dx) #an element needed for the matrices
kappa2 <- c(1:nx) #another element

for(j in c(1:nx)) {
kappa2[j] <- as.complex(kappa1*2*dx*dx*V[j])
}

psi = as.complex(c(1:nx)) #array for the wave function values

for(j in c(1:nx)) {
psi[j] = as.complex(exp(-(j*dx-3)*(j*dx-3))) #Gaussian initial wavefunction
}

xaxis <- c(1:nx)*dx #the x values corresponding to the discrete lattice points

A = matrix(nrow=nx,ncol=nx) #matrix for forward time evolution
B = matrix(nrow=nx,ncol=nx) #matrix for backward time evolution

for(j in c(1:nx)) {
for(k in c(1:nx)) {
A[j,k]=0
B[j,k]=0
if(j==k) {
A[j,k] = 1 + 2*kappa1 + kappa2[j]
B[j,k] = 1 - 2*kappa1 - kappa2[j]
}
if((j==k+1) || (j==k-1)) {
A[j,k] = -kappa1 #off-diagonal elements
B[j,k] = kappa1
}
}
}

for (k in c(1:nt)) { #main time stepping loop

for(j in c(1:nx)) {
V[j] = as.complex(2*(j*dx-3-k*dt*0.05)*(j*dx-3-k*dt*0.05)) #time dependent potential
}

for(j in c(1:nx)) {
kappa2[j] <- as.complex(kappa1*2*dx*dx*V[j])
}

for(l in c(1:nx)) {
for(m in c(1:nx)) {
A[l,m]=0
B[l,m]=0
if(l==m) {
A[l,m] = 1 + 2*kappa1 + kappa2[m]
B[l,m] = 1 - 2*kappa1 - kappa2[m]
}
if((l==m+1) || (l==m-1)) {
A[l,m] = -kappa1
B[l,m] = kappa1
}
}
}
sol <- solve(A,B%*%psi) #solve the system of equations

for (l in c(1:nx)) {
psi[l] <- sol[l]
}

if(k %% 3 == 1) { #make plots of psi(x) on every third timestep
jpeg(file = paste("plot_",k,".jpg",sep=""))
plot(xaxis,abs(psi)^2,xlab="position (x)", ylab="Abs(Psi)^2",ylim=c(0,2))
title(paste("|psi(x,t)|^2 at t =",k*dt))
lines(xaxis, abs(psi)^2)
lines(xaxis, V)
dev.off()
}
}

and in the following sequence of images you see that the approximation is quite good for v = 0.05

As an animation, this process looks like shown below:

By doing the same calculation again, but this time with v = 3, the image sequence looks like this:

where it is obvious that the approximation doesn’t work anymore.
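The quality of the approximation can also be quantified by computing the overlap of the numerical state with the instantaneous ground state at the final time; a sketch using the variables of the code above (phi, psin and fidelity are names I’m introducing):

v <- 0.05 #the speed used in the time-dependent potential
tfin <- nt*dt #the final time of the simulation
phi <- as.complex(exp(-(xaxis-3-v*tfin)*(xaxis-3-v*tfin))) #instantaneous ground state at tfin
phi <- phi/sqrt(sum(abs(phi)^2)*dx) #normalize it on the grid
psin <- psi/sqrt(sum(abs(psi)^2)*dx) #normalize the numerical solution
fidelity <- abs(sum(Conj(phi)*psin)*dx)^2 #close to 1 when the evolution is adiabatic
print(fidelity)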


Numerical solution of PDEs, Part 5: Schrödinger equation in imaginary time

In the last post, I mentioned that the solution of the time dependent 1D Schrödinger equation

i\frac{\partial \Psi (x,t)}{\partial t} = -\frac{1}{2}\frac{\partial^2 \Psi (x,t)}{\partial x^2} + V(x)\Psi (x,t)

can be written by expanding the initial state \Psi (x,0) in the basis of the solutions of time-independent Schrödinger equation

-\frac{1}{2}\frac{d^2 \psi_n (x)}{dx^2} + V(x)\psi_n (x) = E_n \psi_n (x)

and multiplying each term in the expansion by a complex-valued time dependent phase factor:

\Psi (x,t) = \sum_n c_n e^{-iE_n t} \psi_n (x)

Now, assuming that the energies E_n are all positive and are ordered so that E_0 is the smallest of them, we get an interesting result by replacing the time variable t with an imaginary time variable s = it. The function \Psi (x,s) is then

\Psi (x,s) = \sum_n c_n e^{-E_n s} \psi_n (x)

which is a sum of exponentially decaying terms, of which the first one, c_0 \exp(-E_0 s)\psi_0 (x) decays slowest. So, in the limit of large s, the wave function is

\Psi (x,s) \approx c_0 e^{-E_0 s} \psi_0 (x)

i.e. after normalizing it, it’s approximately the same as the ground state \psi_0. Also, if s is a large number, we have

\frac{\Psi (x,s + \Delta s)}{\Psi (x,s)} \approx e^{-E_0 \Delta s}

or

E_0 \approx -\frac{1}{\Delta s} \ln \frac{\Psi (x,s + \Delta s)}{\Psi (x,s)}

So, here we have a way to use the TDSE to numerically calculate the ground state wavefunction and corresponding energy eigenvalue for any potential energy function V(x). This is very useful, as the ground state can’t usually be solved analytically, except for some very simple potential energy functions such as the harmonic oscillator potential.

To test this imaginary time method, I will approximately calculate the ground state of an anharmonic oscillator, described by a Schrödinger equation

i\frac{\partial \Psi (x,t)}{\partial t} = -\frac{1}{2}\frac{\partial^2 \Psi (x,t)}{\partial x^2} + \left[ 0.25(x-3)^2 + 0.15(x-3)^4 \right] \Psi (x,t)

As an initial state, I will choose a Gaussian function

\Psi (x,0) = e^{-2(x-3)^2}

and the computational domain is defined by 0 < x < 6, 0 < t < 3, \Delta x = 0.05, \Delta t = 0.01.

An R code that performs this calculation is shown below:

library(graphics) #load the graphics library needed for plotting

lx <- 6.0 #length of the computational domain
lt <- 3.0 #length of the simulation time interval
nx <- 120 #number of discrete lattice points
nt <- 300 #number of timesteps
dx <- lx/nx #length of one discrete lattice cell
dt <- (-1i)*lt/nt #length of imaginary timestep

V = c(1:nx) #potential energies at discrete points

for(j in c(1:nx)) {
V[j] = as.complex(0.25*(j*dx-3)*(j*dx-3)+0.15*(j*dx-3)*(j*dx-3)*(j*dx-3)*(j*dx-3)) #anharmonic potential
}

kappa1 = (1i)*dt/(2*dx*dx) #an element needed for the matrices
kappa2 <- c(1:nx) #another element

for(j in c(1:nx)) {
kappa2[j] <- as.complex(kappa1*2*dx*dx*V[j])
}

psi = as.complex(c(1:nx)) #array for the wave function values

for(j in c(1:nx)) {
psi[j] = as.complex(exp(-2*(j*dx-3)*(j*dx-3))) #Gaussian initial wavefunction
}

xaxis <- c(1:nx)*dx #the x values corresponding to the discrete lattice points

A = matrix(nrow=nx,ncol=nx) #matrix for forward time evolution
B = matrix(nrow=nx,ncol=nx) #matrix for backward time evolution

for(j in c(1:nx)) {
for(k in c(1:nx)) {
A[j,k]=0
B[j,k]=0
if(j==k) {
A[j,k] = 1 + 2*kappa1 + kappa2[j]
B[j,k] = 1 - 2*kappa1 - kappa2[j]
}
if((j==k+1) || (j==k-1)) {
A[j,k] = -kappa1
B[j,k] = kappa1
}
}
}

for (k in c(1:nt)) { #main time stepping loop

sol <- solve(A,B%*%psi) #solve the system of equations

for (l in c(1:nx)) {
psi[l] <- sol[l]
}

nrm = 0
for (l in c(1:nx)) nrm <- nrm + dx*abs(psi[l])*abs(psi[l])

if(k %% 3 == 1) { #make plots of psi(x) on every third timestep
jpeg(file = paste("plot_",k,".jpg",sep=""))
plot(xaxis, abs(psi)^2,xlab="position (x)", ylab="Abs(Psi)^2")
title(paste("|psi(x,t)|^2 at t =",k*dt))
lines(xaxis, abs(psi)^2)
lines(xaxis, V*max(abs(psi)^2))
dev.off()
}
}
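The squared norm nrm computed inside the loop is not actually used by the code above, but it gives a quick estimate of the ground state energy: the unnormalized wavefunction decays as e^{-E_0 s}, so the squared norm shrinks by a factor e^{-2E_0 \Delta s} per timestep. A sketch (nrms is a name I’m introducing):

nrms <- c(1:nt)*0 #before the main loop: storage for the squared norms
#inside the main loop, after computing nrm, add the line: nrms[k] <- nrm
ds <- lt/nt #magnitude of the imaginary timestep
E0 <- -log(nrms[nt]/nrms[nt-1])/(2*ds) #estimate of the ground state energy, after the loop
print(E0)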

And a set of plots of |\Psi (x,t)|^2 for several values of t look like this:

Here the shape of the anharmonic potential has been plotted in the same images.

The problem with this method for finding a ground state is that if the system has more degrees of freedom than a single coordinate x, the matrices in the linear systems needed in the time stepping quickly become very large, and the calculation becomes prohibitively slow. To make things worse, the Crank-Nicolson method of time stepping can’t be parallelized for multiple processors. However, there is another way to compute the evolution of a wave function in imaginary time, called Diffusion Monte Carlo, and it is easily parallelizable. DMC is one practical way to calculate ground state wave functions for multi-particle systems such as a helium or a lithium atom.


Numerical solution of PDEs, Part 4: Schrödinger equation

In the earlier posts, I showed how to numerically solve a 1D or 2D diffusion or heat conduction problem using either explicit or implicit finite differencing. In the 1D example, the relevant equation for diffusion was

\frac{\partial c(x,t)}{\partial t} = D\frac{\partial^2 c(x,t)}{\partial x^2}

and an important property of the solution was the conservation of mass,

\frac{d}{dt} \int_0^L c(x,t)\, dx = 0

i.e. the integral of the concentration field over whole domain stays constant.

Next, I will show how to integrate the 1D time-dependent Schrödinger equation, which in a nondimensional form where we set \hbar = 1 and m = 1 reads:

i\frac{\partial \Psi (x,t)}{\partial t} = -\frac{1}{2}\frac{\partial^2 \Psi (x,t)}{\partial x^2} + V(x)\Psi (x,t)

Here i is the imaginary unit and V(x) is the potential energy as a function of x. The solutions of this equation must obey a conservation law similar to the mass conservation in the diffusion equation, the conservation of norm:

\frac{d}{dt} \int_0^L |\Psi (x,t)|^2\, dx = 0

where the quantity |\Psi (x,t)| is the modulus of the complex-valued function \Psi (x,t) . This property of the solution is also called unitarity of the time evolution.
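With the discrete wavefunction of the code further below, this conservation law is easy to monitor; a sketch (norm2 is a name I’m introducing) that can be placed inside the time stepping loop:

norm2 <- sum(abs(psi)^2)*dx #discrete approximation of the norm integral
print(norm2) #should stay constant, up to rounding errors, from timestep to timestep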

Apart from the TDSE, another way to represent the time development of this system is to find the normalized solutions \psi_0 (x), \psi_1 (x), \psi_2 (x) \dots of the time-independent Schrödinger equation

-\frac{1}{2}\frac{d^2 \psi_n (x)}{dx^2} + V(x)\psi_n (x) = E_n \psi_n (x)

and write the initial state \Psi (x,0) as a linear combination of those basis functions:

\Psi (x,0) = \sum_n c_n \psi_n (x)

This is possible because the solutions of the time-independent equation form a basis for the set of acceptable wave functions \psi (x). Then, every term in that eigenfunction expansion is multiplied by a time dependent phase factor \exp(-iE_n t):

\Psi (x,t) = \sum_n c_n e^{-iE_n t} \psi_n (x)

The numbers E_n are the eigenvalues corresponding to the solutions \psi_n (x) and the function \psi_0 (x) is called the ground state corresponding to potential V(x), while the functions \psi_1 (x) is the first excited state and so on.

The Schrödinger equation can’t be discretized by using either the explicit or implicit method that we used when solving the diffusion equation. The method is either numerically unstable or doesn’t conserve the normalization of the wave function (or both) if you try to do that. The correct way to discretize the Schrödinger equation is to replace the wave function with a discrete equivalent

\Psi (x,t) \rightarrow \Psi_{i;j} = \Psi (i\Delta x , j\Delta t)

and the potential energy function V(x) with V_{i;j} (or V_i in the case of time-independent potential), and write an equation that basically tells that propagating the state \Psi_{i;j} forward by half a time step gives the same result as propagating the state \Psi_{i;j+1} backwards by half a time step:

\left( 1 + 2\kappa_1 + \kappa_{2;i} \right) \Psi_{i;j+1} - \kappa_1 \left( \Psi_{i+1;j+1} + \Psi_{i-1;j+1} \right) = \left( 1 - 2\kappa_1 - \kappa_{2;i} \right) \Psi_{i;j} + \kappa_1 \left( \Psi_{i+1;j} + \Psi_{i-1;j} \right)

Here we have

\kappa_1 = \frac{i\Delta t}{2\Delta x^2}

and

\kappa_{2;i} = i\Delta t V_i

This kind of discretization is called the Crank-Nicolson method. As boundary conditions, we usually set that at the boundaries of the computational domain the wavefunction stays at value zero: \Psi (0,t) = \Psi (L,t) = 0 for any value of t. In the diffusion problem, this kind of a BC corresponded to infinite sinks at the boundaries, that annihilated anything that diffused through them. In the Schrödinger equation problem, which is a complex diffusion equation, the equivalent condition makes the boundaries impenetrable walls that deflect elastically anything that collides with them.

An R code that calculates the time evolution of a Gaussian initial wavefunction

\Psi (x,0) = e^{-2(x-3)^2}

in an area of zero potential:

V(x) = 0

for a domain 0 < x < 6, a lattice spacing \Delta x = 0.05, time interval 0 < t < 2 and time step \Delta t = 0.01, is given below:

library(graphics) #load the graphics library needed for plotting

lx <- 6.0 #length of the computational domain
lt <- 2.0 #length of the simulation time interval
nx <- 120 #number of discrete lattice points
nt <- 200 #number of timesteps
dx <- lx/nx #length of one discrete lattice cell
dt <- lt/nt #length of timestep

V = c(1:nx) #potential energies at discrete points

for(j in c(1:nx)) {
V[j] = 0 #zero potential
}

kappa1 = (1i)*dt/(2*dx*dx) #an element needed for the matrices
kappa2 <- c(1:nx) #another element

for(j in c(1:nx)) {
kappa2[j] <- as.complex(kappa1*2*dx*dx*V[j])
}

psi = as.complex(c(1:nx)) #array for the wave function values

for(j in c(1:nx)) {
psi[j] = as.complex(exp(-2*(j*dx-3)*(j*dx-3))) #Gaussian initial wavefunction
}

xaxis <- c(1:nx)*dx #the x values corresponding to the discrete lattice points

A = matrix(nrow=nx,ncol=nx) #matrix for forward time evolution
B = matrix(nrow=nx,ncol=nx) #matrix for backward time evolution

for(j in c(1:nx)) {
for(k in c(1:nx)) {
A[j,k]=0
B[j,k]=0
if(j==k) {
A[j,k] = 1 + 2*kappa1 + kappa2[j]
B[j,k] = 1 - 2*kappa1 - kappa2[j]
}
if((j==k+1) || (j==k-1)) {
A[j,k] = -kappa1
B[j,k] = kappa1
}
}
}

for (k in c(1:nt)) { #main time stepping loop

sol <- solve(A,B%*%psi) #solve the system of equations

for (l in c(1:nx)) {
psi[l] <- sol[l]
}

if(k %% 3 == 1) { #make plots of |psi(x)|^2 on every third timestep
jpeg(file = paste("plot_",k,".jpg",sep=""))
plot(xaxis,abs(psi)^2,xlab="position (x)", ylab="Abs(Psi)^2",ylim=c(0,2))
title(paste("|psi(x,t)|^2 at t =",k*dt))
lines(xaxis,abs(psi)^2)
dev.off()
}
}

The output files are plots of the absolute squares of the wavefunction, and a few of them are shown below.

In the next simulation, I set the domain and discrete step sizes the same as above, but the initial state is:

\Psi (x,0) = e^{-2(x-3)^2 + ix}

Which is a Gaussian wave packet that has a nonzero momentum in the positive x-direction. This is done by changing the line

for(j in c(1:nx)) {
psi[j] = as.complex(exp(-2*(j*dx-3)*(j*dx-3))) #Gaussian initial wavefunction
}

into

for(j in c(1:nx)) {
psi[j] = as.complex(exp(-2*(j*dx-3)*(j*dx-3)+(1i)*j*dx)) #Gaussian initial wavefunction
}

The plots of |\Psi (x,t)|^2 for several values of t are shown below

and there you can see how the wave packet collides with the right boundary of the domain and bounces back.

In the last simulation, I will set the potential function to be

V(x) = 2(x-3)^2

which is a harmonic oscillator potential, and with the nondimensional mass m = 1 and Planck constant \hbar = 1, the ground state \psi_0 (x) of this system is

\psi_0 (x) = \left( \frac{2}{\pi} \right)^{1/4} e^{-(x-3)^2}

If I’d set the initial state to be \Psi (x,0) = \psi_0 (x), or any other solution of the time-independent SE, the modulus of the wavefunction would not change at all. To get something interesting to happen, I instead set an initial state that is a displaced version of the ground state:

\Psi (x,0) = e^{-(x-2)^2}

The solution can be obtained with the code shown below:

library(graphics) #load the graphics library needed for plotting

lx <- 6.0 #length of the computational domain
lt <- 3.0 #length of the simulation time interval
nx <- 360 #number of discrete lattice points
nt <- 300 #number of timesteps
dx <- lx/nx #length of one discrete lattice cell
dt <- lt/nt #length of timestep

V = c(1:nx) #potential energies at discrete points

for(j in c(1:nx)) {
V[j] = as.complex(2*(j*dx-3)*(j*dx-3)) #Harmonic oscillator potential with k=4
}

kappa1 = (1i)*dt/(2*dx*dx) #an element needed for the matrices
kappa2 <- c(1:nx) #another element

for(j in c(1:nx)) {
kappa2[j] <- as.complex(kappa1*2*dx*dx*V[j])
}

psi = as.complex(c(1:nx)) #array for the wave function values

for(j in c(1:nx)) {
psi[j] = as.complex(exp(-(j*dx-2)*(j*dx-2))) #Gaussian initial wavefunction, displaced from equilibrium
}

xaxis <- c(1:nx)*dx #the x values corresponding to the discrete lattice points

A = matrix(nrow=nx,ncol=nx) #matrix for forward time evolution
B = matrix(nrow=nx,ncol=nx) #matrix for backward time evolution

for(j in c(1:nx)) {
for(k in c(1:nx)) {
A[j,k]=0
B[j,k]=0
if(j==k) {
A[j,k] = 1 + 2*kappa1 + kappa2[j]
B[j,k] = 1 - 2*kappa1 - kappa2[j]
}
if((j==k+1) || (j==k-1)) {
A[j,k] = -kappa1
B[j,k] = kappa1
}
}
}

for (k in c(1:nt)) { #main time stepping loop

sol <- solve(A,B%*%psi) #solve the system of equations

for (l in c(1:nx)) {
psi[l] <- sol[l]
}

if(k %% 3 == 1) { #make plots of Abs(psi(x))^2 on every third timestep
jpeg(file = paste("plot_",k,".jpg",sep=""))
plot(xaxis,abs(psi)^2, xlab="position (x)", ylab="Abs(Psi)^2",ylim=c(0,2))
title(paste("|psi(x,t)|^2 at t =",k*dt))
lines(xaxis,abs(psi)^2)
lines(xaxis,V)
dev.off()
}
}

and the solution at different values of t looks like this (images and video):

Here the shape of the Hookean potential energy is plotted in the same images. So, here you see how the center of the Gaussian wavefunction oscillates around the point x = 3, just like a classical mechanical harmonic oscillator does when set free from a position that is displaced from equilibrium.
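The oscillation of the center can be made quantitative by tracking the expectation value of the position; a sketch using the variables of the code above (xmean is a name I’m introducing):

xmean <- sum(xaxis*abs(psi)^2)/sum(abs(psi)^2) #expectation value of x at the current timestep
print(xmean) #oscillates around x = 3 like the position of a classical oscillator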

By changing the code that produces the output images, we can also get a sequence of plots of the imaginary part of the wavefunction:

if(k %% 3 == 1) { #make plots of Im(psi(x)) on every third timestep
jpeg(file = paste("plot_",k,".jpg",sep=""))
plot(xaxis,Im(psi), xlab="position (x)", ylab="Im(Psi)",ylim=c(-1.5,1.5))
title(paste("Im(psi(x,t)) at t =",k*dt))
lines(xaxis,Im(psi))
lines(xaxis,V)
dev.off()
}

and the resulting plots look like this:
