Numerical solution of PDE:s, Part 10: The thin-film equation

Earlier, I showed how to solve the 1D and 2D versions of the complex Ginzburg-Landau equation, which is an example of a nonlinear partial differential equation. That equation had to be linearized before it could be advanced with implicit differencing, which meant that the matrix of the linear system was different on each timestep.

Another nonlinear PDE is the so-called thin film equation, which in 2D form reads

\frac{\partial h}{\partial t} = -\frac{\gamma}{3\mu }\nabla \cdot \left(h^3 \nabla \nabla^2 h \right)

Here the function h(x,y,t) describes the local thickness of a film of viscous liquid located on top of a solid surface described by the xy-plane. The parameter \gamma is the surface tension of the liquid-gas interface and \mu is the viscosity of the liquid.

An unusual thing about this equation is that it’s fourth order in the spatial coordinates, while most equations in physics are second order DE:s. Some equations of continuum mechanics describing elastic bending of cylinders and plates are also fourth order, so this is not the only example.

In many cases, a corresponding equation with only one spatial coordinate is enough for describing thin film physics, and then the graph of the solution can be thought of as depicting a cross-section of the film at a single value of the y-coordinate.

\frac{\partial h}{\partial t} = -\frac{\gamma}{3\mu }\frac{\partial}{\partial x} \left(h^3 \frac{\partial^3 h}{\partial x^3} \right)

When discretizing this equation, we must note that the factor h^3 has to be treated explicitly to obtain a linear system of equations, just as had to be done with the |A|^2 term in the CGLE. Also, we will set \gamma/(3\mu ) = 1 to make the equation dimensionless. One correct way to discretize this equation leads to the system

\left(1 + 2\alpha_{j+1}^{i} + 2\alpha_{j-1}^{i}\right) h_{j}^{i+1} + \alpha_{j+1}^{i} h_{j-3}^{i+1} - 2\alpha_{j+1}^{i} h_{j-2}^{i+1} - \alpha_{j-1}^{i} h_{j-1}^{i+1} - \alpha_{j+1}^{i} h_{j+1}^{i+1} - 2\alpha_{j-1}^{i} h_{j+2}^{i+1} + \alpha_{j-1}^{i} h_{j+3}^{i+1} = h_{j}^{i}

where the two-index object \alpha_{j}^{i} is

\alpha_{j}^{i} = \frac{\Delta t}{4 \Delta x^4} \left(h_{j}^{i}\right)^3

Note that now the linear system is not tridiagonal, but heptadiagonal due to the higher order derivatives. Solution of the equation with this differencing scheme and a Gaussian initial condition h(x,0) is done with the following R language code.

library(graphics) #load the graphics library needed for plotting

lx <- 10 #length of the computational domain
lt <- 10 #length of the simulation time interval
nx <- 150 #number of discrete lattice points
nt <- 150 #number of timesteps
dx <- lx/nx #length of one discrete lattice cell
dt <- lt/nt #length of timestep

psi = c(1:nx) #array for the function h values
sol = c(1:nx)

kappa = dt/(4*dx*dx*dx*dx)

for(j in c(1:nx)) {
psi[j] = exp(-(j*dx-5)*(j*dx-5))
sol[j] = psi[j]
}

xaxis <- c(1:nx)*dx #the x values corresponding to the discrete lattice points
alpha <- c(1:nx)

A = matrix(nrow=nx,ncol=nx) #matrix for forward time evolution

for (m in c(1:nt)) { #main time stepping loop

for(j in c(1:nx)) {
alpha[j] = kappa*psi[j]*psi[j]*psi[j]
}

for(j in c(1:nx)) {
for(k in c(1:nx)) {
A[j,k]=0
if(j==k) {
if(j!=nx && j!=1) {
A[j,k] = 1 + 2*alpha[j+1] + 2*alpha[j-1] #diagonal elements
}
if(j==1) {
A[j,k] = 1 + 2*alpha[j+1]
}
if(j==nx) {
A[j,k] = 1 + 2*alpha[j-1]
}
}

if(j==k+1 && j!=1) {
A[j,k] = -alpha[j-1] #off-diagonal elements
}

if(j==k-1 && j!=nx) {
A[j,k] = -alpha[j+1]
}

if(j==k+2 && j<nx) {
A[j,k] = -2*alpha[j+1]
}

if(j==k-2 && j>1) {
A[j,k] = -2*alpha[j-1]
}

if(j==k+3 && j<nx) {
A[j,k] = alpha[j+1]
}

if(j==k-3 && j>1) {
A[j,k] = alpha[j-1]
}

}
}

for(l in c(1:nx)) {
psi[l] = sol[l]
}
sol <- solve(A,psi) #solve the system of equations

jpeg(file = paste("plot_",m,".jpg",sep=""))
plot(xaxis,sol,xlab = "position (x)",ylab="h(x,t)",ylim=c(0,1),pch='.')
title(paste("h(x,t) at t = ",round(m*dt,digits=2)))
lines(xaxis,sol)
dev.off()

}


An animation of the solution looks like this.

The solutions of this equation have the property that the graph settles into the shape of a downward opening parabola as time proceeds. A problem with this is that the contact angle, at which the liquid surface approaches the solid surface (the x-axis) in the final equilibrium, can be anything between 0 and 90 degrees depending on the relative width and height of the initial Gaussian. In a real liquid-solid system, the final contact angle depends on the surface tensions of both the liquid-solid and the liquid-gas interfaces, as described by the Young equation.
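For reference, the Young equation relates the equilibrium contact angle \theta to the three interfacial tensions:

\gamma_{SG} = \gamma_{SL} + \gamma_{LG}\cos \theta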

To create an equation that can model equilibrium contact angles appropriately, we add a disjoining pressure term \Pi (h) in the equation, as here:

\frac{\partial h}{\partial t} = -\frac{\partial}{\partial x}\left[h^3 \left(\frac{\partial^3 h}{\partial x^3} + \frac{\partial \Pi (h)}{\partial x}\right)\right]

One form of the \Pi -term that works is

\Pi (h) = B\left[\left(\frac{h_*}{h}\right)^{n} - \left(\frac{h_*}{h}\right)^{m}\right]

where h_* is a precursor film thickness.
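Note that with this form the disjoining pressure vanishes exactly at the precursor thickness and changes sign there (for n > m it is positive when h < h_* and negative when h > h_*):

\Pi (h_*) = B\left[\left(\frac{h_*}{h_*}\right)^{n} - \left(\frac{h_*}{h_*}\right)^{m}\right] = B(1-1) = 0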

[Figure: contact angle between the liquid surface and the solid.]

The idea behind the precursor film is that even when we have a liquid drop or puddle surrounded by apparently dry solid surface, there is actually a very thin adsorbed layer of liquid molecules on that dry area (the molecules get there by evaporating from the liquid surface and reattaching on the solid). So, in a simulation where we include the disjoining pressure, we need to use an initial condition that is a Gaussian with an added constant equal to the precursor thickness:

h(x,0) = \exp \left[-b(x-x_0 )^2 \right] + h_*

Here we will set the values n = 5 and m = 2 in the disjoining pressure term, and set the precursor film thickness to the value h_* = 0.01. The disjoining pressure term can be treated explicitly at the same time as we use implicit differencing for the rest of the equation: we subtract, on the RHS of the discretized equation, a term D_{j}^{i},

\mathbf{A}^{i}\,\mathbf{h}^{i+1} = \mathbf{h}^{i} - \mathbf{D}^{i}

where \mathbf{A}^{i} is the heptadiagonal matrix written out above and the components D_{j}^{i} of the vector \mathbf{D}^{i} are

defined by

D_{j}^{i} = \frac{C_{j+1}^{i} - C_{j-1}^{i}}{2\Delta x}

where

C_{j}^{i} = \left(h_{j}^{i}\right)^3 \, \frac{\Pi_{j+1}^{i} - \Pi_{j-1}^{i}}{2\Delta x}

and \Pi_{j}^i is the disjoining pressure evaluated at the discrete points. Note the use of a central finite difference instead of a one-sided difference when calculating the derivatives – doing otherwise is likely to make the simulation crash. The term D_{j}^{i} only affects the right-hand side vector of the linear system \mathbf{Ax} = \mathbf{b} that we solve on each timestep. A code that solves the new equation for the pre-factor value B=0.1 is shown next.

library(graphics) #load the graphics library needed for plotting

lx <- 10 #length of the computational domain
lt <- 5 #length of the simulation time interval
nx <- 150 #number of discrete lattice points
nt <- 5000 #number of timesteps
dx <- lx/nx #length of one discrete lattice cell
dt <- lt/nt #length of timestep

prec = 0.01 #precursor film thickness

psi = c(1:nx) #array for the function h values
sol = c(1:nx)

disjoin1 = c(1:nx)
disjoin2 = c(1:nx)
disjoin3 = c(1:nx)

kappa = dt/(4*dx*dx*dx*dx)

for(j in c(1:nx)) {
psi[j] = exp(-(j*dx-5)*(j*dx-5))+prec
sol[j] = psi[j]
}

xaxis <- c(1:nx)*dx #the x values corresponding to the discrete lattice points
alpha <- c(1:nx)

A = matrix(nrow=nx,ncol=nx) #matrix for forward time evolution

for (m in c(1:nt)) { #main time stepping loop

disjoin1[1] = 0
disjoin1[nx] = 0
for(j in c(1:nx)) {
disjoin1[j] = 0.1*((prec/psi[j])^5 - (prec/psi[j])^2)
}

for(j in c(2:(nx-1))) { #go up to nx-1 so that all disjoin2 values used below are initialized
disjoin2[j] = psi[j]*psi[j]*psi[j]*(disjoin1[j+1] - disjoin1[j-1])/(2*dx)
}

for(j in c(2:(nx-2))) {
disjoin3[j] = (disjoin2[j+1]-disjoin2[j-1])/(2*dx)
}

disjoin3[1] = 0
disjoin3[2] = 0
disjoin3[nx-1] = 0
disjoin3[nx] = 0

for(j in c(1:nx)) {
alpha[j] = kappa*psi[j]*psi[j]*psi[j]
}

for(j in c(1:nx)) {
for(k in c(1:nx)) {
A[j,k]=0
if(j==k) {
if(j!=nx && j!=1) {
A[j,k] = 1 + 2*alpha[j+1] + 2*alpha[j-1] #diagonal elements
}
if(j==1) {
A[j,k] = 1 + 2*alpha[j+1]
}
if(j==nx) {
A[j,k] = 1 + 2*alpha[j-1]
}
}

if(j==k+1 && j!=1) {
A[j,k] = -alpha[j-1] #off-diagonal elements
}

if(j==k-1 && j!=nx) {
A[j,k] = -alpha[j+1]
}

if(j==k+2 && j<nx) {
A[j,k] = -2*alpha[j+1]
}

if(j==k-2 && j>1) {
A[j,k] = -2*alpha[j-1]
}

if(j==k+3 && j<nx) {
A[j,k] = alpha[j+1]
}

if(j==k-3 && j>1) {
A[j,k] = alpha[j-1]
}

}
}

for(l in c(1:nx)) {
psi[l] = sol[l]
}
sol <- solve(A,psi-disjoin3) #solve the system of equations
for(l in c((nx-20):nx)) { # remove the risk of boundary effects on the right end of the domain
psi[l]=prec
sol[l]=prec
}

if(m%%10 == 0) {
jpeg(file = paste("plot_",m,".jpg",sep=""))
plot(xaxis,sol,xlab = "position (x)",ylab="h(x,t)",ylim=c(0,1),pch='.')
title(paste("h(x,t) at t = ",round(m*dt,digits=2)))
lines(xaxis,sol)
dev.off()
}

}

In the next animation, the solution curves for pre-factor values B=0.1 (red curve) and B=0.01 (black curve) are shown in the same graph.

In the animation, it is apparent that a larger value of B leads to a larger contact angle at equilibrium. Actually, it can be shown that the contact angle \theta depends on the values of the parameters as

[Equation: the contact angle \theta as a function of B, h_*, \gamma and the exponents n, m.]
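A standard way to quantify this dependence (a small-slope result from Frumkin-Derjaguin theory; the exact expression shown in the original figure may differ) uses the augmented Young equation, which for the \Pi (h) given above with n = 5 and m = 2 gives

\cos \theta = 1 + \frac{1}{\gamma}\int_{h_*}^{\infty}\Pi (h)\,dh = 1 - \frac{3Bh_*}{4\gamma},

so a larger pre-factor B (or a thicker precursor film) indeed corresponds to a larger equilibrium contact angle.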

More information about the thin film equation and its solutions can be found on the Wiki page here, and on the NJIT department of applied mathematics homepage. When the effects of gravitation or surface tension gradients are added to the TFE, many kinds of interesting pattern formation effects can happen, just like in our previous example of a nonlinear PDE, the Ginzburg-Landau equation. If the liquid film consists of a mixture of several, possibly volatile, liquids, one obtains complicated multiphysics problems that couple fluid dynamics, evaporation, heat transfer and chemical kinetics, all at the same time.

 


Numerical solution of PDE:s, Part 9: 2D Ginzburg-Landau equation

In an earlier post, I described the 1-dimensional Ginzburg-Landau equation and showed how it can be linearized and solved with an implicit differencing scheme. The most interesting feature of the solutions was the appearance of seemingly random oscillations. A similar solution method is possible for the 2d version of the equation:

\frac{\partial A}{\partial t} = A + (1 + i\alpha )\left(\frac{\partial^2 A}{\partial x^2} + \frac{\partial^2 A}{\partial y^2}\right) - (1 + i\beta )|A|^2 A

where again \alpha and \beta are real valued constants.

An R language code for solving this with parameter values \alpha = 0 , \beta = 1.5 and an initial state A(x,y,0) which is a mixture of 2d plane waves is shown below.

library(graphics) #load the graphics library needed for plotting

lx <- 80.0 #length of the computational domain in x-direction
ly <- 80.0 #length of the computational domain in y-direction
lt <- 60.0 #length of the simulation time interval
nx <- 50 #number of discrete lattice points in x-direction
ny <- 50 #number of discrete lattice points in y-direction
nt <- 240 #number of timesteps
dx <- lx/nx #length of one discrete lattice cell in x-direction
dy <- ly/ny #length of one discrete lattice cell in y-direction
dt <- lt/nt #length of timestep

a <- 0
b <- 1.5

kappa1 = dt*(1+1i*a)/dx/dx
kappa2 = dt*(1+1i*b)

C = c(1:(nx*ny))
Cu = c(1:(nx*ny))
A2d = matrix(nrow=ny,ncol=nx)
xaxis <- c(0:(nx-1))*dx #the x values corresponding to the discrete lattice points
yaxis <- c(0:(ny-1))*dy #the y values corresponding to the discrete lattice points

A = matrix(nrow=(nx*ny),ncol=(nx*ny))
IP = matrix(nrow=4*nx,ncol=4*nx)

for (i in c(1:ny)) {
for (j in c(1:nx)) {
A2d[i,j] <- 0.01*exp(1i*5.21*i+1i*10.331*j)+0.01*exp(1i*15.71*i+1i*17.831*j)
}
}

for (k in c(1:nt)) { #main time stepping loop

for(i in c(1:ny)) {
for(j in c(1:nx)) {
C[(i-1)*nx+j] <- A2d[i,j]

}
}

for(i in c(1:(nx*ny))) {
for(j in c(1:(nx*ny))) {
A[i,j] <- 0
if(i==j) A[i,j] <- 1+2*kappa1+kappa2*abs(C[j])*abs(C[j]) - dt #diagonal elements (same value for boundary and interior rows)
if(j==i+1 && (i%%nx != 0)) A[i,j] <- -kappa1
if(j==i-1 && (i%%nx != 1)) A[i,j] <- -kappa1
if(j==i+nx) A[i,j] <- -kappa1
if(j==i-nx) A[i,j] <- -kappa1
}
}

Cu <- solve(A,C)

for(i in c(1:ny)) {
for(j in c(1:nx)) {
if(i==1) Cu[(i-1)*nx+j]=Cu[i*nx+j]
if(i==ny) Cu[(i-1)*nx+j]=Cu[(i-2)*nx+j]
if(j==1) Cu[(i-1)*nx+j]=Cu[(i-1)*nx+j+1]
if(j==nx) Cu[(i-1)*nx+j]=Cu[(i-1)*nx+j-1]
}
}

for(i in c(1:ny)) {
for(j in c(1:nx)) {
A2d[i,j] <- Cu[(i-1)*nx+j]
}
}

for(l in c(1:(nx-1))) {
for(m in c(1:(nx-1))) { #make a bitmap with 4 times more pixels, using linear interpolation
IP[4*l-3,4*m-3] = A2d[l,m]
IP[4*l-2,4*m-3] = A2d[l,m]+0.25*(A2d[l+1,m]-A2d[l,m])
IP[4*l-1,4*m-3] = A2d[l,m]+0.5*(A2d[l+1,m]-A2d[l,m])
IP[4*l,4*m-3] = A2d[l,m]+0.75*(A2d[l+1,m]-A2d[l,m])
}
}

for(l in c(1:(4*nx))) {
for(m in c(1:(nx-1))) {
IP[l,4*m-2] = IP[l,4*m-3]+0.25*(IP[l,4*m+1]-IP[l,4*m-3])
IP[l,4*m-1] = IP[l,4*m-3]+0.5*(IP[l,4*m+1]-IP[l,4*m-3])
IP[l,4*m] = IP[l,4*m-3]+0.75*(IP[l,4*m+1]-IP[l,4*m-3])
}
}

#make a plot of the real part of A(x,y) on each timestep
jpeg(file = paste("plot_",k,".jpg",sep=""))
image(Re(IP),zlim=c(-3,3))
title(paste("Real part of solution A(x,y,t)",k*dt))
dev.off()

}

The code produces 2d plots of the real part of the solution on each timestep, and in the video shown below they have been combined into an animation.

In the animation we see the appearance of spiral patterns typical for these values of the parameters \alpha,\beta . Other values of the parameters produce different kinds of patterns, as is described in this link.

Numerical solution of PDE:s, Part 8: Complex Ginzburg-Landau Equation

In the previous numerical solution posts, I described linear equations like the diffusion equation and the Schrödinger equation, and how they can be solved by (implicit or explicit) finite differencing. The idea of the implicit methods was to convert the equation into a linear system of equations, from which the function values on a discrete mesh could be calculated.

Saying that these equations were linear means that they can be written as

\hat{L}f(x,t) = g(x,t)

where the linear differential operator \hat{L}, containing space and time derivatives, acts on the function f and produces "something" g (usually zero, but nonzero in the case of source terms or inhomogeneities).
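For example, the ordinary diffusion equation fits this form with

\hat{L} = \frac{\partial}{\partial t} - D\frac{\partial^2}{\partial x^2}, \qquad \hat{L}f(x,t) = 0 .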

As a first example of a nonlinear PDE, let’s consider the complex Ginzburg-Landau equation (CGLE), which reads:

\frac{\partial A}{\partial t} = A + (1 + i\alpha )\frac{\partial^2 A}{\partial x^2} - (1 + i\beta )|A|^2 A

Here the \alpha and \beta are real parameters and i is the imaginary unit. Applying an implicit differencing on this may seem to result in a system of equations

\frac{A_{i}^{j+1} - A_{i}^{j}}{\Delta t} = A_{i}^{j+1} + (1 + i\alpha )\frac{A_{i+1}^{j+1} - 2A_{i}^{j+1} + A_{i-1}^{j+1}}{\Delta x^2} - (1 + i\beta )|A_{i}^{j+1}|^2 A_{i}^{j+1}

but this is not a linear system because of the |A_{i}^{j+1}|^2, so we cannot solve the problem in this way by using linear algebra.

The trick to solve this is to linearize the system, by evaluating the |A|^2 at timestep j and the rest of the quantities at timestep j+1, producing the system

\frac{A_{i}^{j+1} - A_{i}^{j}}{\Delta t} = A_{i}^{j+1} + (1 + i\alpha )\frac{A_{i+1}^{j+1} - 2A_{i}^{j+1} + A_{i-1}^{j+1}}{\Delta x^2} - (1 + i\beta )|A_{i}^{j}|^2 A_{i}^{j+1}

which is now a linear system w.r.t. the variables evaluated at timestep j+1 (the matrix for solving the "A^{j+1}":s has diagonal elements that depend on the "A^j":s). A more sophisticated method would do several iterations to approximate the values of A(x,t) between the timesteps j and j+1.
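A minimal sketch of what such an iteration could look like in R is shown here; buildMatrix is a hypothetical helper that assembles the implicit-step matrix from the given |A|^2 values, and psi holds the values from timestep j (this is only an illustration of the idea, not part of the code below):

Aguess <- psi #initial guess: the solution from timestep j
for (iter in c(1:3)) { #a few fixed-point (Picard) iterations per timestep
M <- buildMatrix(abs(Aguess)*abs(Aguess)) #hypothetical helper: linearize around the current guess
Aguess <- solve(M,psi) #solve the resulting linear system
}
sol <- Aguess #accept the iterated values as the timestep j+1 solution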

An R code that solves the equation for a domain x\in [0,100], t\in [0,150], using discrete steps \Delta x = 0.66, \Delta t = 0.33 , initial state A(x,0) = 0.1e^{2ix} and values \alpha=3 and \beta = -2, is shown here.

library(graphics) #load the graphics library needed for plotting

lx <- 100 #length of the computational domain
lt <- 150 #length of the simulation time interval
nx <- 150 #number of discrete lattice points
nt <- 450 #number of timesteps
dx <- lx/nx #length of one discrete lattice cell
dt <- lt/nt #length of timestep

a <- 3
b <- -2

kappa1 = dt*(1+1i*a)/dx/dx #an element needed for the matrices
kappa2 = dt*(1+1i*b)

psi = as.complex(c(1:nx)) #array for the function A values
sol = as.complex(c(1:nx))

for(j in c(1:nx)) {
psi[j] = 0.1*exp(2i*j*dx)
sol[j] = psi[j]
}

xaxis <- c(1:nx)*dx #the x values corresponding to the discrete lattice points

IPxaxis <- c(1:(4*nx))*dx/4
IPtaxis <- c(1:(4*nt))*dt/4

sol_plot = matrix(nrow=nt,ncol=nx)

A = matrix(nrow=nx,ncol=nx) #matrix for forward time evolution
IP = matrix(nrow = 4*nt, ncol=4*nx)

for (m in c(1:nt)) { #main time stepping loop

for(j in c(1:nx)) {
for(k in c(1:nx)) {
A[j,k]=0
if(j==k) {
A[j,k] = 1 + 2*kappa1 + kappa2*abs(sol[j])*abs(sol[j]) - dt #diagonal elements
}
if((j==k+1) || (j==k-1)) {
A[j,k] = -kappa1 #off-diagonal elements
}
}
}

for(l in c(1:nx)) {
psi[l] = sol[l]
}
sol <- solve(A,psi) #solve the system of equations

for (l in c(1:nx)) {
sol_plot[m,l] <- Re(sol[l])
}

jpeg(file = paste("plot_",m,".jpg",sep=""))
plot(xaxis,Im(sol),xlab = "position (x)",ylab="Im[A(x,t)]",ylim=c(-4,4),pch='.')
title(paste("Im[A(x,t)] at t = ",round(m*dt,digits=2)))
lines(xaxis,Im(sol))
dev.off()

}

for(l in c(1:(nt-1))) {
for(m in c(1:(nx-1))) { #make a bitmap with 4 times more pixels, using linear interpolation
IP[4*l-3,4*m-3] = sol_plot[l,m]
IP[4*l-2,4*m-3] = sol_plot[l,m]+0.25*(sol_plot[l+1,m]-sol_plot[l,m])
IP[4*l-1,4*m-3] = sol_plot[l,m]+0.5*(sol_plot[l+1,m]-sol_plot[l,m])
IP[4*l,4*m-3] = sol_plot[l,m]+0.75*(sol_plot[l+1,m]-sol_plot[l,m])
}
}

for(l in c(1:(4*nt))) {
for(m in c(1:(nx-1))) {
IP[l,4*m-2] = IP[l,4*m-3]+0.25*(IP[l,4*m+1]-IP[l,4*m-3])
IP[l,4*m-1] = IP[l,4*m-3]+0.5*(IP[l,4*m+1]-IP[l,4*m-3])
IP[l,4*m] = IP[l,4*m-3]+0.75*(IP[l,4*m+1]-IP[l,4*m-3])
}
}

jpeg(file = "2dplot.jpg")
image(IPtaxis,IPxaxis,IP,xlab = "t-axis",ylab="x-axis",zlim=c(-2,2))
dev.off()

Plotting the real part of the resulting function A(x,t) at several values of t, we see that the solution initially doesn't do much of anything, but at some point a "phase turbulence" sets in, starting from the ends of the x-domain, and after that the function evolves in a very random way, without following any clear pattern (unlike the spreading mass/temperature distributions, traveling waves or scattering wavepackets in the case of the common linear PDE:s).

An animation of the solution is shown below.

This kind of chaos is typical of nonlinear systems, be they point mass systems with nonlinear forces between the mass points or field systems with nonlinear field equations such as the CGLE here. Note that the solution of this equation is a bit too heavy a calculation to do just for the purpose of creating random numbers, so for that end other methods such as Perlin noise are used.

The 2D color plot of the real part of the solution, plotted in the xt-plane, looks like this:


More plots of the solutions for different values of parameters can be found in this article.

It should be noted that Wolfram Mathematica's "NDSolve" function can't usually solve nonlinear PDE:s correctly, despite usually working properly in the case of linear PDE:s. Some other commercial math programs, such as Comsol Multiphysics, may work better when solving nonlinear problems, at least in my experience.

So, here was the basic idea of how nonlinear PDE:s are solved by linearization, and what kinds of behavior are possible in their solutions. In the next PDE post I will show how to solve the thin-film equation, about which I actually wrote my master's thesis in 2013, and which, unlike the CGLE, doesn't usually behave chaotically (but can be made to do so by adding suitable terms).

Numerical solution of PDE:s, Part 7: 2D Schrödinger equation

Haven't been posting for a while, but here's something new... Earlier I showed how to solve the 1D Schrödinger equation numerically in different situations. Now I'm going to show how to calculate the evolution of a 2D wavepacket in a potential energy field constructed to mimic the classical "two-slit experiment", which shows how the mechanics of low-mass particles like electrons can exhibit interference similar to the mechanics of classical waves (sound, light, water surface, and so on).

A 2D Schrödinger equation for a single particle in a time-independent background potential V(x,y) is

i\frac{\partial \Psi}{\partial t} = -\frac{1}{2}\left(\frac{\partial^2 \Psi}{\partial x^2} + \frac{\partial^2 \Psi}{\partial y^2}\right) + V(x,y)\Psi

where the particle mass has been set to 1 and Planck's constant to 2\pi.

To solve this numerically, we need the Crank-Nicolson method, as was the case when solving the 1D problem. More specifically, the linear system to be solved is

[Equation: the Crank-Nicolson linear system for the wavefunction values at the next timestep.]

with

[Equation: the definitions of the coefficients appearing in the system above.]

where the wavefunction now has two position indices and one time index, and the potential energy has only two position indices.
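In the code, the two position indices are flattened into a single vector index so that the problem becomes an ordinary matrix-vector system; the mapping used in the 2D codes of these posts sends the grid point (i,j) to the vector element (i-1)*nx + j. A small self-contained R illustration of the mapping:

nx <- 4 #number of lattice points in the x-direction (toy example)
ny <- 3 #number of lattice points in the y-direction (toy example)
for (i in c(1:ny)) {
for (j in c(1:nx)) {
cat("grid point (",i,",",j,") -> vector index ",(i-1)*nx+j,"\n",sep="")
}
}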

To form a model of the two-slit experiment, we choose a domain 0 < x < 6; 0 < y < 6 and make a potential energy function defined by

IF (x < 2.2 OR x > 3.8 OR (x > 2.7 AND x < 3.3)) THEN IF (3.7 < y < 4) THEN V(x,y) = 30

IF (x < 0.5 OR x > 5.5 OR y < 0.5 OR y > 5.5) THEN V(x,y) = 30

Otherwise V(x,y) = 0.

which corresponds to having hard walls surrounding the domain and a barrier with two holes around the line y = 3.85.

For an initial condition, we choose a Gaussian wavepacket that has a nonzero expectation value of the momentum in y-direction:

[Equation: the initial Gaussian wavepacket \Psi(x,y,0).]

An R code that solves this problem for a time interval 0 < t < 1 has the overall structure shown below; the values marked as assumed in the comments are representative, and the parts of the listing that form and solve the Crank-Nicolson system are only indicated with comments.

library(graphics) #load the graphics library needed for plotting

lx <- 6.0 #length of the computational domain in the x- and y-directions (from the text)
lt <- 1.0 #length of the simulation time interval (from the text)
nx <- 60 #number of lattice points in one direction (assumed value)
nt <- 200 #number of timesteps (assumed value)
dx <- lx/nx #length of one discrete lattice cell
dt <- lt/nt #length of timestep

V = matrix(nrow=nx,ncol=nx) #potential energy V(x,y) on the grid
for(j in c(1:nx)) {
for(k in c(1:nx)) {
V[j,k] = 0+0i
if((j*dx<2.2)||(j*dx>3.8)||((j*dx>2.7) && (j*dx<3.3))) {
if((k*dx>3.7) && (k*dx<4.0)) {
V[j,k] = 30+0i #No significant density is going to go through these barriers
}
}
if((j*dx>5.5) || (j*dx<0.5) || (k*dx>5.5) || (k*dx<0.5)) {
V[j,k] = 30+0i
}
}
}

kappa1 = (1i)*dt/(2*dx*dx) #an element needed for the matrices
kappa2 = (1i)*dt/2 #factor multiplying the potential term (assumed form)

psi = as.complex(c(1:(nx*nx))) #array for the wavefunction values
#(psi is initialized here to the Gaussian wavepacket of the previous equation
# and set to zero inside the boundary walls; this part of the listing is omitted)

P = matrix(nrow=nx,ncol=nx) #probability density for plotting
IP = matrix(nrow=4*nx,ncol=4*nx) #interpolated bitmap with 4 times more pixels

for (k in c(1:nt)) { #main time stepping loop

#(The Crank-Nicolson matrix is assembled here from kappa1, kappa2 and V, the
# linear system is solved for the new psi, and the probability density P[l,m]
# is computed from it; lattice cells where Re(V) > 5 are marked with
# P[l,m] = 2 so that the barrier shows up in the plots. This part of the
# listing is omitted.)

for(l in c(1:(nx-1))) {
for(m in c(1:(nx-1))) { #make a bitmap with 4 times more pixels, using linear interpolation
IP[4*l-3,4*m-3] = P[l,m]
IP[4*l-2,4*m-3] = P[l,m]+0.25*(P[l+1,m]-P[l,m])
IP[4*l-1,4*m-3] = P[l,m]+0.5*(P[l+1,m]-P[l,m])
IP[4*l,4*m-3] = P[l,m]+0.75*(P[l+1,m]-P[l,m])
}
}

for(l in c(1:(4*nx))) {
for(m in c(1:(nx-1))) {
IP[l,4*m-2] = IP[l,4*m-3]+0.25*(IP[l,4*m+1]-IP[l,4*m-3])
IP[l,4*m-1] = IP[l,4*m-3]+0.5*(IP[l,4*m+1]-IP[l,4*m-3])
IP[l,4*m] = IP[l,4*m-3]+0.75*(IP[l,4*m+1]-IP[l,4*m-3])
}
}

jpeg(file = paste("plot_abs_",k,".jpg",sep="")) #save the image
image(IP, zlim = c(0,0.15))

dev.off()

}

The code produces a sequence of image files, where the probability density is plotted with colors, as an output. Some representative images from this sequence (converted to grayscale) are shown below:

A video of the time evolution is shown below:

The threshold for maximum white color has been chosen to be quite low, to make the small amount of probability density that crosses the barrier visible.

The discrete grid of points has been made quite coarse here to keep the computation time reasonable, and the resolution has been increased artificially by using linear interpolation between the discrete points.

So, now we’ve seen how to solve the motion of 2D wavepackets moving around obstacles. In the next numerical methods post, I’ll go through the numerical solution of a nonlinear PDE.


Numerical solution of PDE:s, Part 2: Implicit method

In the previous blog post, I showed how to solve the diffusion equation

\frac{\partial f(x,t)}{\partial t} = D\frac{\partial^2 f(x,t)}{\partial x^2}

using the explicit method, where the equation is converted to a discrete one

f_{i;j+1} = f_{i;j} + \frac{D\Delta t}{\Delta x^2}\left(f_{i+1;j} - 2f_{i;j} + f_{i-1;j}\right)

which can be simplified by using the notation

\kappa = \frac{D\Delta t}{\Delta x^2}

The problem with this approach is that if we want to have a high resolution, i.e. \Delta x is very small, the timestep \Delta t has to be made even much smaller to keep the procedure numerically stable. The stability of this kind of calculations can be investigated with Von Neumann stability analysis, and there you will find out that the instability acts by amplifying the short-wavelength Fourier components of the numerical discretization error.

The specific stability condition for the explicit solution of diffusion equation is

\frac{D\Delta t}{\Delta x^2} \leq \frac{1}{2}
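This condition can be derived by substituting a single Fourier mode of the error, f_{i;j} = \xi^{j} e^{\mathrm{i}k i\Delta x} (here \mathrm{i} is the imaginary unit and i the lattice index), into the explicit scheme, which gives the amplification factor

\xi = 1 - \frac{4D\Delta t}{\Delta x^2}\sin^2 \left(\frac{k\Delta x}{2}\right),

and requiring |\xi | \leq 1 for all wavenumbers k reproduces the condition above.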

In the better method to solve the diffusion equation, the implicit method, we will not solve the numbers f_{i;j+1} in a straightforward way from the numbers f_{i;j}. Instead, we use backward time stepping to write the f_{i;j} as a function of the numbers f_{i-1;j+1}, f_{i;j+1} and f_{i+1;j+1}, as in the equation below:

f_{i;j} = -\kappa f_{i-1;j+1} + (1 + 2\kappa ) f_{i;j+1} - \kappa f_{i+1;j+1}

which represents a linear system of equations. More specifically, it is a tridiagonal system, and in matrix-vector form it reads

\begin{pmatrix} 1+2\kappa & -\kappa & 0 & 0 & 0 & 0 & 0 \\ -\kappa & 1+2\kappa & -\kappa & 0 & 0 & 0 & 0 \\ 0 & -\kappa & 1+2\kappa & -\kappa & 0 & 0 & 0 \\ 0 & 0 & -\kappa & 1+2\kappa & -\kappa & 0 & 0 \\ 0 & 0 & 0 & -\kappa & 1+2\kappa & -\kappa & 0 \\ 0 & 0 & 0 & 0 & -\kappa & 1+2\kappa & -\kappa \\ 0 & 0 & 0 & 0 & 0 & -\kappa & 1+2\kappa \end{pmatrix} \begin{pmatrix} f_{1;j+1} \\ f_{2;j+1} \\ f_{3;j+1} \\ f_{4;j+1} \\ f_{5;j+1} \\ f_{6;j+1} \\ f_{7;j+1} \end{pmatrix} = \begin{pmatrix} f_{1;j} \\ f_{2;j} \\ f_{3;j} \\ f_{4;j} \\ f_{5;j} \\ f_{6;j} \\ f_{7;j} \end{pmatrix}

for the case of a relatively small mesh n_x = 7. So, now we have to solve this tridiagonal system on each timestep, and this takes more computation time than the explicit method but has the advantage of making the calculation stable even when \Delta t is not necessarily made much smaller than \Delta x.

A code for solving this kind of a linear system can be found in the book "Numerical Recipes in C" or in the corresponding book written for FORTRAN.
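For illustration, a minimal R sketch of such a tridiagonal (Thomas algorithm) solver is shown below; this is just a sketch, not the exact routine from the book. The vectors a, b and c are the sub-, main and super-diagonals (all of length n, with a[1] and c[n] unused) and d is the right-hand side:

thomas <- function(a, b, c, d) {
n <- length(d)
cp <- numeric(n) #modified super-diagonal
dp <- numeric(n) #modified right-hand side
cp[1] <- c[1]/b[1]
dp[1] <- d[1]/b[1]
for (i in c(2:n)) { #forward elimination
denom <- b[i] - a[i]*cp[i-1]
if (i < n) cp[i] <- c[i]/denom
dp[i] <- (d[i] - a[i]*dp[i-1])/denom
}
x <- numeric(n)
x[n] <- dp[n]
for (i in c((n-1):1)) { #back substitution
x[i] <- dp[i] - cp[i]*x[i+1]
}
x
}
#example: one implicit diffusion timestep with the matrix shown above, where
#f is the vector of function values at the current timestep:
#fnew <- thomas(c(0, rep(-kappa, nx-1)), rep(1+2*kappa, nx), c(rep(-kappa, nx-1), 0), f)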

Now, let’s use the implicit method to solve a diffusion problem where the x-domain is

x \in [0,6],

and the step sizes are

\Delta x = 0.01 and \Delta t = 0.05 .

The initial concentration profile is chosen to be

C(x,0) = e^{-3x^2}

(don't be confused by the fact that the function is now called C instead of f) and we use a boundary condition that forces the value of C(x,t) at the left endpoint to be C(0,t)=1 and that at the right endpoint to be C(6,t) = 0. This means that the left boundary is an infinite source of concentration (or heat in the case of a conduction problem) and the right boundary is an infinite "sink". With physical intuition, it is easy to deduce that this kind of system evolves toward a steady state where the value of C(x) decreases linearly from 1 to 0 on the interval [0,6].
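Setting the time derivative to zero shows why: the steady state satisfies C''(x) = 0 with C(0) = 1 and C(6) = 0, so the profile approached at long times is the straight line

C_{\infty}(x) = 1 - \frac{x}{6}.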

A C++ code for doing this calculation is shown below.

// This program calculates the time development of a diffusion or heat conduction
// system with implicit finite differencing. The system described is the development of a concentration or temperature field
// between boundary points where it is constrained to stay at different constant values.
// Teemu Isojärvi, Feb 2017

#include <iostream>
#include <cmath>

using namespace std;

#define LX 6. // Length of spatial domain
#define NX 600 // Number of lattice points
#define LT 3. // Length of time interval
#define NT 60 // Number of timesteps

#define D 1. // Diffusion coefficient

int main(void)
{

double dx = (double)LX/(double)NX; // Lattice spacing
double dt = (double)LT/(double)NT; // Length of timestep

double c[NX]; // Concentration values at lattice points

double x; // Auxiliary position variable

double kappa = D*dt/(dx*dx); // Auxiliary variable for representing the linear system

double g[NX];
double b;
double q;

double u[NX]; // Vector for storing the solution on each timestep

for(int m = 0; m<NX; m++)
{
x = (double)m*dx;
c[m]=exp(-3*x*x); // Gaussian initial concentration
}

for(int n = 0; n<NT; n++)
{
c[0]=1;
c[NX-1]=0;

u[0]=(c[0]+kappa)/(2*kappa+1);
q = 2*kappa + 1;

for(int m = 1; m < NX; m++) { // First loop for solving the tridiagonal system (forward elimination)
g[m] = -kappa/q;
q = (2*kappa + 1) + kappa*g[m];
u[m] = (c[m] + kappa*u[m-1])/q;
}

for(int m=(NX-2); m>=0; m--) u[m] -= g[m+1]*u[m+1]; // Second loop (back substitution)

for(int m=0; m<NX; m++) c[m] = u[m]; // Updating the concentration or temperature field

}

for(int m = 0; m<NX; m++)
{
x = (double)m*dx;
cout << x << " " << c[m] << "\n"; // Output with the results at time t = LT
}

return 0;
}

Running this program three times, with time interval lengths LT = 0.5, LT = 1.0 and LT = 3.0 while keeping the timestep constant, we get data points that can be plotted in the same coordinate system with a graphing program, like below:


Figure 1. Time evolution of a concentration field C(x,t) in a system where the concentration is forced to stay at constant values at the endpoints of the domain.

The simulation seems to proceed as expected, approaching a linearly decreasing function C(x).

An equivalent code for FORTRAN is in the next box:

! Calculates the time development of a concentration distribution C(x,t) with implicit
! finite-differencing of a diffusion equation. The boundary condition is that the left boundary is an infinite source
! of solute/heat and the value of C(x) at x=0 stays at constant value 1. The value of C at the right boundary stays zero.
! Therefore, the function C(x) evolves towards a linearly decreasing function.
! Teemu Isojärvi, Feb 2017

PROGRAM MAIN

real :: DX, DT, LX, LT, D, KAPPA,B,Q ! Real variables for discretization and the diffusion constant
integer :: NT,NX ! Number of time and position steps

REAL :: C(600) ! Concentration field as an array of data points, dimension same as value of NX
REAL :: G(600)
REAL :: U(600)

REAL :: X ! Auxiliary variable

INTEGER :: M ! Integer looping variables
INTEGER :: N

LX = 6. ! Length of x-interval
LT = 3.0 ! Length of t-interval
NX = 600 ! Number of lattice points
NT = 300 ! Number of time steps
DX = LX/NX ! Distance between neighboring lattice points
DT = LT/NT ! Length of time step
D = 1. ! Diffusion/heat conduction coefficient
KAPPA = D*DT/(DX*DX)

do M = 1, NX ! Initial values of concentration
X = M*DX
C(M) = EXP(-3*X*X) ! Gaussian initial concentration distribution
end do

do N = 1, NT ! Time stepping loop

C(1)=1
C(NX)=0

U(1)=(C(1) + 2 * KAPPA)/(2 * KAPPA + 1)
Q = 2 * KAPPA + 1

do M = 2, NX ! First loop for solving the tridiagonal system
G(M) = -KAPPA/Q
Q = (2 * KAPPA + 1) + KAPPA * G(M)
U(M) = (C(M) + KAPPA * U(M-1))/Q
end do

do M = (NX-1), 1, -1
U(M) = U(M) - G(M+1) * U(M+1) ! Second loop
end do

do M = 1, NX-1
C(M) = U(M) ! Updating the concentration or temperature field
end do

end do

do M = 1, NX
X = M*DX
print *,X,C(M) ! Print the x and concentration values at data points
end do

END

To test the implicit method with R-Code, let’s solve a problem where the length of the x-domain is 20, the time interval has length 3 and the initial distribution is

C(x,0) = e^{-2(x-10)^2}

to ensure that we can set the boundary conditions C(0,t)=C(L,t)=0 (the Gaussian distribution doesn't have time to spread all the way to the boundaries in a time interval of 3 units).
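As a quick check: this initial Gaussian has a standard deviation of 0.5, and a diffusing Gaussian spreads as \sigma^2(t) = \sigma^2(0) + 2Dt, so

\sigma(3) = \sqrt{0.25 + 2\cdot 1\cdot 3} = 2.5,

while the boundaries lie a distance of 10, i.e. four standard deviations, from the center of the distribution, so essentially no concentration reaches them during the simulation. The code for this calculation is shown below: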

library(graphics) #load the graphics library needed for plotting
library(limSolve)

lx <- 20.0 #length of the computational domain
lt <- 3. #length of the simulation time interval
nx <- 4000 #number of discrete lattice points
nt <- 300 #number of timesteps
dx <- lx/nx #length of one discrete lattice cell
dt <- lt/nt #length of timestep

D <- 1.0 #diffusion constant

Conc <- c(1:nx) #define a vector to put the x- and t-dependent concentration values in

xaxis <- c(1:nx) #the x values corresponding to the discrete lattice points

kappa <- D*dt/(dx*dx) #a parameter needed in the discretization

offdiagonal <- rep(-kappa, times = nx-1)
ondiagonal <- rep(1+2*kappa, times = nx)

for (i in c(1:nx)) {
Conc[i] <- exp(-2*(i*dx-10)*(i*dx-10)) #a Gaussian initial concentration field

xaxis[i] <- i*dx #calculate the x coordinates of lattice points
}

for (j in c(1:nt)) { #main time stepping loop

sol <- Solve.tridiag(offdiagonal, ondiagonal, offdiagonal, Conc)

for (k in c(1:nx)) {
Conc[k] <- sol[k]
}

if(j %% 3 == 1) { #make plots of C(x) on every third timestep
jpeg(file = paste("plot_",j,".jpg",sep=""))
plot(xaxis,Conc,xlab="position (x)", ylab="concentration",ylim=c(0,2))
title(paste("C(x) at t =",j*dt))
lines(xaxis,Conc)
dev.off()
}
}

An animation of the results can be viewed in this link. If you test the code yourself, remember to install the limSolve package first, by writing the command

install.packages("limSolve")

in the R console. If you don’t want to load the video, here are the plots for 3 different values of t:

[Plots of C(x,t) at three different times.]

When solving PDE:s other than the ordinary diffusion equation, the implicit method is often even more necessary than it was in the examples here. For example, when solving the time development of a quantum wave packet from the time-dependent Schroedinger equation, the solution function doesn't stay normalized if one tries to do a simple explicit time stepping. The solution of Schroedinger equations with the implicit method will be shown in the next post. The TDSE is a complex diffusion equation of the form

\frac{\partial \Psi}{\partial t} = \frac{i}{2}\frac{\partial^2 \Psi}{\partial x^2} - iV(x)\Psi

where the equation has been simplified by setting the Planck constant to value 2\pi and the particle mass to m=1.

The 1D diffusion and Schroedinger equations are simple in the sense that the linear system to be solved on each timestep is tridiagonal, which makes the solution considerably faster than is the case with a general linear system. Systems that are not tridiagonal will appear when one wants to solve equations with more than one space coordinate (i.e. x and y instead of just x), or when the equation contains higher than 2nd order derivatives with respect to the position coordinate (as demonstrated by the thin-film equation, which is fourth-order with respect to x).
