An example of fractal generating code

Fractals are structures that contain features at all scales: they keep revealing new detail as you zoom in. Images of computer-generated fractals, like the featured image, are something almost everyone has seen. They can be made by different kinds of iterations of complex-valued functions.

Having just quickly written (in about half an hour) an R code that generates one kind of fractal, I’ll share it here:

n <- 500 # Number of pixels in x and y directions
L <- 1 # Length of the sides of square in complex plane
dx <- L/n # Spatial step size
flg <- 0 # A flag that tells if divergence has been observed
tres <- 0 # Variable for number of iterations before divergence

iter <- 50 # Number of iterations per pixel
bmap = matrix(nrow = n, ncol = n) # Bitmap for the fractal image

for (j in c(1:n)) # Loops over the real and imaginary axis
{
for (k in c(1:n))
{
z = dx*j-1.5 + (1i)*(k*dx-0.5) # Point z is chosen from the square
flg <- 0 # Initially, the flag is zero
tres <- 0 # Zero the threshold variable
x <- 1 # Initial value used in iteration

for (l in c(1:iter)) # Iterate
{
x = x^2 + z  # n -> n+1

if ((flg == 0) && (Mod(x) > 8)) # If divergence seems to have taken place, record the number of iterations done and set flag to 1
{
flg <- 1
tres <- l
}

}
bmap[j,k] <- tres # Decide the color of the pixel based on the threshold number of iterations
}
}

jpeg(file = "fractal.jpg") # Save the bitmap
image(bmap, zlim = c(1,30))
dev.off()

The image that the code creates looks like this.

fractal

Now, if you want, try modifying the values of the parameters in the code and see how they affect the output image file. Examples of modifications include changing the exponent 2 in the iterated function f(x) = x^2 + z to some other number (1.9, 2.1, 2.4 or so) or multiplying the x^2 by something other than 1.
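To make such experiments easier, the iteration step can be wrapped in a small helper function. This is a sketch (the function `escape_iters` is my own naming, not part of the code above), using the same initial value and the same divergence test as the code above:

```r
# A hypothetical helper (my naming, not from the post): count the number of
# iterations of x -> x^p + z before divergence, with the same initial value
# x = 1 and the same bailout test Mod(x) > 8 as the code above.
escape_iters <- function(z, p = 2, iter = 50, bailout = 8) {
  x <- 1 + 0i
  for (l in c(1:iter)) {
    x <- x^p + z # complex powers with non-integer p use the principal branch in R
    if (Mod(x) > bailout) return(l)
  }
  return(0) # 0 means no divergence was observed, like tres = 0 above
}
```

Replacing the inner iteration loop with a call like `escape_iters(z, p = 2.1)` then produces the bitmap for a modified exponent.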


Common logical pitfalls when learning to write mathematical proofs

Mathematics is needed in many scientific disciplines, though it is used in somewhat different ways in each. A theoretical physicist usually has to study not only applied math but also some pure mathematics, which involves logical proofs of statements. A mechanical engineering student can usually do without formal logic and the like, but has to learn basic differential equations, volume integrals and similar methods of applied mathematics. Chemistry and biology students also benefit from mathematical knowledge (at the very least, one has to be able to solve a quadratic equation before one can calculate reactant and product concentrations from the equilibrium constant of a chemical reaction).

Rigorous logical thinking is not something that a human being has a natural instinct for (just as the concept of inertia and Newton’s 1st and 2nd laws are not obvious from everyday earth-bound experience). Let me list some of the most common logical errors that a beginner makes when trying to learn to prove mathematical statements.

1. Inappropriate reversal of an implication

It is easy to see that if x = 3, then also x^2 = 9. But the converse is not true: if x^2 = 9, we don’t necessarily have x = 3, as x could also be -3. Even the statement x^3 = 27 does not imply that x = 3, if we allow x to be a complex number (can you find the points of the complex plane that solve this equation?).
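The three complex solutions of x^3 = 27 can be found numerically with base R’s `polyroot`, which takes the polynomial coefficients in increasing order of power:

```r
# Solving x^3 = 27, i.e. x^3 - 27 = 0. polyroot() takes the coefficients
# of -27 + 0*x + 0*x^2 + 1*x^3 in increasing order of power.
roots <- polyroot(c(-27, 0, 0, 1))
# The solutions are 3 and 3*exp(+-2i*pi/3), i.e. -1.5 +- (3*sqrt(3)/2)*i,
# all lying on a circle of radius 3 around the origin.
print(roots)
```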

A more obvious way to demonstrate this is to say that the statement x = 3 implies x \in \mathbf{N} but not every natural number is 3.

To avoid making this kind of logical error, make sure that you never start proving a statement by assuming that the statement itself is true. In fact, any logical statement can be “proved” if you assume things that contradict each other – it is logically correct to say that “if the Moon is made of cheese and the Moon is not made of cheese, then cows are able to fly.”

2. Playing with something that does not exist

Sometimes it seems possible to prove that an object X has properties A, B and C, but after some more thinking you find out that X doesn’t exist at all. You may have seen some Youtube videos where a math teacher proves something like 1 + 2 + 3 + 4 + \dots = -\frac{1}{12}, which is an absurd statement, as the sum on the left side of the equality does not converge; it can be made to look true only by taking the liberty of splitting the integer terms into many pieces (with different signs) and rearranging them at will.

In fact, the terms of an infinite sum can’t always be rearranged arbitrarily even when the sum converges: rearrangement is safe only for absolutely convergent series, and a conditionally convergent series can be rearranged to sum to any value at all.
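A quick numerical illustration of this (a sketch of my own, not from the original text): the alternating harmonic series 1 - 1/2 + 1/3 - ... converges to ln 2, but summing the same terms as one positive term followed by two negative terms gives ln(2)/2 instead:

```r
# Partial sums of the same terms of the alternating harmonic series,
# taken in two different orders.
n <- 100000
usual <- sum((-1)^(0:(n-1)) / (1:n))   # 1 - 1/2 + 1/3 - 1/4 + ... -> log(2)
pos <- 1 / seq(1, 2*n - 1, by = 2)     # n positive terms: 1, 1/3, 1/5, ...
neg <- 1 / seq(2, 4*n, by = 2)         # 2n negative terms: 1/2, 1/4, 1/6, ...
rearranged <- sum(pos) - sum(neg)      # one positive, two negatives -> log(2)/2
```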

3. Assuming without proof that something is unique

This is what we already did in case 1, by assuming that x^2 = 9 implies x = 3. Some more obviously crazy results can be produced by making this mistake. One example is to use Euler’s formula

e^{iz} = \cos z + i\sin z,

to show that e^{0} = e^{2\pi i}, and then take the natural logarithm of both sides of this equation to “show” that 0 = 2\pi i. The problem with this logic is that the complex logarithm is not single-valued – there are infinitely many complex numbers that can be called the logarithm of a given complex number.
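This is easy to see in R, whose `log()` always returns the principal branch of the complex logarithm:

```r
# exp(0) and exp(2*pi*i) are both equal to 1, but log() only ever returns
# the principal branch, so "taking logs of both sides" gives 0 = 0,
# not 0 = 2*pi*i.
a <- log(exp(0 + 0i))
b <- log(exp(2i * pi))  # numerically ~0, not 2*pi*1i
```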

4. Assuming that an infinite set contains everything

I remember seeing in some book the historical claim that the ancient Greeks tried to prove that there are only finitely many types of atoms in the following way: “If there were an infinite number of different kinds of atoms, there would exist atoms of all possible sizes, and then some of them would have to be large enough to be visible, which is not true, as we have never seen an individual atom.” This, of course, is poppycock: we can equally well say that there is an infinite number of integers divisible by 2, but that set still does not contain the number 5.

There are many other ways to make logical errors through false assumptions about infinity. For one thing, the above-mentioned set of even integers is “just as infinite” as the set of all integers, even though it may seem to contain half as many numbers. The set of real numbers, on the other hand, is “more infinite” than the set of natural numbers, which you will learn to prove if you study the concept of cardinality and injective/surjective/bijective mappings.

Numerical solution of PDE:s, Part 7: 2D Schrödinger equation

I haven’t been posting for a while, but here’s something new. Earlier I showed how to solve the 1D Schrödinger equation numerically in different situations. Now I’m going to show how to calculate the evolution of a 2D wavepacket in a potential energy field constructed to mimic the classical “two-slit experiment”, in which low-mass particles like electrons exhibit interference similar to that of classical waves (sound, light, water surface waves, and so on).

A 2D Schrödinger equation for a single particle in a time-independent background potential V(x,y) is

2d-TDSE.png

where the particle mass has been set to 1 and Planck’s constant to 2\pi.

To solve this numerically, we need the Crank-Nicolson method, as was the case when solving the 1D problem. More specifically, the linear system to be solved is

evol1

with

evol2

where the wavefunction now has two position indices and one time index, and the potential energy has only two position indices.

To form a model of the two-slit experiment, we choose a domain 0 < x < 6; 0 < y < 6 and make a potential energy function defined by

IF (x < 2.2 OR x > 3.8 OR (2.7 < x < 3.3)) AND (3.7 < y < 4.0) THEN V(x,y) = 30

IF (x < 0.5 OR x > 5.5 OR y < 0.5 OR y > 5.5) THEN V(x,y) = 30

OTHERWISE V(x,y) = 0

which corresponds to hard walls surrounding the domain and a barrier, with two slits in it, around the line y = 3.85.

For an initial condition, we choose a Gaussian wavepacket that has a nonzero expectation value of the momentum in y-direction:

initstate.png

An R-Code that solves this problem for a time interval 0 < t < 1 is

library(graphics) #load the graphics library needed for plotting

lx <- 6.0 #length of the sides of the square computational domain
lt <- 1.0 #length of the simulation time interval
nx <- 40 #number of discrete lattice points in the x and y directions
nt <- 100 #number of timesteps
dx <- lx/nx #length of one discrete lattice cell
dt <- lt/nt #length of timestep

V = matrix(nrow=nx,ncol=nx) #potential energies at discrete points

for(j in c(1:nx)) {
for(k in c(1:nx)) {
V[j,k] = 0+0i
if((j*dx<2.2) || (j*dx>3.8) || ((j*dx>2.7) && (j*dx<3.3))) {
if((k*dx>3.7) && (k*dx<4.0)) {
V[j,k] = 30+0i #No significant density is going to go through these barriers
}
}
if((j*dx>5.5) || (j*dx<0.5) || (k*dx>5.5) || (k*dx<0.5)) {
V[j,k] = 30+0i #hard walls around the edges of the domain
}
}
}

kappa1 = (1i)*dt/(2*dx*dx) #an element needed for the matrices
kappa2 = matrix(nrow=nx,ncol=nx) #another element

for(j in c(1:nx)) {
for(k in c(1:nx)) {
kappa2[j,k] = as.complex(kappa1*2*dx*dx*V[j,k])
}
}

psi = as.complex(c(1:(nx*nx))) #wave function values at discrete points, stored as a vector

for(j in c(1:nx)) {
for(k in c(1:nx)) {
psi[(j-1)*nx+k] = as.complex(exp(-2*((j*dx-3)^2+(k*dx-1.5)^2))*exp((1i)*5*k*dx)) #Gaussian wavepacket with nonzero y-momentum
if((j*dx > 5.5)||(j*dx < 0.5)||(k*dx > 5.5)||(k*dx < 0.5)) {
psi[(j-1)*nx+k] = as.complex(0)
}
}
}

xaxis <- c(1:nx)*dx #the x values corresponding to the discrete lattice points

A = matrix(0+0i,nrow=nx*nx,ncol=nx*nx) #matrix for forward time evolution
B = matrix(0+0i,nrow=nx*nx,ncol=nx*nx) #matrix for backward time evolution

for(j in c(1:nx)) {
for(k in c(1:nx)) {
m = (j-1)*nx+k
A[m,m] = 1 + 4*kappa1 + kappa2[j,k] #diagonal elements
B[m,m] = 1 - 4*kappa1 - kappa2[j,k]
if(k > 1) { #nearest neighbors in the y-direction
A[m,m-1] = -kappa1
B[m,m-1] = kappa1
}
if(k < nx) {
A[m,m+1] = -kappa1
B[m,m+1] = kappa1
}
if(j > 1) { #nearest neighbors in the x-direction
A[m,m-nx] = -kappa1
B[m,m-nx] = kappa1
}
if(j < nx) {
A[m,m+nx] = -kappa1
B[m,m+nx] = kappa1
}
}
}

P = matrix(nrow=nx,ncol=nx) #bitmap of the probability density
IP = matrix(nrow=4*nx,ncol=4*nx) #interpolated bitmap with 4 times more pixels

for (k in c(1:nt)) { #main time stepping loop

sol <- solve(A,B%*%psi) #solve the system of equations

for (l in c(1:(nx*nx))) {
psi[l] <- sol[l]
}

for(l in c(1:nx)) {
for(m in c(1:nx)) {
P[l,m] = abs(psi[(l-1)*nx+m])^2 #probability density
if(Re(V[l,m]) > 5) P[l,m] = 2 #draw the barriers with a color beyond the density scale
}
}

for(l in c(1:(nx-1))) {
for(m in c(1:(nx-1))) { #make a bitmap with 4 times more pixels, using linear interpolation
IP[4*l-3,4*m-3] = P[l,m]
IP[4*l-2,4*m-3] = P[l,m]+0.25*(P[l+1,m]-P[l,m])
IP[4*l-1,4*m-3] = P[l,m]+0.5*(P[l+1,m]-P[l,m])
IP[4*l,4*m-3] = P[l,m]+0.75*(P[l+1,m]-P[l,m])
}
}

for(l in c(1:(4*nx))) {
for(m in c(1:(nx-1))) {
IP[l,4*m-2] = IP[l,4*m-3]+0.25*(IP[l,4*m+1]-IP[l,4*m-3])
IP[l,4*m-1] = IP[l,4*m-3]+0.5*(IP[l,4*m+1]-IP[l,4*m-3])
IP[l,4*m] = IP[l,4*m-3]+0.75*(IP[l,4*m+1]-IP[l,4*m-3])
}
}

jpeg(file = paste("plot_abs_",k,".jpg",sep="")) #save the image
image(IP, zlim = c(0,0.15))

dev.off()

}

The code produces, as its output, a sequence of image files in which the probability density is plotted with colors. Some representative images from this sequence (converted to grayscale) are shown below:

A video of the time evolution is shown below:

The threshold for maximum white color has been chosen to be quite low, to make the small amount of probability density that crosses the barrier visible.

The discrete grid of points has been made quite coarse here to keep the computation time reasonable, and the resolution has been increased artificially by using linear interpolation between the discrete points.

So, now we’ve seen how to solve the motion of 2D wavepackets moving around obstacles. In the next numerical methods post, I’ll go through the numerical solution of a nonlinear PDE.


Numerical solution of PDE:s, Part 6: Adiabatic approximation for quantum dynamics

Having solved the time-dependent Schrödinger equation both in real and imaginary time, we can move on to investigate systems where the potential energy function V has an explicit time dependence:

timedep.gif

In systems like these, the expectation value of the Hamiltonian operator doesn’t have to stay constant.

Time-dependent perturbation theory is one method for finding approximate solutions to such problems, but here I will treat a simpler approach, called the adiabatic approximation.

Suppose that the potential energy function V(x,t) is known. Now, let’s say that we also know the solutions of the time-independent Schrödinger equation

time-indep

for any value of t. I denote the solutions as \psi_n (x;t), where it is understood that x is a variable and t is a parameter. Now, if the function V(x,t) changes very slowly as a function of time, i.e. its partial derivative with respect to t is small at all points of the domain, we can use the adiabatic approximation, which says that if the initial state \Psi (x,0) is the ground state for the potential V(x,0), then the state at time t is the ground state for the potential V(x,t).

adiabatic

So, we can change the ground state of one potential V_1 (x) into the ground state of another potential V_2 (x) by making a continuous change from V_1 (x) to V_2 (x) slowly enough.

Let’s test this by choosing a function V as

moving.gif

i.e. a Hookean potential that moves to the positive x-direction with constant speed. If we set the wavefunction at t=0 to

init.gif

which is the ground state corresponding to V(x,0), the time development of the wavepacket should be like

adiabatic-evolution.gif

which means that it moves with the same constant speed as the bottom of the potential V. This can be calculated with the R-Code below:

library(graphics) #load the graphics library needed for plotting

lx <- 6.0 #length of the computational domain
lt <- 15.0 #length of the simulation time interval
nx <- 100 #number of discrete lattice points
nt <- 300 #number of timesteps
dx <- lx/nx #length of one discrete lattice cell
dt <- lt/nt #length of timestep

V = c(1:nx) #potential energies at discrete points

for(j in c(1:nx)) {
V[j] = as.complex(2*(j*dx-3)*(j*dx-3)) #harmonic potential
}

kappa1 = (1i)*dt/(2*dx*dx) #an element needed for the matrices
kappa2 <- c(1:nx) #another element

for(j in c(1:nx)) {
kappa2[j] <- as.complex(kappa1*2*dx*dx*V[j])
}

psi = as.complex(c(1:nx)) #array for the wave function values

for(j in c(1:nx)) {
psi[j] = as.complex(exp(-(j*dx-3)*(j*dx-3))) #Gaussian initial wavefunction
}

xaxis <- c(1:nx)*dx #the x values corresponding to the discrete lattice points

A = matrix(nrow=nx,ncol=nx) #matrix for forward time evolution
B = matrix(nrow=nx,ncol=nx) #matrix for backward time evolution

for(j in c(1:nx)) {
for(k in c(1:nx)) {
A[j,k]=0
B[j,k]=0
if(j==k) {
A[j,k] = 1 + 2*kappa1 + kappa2[j]
B[j,k] = 1 - 2*kappa1 - kappa2[j]
}
if((j==k+1) || (j==k-1)) {
A[j,k] = -kappa1 #off-diagonal elements
B[j,k] = kappa1
}
}
}

for (k in c(1:nt)) { #main time stepping loop

for(j in c(1:nx)) {
V[j] = as.complex(2*(j*dx-3-k*dt*0.05)*(j*dx-3-k*dt*0.05)) #time dependent potential
}

for(j in c(1:nx)) {
kappa2[j] <- as.complex(kappa1*2*dx*dx*V[j])
}

for(l in c(1:nx)) {
for(m in c(1:nx)) {
A[l,m]=0
B[l,m]=0
if(l==m) {
A[l,m] = 1 + 2*kappa1 + kappa2[m]
B[l,m] = 1 - 2*kappa1 - kappa2[m]
}
if((l==m+1) || (l==m-1)) {
A[l,m] = -kappa1
B[l,m] = kappa1
}
}
}
sol <- solve(A,B%*%psi) #solve the system of equations

for (l in c(1:nx)) {
psi[l] <- sol[l]
}

if(k %% 3 == 1) { #make plots of psi(x) on every third timestep
jpeg(file = paste("plot_",k,".jpg",sep=""))
plot(xaxis,abs(psi)^2,xlab="position (x)", ylab="Abs(Psi)^2",ylim=c(0,2))
title(paste("|psi(x,t)|^2 at t =",k*dt))
lines(xaxis, abs(psi)^2)
lines(xaxis, V)
dev.off()
}
}

and in the following sequences of images you see that the approximation is quite good for v = 0.05

As an animation, this process looks like shown below:

Doing the same calculation again, but this time with v = 3, gives the following image sequence:

where it is obvious that the approximation doesn’t work anymore.


Numerical solution of PDE:s, Part 5: Schrödinger equation in imaginary time

In the last post, I mentioned that the solution of the time dependent 1D Schrödinger equation

codecogseqn21

can be written by expanding the initial state \Psi (x,0) in the basis of the solutions of time-independent Schrödinger equation

time-independent

and multiplying each term in the expansion by a complex-valued time dependent phase factor:

time-evolution

Now, assuming that the energies E_n are all positive and are ordered so that E_0 is the smallest of them, we get an interesting result by replacing the time variable t with an imaginary time variable s = it. The function \Psi (x,s) is then

imagtime.gif

which is a sum of exponentially decaying terms, of which the first one, c_0 \exp(-E_0 s)\psi_0 (x) decays slowest. So, in the limit of large s, the wave function is

imagtime2.gif

i.e. after normalizing it, it’s approximately the same as the ground state \psi_0. Also, if s is a large number, we have

imagtime3.gif

or

imagtime4.gif

So, here we have a way to use the TDSE to numerically calculate the ground-state wavefunction and the corresponding energy eigenvalue for any potential energy function V(x). This is very useful, as the ground state usually can’t be solved analytically, except for some very simple potential energy functions such as the harmonic oscillator potential.
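The energy estimate can be sanity-checked without solving any PDE, using a potential whose spectrum is known exactly. In this sketch (my own illustration, with arbitrarily chosen initial coefficients), the eigenmode amplitudes decay as c_n exp(-E_n s) with the harmonic oscillator eigenvalues E_n = n + 1/2, and the decay rate of the norm at large s approaches the ground state energy E_0 = 0.5:

```r
# Imaginary-time decay of the expansion coefficients: c_n(s) = c_n(0)*exp(-E_n*s).
# The estimate E0 ~ -log(N(s+ds)/N(s))/ds, where N is the norm, approaches
# the true ground state energy 0.5 when s is large.
En <- (0:10) + 0.5                   # harmonic oscillator eigenvalues (hbar = m = omega = 1)
c0 <- rep(1, 11)                     # arbitrary initial expansion coefficients
norm_at <- function(s) sqrt(sum((c0 * exp(-En * s))^2))
ds <- 0.01
s <- 10                              # a large imaginary time
E0_est <- -log(norm_at(s + ds) / norm_at(s)) / ds
```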

To test this imaginary time method, I will approximately calculate the ground state of an anharmonic oscillator, described by a Schrödinger equation

anharmonic.gif

As an initial state, I will choose a Gaussian function

initstate.gif

and the computational domain is defined by 0 < x < 6, 0 < t < 3, \Delta x = 0.05, \Delta t = 0.01.

An R-Code that performs this calculation is shown below:

library(graphics) #load the graphics library needed for plotting

lx <- 6.0 #length of the computational domain
lt <- 3.0 #length of the simulation time interval
nx <- 120 #number of discrete lattice points
nt <- 300 #number of timesteps
dx <- lx/nx #length of one discrete lattice cell
dt <- (-1i)*lt/nt #length of imaginary timestep

V = c(1:nx) #potential energies at discrete points

for(j in c(1:nx)) {
V[j] = as.complex(0.25*(j*dx-3)*(j*dx-3)+0.15*(j*dx-3)*(j*dx-3)*(j*dx-3)*(j*dx-3)) #anharmonic potential
}

kappa1 = (1i)*dt/(2*dx*dx) #an element needed for the matrices
kappa2 <- c(1:nx) #another element

for(j in c(1:nx)) {
kappa2[j] <- as.complex(kappa1*2*dx*dx*V[j])
}

psi = as.complex(c(1:nx)) #array for the wave function values

for(j in c(1:nx)) {
psi[j] = as.complex(exp(-2*(j*dx-3)*(j*dx-3))) #Gaussian initial wavefunction
}

xaxis <- c(1:nx)*dx #the x values corresponding to the discrete lattice points

A = matrix(nrow=nx,ncol=nx) #matrix for forward time evolution
B = matrix(nrow=nx,ncol=nx) #matrix for backward time evolution

for(j in c(1:nx)) {
for(k in c(1:nx)) {
A[j,k]=0
B[j,k]=0
if(j==k) {
A[j,k] = 1 + 2*kappa1 + kappa2[j]
B[j,k] = 1 - 2*kappa1 - kappa2[j]
}
if((j==k+1) || (j==k-1)) {
A[j,k] = -kappa1
B[j,k] = kappa1
}
}
}

for (k in c(1:nt)) { #main time stepping loop

sol <- solve(A,B%*%psi) #solve the system of equations

for (l in c(1:nx)) {
psi[l] <- sol[l]
}

nrm = 0 #squared norm of the current wavefunction
for (l in c(1:nx)) nrm <- nrm + dx*abs(psi[l])*abs(psi[l])

for (l in c(1:nx)) {
psi[l] <- psi[l]/sqrt(nrm) #renormalize, as the imaginary time evolution does not conserve the norm
}

if(k %% 3 == 1) { #make plots of psi(x) on every third timestep
jpeg(file = paste("plot_",k,".jpg",sep=""))
plot(xaxis, abs(psi)^2,xlab="position (x)", ylab="Abs(Psi)^2")
title(paste("|psi(x,t)|^2 at t =",k*dt))
lines(xaxis, abs(psi)^2)
lines(xaxis, V*max(abs(psi)^2))
dev.off()
}
}

And a set of plots of |\Psi (x,t)|^2 for several values of t look like this:

Here the shape of the anharmonic potential has been plotted to the same images.

The problem with this method for finding a ground state is that if the system has more degrees of freedom than a single coordinate x, the matrices in the linear systems needed in the time stepping quickly become very large and the calculation becomes prohibitively slow. To make things worse, the Crank-Nicolson method of time stepping is hard to parallelize across multiple processors. However, there is another way to compute the evolution of a wave function in imaginary time, called Diffusion Monte Carlo, which is easily parallelizable. DMC is one practical way to calculate ground state wave functions for multi-particle systems such as a helium or a lithium atom.


Numerical solution of PDE:s, Part 4: Schrödinger equation

In the earlier posts, I showed how to numerically solve a 1D or 2D diffusion or heat conduction problem using either explicit or implicit finite differencing. In the 1D example, the relevant equation for diffusion was

diffusion.gif

and an important property of the solution was the conservation of mass,

masscons.gif

i.e. the integral of the concentration field over whole domain stays constant.

Next, I will show how to integrate the 1D time-dependent Schrödinger equation, which, in a nondimensional form where we set \hbar = 1 and m = 1, reads:

codecogseqn21

Here i is the imaginary unit and V(x) is the potential energy as a function of x. The solutions of this equation must obey a conservation law similar to the mass conservation in the diffusion equation, the conservation of norm:

unitarity.gif

where the quantity |\Psi (x,t)| is the modulus of the complex-valued function \Psi (x,t) . This property of the solution is also called unitarity of the time evolution.

Apart from the TDSE, another way to represent the time development of this system is to find the normalized solutions \psi_0 (x), \psi_1 (x), \psi_2 (x) \dots of the time-independent Schrödinger equation

time-independent

and write the initial state \Psi (x,0) as a linear combination of those basis functions:

expansion.gif

This is possible because the solutions of the time-independent equation form a basis for the set of acceptable wave functions \psi (x). Then, every term in that eigenfunction expansion is multiplied by a time dependent phase factor \exp(-iE_n t):

time-evolution.gif

The numbers E_n are the eigenvalues corresponding to the solutions \psi_n (x); the function \psi_0 (x) is called the ground state corresponding to the potential V(x), the function \psi_1 (x) the first excited state, and so on.

The Schrödinger equation can’t be discretized with either the explicit or the implicit method that we used for the diffusion equation: the resulting scheme is numerically unstable, or doesn’t conserve the normalization of the wave function, or both. The correct way to discretize the Schrödinger equation is to replace the wave function with a discrete equivalent

discrete-wf.gif

and the potential energy function V(x) with V_{i;j} (or V_i in the case of time-independent potential), and write an equation that basically tells that propagating the state \Psi_{i;j} forward by half a time step gives the same result as propagating the state \Psi_{i;j+1} backwards by half a time step:

discrete-se.gif

Here we have

kappa1.gif

and

kappa2

This kind of discretization is called the Crank-Nicolson method. As boundary conditions, we usually set that at the boundaries of the computational domain the wavefunction stays at value zero: \Psi (0,t) = \Psi (L,t) = 0 for any value of t. In the diffusion problem, this kind of a BC corresponded to infinite sinks at the boundaries, that annihilated anything that diffused through them. In the Schrödinger equation problem, which is a complex diffusion equation, the equivalent condition makes the boundaries impenetrable walls that deflect elastically anything that collides with them.
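The unitarity of this scheme can be verified directly on a small example. The sketch below (grid size, step sizes and the initial state are arbitrary choices of mine, not those of the code further down) builds the same tridiagonal A and B matrices for zero potential and checks that a single Crank-Nicolson step leaves the discrete norm unchanged up to rounding error:

```r
# One Crank-Nicolson step psi -> solve(A, B %*% psi) with zero potential:
# A and B form a Cayley transform of a Hermitian matrix, so the step is
# unitary and conserves the discrete norm exactly (up to rounding).
nx <- 20
dx <- 0.1
dt <- 0.01
kappa1 <- (1i)*dt/(2*dx*dx)
A <- matrix(0+0i, nrow=nx, ncol=nx) #matrix for forward time evolution
B <- matrix(0+0i, nrow=nx, ncol=nx) #matrix for backward time evolution
for (j in c(1:nx)) {
  A[j,j] <- 1 + 2*kappa1
  B[j,j] <- 1 - 2*kappa1
  if (j > 1) {
    A[j,j-1] <- -kappa1
    B[j,j-1] <- kappa1
  }
  if (j < nx) {
    A[j,j+1] <- -kappa1
    B[j,j+1] <- kappa1
  }
}
psi <- as.complex(exp(-2*(c(1:nx)*dx - 1)^2)) #a Gaussian initial state
norm0 <- sum(abs(psi)^2)*dx                   #norm before the step
psi <- solve(A, B %*% psi)                    #one Crank-Nicolson step
norm1 <- sum(abs(psi)^2)*dx                   #norm after the step
```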

An R-Code that calculates the time evolution of a Gaussian initial wavefunction

initstate

in an area of zero potential:

idzero.gif

for a domain 0 < x < 6, a lattice spacing \Delta x = 0.05, time interval 0 < t < 2 and time step \Delta t = 0.01, is given below:

library(graphics) #load the graphics library needed for plotting

lx <- 6.0 #length of the computational domain
lt <- 2.0 #length of the simulation time interval
nx <- 120 #number of discrete lattice points
nt <- 200 #number of timesteps
dx <- lx/nx #length of one discrete lattice cell
dt <- lt/nt #length of timestep

V = c(1:nx) #potential energies at discrete points

for(j in c(1:nx)) {
V[j] = 0 #zero potential
}

kappa1 = (1i)*dt/(2*dx*dx) #an element needed for the matrices
kappa2 <- c(1:nx) #another element

for(j in c(1:nx)) {
kappa2[j] <- as.complex(kappa1*2*dx*dx*V[j])
}

psi = as.complex(c(1:nx)) #array for the wave function values

for(j in c(1:nx)) {
psi[j] = as.complex(exp(-2*(j*dx-3)*(j*dx-3))) #Gaussian initial wavefunction
}

xaxis <- c(1:nx)*dx #the x values corresponding to the discrete lattice points

A = matrix(nrow=nx,ncol=nx) #matrix for forward time evolution
B = matrix(nrow=nx,ncol=nx) #matrix for backward time evolution

for(j in c(1:nx)) {
for(k in c(1:nx)) {
A[j,k]=0
B[j,k]=0
if(j==k) {
A[j,k] = 1 + 2*kappa1 + kappa2[j]
B[j,k] = 1 - 2*kappa1 - kappa2[j]
}
if((j==k+1) || (j==k-1)) {
A[j,k] = -kappa1
B[j,k] = kappa1
}
}
}

for (k in c(1:nt)) { #main time stepping loop

sol <- solve(A,B%*%psi) #solve the system of equations

for (l in c(1:nx)) {
psi[l] <- sol[l]
}

if(k %% 3 == 1) { #make plots of |psi(x)|^2 on every third timestep
jpeg(file = paste("plot_",k,".jpg",sep=""))
plot(xaxis,abs(psi)^2,xlab="position (x)", ylab="Abs(Psi)^2",ylim=c(0,2))
title(paste("|psi(x,t)|^2 at t =",k*dt))
lines(xaxis,abs(psi)^2)
dev.off()
}
}

The output files are plots of the absolute squares of the wavefunction, and a few of them are shown below.

In the next simulation, I set the domain and discrete step sizes the same as above, but the initial state is:

init-momentum1

Which is a Gaussian wave packet that has a nonzero momentum in the positive x-direction. This is done by changing the line

for(j in c(1:nx)) {
psi[j] = as.complex(exp(-2*(j*dx-3)*(j*dx-3))) #Gaussian initial wavefunction
}

into

for(j in c(1:nx)) {
psi[j] = as.complex(exp(-2*(j*dx-3)*(j*dx-3)+(1i)*j*dx)) #Gaussian initial wavefunction
}

The plots of |\Psi (x,t)|^2 for several values of t are shown below

and there you can see how the wave packet collides with the right boundary of the domain and bounces back.

In the last simulation, I will set the potential function to be

pot2.gif

which is a harmonic oscillator potential, and with the nondimensional mass m =1 and Planck constant \hbar = 1 the ground state \psi _0 (x) of this system is

ground-SHO.gif

If I set the initial state to \Psi (x,0) = \psi_0 (x), or to any other solution of the time-independent SE, the modulus of the wavefunction would not change at all. To get something interesting to happen, I instead set an initial state that is a displaced version of the ground state:

disp-ground-sho

The solution can be obtained with the code shown below:

library(graphics) #load the graphics library needed for plotting

lx <- 6.0 #length of the computational domain
lt <- 3.0 #length of the simulation time interval
nx <- 360 #number of discrete lattice points
nt <- 300 #number of timesteps
dx <- lx/nx #length of one discrete lattice cell
dt <- lt/nt #length of timestep

V = c(1:nx) #potential energies at discrete points

for(j in c(1:nx)) {
V[j] = as.complex(2*(j*dx-3)*(j*dx-3)) #Harmonic oscillator potential with k=4
}

kappa1 = (1i)*dt/(2*dx*dx) #an element needed for the matrices
kappa2 <- c(1:nx) #another element

for(j in c(1:nx)) {
kappa2[j] <- as.complex(kappa1*2*dx*dx*V[j])
}

psi = as.complex(c(1:nx)) #array for the wave function values

for(j in c(1:nx)) {
psi[j] = as.complex(exp(-(j*dx-2)*(j*dx-2))) #Gaussian initial wavefunction, displaced from equilibrium
}

xaxis <- c(1:nx)*dx #the x values corresponding to the discrete lattice points

A = matrix(nrow=nx,ncol=nx) #matrix for forward time evolution
B = matrix(nrow=nx,ncol=nx) #matrix for backward time evolution

for(j in c(1:nx)) {
for(k in c(1:nx)) {
A[j,k]=0
B[j,k]=0
if(j==k) {
A[j,k] = 1 + 2*kappa1 + kappa2[j]
B[j,k] = 1 - 2*kappa1 - kappa2[j]
}
if((j==k+1) || (j==k-1)) {
A[j,k] = -kappa1
B[j,k] = kappa1
}
}
}

for (k in c(1:nt)) { #main time stepping loop

sol <- solve(A,B%*%psi) #solve the system of equations

for (l in c(1:nx)) {
psi[l] <- sol[l]
}

if(k %% 3 == 1) { #make plots of Abs(psi(x))^2 on every third timestep
jpeg(file = paste("plot_",k,".jpg",sep=""))
plot(xaxis,abs(psi)^2, xlab="position (x)", ylab="Abs(Psi)^2",ylim=c(0,2))
title(paste("|psi(x,t)|^2 at t =",k*dt))
lines(xaxis,abs(psi)^2)
lines(xaxis,V)
dev.off()
}
}

and the solution at different values of t looks like this (images and video):

Here the shape of the Hookean potential energy is plotted in the same images. So, here you see how the center of the Gaussian wavefunction oscillates around the point x = 3, just like a classical mechanical harmonic oscillator does when set free from a position that is displaced from equilibrium.

By changing the code that produces the output images, we can also get a sequence of plots of the imaginary part of the wavefunction:

if(k %% 3 == 1) { #make plots of Im(psi(x)) on every third timestep
jpeg(file = paste("plot_",k,".jpg",sep=""))
plot(xaxis,Im(psi), xlab="position (x)", ylab="Im(Psi)",ylim=c(-1.5,1.5))
title(paste("Im(psi(x,t)) at t =",k*dt))
lines(xaxis,Im(psi))
lines(xaxis,V)
dev.off()
}

and the resulting plots look like this:


About NASA’s TRAPPIST-1 findings

Yesterday, I was excitedly following the NASA press conference about the exoplanets found orbiting the star TRAPPIST-1, located 39 light years from Earth. When I heard the day before that they were going to announce something, the first idea that came to my mind was that they must have observed a combination of organic molecules and oxygen in some distant planet’s atmosphere, a chemical nonequilibrium state that is difficult to explain without some kind of biology being present. Of course, you can’t have an atmosphere containing something like a mixture of methane and oxygen within flammability limits, but even very small concentrations of hydrocarbons and other organic compounds are apparently noticeable in the IR spectrum of a distant object.

As it turned out, they didn’t have any spectral data yet that could be used to draw conclusions about the chemical composition of the atmospheres (that will be obtained later), but they did have the masses, orbital radii and cross-sectional areas of the planets (these can be deduced from the oscillation of the star’s apparent position, see link “reduced mass“, and from the dimming of the star’s light when a planet moves across it), and the planet densities were too large for them to be gas planets. Also, the temperatures of three of them seemed to be in the range where water can exist as a liquid at reasonable pressures.

In fact, a planet doesn’t even need to be in the so-called habitable zone to possibly have liquid water. Even on Mercury, the planet closest to the Sun, there are places that are always in shadow and can actually get quite cold, because heat conduction through large masses of rock is slow compared to radiative heat loss to space. Liquid water could therefore exist in such places if it were contained inside some kind of “pocket” that also protected it from the low pressure. Also, the icy surface of Jupiter’s moon Europa can hide liquid water beneath it, because tidal forces from Jupiter cause viscous (frictional) heating inside the moon, despite it being far too distant from the Sun to be inside the “habitable zone“.

So, more interesting stuff is probably coming, as the astronomers obtain infrared spectra from that star system.

Also, see the related discussion in Physics Forums here.

The featured image is from Wikimedia Commons.