Numerical solution of PDEs, Part 9: 2D Ginzburg-Landau equation

In an earlier post, I described the 1-dimensional Ginzburg-Landau equation and showed how it can be linearized and solved with an implicit differencing scheme. The most interesting feature of the solutions was the appearance of seemingly random oscillations. A similar solution method works for the 2D version of the equation:

\frac{\partial A}{\partial t} = A + (1+i\alpha)\left(\frac{\partial^2 A}{\partial x^2} + \frac{\partial^2 A}{\partial y^2}\right) - (1+i\beta)|A|^2 A,

where again \alpha and \beta are real-valued constants.

An R code that solves this equation for parameter values \alpha = 0, \beta = 1.5 and an initial state A(x,y,0) that is a mixture of 2D plane waves is shown below.

library(graphics) #load the graphics library needed for plotting

lx <- 80.0 #length of the computational domain in x-direction
ly <- 80.0 #length of the computational domain in y-direction
lt <- 60.0 #length of the simulation time interval
nx <- 50 #number of discrete lattice points in x-direction
ny <- 50 #number of discrete lattice points in y-direction
nt <- 240 #number of timesteps
dx <- lx/nx #length of one discrete lattice cell in x-direction
dy <- ly/ny #length of one discrete lattice cell in y-direction
dt <- lt/nt #length of timestep

a <- 0
b <- 1.5

kappa1 = dt*(1+1i*a)/dx/dx #coefficient of the discrete Laplacian term (here dx = dy)
kappa2 = dt*(1+1i*b) #coefficient of the nonlinear term

C = c(1:(nx*ny)) #vector holding the current solution, flattened row by row
Cu = c(1:(nx*ny)) #vector holding the updated solution
A2d = matrix(nrow=ny,ncol=nx) #the solution A(x,y) on the 2D lattice
xaxis <- c(0:(nx-1))*dx #the x values corresponding to the discrete lattice points
yaxis <- c(0:(ny-1))*dy #the y values corresponding to the discrete lattice points

A = matrix(nrow=(nx*ny),ncol=(nx*ny)) #matrix of the linear system solved on each timestep
IP = matrix(nrow=4*nx,ncol=4*nx) #bitmap for the interpolated output image

for (i in c(1:ny)) { #set the initial state, a mixture of two plane waves
for (j in c(1:nx)) {
A2d[i,j] <- 0.01*exp(1i*5.21*i+1i*10.331*j)+0.01*exp(1i*15.71*i+1i*17.831*j)
}
}

for (k in c(1:nt)) { #main time stepping loop

for(i in c(1:ny)) { #flatten the 2D array A2d into the vector C
for(j in c(1:nx)) {
C[(i-1)*nx+j] <- A2d[i,j]

}
}

for(i in c(1:(nx*ny))) { #construct the matrix of the linearized system
for(j in c(1:(nx*ny))) {
A[i,j] <- 0
if(i==j) A[i,j] <- 1+4*kappa1+kappa2*abs(C[j])*abs(C[j]) - dt #diagonal elements; the five-point 2D Laplacian stencil contributes 4*kappa1
if(j==i+1 && (i%%nx != 0)) A[i,j] <- -kappa1 #east neighbor, absent on the right edge of a row
if(j==i-1 && (i%%nx != 1)) A[i,j] <- -kappa1 #west neighbor, absent on the left edge of a row
if(j==i+nx) A[i,j] <- -kappa1 #neighbor in the y-direction
if(j==i-nx) A[i,j] <- -kappa1 #neighbor in the y-direction
}
}

Cu <- solve(A,C) #solve the linear system to get the solution at the next timestep

for(i in c(1:ny)) { #apply zero-gradient boundary conditions at the domain edges
for(j in c(1:nx)) {
if(i==1) Cu[(i-1)*nx+j]=Cu[i*nx+j]
if(i==ny) Cu[(i-1)*nx+j]=Cu[(i-2)*nx+j]
if(j==1) Cu[(i-1)*nx+j]=Cu[(i-1)*nx+j+1]
if(j==nx) Cu[(i-1)*nx+j]=Cu[(i-1)*nx+j-1]
}
}

for(i in c(1:ny)) { #copy the updated solution back into the 2D array
for(j in c(1:nx)) {
A2d[i,j] <- Cu[(i-1)*nx+j]
}
}

for(l in c(1:(nx-1))) { #loop over rows (note that nx = ny in this example)
for(m in c(1:(nx-1))) { #make a bitmap with 4 times more pixels, using linear interpolation
IP[4*l-3,4*m-3] = A2d[l,m]
IP[4*l-2,4*m-3] = A2d[l,m]+0.25*(A2d[l+1,m]-A2d[l,m])
IP[4*l-1,4*m-3] = A2d[l,m]+0.5*(A2d[l+1,m]-A2d[l,m])
IP[4*l,4*m-3] = A2d[l,m]+0.75*(A2d[l+1,m]-A2d[l,m])
}
}

for(l in c(1:(4*nx))) {
for(m in c(1:(nx-1))) {
IP[l,4*m-2] = IP[l,4*m-3]+0.25*(IP[l,4*m+1]-IP[l,4*m-3])
IP[l,4*m-1] = IP[l,4*m-3]+0.5*(IP[l,4*m+1]-IP[l,4*m-3])
IP[l,4*m] = IP[l,4*m-3]+0.75*(IP[l,4*m+1]-IP[l,4*m-3])
}
}

#save a plot of the real part of the solution on every timestep
jpeg(file = paste("plot_",k,".jpg",sep=""))
image(Re(IP),zlim=c(-3,3))
title(paste("Real part of solution A(x,y,t), t =",round(k*dt,digits=2)))
dev.off()

}
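
As a side note, the nested assembly loop above does O((nx*ny)^2) work on every timestep and dominates the running time. A minimal sketch of a vectorized assembly in base R, using the same nx, ny, kappa1, kappa2, C and dt as above (an illustration of the idea rather than a tested drop-in replacement):

N <- nx*ny
A <- matrix(0+0i, nrow=N, ncol=N) #complex zero matrix
diag(A) <- 1 + 4*kappa1 + kappa2*abs(C)^2 - dt #diagonal elements
idx <- c(1:(N-1))
inner <- idx[idx %% nx != 0] #east/west couplings exist only within a row
A[cbind(inner, inner+1)] <- -kappa1 #east neighbors
A[cbind(inner+1, inner)] <- -kappa1 #west neighbors
idy <- c(1:(N-nx))
A[cbind(idy, idy+nx)] <- -kappa1 #south neighbors
A[cbind(idy+nx, idy)] <- -kappa1 #north neighbors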

The code produces a 2D plot of the real part of the solution on each timestep, and in the video shown below these frames have been combined into an animation.

In the animation we see the appearance of the spiral patterns that are typical for these values of the parameters \alpha, \beta. Other values of the parameters produce different kinds of patterns, as described in this link.


Numerical solution of PDEs, Part 8: Complex Ginzburg-Landau Equation

In the previous numerical solution posts, I described linear equations like the diffusion equation and the Schrödinger equation, and how they can be solved by (implicit or explicit) finite differencing. The idea of the implicit methods was to convert the equation into a linear system of equations, from which the function values on a discrete mesh can be calculated.

Saying that these equations are linear means that they can be written in the form

\mathcal{L}u(x,t) = f(x,t),

where the linear differential operator \mathcal{L}, containing space and time derivatives, acts on the unknown function u and produces “something” f (usually zero, but something nonzero in the case of source terms/inhomogeneity).

As a first example of a nonlinear PDE, let’s consider the complex Ginzburg-Landau equation (CGLE), which reads:

\frac{\partial A}{\partial t} = A + (1+i\alpha)\frac{\partial^2 A}{\partial x^2} - (1+i\beta)|A|^2 A.

Here \alpha and \beta are real parameters and i is the imaginary unit. Applying implicit differencing to this equation may seem to result in the system of equations

\frac{A_i^{j+1} - A_i^j}{\Delta t} = A_i^{j+1} + (1+i\alpha)\frac{A_{i+1}^{j+1} - 2A_i^{j+1} + A_{i-1}^{j+1}}{(\Delta x)^2} - (1+i\beta)|A_i^{j+1}|^2 A_i^{j+1},

but this is not a linear system because of the factor |A_{i}^{j+1}|^2, so we cannot solve the problem in this way by using linear algebra.

The trick for solving this is to linearize the system, by evaluating the |A|^2 at timestep j and the rest of the quantities at timestep j+1, producing the system

\frac{A_i^{j+1} - A_i^j}{\Delta t} = A_i^{j+1} + (1+i\alpha)\frac{A_{i+1}^{j+1} - 2A_i^{j+1} + A_{i-1}^{j+1}}{(\Delta x)^2} - (1+i\beta)|A_i^{j}|^2 A_i^{j+1},

which is now a linear system with respect to the variables evaluated at timestep j+1 (the matrix used to solve for the values A^{j+1} has diagonal elements that depend on the values A^j). A more sophisticated method would iterate this linearization several times to better approximate the values of A(x,t) between the timesteps j and j+1, as sketched below.
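
A minimal sketch of such an inner iteration, assuming a hypothetical helper function assemble_matrix(absA2) that builds the matrix of the linearized system from a given set of |A|^2 values (this helper is not part of the code that follows; the sketch is only an illustration of the idea):

step_iterated <- function(sol, assemble_matrix, n_inner = 3) {
guess <- sol #initial guess: take |A|^2 from timestep j
for (it in c(1:n_inner)) {
M <- assemble_matrix(abs(guess)^2) #rebuild the linearized matrix from the current iterate
guess <- solve(M, sol) #solve for a refined approximation of A at timestep j+1
}
guess
}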

An R code that solves the equation on the domain x\in [0,100], t\in [0,150], using discrete steps \Delta x \approx 0.67, \Delta t \approx 0.33, an initial state A(x,0) = 0.1e^{2ix} and parameter values \alpha=3 and \beta = -2, is shown below.

library(graphics) #load the graphics library needed for plotting

lx <- 100 #length of the computational domain
lt <- 150 #length of the simulation time interval
nx <- 150 #number of discrete lattice points
nt <- 450 #number of timesteps
dx <- lx/nx #length of one discrete lattice cell
dt <- lt/nt #length of timestep

a <- 3
b <- -2

kappa1 = dt*(1+1i*a)/dx/dx #an element needed for the matrices
kappa2 = dt*(1+1i*b)

psi = as.complex(c(1:nx)) #array for the function A values
sol = as.complex(c(1:nx)) #array for the solution at the next timestep

for(j in c(1:nx)) { #set the initial state A(x,0) = 0.1*exp(2ix)
psi[j] = 0.1*exp(2i*j*dx)
sol[j] = psi[j]
}

xaxis <- c(1:nx)*dx #the x values corresponding to the discrete lattice points

IPxaxis <- c(1:(4*nx))*dx/4
IPtaxis <- c(1:(4*nt))*dt/4

sol_plot = matrix(nrow=nt,ncol=nx) #array storing Re[A(x,t)] for the final 2D plot

A = matrix(nrow=nx,ncol=nx) #matrix for forward time evolution
IP = matrix(nrow = 4*nt, ncol=4*nx) #bitmap for the interpolated 2D plot

for (m in c(1:nt)) { #main time stepping loop

for(j in c(1:nx)) {
for(k in c(1:nx)) {
A[j,k]=0
if(j==k) {
A[j,k] = 1 + 2*kappa1 + kappa2*abs(sol[j])*abs(sol[j]) - dt #diagonal elements
}
if((j==k+1) || (j==k-1)) {
A[j,k] = -kappa1 #off-diagonal elements
}
}
}

for(l in c(1:nx)) { #copy the current solution into the right-hand side vector
psi[l] = sol[l]
}
sol <- solve(A,psi) #solve the system of equations

for (l in c(1:nx)) {
sol_plot[m,l] <- Re(sol[l])
}

jpeg(file = paste("plot_",m,".jpg",sep=""))
plot(xaxis,Im(sol),xlab = "position (x)",ylab="Im[A(x,t)]",ylim=c(-4,4),pch='.')
title(paste("Im[A(x,t)] at t = ",round(m*dt,digits=2)))
lines(xaxis,Im(sol))
dev.off()

}

for(l in c(1:(nt-1))) {
for(m in c(1:(nx-1))) { #make a bitmap with 4 times more pixels, using linear interpolation
IP[4*l-3,4*m-3] = sol_plot[l,m]
IP[4*l-2,4*m-3] = sol_plot[l,m]+0.25*(sol_plot[l+1,m]-sol_plot[l,m])
IP[4*l-1,4*m-3] = sol_plot[l,m]+0.5*(sol_plot[l+1,m]-sol_plot[l,m])
IP[4*l,4*m-3] = sol_plot[l,m]+0.75*(sol_plot[l+1,m]-sol_plot[l,m])
}
}

for(l in c(1:(4*nt))) {
for(m in c(1:(nx-1))) {
IP[l,4*m-2] = IP[l,4*m-3]+0.25*(IP[l,4*m+1]-IP[l,4*m-3])
IP[l,4*m-1] = IP[l,4*m-3]+0.5*(IP[l,4*m+1]-IP[l,4*m-3])
IP[l,4*m] = IP[l,4*m-3]+0.75*(IP[l,4*m+1]-IP[l,4*m-3])
}
}

jpeg(file = “2dplot.jpg”)
image(IPtaxis,IPxaxis,IP,xlab = "t-axis",ylab="x-axis",zlim=c(-2,2))
dev.off()

Plotting the real part of the resulting function A(x,t) at several values of t, we see that the solution initially doesn't do much of anything, but at some point a "phase turbulence" sets in, starting from the ends of the x-domain; after that the function evolves in a very random-looking way, without following any clear pattern (unlike the spreading mass/temperature distributions, traveling waves or scattering wavepackets seen with the common linear PDEs).

An animation of the solution is shown below.

This kind of chaos is typical of nonlinear systems, be they point mass systems with nonlinear forces between the mass points or field systems with nonlinear field equations such as the CGLE here. Note that solving this equation is too heavy a calculation to do just for the purpose of creating random numbers; for that end, other methods such as Perlin noise are used.

The 2D color plot of the real part of the solution, plotted in the xt-plane, looks like this:

[Figure: 2D color plot of the real part of A(x,t) in the xt-plane]

More plots of the solutions for different values of parameters can be found in this article.

It should be noted that Wolfram Mathematica's "NDSolve" function often fails to solve nonlinear PDEs correctly, despite usually working properly in the case of linear PDEs. Some other commercial math programs, such as Comsol Multiphysics, may work better when solving nonlinear problems, at least in my experience.

So, this was the basic idea of how nonlinear PDEs can be solved by linearization, and of what kinds of behavior their solutions can exhibit. In the next PDE post I will show how to solve the thin-film equation, about which I wrote my master's thesis in 2013, and which doesn't usually behave chaotically, unlike the CGLE (but can be made to do so by adding suitable terms).

An example of fractal-generating code

Fractals are structures that contain features at all scales, which means that they reveal ever more detail when zoomed into. Images of computer-generated fractals, like the featured image, are something that almost everyone has seen. They can be made with different kinds of iterations of complex-valued functions.

Having just quickly (in about half an hour) written an R code that generates one kind of fractal, I'll share it here:

n <- 500 # Number of pixels in x and y directions
L <- 1 # Length of the sides of square in complex plane
dx <- L/n # Spatial step size
flg <- 0 # A flag that tells if divergence has been observed
tres <- 0 # Variable for number of iterations before divergence

iter <- 50 # Number of iterations per pixel
bmap = matrix(nrow = n, ncol = n) # Bitmap for the fractal image

for (j in c(1:n)) # Loops over the real and imaginary axis
{
for (k in c(1:n))
{
z = dx*j-1.5 + (1i)*(k*dx-0.5) # Point z is chosen from the square
flg <- 0 # Initially, the flag is zero
tres <- 0 # Zero the threshold variable
x <- 1 # Initial value used in the iteration

for (l in c(1:iter)) # Iterate
{
x = x^2 + z # x_(n+1) = x_n^2 + z

if ((flg == 0) && (Mod(x) > 8)) # If divergence seems to have taken place, record the number of iterations done and set flag to 1
{
flg <- 1
tres <- l
}

}
bmap[j,k] <- tres # Decide the color of the pixel based on the threshold number of iterations
}
}

jpeg(file = "fractal.jpg") # Save the bitmap
image(bmap, zlim = c(1,30))

The image that the code creates looks like this.

[Figure: the fractal image produced by the code]

Now, if you want, try modifying the values of the parameters in the code and see how they affect the output image. Examples of modifications include changing the exponent 2 in the iterated function f(x) = x^2 + z to some other number (1.9, 2.1, 2.4 or so), or multiplying the x^2 term by something other than 1.
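For instance, the iteration line inside the loop could be changed to something like the following (an untested variation; for a non-integer exponent R uses the principal branch of the complex power):

x = 0.9*x^2.1 + z # hypothetical variant of the iteration step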


Common logical pitfalls when learning to write mathematical proofs

Mathematics is needed in many scientific disciplines, even though it’s used in somewhat different ways in them. A theoretical physicist usually has to study not only applied math, but also some pure mathematics which involves logical proofs of statements. A mechanical engineering student can usually do without anything containing formal logic and such, but has to learn basic differential equations, volume integrals and similar methods of applied mathematics. Chemistry and biology students also benefit from mathematical knowledge (at least one has to be able to solve a quadratic equation before one can calculate reactant and product concentrations from an equilibrium constant of a chemical reaction).

Rigorous logical thinking is not something that a human being has a natural instinct for (just as the concept of inertia and Newton's 1st and 2nd laws are not obvious from everyday earthbound experience). Let me list some of the most common logical errors that beginners make when learning to prove mathematical statements.

1. Inappropriate reversal of an implication

It is easy to see that if x = 3, then also x^2 = 9. But the reverse implication is not true: if x^2 = 9, we don't necessarily have x = 3, as x could also be -3. Even the statement x^3 = 27 does not imply that x = 3, if we allow for the possibility that x is a complex number (can you find the points on the complex plane that solve this equation?).
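In R, for example, the base function polyroot lists all three complex solutions of x^3 - 27 = 0 (the coefficients are given in increasing order of powers):

polyroot(c(-27, 0, 0, 1)) # roots of x^3 - 27: 3, and 3*exp(2i*pi/3), 3*exp(4i*pi/3) = -1.5 +/- 2.598i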

A more obvious way to demonstrate this is to note that the statement x = 3 implies x \in \mathbb{N}, but not every natural number is 3.

To avoid making this kind of logical error, make sure that you never start proving a statement by assuming that the statement itself is true. In fact, any logical statement can be "proved" if you assume things that contradict each other: it is logically correct to say that "if the Moon is made of cheese and it's not made of cheese, then cows are able to fly."

2. Playing with something that does not exist

Sometimes it seems possible to prove that an object X has properties A, B and C, but after some more thought you find out that X doesn't exist at all. You may have seen some YouTube videos where a math teacher proves something like 1 + 2 + 3 + 4 + \dots = -\frac{1}{12}, which is an absurd statement as the sum on the left side does not converge, but it can be made to look true if you take the liberty of dividing the integer terms into many pieces (with different signs) and rearranging them as you like.

Actually, the terms of an infinite sum can't be arbitrarily rearranged even in all cases where the sum converges: by Riemann's rearrangement theorem, a conditionally convergent series can be rearranged to converge to any value whatsoever.
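This can be checked numerically in R with the alternating harmonic series 1 - 1/2 + 1/3 - \dots, whose sum in the usual order is \log 2; taking two positive terms per negative term makes the partial sums approach \frac{3}{2}\log 2 instead (a quick sketch):

m <- 100000
usual <- sum((-1)^(c(1:(3*m))+1)/c(1:(3*m))) #partial sum of 3*m terms in the usual order
rearranged <- sum(1/seq(1,4*m-1,by=2)) - sum(1/seq(2,2*m,by=2)) #1 + 1/3 - 1/2 + 1/5 + 1/7 - 1/4 + ...
print(c(usual, log(2))) #approximately equal
print(c(rearranged, 1.5*log(2))) #approximately equal, a different sum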

3. Assuming without proof that something is unique

This is what we already did in case 1, by assuming that x^2 = 9 implies x=3. There are also some more obviously crazy results that can be obtained by making this mistake. One example is to use Euler's formula

e^{iz} = \cos z + i\sin z,

to show that e^{0} = e^{2\pi i}, and then take the natural logarithm of both sides of this equation to "show" that 0 = 2\pi i. The problem with this logic is that the complex logarithm is not single-valued: there are infinitely many complex numbers that can be called the logarithm of a given complex number.
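This is easy to see concretely in R, where the complex logarithm returns only the principal branch:

exp(0+0i) # 1+0i
exp(2i*pi) # 1+0i, up to rounding error
log(exp(2i*pi)) # 0+0i rather than 2*pi*i: log only inverts exp up to multiples of 2*pi*i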

4. Assuming that an infinite set contains everything

I remember seeing in some book a historical claim that the ancient Greeks tried to prove that there are only finitely many types of atoms in the following way: "if there were an infinite number of different kinds of atoms, there would exist atoms of all possible sizes, and then some of them would have to be large enough to be visible, which is not true as we have never seen an individual atom." This, of course, is poppycock: we can equally well say that there are infinitely many integers divisible by 2, but that set still does not contain the number 5.

There are many other ways to make logical errors through false assumptions about infinity. For one thing, the above-mentioned set of even integers is "just as infinite" as the set of all integers, even though it may seem to contain half as many numbers. On the other hand, the set of real numbers is "more infinite" than the set of natural numbers, which you will learn to prove if you study the concept of cardinality and injective/surjective/bijective mappings.

A continuity curiosity

It's Sunday, and I don't feel like writing lots of source code and posting it here today, so I'll make a shorter post this time...

In physics and engineering, functions describing physical quantities are often classified as piecewise continuous, everywhere continuous, differentiable, continuously differentiable or complex analytic, which is a list of requirements of increasing strictness. In classical mechanics, most functions are continuous and "well-behaved", excluding some fractal phase-space curves in chaotic systems.

In pure mathematics, one often plays with badly discontinuous or otherwise "detached from reality" functions. Let me show one example of such a function.

The continuity of a real function f:\mathbb{R}\rightarrow\mathbb{R} at point x_0 is defined by

\lim_{x\to x_0} f(x) = f(x_0),

or in other words

\forall \epsilon > 0\ \exists \delta > 0: |x - x_0| < \delta \Rightarrow |f(x) - f(x_0)| < \epsilon,

which practically means that if we set any error bars f(x_0)-\epsilon \leq f(x) \leq f(x_0)+\epsilon for f(x), no matter how strict, there are always some error bars x_0 - \delta \leq x \leq x_0 + \delta for x that keep the function value between those error bars.

So, here's an example of a curious function that is special in the sense that it is continuous at only one point:

f(x) = \begin{cases} x, & x\in\mathbb{Q} \\ -x, & x\in\mathbb{Q}^C \end{cases}

which is a real function that has value x when x is a rational number and has value -x when x is an irrational number (the notation \mathbb{Q}^C means the complement of \mathbb{Q}).

It is left as an exercise for the reader to explain why this function is continuous only at the point x=0.
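A small numerical illustration of the idea in R (a sketch: since floating-point numbers are all rational, the code tracks the rationality of each test sequence by construction instead of testing it):

f <- function(x, is_rational) if (is_rational) x else -x #the function defined above
n <- 10^c(1:6)
print(f(1 + 1/n, TRUE)) #rational sequence approaching 1: f tends to 1
print(f(1 + sqrt(2)/n, FALSE)) #irrational sequence approaching 1: f tends to -1, so f is discontinuous at x=1
print(f(1/n, TRUE)) #sequences of both kinds approaching 0...
print(f(sqrt(2)/n, FALSE)) #...give values tending to 0, consistent with continuity at x=0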

Sometime later, I'll write about fractal curves, which are continuous everywhere but differentiable nowhere, and which have no finite arc length.