Square roots of matrices and operators

Everyone knows the definition of the square root of a non-negative number: if x ≥ 0, then the square root of x is a non-negative number y for which y^2 = x, and it is denoted y = \sqrt{x}. We could also call the number -\sqrt{x} a square root of x, but by convention the non-negative solution of the equation y^2 = x is called "The" square root.

So, what if we have an algebraic system that is not the set of non-negative real numbers, but one where a multiplication operation "\diamond" is defined? If a is an element of such a set, then a square root of a would naturally be an element b for which b \diamond b = a. The set of complex numbers is obviously such a set, but is there any other example?

Consider the set of 2 \times 2 matrices, the elements of which are denoted

\begin{pmatrix} a & b \\ c & d \end{pmatrix} ,

where a, b, c and d are real or complex numbers. So, could we define the square root of this kind of matrix in the same way as is done with the real numbers, using ordinary matrix multiplication as the operation "\diamond"? Non-commutativity shouldn't be a problem here, because any algebraic object commutes with itself.

Let’s use the identity matrix as an example:

I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} .

What's its square root? One quickly finds that there are several matrices that can be seen as square roots of I, for example the four diagonal ones:

\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}, \quad \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \quad \mathrm{and} \quad \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} ,

and it is not at all obvious which one of these should be called “The Square Root of I“. Because of this, the concept of square root is not often used when working with matrices.
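
As a quick check, squaring, for example, the third of these candidates indeed reproduces the identity matrix:

\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I .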

Sometimes, a square root can also be found for operators that act on a set of functions, like the second derivative operator \frac{d^2}{dx^2}, for which the operators \frac{d}{dx} and -\frac{d}{dx} are obvious "square root candidates". Operator multiplication doesn't have to be commutative, so operators are a bit like matrices, but they are usually not representable by finite-dimensional matrices unless the domain (the set of functions that the operator acts on) is artificially made very limited.
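
As a quick check of these candidates (the product of operators here means composition):

\left(-\frac{d}{dx}\right)\left(-\frac{d}{dx}\right)f(x) = \frac{d}{dx}\left(\frac{df}{dx}\right) = \frac{d^2 f}{dx^2} .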

An example of the operator square root problem that actually occurs in the analysis of a real physical system is the Klein-Gordon equation, which was the first attempt to form a relativistic version of the time-dependent Schroedinger equation (TDSE) of quantum mechanics. The time-dependent Schroedinger equation of a free particle (a particle that is not acted on by any forces) is:

i\hbar \frac{\partial \Psi}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2 \Psi}{\partial x^2} ,

where \Psi is a function of both the time t and a space coordinate x (we consider a simple one-dimensional situation). This can be written compactly as

i\hbar \frac{\partial \Psi}{\partial t} = \hat{H}\Psi ,

where the free-particle Hamiltonian operator

\hat{H} = \frac{\hat{p}^2}{2m} = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}

is obtained from the classical expression for kinetic energy, E_k = p^2 /2m, by replacing the classical one-dimensional momentum p with the corresponding quantum momentum operator

\hat{p} = -i\hbar \frac{\partial}{\partial x} .

The problem with the normal TDSE is that the time and space coordinates are treated differently in it, because it is first order with respect to time and second order with respect to position. This is not acceptable if we want to satisfy the principle of special relativity.

A naive attempt to form a relativistic Schroedinger equation is to take the formula of the energy of a moving body in special relativity

E = \sqrt{p^2 c^2 + m^2 c^4}

and replace the momentum p with the quantum momentum operator \hat{p}. Then we would have

i\hbar \frac{\partial \Psi}{\partial t} = \sqrt{\hat{p}^2 c^2 + m^2 c^4}\,\Psi ,

but the problem with this is how to define the square root of something that contains a derivative operator. One way to do this would be to use the Taylor series expansion of the square root:

\sqrt{1+x} = 1 + \frac{x}{2} - \frac{x^2}{8} + \frac{x^3}{16} - \dots ,

equating the x here with the derivative operator. But this is not acceptable, because an equation that contains derivatives of all positive integer orders can be seen as equivalent to a difference equation like

\Psi (x + \Delta x , t) = C\,\Psi (x, t) ,

which is non-local (it connects values of the wavefunction at points separated by a finite distance \Delta x), and in an equation of motion like the Klein-Gordon equation this would mean that there can be immediate (faster than the speed of light) interactions between objects. (If you want, try to write the difference equation above as an infinite-order differential equation by using a Taylor expansion.)
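
As a hint for that exercise: by Taylor expansion, shifting the argument of a function by a finite distance is equivalent to applying derivatives of all orders,

\Psi (x + \Delta x , t) = \sum_{n=0}^{\infty} \frac{(\Delta x)^n}{n!}\frac{\partial^n \Psi (x,t)}{\partial x^n} = e^{\Delta x \frac{\partial}{\partial x}}\Psi (x,t) ,

so a finite-difference relation contains derivatives of all orders, and vice versa.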

Because of the non-locality problem, the actual conventional Klein-Gordon equation is formed by constructing the squared energy operator \hat{H}^2 = \hat{p}^2 c^2 + m^2 c^4 from the relativistic energy-momentum relation, and equating \hat{H}^2 \Psi  with -\hbar^2 \frac{\partial^2 \Psi}{\partial t^2} . The result is

-\hbar^2 \frac{\partial^2 \Psi}{\partial t^2} = -\hbar^2 c^2 \frac{\partial^2 \Psi}{\partial x^2} + m^2 c^4 \Psi .

This, however, causes other problems, because the equation allows particles to have negative values of energy, which are hard to interpret physically. These problems are solved in quantum field theory (QFT), where the number of particles that are present in the system can be uncertain, similarly to how the position of a single quantum particle described by the Schroedinger equation can have some expectation value and a nonzero standard deviation.

What are the most energetic combustible materials?

Most people have seen several examples of combustion reactions that produce different flame temperatures. The temperature of a campfire can be somewhat over 800 deg C, while the flame of an oxyacetylene welding torch can reach a temperature of over 3000 deg C. The energetics of combustion reactions can be quantified in several ways:

1. Maximum temperature of the flame that is produced
2. Amount of heat released in combustion (per unit mass or volume of fuel)
3. In the cases of gaseous fuels, aerosols or vapors, the magnitude of sudden pressure increase in the explosion of fuel-oxygen mixture

Note that the amount of heat released per mole of fuel is not a good measure of energy content, as the molar heat of combustion of an organic compound can be made practically as large as one wants by building longer and longer hydrocarbon chains.

The heat release of combustion reactions, more formally called enthalpy of combustion \Delta H_c, is rather easy to measure in the laboratory using a calorimeter. A typical example of a combustion reaction is the burning of methane, described by the reaction equation below:

CH_4 + 2O_2 \rightarrow CO_2 + 2H_2 O .

The enthalpy change in this reaction can be calculated from the enthalpies of formation (\Delta H_f) of the reactants and products, which can be found from tables of thermodynamic quantities (in books or online). The enthalpy change in the combustion of a mole of methane is

\Delta H_c (CH_4) = \Delta H_f(CO_2 ) + 2\Delta H_f(H_2 O) - \Delta H_f (CH_4 ),

where the enthalpy of formation of elemental oxygen has not been included as it is zero (as is the \Delta H_f of any element in its most stable form).

The thermodynamic quantities needed here are

\Delta H_f(CO_2 ) = -393 kJ/mol
\Delta H_f(H_2 O) = -242 kJ/mol
\Delta H_f(CH_4 ) = -74.9 kJ/mol
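
Substituting these values gives the result directly (using a molar mass of about 16.0 g/mol for methane):

\Delta H_c (CH_4 ) = (-393 + 2\times (-242) - (-74.9))\ \mathrm{kJ/mol} \approx -802\ \mathrm{kJ/mol} \approx -50\ \mathrm{kJ/g} .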

An enthalpy change that is negative means that heat is released to the surroundings by the reaction. The formal definition of enthalpy is H=E+PV, or internal energy plus pressure times volume, and in constant-pressure processes of simple systems its change is \Delta H = \Delta E + P\Delta V. A system that is "simple" in the sense meant here can do work on the surroundings only by expanding: W = -P\Delta V. Some systems can do other kinds of work too, like elastic deformation (which can require applying a force over some distance even without a change of volume) or electrical work (charging a battery).

As the signed \Delta H_c increases linearly with the \Delta H_f values of the products and decreases linearly with the \Delta H_f values of the reactants, the most energetic fuels (those with the most negative \Delta H_c) are those that have a positive heat of formation, which means that energy is consumed when they are formed from their elemental constituents. Straight-chain alkane hydrocarbons like ethane, propane and butane all have negative heats of formation. To make a hydrocarbon that has a positive \Delta H_f, we need triple bonds between carbon atoms, or highly strained carbon-carbon bonds (such as in small alicyclic rings). An example of the former case is acetylene (C_2 H_2), for which \Delta H_f = 227.4 kJ/mol = 8.7 kJ/g, and an example of the latter is cyclopropane (C_3 H_6), for which \Delta H_f = 53.2 kJ/mol = 1.26 kJ/g. Cyclopropane was used in the past as an anesthetic gas in surgical operations, but this use was discontinued because of the fire/explosion hazard related to it. In addition to molecules with small ring structures, there are also other molecules with large steric strain, like tetra-tert-butylmethane or cubane, that have (or are predicted to have) a positive heat of formation.

Inorganic combustible substances with a positive \Delta H_f include hydrazine (N_2 H_4) and cyanogen (C_2 N_2). Both are nasty toxic compounds, and hydrazine can also explode when heated, even with no oxygen present, as it decomposes violently into elemental hydrogen and nitrogen if given enough activation energy (this can happen with acetylene, too). Cyanogen is a gaseous compound formed from two cyano groups (-CN). The cyano group is called a pseudohalogen, as it is often found in organic molecules in positions where there could also be a halogen (fluorine, chlorine, bromine or iodine) atom (compare cyanobenzene and chlorobenzene in Figure 1). A stoichiometric mixture of cyanogen and oxygen can reach a flame temperature of about 4500 degrees Celsius.


Figure 1. Molecular structures of cyanobenzene (left) and chlorobenzene (right).

Reactive metals like magnesium or aluminum often have large heats of combustion; for example, \Delta H_c (Al) is about -838 kJ/mol or -31 kJ/g, which is less per gram than the typical values of hydrocarbons, but more per unit volume, because Al metal has a significantly higher density than liquefied hydrocarbons. Aluminum is not something that a layman would think of as a combustible fuel, but it is actually very flammable when it's in the form of very fine powder, and it is used in thermite mixtures and flash powders (pyrotechnic mixtures of Al powder with oxidizers such as potassium perchlorate, which produce a very loud bang and a temperature of over 3000 deg C when ignited in a confined space).


Figure 2. Magnesium metal burns with a really high-temperature flame, which makes it useful in firestarters (source: https://en.wikipedia.org/wiki/Magnesium#/media/File:Magnesium_Sparks.jpg )


Figure 3. The thermite mixture, made from finely powdered aluminum and iron oxide, burns with a high temperature flame and has been used in the welding of railroad tracks. (source: https://commons.wikimedia.org/wiki/File:ThermiteReaction.jpg )

When estimating the maximum temperature that can be reached in the combustion of some substance, one needs to consider not only the enthalpy change of the reaction, but also the heat capacities of the products that are formed. The adiabatic flame temperature, denoted T_f, is the temperature that would theoretically be reached when the fuel reacts with oxygen in a system that is thermally insulated (to prevent heat loss to the surroundings) but isobaric (can do work on the surroundings by expanding). A basic estimate of T_f is

T_f = T_0 + \frac{|\Delta H_c |}{C_p} ,

where T_0 is the initial temperature (about 298 K), |\Delta H_c | is the heat produced in the combustion of 1 mole of the fuel and C_p is the constant-pressure heat capacity

C_p = \left( \frac{\partial H}{\partial T} \right)_P

of the product mixture at a temperature of about 1000 deg C (the partial derivative of enthalpy with respect to temperature at constant pressure). This formula is not accurate for the most energetic combustions, as most combustion reactions don't proceed all the way to the stoichiometric end products at high temperatures. This is because at high temperatures the molar entropy change \Delta S of the reaction starts to be a significant factor in determining the molar Gibbs energy change (and equilibrium constant) of the process, and smaller molecules (like H_2 and O_2 instead of H_2 O) usually have a larger molar entropy than large ones. At high temperatures the heat capacity also increases dramatically, because the thermal energy k_B T, which gives the energy scale corresponding to absolute temperature T, starts to approach the energy scale of molecular vibrational (and later also electronic) transitions, and energy is therefore also distributed into the vibrational and electronic degrees of freedom of the reaction products. At really high temperatures, like inside the Sun, matter is in the form of ionized plasma, but that kind of extreme conditions can't be produced chemically.
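
As a rough illustration, this estimate can be evaluated for methane burning in pure oxygen with a few lines of C. The C_p values below are assumed averages around 1000 deg C, chosen here for illustration only (they are not tabulated data):

#include <stdio.h>

/* Rough evaluation of T_f = T_0 + |dHc| / Cp for methane in pure oxygen. */
int main(void)
{
    double T0    = 298.0;     /* initial temperature, K */
    double dHc   = 802000.0;  /* heat released per mole of CH4, J/mol */
    double cpCO2 = 57.0;      /* assumed average Cp of CO2, J/(mol K) */
    double cpH2O = 41.0;      /* assumed average Cp of H2O, J/(mol K) */
    double Cp    = cpCO2 + 2.0 * cpH2O;  /* products: 1 CO2 + 2 H2O */

    double Tf = T0 + dHc / Cp;

    /* Prints about 6070 K, far above the real value of roughly 3000 K,
       illustrating how dissociation makes this simple estimate fail
       at high temperatures. */
    printf("Estimated adiabatic flame temperature: %.0f K\n", Tf);
    return 0;
}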

When considering the combustion of reactive metals, another thing that limits the maximum reaction temperature is the boiling point of the reaction products such as MgO or Al_2 O_3. The flame temperature in metal combustion can’t usually exceed the boiling point of the most volatile product, as the heat of combustion is not as large as the heat of vaporization of the reaction products. The combustion of zirconium metal in pure oxygen can raise the temperature up to 4000 degrees Celsius, because of the very high boiling point of zirconium oxide.

The factors mentioned above limit the maximum temperature attainable in a chemical combustion reaction to about 5000 degrees Celsius, which is approximately the adiabatic flame temperature of dicyanoacetylene, a derivative of cyanogen. The amount of pressure rise in the combustion reaction, which was mentioned as one measure of the energetics of the reaction, depends on both the flame temperature and the difference in the number of moles of gas in the reactants and products. Nuclear reactions such as the fission of uranium-235 are able to produce much higher temperatures, up to millions of degrees Celsius, due to the very large energy release in a relatively small volume.

Some things about iterative sequences

How would one, using only pen and paper, construct a sequence of rational numbers that approaches the number \sqrt{2}? This is not a simple question for many, as nowadays most people have never had to calculate approximations of square roots without a calculator. A simple way to solve this problem is to note that if x is any positive rational number, then the number \sqrt{2} always lies between x and 2/x. So we can form the sequence by starting with a_0 = 1 and then calculating the number a_n as the average of a_{n-1} and 2/a_{n-1}:

a_1 = \frac{1}{2}(1 + \frac{2}{1}) = 3/2 = 1.5
a_2 = \frac{1}{2}(3/2 + 2/(3/2)) = 17/12 \approx 1.416666
a_3 = \frac{1}{2}(17/12 + 2/(17/12)) = 577/408 \approx 1.414216

This sequence approaches the number \sqrt{2} = 1.414213562... rather quickly. The square root of any other positive integer m can be calculated with the same method by using the iteration rule a_n = \frac{1}{2}(a_{n-1} + m/a_{n-1}). The method also produces error bars for the approximate results, as we always have a_n \leq \sqrt{m} \leq m/a_n or m/a_n \leq \sqrt{m} \leq a_n.
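
Although the point here is pen-and-paper calculation, the iteration is also easy to check by machine. A minimal C sketch (the value of m and the number of steps are arbitrary choices):

#include <stdio.h>

/* Babylonian iteration a_n = (a_{n-1} + m/a_{n-1})/2 for sqrt(m) */
int main(void)
{
    double m = 2.0;  /* the number whose square root is sought */
    double a = 1.0;  /* initial guess a_0 */
    int n;

    for (n = 1; n <= 5; n++)
    {
        a = 0.5 * (a + m / a);
        /* a and m/a always bracket sqrt(m), giving error bounds */
        printf("a_%d = %.9f, m/a_%d = %.9f\n", n, a, n, m / a);
    }
    return 0;
}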

More generally, if one wants to find an iterative sequence that approaches a number x that has some desired property described by the equation f(x) = 0, the iteration rules to try are of the form a_n = a_{n-1} + cf(a_{n-1}), where c is some positive or negative constant. Note that the number x is a "fixed point" of this iteration: if for some n the number a_n is already exactly x, then all the subsequent numbers a_k with k > n are also equal to x.

Depending on the choice of the initial guess a_0 and the factor c, these iterations can converge towards the desired result either quickly (like the iteration for \sqrt{2} above), slowly or not at all. Suppose we want to find a solution for the cubic equation P(x) = x^3 + 3x^2 -3x - 9 = 0 by iteration. If we try to set a_0 = 1 and a_n = a_{n-1}+P(a_{n-1}), we get the quickly diverging sequence

a_1 = -7
a_2 = -191
a_3 = -6858055

and so on. If we choose a_n = a_{n-1} + 0.1P(a_{n-1}) instead, the sequence converges towards the solution x = -\sqrt{3} \approx -1.732050808, but rather slowly:

a_0 = 1
a_1 = 0.2
a_2 = -0.7472
a_3 = -1.29726

a_8 = -1.71421
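
The damped iteration is easy to reproduce with a short C program (a sketch; running 8 steps just matches the list above):

#include <stdio.h>

/* The cubic whose root we want: P(x) = x^3 + 3x^2 - 3x - 9 */
double P(double x)
{
    return x*x*x + 3.0*x*x - 3.0*x - 9.0;
}

int main(void)
{
    double a = 1.0;  /* initial guess a_0 */
    double c = 0.1;  /* damping factor; c = 1 diverges for this P */
    int n;

    for (n = 1; n <= 8; n++)
    {
        a = a + c * P(a);
        printf("a_%d = %.5f\n", n, a);
    }
    /* the sequence converges slowly towards -sqrt(3) = -1.73205... */
    return 0;
}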

Sometimes this kind of iteration can also get "stuck in a loop", bouncing between two values and not converging towards any number, like the sequence a_0 = 1, a_1 = 2, a_2 = 1, a_3 = 2, a_4 = 1, …

Sometimes the property that defines the number we're seeking is not described by a polynomial equation. For example, the Lambert W function W(x) is defined by

x = W(x)e^{W(x)}.

Obviously, if we want to find an approximation of W(x) for some x, we can try an iteration with the rule a_n = a_{n-1} + c(x-a_{n-1}e^{a_{n-1}}), where c is some nonzero number between -1 and 1. This, of course, doesn't produce rational-number approximations like the iterations in the examples above did. Other things that have to be calculated iteratively are the solutions of transcendental equations like e^x = 3x, which appear in the quantum theory of finite potential wells and in the analysis of nonlinear electronic circuits (those that have diodes or other nonlinear components in them).
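
For example, a minimal C sketch of this iteration for x = 1 (the choices c = 0.5 and a_0 = 0.5 are arbitrary but happen to converge):

#include <stdio.h>
#include <math.h>

/* Iteration a_n = a_{n-1} + c*(x - a_{n-1}*exp(a_{n-1})) for W(x) */
int main(void)
{
    double x = 1.0;
    double c = 0.5;
    double a = 0.5;  /* initial guess a_0 */
    int n;

    for (n = 1; n <= 15; n++)
    {
        a = a + c * (x - a * exp(a));
        printf("a_%d = %.9f\n", n, a);
    }
    /* the sequence approaches W(1) = 0.567143... */
    return 0;
}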

So, if you have to give a presentation about convergence and iterations in some maths course, playing with these kinds of calculations is a good way to construct multiple example cases of slowly or quickly converging sequences!

Random Numbers in Computation

When doing numerical simulations on a computer, there is often a need for "random" numbers. The most important application where they are needed is probably Monte Carlo simulation, which is used for things like computing approximate ground-state wavefunctions and energy eigenvalues of quantum systems, and calculating position- and direction-dependent light intensity functions in an optical system where emission, reflection and absorption of light take place.

The notable fact about these random numbers is that they are actually not random at all, and they are better described as pseudorandom numbers. A digital computer works deterministically, which means that the same input always produces the same result. This situation may change when quantum computing becomes more commonplace. Those who have studied quantum mechanics may remember that if we have a quantum observable (some measurable quantity) A that has eigenstates \left|a\right> and \left|b\right> corresponding to eigenvalues a and b, and we prepare the quantum system in the state

\left|\psi\right> = \frac{1}{\sqrt{2}}\left(\left|a\right> + \left|b\right>\right)

(or more generally any state \left|\psi\right> = \frac{1}{\sqrt{2}}\left(e^{i\alpha}\left|a\right> + e^{i\beta}\left|b\right>\right), where \alpha and \beta are real numbers), and measure the value of the observable A in this state, the result of the measurement can be a or b with equal probabilities, but is otherwise completely random. Here randomness means that it's not possible to know beforehand which one of the possible values is obtained in any particular repetition of the experiment.

When one sees the sequence a_n with

a_0 = 0,a_1 = 1, a_2 =2, a_3=3, a_4=4 \dots

it seems intuitively obvious that it is not something we'd call random, as every number in the sequence is the previous one plus 1. Neither is the sequence

a_0 = 0.5431, a_1 = -1.4267, a_2 = 1.1002, a_3 = -0.2983, a_4 = 0.7826, a_5 = -1.9201 \dots

because it alternates in sign, with a positive number always followed by a negative one and vice versa. In addition to the appearance of randomness as evaluated by a human observer, there are also many kinds of statistical tests that can be performed on a sequence to see whether there are correlations between the numbers a_n and a_{n+1} (or a_n and a_{n+2}, etc.).

Fortunately it is quite easy to construct sequences of numbers that lie in some desired interval of the real number line and are otherwise seemingly random, in the sense of not following any obvious logic/pattern. The best-known example of this is probably the behaviour of the sequence of complex numbers

a_n = e^{in^2}=\cos n^2 + i\sin n^2, with n=1,2,3,4,\dots.

The numbers in this sequence all lie on the unit circle of the complex plane, and as the difference (n+1)^2 - n^2 = 2n + 1 is an integer, and therefore always incommensurate with the period 2\pi, the numbers have an appearance of randomness.

The visual evaluation of the randomness of a number sequence is easiest if the x and y coordinates of a set of points on a plane are assigned values from the sequence. To test this, let us make two sequences, a_n and b_n, defined in a way similar to the real and imaginary parts of the sequence mentioned above:

a_n = \sin (13+7(n+11)^2)
b_n = \sin (27+17(n+23)^2)

The factors 7 and 17, the constant "phases" 13 and 27, and the "offsets" 11 and 23 are something I just pulled out of a hat; they could equally well be any other set of numbers that don't have common factors.

Now let's make a C++ program that prints the coordinates of the points (x,y) = (a_n , b_n) from n=1 to n=nmax:

#include <stdio.h>
#include <math.h>

int main(void)
{
    int n;
    int nmax = 5000;

    /* print one (x, y) = (a_n, b_n) pair per line */
    for (n = 1; n <= nmax; n++)
    {
        double m = (double) n;
        printf("%f %f\n", sin(13.0 + 7.0*(m + 11.0)*(m + 11.0)),
                          sin(27.0 + 17.0*(m + 23.0)*(m + 23.0)));
    }
    return 0;
}

Suppose this code is compiled into an executable file randnum.exe. Writing

randnum > out.txt

on the Windows command line (or doing the equivalent in Linux) then prints the coordinates into the file "out.txt".

Next, plot the data points in the file in a 2D coordinate system, using Grace or some other graphing program. The results for the values nmax = 5000, nmax = 10000 and nmax = 20000 are shown below:

[Plots of the points (a_n , b_n) for nmax = 5000, 10000 and 20000.]
The points seem to be rather random, except that they are more likely to be near the borders of the rectangle than in the middle. The reason for this is the same as why an oscillating pendulum is most likely to be found near the edges of its trajectory at a randomly chosen moment: the derivative of the sine function A\sin \omega t with respect to t has its minimum absolute value when the sine function itself is at its maximum or minimum. To correct this, change the code so that it converts the sine values back to angles in the interval [-\pi /2 , \pi /2] and divides by \pi / 2:

for (n = 1; n <= nmax; n++)
{
    double m = (double) n;
    printf("%f %f\n", 2.0*asin(sin(13.0 + 7.0*(m + 11.0)*(m + 11.0)))/3.141592653589793,
                      2.0*asin(sin(27.0 + 17.0*(m + 23.0)*(m + 23.0)))/3.141592653589793);
}

Below you see a plot of the data points (x,y) obtained with this code.

[Plot of the data points after the arcsine correction.]

This seems to be random enough for most purposes. Note, however, that you have to be careful about what kind of sequence you use when producing pseudorandom numbers in this way. If we do the same thing but use the same n-offset in both sine functions:

for (n = 1; n <= nmax; n++)
{
    double m = (double) n;
    printf("%f %f\n", sin(13.0 + 7.0*(m + 11.0)*(m + 11.0)),
                      sin(27.0 + 17.0*(m + 11.0)*(m + 11.0)));
}

the plot of the data points looks like this:

[Plot of the data points, which now trace a Lissajous-like curve instead of filling the rectangle.]

which doesn’t appear random at all.

Nondimensionalization and characteristic scales in physics

1 Introduction

When doing calculations in computational physics, it is usually necessary to convert the equations to a nondimensional form, as a computer almost always works with dimensionless numbers. The simplest way to do this would be to set the SI units (kilogram, meter, second, ...) equal to 1. This choice of scale, however, is not always appropriate, as the interesting features of the dynamics of the system can take place at lengths and time intervals that are very small or very large compared to these units. Plots and tables of the computational results are easier to read when the values in them are numbers that don't differ from 1 by too many orders of magnitude. Examples of different choices of scale are the use of natural units (where we set c = 1 and ћ = 1) in high-energy physics, and atomic units in quantum chemistry, where the appropriate unit of length is the Bohr radius and that of energy is the Hartree.

2 Simple harmonic motion

As an example of a classical mechanical system with a characteristic timescale, consider a 1D Hookean oscillator with spring constant k and mass M. The equation of motion for the single position variable x(t) is

 M \frac{d^2 x}{dt^2} = -kx .           (1)

The obvious choice of typical timescale in this case is of course the period of oscillation

T = 2 \pi \sqrt{\frac{M}{k}} ,           (2)

and the corresponding nondimensional time variable would be \tilde{t}=t/T. This scale could also be found by simple dimensional analysis, forming a product of powers of the parameters M and k, and requiring that the product has dimensions of time:

[M]^{\alpha}[k]^{\beta} = kg^\alpha \times \frac{kg^\beta}{s^{2\beta}}=kg^{\alpha + \beta}s^{-2\beta} = s         (second) .         (3)

Solving this for \alpha and \beta gives us a characteristic time M^{1/2}k^{-1/2}, which is the period T divided by 2\pi and therefore of the same order of magnitude as T. If we want to integrate the equation of motion numerically with the finite-difference method, we would probably choose the discrete time step to be at least about 10-15 times smaller than T, and when plotting the trajectory x(t) we would usually choose a range of the t-coordinate that is only a few periods long, no matter what the initial position x(0) and velocity x'(0) of the oscillating body are.
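
Explicitly, matching the exponents of kg and s on both sides of (3) gives

\alpha + \beta = 0, \quad -2\beta = 1 \quad \Rightarrow \quad \alpha = \frac{1}{2}, \quad \beta = -\frac{1}{2},

so the characteristic time is M^{1/2}k^{-1/2} = \sqrt{M/k}.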

On the other hand, the motion of the oscillator described above does not have a general characteristic length scale. If we have two trajectories where in the first one the total energy (kinetic + potential) of the oscillator is 1 Joule and in the second one it’s 100 Joules, it is not appropriate to use the same range of x-coordinate when plotting the functions x(t), because in the latter case the amplitude of the trajectory is 10 times larger than in the former. This fact can also be seen from a simple dimensional analysis, by noting that there is no product of powers of M and k that has dimension of length:

[M]^{\alpha}[k]^{\beta} \neq m  (meter), for any \alpha, \beta .

3 Linear motion with damping

As another simple mechanical system that has a characteristic timescale, consider linear 1D motion with damping (air resistance or friction). The equation of motion is

M\frac{d^2 x}{dt^2}=-b\frac{dx}{dt},              (4)

where the damping parameter b has dimensions of mass/time. Immediately we can see that the variable M/b has dimensions of time, but we don’t know its physical significance yet. Solving the DE for x(t) and its time derivative, we obtain:

x(t) = x_0 + \frac{Mv_0}{b} - \frac{Mv_0}{b}e^{-b(t-t_0 )/M}        (5)

x'(t) = v_0 e^{-b(t-t_0 )/M} = v_0 2^{-\frac{b(t-t_0 )}{M \log 2}}=v_0 2^{-\frac{t-t_0 }{t_{1/2}}}  .      (6)

where x_0 and v_0 are the position and velocity at time t = t_0, and

t_{1/2}=\frac{M\log 2}{b}     (7)

is the half-life of the velocity.

Usually when some variable decays exponentially with some half-life, be it the number of radioactive atoms in a sample or the amount of a foreign substance (medication, environmental toxin) in the human body, we can say that for our purposes the variable has practically become zero after about 6-10 half-lives. Therefore the parameter t_{1/2}, or the related parameter M/b (which we obtained by dimensional analysis), can be seen as the characteristic timescale for the motion in this system. Again, there is no single characteristic length for the system, as the distance the object travels from the initial position x_0 in a given number of half-lives is linearly proportional to the initial velocity v_0.

4 A particle inside a rigid box

What about mechanical systems where the motion happens on some characteristic length scale? The simplest example of this kind of behavior is the 1D motion of an object that has been confined to an interval [0, L] of the x-axis by impenetrable walls at the points x = 0 and x = L. This can be described by saying that the potential energy of the object is zero when 0 < x < L and infinite when x < 0 or x > L. In quantum mechanics this model system is called "particle in a box", but here we consider only the classical mechanical equivalent.

If the collisions of the object with the walls are elastic, the trajectory x(t) is a sawtooth function like the one plotted below:


Figure 1. The periodic classical trajectory of a 1D particle in a rigid box.

Now the interval the object moves in is always [0, L], which doesn't depend on its initial position and velocity (as long as the initial position is inside the box and the initial velocity is nonzero). Therefore the obvious choice of length scale for the system is L. However, this system does not have a characteristic time scale, as the period of the motion is 2L/v_0, which depends on the initial conditions.
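
For completeness (this formula is easy to verify, though not needed for the argument): with x(0) = 0 and initial speed v_0 > 0, the sawtooth trajectory of Figure 1 can be written in closed form as

x(t) = \frac{2L}{\pi}\arcsin\left|\sin\left(\frac{\pi v_0 t}{2L}\right)\right| ,

which shows explicitly that the period 2L/v_0 depends on the initial conditions while the range [0, L] does not.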

5 A system with both a characteristic length and time

How, then, would one construct a system where there exist characteristic scales in both length and time? By dimensional analysis, we can expect that the set of parameters a,b,c,... in the equation of motion should be such that some product of their powers, a^{\alpha} b^{\beta} c^{\gamma} \dots , has dimensions of time and some other similar product has dimensions of length.

One way to make such a system is to form an equation of motion that contains both a Hookean returning force and a term that depends on both position and velocity:

M\frac{d^2 x}{dt^2} = -b x \left( \frac{dx}{dt} \right)^2 - kx        (8)

Now we can deduce that the parameter b needs to have dimensions [mass][length]^{-2}. A parameter formed from M, b and k that has dimensions of length is

L = \sqrt{\frac{M}{b}}            (9)

and a characteristic time is the familiar

T = \sqrt{\frac{M}{k}},            (10)

which already appeared in the case of simple harmonic motion. From these we can form dimensionless length and time variables

\tilde{x} = \frac{x}{L}, \tilde{t} = \frac{t}{T},            (11)

and written using these variables, the equation of motion becomes

\frac{d^2 \tilde{x}}{d\tilde{t}^2} = -A \tilde{x} \left( \frac{d\tilde{x}}{d\tilde{t}} \right)^2 - B\tilde{x} ,          (12)

where A and B are dimensionless numbers defined by:

A = \frac{bL^2}{M}, B = \frac{k T^2}{M}. (With the particular choices of L and T in equations (9) and (10), both A and B are equal to 1; keeping them as general symbols allows other choices of the reference scales, such as the A = 0.5, B = 1 used below.)

So what do the trajectories \tilde{x}(\tilde{t}) of this system look like? The equation of motion can't be solved analytically, but we can do a numerical solution with Mathematica, Matlab or some other program capable of that kind of calculation. Setting the values of the parameters to A=0.5 and B=1, and using the initial position \tilde{x}(0) = 0 and a set of different initial velocities \tilde{x}'(0) = 1, 10, 100 or 10000, the following graphs are obtained:


Figure 2. Trajectories of a particle in a velocity-dependent force field for initial dimensionless velocities of 1, 10, 100 or 10000.
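
Fig. 2 was made with a ready-made solver, but the same trajectories can also be computed with a few dozen lines of C. Here is a rough sketch using a classical fourth-order Runge-Kutta integration (the step size, total time and output interval are arbitrary choices, not values from the original figure):

#include <stdio.h>

/* Right-hand side of equation (12): the dimensionless acceleration,
   with A = 0.5 and B = 1 as in the figure. */
static double accel(double x, double v)
{
    const double A = 0.5, B = 1.0;
    return -A * x * v * v - B * x;
}

int main(void)
{
    double x = 0.0;      /* initial position  x~(0)  */
    double v = 10.0;     /* initial velocity  x~'(0) */
    double t = 0.0;
    double dt = 0.0001;  /* small step, because the force grows as v^2 */
    int i;

    for (i = 0; i < 200000; i++)
    {
        /* one classical 4th-order Runge-Kutta step for the pair (x, v) */
        double k1x = v;
        double k1v = accel(x, v);
        double k2x = v + 0.5 * dt * k1v;
        double k2v = accel(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v);
        double k3x = v + 0.5 * dt * k2v;
        double k3v = accel(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v);
        double k4x = v + dt * k3v;
        double k4v = accel(x + dt * k3x, v + dt * k3v);

        x += dt * (k1x + 2.0 * k2x + 2.0 * k3x + k4x) / 6.0;
        v += dt * (k1v + 2.0 * k2v + 2.0 * k3v + k4v) / 6.0;
        t += dt;

        if (i % 1000 == 0)
            printf("%f %f\n", t, x);  /* (t, x) pairs for plotting */
    }
    return 0;
}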

From the plots in Fig. 2, it is obvious that the amplitude and period of the trajectory stay within the same orders of magnitude over a very large range of initial velocities/energies of the moving object. A 10^4-fold increase in the dimensionless velocity causes only about a 6-fold increase in amplitude and about a 4-fold increase in frequency. The amplitude seems to increase only logarithmically with increasing velocity, i.e. more slowly than any fractional power of it.

Note, however, that if the initial kinetic energy of the particle at t = 0 is small enough, the velocity-dependent term in the equation of motion can be neglected, and the system behaves like a simple harmonic oscillator, for which the amplitude is proportional to the square root of the energy. To see the more interesting properties of this system, as in the graphs above, the energy needs to be large enough. In physics there are many situations where a system behaves differently on different energy scales; one example is a collision of atomic nuclei, which can be described by simple Coulombic repulsion at low collision energies but involves the possibility of nuclear fusion at higher energies, where the particles can overcome the repulsive electrostatic force. The classical mechanical system here is a very simple toy model of this kind of behavior.

6 Applications

Many problems of applied physics where characteristic lengths, times and dimensionless quantities appear are related to the physics of fluid flow. A familiar dimensionless number that describes fluid motion is the Reynolds number (Re), which indicates the likelihood of turbulence occurring in a fluid system. Others include the capillary number (Ca) and the Eötvös number (Eo), which describe the significance of surface tension in relation to other forces.
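
For reference, the standard definition of the Reynolds number for a flow with characteristic speed v and characteristic length L is

\mathrm{Re} = \frac{\rho v L}{\mu} ,

where \rho is the fluid density and \mu its dynamic viscosity; a large Re means that inertial effects dominate over viscous damping, and turbulence becomes likely.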

When a way to calculate the relevant length and time scales of a particular system is known, it is easier to choose a sufficiently fine discretization and a sufficiently large computational domain (intervals of the x- and t-axes) when integrating equations of motion numerically.