Cover_Page

Contents

Preface

1 A First Monte Carlo Example

1.1 Energy of Interacting Classical Gas

2 Variational Quantum Monte Carlo for a One-Electron System

3 Two Electrons with Two Adiabatically Decoupled Nuclei: Hydrogen Molecule

3.1 Theoretical Description of the System

3.2 Numerical Results of Moderate Accuracy

3.3 Controlling the Accuracy

3.4 Details of Numerical Program

4 Three Electrons: Lithium Atom

4.1 More Electrons, More Problems: Particle and Spin Symmetry

4.2 Electron Orbitals for the Slater Determinant

4.3 Slater Determinants: Evaluation and Update

4.4 Some Important Observables in Atoms

4.5 Statistical Accuracy

4.6 Ground State Results

4.7 Optimization

5 Many-Electron Confined Systems

5.1 Model Systems with Few Electrons

5.2 Orthorhombic Quantum Dot

5.3 Spherical Quantum Dot

6 Many-Electron Atomic Aggregates: Lithium Cluster

6.1 Clusters and Nanophysics

6.2 Cubic BCC Arrangement of Lithium Atoms

6.3 The Cluster: Intermediate between Atom and Solid

7 Infinite Number of Electrons: Lithium Solid

7.1 Infinite Lattice

7.2 Wave Function

7.3 Jastrow Factor

7.4 Results for the 3 × 3 × 3 and 4 × 4 × 4 Superlattice Solid

8 Diffusion Quantum Monte Carlo (DQMC)

8.1 Towards a First DQMC Program

8.2 Conclusion

9 Epilogue

Appendix

A.1 The Interacting Classical Gas: High Temperature Asymptotics

A.2 Pseudorandom Number Generators

A.3 Some Generalization of the Jastrow Factor

A.4 Series Expansion

A.5 Wave Function Symmetry and Spin

A.6 Infinite Lattice: Ewald Summation

A.7 Lattice Sums: Calculation

References

Index

Title Page

Related Titles

Waser, R. (ed.)

Nanoelectronics and Information Technology

Advanced Electronic Materials and Novel Devices

2012

ISBN: 978-3-527-40927-3

 

Reimers, J. R.

Computational Methods for Large Systems

Electronic Structure Approaches for Biotechnology and Nanotechnology

2011

ISBN: 978-0-470-48788-4

 

Alkauskas, A., Deák, P., Neugebauer, J., Pasquarello, A., Van de Walle, C. G. (eds.)

Advanced Calculations for Defects in Materials

Electronic Structure Methods

2011

ISBN: 978-3-527-41024-8

 

Kroese, D. P., Taimre, T., Botev, Z. I.

Handbook of Monte Carlo Methods

2011

ISBN: 978-0-470-17793-8

 

Allinger, N. L.

Molecular Structure

Understanding Steric and Electronic Effects from Molecular Mechanics

2010

ISBN: 978-0-470-19557-4

 

Owens, F. J., Poole, Jr., C. P.

The Physics and Chemistry of Nanosolids

2008

ISBN: 978-0-470-06740-6

 

Reinhard, P.-G., Suraud, E.

Introduction to Cluster Dynamics

2004

ISBN: 978-3-527-40345-5

 

Landau, R. H., Páez, M. J.

Computational Physics

Problem Solving with Computers

1997

ISBN: 978-0-471-11590-8

The Authors

Prof. Wolfgang Schattke

Institute of Theoretical Physics and Astrophysics

Christian-Albrechts-University Kiel

Leibnizstr. 15

24118 Kiel

and

Ikerbasque Foundation/Donostia International

Physics Center

P. Manuel de Lardizabal 4

20018 Donostia – San Sebastián

Spain

 

Dr. Ricardo Díez Muiño

Centro de Física de Materiales CSIC-UPV/EHU

and

Donostia International Physics Center

P. Manuel de Lardizabal 4

20018 Donostia – San Sebastián

Spain

Preface

The reader might be inclined not to read the preface when starting with the book, but rather at a later time when laziness or leisure leaves time for it. In the worst case, the reader might come back to the preface angered by some lack of understanding or, quite the opposite, angered by reading some undergraduate simplistic explanations. By consulting the preface, the reader is asking the authors about their goals in writing the book.

The main goal is declared by the book’s title, nevertheless with some restrictions in mind.

This book sits somewhere between a textbook and a computer-code manual. Its level is perhaps too specialized for a textbook and too broad for a manual. On the positive side, its content includes rather practical advice on what is usually described only theoretically in a textbook, and it presents in more detail the physical understanding of what the manual of a code promises as a result. Dangling between these two extremes, the authors could not decide exactly where to place the book, so they took the risk of sharing the common ground of both.

Of course, one purpose was to make it more reader-friendly than a scientific paper or a review article. However, reviews such as that of Foulkes, Mitas, Needs, and Rajagopal represent invaluable sources for extended studies [1]. The path to fulfilling the purpose of a “friendly” book was guided only by the authors’ own experience and will differ from that of others. In other words, the authors neither attended courses on “How to Write Pedagogically Good Books” nor read such literature.

Of course, the reader does not expect the kind of manual that comes with a scientific code and reduces to “read the input file explanations and then go on.” Therefore, instead of presenting just one code that could cover the general field, the authors decided to break up the program into pieces, each of them devoted to one of a few leading examples.

A pedagogic, but time- and space-consuming, possibility would have been to develop the codes step by step and to let the reader run into the many traps of programming errors with exercises and solutions. That could fill many volumes. We tried not to expand the volume beyond acceptable limits, but to keep enough material that the reader could start from it and develop his or her own specific programs. Therefore, we gave up on the textbook idea.

When this book was first conceived, we thought of presenting the code collected from the PhD theses of Eckstein and Bahnsen, who completed their work within the group of one of the authors (WS). It soon became clear that we would easily run into one of the difficulties cited above which we wanted to avoid. Therefore, we decided to program from scratch. In this way, we were also free to present our own way of understanding the codes. In addition, we take full responsibility for errors, not attributing them to any other source.

However, the number of mistakes unveiled, and additionally of those still hidden, is embarrassing. Though some of the latter might be useful for tracking the path along which the code developed, they are not there on purpose, we assure that. We present the code as it developed after the usual testing and correcting. Our main programming style, if we had any, was to keep the code easy to change. This can be taken as an excuse for the lack of beauty and the lack of efficiency. Both aspects and perspectives will be evaluated differently by the community as times, compilers, and computational facilities change. To preserve the pleasure of writing, we must admit deficiencies for which we may now be blamed. We hope that the pleasure of eventually acquiring successful access to the quantum Monte Carlo scheme might outweigh the shortcomings from the reader’s point of view as well.

Thus, the book is not written to deliver an optimized program code. Such codes exist, and their development is left to another branch of science. Instead, we wanted to show some aspects of the vast and beautiful possibilities of the quantum Monte Carlo (QMC) method and to attract, and maybe seduce, the reader to devote his or her interest to this subject. We also want to touch on the various possible choices of computing schemes connected with the method. The material presented here is by no means complete, and the general scientific development is not treated completely either. Some approaches are tentative and should be improved, some are clumsy and might be smoothed. Some parts are still under discussion.

After these atmospheric remarks, let us summarize the main topics included in, and some of those excluded from, the content. We focus almost entirely on the variational quantum Monte Carlo (VQMC) scheme. The diffusion Monte Carlo (DMC) topic only covers a rather trivial example, the harmonic oscillator. There is another large branch of quantum Monte Carlo calculations for electron systems that we omit entirely here. It is based on the path integral with explicit fermion statistics. Relying on large computing resources, it is used in a model-like manner, for example for strong-coupling systems, but is rarely applied ab initio to systems of materials science.

VQMC is usually considered the poor man’s version of QMC, primarily because its theoretical concept is simple. One can refrain from the heavy, complex machinery hidden in the depths of quantum statistics and calculate only the energy expectation value as a multidimensional integral, which is minimized with respect to the parameters present in the wave function ansatz. The integral itself is computed with statistically chosen sampling points, and that is the stage at which statistics enters. In particular, one trusts the central limit theorem to guarantee the reliability of those points, which are drawn from a random walk. The problem lies in an adequate choice of a parameterized wave function. If there are many parameters, one additionally has to utilize regression methods to obtain the best choice of them.

In contrast, the complexity of DMC derives from the evolutionary scheme of a diffusion equation for the wave function, which should converge towards the true solution. Thus, one can dispense with an optimization procedure. Instead, one has to program the steps of the evolution, a combination of the separate actions of the kinetic and potential parts of the Hamiltonian on the actual wave function, to obtain the successive approximations. This combination, as well as the generation of the random walkers which mimic the wave function, is less trivial. So we thought it important to explain this theoretical background and to show how it works with easy examples. To console oneself with the role of the theoretically poor man when devoting oneself to VQMC, one could imagine that DMC merely replaces the optimization procedure of VQMC. One might also think that the physical insight lies in the choice of the functional shape of the many-body wave function rather than in obtaining its numerical representation as from DMC. Actually, in scientific calculations one uses VQMC as the starting point, and the rich man again becomes superior to the poor one.

Presenting mainly VQMC in this volume, we proceed from simple examples such as the hydrogen atom, which has a known solution, to complicated ones such as the lithium solid. Being an infinite system, the latter presents a number of additional theoretical and numerical aspects which go far beyond the scope of the first example. Several intermediate steps are therefore inserted and explained: the hydrogen molecule, to deal with a two-electron system; three electrons in the lithium atom; an arbitrary number of electrons enclosed in a simple box potential or assembled into an aggregate as a lithium cluster; and finally the three-dimensional periodic array of lithium atoms in a crystal. The two-electron system provides a first glimpse of particle symmetry in the wave function. The lithium atom stands for multiplicity and spin symmetry. In the box, plane waves are utilized instead of localized orbitals, which also gives an opportunity to present the pair-correlation function. With the cluster of lithium atoms we discuss the role of a physical boundary, which is important for the case of the infinite solid because of its shape-dependent energy terms that converge only slowly with system size. The theory for the solid suffers from such terms, resulting in an unacceptable slowing-down of convergence. Special remedies have to be discussed to this end, which complicate the program structure in addition to the routines already needed for a solid-state system.

The solid concludes the examples in the field of VQMC, followed by the subject of DMC. Some detailed derivations are found in an appendix at the end of the book. The references are meant as suggestions for details and a deeper understanding of the material rather than to exhaust the field or to honor contributions for their historical importance.

One of the authors (WS) feels especially and gratefully obliged to the Donostia International Physics Center (DIPC) at the University of the Basque Country (UPV/EHU) for its long-lasting and generous hospitality. The time there rendered the development of this book an exciting experience and pleasure. In addition, these activities provided the opportunity to work for a period within the Ikerbasque community, which complemented the broad range of interests where he was embedded.

The other author (RDM) would like to thank the Christian-Albrechts-Universität of Kiel for its warm hospitality during part of the writing of this book. RDM is also extremely grateful to WS for teaching him how to maneuver in the intricate world of quantum Monte Carlo, and is definitely indebted to him for his patience and generosity.

Wolfgang Schattke
Ricardo Díez Muiño

Kiel

Donostia – San Sebastian

2012

1

A First Monte Carlo Example

What will be found in this chapter: We introduce randomness in a general way and we show how to deal with it in terms of probabilities and statistics. To illustrate the concepts, we start the book with an example based on classical physics, namely classical particles moving in a box. It is an example much simpler than those that involve quantum mechanics, but that already demonstrates the power of statistical physics and the deep insight offered by averaging magnitudes over many degrees of freedom.

1.1 Energy of Interacting Classical Gas

There is an overwhelming number of places in life where one is confronted with statistics: from a random binary even/odd decision when picking petals off a flower to learn about the chances of being loved, to the refined probability distributions of health and age that life insurance companies use to estimate premiums [2].

Of course, the knowledge of how to treat ensembles of many elements is much older and already shows the first traces of statistical insight. In ancient Asia Minor, for instance, the Hittites, who were strong in book-keeping, registered with eager interest the quantity of barley for their beer. No doubt that, for this purpose, they used some measurement pot instead of counting the grains in the bucket. Masters of cuneiform writing as they were, they would still have had trouble writing out numbers huge enough to enumerate the grains.

Statistical aspects emerged even more clearly in the context of cryptography. In the early Islamic centuries, Arab scientists were very skilled in identifying the originality of texts attributed to Muhammad. To decode a text, the occurrences of single letters in a language can be counted. Such statistical analysis of languages was crucial for developing decoding algorithms able to solve outstanding problems of cryptology. For example, the Roman military encoded their messages by mixing the letters of the alphabet in a manner known only to the intended receiver, a procedure that resisted code-breaking for many centuries [3].

Mankind did not wait for the appearance of the Monte Carlo casinos to make its own statistical evaluations. By the way, those establishments for higher society provided numbers for the frequency of random events. These random numbers were taken not only by gamblers to predict the outcome of subsequent throws, but also by mathematicians to simulate any random process of unbiased events. The generation of random numbers by a mathematical algorithm is no trivial task. The quality of these so-called pseudorandom numbers strongly depends on the effort the algorithm invests. The need for random numbers is obvious, for example, in the numerical computation of high-dimensional integrals, where the complexity of the integrand limits the evaluation to a small number of points. Then, choosing randomly distributed points represents a clear advantage over an equidistant grid if the distribution of the weight of points, uniform or nonuniform, can be guessed from the integrand itself. With the increased performance of computers, many fields embarked on this concept of integration. In the simplest case, the distribution is chosen to be uniform, that is, nothing is known about it. In the main part of this book, however, we will show how the distribution of points in the multidimensional integrals appearing in physics can be extracted from physics itself and its known statistical laws.
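As a minimal illustration of this idea, the following Python sketch (our own example, not taken from the book’s program collection; the name mc_integrate is hypothetical) estimates a six-dimensional integral over the unit hypercube with uniformly distributed points:

```python
import random

def mc_integrate(f, dim, n_samples, seed=0):
    """Uniform-sampling Monte Carlo estimate of the integral of f
    over the unit hypercube [0, 1]^dim (whose volume is 1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = [rng.random() for _ in range(dim)]  # one random point per sample
        total += f(x)
    return total / n_samples
```

For instance, the integral of (x1 + … + x6)² over [0, 1]⁶ equals Var(Σx) + (E[Σx])² = 6/12 + 9 = 9.5; with a few hundred thousand samples the estimate agrees to two or three digits, the statistical error decreasing with the number of samples M as 1/√M.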

1.1.1 Classical Many-Particle Statistics and Some Thermodynamics

In this starting section we consider the relatively simple case of classical particles which move in a box and are described by their statistical behavior. This example avoids the complexity of quantum theory but shows already the statistical aspect of the general method, namely a scheme to average over many degrees of freedom.

Think of a rock concert in a huge hall filled with thousands of enthusiastic, rapidly moving and hopping dancers. A significant pressure is exerted by the people onto the barrier of the stage. This pressure can be observed in how the dancers are pushed up onto the stage and how they are reflected in jumping back. And it is hot. So the hall, closed by doors and stage, is an example of a real gas with pressure and temperature, except that the particles are able to think (but who knows?).

The description of the behavior of many particles in terms of single-particle quantities such as exact positions or velocities quickly drops out of any feasible treatment as the particle number increases. The solution has been known for more than one hundred years (Boltzmann, Maxwell); it rests on practical grounds, since nobody would be interested in all the details anyway. Nobody except the person himself/herself cares about the very elaborate moves another dancer in the concert hall is performing.

Moving now to physics, for roughly 6 × 10²³ gas atoms per mole it would be nearly impossible to identify all their positions, enumerate them, make a table, and communicate that information to someone else. Instead, one realizes that the three quantities volume V (pressure p), heat Q (temperature T), and particle number N (chemical potential μ), or the bracketed ones, already provide a good description of the gas for a wide range of applications. Remember the historical Magdeburg half-spheres, where even the strength of two teams of horses pulling in opposite directions was not enough to separate them. Or consider the weather forecast, by which people are more or less strongly affected: statements on the next day’s weather are achieved by estimating the evolution of those thermodynamic quantities. The reason that such a description works (an optimistic view) is founded in statistical mechanics. The deviations from predictions, which call the reliability of weather forecasts into question, are influenced by turbulences whose treatment is formidable and involves statistical details not covered by the above-mentioned averaged quantities of thermodynamics. Nevertheless, the latter already yield a weather forecast we thankfully acknowledge.

Only classical particles will be involved in the following statistics, leaving aside quantum properties, which will be considered later. The equation of state of an ideal gas was significantly generalized by the van der Waals equation of the so-called real gases,

(1.1) (p + a/V²)(V − b) = N kB T

with Boltzmann constant kB. This equation is equivalent to that of an ideal gas with the volume reduced by the residual volume b of the molecular constituents and the pressure corrected by the inner pressure a/V², which reflects the particles’ mutual interaction at the container wall. The 1/V² law and the constants a and b are either determined empirically or derived theoretically from statistics.

The subsequent considerations also serve as a test of the main statistical tool used in the variational quantum Monte Carlo (QMC) method, besides their general common aspect. In fact, this kind of statistical investigation predates the QMC development, showing at least their common roots. To be more specific, we consider the derivation of the 1/V² law for the real gas. The idea behind introducing the Monte Carlo method in statistical mechanics is the multidimensional integration in statistical averages for many particles. For those interested in the relevant equations connecting thermodynamics and statistical mechanics for this example, we give a short summary.

The key quantity is the free energy F(T, V) = – kB T ln Z(T, V) obtained from the classical partition function,

(1.2) Z(T, V) = 1/(N! h^(3N)) ∫ d³p1 … d³pN d³r1 … d³rN exp{−β [Σi pi²/(2m) + Σi<j vij]}

(1.3) Z(T, V) = (1/N!) (2π m kB T/h²)^(3N/2) Zpot(T, V)

(1.4) Zpot(T, V) = ∫ d³r1 … d³rN exp(−β Σi<j vij)

where β = 1/(kB T) is related to the absolute temperature T and vij is the potential between two particles. The integration over the momenta pi is carried out above in closed form. The remaining integral, abbreviated by Zpot, has to be calculated numerically once a suitable interparticle potential is fixed. Before proceeding we state a few thermodynamic relations to connect the van der Waals equation (1.1) with the partition function. The partial derivative of the free energy with respect to volume, with temperature kept constant, yields the pressure, which is equated to that of the van der Waals equation of state (1.1),

(1.5) p = −(∂F/∂V)T

(1.6) p = kB T ∂ ln Zpot(T, V)/∂V

(1.7) kB T ∂ ln Zpot/∂V = N kB T/(V − b) − a/V²

Expression (1.6) will be transformed into a form more suitable for a general Monte Carlo integration. To this end we differentiate the logarithm of the partition function, ln Zpot, with respect to −β, which yields the average of the potential energy weighted by the Boltzmann probability density:

(1.8) Upot ≡ ⟨Σi<j vij⟩ = −∂ ln Zpot/∂β

(1.9) Upot = ∫ d³r1 … d³rN PBoltzmann(r1, …, rN) Σi<j vij,  with PBoltzmann = exp(−β Σi<j vij)/Zpot(T, V)

Conversely, we can reconstruct the partition function, and from it the pressure, from the average potential energy by integration, see (1.5) and (1.8),

(1.10) ln Zpot(β, V) = N ln V − ∫_0^β Upot(β′, V) dβ′

(1.11) p = kB T ∂/∂V [N ln V − ∫_0^β Upot(β′, V) dβ′]

As a result, the relation between the pressure and the intended MC integration is given by an averaged potential energy Upot, see (1.9),

(1.12) p = N kB T/V − kB T ∂/∂V ∫_0^β Upot(β′, V) dβ′

Without special notation, and to simplify the writing, we always keep the remaining variables constant in a partial differentiation or integration. Integrals which constitute an average, as in (1.8), are within the scope of the Monte Carlo method we will predominantly use later. Here, it enables us to give an estimate of the equation of state.

The potential energy will be fixed as a screened repulsive Coulomb potential, the so-called Yukawa potential. This is a convenient example for our main purpose, which deals with electrons. Think here of a charged gas, though a real uncharged gas can be modeled similarly in the repulsive regime. For simplicity, we omit the hard-core repulsion given by the finite extent of the molecules. As a consequence, the constant b in the van der Waals equation (1.1) must be set equal to zero. With screening length λ and potential strength v0 the Yukawa potential reads as

(1.13) vij = v0 exp(−|ri − rj|/λ)/|ri − rj|

using boldface type for three-dimensional vectors, such as the position vector ri of particle i. The 1/V² volume dependence of the van der Waals pressure term is only the next-to-lowest-order approximation to the ideal gas equation in a 1/V expansion, the so-called virial expansion. Our MC simulation will display not only this term but the whole correction, being exact within this potential model and within the statistical error margin. More realistic calculations are based on the Lennard-Jones potential model and yield analytical results through an Ursell–Mayer cluster expansion. This is beyond the scope of this text, though the interested reader could run a MC simulation with a more suitable interparticle potential.
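For later reference, the Yukawa pair potential (1.13) and the pair-summed potential energy can be sketched in a few lines of Python (illustrative code of ours; the book’s programs are written in Fortran, and the names yukawa and total_potential are hypothetical):

```python
import math

def yukawa(r, v0=1.0, lam=1.0):
    """Screened repulsive Coulomb (Yukawa) pair potential v0 * exp(-r/lam) / r."""
    return v0 * math.exp(-r / lam) / r

def total_potential(positions, v0=1.0, lam=1.0):
    """Potential energy summed over all (1/2) N (N - 1) pairs i < j."""
    n = len(positions)
    u = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = math.dist(positions[i], positions[j])  # interparticle distance
            u += yukawa(r, v0, lam)
    return u
```

In the limit of a very large screening length λ the potential reduces to the bare Coulomb form v0/r, which is the unscreened regime discussed below.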

With b = 0 we obtain from (1.1) and (1.12), finally integrating with respect to V,

(1.14) a/V² = kB T ∂/∂V ∫_0^β Upot(β′, V) dβ′

(1.15) a/V = −kB T ∫_0^β Upot(β′, V) dβ′

where the integration constant vanishes at infinite volume.

We can get some approximate insight from the analytical point of view. At high temperature, β → 0, the multiple space integrations in (1.8) can be carried out exactly and reduce to volume averages over the potential energy of two particles. This is approximated by averaging (1/r12) exp(−r12/λ) over a spherical volume V = L³ = (4π/3)R³ of radius R for a cubic box of edge length L,

(1.16) ⟨v⟩ = (1/V) ∫V d³r v0 exp(−r/λ)/r = (4π v0 λ²/V)[1 − exp(−R/λ)(1 + R/λ)]

(1.17) ⟨v⟩ ≈ 4π v0 λ²/V  for R ≫ λ

(1.18) ⟨v⟩ ≈ 3v0/(2R)  for R ≪ λ

The former case considers a screened interaction in the limit of a large volume. The latter applies to small volumes. Alternatively, if one likes to investigate the thermodynamic limit of infinite volume without screening, the screening has to be switched off first (λ → ∞) before proceeding to the limit of large volume. The former gives the van der Waals law with

(1.19) a = −(1/2) N(N − 1) 4π v0 λ²

where a is determined by the constants of the interaction potential. The factor (1/2)N(N − 1) counts the number of pair terms in the potential energy. It is compensated in the equation of state by the square of the volume, which is N times the volume per particle. Chemists prefer the volume per mole instead, replacing N kB by the gas constant R, see (1.1).
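The two limiting cases of the sphere average can be checked numerically. The Python sketch below is ours (the closed form in sphere_avg_exact is our evaluation of the elementary radial integral, stated here as an assumption rather than quoted from the book): it compares a midpoint quadrature of the radial integral with that closed form.

```python
import math

def sphere_avg_yukawa(R, v0=1.0, lam=1.0, n=200_000):
    """Volume average (1/V) * Int_V v0 exp(-r/lam)/r d^3r over a sphere of
    radius R, via midpoint quadrature of (4 pi v0 / V) * Int_0^R r exp(-r/lam) dr."""
    h = R / n
    s = sum((i + 0.5) * h * math.exp(-(i + 0.5) * h / lam) for i in range(n))
    return 3.0 * v0 * s * h / R**3

def sphere_avg_exact(R, v0=1.0, lam=1.0):
    """Assumed closed form of the same radial integral."""
    x = R / lam
    return 3.0 * v0 * lam**2 / R**3 * (1.0 - math.exp(-x) * (1.0 + x))
```

Evaluating the closed form in the two regimes reproduces the limits discussed above: for R much larger than λ it tends to 4π v0 λ²/V (screened case, large volume), and for R much smaller than λ to 3v0/(2R) (essentially unscreened, small volume).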

In view of the box geometry, the approximation of (1.18) is too crude to compare with the subsequent exact numerical results. We need the exact asymptotic behavior. In Appendix A.1 the behavior for L/λ ≪ 1, in the geometry used in the MC calculations, is evaluated. It yields a single number, 0.2353, which determines Upot, see (1.12), as Upot = N(N − 1)(4v0/L) × 0.2353.

In order to use comfortable quantities for the MC simulation, instead of V, T, and Upot we define dimensionless quantities

(1.20) α = β v0/λ

(1.21) L̃ = L/λ

(1.22) Ũpot = (λ/v0) Upot

which leads with (1.14) to

(1.23) c01_image001.jpg

The above limits of Upot control the numerical MC results obtained for the right-hand side of (1.23) in the case of high temperature, specifically

(1.24) c01_image001.jpg

(1.25) c01_image001.jpg

for T → ∞.

In the following we present and discuss the main parts of the program “therm95.f”, which uses the MC method to calculate the average energy of a system of particles confined in a cube at fixed temperature T, volume V, and particle number. The complete code is electronically available in the collection of programs.

A property is used which states that the statistical average of a quantity with a given probability density can be evaluated by summing a sequence of terms obtained by sequentially choosing points in configuration space according to their probability distribution. These terms are the values of that quantity at the chosen points. Because of the sequential choice, the terms can be considered as representing a fictitious time evolution of the quantity. The average over the statistical ensemble then arises as an average over evolution time, where ergodicity would guarantee the equality of both averages.

Accordingly, the calculation of the average potential energy as specified in (1.9), (1.13) and (1.22) proceeds by simulating a random walk which samples the configuration space of the particles in a sequence of M MC steps, where in turn the position of every individual particle is subjected to a random change according to the Boltzmann probability. The “change according to probability” is achieved by proposing a step of random length and direction within a maximum step length and accepting it if the probability of the new position is larger than or equal to that of the old one. It is also accepted in the opposite case, that is, when the ratio between the new and old probability is smaller than one, if a one-dimensional random number drawn from a uniform distribution between zero and one lies below that ratio. Otherwise the proposed position is rejected and the particle stays at its old position. This reflects the true frequency of acceptances given by the probability distribution. The potential energy is simultaneously summed up, adding its value at the actual position of each step. The sum divided by the number M of steps represents at each stage a realization of a random energy variable, namely the normalized sum of sequential individual energies. According to the central limit theorem of probability theory such a sum converges, under very general premises, to a Gaussian distributed variable with a mean equal to the true average and a variance decreasing as 1/M to zero. Written as formulae, the acceptance probability pstep for a proposed step is obtained from the Boltzmann probability density of (1.9), PBoltzmann, via

(1.26) pstep = min{1, PBoltzmann(new)/PBoltzmann(old)}

(1.27) PBoltzmann(new)/PBoltzmann(old) = exp[−β (Σi<j vij(new) − Σi<j vij(old))]

The factor Zpot(T, V)⁻¹ represents the normalization of the probability. But we do not need to know it. It is a nice property of the so-called Metropolis algorithm described above that the procedure automatically guarantees normalization. For instance, if instead of summing the energy we sum an arbitrary constant, say c, then the sum divided by M is equal to c, which demonstrates automatic normalization.
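The acceptance rule of the Metropolis algorithm can be condensed into a few lines. The following Python sketch is ours, for illustration only (the book’s programs are Fortran, and metropolis_accept is a hypothetical name); it takes the energy change of a proposed step and returns the accept/reject decision:

```python
import math
import random

def metropolis_accept(delta_u, beta, rng=random):
    """Metropolis rule: accept a proposed step with probability
    min(1, exp(-beta * delta_u)), delta_u being the potential-energy change."""
    if delta_u <= 0.0:
        return True  # new position at least as probable: always accept
    # otherwise accept with probability equal to the Boltzmann ratio
    return rng.random() < math.exp(-beta * delta_u)
```

Note that only the energy difference enters; the unknown normalization Zpot cancels in the ratio, which is exactly why the algorithm works without it.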

The need for random numbers occurs at two stages. First, one has to decide upon the direction and length of a step in position space. To this end one runs over the three Cartesian directions and chooses every coordinate increment as a fraction of a maximum step length by drawing a random number uniformly distributed between zero and one. Second, one has to decide upon acceptance of a MC step in the case of a smaller probability at the new particle position.

Throwing a die and letting statistical events decide one’s fate has been common to mankind from its earliest stages. From written testimonies of antiquity down to relics in the tombs of ancient Egypt, or of any region including the Far East, the dice always had a similar appearance, except that some were loaded for cheating. Thus, some statistical concepts must have been known since early times, though the art of gambling experienced a refinement in our age with the construction of casinos such as that of Monte Carlo. In casinos a large reservoir of random numbers was developed, which could be utilized for statistical investigations instead of throwing the dice. Drawing random numbers has become even easier with the use of computers. We will present details of random number generators in a subsequent section; here we use, without discussion, the built-in generator of the g95 Fortran compiler. The relevant code is contained in the “module random” of the program.

A dumb choice of starting positions, as might occur for instance with an extremely improbable configuration, could influence the subsequent positions for a number of steps and deteriorate the value of the sampled energy. To become independent of the initial conditions, one applies a first MC run without counting the energy, our so-called prerun, which otherwise has the same structure as the main run. Furthermore, we skip the formal specification part of the program as well as both outer loops, which run over the different values of two external parameters, the temperature parameter ALPHA and the cubic length parameter LENGTH. Programming of the output data does not need explanation either. Dots denote program parts left out of this presentation.

[Listing: declarations and main Monte Carlo loop of “therm95.f”; the complete source is available in the electronic program collection.]

Modules “random”, “position”, and “output” are used, which specify the random number generator, the updating of the particle positions and their potential energy, and the output collection. The values for the number of particles, number of steps of the prerun and of the main MC run are chosen as NE = 100, MCPRE = 10 000, MCMAX = 100 000 in this example. The maximum step width STEPMAX is taken as somewhat less than the actual length parameter LENGTH, which denotes the edge length of the container cube. A random number is calculated by subroutine GENRAN(rannumb). The initial particle positions RE(1:3,1:NE) are randomly chosen.

The arrays DIST(1:4,1:NE,1:NE) and DISTNEW(1:4,1:NE,1:NE) both store the interparticle distance vectors, the former for the old configuration and the latter for the updated one; the fourth component stores the modulus of the distance vector. At start both are identical. VNEW(1:NE) and VDIF(1:NE) store, for each particle in the field of the others, the updated potential and the difference between the updated and nonupdated values, see (1.13). Initializing is completed with the initial setting of the counter MCOUNT, which counts each accepted MC step, and with the setting of the energy variables, that is, the actual potential energy LOCEN, its average ERWEN, and its variance VAREN.

Here, we describe only the main MC run which is started after the prerun, because the prerun is essentially the main run without the calculation of observables. The MC steps are counted by IMC, and for every step a loop over all NE particles is carried out with IE as the actual particle index. The update process begins by proposing a jump of the actual particle to a new position, which is obtained by increasing its three coordinates by randomly chosen increments within the interval ±STEPMAX/2. RNEU(1:3,1:NE) stores the new position of that particle. Subsequently, the constraint enclosing the particles in a cubic box is applied by randomly back-scattering the particle if the new position crosses the container walls. Alternatively, one could think of applying reflecting boundary conditions or requiring a strict cut-off at the surface. Reflecting walls could lead to nonergodic runs, like the behavior of a billiard ball enclosed between ideally reflecting walls, a situation which comes close to the case of a very dilute gas. It is left to the reader to test how this changes our results. We will come back to this point in Section 1.1.2, though a test with strict cut-off on the case of α = 0.5 in Figure 1.1 does not change the graph.
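The jump proposal with the wall constraint can be sketched as below. Interpreting "randomly back scattering" as redrawing the increment until the proposed point stays inside the cube is an assumption made for this illustration; the book's Fortran listing may realize the constraint differently.

```python
import random

def propose_move(r_old, stepmax, length):
    """Propose a jump: each coordinate is shifted by a random increment in
    (-stepmax/2, +stepmax/2).  If the new point crosses a container wall,
    the increment is redrawn ("random back scattering") until the point
    lies inside the cube [0, length]^3."""
    while True:
        r_new = [x + (random.random() - 0.5) * stepmax for x in r_old]
        if all(0.0 <= x <= length for x in r_new):
            return r_new
```

For a particle well inside the box the loop almost always terminates on the first draw, so the redraw costs essentially nothing except near the walls.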

To calculate the ratio QUOT between the probability after the jump and that before it, subroutine JASEXP is called, which belongs to the module “position” and is explained in detail later in the context of that module. Here, it suffices to state that the new potential VNEW(IE) and its difference VDIFF(IE) with respect to the old one are obtained by this routine. As only the position of the actual particle IE is changed at the actual jump, only the interparticle potential contributions between the actual particle and all other particles are involved. The factor ALPHA in the exponent stands for the temperature, see (1.13) and (1.20). Then, as explained in the theory, the ratio QUOT is compared with 1 and the logical variable MCSCHRITT is set to “true” if QUOT is larger than or equal to 1. Otherwise MCSCHRITT gets the value “true” in those cases where a random number drawn from [0,1] lies below QUOT and “false” otherwise.

In the case that MCSCHRITT is “true”, the position variable RE(3,NE) is updated to its new value and the acceptance counter is incremented. Otherwise the variable RNEU(3,NE) holding the new position is reset to the old position. The potential energy “ep” of the actual particle IE is given the actual value according to the new or old position by the last call to JASEXP, and added to the energy sum of the previous particles. For the energy per particle we divide by NE and update the average energy ERWEN and its variance VAREN. The formulae for the latter quantities are addressed in the section on statistical properties. We have reached the end of the MC loop with label 100. The remaining part, not cited here, concerns the output. Note that we reserve channel 35 for control output in a file with ending MC.OUT. Channels 38 and 39 direct the average energy ERWEN and its variance VAREN to the files therm_erw_yukawa.dat and therm_var_yukawa.dat, respectively, for appending the results of multiple MC runs.
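Since the exact formulae for ERWEN and VAREN are deferred to the statistics section, here is one standard way to update a running average and variance sample by sample (Welford's algorithm); treat it as an illustration, not necessarily the book's formula.

```python
def update_stats(n, x, mean, m2):
    """Running update of the average (ERWEN) and variance (VAREN) after
    the n-th sample x, without storing the whole sample history."""
    n += 1
    delta = x - mean
    mean += delta / n
    m2 += delta * (x - mean)
    var = m2 / n if n > 1 else 0.0
    return n, mean, m2, var

# usage: feed the energy per particle of each MC step
n = mean = m2 = var = 0.0
for x in [1.0, 2.0, 3.0]:
    n, mean, m2, var = update_stats(n, x, mean, m2)
```

The one-pass form avoids the catastrophic cancellation that the naive ⟨x²⟩ − ⟨x⟩² formula can suffer over 10⁵ or more steps.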

It remains to discuss the subroutine JASEXP contained in the module “position” as found below.

Figure 1.1 Van der Waals pressure correction to the ideal gas equation of state according to (1.22) in logarithmic representation, u(α, v) vs. L = v^(1/3). Note that here the edge length L of the cubic container volume is already scaled by the screening length λ of the interaction potential. Thick lines without symbols show the analytical results 2π(N − 1)/L^3 and 4(N − 1) · 0.2353/L for high temperature α → 0 at the asymptotic limits v → ∞ and v → 0, respectively; dots connected by thin solid lines refer to MC simulations with α = 0.5, showing separate calculation sets with different maximum step widths STEPMAX = 0.001, 0.01, 0.1, and 1.0 for lengths L between 0.1–1, 1–10, 10–50, and above 50, respectively, calculated at high accuracy with MC steps equal to 10^6 times the number of particles N. Lower accuracy with only 10^5 times the number of particles is shown by open squares for comparison; lines with star symbols refer to different values of α as indicated and lower accuracy (10^5); the number of particles is N = 100 in all cases; the inset shows an extract for very small volume.

[Program listing (image in the original): subroutine JASEXP of module “position”]

The interparticle distances refer to the actual particle IE: DISTNEW(3,IE,NE) holds their values with respect to the new position and DIST(3,IE,NE) those with respect to the old one. With these distances the new potential energy VN(IE) and its difference VD(IE) from the old potential energy are determined according to (1.13) and transferred to the calling program. Note that the potential energy per particle must carry a factor 1/2, which is multiplied in the calling program. In contrast, the probability ratio QUOT contains the total potential energy, which is twice that value because the particle IE appears twice in the particles’ double sum.
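What JASEXP computes for the actual particle can be sketched as follows. The screened pair potential exp(−r/λ)/r used here is an assumption for illustration (the text only states that the interaction is screened at the length λ); only the pairs involving particle IE enter, as explained above.

```python
import math

def pair_pot(r, lam=1.0):
    """Screened pair potential exp(-r/lam)/r -- assumed form, for illustration."""
    return math.exp(-r / lam) / r

def jasexp_sketch(ie, RE, r_new, lam=1.0):
    """Return VN(IE) and VD(IE): the potential of particle ie in the field of
    all others at the new position, and its difference from the old value."""
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    others = [j for j in range(len(RE)) if j != ie]
    v_old = sum(pair_pot(dist(RE[ie], RE[j]), lam) for j in others)
    v_new = sum(pair_pot(dist(r_new, RE[j]), lam) for j in others)
    return v_new, v_new - v_old

# The energy per particle later carries the factor 1/2 (each pair appears twice
# in the double sum), while QUOT is built from the full difference VD(IE).
```

Only O(NE) pair terms are touched per jump, which is what makes the single-particle update cheap compared with recomputing the full double sum.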

Figure 1.1 displays over several length decades how the exact result for high temperatures is approached by the MC simulation. The edge length of the volume is scaled by the screening length of the interaction potential, such that at λ the transition occurs between the unscreened potential at small distances and the fully screened potential at large distances. The plot shows a rapid convergence towards the van der Waals 1/L^3 dependence at large L. The temperature dependence is shown by the variation with the parameter α ∝ 1/T, being appreciable at large volume and disappearing at small volume. The integral in (1.23) becomes simply u in the latter case and can be roughly estimated as an average of u over some values of α as displayed in the general plots. Remember that both asymptotic lines are valid only for large temperature, α → 0. Towards smaller volume the curve bends from a linear logarithmic behavior with slope −3 to a slope of −0.3 at the left end of the plot.

The module “random” is discussed in Appendix A.2, which contains a few random generators and the notation. Here, we state some values to show that the spread among different random generators lies within the error margin of the statistical accuracy. For instance, if we use the “random_number(rannumb)” subroutine of F90/95, see Table 1.1, we obtain u = 5.1789 × 10^−4, σ² = 6.4 × 10^−7 for L = 100 and u = 0.41139, σ² = 8.8 × 10^−4 for L = 10 with α = 0.5, MCMAX = 10^6, NE = 100 in both cases. An estimate by twice the standard deviation of the mean, 2σ/√MCMAX, which is 0.016 × 10^−4 for the former and 0.000 06 for the latter, yields for the lengths LENGTH = 1 and 100 a 95% probability that this and the other three generators are equivalent. For LENGTH = 10 this hypothesis cannot be confirmed from the standard deviation.
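The quoted confidence widths follow from the standard error of the mean; a quick check with the values stated above (assuming the error estimate is twice the standard deviation of the mean):

```python
import math

MCMAX = 10**6
# L = 100: sigma^2 = 6.4e-7  ->  2*sigma/sqrt(MCMAX) = 0.016e-4
half_L100 = 2 * math.sqrt(6.4e-7 / MCMAX)
# L = 10:  sigma^2 = 8.8e-4  ->  2*sigma/sqrt(MCMAX) = 0.000 06
half_L10 = 2 * math.sqrt(8.8e-4 / MCMAX)
```

Both numbers reproduce the widths cited in the text, which supports reading the error bar as a 95% (two-sigma) confidence interval of the mean.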

The behavior of the van der Waals pressure correction depends, of course, on the details of the chosen interaction potential. We are not much interested here in the specific values for a real gas, which could easily be obtained with this procedure once the interaction potential is properly chosen. We leave this task to the reader. Instead, we proceed to discuss some computational properties, for example an averaging procedure for a probability measure of this specific form. This becomes of the highest importance in Monte Carlo simulations of the ground state of quantum systems. It is exactly this kind of Boltzmann-like probability, with a Coulomb repulsion in the exponent, that introduces correlation into the many-body wave function, generalizing the pure Slater determinants. This part of the probability density, called the Jastrow factor, multiplies the one-particle determinants of Hartree–Fock theory.

Table 1.1 Values of average energy u and variance σ2 for two box sizes L and for four random number generators.

[Table 1.1 data (image in the original)]

With this example we learned a number of facts:

We will keep these statements in mind for those cases to follow where we have no easy possibilities at hand to control the reliability of the MC simulations. This is the majority of cases.

1.1.2 How to Sample the Particle Density?

According to the probability interpretation of the Boltzmann factor we can write for the average particle density,

(1.28) \rho(\mathbf{r}) = \left\langle \sum_{i=1}^{N} \delta(\mathbf{r} - \mathbf{R}_i) \right\rangle = \frac{\int d^3R_1 \cdots d^3R_N \, \sum_{i=1}^{N} \delta(\mathbf{r} - \mathbf{R}_i)\, e^{-\alpha V(\mathbf{R}_1,\ldots,\mathbf{R}_N)}}{\int d^3R_1 \cdots d^3R_N \, e^{-\alpha V(\mathbf{R}_1,\ldots,\mathbf{R}_N)}}

In the program we count at every step the number of particles which fall into each cell of a mesh into which the whole cube is divided. Again the integral is calculated numerically on a discrete support weighted with the Boltzmann probability, which is obtained in the course of a random walk executed with this same probability. Instead of a single random variable, such as the energy, here we have to calculate the filling of each of, say, NDIV³ cells by particles at each of the MCMAX MC steps. The sampling can be illustrated by displaying the particle density at various stages of convergence. In particular, in the case of the real gas, physical differences should become apparent when changing the value of the temperature. The program uses the logical variable SWIRHO and samples the density on a 20 × 20 × 20 grid (NDIV = 21) with meshes of width 1/20 of the cubic edge length LENGTH if SWIRHO is true. The sampling should be switched off in a normal run, because these inner loops consume significant runtime. The density is evaluated by a call to subroutine DENSITY(RHO), inserted in module “position”, which yields the density by counting the occurrences in which the position RE attained by a particle falls into a specific mesh; see the program part below.
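The cell-counting step can be sketched as follows; this is an illustrative Python version of the binning performed by DENSITY(RHO), with 20 cells per direction as in the text (the clamping of boundary points to the last cell is an assumption of this sketch).

```python
def density_count(RE, length, ncell=20):
    """Accumulate cell occupation numbers for one MC step: the cube of edge
    `length` is divided into ncell^3 meshes of width length/ncell, and each
    particle increments the counter of the cell its position falls into."""
    rho = [[[0] * ncell for _ in range(ncell)] for _ in range(ncell)]
    for r in RE:
        # map each coordinate to a cell index; clamp points on the upper wall
        i, j, k = (min(int(x / length * ncell), ncell - 1) for x in r)
        rho[i][j][k] += 1
    return rho
```

Summed over MCMAX steps and normalized by MCMAX and the cell volume, these counters approximate the average density of (1.28), which is why the inner triple loop dominates the runtime when SWIRHO is switched on.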

[Program listing (image in the original): subroutine DENSITY of module “position”]