Cover Page

Table of Contents

Title Page




Chapter 1: Mathematical Considerations

1.1 Stochastic Perturbation Technique Basis

1.2 Least-Squares Technique Description

1.3 Time Series Analysis

Chapter 2: The Stochastic Finite Element Method

2.1 Governing Equations and Variational Formulations

2.2 Stochastic Finite Element Method Equations

2.3 Computational Illustrations

Chapter 3: Stochastic Boundary Element Method

3.1 Deterministic Formulation of the Boundary Element Method

3.2 Stochastic Generalized Perturbation Approach to the BEM

3.3 The Response Function Method in the SBEM Equations

3.4 Computational Experiments

Chapter 4: The Stochastic Finite Difference Method

4.1 Analysis of the Unidirectional Problems

4.2 Analysis of Boundary Value Problems on 2D Grids

Chapter 5: Homogenization Problem

5.1 Composite Material Model

5.2 Statement of the Problem and Basic Equations

5.3 Computational Implementation

5.4 Numerical Experiments

Chapter 6: Concluding Remarks






The author would like to acknowledge the financial support of the Polish Ministry of Science and Higher Education in Warsaw under Research Grant No. 519-386-636, entitled “Computer modeling of the aging processes using the stochastic perturbation method,” transferred recently to the Polish National Science Centre in Cracow, Poland. This grant enabled most of the research findings contained in this book. Its final shape is owed to a professor's grant from the Rector of the Technical University of Łódź during 2011. Undoubtedly, my PhD students, with their curiosity, engagement in computer work, and research questions, helped me to prepare the numerical illustrations provided in the chapter focused on the stochastic finite element method.


Uncertainty and stochasticity accompany our lives from the very beginning and remain a matter of interest, guesses, and predictions made by mathematicians, economists, and fortune tellers alike. The phenomena concerned may be as dramatic as car or airplane accidents, sudden weather changes, stock price fluctuations, diseases, and mortality in larger populations. All these phenomena and processes, although completely unpredictable for most people, have mathematical models that explain some trends and allow limited prognosis. There is also a philosophical question, taken up by various famous scientists, of whether the universe is deterministic in nature with some marginal stochastic noise (a kind of chaos) or whether, in contrast, everything is uncertain to a greater or lesser degree.

In civil engineering we observe the most dangerous aspects resulting from earthquakes, tornadoes, ice covers, and extensive rainfall. These are cases in which stochastic fluctuations may be treated as fully unpredictable, usually having no mean (expected) value or quantifiable coefficient of variation, so we are unable to provide any specific computer simulation. Let us recall that engineering codes usually apply the Poisson process to model huge catastrophic failures, but this requires extended and reliable statistics, unavailable in many countries and sometimes even non-existent because of the pace of technological progress. On a smaller scale (counting economic disasters and their possible consequences) we notice almost every day wind-blow variations and their results [158], accidental loading of cars and railways on bridges during rush hours, statistical strength properties of building materials, corrosion, interface cracks, volumetric structural defects, and a number of geometrical imperfections in structural engineering [142]. These are all included in mathematical and computational models, with the basic statistics coming from observations, engineering experience and, first of all, experimental verification. We need to assume that our design parameters have some distribution function, and the most practical assumption is that they are Gaussian. This reflects the Central Limit Theorem, which states that a mixture of different random variables tends to this particular distribution as their total number tends to infinity.

We are not interested in analyses and predictions without expectations in this book; the computational analysis is strictly addressed to engineering and scientific problems with perfectly known expected values and standard deviations, and to cases where the initial random dispersion is Gaussian or may be approximated by a Gaussian distribution with relatively small modeling error. In exceptional circumstances it is possible to consider lognormal distributions, as they have recursive equations for their higher-order probabilistic moments. From the probabilistic point of view we provide analysis of up to the fourth central probabilistic moments of state functions like deformations, stresses, temperatures, and eigenfrequencies, because it is then possible to verify whether these functions really have Gaussian distributions or not. The stochastic perturbation technique of course has a non-statistical character, so we cannot engage any statistical hypothesis and are interested instead in quantification of the resulting skewness and kurtosis. Recognition of a Gaussian output probability density function (PDF) will simplify further numerical experiments of a similar character, since such PDFs are uniquely defined by their first two moments and the numerical determination of the higher moments may then be postponed.

From a historical point of view, the first contribution to probability theory was made by the Italian mathematician Gerolamo Cardano (Hieronymus Cardanus) in the first part of his book entitled Philologica, Logica, Moralia, published more than 100 years after he finished it. Like many later elaborations, it was devoted to the probability of winning in games of chance and found some continuation and extension in the work of Christiaan Huygens. It was summarized and published in London, in 1714, under the self-explanatory title The Value of All Chances in Games of Fortune; Cards, Dice, Wagers, Lotteries & C. Mathematically Demonstrated. The main objective at that time was to study the discrete nature of random events and combinatorics, as also documented by the pioneering works of Blaise Pascal and Pierre de Fermat. One of the most amazing facts joining probability theory with the world of analytical continuous functions is that the widely known PDF named after the German mathematician Carl Friedrich Gauss was in fact elaborated by Abraham de Moivre, most famous for his formula in complex number theory. The beginnings of modern probability theory date to the 1930s and are connected with the axioms proposed by Andrei Kolmogorov (exactly 200 years after the normal distribution was introduced by de Moivre). However, the main engine of this branch of mathematics was, as in the previous century, mechanics and, particularly, quantum mechanics, based on the statistical and unpredictable nature observed on the molecular scale, especially for gases. Studies slowly expanded to other media exhibiting strong statistical aspects in laboratory experiments performed in long repeatable series. There is no doubt today that a second milestone was the technical development of computer machinery and computer science, enabling large statistical simulations.

Probabilistic methods in engineering and applied sciences follow mathematical equations and methods [158]; however, the recent rapid progress of computers and the relevant numerical techniques has brought some new perspectives, a little removed from the purely mathematical point of view. Historically, it is necessary to mention a variety of mathematical methods, of which undoubtedly the oldest is based on the straightforward evaluation of the probabilistic moments of the resulting analytical functions on the basis of the moments of some input parameters. This can be done using the integral definitions of the moments or using specific algebraic properties of the probabilistic moments themselves; similar considerations may be provided for time series defining some random time fluctuations of engineering systems and populations, as well as for related simple stochastic processes. It is possible, of course, to provide analytical calculations and justification that some structure or system gives a stationary (or non-stationary) stochastic response. Following the progress of mathematical disciplines after classical probability theory, the beginning of the twentieth century saw the elaboration of the theory of stochastic differential equations and their solutions for specific cases having applications in non-stationary technical processes like structural vibrations and signal analysis [158].

Nowadays these methods have brand new applications thanks to the enormous expansion of computer algebra systems, where analytical and visualization tools give new opportunities in conjunction with old, well-established mathematical theories. Since these systems work like neural networks, we are able to perform statistical reasoning and decision-making based on verification of the various statistical hypotheses implemented. The successive expansion of uncertainty analysis has continued thanks to computers, which are important for the analysis of large data sets and, naturally, for additional statistical estimators. The first of the computer-based methods, following traditional observation and laboratory experiments, is of course the Monte Carlo simulation technique [5, 25, 53, 71], where a large set of computational realizations of the original deterministic problem on a generated population returns, through statistical estimation, the desired probabilistic moments and coefficients. The pros and cons of this technique result from the quality and subprocedures of the internal random number generator (generation itself and shuffling procedures) as well as from the estimators (especially important for higher-order moments) implemented in the computer program. Usually, precise information about these estimator types is not included in commercial software guides. An application of this method needs an a priori definition of both the basic moments and the PDF of the random or stochastic input; however, we do not need to restrict ourselves to the Gaussian, truncated Gaussian, or lognormal PDF, since the probabilistic moments are neither recovered nor processed analytically. The next technique to evolve was fuzzy analysis [132], where an engineer needs precise information about the maximum and minimum values of a given random parameter, which also naturally comes from observation or experiments. This method then operates using interval analysis to show the allowable intervals for the resulting state functions on the basis of the intervals for the given input parameters. A separate direction is represented by the spectral methods widely implemented in the finite element method (FEM), with commercial software like ABAQUS or ANSYS, for instance. These are closely related to vibration analysis, where a structure with deterministic characteristics is subjected to some random excitation with the first two probabilistic moments given [117, 153]. Application of an FEM system makes it possible to determine the power spectral density (PSD) function for the nodal response. General stochastic vibration analysis is still the subject of many works [30, 143], and many problems in that area remain unsolved.
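The Monte Carlo workflow described above can be sketched in a few lines; the fragment below is a minimal Python illustration (not part of the book's MAPLE™ tooling), and the rod model with its load, geometry, and material data is entirely hypothetical. One deterministic solution is evaluated per generated realization of the Gaussian input, and the moments follow from the standard statistical estimators:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical rod: tip displacement u = P*L/(E*A), with Young's modulus E random
P, L, A = 10.0e3, 2.0, 1.0e-3        # load [N], length [m], cross-section [m^2]
E_mean, alpha = 210.0e9, 0.10        # expectation and coefficient of variation
E = rng.normal(E_mean, alpha * E_mean, size=1_000_000)

u = P * L / (E * A)                  # one deterministic solve per realization

# Statistical estimators of the first four probabilistic characteristics
mean = u.mean()
sigma = u.std()
skew = ((u - mean) ** 3).mean() / sigma ** 3
kurt = ((u - mean) ** 4).mean() / sigma ** 4 - 3.0   # excess kurtosis

print(mean, sigma, skew, kurt)
```

Increasing the sample size improves the higher-order estimators first of all; with 10^6 realizations the skewness and kurtosis above are already stable to about two decimal places.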

We also have the family of perturbation methods of first, second, and general order applied in computational mechanics and, in addition, the Karhunen–Loève expansion techniques [38, 39], as well as some mixed hybrid techniques, popular especially for multiscale models [176]. These expansion techniques are constructed using the eigenfunctions and eigenvalues of the covariance kernel of the input random fields or processes, both Gaussian and non-Gaussian [168, 174]. They need more assumptions and mathematical effort to randomize a given physical problem than the perturbation methods and, further, the determination of higher moments is not straightforward. Moreover, there is no commercial implementation in any of the popular existing FEM systems in this case. There are some new theoretical ideas in random analysis for both discrete [55] and continuous variables or processes [33, 52, 173], but they have no widely available computational realizations or general applications in engineering. The reader is advised to study [41, 154] for a comprehensive review of modern probabilistic methods in structural mechanics.

The first-order perturbation technique is useful for very small random dispersion of the input random variables (coefficient of variation α < 0.10) to replace Monte Carlo simulations in simplified first-two-moments analysis. The second-order techniques [112, 118] are applicable for α < 0.15 in second-moment analysis, both for symmetrical distributions (second-order second-moment analysis, SOSM) and for some non-symmetrical probability functions like the Weibull distribution (the so-called Weibull second-order third-moment approach, WSOTM). The main idea of the generalized stochastic perturbation method is to calculate higher-order moments and coefficients in order to recognize the resulting distributions of the structural response. The second purpose is to allow for larger input coefficients of variation, but higher moments were initially derived using fourth- and sixth-order expansions only. Implementation of the given general-order stochastic perturbation technique was elaborated first of all to minimize the modeling error [139] and is now based on polynomials of the uncertain input variable with deterministic coefficients. It needs to be mentioned that random and stochastic polynomials appeared in probabilistic analysis before [50, 147], but were never connected with the perturbation method and deterministic determination of the structural response.

It should be emphasized that the perturbation method was neither strictly connected with stochastic or probabilistic analysis nor developed for these problems [135]. The main idea of this method is to make an analytical expansion of some input parameter or phenomenon around its mean value using some series representation, where Taylor series expansions are traditionally the most popular. Deterministic applications of this technique are known first of all from dynamical problems, where system vibrations in more complex situations are frequently found thanks to such an expansion. One interesting application is the homogenization method, where the effective material property tensors of some multi-material systems are found from the solution of the so-called homogenization problem, including initial perturbation-based expansions of these effective tensor components with respect to the various separate geometrical scales [6, 56, 151]. Further, as also demonstrated in this book, such a deterministic expansion may be linked with probabilistic analysis, where the materials constituting such a structure are separately statistically homogeneous (finite and constant expectations and deviations of the physical properties) while together they form a statistically heterogeneous global system (piecewise-constant expectations and deviations of the physical properties). This is the case when the geometry is perfectly periodic and the physical nature of the composite exhibits some random fluctuation. Such a homogenization procedure then returns statistical homogeneity through some mixing procedure and remains clearly deterministic, because the expansion deals with geometric scales that show no uncertainty.
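The expansion idea can be shown in a few symbolic lines. In the sketch below (Python with SymPy rather than the MAPLE™ environment used in the book; the response u(b) = 1/b is an arbitrary compliance-type example, not taken from the text), a Taylor expansion around the mean b0 of a Gaussian input with standard deviation sigma yields the classical second-order estimates of the first two moments:

```python
import sympy as sp

b, b0, sigma = sp.symbols('b b0 sigma', positive=True)

# Hypothetical response function: u(b) = 1/b (compliance vs. stiffness)
u = 1 / b

# Derivatives of the response evaluated at the mean of the input variable
du  = sp.diff(u, b).subs(b, b0)
d2u = sp.diff(u, b, 2).subs(b, b0)

# Second-order perturbation estimates for a Gaussian input b ~ N(b0, sigma^2)
E_u   = u.subs(b, b0) + sp.Rational(1, 2) * d2u * sigma**2   # expectation
Var_u = du**2 * sigma**2                                     # variance

print(sp.simplify(E_u))    # equals 1/b0 + sigma**2/b0**3
print(sp.simplify(Var_u))  # equals sigma**2/b0**4
```

For u(b) = 1/b this reproduces the well-known results E[u] ≈ 1/b0 + σ²/b0³ and Var[u] ≈ σ²/b0⁴, which also illustrates why a truncated expansion is trustworthy only for small input coefficients of variation.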

Let us note that a very attractive aspect of the perturbation method is that it includes sensitivity analysis [35, 44, 83, 91], since first-, second-, and higher-order partial derivatives of the objective function with respect to the design parameter(s) must be known before the expansions are provided. Therefore, before we start the uncertainty analysis of some state function in a given boundary value problem, we should perform first-order sensitivity analysis and randomize only those parameters whose gradients (after normalization) have dominating and significant values. Further, the stochastic perturbation method is not tied to any particular discrete computational technique [111, 152] like the FEM, the finite difference method (FDM), the finite volume method (FVM), the boundary element method (BEM), various meshless techniques, or even molecular dynamics simulations. We can use it, first of all, to make additional probabilistic expansions of given analytical solutions exhibiting some parametric randomness, or to solve some algebraic or differential equations analytically using explicit, implicit, or symbolic techniques.

The stochastic perturbation technique is shown here in two different realizations: with use of the direct differentiation method (DDM) and in conjunction with the response function method (RFM). The first of these is based on straightforward differentiation of the basic deterministic counterpart of the stochastic problem, so that for a numerical solution we obtain a hierarchical system of equations of increasing order. The zeroth-order solution is computed from the first equation and inserted into the second equation, from which the first-order approximation is obtained, and so on, until the highest-order solution is completed. Computational implementation of the DDM proceeds through direct implementation within the deterministic source code or, alternatively, with use of some of the automatic differentiation tools widely available as shareware. Although the higher-order partial derivatives are calculated analytically at the mean values of the input parameters, and so are determined exactly, the final solution of the system of algebraic equations of increasing order enlarges the final error in the probabilistic moments: the higher the order of the solution, the larger the possible numerical error. The complexity of a general-order implementation, as well as this aspect, usually results in DDM implementations of the lowest, first or second, order. Contrary to numerous previous models, full tenth-order stochastic expansions are now used to recover all the probabilistic moments and coefficients; this significantly increases the accuracy of the final results.
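The hierarchy can be illustrated on a toy linear system K(b)u = f. In the NumPy sketch below (illustrative only; the two-degree-of-freedom matrix and all numbers are hypothetical), the stiffness depends linearly on a single Gaussian parameter b, so collecting equal powers of (b - b0) gives the zeroth-, first-, and second-order equations solved in sequence with the same operator K0:

```python
import numpy as np

# Hypothetical 2-DOF system whose stiffness scales linearly with a random
# parameter b (e.g. a Young's modulus): K(b) = b * K_hat
K_hat = np.array([[ 2.0, -1.0],
                  [-1.0,  1.0]])
f = np.array([0.0, 1.0])
b0, sigma = 1.0, 0.1          # mean and standard deviation of b

K0 = b0 * K_hat               # K evaluated at the mean value of b
K1 = K_hat                    # dK/db
K2 = np.zeros_like(K_hat)     # d2K/db2 (zero here, since K is linear in b)

# Hierarchical equations obtained by collecting equal powers of (b - b0)
u0 = np.linalg.solve(K0, f)                         # zeroth order: K0 u0 = f
u1 = np.linalg.solve(K0, -K1 @ u0)                  # first order:  K0 u1 = -K1 u0
u2 = np.linalg.solve(K0, -2 * K1 @ u1 - K2 @ u0)    # second order

# Second-order estimates of the first two moments of the response
E_u   = u0 + 0.5 * u2 * sigma**2
Var_u = (u1 * u1) * sigma**2
print(E_u, Var_u)
```

Note that every order reuses the operator K0 evaluated at the mean, which is what makes the DDM hierarchy cheap once the zeroth-order problem has been factorized.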

We employ the RFM in turn, carrying out numerical determination of the analytical function for a given structural response, like a displacement or temperature, as a polynomial representation of the chosen random input design parameter (to determine its deterministic coefficients). It can be implemented in a global sense, where a single function connects the probabilistic output and input, or, in a more delicate manner, in the local formulation, where the approximating polynomial form varies from one mesh or grid node of the discrete model to another. It is apparent that global approximation is much faster but may show a larger modeling error; the numerical error [139] in the local formulation is partially connected with the discretization procedure and may need some special adaptivity tools similar to those worked out in deterministic analyses. The main advantages of the RFM over the DDM are that (i) error analysis reduces to deterministic approximation problems and (ii) there is an opportunity for relatively easy interoperability with commercial (or any other) packages for discrete computational techniques. The RFM procedures do not need any symbolic algebra system, because we differentiate well-known polynomials of the random variables, so this differentiation is also of a deterministic character. The RFM is used here in a few different realizations, starting from classical polynomial interpolation of a given order, through interval spline approximations and the non-weighted least-squares method, up to more sophisticated weighted, optimized least-squares methods. This aspect is closely related to the computer algebra system, and this choice also brings enriched visualization procedures, but it may equally be implemented in a classical programming language. The RFM is somewhat similar to the response surface method (RSM) applicable in reliability analysis [175], or to the response function technique known from vibration analysis. The major and very important difference is that the RFM uses a higher-order polynomial response relating a single input random variable to the structural output, whereas the RSM is based on first- or second-order approximations of this output with respect to multiple random structural parameters. An application of the RSM is impossible in the current context, because the second-order truncation of the response eliminates all the higher-order terms necessary for reliable computation of the probabilistic structural response. Furthermore, the RSM has some statistical aspects and issues, while the RFM has a purely deterministic character and exhibits only errors typical for the methods of mathematical approximation theory.
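The global RFM workflow can be sketched as follows; the Python fragment below is an illustration only (the book itself works in MAPLE™), with u(b) = 1/b standing in for the deterministic solver runs. A least-squares polynomial response function is fitted to a handful of trial solutions, and the probabilistic moments of the response are then integrated against the Gaussian density of the input by Gauss–Hermite quadrature:

```python
import numpy as np

b0, sigma = 1.0, 0.1                       # Gaussian input: mean and std. dev.
b_trials = np.linspace(b0 - 5*sigma, b0 + 5*sigma, 13)
u_trials = 1.0 / b_trials                  # stand-in for deterministic solver runs

# Global response function: least-squares polynomial of the input parameter
coeffs = np.polyfit(b_trials, u_trials, deg=6)

# Moments of the response integrated against the N(b0, sigma^2) density
nodes, weights = np.polynomial.hermite_e.hermegauss(10)
w = weights / np.sqrt(2.0 * np.pi)         # normalize to the standard normal PDF
u_q = np.polyval(coeffs, b0 + sigma * nodes)

E_u   = np.sum(w * u_q)                    # expectation
Var_u = np.sum(w * (u_q - E_u) ** 2)       # variance
mu3   = np.sum(w * (u_q - E_u) ** 3)       # third central moment
beta  = mu3 / Var_u ** 1.5                 # skewness of the response
print(E_u, Var_u, beta)
```

The positive skewness found here signals that even a Gaussian input does not guarantee a Gaussian output, which is exactly the kind of information the fourth-moment analysis in this book is after.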

Finally, let us note that the generalized stochastic perturbation technique was initially worked out for a single input random variable, but some helpful comments are given in this book concerning how to complete its realization in the case of a vector of correlated or uncorrelated random input signals. The uncorrelated situation is a simple extension of the initial single-variable case, while non-zero cross-correlations, especially of higher order, introduce a large number of new components into the perturbation-based equations for the probabilistic moments, even for the expectations.

It is clear that stochastic analysis in various branches of engineering does not result from a fascination with random dispersion and stochastic fluctuations in civil or aerospace structures, or in mechanical and electronic systems; it is directly connected with reliability assessment and durability prediction. Recently we have noticed a number of probabilistic numerical studies of non-linear problems in mechanics, dealing particularly with the design of experiments [45], gradient plasticity [177], and viscoelastic structures [42], summarized for multiscale random media in [140]. Even the simplest model of the first-order reliability method is based on the reliability index, which gives quantified information about the safety margin and is computed using the expected values and standard deviations of both components of the limit function. According to the various numerical illustrations presented here, the tenth-order stochastic perturbation technique is as efficient for this purpose as the MCS method and does not need further comparative studies. It is also independent of the input random dispersion of the given variable of the problem; it should still be checked for correlated variables. As is known, the second-order reliability methods [128] include some correction factors and/or multipliers, like the curvature of the limit functions, usually expressed by the second partial derivatives of the objective function with respect to the random input. The generalized perturbation technique serves in a straightforward manner in this situation, because these derivatives are needed in the Taylor expansions themselves, so there is no need for additional numerical procedures. As has been documented, this stochastic perturbation-based finite element method (SFEM), implemented using the RFM idea, may be useful at least for civil engineers following the Eurocode 0 statements and making simulations with commercial FEM software. It is worth emphasizing that the stochastic perturbation method may be efficient in time-dependent reliability analysis, where time series having Gaussian coefficients approximate the time fluctuations of the given design parameters. There are some further issues not discussed in this book, like the adaptivity methods related to stochastic finite elements [171], which may need some new approaches to the computational implementation of the perturbation technique.
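As a worked example of the simplest, Cornell-type reliability index mentioned above (Python for illustration; the resistance and load-effect statistics are purely invented), for an uncorrelated Gaussian pair the limit function g = R - E has Gaussian moments that give the index and the corresponding failure probability directly:

```python
import math

# Cornell reliability index for a Gaussian limit function g = R - E
# (resistance R minus load effect E), uncorrelated; illustrative numbers only
mu_R, sigma_R = 300.0, 30.0    # member resistance:  mean and std. dev. [kN]
mu_E, sigma_E = 180.0, 25.0    # load effect:        mean and std. dev. [kN]

mu_g = mu_R - mu_E
sigma_g = math.sqrt(sigma_R**2 + sigma_E**2)
beta = mu_g / sigma_g                        # reliability (safety) index

# For a Gaussian limit function the failure probability is P_f = Phi(-beta)
P_f = 0.5 * math.erfc(beta / math.sqrt(2.0))
print(beta, P_f)
```

Higher-order perturbation output plugs into exactly this scheme: once the expectations and standard deviations of the limit-function components are computed, the index follows at no extra cost.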

This book is organized into five main chapters. Chapter 1 is devoted to the mathematical aspects of the stochastic perturbation technique and the necessary definitions and properties from probability theory. It is also full of computational examples showing the implementation of various engineering problems with uncertainty in the computer algebra system MAPLE™ [17], which supports all the further examples and solutions. Some of these are shown directly as scripts with screenshots, especially where some analytical derivations are provided. The remaining case studies, where numerical data have been processed, are focused on a discussion of the results, visualized as parametric plots of the probabilistic moments and characteristics, mostly with respect to the input random dispersion coefficient. They are also illustrated with the MAPLE™ scripts accompanying the book, which are still being expanded by the author and may be obtained on special request in their most recent versions. Special attention is given here to the RFM, to various-order approximations of the moments in the stochastic perturbation technique, to some comparisons against the Monte Carlo technique and computerized analytical methods, as well as to simple time-series analysis with the perturbation technique.

Chapter 2 is the largest in the book and is devoted entirely to the SFEM. It starts with the statements of the more important boundary-value and boundary-initial problems in engineering with random parameters, which are then transformed into the corresponding variational statements, also convenient for general nth-order stochastic formulations. According to the above considerations, these stochastic variational principles and the resulting systems of algebraic equations are expanded using both the DDM and RFM approaches to enable alternative implementations, depending on the availability of the source code and of automatic differentiation routines; there are MAPLE™ source codes for most of the numerical illustrations here, as in the preceding chapter. Theoretical developments start from the FEM for uncoupled equilibrium problems with scalar and vector state functions and are continued up to thermo-electro-elastic couplings as well as the Navier–Stokes equations for incompressible and non-turbulent Newtonian fluid flows. The key computational experiments cover Newtonian viscous unidirectional and 2D fluid flows, the linear elastic response and buckling of a spatial elastic system, the elasto-plastic behavior of a simple 2D truss, eigenvibration analysis of a 3D steel tower, non-stationary heat transfer in a unidirectional rod, as well as forced vibrations of a 2-DOF system, all with randomized material parameters. It is demonstrated that the MAPLE™ system may be used efficiently as an FEM postprocessor, visualizing the mesh together with the desired probabilistic characteristics in vector form; three-dimensional graphics are not so complicated in this environment, and the physical interpretation of higher-order moments does not require such sophisticated tools right now.
The discussion is restricted each time to the first four probabilistic moments and coefficients for the structural response shown as functions of the input coefficient of variation and the stochastic perturbation order. In each case we (i) check the probabilistic convergence of the SFEM together with its order, (ii) detect the influence of an initial uncertainty source, and (iii) verify the output PDF.

Chapter 3 describes the basic equilibrium equations and the computational implementation of the stochastic perturbation-based boundary element method (SBEM), related to the linear isotropic elasticity of statistically homogeneous and multi-component domains; the numerical work has been completed using an open-source academic BEM code. The basic equations have all been rewritten in the response function language, with numerical illustrations showing the uncertain elastic behavior of a steel plane panel, an analogous composite layered element with a perfect interface, as well as a composite with some interface defects between the constituents. A comparison of the SBEM against the SFEM is also given here using the example of the first four probabilistic characteristics presented as functions of the input coefficient of variation.

Chapter 4 is addressed to anyone interested in stochastic analysis using the specially adapted finite difference method (SFDM) and additional source codes. According to the main philosophy of the method, we rewrite the particular differential equations in their difference forms and introduce first of all their DDM versions, to carry out computational modeling directly in the MAPLE™ program. The example problem with random parameters is the linear elastic equilibrium of the Euler–Bernoulli beam with constant and linearly varying cross-sectional area; further, this structure is analyzed numerically on a single-parameter random elastic foundation. Let us note that the stochastic analysis of beams with random stiffness is of significant practical importance in civil and mechanical engineering and has been studied theoretically and numerically [31, 112]. Other models include non-stationary heat transfer in a homogeneous rod with Gaussian physical parameters, eigenvibration analysis of a simply supported beam and a thin plate, as well as the unidirectional diffusion equation. Some examples show the behavior of the probabilistic moments computed with increasing density of the grid; others are shown to make a comparison with the results obtained from analytical predictions.

Chapter 5 is entirely devoted to the homogenization procedure, presented as a unique application of the double perturbation method, where a deterministic expansion with respect to the scale parameter is used in conjunction with stochastic expansions of the basic elastic parameters. Homogenization of the perfectly periodic two-component composite is the main objective of this chapter, and its effective elasticity tensor, in probabilistic and stochastic versions, is studied for the material parameters of the fiber and matrix defined as Gaussian random variables or time series with Gaussian coefficients. The main purpose is to verify the stochastic perturbation technique and its FEM realization against the Monte Carlo simulation, as well as some novel computational techniques using the RFM and analytical integration implemented in the MAPLE™ system. The examples are used to confirm the Gaussian character of the resulting homogenized tensor components, to check the convergence of the perturbation technique for various approximation orders, to show the probabilistic entropy fluctuations in the homogenization procedures, and to provide some perspectives for further development of both the SFEM and RFM techniques.

The major conclusion of this book is that the stochastic perturbation technique is a universal numerical method, useful with any discrete or symbolic, academic or commercial computer programs and environments. The applicability range for expectations is practically unbounded; for second moments it is extremely large (much larger than before); for third- and fourth-order statistics it is limited (but may be given precisely in terms of the input random dispersion). The mathematical simplicity and time savings are attractive for engineers, but we need to remember that this is not a computational hammer with which to randomize everything. Special attention is necessary in the case of coupled problems with huge random fluctuations, where the output coefficients of variation at some iteration step (even the first one) can render the method practically useless. The local and global response functions are usually matched very well by the polynomial forms proposed here, and the resulting moments show no singularities with respect to the input coefficient of variation. This situation, however, may change in systems with state-dependent physical and mechanical properties (for example, with respect to temperature).

The book in its present shape took me 20 years of extensive work, from the very beginning of my career with the SOSM version of the SFEM at the Institute of Fundamental Technological Research in Warsaw, Poland [112]. Slowly my interest in the finite element domain evolved toward other discrete computational techniques, and the idea of an any-order Taylor expansion appeared around 10 years ago. I would like to express special thanks to my PhD students at the Technical University of Łódź for their help in reworking and reorganizing many numerical examples for this book, but also for their never-ending questions, pushing me to check the same issues carefully many times. I appreciate the comments of many colleagues from all around the world who are interested in my work, as well as the anonymous reviewers who took care over the precision of my formulations.

Chapter 1

Mathematical Considerations

1.1 Stochastic Perturbation Technique Basis

The input random variable of a problem is denoted here by b(ω) and its probability density function by g_b(x). The expected value of this variable is expressed, following Feller [34] and Vanmarcke [165], as

$$ E[b] = \int_{-\infty}^{+\infty} x \, g_b(x)\, dx \qquad (1.1) $$

while its mth central probabilistic moment is

$$ \mu_m(b) = \int_{-\infty}^{+\infty} \left(x - E[b]\right)^m g_b(x)\, dx $$

Since we are mostly focused on applications of the Gaussian distribution, we recall now its probability density function:

$$ g_b(x) = \frac{1}{\sigma\sqrt{2\pi}}\, \exp\!\left(-\frac{(x-m)^2}{2\sigma^2}\right) $$

The coefficient of variation, skewness, flatness, and kurtosis are introduced in the form

$$ \alpha(b) = \frac{\sigma(b)}{E[b]}, \qquad \beta(b) = \frac{\mu_3(b)}{\sigma^3(b)}, \qquad \gamma(b) = \frac{\mu_4(b)}{\sigma^4(b)}, \qquad \kappa(b) = \gamma(b) - 3 $$

Nowadays, computer algebra software is usually employed to provide analytical formulas following these statements. A symbolic solution obtained in the Maple™ system for the well-known case of two Gaussian random variables X and Y, having defined expectations EX, EY and standard deviations SX, SY, is given below. As one may suppose, we can use more variables, combined in any algebraic form implemented in this system, and, finally, random variables that are not necessarily Gaussian.

restart: with(plots): with(plottools): with(Statistics):

X:=RandomVariable(Normal(EX,SX)): Y:=RandomVariable(Normal(EY,SY)):










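The same derivation can be reproduced outside Maple as well. A minimal equivalent sketch in Python using SymPy (this translation is mine, not part of the original Maple session) might read:

```python
from sympy import symbols, expand
from sympy.stats import Normal, E, variance

# Symbolic parameters mirroring the Maple session: means EX, EY and
# standard deviations SX, SY of two independent Gaussian variables
EX, EY = symbols('EX EY')
SX, SY = symbols('SX SY', positive=True)

X = Normal('X', EX, SX)
Y = Normal('Y', EY, SY)

# Moments of simple algebraic combinations, recovered symbolically
print(expand(E(X + Y)))             # EX + EY
print(expand(variance(2*X - 3*Y)))  # 4*SX**2 + 9*SY**2
```

Distinct random symbols in sympy.stats are independent by construction, which matches the uncorrelated Gaussian setting of the Maple example above.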
The second, less trivial opportunity with this program is the recovery of the probabilistic moments for other probability distributions widely applied in engineering, whose formulas are hard to find in the literature (they are collected in the Appendix). The cases of the lognormal and Gumbel distributions serve as an example below; one can of course use more sophisticated algebraic combinations.

restart; with(Statistics): assume(a::real, 0 < b): X1:=RandomVariable(Gumbel(a,b)):

EX1:=ExpectedValue(X1); VX1:=Variance(X1); MX1:=Median(X1); KX1:=Kurtosis(X1): SKX1:=Skewness(X1): COVX1:=Variation(X1); CM3X1:=CentralMoment(X1,3): CM4X1:=CentralMoment(X1,4):

X2:=RandomVariable(LogNormal(mu,sigma)):

EX2:=ExpectedValue(X2); VX2:=Variance(X2); MX2:=Median(X2); KX2:=Kurtosis(X2); SKX2:=Skewness(X2); COVX2:=Variation(X2); CM3X2:=CentralMoment(X2,3); CM4X2:=CentralMoment(X2,4);

Besides the probabilistic moments and coefficients, the entropy of random variables and processes is also sometimes considered. Probabilistic entropy [155, 156] (in contrast to the entropy popular in thermodynamics) quantifies the uncertainty of occurrence of some event in the next moment, so that entropy equal to 0 accompanies a probability equal to 1 (or 0) in any random experiment showing no randomness at all. If a countable set of random events has n elements associated with the probabilities pi for i = 1, …, n, then the entropy in this space is defined uniquely by the following sum [155, 156]:

$$ H(x) = -\sum_{i=1}^{n} p_i \log_r p_i $$

where the logarithm base r fixes the entropy unit; computational information theory is naturally based on bits, where r = 2. This discrete definition restricts the values to the non-negative real numbers, and H(x) reaches its maximum for a two-element random space with both events having the same probability of 1/2 (one bit of entropy per single throw of a geometrically regular coin). Its generalization to continuous variables, in the case of the Gaussian distribution, is

$$ h(x) = -\int_{-\infty}^{+\infty} g_b(x) \log g_b(x)\, dx, \qquad g_b(x) = \frac{1}{\sigma\sqrt{2\pi}}\, \exp\!\left(-\frac{(x-m)^2}{2\sigma^2}\right) $$

where m and σ denote, traditionally, its expectation and standard deviation.

The integration is carried out using the classical normalization conditions

$$ \int_{-\infty}^{+\infty} g_b(x)\, dx = 1, \qquad \int_{-\infty}^{+\infty} (x-m)^2\, g_b(x)\, dx = \sigma^2 $$

and therefore

$$ h(x) = \log\!\left(\sigma\sqrt{2\pi}\right) + \frac{1}{2} = \log\!\left(\sigma\sqrt{2\pi e}\right) $$

The entropy formula remains unimplemented in most computer algebra systems, so this integral definition may appear useful in some engineering applications, especially with time series or stochastic processes. As could be expected, in the case of Gaussian variables the entropy is entirely controlled by the standard deviation, so that the proposed stochastic perturbation technique, with its perfect agreement with the other numerical techniques in the determination of second-order probabilistic moments, is a reliable computational tool to determine the entropy also.
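
One way to use that integral definition in practice is direct numerical quadrature. A small Python sketch (mine, for the Gaussian case only) compares a midpoint-rule evaluation of −∫ g ln g dx with the closed form log(σ√(2πe)):

```python
import math

def gaussian_entropy_numeric(m, s, n=200000, span=12.0):
    """Approximate h = -∫ g(x) ln g(x) dx for g = N(m, s^2) by the midpoint rule
    over [m - span*s, m + span*s]; the truncated tails are negligible."""
    a, b = m - span * s, m + span * s
    dx = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * dx
        g = math.exp(-(x - m) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
        total -= g * math.log(g) * dx
    return total

closed_form = math.log(2.5 * math.sqrt(2 * math.pi * math.e))
print(abs(gaussian_entropy_numeric(1.0, 2.5) - closed_form) < 1e-6)  # True
```

Note that the result is independent of the expectation m, exactly as the closed form predicts.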

A very interesting problem, for any state function and its uncertainty source, is the entropy variation, which can be defined through the initial and final values as

$$ \Delta h = h_{\text{final}} - h_{\text{initial}} $$

This entropy change shows whether the uncertainty is amplified, preserved, or damped by the given boundary value problem. The most interesting case, from a probabilistic point of view, would be Δh = 0, which can be interpreted as no influence of the problem itself, or of its solution method, on the initial random dispersion.
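
Since the Gaussian entropy depends only on the standard deviation, Δh for Gaussian input and output reduces to the logarithm of the ratio of the two standard deviations. A tiny illustrative sketch (the function names are mine):

```python
import math

def gaussian_entropy(sigma):
    # h = ln(sigma * sqrt(2*pi*e)); independent of the expectation
    return math.log(sigma * math.sqrt(2 * math.pi * math.e))

def entropy_variation(sigma_in, sigma_out):
    # Delta h = h_final - h_initial = ln(sigma_out / sigma_in) for Gaussian in/out
    return gaussian_entropy(sigma_out) - gaussian_entropy(sigma_in)

print(entropy_variation(0.10, 0.05))  # negative: the problem damps the uncertainty
print(entropy_variation(0.10, 0.10))  # 0.0: the random dispersion is preserved
```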

Now let us focus on the generalized stochastic perturbation technique. The main philosophy of this method is to expand all the state parameters and response functions of an initially deterministic problem (heat conductivity, heat capacity, temperature and its gradient, as well as the material density) using a given-order Taylor series with random coefficients. This is provided by the following representation of the random function u(b) with respect to its parameter b around its mean value [74, 81]:

$$ u(b) = u^0(b^0) + \sum_{n=1}^{N} \frac{\varepsilon^n}{n!}\, \frac{\partial^n u}{\partial b^n}\bigg|_{b=b^0} (\Delta b)^n \qquad (1.10) $$

where ε is a given small perturbation parameter (usually taken equal to 1), while the nth-order variation is given as follows:

$$ \varepsilon^n (\Delta b)^n = \varepsilon^n \left(b - b^0\right)^n $$

The expected values can be derived exactly, with use of the tenth-order expansion for Gaussian variables, as

$$ E[u(b)] = u^0(b^0) + \sum_{m=1}^{5} \frac{1}{(2m)!}\, \frac{\partial^{2m} u}{\partial b^{2m}}\bigg|_{b=b^0} \mu_{2m}(b) $$

for any natural m, with μ2m being the central probabilistic moment of 2mth order. It is obtained by substituting the expansion (1.10) into the definition (1.1), dropping all the odd-order terms, and integrating the remaining even-order variations. The result contains the even-order central probabilistic moments of the variable b together with the deterministic even-order partial derivatives with respect to this b at its mean value. Usually, according to some previous computational convergence studies, we may limit this expansion to the tenth order, doing so consistently for all the moments of interest here. Quite similar considerations lead to the expressions for the higher moments, like the variance, for instance:


One may notice that each component of Equation (1.12) corresponds to the next consecutive expansion order, while for the variance the number of components increases linearly from each order to the next.

The third probabilistic moment may be recovered from this scheme as


while the fourth probabilistic moment computation proceeds with use of the following formula:


Of course, the higher the probabilistic moment, the longer the Taylor expansion and the faster the growth in the number of components corresponding to the neighboring-order central moments.

The central moments of the Gaussian variable b may obviously be recovered simply here as

$$ \mu_{2k}(b) = (2k-1)!!\, \sigma^{2k}(b), \qquad \mu_{2k+1}(b) = 0 $$

for any natural k ≥ 1. As one may suppose, the higher the order of the moments we need to compute, the higher the order of the perturbations that must be included in all the formulas, so that the complexity of the computational model grows disproportionately with the precision and the amount of output information needed. Once we take a polynomial response function, then its general perturbation-based formula for the tenth-order expectation equals


The variance, third and fourth probabilistic moments of this function, considering their lengths, are omitted here and may be found in the Maple™ source files located on the book's website.
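
In the same spirit as those Maple files, the tenth-order expectation and a companion variance can be sketched in Python with SymPy; the quartic u below is a hypothetical stand-in for the book's polynomial, and the variance is recovered here simply as E[u²] − (E[u])², which is exact for polynomial responses of degree up to five under a Gaussian input:

```python
import sympy as sp

b, b0, s = sp.symbols('b b0 sigma', positive=True)

def pert_expectation(u, order=10):
    """Perturbation-based expectation of u(b) for b ~ N(b0, sigma^2):
    E[u] = u(b0) + sum over even n of d^n u/db^n |_{b0} * mu_n / n!,
    with Gaussian central moments mu_n = (n-1)!! * sigma**n."""
    e = u.subs(b, b0)
    for n in range(2, order + 1, 2):
        mu_n = sp.factorial2(n - 1) * s**n
        e += sp.diff(u, b, n).subs(b, b0) * mu_n / sp.factorial(n)
    return sp.expand(e)

def pert_variance(u, order=10):
    # Var(u) = E[u**2] - (E[u])**2, reusing the same expansion
    return sp.expand(pert_expectation(sp.expand(u**2), order)
                     - pert_expectation(u, order)**2)

u = b**4 + 2*b**2 + 1   # hypothetical quartic response function
# equals b0**4 + 2*b0**2 + 6*b0**2*sigma**2 + 3*sigma**4 + 2*sigma**2 + 1
print(pert_expectation(u))
print(pert_variance(u))
```

For this quartic the tenth-order result coincides with the exact moments of a Gaussian variable, since all derivatives above the fourth (and the eighth, for u²) vanish.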

Symmetric probability density functions obviously do not require the full expansions, but for general and, specifically, non-symmetric distributions such as the lognormal, we need to complete them with the odd-order terms. These additional terms are specified below, first for the expectations:


for the variances:


in case of the third central probabilistic moment:


as well as the fourth one:


The situation becomes definitely more complicated when we consider a problem with multiple random variables, say p totally uncorrelated random variables, vectorized here as br for r = 1, …, p. Then the Taylor expansion with random coefficients proposed in Equation (1.10) is written for all these components as

$$ u(\mathbf{b}) = u^0(\mathbf{b}^0) + \sum_{n=1}^{N} \frac{\varepsilon^n}{n!} \sum_{r=1}^{p} \frac{\partial^n u}{\partial b_r^n}\bigg|_{\mathbf{b}=\mathbf{b}^0} (\Delta b_r)^n $$

The most fundamental difference is that the zeroth-order component is calculated only once, for the mean values of the design vector components, but the higher-order terms include partial derivatives of the response function with respect to each of these p components separately. So the tenth-order expansion, instead of the 11 components for a single input random variable, will contain 10p + 1 independent terms. In view of the above, the expectation of the structural response is calculated as (where the summation convention is replaced, for brevity, with a classical sum)

$$ E[u(\mathbf{b})] = u^0(\mathbf{b}^0) + \sum_{r=1}^{p} \sum_{m=1}^{5} \frac{1}{(2m)!}\, \frac{\partial^{2m} u}{\partial b_r^{2m}}\bigg|_{\mathbf{b}=\mathbf{b}^0} \mu_{2m}(b_r) \qquad (1.23) $$

Therefore, following this idea, it is relatively easy to extend Equations (1.13)–(1.15) with an additional summation over the independent components of the input random variable vector to get the multi-parametric equations for the variances as well as for the third and fourth central probabilistic moments. As one can realize, a correlation effect in these expansions will result in cross-correlations (of higher order also) between the components of the vector br. For the second-order expansion of three random variables it yields, after Equation (1.23),

$$ E[u(\mathbf{b})] = u^0(\mathbf{b}^0) + \frac{1}{2} \sum_{r=1}^{3} \sum_{s=1}^{3} \frac{\partial^2 u}{\partial b_r\, \partial b_s}\bigg|_{\mathbf{b}=\mathbf{b}^0} \mathrm{Cov}(b_r, b_s) $$

where Cov(b1, b2) stands for the covariance of the two random quantities, defined classically as [34]

$$ \mathrm{Cov}(b_1, b_2) = \int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty} \left(x_1 - E[b_1]\right)\left(x_2 - E[b_2]\right) g(x_1, x_2)\, dx_1\, dx_2 $$

replaced frequently with the non-dimensional, normalized correlation coefficient introduced as

$$ \rho(b_1, b_2) = \frac{\mathrm{Cov}(b_1, b_2)}{\sigma(b_1)\, \sigma(b_2)} $$

taking values −1 ≤ ρ(b1, b2) ≤ 1 only. Of course, the density appearing in the covariance definition is the joint probability density function of the variables b1 and b2.
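
The second-order multi-variable scheme with the covariance term can be sketched generically; the following Python/SymPy fragment (my illustration; the response u = b1²·b2 is a hypothetical example) implements E[u] ≈ u(b⁰) + ½ Σ over r,s of ∂²u/(∂b_r ∂b_s)·Cov(b_r, b_s):

```python
import sympy as sp

def second_order_expectation(u, variables, means, cov):
    """Second-order perturbation expectation with correlated inputs:
    E[u] ≈ u(b0) + 1/2 * sum_{r,s} d^2u/(db_r db_s)|_{b0} * Cov(b_r, b_s)."""
    at_mean = dict(zip(variables, means))
    e = u.subs(at_mean)
    for r, br in enumerate(variables):
        for q, bs in enumerate(variables):
            e += sp.Rational(1, 2) * sp.diff(u, br, bs).subs(at_mean) * cov[r][q]
    return sp.expand(e)

b1, b2 = sp.symbols('b1 b2')
m1, m2, s1, s2, c12 = sp.symbols('m1 m2 s1 s2 c12')

u = b1**2 * b2                       # hypothetical response function
cov = [[s1**2, c12], [c12, s2**2]]   # 2x2 covariance matrix of (b1, b2)
# equals m1**2*m2 + m2*s1**2 + 2*c12*m1, exact for jointly Gaussian b1, b2
print(second_order_expectation(u, [b1, b2], [m1, m2], cov))
```

For uncorrelated inputs (c12 = 0) the double sum degenerates to the diagonal variance terms, matching the single-variable second-order formula applied per component.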