Contents

Cover

Half Title page

Title page

Copyright page

Dedication

Preface

Chapter 1: Introduction

1.1 Summary

1.2 Opening Remarks

1.3 The Need for a Knowledge-Based Approach

1.4 Summary of Chapters

Chapter 2: Reservoir Simulation Background

2.1 Essence of Reservoir Simulation

2.2 Assumptions Behind Various Modeling Approaches

2.3 Recent Advances in Reservoir Simulation

2.4 Memory Models

2.5 Future Challenges in Reservoir Simulation

Chapter 3: Reservoir Simulator-Input/Output

3.1 Input and Output Data

3.2 Geological and Geophysical Modeling

3.3 Reservoir Characterization

3.4 Upscaling

3.5 Pressure/Production Data

3.6 Phase Saturations Distribution

3.7 Reservoir Simulator Output

3.8 History Matching

3.9 Real-Time Monitoring

Chapter 4: Reservoir Simulators: Problems, Shortcomings, and Some Solution Techniques

4.1 Multiple Solutions in Natural Phenomena

4.2 Adomian Decomposition

4.3 Some Remarks on Multiple Solutions

Chapter 5: Mathematical Formulation of Reservoir Simulation Problems

5.1 Black Oil Model and Compositional Model

5.2 General Purpose Compositional Model

5.3 Simplification of the General Compositional Model

5.4 Some Examples in Application of the General Compositional Model

Chapter 6: The Compositional Simulator Using Engineering Approach

6.1 Finite Control Volume Method

6.2 Uniform Temperature Reservoir Compositional Flow Equations in a 1-D Domain

6.3 Compositional Mass Balance Equation in a Multidimensional Domain

6.4 Variable Temperature Reservoir Compositional Flow Equations

6.5 Solution Method

6.6 The Effects of Linearization

Chapter 7: Development of a New Material Balance Equation for Oil Recovery

7.1 Summary

7.2 Introduction

7.3 Mathematical Model Development

7.4 Porosity Alteration

7.5 Pore Volume Change

7.6 Numerical Simulation

7.7 Conclusions

Appendix Chapter 7: Development of an MBE for a Compressible Undersaturated Oil Reservoir

Chapter 8: State-of-the-art on Memory Formalism for Porous Media Applications

8.1 Summary

8.2 Introduction

8.3 Historical Development of Memory Concept

8.4 State-of-the-art Memory-Based Models

8.5 Basset Force: A History Term

8.6 Anomalous Diffusion: A Memory Application

8.7 Future Trends

8.8 Conclusion

Chapter 9: Modeling Viscous Fingering During Miscible Displacement in a Reservoir

9.1 Improvement of the Numerical Scheme

9.2 Application of the New Numerical Scheme to Viscous Fingering

Chapter 10: An Implicit Finite-Difference Approximation of Memory-Based Flow Equation in Porous Media

10.1 Summary

10.2 Introduction

10.3 Background

10.4 Theoretical Development

10.6 Numerical Simulation

10.7 Results and Discussion

10.8 Conclusion

Chapter 11: Towards Modeling Knowledge and Sustainable Petroleum Production

11.1 Essence of Knowledge, Science, and Emulation

11.2 The Knowledge Dimension

11.3 Aphenomenal Theories of Modern Era

11.4 Towards Modeling Truth and Knowledge

11.5 The Single-Parameter Criterion

11.6 The Conservation of Mass and Energy

11.7 The Need for Multidimensional Study

11.8 Assessing the Overall Performance of a Process

11.9 Implications of Knowledge-Based Analysis

Chapter 12: Reservoir Simulation of Unconventional Reservoirs

12.1 Introduction

12.2 Material Balance Equations

12.3 New Fluid Flow Equations

12.4 Coupled Fluid Flow and Geo-mechanical Stress Model

12.5 Fluid Flow Modeling under Thermal Stress

12.6 Challenges of Modeling Unconventional Gas Reservoirs

12.7 Comprehensive Modeling

Chapter 13: Final Conclusions

References and Bibliography

Appendix A

A.1 Introduction

A.2 The Simulator

A.3 Data File Preparation

A.4 Description of Variables Used in Preparing a Data File

A.5 Instructions to Run Simulator and Graphic Post Processor on PC

A.6 Limitations Imposed on the Compiled Versions

A.7 Example of a Prepared Data File

A.8 References

Index

Advanced Petroleum Reservoir Simulation

Scrivener Publishing
100 Cummings Center, Suite 541J
Beverly, MA 01915-6106

Publishers at Scrivener
Martin Scrivener (martin@scrivenerpublishing.com)
Phillip Carmical (pcarmical@scrivenerpublishing.com)

Title Page

Authors would like to dedicate this book to their teacher and ‘grand teacher’, Professor S.M. Farouq Ali, Encana/Petroleum Society Chair Professor at the University of Calgary.

Preface

The Information Age is synonymous with an overflow, a superflux, of “information”. Information is necessary for traveling the path of knowledge, leading to the truth. Truth sets one free; freedom is peace.

Yet, here a horrific contradiction leaps out to grab one and all by the throat: of all the characteristics that can be said to define the Information Age, neither freedom nor peace is one of them. The Information Age that promised infinite transparency, unlimited productivity, and true access to Knowledge (with a capital K, quite distinct from “know-how”) requires a process of thinking, or imagination – the attribute that sets human beings apart.

Imagination is necessary for anyone wishing to make decisions based on science. Imagination always begins with visualization – actually, another term for simulation. Any decision devoid of a priori simulation is inherently aphenomenal. It turns out that simulation itself has little value unless the fundamental assumptions, as well as the science (the time function), are actual. While the principle of garbage in, garbage out is well known, it only leads to using accurate data, in essence covering only the necessary condition for accurate modeling.

The sufficient condition, i.e., the correct time function, is little understood, let alone properly incorporated. This process of including the continuous time function is emulation, and it is the principal theme of this book. The petroleum industry is known as the biggest user of computer models. Even though space research and weather prediction models are robust and often tagged as the “mother of all simulation”, the fact that a space probe or a weather balloon can be launched – while a vehicle capable of moving around in a petroleum reservoir cannot – makes reservoir modeling more challenging than modeling in any other discipline.

This challenge is two-fold. First, there is a lack of data and of their proper scaling up. Second is the problem of assuring correct solutions to the mathematical models that represent the reservoir data. The petroleum industry has made tremendous progress in improving data acquisition and remote-sensing ability. However, in the absence of proper science, it is anecdotally said that a weather model of Alaska can be used to simulate a petroleum reservoir in Texas. Of course, pragmatism tells us we will come across the desired outcome every once in a while, but is that desirable in real science? This book brings back real science and solves reservoir equations with the entire history (called the ‘memory’ function) of the reservoir. The book demonstrates that a priori linearization is not justified for the realistic range of most petroleum parameters, even for single-phase flow. By solving non-linear equations, this book gives a range of solutions that can later be used to conduct scientific risk analysis.

This is a groundbreaking approach. The book answers practically all the questions that have emerged in the past. Anyone familiar with reservoir modeling knows how puzzling subjective and variable results – something commonly found in this field – can be. The book deciphers this variability by accounting for known nonlinearities and proposing solutions with the possibility of generating results in cloud-point form. The book takes the engineering approach, thereby minimizing unnecessary complexity of mathematical modeling. As a consequence, the book is readable and workable, with applications that extend far beyond reservoir modeling or even petroleum engineering.

Chapter 1

Introduction

1.1 Summary

It is well known that reservoir simulation studies are very subjective and vary from simulator to simulator. While SPE benchmarking has helped in accepting differences in predicting petroleum reservoir performance, there has been no scientific explanation of the variability, which has frustrated many policy makers and operations managers and puzzled scientists and engineers. In this book, a new approach is taken to add the Knowledge dimension to the problem. Some have attempted to ‘correct’ this shortcoming by introducing ‘history matching’, often automating the process. This carries the embedded assumption that ‘outcome justifies the process’ – the ultimate of the obsession with externals. In this book, reservoir simulation equations are shown to have embedded variability and multiple solutions that are in line with physics rather than spurious mathematical solutions. With this clear description, a fresh perspective on reservoir simulation is presented. Unlike the majority of reservoir simulation approaches available today, the ‘knowledge-based’ approach does not stop at questioning the fundamentals of reservoir simulation but offers solutions and demonstrates that proper reservoir simulation should be transparent and should empower decision makers rather than creating a black box. For the first time, the fluid memory factor is introduced with a functional form. The resulting governing equations become truly non-linear. A series of clearly superior mathematical and numerical techniques are presented that allow one to solve these equations without linearization. These mathematical solutions, which provide a basis for systematic tracking of multiple solutions, amount to emulation instead of simulation. The resulting solutions are cast in cloud points that form the basis for further analysis with advanced fuzzy logic, maximizing the accuracy of the unique solution that is derived. The models are applied to difficult scenarios, such as in the presence of viscous fingering, and results are compared with experimental data. It is demonstrated that the currently available simulators address only a very limited range of solutions for a particular reservoir engineering problem. Examples are provided to show how the knowledge-based approach extends the currently known solutions and provides an extremely useful predictive tool for risk assessment.

1.2 Opening Remarks

Petroleum is still the world’s most important source of energy, and, with all of the global concerns over climate change, environmental standards, cheap gasoline, and other factors, petroleum itself has become a hotly debated topic. This book does not seek to cast aspersions, debate politics, or take any political stance. Rather, the purpose of this volume is to provide the working engineer or graduate student with a new, more accurate, and more efficient model for a very important aspect of petroleum engineering: reservoir simulation. The term “knowledge-based” is used throughout for our unique approach, which is different from past approaches and which we hope will be a very useful and eye-opening tool for engineers in the field. We do not intend to denigrate other methods, nor do we suggest by our term that other methods do not involve “knowledge.” Rather, this is simply the term we use for our approach, and we hope that we have proven that it is more accurate and more efficient than approaches used in the past.

1.3 The Need for a Knowledge-Based Approach

In reservoir simulation, the principle of GIGO (garbage in, garbage out) is well known (latest citation: Rose, 2000). This principle implies that the input data have to be accurate for the simulation results to be acceptable. The petroleum industry has established itself as the pioneer of subsurface data collection (Islam et al., 2010). Historically, no other discipline has taken so much care in making sure input data are as accurate as the latest technology allows. The recent superflux of technologies dealing with subsurface mapping, real-time monitoring, and high-speed data transfer is evidence that input data in reservoir simulation are not the weak link of reservoir modeling.

However, for a modeling process to be knowledge-based, it must fulfill two criteria: the source has to be true (or real) and the subsequent processing has to be true (Islam et al., 2012; 2015). The source is not a problem in the petroleum industry, as a great deal of progress has been made in data collection techniques. The potential problem lies within the processing of data. For the process to be knowledge-based, the following logical steps have to be taken:

Figure 1.1 The knowledge model and the direction of abstraction.

The process of aphenomenal or prejudice-based decision-making is illustrated by the inverted triangle, proceeding from the top down (Figure 1.2). The inverted representation stresses the inherent instability and unsustainability of the model. The source data from which a decision eventually emerges already incorporate their own justifications, which are then massaged by layers of opacity and disinformation.

Figure 1.2 Aphenomenal decision-making.

The disinformation referred to here is what results when information is presented or recapitulated in the service of unstated or unacknowledged ulterior intentions (Zatzman and Islam, 2007a). The methods of this disinformation achieve their effect by presenting evidence or raw data selectively, without disclosing either the fact of such selection or the criteria guiding the selection. This process of selection obscures any distinctions between the data coming from nature or from any all-natural pathway, on the one hand, and data from unverified or untested observations on the other. In social science, such maneuvering has been well known, but the recognition of this aphenomenal (unreal) model is new in science and engineering (Shapiro et al., 2007).

1.4 Summary of Chapters

Chapter 1 summarizes the main concept of the book. It introduces the knowledge-based approach as a decision-making tool that triggers the correct decision. This trigger, also called the criterion, is the most important outcome of reservoir simulation. In the end, every decision hinges upon the criterion that was used. If the criterion is not correct, the entire decision-making process becomes aphenomenal, leading to prejudice. The entire tenet of the knowledge-based approach is to make sure the process is soundly based on truth, and not perception, with logic that is correct (phenomenal) throughout the cognition process.

Chapter 2 presents the background of reservoir simulation as it has developed over the last five decades. This chapter also presents the shortcomings and assumptions that have no knowledge base. It then outlines the need for a new mathematical approach that eliminates most of the shortcomings and spurious assumptions of the conventional approach.

Chapter 3 presents the data input requirements of reservoir simulation. It highlights various sources of errors in handling such data. It also presents guidelines for preserving data integrity, with recommendations for data processing that does not tarnish the knowledge-based approach.

Chapter 4 presents the solutions to some of the most difficult problems in reservoir simulation. It gives examples of solutions without linearization and elucidates how the knowledge-based approach eliminates the possibility of coming across the spurious solutions that are common in the conventional approach. It highlights the advantage of solving governing equations without linearization and delineates the degree of error committed through linearization, as is done in the conventional approach.

Chapter 5 presents a complete formulation of black oil simulation for both isothermal and non-isothermal cases, using the engineering approach. It demonstrates the simplicity and clarity of the engineering approach.

Chapter 6 presents a complete formulation of compositional simulation, using the engineering approach. It shows how very complex and long governing equations are amenable to solutions without linearization using the knowledge-based approach.

Chapter 7 presents a comprehensive formulation of the material balance equation (MBE) using the memory concept. Solutions of selected problems are also offered in order to demonstrate the need for recasting the governing equations using fluid memory. This chapter shows that a significant error can be committed in reserve calculation and reservoir behavior prediction if the comprehensive formulation is not used.

Chapter 8 presents formulations using memory functions. Such a modeling approach is the essence of emulation of reservoir phenomena.

Chapter 9 uses the example of miscible displacement as an effort to model enhanced oil recovery (EOR). A new solution technique is presented and its superiority in handling the problem of viscous fingering is discussed.

Chapter 10 shows how the essence of emulation is to include the entire memory function of each variable concerned. The engineering approach is used to complete the formulation.

Chapter 11 highlights the future needs of the knowledge-based approach. A new combined mass and energy balance formulation is presented. With the new formulation, various natural phenomena related to petroleum operations are modeled. It is shown that with this formulation one would be able to determine the true cause of global warming, which in turn would help develop sustainable petroleum technologies. Finally, this chapter shows how the criterion (trigger) is affected by the knowledge-based approach. This caps the argument that the knowledge-based approach is crucial for decision making.

Chapter 12 shows how to model unconventional reservoirs. Various techniques and new flow equations are presented in order to capture physical phenomena that are prevalent in such reservoirs.

Chapter 13 presents the general conclusions of the book.

Chapter 14 is the list of references.

Appendix A presents the manual for the 3D, 3-phase reservoir simulation program. This program is attached to the book in the form of a CD.

Chapter 2

Reservoir Simulation Background

The Information Age is synonymous with Knowledge. However, if proper science is not used, information alone cannot guarantee transparency. Transparency is a pre-requisite of Knowledge (with a capital-K).

Proper science requires thinking or imagination with conscience, the very essence of humanity. Imagination is necessary for anyone wishing to make decisions based on science, and it always begins with visualization – actually, another term for simulation. There is a commonly held belief that physical experimentation precedes scientific analysis, but the fact of the matter is that the simulation has to be worked out and visualized even before designing an experiment. This is why the petroleum industry puts so much emphasis on simulation studies. Similarly, the petroleum industry is known to be the biggest user of computer models. Unlike other large-scale simulations, such as space research and weather models, petroleum models do not have the option of being verified with real data. Because petroleum engineers do not have the luxury of launching a ‘reservoir shuttle’ or a ‘petroleum balloon’ to roam around the reservoir, the task of modeling is the most daunting. Indeed, from the advent of computer technology, the petroleum industry pioneered the use of computer simulations in virtually all aspects of decision-making. Since the golden era of the petroleum industry, a very significant amount of research money has been spent to develop some of the most sophisticated mathematical models ever used. Even as the petroleum industry passes through its “middle age” in a business sense and no longer carries the reputation of being the ‘most aggressive investor in research’, oil companies continue to spend liberally on reservoir simulation studies and even on developing new simulators.

2.1 Essence of Reservoir Simulation

Today, practically all aspects of reservoir engineering problems are solved with reservoir simulators, ranging from well testing to prediction of enhanced oil recovery. For every application, however, there is a custom-designed simulator. Even though, quite often, ‘comprehensive’, ‘all-purpose’, and other denominations are used to describe a company simulator, every simulation study is a unique process, starting from the reservoir description and ending with the final analysis of results. Simulation is the art of combining physics, mathematics, reservoir engineering, and computer programming to develop a tool for predicting hydrocarbon reservoir performance under various operating strategies.

Figure 2.1 depicts the major steps involved in the development of a reservoir simulator (Odeh, 1982). In this figure, the formulation step outlines the basic assumptions inherent to the simulator, states these assumptions in precise mathematical terms, and applies them to a control volume in the reservoir. Newton’s approximation is used to render these control volume equations into a set of coupled, nonlinear partial differential equations (PDEs) that describe fluid flow through porous media (Ertekin et al., 2001). These PDEs are then discretized, giving rise to a set of nonlinear algebraic equations. Taylor series expansion is used to discretize the governing PDEs. Even though this procedure has been the standard in the petroleum industry for decades, only recently did Abou-Kassem (2007) point out that there is no need to go through this process of expressing the equations as PDEs, followed by discretization. In fact, by setting up the algebraic equations directly, one can make the process simple and yet maintain accuracy (Mustafiz et al., 2008). The PDEs derived during the formulation step, if solved analytically, would give reservoir pressure, fluid saturations, and well flow rates as continuous functions of space and time. Because of the highly nonlinear nature of these PDEs, analytical techniques cannot be used and solutions must be obtained with numerical methods.

Figure 2.1 Major steps involved in reservoir simulation with highlights of knowledge modeling.

In contrast to analytical solutions, numerical solutions give the values of pressure and fluid saturations only at discrete points in the reservoir and at discrete times. Discretization is the process of converting the PDE into algebraic equations. Several numerical methods can be used to discretize a PDE. The most common approach in the oil industry today is the finite-difference method. To carry out discretization, a PDE is written for a given point in space at a given time level. The choice of time level (old time level, current time level, or the intermediate time level) leads to the explicit, implicit, or Crank-Nicolson formulation method. The discretization process results in a system of nonlinear algebraic equations. These equations generally cannot be solved with linear equation solvers, and linearization of such equations becomes a necessary step before solutions can be obtained. Well representation is used to incorporate fluid production and injection into the nonlinear algebraic equations. Linearization involves approximating nonlinear terms in both space and time. Linearization results in a set of linear algebraic equations. Any one of several linear equation solvers can then be used to obtain the solution. The solution comprises the pressure and fluid saturation distributions in the reservoir and the well flow rates. Validation of a reservoir simulator is the last step in developing a simulator, after which the simulator can be used for practical field applications. The validation step is necessary to make sure that no error was introduced in the various steps of development or in computer programming.
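As an illustration of the time-level choice mentioned above (a sketch added here for clarity, not taken from Odeh, 1982), consider the linear 1-D diffusivity equation of Section 2.2.4 discretized on a uniform grid, with $p_i^n$ the pressure of block $i$ at time level $n$ and $\lambda = k\,\Delta t/(\phi\,\mu\,c_t\,\Delta x^2)$:

explicit (old time level): $p_i^{n+1} = p_i^{n} + \lambda\left(p_{i+1}^{n} - 2p_i^{n} + p_{i-1}^{n}\right)$

implicit (current time level): $p_i^{n+1} - \lambda\left(p_{i+1}^{n+1} - 2p_i^{n+1} + p_{i-1}^{n+1}\right) = p_i^{n}$

Crank-Nicolson (intermediate time level): $p_i^{n+1} - \tfrac{\lambda}{2}\left(p_{i+1}^{n+1} - 2p_i^{n+1} + p_{i-1}^{n+1}\right) = p_i^{n} + \tfrac{\lambda}{2}\left(p_{i+1}^{n} - 2p_i^{n} + p_{i-1}^{n}\right)$

Only the explicit form can be marched forward block by block from known values; the implicit and Crank-Nicolson forms couple the unknowns and therefore require a system solve at every time step, which is where the linearization and linear-solver steps described above enter once the coefficients are nonlinear.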

It is possible to bypass the step of formulating the PDE and directly express the fluid flow equations in the form of nonlinear algebraic equations, as pointed out in Abou-Kassem et al. (2006). In fact, by setting up the algebraic equations directly, one can make the process simple and yet maintain accuracy. This approach is termed the “Engineering Approach” because it is closer to the engineer’s thinking and to the physical meaning of the terms in the flow equations. Both the engineering and mathematical approaches treat boundary conditions with the same accuracy if the mathematical approach uses second order approximations. The engineering approach is simple and yet general and rigorous.

There are three methods available for the discretization of any PDE: the Taylor series method, the integral method, and the variational method (Aziz and Settari, 1979). The first two methods result in the finite-difference method, whereas the third results in the variational method. The “Mathematical Approach” refers to the methods that obtain the nonlinear algebraic equations by deriving and then discretizing the PDEs; developers of simulators have relied heavily on mathematics in this approach. A new approach that derives the finite-difference equations without going through the rigor of PDEs and discretization, and that uses fictitious wells to represent boundary conditions, has recently been presented by Abou-Kassem (2007). This is the Engineering Approach described above. In addition to the advantages already noted, it results in the same finite-difference equations for any hydrocarbon recovery process. Because the engineering approach is independent of the mathematical approach, it reconfirms the use of central differencing in space discretization and highlights the assumptions involved in choosing a time level in the mathematical approach.
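To make the idea of writing the block equations directly more concrete, the following minimal sketch (our illustration, not Abou-Kassem’s code or the book’s simulator) assembles the algebraic equations for single-phase, slightly compressible 1-D flow from interblock transmissibilities, with a fictitious well used as one simple way to impose a constant-pressure boundary. All grid and property values are hypothetical placeholders.

import numpy as np

# Block equation written directly for each block i (implicit in time):
#   T*(p[i-1]-p[i]) + T*(p[i+1]-p[i]) + q_i = (V*phi*ct/dt)*(p[i]-p_old[i])
nx, dx, A = 10, 50.0, 200.0                 # blocks, block length (m), cross-section (m^2)
k, mu, phi, ct = 1.0e-13, 1.0e-3, 0.2, 1.0e-9   # permeability (m^2), Pa.s, porosity, 1/Pa
dt, nsteps = 86400.0, 30                    # one-day time steps
T = k * A / (mu * dx)                       # interblock transmissibility
acc = dx * A * phi * ct / dt                # accumulation coefficient

p = np.full(nx, 20.0e6)                     # initial pressure (Pa)
p_bnd = 15.0e6                              # boundary pressure, imposed via a fictitious well

for _ in range(nsteps):
    M = np.zeros((nx, nx))
    b = acc * p.copy()                      # right-hand side from the old time level
    for i in range(nx):
        M[i, i] += acc
        if i > 0:
            M[i, i] += T
            M[i, i - 1] -= T
        if i < nx - 1:
            M[i, i] += T
            M[i, i + 1] -= T
    # fictitious well in block 0 ties that block to the boundary pressure
    M[0, 0] += 2.0 * T
    b[0] += 2.0 * T * p_bnd
    p = np.linalg.solve(M, b)

Because this toy problem is linear, the resulting system can be solved directly at each step; for the nonlinear multiphase equations treated later in the book, the same block equations would simply be retained in nonlinear form.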

2.2 Assumptions Behind Various Modeling Approaches

Reservoir performance is traditionally predicted using three methods, namely 1) analogical, 2) experimental, and 3) mathematical. The analogical method consists of using the properties of mature reservoirs that are similar to the target reservoir to predict the behavior of that reservoir. This method is especially useful when the available data are limited. Data from a reservoir in the same geologic basin or province may be applied to predict the performance of the target reservoir. Experimental methods measure reservoir characteristics in laboratory models and scale these results to the entire hydrocarbon accumulation. The mathematical method applies basic conservation laws and constitutive equations to formulate the behavior of the flow inside the reservoir and the other characteristics in mathematical notation and formulations.

The two basic equations are the material balance equation, or continuity equation, and the equation of motion, or momentum equation. These two equations are expressed for the different phases of flow in the reservoir and are combined to obtain a single equation for each phase of the flow. However, it is necessary to apply other equations or laws for modeling enhanced oil recovery. As an example, the energy balance equation is necessary to analyze reservoir behavior for steam injection or in situ combustion.

The mathematical model traditionally includes the material balance equation, decline curves, statistical approaches, and also analytical methods. Darcy’s law is used in almost all available reservoir simulators to model fluid motion. The numerical computations of the derived mathematical model are mostly based on the finite difference method. All these models and approaches are based on several assumptions and approximations that may produce erroneous results and predictions.

2.2.1 Material Balance Equation

The material balance equation is known to be the classical mathematical representation of the reservoir. According to this principle, the amount of material remaining in the reservoir after a production time interval is equal to the amount of material originally present in the reservoir minus the amount of material removed from the reservoir due to production plus the amount of material added to the reservoir due to injection.
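In symbols, and purely as a restatement of the word statement above (notation introduced here for illustration), with $N$ the material originally in place, $N_p$ the cumulative amount produced, and $N_i$ the cumulative amount injected,

$N_{\text{remaining}} = N - N_p + N_i,$

and the classical MBE follows by expressing each of these terms through pressure-dependent formation volume factors.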

This equation describes the fundamental physics of the production scheme of the reservoir. There are several assumptions in the material balance equation

The advent of advanced well logging techniques, core-analysis methods, and reservoir characterization tools has eliminated (or at least created an opportunity to eliminate) the guesswork in volumetric methods. In the absence of production history, volumetric methods offer a proper basis for the estimation of reservoir performance.

2.2.2 Decline Curve

The rate of oil production decline generally follows one of the following mathematical forms: exponential, hyperbolic and harmonic. The following assumptions apply to the decline curve analysis

Figure 2.2 renders a typical portrayal of decline curve fitting. Note that all three declining curves fit closely during the first 2 years of the production period, for which data are available. However, they produce quite different forecasts for the later period of prediction. In the old days, this was more difficult to discern because a logarithmic scale was often used, which skews the data even more. If any decline curve analysis is to be used for estimating reserves and subsequent performance prediction, the forecast needs to reflect a “reasonable certainty” standard, which is almost certainly absent in new fields. This is why modern-day use of the decline curve method is limited to generating multiple forecasts, with sensitivity data that create a boundary of forecast results (or cloud points), rather than exact numbers.
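For reference, the three forms mentioned above are commonly written in Arps form (symbols defined here for illustration), with $q_i$ the initial rate, $D_i$ the initial decline rate, and $b$ the decline exponent:

exponential: $q(t) = q_i\,e^{-D_i t}$; hyperbolic: $q(t) = q_i\,(1 + b D_i t)^{-1/b}$ with $0 < b < 1$; harmonic: $q(t) = q_i/(1 + D_i t)$ (the $b = 1$ limit).

The divergence of the three forecasts in Figure 2.2 follows directly from these expressions: they agree closely at early time but differ increasingly at late time.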

Figure 2.2 Decline curve for various forms.

The usefulness of decline curves is limited under the most prevalent scenarios of production curtailment, as well as for very low-productivity (marginal) reservoirs that exhibit constant production rates. Also, for unconventional reservoirs, production decline curves have little significance.

2.2.3 Statistical Method

In this method, the past performance of numerous reservoirs is statistically accounted for to derive the empirical correlations, which are used for future predictions. It may be described as a ‘formal extension of the analogical method’. The statistical methods have the following assumptions:

In addition, Islam et al. (2015a) recently pointed out a more subtle, yet far more important, shortcoming of statistical methods. Practically all statistical methods assume that correlating two or more objects on the basis of a limited number of tangible expressions makes it appropriate to comment on the underlying science. This is equivalent to stating that if the effects show a reasonable correlation, the causes can also be correlated.

As Islam et al. (2015a) pointed out, this poses a serious problem because, in the absence of a time-space correlation (the pathway rather than the end result), anything can be correlated with anything, rendering the whole process of scientific investigation spurious. They make their point by showing the correlation between increases in global warming and a decrease in the number of pirates. The absurdity of the statistical process becomes evident by drawing this analogy.

Islam et al. (2014) pointed out another severe limitation of the statistical method. Even though they commented on the polling techniques used in various surveys, their comments are equally applicable in any statistical modeling. They wrote: “Frequently, opinion polls generalize their results to a U.S. population of 300 million or a Canadian population of 32 million on the basis of what 1,000 or 1,500 ‘randomly selected’ people are recorded to have said or answered. In the absence of any further information to the contrary, the underlying theory of mathematical statistics and random variability assumes that the individual selected ‘perfectly’ randomly is no more nor less likely to have any one opinion over any other. How perfect the randomness may be determined from the ‘confidence’ level attached to a survey, expressed in the phrase that describes the margin of error of the poll sample lying plus or minus some low single-digit percentage “nineteen times out of twenty”, i.e., a confidence level of 0.95. Clearly, however, assuming — in the absence of any knowledge otherwise — a certain state of affairs to be the case, viz., that the sample is random and no one opinion is more likely than any other, seems more useful for projecting horoscopes than scientifically assessing public opinion.”

Figure 2.3 Using statistical data to develop a theoretical correlation can make an aphenomenal model appealing, depending on which conclusion would appeal to the audience.

The above difficulty with statistical processing of data was highlighted through the publication of the following correlation between the number of pirates and global temperature, with the slogan: join piracy, save the planet.

Scientifically, numerous paradoxes appear owing to spurious assumptions that are embedded in the models for which a statistical treatment is being used. One such paradox is Simpson’s paradox for continuous data (Figure 2.4), in which a positive trend appears for each of two separate groups (blue and red), while a negative trend (black, dashed) appears when the data are combined. In probability and statistics, Simpson’s paradox, or the Yule–Simpson effect, is a paradox in which a trend that appears in different groups of data disappears when these groups are combined, and the reverse trend appears for the aggregate data. This result is often encountered in social-science and medical-science statistics. Islam et al. (2015) discussed this phenomenon as something that is embedded in Newtonian calculus, which allows taking an infinitely small differential and turning it into any desired integrated value, while giving the impression that a scientific process has been followed. Furthermore, Khan and Islam (2012) showed that a true trendline should contain all known parameters. Simpson’s paradox can be avoided by including full historical data, followed by scientifically true processing (Islam et al., 2014a). In the case of reservoir simulation, the inclusion of full historical data would be equivalent to including memory effects for both fluid and rock systems. This is discussed in later chapters.
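The effect is easy to reproduce. The short sketch below (illustrative synthetic data only, not drawn from any reservoir study) builds two groups that each show a positive internal trend while the combined data show a negative one:

import numpy as np

# Simpson's paradox for continuous data (cf. Figure 2.4), with synthetic groups.
rng = np.random.default_rng(0)

x1 = rng.uniform(0, 5, 200)                       # group 1
y1 = 10.0 + 1.0 * x1 + rng.normal(0, 0.5, 200)    # positive trend, high baseline
x2 = rng.uniform(5, 10, 200)                      # group 2, shifted down
y2 = 1.0 * x2 + rng.normal(0, 0.5, 200)           # positive trend, low baseline

slope1 = np.polyfit(x1, y1, 1)[0]                 # positive (about +1)
slope2 = np.polyfit(x2, y2, 1)[0]                 # positive (about +1)
slope_all = np.polyfit(np.concatenate([x1, x2]),
                       np.concatenate([y1, y2]), 1)[0]   # negative for the pooled data
print(slope1, slope2, slope_all)

Fitting each group separately recovers the positive trend; only the pooled fit reverses it, which is the sense in which discarding the underlying grouping (the pathway) misleads the analysis.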

Figure 2.4 Simpson’s paradox highlights the problem of targeted statistics.

2.2.4 Analytical Methods

In most cases, the fluid flow inside the porous rock is too complicated to solve analytically, so these methods can be applied only to simplified models. The problem in question is the solution of the diffusivity equation (Eq. 2.1), where p is the pressure, ϕ the porosity, μ the viscosity, ct the total compressibility, and k the permeability. This equation is obtained by applying a mass balance over a control volume; as such, all the implicit assumptions of the material balance equation apply.

(2.1) $\nabla^2 p = \dfrac{\phi\,\mu\,c_t}{k}\,\dfrac{\partial p}{\partial t}$

Solution of the diffusivity equation requires an initial condition and two boundary conditions. In addition, the assumptions of a homogeneous, isotropic formation and 100% saturated pore space are invoked. In order to keep the equation linear, so that the problem is amenable to analytical solutions, simple geometries such as linear, radial, and cylindrical are considered, in addition to assuming the validity of Darcy’s law and a uniform equation of state. Notwithstanding these restrictions, analytical methods retain some important advantages over numerical ones. Analytical methods provide exact solutions, continuous in space and time, while numerical codes work with discrete points in the domain and progressive steps in time. Analytical solutions permit straightforward inspection of parametric variations without requiring a complete numerical solution. Also, analytical solutions are often treated as benchmarks for numerical code validation. It is also true that most numerical solutions linearize the governing equations as well, albeit after casting them in discretized forms.
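A familiar example of such a benchmark, quoted here for illustration (it is the classical constant-rate line-source solution, not a result derived in this book), is the solution of the diffusivity equation in radial coordinates for an infinite-acting, homogeneous reservoir of thickness $h$ produced at a constant rate $q$, in consistent units:

$p(r,t) = p_i - \dfrac{q\,\mu}{4\pi k h}\,E_1\!\left(\dfrac{\phi\,\mu\,c_t\,r^2}{4 k t}\right),$

where $p_i$ is the initial pressure and $E_1$ is the exponential integral. Solutions of this type are exactly those used as benchmarks for numerical code validation and as the basis of classical well-test analysis.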

2.2.5 Finite-Difference Methods

Finite-difference calculus is a mathematical technique that may be used to approximate values of functions and their derivatives at discrete points, where the actual values are not otherwise known. The history of differential calculus dates back to the time of Leibniz and Newton. In this concept, the derivative of a continuous function is related to the function itself. Newton’s formula is the core of differential calculus and suffers from the approximation that the magnitude and direction change independently of one another. There is no problem in having separate derivatives for each component of the vector, or in superimposing their effects separately and regardless of order. That is what mathematicians mean when they describe or discuss Newton’s derivative being used as a “linear operator”.

Following this comes Newton’s difference-quotient formula. When the value of a function alone is inadequate to solve a problem, the rate at which the function changes sometimes becomes useful; therefore, derivatives are also important in reservoir simulation. In Newton’s difference-quotient formula, the derivative of a continuous function is obtained. This method relies implicitly on the notion of approximating instantaneous moments of curvature, or infinitely small segments, by means of straight lines. This alone should have tipped everyone off that this derivative is a linear operator precisely because, and to the extent that, it examines change over time (or distance) within an already established function (Islam, 2006). The function is applicable to an infinitely small domain, making it non-existent. When integration is performed, however, this non-existent domain is assumed to extend to a finite and realistic domain, making the entire process questionable.

The publication of Principia Mathematica by Sir Isaac Newton at the end of the 17th century remains one of the most significant developments of European-centered civilization. It is also evident that some of the most important assumptions of Newton were just as aphenomenal (Zatzman and Islam, 2007a). By examining the first assumptions involved, Zatzman and Islam (2007) were able to characterize Newton’s laws as aphenomenal for three reasons: they 1) remove time-consciousness; 2) recognize the role of ‘external force’; and 3) do not include the role of the first premise. In brief, Newton’s laws ignore, albeit implicitly, all intangibles from nature science. Zatzman and Islam (2007) identified the most significant contribution of Newton in mathematics as the famous definition of the derivative as the limit of a difference quotient involving changes in space or in time as small as anyone might like, but not zero, viz.

(2.2) $f'(x) = \lim\limits_{\Delta x \to 0} \dfrac{f(x + \Delta x) - f(x)}{\Delta x}$

Without regard to further conditions being defined as to when and where differentiation would produce a meaningful result, it was entirely possible to arrive at “derivatives” that would generate values in the range of a function at points of the domain where the function was not defined or did not exist. Indeed, it took another century following Newton’s death before mathematicians would work out the conditions – especially the requirements for continuity of the function to be differentiated within the domain of values – under which its derivative (the name given to the ratio-quotient generated by the limit formula) could be applied and yield reliable results. Kline (1972) detailed the problems involved in this breakthrough formulation of Newton. However, no one in the past proposed an alternative to this differential formulation, at least not explicitly. The following figure (Figure 2.5) illustrates this difficulty.

Figure 2.5 Economic wellbeing is known to fluctuate with time (adapted from Zatzman et al., 2009).

In this figure, an economic index (it may be one of many indicators) is plotted as a function of time. In nature, all functions are very similar: they have local trends as well as a global trend (in time). One can imagine how the slope of this graph over a very small time frame would be quite arbitrary, and how devastating it would be to extrapolate that slope to the long term. One can easily show that the trend emerging from Newton’s difference quotient would be diametrically opposite to the real trend.

The finite difference methods are extensively applied in petroleum industry to simulate the fluid flow inside the porous medium. The following assumptions are inherent to the finite difference method.

1. The relationship between the derivative and the finite difference operators, e.g., the forward difference operator, the backward difference operator, and the central difference operator, is established through the Taylor series expansion. The Taylor series expansion is the basic element in providing the differential form of a function. It converts a function into a polynomial of infinite order. This provides an approximate description of a function by considering a finite number of terms and ignoring the higher order parts. In other words, it assumes that a relationship between the operators for discrete points and the operators of the continuous functions is acceptable.
2. The relationship involves truncation of the Taylor series of the unknown variables after a few terms. Such truncation leads to accumulation of error. Mathematically, it can be shown that most of the error occurs in the lowest order terms (a simple numerical illustration follows this list).
a. The forward difference and the backward difference approximations are the first order approximations to the first derivative.
b. Although the approximation to the second derivative by central difference operator increases accuracy because of a second order approximation, it still suffers from the truncation problem.
c. As the spacing size is reduced, the truncation error approaches zero more rapidly. Therefore, a higher order approximation can maintain the same level of accuracy with fewer measurements or discrete points; however, having less information at discrete points might be risky as well.
3. The solutions of the finite difference equations are obtained only at the discrete points. These discrete points are defined either according to block-centered or point distributed grid system. However, the boundary condition, to be specific, the constant pressure boundary, may appear important in selecting the grid system with inherent restrictions and higher order approximations.
4. The solutions obtained for grid-points are in contrast to the solutions of the continuous equations.
5. In the finite difference scheme, the local truncation error, or local discretization error, is not readily quantifiable because the calculation involves both continuous and discrete forms. Such difficulty can be overcome when the mesh size or the time step, or both, are decreased, leading to a reduction in the local truncation error. However, at the same time the number of computational operations increases, which eventually increases the computer round-off error.
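The order-of-accuracy statements in items 1 and 2 above are easy to verify numerically. The following sketch (added here as an illustration, using sin as a stand-in for any smooth function) compares the forward and central difference approximations of a first derivative as the spacing is refined:

import numpy as np

# Truncation error of forward (first-order) vs. central (second-order)
# difference approximations of f'(x) at x = 1 for f = sin.
f, x = np.sin, 1.0
exact = np.cos(x)
for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    fwd = (f(x + h) - f(x)) / h                  # error ~ O(h)
    cen = (f(x + h) - f(x - h)) / (2.0 * h)      # error ~ O(h^2)
    print(f"h={h:.0e}  forward error={abs(fwd - exact):.2e}  "
          f"central error={abs(cen - exact):.2e}")

Reducing h by a factor of ten reduces the forward-difference error by roughly a factor of ten but the central-difference error by roughly a factor of one hundred, consistent with their first- and second-order truncation errors.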

2.2.6 Darcy’s Law

Because practically all reservoir simulation studies involve the use of Darcy’s Law, it is important to understand the assumptions behind this momentum balance equation. The following assumptions are inherent to Darcy’s Law and its extension:

2.3 Recent Advances in Reservoir Simulation

The recent advances in reservoir simulation may be viewed as:

2.3.1 Speed and Accuracy

The need for new equations in oil reservoirs arises mainly for fractured reservoirs, as they constitute the largest departure from Darcy flow behavior. Advances have been made on many fronts. As the speed of computers increased, following Moore’s law (doubling every 12 to 18 months), memory also increased. For reservoir simulation studies, this translated into higher accuracy through the inclusion of higher order terms in the Taylor series approximation, as well as a greater number of grid blocks, reaching as many as a billion blocks.
