
TORUS 1 – Toward an Open Resource Using Services

Cloud Computing for Environmental Data

Edited by

Dominique Laffly

Wiley Logo

Preface: Why TORUS? Toward an Open Resource Using Services, or How to Bring Environmental Science Closer to Cloud Computing

Geography, Ecology, Urbanism, Geology and Climatology – in short, all environmental disciplines are inspired by the great paradigms of Science: they were first descriptive before evolving toward the systemic and complexity. The methods followed the same evolution: from the inductive reasoning of the initial observations, we moved to the deductive reasoning of predictive models based on learning. For example, the Bayesian approach is preferred in this book (see Volume 1, Chapter 5), but random trees, neural networks, classifications and data reductions could all have been developed. In the end, all the methods of artificial intelligence (AI) are ubiquitous today in the era of Big Data. We are not unaware, however, that the term artificial intelligence, coined at Dartmouth in 1956 by John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon, is, after a long period of neglect, now at the heart of the future issues of the exploitation of massive data (just like the functional and logical languages that accompanied the theory: LISP, 1958; PROLOG, 1977; and SCALA today – see Chapter 8).

All the environmental disciplines are confronted with this reality of massive data, with the rule of the 3+2Vs: Volume, Velocity ("Vitesse" in the original French), Variety, plus Veracity and Value. Every five days – or even less – and for the optical remote sensing data of the Sentinel-2A and 2B satellites alone, we have complete coverage of the Earth at a spatial resolution of 10 m for a dozen wavelengths. How do we integrate all this? How do we rethink the environmental disciplines when we must now consider, at the pixel scale (10 m), an overall analysis of 510 million km2, or more than 5 trillion pixels, of which 1.53 trillion are for land only? And, more importantly, how do we validate automatic processes and the accuracy of the results?
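The order of magnitude is easy to check with a back-of-envelope computation: a square 10 m pixel covers 100 m2, i.e. 10,000 pixels per km2, applied to the surface figures quoted above.

```python
# Back-of-envelope check of the global pixel count for a 10 m mosaic,
# using the surface figures quoted above (510 million km2 in total,
# 153 million km2 of land).
EARTH_SURFACE_KM2 = 510e6
LAND_SURFACE_KM2 = 153e6
PIXEL_SIDE_M = 10

pixels_per_km2 = (1000 / PIXEL_SIDE_M) ** 2  # 10,000 pixels per km2

total_pixels = EARTH_SURFACE_KM2 * pixels_per_km2
land_pixels = LAND_SURFACE_KM2 * pixels_per_km2
print(f"global: {total_pixels:.2e} pixels, land only: {land_pixels:.2e}")
```

And that is for a single acquisition cycle of a single sensor pair; the volumes accumulate with every revisit and every band.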


Figure P.1. At the beginning of AI: the Dartmouth Summer Research Project, 1956.

Source: http://www.oezratty.net/wordpress/2017/semantique-intelligence-artificielle/

Add to this social network data, Internet of Things (IoT) data and archive data, for many topics such as Smart Cities, and it is not surprising that the environmental disciplines are interested in cloud computing.

Before considering the technique: why this shape, why a cloud? It would seem that, to represent a connection node of a network, we have for the last 50 years drawn a potatoid freehand which, once drawn, took the form of a cloud. Figure P.2 gives a perfect illustration on the left, while on the right we see that the cloud is now the norm (screenshot offered by a search engine for the keywords: Internet and network).

What is cloud computing? Let us remember that, even before the term was coined, cloud computing was based on networks (see Chapter 4) and the Internet: "since the 50s, users have accessed, from their terminals, applications running on central systems" (Wikipedia). The cloud, as we understand it today, has evolved considerably since the 2000s; it consists of the pooling of remote computing resources to store data and to use services – that is, software – dynamically, via browser interfaces.


Figure P.2. From freehand potatoid to the cloud icon. The first figure is a schematic illustration of a distributed SFPS switch. For a color version of this figure, see www.iste.co.uk/laffly/torus1.zip

This answers the needs of the environmental sciences overwhelmed by massive data flows: everything is stored in the cloud, everything is processed in the cloud, and the end-users retrieve the results they expect according to their needs. It is no wonder that, one after the other, Google and NASA offered in December 2016 – mid-term of TORUS! – cloud-based solutions for the management and processing of satellite data: Google Earth Engine and NASA Earth Exchange.

But how do you do it? Why is it preferable – or not – to HPC (High-Performance Computing) and grids? How do we evaluate "Cloud & High Scalability Computing" versus "Grid & High Performance Computing"? What are the costs? How do you transfer the applications commonly used in environmental science to the cloud? What is the added value for the environmental sciences? In short, how does it work?

All these questions and more are at the heart of the TORUS program, developed so that we may learn from each other, understand each other and communicate in a mastered common language: between geoscience, computer science and information science; among the geosciences themselves; and between the computer and information sciences. TORUS is not a research program. It is an action that aims to bring together (too) often remote scientific communities, in order to bridge the gap that now separates contemporary computing from most environmental disciplines: one evolving at speeds that the others cannot follow, one greedy for the data that the others provide, one able to offer technical solutions to the scientific questions being developed by the others, and so on.

TORUS is also the result of multiple scientific collaborations initiated in 2008–2010: between the geographer and the computer scientist, between France and Vietnam with an increasing diversity of specialties involved (e.g. remote sensing and image processing, mathematics and statistics, optimization and modeling, erosion and geochemistry, temporal dynamics and social surveys) all within various scientific and university structures (universities, engineering schools, research institutes – IRD, SFRI and IAE Vietnam, central administrations: the Midi-Pyrénées region and Son La district, France–Vietnam partnership) and between research and higher education through national and international PhDs.

Naturally, I would say, the Erasmus+ capacity-building program of the European Union appeared to be a solution suited to our project:

“The objectives of the Capacity Building projects are: to support the modernization, accessibility and internationalization of higher education in partner countries; improve the quality, relevance and governance of higher education in partner countries; strengthen the capacity of higher education institutions in partner countries and in the EU, in terms of international cooperation and the process of permanent modernization in particular; and to help them open up to society at large and to the world of work in order to reinforce the interdisciplinary and transdisciplinary nature of higher education, to improve the employability of university graduates, to give the European higher education more visibility and attractiveness in the world, foster the reciprocal development of human resources, promote a better understanding between the peoples and cultures of the EU and partner countries.”1

In 2015, TORUS – funded to the tune of 1 million euros for three years – was one of only 120 projects selected from a pool of more than 575 applications. The partnership (Figure P.3) brings together the University of Toulouse 2 Jean Jaurès (coordinator – FR), the International School of Information Processing Sciences (EISTI – FR), the University of Ferrara in Italy, the Vrije Universiteit Brussel, the Vietnam National University in Hanoi, Nong Lam University in Ho Chi Minh City and two Thai institutions: the Asian Institute of Technology (AIT) in Pathumthani and Walailak University in Nakhon Si Thammarat.


Figure P.3. The heart of TORUS, partnership between Asia and Europe. For a color version of this figure, see www.iste.co.uk/laffly/torus1.zip

With an equal share between Europe and Asia, 30 researchers, teacher-researchers and engineers are involved in learning from each other during these three years, punctuated by eight workshops between France, Vietnam, Italy, Thailand and Belgium. Finally, after the installation of the two servers in Asia (Asian Institute of Technology – Thailand; and Vietnam National University Hanoi – Vietnam), more than 400 cores will work in unison with TORUS to bring cloud computing closer to the environmental sciences. More than 400 computer hearts beat in unison for TORUS, as well as those of Nathalie, Astrid, Eleonora, Ann, Imeshi, Thanh, Sukhuma, Janitra, Kim, Daniel, Yannick, Florent, Peio, Alex, Lucca, Stefano, Hichem, Hung(s), Thuy, Huy, Le Quoc, Kim Loi, Agustian, Hong, Sothea, Tongchai, Stephane, Simone, Marco, Mario, Trinh, Thiet, Massimiliano, Nikolaos, Minh Tu, Vincent and Dominique.

To all of you, a big thank you.

Structure of the book

This book is divided into three volumes.

Volume 1 raises the problem of voluminous data in geosciences before presenting the main methods of analysis and computer solutions mobilized to meet them.

Volume 2 presents remote sensing, geographic information systems (GIS) and spatial data infrastructures (SDI) that are central to all disciplines that deal with geographic space.

Volume 3 is a collection of thematic application cases representative of the specificities of the teams involved in TORUS and which motivated their needs in terms of cloud computing.

Dominique LAFFLY

January 2020

  1. http://www.agence-erasmus.fr/page/developpement-des-capacites.

PART 1
Integrated Analysis in Geography: The Way to Cloud Computing

Introduction to Part 1

What is Geography? Literally, "writing of the Earth". The Larousse dictionary gives the following definition:

“Science which has for object the description and the explanation of the current aspect, natural and human, of the surface of the Earth.”

And the Robert dictionary: "Science that studies and describes the Earth, as the habitat of human beings and all living organisms."

It is therefore a Science, one that has had its roots in China, Egypt, the Inca Empire and Greece for thousands of years, because "all societies have constructed an idea of their situation in the world and present a cosmogony that is, at the same time, a great account of the origins" (ibid.). The map was always a central element to accompany the thought of the representation of the world, and to manage and act on the territory. All the thinkers and scientists of the time were geographers, or at least were geographers at the same time as they were philosophers, anthropologists, mathematicians, biologists or astronomers – Herodotus, Eratosthenes, Qian, Polo, Ptolemy, Al Idrisi, Al-Khwarizmi, Mercator, Cassini, von Humboldt, Darwin. Today, perhaps, we are all still geographers? Maybe most of us do not know it, or do not (especially) want to claim it. Hence the initial question: what is Geography?

Geography is a Geoscience, one that is interested in the interactions of human societies with the geographical space – "the environment" – the one not going without the other in building the landscapes, visible manifestations of interacting forces. For Geography, thinking space without the social is an aberration, just as thinking the social while denying space is. The spatialization of information is at the heart of Geography; the map – to put it simply – is the bedrock of geographic synthesis: concepts, methods and information to answer the central question "why here and now but not elsewhere?". It is not enough to superimpose layers, calculate an index and color shapes to make this cartographic synthesis and obtain a "good" map. We will see that, like all Geosciences, today's Geography, with the concepts and methods it mobilizes, is confronted with having to integrate massive data – Big Data. For this, it must evolve not only in its own paradigms but also in the mastery of analytical methods – artificial intelligence – and of the computer techniques dedicated to massive data – cloud computing.

“Is it permissible to assimilate the world to what we have seen and experienced? The common claim, as reprehensible as the refusal to dream – at least (for want of something better) – on the future! First: the question of recognizing the order of what is in the elements that the senses (or the tools that arm the senses) grasp is perhaps the very one of Philosophy in all its nobility. It has been said in Latin […] that all knowledge begins with what is sensible in the objects of nature: ‘Omnis cognito initium habet a naturalibus… vel: a sensibilibus.’ Beyond this knowledge, there is only mystery; and the very revelation given by God is meditated on the example of what we have known by the natural play of reason. It is necessary here that the statistician, the surveyor and the sociologist are modest! In seeking what we have always had to look for, each generation cannot have done more than its share: the question remains.”1

Introduction: the landscape as a system

A protean word, a little magical in geographical discourse – as Jean-Claude Wieber (1984)2 liked to say – the landscape is dear to geographers, although they have never truly defined a real status for it within the discipline. Is it reasonable, after all, when a word has so many different meanings? The same author proposes that we refine its content:

“Is the use of the word Landscape in this case an abuse of language? Probably not completely. No one would think of denying relief a fundamental role in the differentiation of landscapes […] by the influence it exerts on the aptitudes of the soils and the adaptations made of them by people and vegetation. In the same way, the examination of the Roman cadastres […] of an ancient organization of space which one can sometimes perceive or guess [is called] ‘landscape analysis’. In these two cases, we study directly, by the measurement of the processes, or indirectly, through the resulting traces, how work sets of forces produce the Landscape.”

The geographical envelope would therefore be the place of expression of all the landscapes, themselves considered as a whole that can be approached through instrumentalization, under the constraint of the data available to describe it. The consideration of the landscape is then partial and biased, and the information and the protocols of collection and analysis are at the heart of the analysis of landscapes considered as a system. E. Schwarz (1988)3 gives a concise definition of systemic analysis, which complements the Cartesian analytic approach:

“The systemic approach is a state of mind, a way of seeing the world […] looking for regularities (invariant), to identify structures, functions, processes, evolution, organization. [It] is characterized above all by taking into account the global nature of phenomena, their structure, their interactions, their organization and their own dynamics. […] The systemic brings together the theoretical, practical and methodological approaches to the study of what is recognized as too complex to be approached in a reductionist manner and which poses problems of borders, internal and external relations, structure, emerging laws or properties characterizing the system as such or problems of mode of observation, representation, modeling or simulation of a complex totality.”

Brossard and Wieber4 propose a conceptual diagram of a systemic definition of landscape (Figure I.1). Between production – the "physical" producing system – and consumption – the "social" user system – the landscape is expressed by what is visible – the "visible landscape" system – which is not reducible to either of the two previous subsystems. This specificity of the geographer, understanding the landscape so as to make sense of space, places them at the crossroads of multidisciplinary scientific paths:

“The specialists of other disciplines now know that ‘nature’ is never quite ‘natural’, or, conversely, that the analysis of social systems can no longer be considered detached from the environments in which they are located. Also, they very often want the intervention of geographers, in the field as in the processing of data provided by satellites; one cannot go without the other.”5

In fact, the satellite images mentioned by the author are not sufficient to describe landscapes. Other information is also available; its collection is essential, as is the methodological and technical mastery needed to ensure its analysis.


Figure I.1. In the early days of the “Systemic Landscape” (modified from Brossard and Wieber). For a color version of this figure, see www.iste.co.uk/laffly/torus1.zip

Thus chosen as a key concept, the landscape is an entry point for themes that have a practical impact. This concept is linked to an analysis method specific to the geographer and their needs to spatialize – in the sense of continuously covering the space. The landscape’s “signs” – information – allow for a quantitative approach that relies on the use of statistical and computer tools in search of the fundamental structures to, in a way, “replace the ‘visible complicated’ perceived landscapes by ‘the invisible simple’ spatial structure.”6

Introduction written by Dominique LAFFLY.

  1. Benzécri J.-P., "In memoriam… Pierre Bourdieu – L'@nalyse des données : histoire, bilan, projets, …, perspective", Revue MODULAD, no. 35, 2006.
  2. Wieber J.-C., "Étude du paysage et/ou analyse écologique ?", Travaux de l'institut géographique de Reims, nos 45–46, 1981.
  3. Schwarz É. (ed.), La révolution des systèmes. Une introduction à l'approche systémique, Éditions DelVal, 1988.
  4. Brossard T. and Wieber J.-C., "Essai de formalisation systémique d'un mode d'approche du paysage", Bulletin de l'association des géographes français, no. 468, pp. 103–111, 1981.
  5. Frémont A., "La télédétection spatiale et la géographie en France aujourd'hui", L'Espace géographique, no. 3, pp. 285–287, 1984.
  6. Perrin F., Les atomes : présentation et complément, Gallimard, Paris, 1970.

Conclusion to Part 1
Why Here But Not There?

We hope to have shown that Geography is a science that offers a specific way of looking at the world, one whose underpinning can be summarized by the following question: why here and now but not there? It is a primordial question (Figure C.1) – to which current issues around location-based information testify – common to many disciplines; and yet, at the heart of Geography, understood well beyond the French university, it is age-old. We like to quote the Jesuit Father Jean François who, in 1652, wrote in a book entitled The Science of Geography:

“Geography has had, until now, the job of distributing and enumerating the parts that compose the globe. Until now, this has been an art of memory rather than due to a discourse of reason. But the understanding, on the contrary, needs a master who teaches him to see and understand.”

Then there is the remarkable statement of A. Frémont in 1984, which remains topical:

"But geographers, and particularly French geographers – were they made for spatial remote sensing…? Without falling into excessive pessimism or a too sharp critical mind, it seems that the negative answer is imperative. […] It [Geography] now collects the disadvantages when its 'specialists' are considered technically underqualified. […] Faced with the current stakes, the geographers of the 1980s no longer seem to me to have the choice of hesitations, on pain of accompanying their discipline into the decline of archaisms. With computer science for data processing, automatic mapping and infographics, spatial remote sensing is undoubtedly one of the instruments of technological change in the discipline. […] Refusing these perspectives and the consequences they imply is most likely to mean resigning oneself definitively to provincial dimensions for the French school or schools."

We make A. Frémont's conclusion our own: "Faced with the current stakes, the geographers of the [2010s] no longer seem to me to have the choice of hesitations, on pain of accompanying their discipline into the decline of archaisms."


Figure C.1. The feeling of landscape (modified from Schuiten F. and Peeters B., The Invisible Border, Casterman Editions, 2002). For a color version of this figure, see www.iste.co.uk/laffly/torus1.zip

We have seen that these issues are those of massive data, artificial intelligence and cloud computing. Integrating these methods and mastering these techniques will contribute positively to the evolution of the questioning inherent in Geography. First and foremost, it is now unthinkable not to evolve in a multidisciplinary context that is ideally transdisciplinary. Mutual recognition and a common language are now the true keystones of science.

TORUS has been a program that has met all these expectations of contemporary science. And since we are all geographers, I now invite G. Bertrand1 to conclude:

“You have to use geography to cross other disciplines as long as you draw a path. As Antonio Machado says, ‘the way is done by walking’. We must consider that when we talk about landscape, the environment, development or territory, we always talk about the same subject. It is a set that can’t be used with a single methodology. It is a paradigm that takes into consideration all the elements and hybridizes the opposites (example: nature/society, individual/collective, ordinary/extra-ordinary).”

"The way is done by walking"… and what else are we doing here, by opening cloud computing and artificial intelligence to Geography, to delve deep into massive data? (Figure C.2)


Figure C.2. The ideal of geoscience in the age of Big Data and cloud computing. For a color version of this figure, see www.iste.co.uk/laffly/torus1.zip

How can we not conclude on cloud computing, Big Data, satellite imagery and social networks with "Anatomy of a Killing"2 (Figure C.3)? "From a simple amateur video, the BBC managed to identify the place, the date and the perpetrators of a massacre that occurred in Cameroon in July. A scientific survey that relies on Big Data, and allows journalists a new exploitation of images." (Le Monde, October 6, 2018). An illustration of datajournalism: "In this context, would the image become a database like any other?" This is what Karen Bastien, co-founder of WeDoData, an agency specializing in data visualization, argues: "A digital image is a set of pixels in which we can now detect a modification, a 're-work', whether in colorimetry, luminosity or otherwise; it has metadata that describe it and are attached to it, which can also be exploited. And if, until recently, images had to be analyzed at length by a human eye, the technologies that analyze them massively have now become accessible and make it possible to identify stronger correlations, because they no longer rely on one but on hundreds, thousands or even tens of thousands of images. Crossed with other databases, the video reviewed by the BBC has revealed many secrets." (ibid.)


Figure C.3. Datajournalism, or how journalists identified the murderers with the help of Big Data and artificial intelligence on the cloud – Where, When and Who? Unfortunately, we know Why (from the BBC, see footnote 2 of this chapter). For a color version of this figure, see www.iste.co.uk/laffly/torus1.zip

Conclusion written by Dominique LAFFLY.

  1. See http://cafe-geo.net/wp-content/uploads/CR-Paysage-22.10.03-.pdf.
  2. See https://www.youtube.com/watch?v=4G9S-eoLgX4.

PART 2
Basic Mathematical, Statistical and Computational Tools

PART 3
Computer Science

1
Geographical Information and Landscape, Elements of Formalization

“Using measures, observations and systems of knowledge inevitably means introducing the notion of representativity in various ways. It includes questions about sampling strategies, the nature of the data, their disaggregation and aggregation, the equations used to model, extrapolate or interpolate information (the same mathematical function can be used for one or the other of these methods)… Any reasoned approach in information analysis tries to integrate at best these different aspects of the measurement and their spatiotemporal representability.”1

The analysis of the landscape thus formulated implies four principles:

In geography, we are also confronted with the difficulty of linking specific, thematically specialized descriptions (endogenous punctual information) with general ones (exogenous areal information), the only ones amenable to taking the spatial continuum into account (Figure 1.1). To do this, within the framework of the systemic formulation of the landscape and the mode of analysis related to it, we present a formalization based on four key elements: point, trace, order and inference.


Figure 1.1. Elements of formalization of the landscape system. For a color version of this figure, see www.iste.co.uk/laffly/torus1.zip

Point: the basic spatial unit of endogenous observations made in situ. It is the subject of a precise location (differential GNSS and/or geocoding of addresses) and a standard description. Surveys are conducted according to a cybernetic logic and a systematic protocol, so as to lend themselves to quantitative analyses that describe and parameterize information structures. Sampling strategies are based on thematic and spatial criteria. For example, for biogeographic facies surveys, stratified non-aligned systematic sampling is commonly used at two levels2: the first to define the overall sampling plan of the points to be observed in the field, and the second to fix the in situ observation strategy for each previously defined entity3. Here, we find the notion of integrated or holistic analysis.
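As an illustration of the first level, a stratified non-aligned systematic sample can be sketched in a few lines: the study area is divided into a regular grid of strata, and one point is drawn at a random (non-aligned) position inside each cell. The bounds and grid size below are hypothetical.

```python
import random

def stratified_nonaligned_sample(xmin, ymin, xmax, ymax, nx, ny, seed=42):
    """Divide the study area into an nx-by-ny grid of strata and draw one
    point at a random (non-aligned) position inside each cell."""
    rng = random.Random(seed)
    dx, dy = (xmax - xmin) / nx, (ymax - ymin) / ny
    return [(xmin + (i + rng.random()) * dx, ymin + (j + rng.random()) * dy)
            for i in range(nx) for j in range(ny)]

# Hypothetical 1000 m x 1000 m study area sampled with 5 x 5 strata.
pts = stratified_nonaligned_sample(0, 0, 1000, 1000, 5, 5)
```

Unlike a purely random draw, every stratum is guaranteed one observation; unlike an aligned systematic grid, the random offset within each cell avoids resonance with periodic landscape structures.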

Trace: this is the message or sign that reflects the links between the structures identified from the analysis of endogenous data and the exogenous information that will serve as a reference for spatialization. This element includes satellite images and other geographical information, such as altitude, slope, orientation, age of surfaces, distance to objects and any information likely to describe the landscapes and available as a continuous cover of space. It is the extension, via the geographical coordinates, of the description of the point into the exogenous information base. Beyond the pixels of images, which are ideally suited to our approach, it can nevertheless be generalized to socio-economic data identified by a reference administrative unit, i.e. the most detailed level available: IRIS4 in France, NUTS5 or GADM6. It is still necessary that these data exist and that they are validated, updated and accessible. The point data observed in situ are first summarized (pivot table) by reference administrative unit and then confronted with the exogenous information to identify potential links – here, the trace.
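The first step of that confrontation, summarizing point observations by administrative unit, amounts to a simple pivot. Here is a minimal sketch in plain Python; the IRIS identifiers and survey values are hypothetical.

```python
from collections import defaultdict

# Hypothetical in-situ records: (reference administrative unit, observed value).
surveys = [
    ("IRIS-0101", 12.4), ("IRIS-0101", 10.1),
    ("IRIS-0102", 33.0), ("IRIS-0102", 29.5), ("IRIS-0102", 31.2),
]

# Pivot: accumulate sum and count per unit, then derive the mean value
# of the observed variable for each administrative unit.
acc = defaultdict(lambda: [0.0, 0])
for unit, value in surveys:
    acc[unit][0] += value
    acc[unit][1] += 1

summary = {unit: total / count for unit, (total, count) in acc.items()}
```

The resulting per-unit means can then be joined, via the unit identifier, to the exogenous socio-economic variables available at the same level.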

Order: this essentially refers to the spatial structuring of the data, the arrangement of landscape elements relative to each other, which induces differentiated spatial constraints and practices. In image analysis, order refers to the notions of texture, texture mosaics and spatial autocorrelation, and opens the perspective of frequency analysis through Fourier transforms and wavelets. For vector objects – typically reference administrative entities – the analysis of spatial structuring uses topological operators from graph theory: shape descriptors (perimeter, surface, width, length, etc.); contiguity; inclusion; neighborhood; shortest-distance connections and so on (see landscape ecology).
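Spatial autocorrelation, one common measure of this order, can be illustrated with a minimal Moran's I computed under rook (4-neighbour) contiguity; the two toy grids below are hypothetical landscapes, not data from the program.

```python
def morans_i(grid):
    """Moran's I on a 2D grid of values, with rook (4-neighbour,
    binary-weight) contiguity: +1 = clustered, ~0 = random, -1 = dispersed."""
    n_rows, n_cols = len(grid), len(grid[0])
    n = n_rows * n_cols
    mean = sum(v for row in grid for v in row) / n
    dev = [[v - mean for v in row] for row in grid]

    cross, n_links = 0.0, 0
    for i in range(n_rows):
        for j in range(n_cols):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < n_rows and 0 <= nj < n_cols:
                    cross += dev[i][j] * dev[ni][nj]  # neighbour cross-product
                    n_links += 1
    variance = sum(d * d for row in dev for d in row)
    return (n / n_links) * (cross / variance)

# Two hypothetical landscapes: patchy (strong positive autocorrelation)
# versus checkerboard (strong negative autocorrelation).
patchy = [[0, 0, 1, 1], [0, 0, 1, 1], [0, 0, 1, 1]]
checker = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1]]
```

A patchy landscape scores strongly positive, a checkerboard strongly negative: it is exactly this contrast that distinguishes ordered from dispersed spatial structures.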

Inference: this is inference in the statistical sense of the term, i.e. the application of the rules developed in the previous steps to ensure the link between endogenous and exogenous information. It is an ergodic approach – "which makes it possible to statistically determine all the realizations of a random process from an isolated realization of this process" – based on probabilistic models, allowing the continuity of geographical space to be restored from partial knowledge. We think in particular of Bayesian probability models (Bayes, the way!) as well as the Metropolis–Hastings algorithm:

"It is today the whole field of MCMC, the Markov Chain Monte Carlo, whose unreasonable effectiveness in physics, chemistry and biology […] has still not been explained. It is not a deterministic exploration, nor is it a completely random exploration; it is a random-walk exploration. But deep down, it's not new; it's the same in life: by going a little randomly from one situation to another, we explore so many more possibilities, like a researcher who changes scientific continents with the passing of time."7
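The random-walk exploration described above can be sketched in a few lines of plain Python; the standard normal target, step size and seed below are ours, purely for demonstration, and the target density is only needed up to a normalizing constant.

```python
import math
import random

def metropolis_hastings(log_target, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + N(0, step) and accept
    with probability min(1, target(x') / target(x))."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step)
        # Accept/reject in log space to avoid numerical underflow.
        if math.log(max(rng.random(), 1e-300)) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)  # the current state is recorded either way
    return samples

# Illustration: sample a standard normal from its unnormalized log-density.
samples = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000)
```

After enough steps, the empirical distribution of the chain approximates the target: this is the ergodic property the text invokes, one long realization standing in for the whole process.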

From an operational point of view, the proposed formalization consists of measuring the degrees of connection between endogenous and exogenous information. When they are significant, we use them to generalize to the whole space all or part of the data observed punctually. It is important to distinguish here between the analysis methods we propose and the interpolation procedures that also contribute to spatializing information. The latter consist of filling the gaps in data of the same nature contained in the same description grid as a phenomenon. For this, we choose a calculation method suited to the cases encountered – equation [1.1] – whether binary, linear, quadratic, polynomial, exponential, cyclic and so on:

[1.1] X = f(lat, long)

where: X: variable to explain AND explanatory,

lat, long: latitude and longitude in the reference grid.

While relying on the same mathematical functions, the spatialization of endogenous data via exogenous information consists of developing a function that integrates the different elements of the system taken into account:

[1.2] X = f0(lat, long) + f1(Y) + f2(Z) + …

where: X: variable to explain;

Y, Z: explanatory variables;

lat, long: latitude and longitude in the reference grid;

f0, f1, f2…: functions on the explanatory variables.
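As a toy illustration of this idea, with a single hypothetical exogenous covariate Y and a linear f1, spatialization amounts to fitting a function on the survey points and applying it wherever Y is known; all values below are invented for the sketch.

```python
def fit_line(ys, xs):
    """Ordinary least squares for x = a + b*y (a single covariate)."""
    n = len(ys)
    my, mx = sum(ys) / n, sum(xs) / n
    b = (sum((y - my) * (x - mx) for y, x in zip(ys, xs))
         / sum((y - my) ** 2 for y in ys))
    return mx - b * my, b  # intercept a, slope b

# Endogenous variable X observed at a few survey points, and the exogenous
# covariate Y (say, elevation) at the same points (hypothetical values):
Y_at_points = [100, 200, 300, 400]
X_at_points = [2.1, 4.0, 5.9, 8.2]
a, b = fit_line(Y_at_points, X_at_points)

# Generalization: Y is available as a continuous cover of space, so X can
# be predicted at every location where Y is known.
Y_everywhere = [150, 250, 350]
X_predicted = [a + b * y for y in Y_everywhere]
```

The point is the change of support: the model is calibrated on punctual endogenous observations but evaluated on the continuous exogenous cover, which is precisely what distinguishes equation [1.2] from pure interpolation in (lat, long).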

The formalization, in time and space, of the interactions and dynamics of endogenous and exogenous data mobilizes both the methods and the fundamental questions of nested scales and resolutions, or even of scale invariants. We limit the term scale to the computational ratio between a dimension in the field and its dimension on the map, and instead use the term resolution to describe the basic level of description, observation and sampling of information8. In remote sensing (see below), we speak of spatial resolution (pixel size) and radiometric resolution (characteristics of the bands), in no case of scale!

What about the problem that concerns us here, that of landscape?

Chapter written by Dominique LAFFLY.

  1. Laffly D., "Approche numérique du paysage : formalisation, enjeux et pratiques de recherche", Éditions Publibook, 2009.
  2. de Keersmaecker M.-L., "Stratégie d'échantillonnage des données de terrain intégrées dans l'analyse des images satellitaires", L'Espace géographique, vol. 16-3, pp. 195–205, 1987.
  3. Laffly D. and Mercier D., "Global change and paraglacial morphodynamic modification in Svalbard", International Journal of Remote Sensing, vol. 23, no. 21, 2002; Moreau M., Mercier D. and Laffly D., "Un siècle de dynamiques paraglaciaires et végétales au Svalbard (Midre Lovénbreen, Spitsberg nord-occidental)", Géomorphologie, vol. 2, pp. 157–168, 2004.
  4. "In order to prepare for the dissemination of the 1999 population census, INSEE has developed a division of the territory into homogeneously sized units called IRIS2000. This acronym stands for 'Grouped Islands for Statistical Information' (Îlots regroupés pour l'information statistique) and refers to the target size of 2,000 inhabitants per elementary mesh" (quoted from https://www.insee.fr/fr/metadonnees/definition/c1523, translated from French).
  5. Nomenclature of Territorial Units for Statistics (Nomenclature des unités territoriales statistiques), 6 levels; see https://gadm.org/metadata.html.
  6. See https://gadm.org.
  7. Villani C., Théorème vivant, Éditions Grasset et Fasquelle, Paris, 2013.
  8. Spatial, temporal and radiometric resolutions… The very high spatial resolution – in the order of 50 cm – satellite images, such as those provided by GeoEye, QuickBird or Pléiades, versus the low spatial resolutions of Terra MODIS data (250 m to 500 m) or SPOT VEGETATION and NOAA AVHRR (in the order of one kilometer), for example.