Cover Page

To Michèle

Series Editor

Bernard Reber

The Carousel of Time

Theory of Knowledge and Acceleration of Time

Bernard Ancori


Foreword

In these tormented times when time itself is swirling, this book is like a breath of fresh air, even as it warms the heart: it is about the knowledge to which the sciences give us access, but not only this, insofar as it is difficult to get rid of the few doses of more or less reliable belief that sometimes enter that knowledge surreptitiously. In Bernard Ancori’s epistemology, which encompasses but goes beyond a philosophy of science, a set of logical and critical reflections on the nature of knowledge leads us into what Spinoza calls “a certain kind of eternity”, i.e. into timelessness. But it is this very fact that allows us to question further our diverse experiences of the passing of time, and, in particular, a certain acceleration that is said to characterize our present, making it lean towards a kind of permanent present. Already at the end of the 19th Century, as Bernard Ancori points out, William James, a pioneer of psychophysics, had shown the specious, almost misleading, nature of our perception of the present between past and future. The concept of the propensity to communicate introduced by Bernard Ancori constitutes a possible formalization of the notion of the specious present, and extends this notion to the spatial dimension of the network of individual actors that he proposes to model. As a result, this concept is the pivot on which the spatial and temporal dimensions of this network are articulated, which justifies the notion of the network’s space–time, as highlighted here.

The timeless aspect mentioned above is not some above-ground plane that would take us out of this world. We recognize in it a way of approaching one of the crises that the knowledge accumulated by the human species over the past two centuries has been going through: the gap identified by the chemist and novelist Charles Snow in the 1950s between the “two cultures”, that of the natural sciences and that of the social sciences and the humanities in general.

From this point of view, we may ask whether the 21st Century will really be a new era or only a continuation of the physical and biological revolutions experienced by the 20th Century. Probably both, because the division of historical time into hundreds of solar years is, after all, only a convention that is not without arbitrariness. And everything happens as if Bernard Ancori had taken up the challenge of closing this gap in his own original way, by building a bridge between the two aspects of Homo sapiens: a living being, the object of biology, and a psychosocial being of language. His multidisciplinary background, confronted with the applications of the mathematical theory of information, allows him first to identify a breaking point at the place where these two aspects meet. Seeing in it at least one of the origins of the growing gap between the two cultures, he has found a way to clear a path and, in a way, to mend them.

In doing so, he contributes to the ongoing realization of John von Neumann’s prediction about the evolution of 20th Century science. Von Neumann, a mathematician and physicist and co-inventor of the electronic computer, predicted in the 1950s that this century would be for the sciences the century of complexity, just as the 19th Century had been the century of energy. This prediction seems to be coming true, albeit with some delay, now that we have entered the 21st Century. This raises a question about the passage of time and its possible creative role, which constitutes the basis of Bernard Ancori’s reflection and gives the book its title. This ambitious work addresses the diverse nature of our psychological, social, physical and temporal experiences, combined with the different ways in which we learn about things. Only the past is rigorously an object of knowledge, constituted by memorized elements connected in different ways. The present seems to be perceived as such, felt or sensed, before being forgotten or memorized. As for the future, it is imagined or predicted with varying degrees of success on the basis of projections from the past, even as it is shaped in a concrete and largely unconscious way. But from all this results a form of timeless knowledge, that of what philosophers have called “eternal truths”, of which mathematics serves as the model. The use of reason tends to bring scientific activity closer to this ideal, in a more or less approximate way depending on the discipline. The result is the creation of new concepts in the history of science, which seem all the closer to this ideal when they are supported by mathematical formulation and operations. But not all objects of investigation lend themselves equally well to this, because the times of experience and experimentation, with their difficulties to overcome, cannot be neglected.

Thus, in line with von Neumann’s prediction, scientific knowledge has already been enriched in physics by the notion of information, mathematized in the eponymous theory, in relation to that of complexity – and to that of energy through the notion of entropy. And it is here that there is indeed a breaking point, and a possible meeting, between the uses of this notion in the natural sciences, first in physics and then extended to biological physical chemistry, and its possible but more problematic applications in the social sciences.

Indeed, as with force in the 18th Century and energy in the 19th Century, information has been rigorously defined as a physico-mathematical quantity by borrowing the word from everyday language. In the latter, it is still a purely qualitative and relatively vague notion, used in the psychosocial context of interpersonal, linguistic and other relationships. The statistical theory of information produced by Claude Shannon and his successors, as well as the Kolmogorov–Chaitin theory of the algorithmic complexity of computer programs, subjects this notion to a transformation by which it becomes precise and univocal enough to enter the language of the natural sciences.

But this transformation makes it lose what was thought to be its very essence, namely, the meaning of the information processed, sent and received.

In other words, the theoretical definition has moved from the vagueness and polysemy of natural language to the univocal precision of the logico-mathematical form and its techno-scientific uses, first in telecommunications engineering, then in computer science. But this passage, as is often the case, loses in semantic richness what it gains in precision and operational efficiency. And what is lost here is precisely the meaning of the information usually transmitted in communication between speakers of a natural language. In exchange, the mathematical theory, which allows information to be quantified and measured, extends its applications to all kinds of non-human entities, physical and biological, described as “information carriers” in the sense of the theory and treated as if they were telecommunications channels and computing machines.

It is in this sense that the great successes of molecular biology have benefited from the discovery of molecules carrying information in the linear structure of DNA, RNA and proteins, which have been treated as strings of alphabetical or numerical characters. The genetic code has thus been treated as a communication channel whose physical substrate is recognized in the chemical mechanisms of protein synthesis. But in all this, as in computer science, the meaning of the information thus quantified is not taken into account. In the mathematical theory, the amount of information is expressed by a number of bits that say nothing about its meaning. This is why we can say that an algorithm or a computer does not understand what it does, because the transmission of meaning involves speakers who understand it.
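
The contrast can be made precise with a standard formula, recalled here for reference (it is not spelled out in the foreword itself): Shannon defines the average information of a source as its entropy,

\[ H(X) = -\sum_{i=1}^{n} p_i \log_2 p_i \quad \text{(in bits)}, \]

where \( p_i \) is the probability of the \( i \)-th symbol. The measure depends only on the statistics of the symbols, never on what, if anything, they mean: meaningful prose and statistically similar gibberish carry exactly the same number of bits.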

This flaw in the theory is not really one when dealing with systems for communicating messages between speakers who transmit and receive them, and who are assumed to understand their meaning, without the theory needing to formalize it. Similarly, algorithmic complexity does not suffer from the seemingly paradoxical fact that its definition implies maximum complexity for a random sequence of 0s and 1s, as if it were meaningless.
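
For reference, the standard Kolmogorov–Chaitin definition behind this remark (again, not given in the foreword itself): the algorithmic complexity of a binary string \( x \) is the length of the shortest program that produces it on a fixed universal machine \( U \),

\[ K(x) = \min \{\, |p| : U(p) = x \,\}, \]

so that for most random sequences of \( n \) bits no program appreciably shorter than \( n \) exists, and \( K(x) \approx n \) is maximal – precisely where there is nothing to understand.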

This is why the transmission of meaning in communication channels and computer programs is carried out by specific additional operations: coding, through several levels of programming languages down to the “machine language” reduced to sequences of 0s and 1s, at the input of artificial machines designed and manufactured for this purpose, followed by decoding at the output, for the use of human speakers.

Hence the search, by analogy, for such coding systems in self-organized natural machines, particularly those constituted by organisms. The discovery of what is called the “genetic code”, the same in all organisms – which is in fact only a projection of the linear structures of DNA onto those of proteins, and which thus carries out a transmission of information in the strict sense of the theory – is probably the most spectacular success of this research, although it is not strictly speaking the coding of a computer program, contrary to what was long believed. Indeed, the meaning of genetic information is here metaphorically reduced to the effects observed at the output of the communication pathway: the synthesis of a particular protein and its effects on the structure and functioning of the cells where it takes place. But we now know that these effects, because of the three-dimensional structure of proteins, depend only partially on their linear structure, which is the only one coded by that of DNA.
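
A back-of-the-envelope calculation – ours, added for illustration – shows what “a transmission of information in the strict sense of the theory” amounts to quantitatively: each of the 4 nucleotides carries \( \log_2 4 = 2 \) bits, so that

\[ 3 \times \log_2 4 = 6 \ \text{bits per codon} \;>\; \log_2 20 \approx 4.32 \ \text{bits per amino acid}, \]

the surplus corresponding to the code’s familiar redundancy (several codons per amino acid). None of these bits says anything about what the synthesized protein does.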

Bernard Ancori opposes, or rather associates, the “telegraphic” theory of the engineer Claude Shannon and that of the anthropologist Gregory Bateson. The latter, author of a more qualitative theory of communication, called “orchestral” by some of his commentators, and of two volumes on an immanent ecology of mind, inspired Paul Watzlawick and the famous Palo Alto school, among others. For Bernard Ancori, Bateson advantageously complements Shannon in his own theory of knowledge. In particular, Ancori highlights the role of models of self-organization by “complexity from noise” (Atlan) in attempts to formalize the creation of meaning.

Incidentally, we are brought into contact with one of the excesses of speculation about cosmological time, measured in billions of years, when it is conceived within the framework of an unbridled idealism that makes meaning-giving human consciousness play a genuinely creative role in scientific objectivity. This is the so-called “paradox of ancestrality”, whose denunciation as a paralogism is welcome here. Astrophysical theories on the origins of the universe would be paradoxical – both true and not true – in that they concern a reality that existed before the appearance of the human life and consciousness that are supposed to establish its reality, in times when the human species did not exist. The paradox is here dismantled step by step by exposing the various confusions on which it is built, including that between objectivity and intersubjectivity.

Judging by another paralogism that has flourished among some physicists, it would seem that the time of origins, considered in relation to human capacities to take cognizance of it, is enough to derail reason. This is the “anthropic principle” in its so-called “strong” interpretation. The universal physical constants are such that only they could have allowed the evolution of the universe as it occurred, with the appearance of life and of the human species – which we observe, obviously after the fact, to be capable of becoming aware of them. This observation gave rise to the idea of an initial adaptation of the universal constants to the future appearance of a conscious being capable of knowing them, with all that followed until the appearance of this being. According to this finalist interpretation, which nothing compels us to accept, the universe would thus have been formed from its origin in such a way that human consciousness could appear. The so-called “weak” interpretation – weak from the point of view of the supporters of this teleological conception, in which the origin of the universe would be determined by a kind of divine project involving the subsequent appearance of humanity – is, in fact, quite simply reasonable in its use of the counterfactual: things being as they are, if physics were determined by other universal constants, another universe might have evolved, bringing forth beings organized otherwise, some of which might, or might not, have acquired a form of knowledge that we can try to imagine using other models of universes.

The question of information coding is currently building on the successes of the cognitive neurosciences, thanks to the extraordinary development of functional explorations of brain activity correlated with subjective mental states that can only be expressed in the language of the subjects who are the objects of observation and experimentation. After several decades of research, the question of the existence and nature of possible neural codes – much more complex than that of the coding of genetic information – remains unanswered. On the one hand, there are brain activities described in the physico-chemical language of electrical and chemical events, where we can identify transmissions of information, in the technical sense of the term, between neural circuits; on the other hand, there are human or animal cognitive activities, expressed and described to some extent, including by experimenters, in vernacular or psychological languages with the practically infinite diversity of their semantic components. This is what once made it possible to speak of human cognition and its self-organization models as “machines for making meaning”. But we are dealing here with a confusion, whose heuristic interest is certainly not to be neglected, between the two notions of information: the technical one and the everyday one.

This is why the use of information and coding concepts, which do not have the same meaning in the two fields of investigation, cerebral and cognitive, only provides a very limited bridge from one language to the other.

Thus, the meaning of information appears to be the place where the problems of the transition from the physico-chemical to the psychosocial are knotted together. Some models of functional and even intentional self-organization, as well as attempts to formalize a meaning-bearing algorithmic complexity, are presented by Bernard Ancori as ways of better formalizing this passage. But like machine translation programs, which can only work if they are limited to predefined fields, they are for the moment only attempts to build a bridge across the border between the two cultures, demonstrating its challenges and difficulties. Perhaps, before these difficulties are overcome, we need to recognize those still posed by the classical philosophical problem of the relationship between body and mind. This problem is far from being solved as long as we remain in the still largely prevalent context of more or less assumed dualistic ontologies, failing to see in it, on the contrary, the expression of a radical monism, in which the very question of interactions between matter and thought disappears into the unitary conception of their union. The original encyclopedic approach developed by Bernard Ancori is not the least of this book’s merits.

Henri ATLAN
MD, PhD, Biologist and Philosopher
Professor Emeritus of Biophysics, University of Paris VI and the Hebrew University of Jerusalem, and Director of Studies at the École des hautes études en sciences sociales (EHESS, Paris)

Acknowledgments

The writing of a book – its progressive creation from the initial idea to the final product – is achieved by solitary labor, yet it is obviously a collective work. It is constructed through a succession of concentric waves centered on the author’s person. The most peripheral wave is occupied by colleagues, students or friends who, often without knowing it, change with a word or a sentence a path that previously seemed secure. It is they who must be thanked first here.

The influence of the next wave on the work is already more significant, because this median wave is that of colleagues who were kind enough to invite the author to present his ideas in an institutional framework allowing him to confront them inter pares. For the past 15 years or so, Simon Laflamme, Pascal Roggero and Claude Vautier have allowed me to publish some of the aspects developed in this book in Nouvelles perspectives en sciences sociales, the excellent journal they edit in Sudbury and Toulouse. They must be warmly thanked for this, because many of these aspects have here been taken up and made consistent within a common perspective. More recently, Carlos Alberto Lobo and Bernard Guy gave me the opportunity to speak at their respective seminars, at the ENS in Paris and at Jean Moulin University Lyon 3. I thank them very much for this as well.

The closest and most decisive wave is made up of the colleagues and friends who, through their daily encouragement or their careful review of the successive states of the manuscript, have allowed it to mature and then to become publishable. In this regard, I would like to thank, in particular, my colleagues Isabelle Bianquis, Bernard Carrière, Patrick Cohendet, Jean-Luc Gaffard and Anne Kempf, who kindly agreed to take off their anthropologist’s, physicist’s or economist’s hats for a time in order to examine, with a critical and benevolent eye, the transdisciplinary kaleidoscope I was handing them.

I would also like to thank Bernard Reber for having welcomed my book into the collection he manages at ISTE.

My most important intellectual debt is probably the one I contracted, through his work, with Henri Atlan. I thank him for doing me the honor of writing the foreword to this book.

Wherever he is, I thank my dear friend Jean Gayon, who was, alas too briefly, a part of all these waves, and especially of the closest one.

Finally, I would like to thank my wife, Michèle, who has been with me on this carousel for a long time.

Introduction

The actors of our societies say they feel a phenomenon of time acceleration, and this phenomenon would paradoxically lead to a new regime of historicity: presentism (Hartog 2003). This presentism is often interpreted as the symptom of a capitalism more eager than ever for immediate profitability, whose ingrained passion for the short term would culminate in a perfect coincidence of past, present and future. Of the past, as a field of experience, it would make a definitive clean slate; of the future, as a horizon of expectation, it would retain only the promise of endless repetition (Laïdi 2000; Augé 2011)1; and yet nobody would have any more time to themselves (Baier 2002; Rosa 2010, 2012; Birnbaum 2012; Baschet 2018).

How can this sensation of time acceleration be combined with a tendency towards perfect immobility? What is the true meaning of the expression “acceleration of time” used in this way? Does it have one at all, given that the notion of acceleration is precisely defined in relation to time? Or does it merely point to a kind of vertigo (Jeanneney 2001)?

The purpose of this book is to answer these questions within the framework of a theory of knowledge. As we will see, this response combines a psychological and historical explanation of the perceived variations in the speed of time with a socio-historical explanation of the phenomenon of acceleration that our contemporaries say they are experiencing, even as they seem to be getting used to this eternal present.

The field of the theory of knowledge proposed to explain this carousel of time goes far beyond the scientific sphere alone. From Rudolf Carnap to Karl Popper, the philosophy of science has failed to provide a certain foundation (empirical or methodological) for the statements produced by scientists’ activity, so that the distinction between knowledge (as reliably justified true beliefs) and representations (as beliefs that may prove false) has little empirical relevance: it certainly remains analytically useful in a normative perspective but, once the analysis becomes positive, it is more a matter of consensus among actors. From this point of view, then, science is not absolutely distinct from other types of knowledge within the broader set of representations circulating among the members of a given society.

We therefore agree with Susan Haack’s (2003) pragmatist position, which uses the expression “long arm of common sense” introduced by Gustav Bergmann: scientific research is in perfect continuity with other types of empirical research, especially those that everyone conducts when they wish to answer a question that arises for them. Certainly, “scientists have devised many and varied ways to extend and refine the resources we use in our everyday empirical research” (op. cit., p. 297), but these resources are not of a fundamentally different nature, for at least three reasons.

Firstly, because knowledge is so widely distributed in our societies that it has become difficult to go on asserting the existence of a single hierarchy of knowledge, at the top of which a new clergy of scholars would dominate a shapeless mass of the ignorant. This is why many voices are now calling for a more horizontal vision of the distribution of knowledge and for the mobilization of all in our common adventure of exploring reality (Calame 2011; Lipinski 2011).

Secondly, because the cognitive processes implemented and the argumentation regimes used are similar for all actors, academic or not. Thus, the modes of revising beliefs follow the same paths everywhere: although often more sophisticated than those of the common man, scientists’ representations result identically from rearrangements of current beliefs in the face of new information, and these rearrangements are such that Homo academicus and Homo vulgaris give the same priority to new information while demonstrating a concern for conservatism and memory, in a mix that depends on the context (Zwirn and Zwirn 2003). As for argumentation regimes, they use the same figures of rhetoric in both cases: omnipresent in the cognitive processes of all actors (Lakoff and Johnson 1980, 1999; Hofstadter and Sander 2013), metaphors and analogies are mobilized identically to support their positions by the average individual (Ortony 1993; Gineste 1997) and by those engaged in artistic and scientific creation (Miller 2000)2.

Finally, the “long arm of common sense” extends into the scientist’s sophisticated hand because the learning processes are put into action in the same way by all actors: in this matter, scientists and non-scientists demonstrate the same capacities, hierarchized into a theoretical plurality of logical levels (Bateson 1977, 1980, 1984, 1996)3.

This is why we will be concerned here with epistemology in the English sense of the term (i.e. the theory of knowledge in general) rather than in the narrower sense of the French tradition (i.e. the philosophy of science)4. Without refraining from illustrating our discussion with examples from the history of science, or from using the philosophy of science to shed light on this or that point, we will therefore consider in this book a global population of individual actors (scientific or not) whose representations will be analyzed in terms of their formation, structure and evolution, whether or not these representations are transformed into knowledge recognized as such.

Such a position is not entirely self-evident, since the analysis of scientific phenomena has long been one of the reserved domains of philosophy, and, for some, this should still be the case today. Whether in the French tradition or in the English-language approach, this philosophy of science has mainly focused on the conditions conferring scientific legitimacy on the statements in which knowledge is materialized. To give only two particularly salient examples, it is in this normative perspective, and with a focus on “accomplished” science, that Gaston Bachelard examined the formation of the scientific mind, and that Karl Popper attempted to define a criterion for distinguishing between scientific and non-scientific statements. More recently, the Social Studies of Science movement has emerged, focusing on situating science and technology in their socio-historical contexts of production, as well as on assessing the societal implications of their developments. Taking a positive view of the production of scientific and technological representations as so many “sciences in action”, the anthropology and sociology of science have extended and amplified – even diverted – the approach initiated by Thomas Samuel Kuhn (1972), bringing out the field of science–technology–society studies and then multiplying their analyses of it (Vinck 1995, 2007; Pestre 2006).

Normative or positive: to distinguish these two perspectives, it first seemed convenient to describe them as internalist and externalist, respectively. From the outset, such a dichotomy turned out to be outdated, and many voices rose up calling for it to be overcome. Thus, the call made by Anouk Barberousse et al. (2000, p. 175) holds that the normative (i.e. philosophy of science) and descriptive (i.e. social studies of science) traditions “can, and even must, converge”. Indeed, if it is obvious that scientific activity takes place in a social and historical context, it is no less obvious that it is a cognitive activity of human beings:

“Doing science means at least observing phenomena, trying to explain them, acting by building experimental systems to test these explanations, communicating the conclusions to other members of a community” (ibid., pp. 175–176).

The convergence between the normative and descriptive traditions therefore requires a unified analysis of cognitive (observing, explaining), architectural (building) and social (communicating) gestures. We propose to place this analysis within the framework of a complexity5 paradigm that is particularly appropriate for our subject, notably because it exhibits certain analogies with the theory of evolution. Consider Murray Gell-Mann’s (1995, p. 33 sq.) description of complex adaptive systems. According to this author, such systems obtain information about their environment and their interactions with it, identify regularities within this information and condense these regularities into models on whose basis they act in the real world. In each case, several models are in competition, and action in the real world has a retroactive influence on this competition. More precisely, each of these models is then enriched with additional information, including information that had been overlooked when the regularities were extracted from the initially observable data stream, in order to obtain a result applicable to the “real world”: the description of an observed system, the prediction of events or the indication of a behavior for the complex adaptive system itself.
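
To fix ideas, here is a purely schematic sketch of the loop just described – our own illustration, with hypothetical names, and in no way Gell-Mann’s formalism:

import random

class ComplexAdaptiveSystem:
    """Schematic Gell-Mann loop: observe, condense regularities into
    competing models, act, and let real-world outcomes reweight them."""

    def __init__(self, models):
        self.models = models                      # name -> prediction function
        self.scores = {name: 0.0 for name in models}

    def step(self, observation, outcome):
        # Each competing model condenses past regularities into a prediction;
        # feedback from the "real world" then reweights the competition.
        for name, model in self.models.items():
            self.scores[name] += 1.0 if model(observation) == outcome else -1.0

    def best_model(self):
        # The currently dominant model, always revisable by further feedback.
        return max(self.scores, key=self.scores.get)

# Hypothetical usage: two toy "theories" of a noisy weather environment.
models = {
    "always_sunny": lambda obs: "sun",
    "persistence": lambda obs: obs,  # tomorrow tends to resemble today
}
cas = ComplexAdaptiveSystem(models)
rng = random.Random(1)
today = "sun"
for _ in range(200):
    tomorrow = today if rng.random() < 0.8 else ("rain" if today == "sun" else "sun")
    cas.step(today, tomorrow)
    today = tomorrow
print(cas.best_model())  # usually "persistence"

The point of the toy example is only the feedback structure: several models compete, and action in the “real world” retroactively reweights the competition between them.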

Without being reduced to it, this very general description applies, in particular, to what Gell-Mann calls “the scientific enterprise”. Models here are theories, and what happens in the “real world” is the confrontation between theories and observations. New theories can compete with existing ones, engaging in a competition based on the coherence and degree of generality of each, whose outcome will ultimately depend on their respective capacities to explain existing observations and to predict new observations correctly. Each theory of this kind constitutes a highly condensed description of a very large class of situations and must therefore be supplemented by a detailed description of each particular situation in order to make specific predictions (ibid., p. 94).

The theory of knowledge proposed in this book will therefore have an evolutionary character, in the analogical (rather than literal) sense of this term: without identifying human cognitive faculties with the product of a biological process of variation and natural selection, we adopt here a mode of explanation similar to that of evolutionary biological theories (Soler 2000a). It will also be part of a naturalized epistemology in the sense of Willard Van Orman Quine (1969), by being in continuity with certain scientific results currently in force, in particular those of the cognitive sciences and of a renewed sociology of networks (Latour 2006).

In this perspective, we will propose a model of the structure and evolution of a complex socio-cognitive network of individual actors, thus aiming to formulate a theory of the construction of our cognitive space–time. As this dash indicates, the categories of space and time are absolutely inseparable here, and this model is a model of our cognitive space–time in a dual sense. First because, in line with the stance described above, it concerns “field” or “experiential” knowledge as much as what is called “scientific knowledge”, so that the cognitive territory common to all these representations is ours. Secondly, because the space–time thus shared by all the individual actors takes on a particular meaning in our current situation, characterized by the felt acceleration of time and by presentism, mentioned at the outset of this introduction.

The structure of this book is derived from the global perspective we have just outlined. It is divided into three parts. The first part, entitled “Foundations”, presents the main ingredients of our model and is divided into four chapters. The accomplishment of the cognitive and social gestures mentioned above implies learning processes carried out by the individual or collective subject, and these processes lead to gains in information that can be communicated. The conceptual basis for the convergence sought between normative and positive approaches thus consists of the critical integration of a nebula of notions organized around those of information, communication and learning. It is therefore with the search for this critical integration that this book begins, and we confront in this regard two alternative approaches to the nebula of notions mentioned: that of the engineer Claude Elwood Shannon and that of the anthropologist Gregory Bateson, each presenting advantages and disadvantages (Chapter 1).

The following chapter proposes to discern a first synthesis of these two approaches in the paradigm of self-organization of complex systems developed by Henri Atlan. This is a natural complexity, which characterizes systems not built by humans and whose purpose, if it exists, is unknown to the observer – such as biological and social systems – and not the algorithmic complexity introduced by Andrei Kolmogorov and Gregory Chaitin, which essentially concerns the world of theoretical computing. This paradigm of natural complexity contains a concept of structural and functional self-organization that makes it possible to preserve the formalism used by C. Shannon while integrating into it effects of meaning similar to those put forward by G. Bateson (Chapter 2).

We will then show that the Atlanian paradigm is an excellent model of how human memory works, as demonstrated by Israel Rosenfield’s analysis based on the work of Gerald Maurice Edelman. It shares three main characteristics with this functioning:

The path taken by these first three chapters leads to the following two stages:

This model is generic rather than explanatory6: far from putting in order experimental data whose details would be sufficiently well known to let us control the mechanisms of the observed phenomena in their entirety, it is content to produce conditions of possibility. Its usefulness is to suggest a plausible logical structure for a global phenomenon that is too complex to be analyzed in all its details, namely, the space–time in question. Thus, although it occasionally encounters some empirical data, this model does not rely on specific experimental data (Chapter 4).

Despite the ontologically inseparable nature of the space and time of our network, these two dimensions are necessarily explored successively during the progressive construction of our model. The analysis of the spatial aspects of the network is developed in the second part of this book, entitled “Space” and divided into two chapters. We first specify the nature of the boundaries delimiting the perimeter of the network, by defining their dimensions, giving them a measure and then analyzing their modes of mobilization. The network space being thus circumscribed in relation to its environment, we introduce the two driving forces governing its evolution, namely, inter-individual communication and analogy as a source of categorization7. The comparison of the weak novelty resulting from the first and the strong novelty created by the second leads to three possible types of network trajectories (Chapter 5).

The following chapter focuses on the analysis of the internal structure of the network space thus delimited and measured, as well as on the evolution of this structure. We propose a concept of the propensity to communicate between actors. Under the assumption that communication is the only driving force behind the evolution of the network, the model shows that actors tend to draw together in a cumulative process that forms cognitive clusters: the actors within a given cluster become increasingly similar, while becoming increasingly dissimilar, cognitively, from the actors in all the other clusters. This cumulative process of spatial regionalization is soon accompanied by a tendency towards homogenization, which eventually prevails over the partition of the network into distinct regions, so that the network adopts one of the three trajectories previously analyzed (Chapter 6).
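
To give a feel for this dynamic, here is a minimal toy simulation – entirely ours, under crude assumptions (a Jaccard similarity over set-valued repertoires and a one-item copying rule), and in no way Ancori’s actual formalism:

import random

def similarity(a, b):
    # Toy measure of cognitive proximity: fraction of shared
    # representations between two actors (Jaccard index).
    return len(a & b) / len(a | b)

def simulate(n_actors=20, n_items=40, steps=5000, seed=0):
    rng = random.Random(seed)
    # Each actor starts with a random repertoire of "representations".
    actors = [set(rng.sample(range(n_items), 8)) for _ in range(n_actors)]
    for _ in range(steps):
        i, j = rng.sample(range(n_actors), 2)
        # The propensity to communicate grows with cognitive similarity:
        # similar actors interact more often, and each exchange copies one
        # representation, making them more similar still (cumulative process).
        if rng.random() < similarity(actors[i], actors[j]) and actors[j] - actors[i]:
            actors[i].add(rng.choice(sorted(actors[j] - actors[i])))
    return actors

actors = simulate()
# Pairwise similarities to actor 0: clusters of near-1.0 values emerge,
# separated from much lower values for actors in other clusters.
print(sorted(round(similarity(actors[0], other), 2) for other in actors[1:]))

Under this rule, similar actors interact ever more often and grow ever more alike, first polarizing the network into clusters; and since every exchange only adds shared items, homogenization eventually prevails, echoing the trajectories described above.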

The third part of this book, entitled “Time”, is divided into four chapters. Thanks to our concept of the propensity to communicate, we begin by linking the temporality of the network to its spatial characteristics, because this concept is a marker both of the internal structure of the network’s space and of the evolution of this structure. It represents a possible formalization of the notion of the specious present, popularized in the 19th Century by William James and taken up by our modern cognitive sciences, inasmuch as the space of the network is nothing other than the gathering of all the specious presents of individual actors. This space thus reveals itself to be only a moment of time as such, which encompasses both the subjective time of the actors observed within the network and the objective time of the network’s observer. The combination of the point of view of a particular observed actor with that of this observer makes it possible to conceive of the asymptotic existence of a point of view from everywhere, as well as to affirm the primacy of subjective time over objective time by revisiting, in a critical mode, the paradox of ancestrality that claims to deny it (Chapter 7).

In connection with the notion of the specious present, the following chapter proposes an interpretation of the phenomenon of déjà vu, showing that historical time is irreversibly constructed by the storing in memory of the events produced and perceived by individual actors within the network: far from being part of a temporal framework that is always “already there”, these events produce time as a sequence of the actors’ successive specious presents (Chapter 8).

This type of temporality is not always experienced by these actors in a uniform way, and we know that the actors of our societies say they feel a phenomenon of acceleration of time paradoxically associated with presentism. For some, the expression “acceleration of time” is meaningless, since an acceleration or deceleration is defined and measured precisely in relation to time itself; for others, a notion of acceleration of felt time in relation to an objective measure of duration retains all its meaning. Our model explains such a phenomenon while dispelling the apparent paradox of its conjunction with the presentism just mentioned, and it proposes an interpretation of the content that the concept of entropy might have in this respect (Chapter 9).

The last chapter of our book explains the theoretical articulation between the succession of discrete states made up of the specious presents of individual actors and the continuity of the flow of thought that William James never ceased to affirm. This articulation is based on the distinction between two hierarchical levels of inscription of psychological categories in the individual memories of the actors: a conscious level, at which such categories are combined so as to ensure the overall semantic consistency of their registration in memory; and a non-conscious level, that of the psychological meta-categories that govern the mode of selection and organization of the categories present at the conscious level. As long as the composition of this meta-categorical level is invariant, the continuity of the actors’ flow of thought corresponds to that of the mode of selection and organization of the psychological categories consciously recorded in memory, even though this flow is simultaneously divided, at the level of the contents of the categories thus selected and organized, into a succession of discrete states. But when this meta-categorical level is modified, the type of learning then achieved causes a temporal disruption in the individual actors’ flow of thought, and consequently in the evolution of their network. This form of temporal disruption marks the Kuhnian periods of “extraordinary science” (Kuhn op. cit.), which illustrate in an exemplary way the learning processes that everyone achieves when confrontation with a new problem forces them to change their point of view on the world around them (Chapter 10).

  1. The concepts of “field of experience” and “horizon of expectation” are borrowed from Koselleck (1979). The presentism in question here is distinct from philosophical presentism, the doctrine according to which only the present is real, and which is the temporal analogue of the modal doctrine of actualism, according to which everything is actual. This actualism is opposed to possibilism, according to which some things exist without being actual. On this point, see Theodore Sider (1999). On the genealogy of the notion of “regime of historicity” and, beyond its meanings for historians, its declinations in anthropology, psychoanalysis and geography, see Delacroix et al. (2009).
  2. Whether in our most everyday exchanges or in arguments deployed in the natural sciences or in the humanities and social sciences, analogies and metaphors are used everywhere to carry conviction (de Coster 1978; Lichnerowicz et al. 1980; Hallyn 2004; Durand-Richard et al. 2008).
  3. It is on the basis of this categorization that Erving Goffman (1974) introduced frame analysis into the social sciences: what he calls “primary frameworks” and “frame transformations” refer to Batesonian learning at levels 2 and 3, respectively.
  4. On the evolution of the different meanings of the term “epistemology” in German, English and French, see Chevalley (2004).
  5. Long largely ignored by the traditional philosophy of science (Morin 1990), the concept of complexity is relatively recent – the first occurrence of the word in the title of a scientific text dates back only to Warren Weaver (1948). But since then, the multiple faces of complexity (Klir 1986) have taken over most disciplines (Fogelman-Soulié 1991; Capra 2003, 2004), to such an extent that one can legitimately wonder whether complexity will constitute the epistemological framework favored by the 21st Century (according to the title of a special issue of La Recherche, December 2003).
  6. We borrow this distinction between generic and explanatory models from H. Atlan (2011, p. 9).
  7. For a recent perspective on the relationship between the social sciences and the cognitive sciences, see Kaufmann and Clément (2011).

PART 1
Foundations