Appendices

Appendix 1
Functional model notation


Figure A1.1. A functional model notation of the OMT method described in [RUM 95]

Appendix 2
Dynamic model notation


Figure A2.1. A dynamic model notation of the OMT method described in [RUM 95]

Appendix 3
MOOC.py and Quiz.py


Figure A3.1. “MOOC.py and Quiz.py” files – a screenshot from Pyzo software version 4.2.1

Appendix 4
Étudiants.py


Figure A4.1(a). “Étudiants.py” file – a screenshot from Pyzo software version 4.2.1


Figure A4.1(b). “Étudiants.py” file – a screenshot from Pyzo software version 4.2.1

Appendix 5
Chapitre.py


Figure A5.1. “Chapitre.py” file – a screenshot from Pyzo software version 4.2.1

Conclusion

The purpose of this book is not to propose, once again, an improvement of the current DLE analysis models. It is rather to propose a new approach that consists of representing these environments while taking their complexity into account. We thus decided to abandon some past practices and to draw on systems science, and in particular on complex systems theory. As Bachelard said, access to science involves accepting the contradiction of the past. This epistemological rupture stems from our desire to break with the illusory belief that Cartesian, analytical modeling can reach the objectivity it posits in principle. We therefore decided to place the models on which we reason within the paradigm of systemic modeling, one of whose features is to consider models as living, evolving constructions rather than as preconceived data. Given the highly innovative nature of this change, our goal is not merely to apply this systemic modeling to a particular environment (although practical applications are often very telling), but rather to lay the foundations of this new approach in order to initiate new research in this direction. Remember: this model is not intended to serve the design of an adaptive learning environment that would claim to handle every case in order to allow personalized learning. For the time being, this goal seems impossible to achieve, given the excessive number of variables involved. The approach we propose instead seeks to model latest-generation DLEs using methods adapted to the infinite diversity of these variables, in the sole hope of discovering a few emergent properties or some non-programmed adaptive behaviors. We will, however, always keep in mind that the complexity that characterizes these environments, and in particular their sensitivity to initial conditions, rules out any hope of predicting their behavior with certainty.

However, is this degree of uncertainty a good reason to give up on this modeling project? Some Cartesian experimentalists will probably turn away from this type of research to take an interest in the study of objects that lend themselves better to the so-called “exact” sciences. The ensuing results will certainly be more rewarding. These experimentalists will have the satisfaction of producing knowledge that allows prediction with certainty, while ignoring other key issues that do not have this property. However, complex systems have become ubiquitous, and predicting their behavior has become almost an obsession for us human beings. People want to know where they are going and what to expect. Predicting the weather in the long term, describing the trajectory of a leaf dropped on a stream, trying to regulate fishing worldwide, assessing the benefit of introducing digital tablets in a school or even predicting what MOOCs will become in the future are just some examples of forward-looking or predictive research applicable to complex systems. And while the systemic modeling of complexity seems capable of producing interesting results, it is unfortunately not immune to errors or inaccuracies. Sometimes only the probability that an event may occur can be provided with certainty. In this regard, predictions remain uncertain.

What approach should be taken? First, it is a matter of identifying the boundaries of the system to be studied. Thus, it is essential to consider a DLE (for example, a MOOC) as the union of the “digital environment” tool and all of its users. Indeed, if a MOOC consisted only of a technical system, it would no longer be a system of natural complexity, but simply a system of algorithmic complexity: this is the fundamental message of this book. We then break down the system (the DLE) into subsystems: the “digital platform” system, its functional subsystems and the “user” subsystems. It is essential to analyze each of these subsystems in terms of systemic modeling and to avoid, at all costs, the pitfall of analytical modeling. Rather than considering these subsystems as static organs of the system, whose behavior is independent of the other subsystems, we consider them instead as “black boxes” that accept inflows, emit outflows and change according to these flows and with time. To model this concretely, it is advantageous to use the object concept in the computing sense. Thus, we create an object of a “digital platform” class and several objects all instantiated from a “user” class. Even before knowing the behaviors of these objects, the flows of each object are defined through methods and attributes. We then connect all these objects together, giving rise to the skeleton of the complex system being studied.
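By way of illustration, the following minimal Python sketch shows what such a skeleton might look like. The class, attribute and method names (DigitalPlatform, User, enroll, receive, act) are hypothetical and are only meant to echo the object structure used in the appendices; they are not the code of MOOC.py or Étudiants.py.

```python
# A minimal sketch of the system skeleton: two classes of "black boxes"
# connected by flows, before any detailed behavior is specified.

class DigitalPlatform:
    """The "digital platform" object: it accepts inflows (user actions)
    and emits outflows (content, feedback)."""
    def __init__(self, name):
        self.name = name
        self.users = []              # connections to "user" objects

    def enroll(self, user):
        self.users.append(user)      # wire a user into the system
        user.platform = self

    def receive(self, user, action):
        # Placeholder behavior: the real platform's behavior is entirely
        # defined by its source code (algorithmic complexity).
        return f"response of {self.name} to '{action}'"


class User:
    """A "user" object: its behavior will later be restricted to a
    limited number of stereotyped actions."""
    def __init__(self, profile):
        self.profile = profile       # active variables go here
        self.platform = None

    def act(self, action):
        # The user's outflow becomes the platform's inflow.
        return self.platform.receive(self, action)


# Building the skeleton: one platform connected to several users.
mooc = DigitalPlatform("MOOC")
for profile in ({"assiduity": 0.8}, {"assiduity": 0.3}):
    mooc.enroll(User(profile))
```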

Now, we come to the methodology that should be followed in order to analyze each of these subsystems. It varies greatly between the “digital platform” and the “user” objects. The digital platform on which a MOOC operates is a system of algorithmic complexity. Its analysis is therefore tedious, but its behavior is entirely known. It is up to the modeler to define the different objects that interact within this subsystem, that is, their inflows and outflows and their behavior in response to these flows. This can be done in more or less detail according to the desired fineness1, but there is a maximum level of detail, which corresponds exactly to the digital platform’s source code: this is what shows that it is a system of algorithmic complexity. The modeler could thus even have user-robots (via the “user” objects) interact with the original digital platform; however, this approach is not necessarily the most sensible, because it would require coding “user” objects complex enough to interact with the digital platform, which is extremely difficult and quite possibly pointless.
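As a hedged illustration, the sketch below models a platform subsystem at a deliberately coarse level of detail, keeping only the inflow/outflow that matters for the study (here, hypothetically, quiz answers in and a score out); the names CoarsePlatform and grade are illustrative and do not come from the book's code.

```python
# A coarse-grained "digital platform" black box: inflow = quiz answers,
# outflow = a score and a pass/fail flag.  The finest possible model
# would reproduce the platform's own source code.

class CoarsePlatform:
    def __init__(self, pass_mark=0.5):
        self.pass_mark = pass_mark

    def grade(self, answers, solutions):
        # Deterministic behavior, as expected of an algorithmic-complexity system.
        score = sum(a == s for a, s in zip(answers, solutions)) / len(solutions)
        return {"score": score, "passed": score >= self.pass_mark}


platform = CoarsePlatform()
print(platform.grade(["a", "c", "b"], ["a", "b", "b"]))  # score 2/3, passed
```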

The user of a latest-generation DLE, such as a MOOC, is instead a system of natural complexity, and one can claim without too much risk that a human being is the most complex system we know. The methods of analysis must be different, because it is clearly impossible to contemplate modeling every action of human activity. It is therefore necessary to restrict the diversity of human activity by allowing the “user” object only a limited number of stereotyped behaviors that have an important influence on the evolution of the general system, and to abandon the behaviors deemed irrelevant in the context of the model being studied. The modeler’s approach must follow a cycle of trial and error. First, they list all the parameters likely to influence the user’s behavior: these are the so-called active variables. They then seek information on the relevance of these parameters, either by questioning users via a questionnaire or by “spying” on their behavior in the DLE. Thanks to exploratory data analysis techniques (such as factorial correspondence analysis and, notably, the use of dendrograms), the modeler determines which parameters separate users and which do not. The so-called “separating” parameters are those that allow us to create stereotypical user classes. The modeler can then determine which parameters to include in the modeling of the “user” objects and which to leave aside. At least two options are then available: the modeler can either identify, through the user classes, paragons on which to base the coding of a few types of “user” object, or give random values to the separating parameters of the “user” objects according to a probability law reflecting the statistical data identified previously. The second option was preferred in our study, as it allows better observation of the sensitivity and resistance of the system to changes in initial conditions.
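The second option can be sketched as follows. The separating parameters (assiduity, prior level, forum use) and the probability laws chosen here are purely illustrative assumptions; in practice they would be derived from the questionnaire or tracking data mentioned above.

```python
import random

# Each "user" object receives random values for its separating parameters,
# drawn from probability laws meant to reflect previously collected data.

random.seed(42)  # fixing the seed reproduces a given simulated population

class User:
    def __init__(self):
        # Hypothetical separating parameters and distributions:
        self.assiduity = random.betavariate(2, 5)   # skewed towards low assiduity
        self.prior_level = random.gauss(10, 3)      # prior level on a 0-20 scale
        self.forum_use = random.random() < 0.25     # about 25% of users post on the forum

population = [User() for _ in range(1000)]
print(sum(u.forum_use for u in population))  # roughly 250 forum users expected
```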

Once these subsystems have been coded, they must then be set in motion. A time dimension is added to the code by inserting a script2. We can then observe the system’s behavior by tracking the evolution of the variables that interest us. We can also repeat the modeling several times to gauge the system’s sensitivity to the random settings. The modeler can then compare the system’s different evolutions according to the values of the starting parameters. All this must first be done in order to check the consistency of the model: whether its behavior corresponds well enough to the observed reality of the DLE and whether the variables have been carefully chosen. If not, they have either been poorly chosen or not been modeled with sufficient accuracy within the DLE framework. The modeler then revisits their model in the light of all these data and enters a cycle of perpetual reassessment, until they obtain a model that reflects the observed reality. Once the model is considered coherent, they can finally use it for predictive purposes. They can then choose a particular population, launch the simulation and observe the evolution of the system. They will pay particular attention to the appearance of unexpected phenomena in their model. They can then alert decision-makers and ask themselves the following question: “Given the population using the MOOC, is it possible that such a phenomenon could happen?” But they will have to remain humble in the face of the complexity of the system being studied and never claim that a phenomenon, because it took place in their model, will necessarily take place in reality. In fact, the modeler must remain a scientist in all circumstances and never become a fortune-teller.
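A driving script of this kind might look like the minimal sketch below. The dynamics (a fixed weekly drop-out probability) and the names run_simulation, n_users and n_weeks are hypothetical; the point is only the time loop and the repetition of runs with different random initial populations.

```python
import random

# A time loop advances the system week by week, and the whole simulation
# is repeated with different seeds to gauge its sensitivity to the
# random initial conditions.

def run_simulation(n_users=500, n_weeks=8, seed=0):
    random.seed(seed)
    # Illustrative initial condition: 90% of users start out active.
    active = [random.random() < 0.9 for _ in range(n_users)]
    still_active = []
    for week in range(n_weeks):
        # Illustrative dynamics: each active user may drop out each week.
        active = [a and random.random() > 0.1 for a in active]
        still_active.append(sum(active) / n_users)  # variable of interest
    return still_active

# Repeat the modeling with different seeds and compare the evolutions.
for seed in range(3):
    print(seed, [round(x, 2) for x in run_simulation(seed=seed)])
```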

The conclusions of this book may initially seem disappointing. We indeed renounce a fundamental objective of science: explaining a phenomenon completely and exhaustively. This is what we have tried to do from CALL to ITS, with admittedly more or less success. But the more these technologies evolve, the more irrefutable it seems that these digital learning environments are ultimately complex systems. Yet our mathematical models are unanimous: it is simply impossible to predict the evolution of such a system with certainty. We need not, however, stop at this defeatist observation, and this book attempts to turn it to good use. Other scientific disappointments, such as the impossibility of accurately predicting the weather or even the evolution of a three-body system, have helped to break the deadlock of positivist determinism and to advance the sciences in other ways. Just as we are able to predict the weather for a few days while accepting that the forecast might be wrong, the transition from analytical modeling to systemic modeling will allow us to predict certain aspects of the evolution of digital learning environments, accepting that what is presented is a possible evolution and not an absolute certainty. The price of this paradigm shift is that of scientific rigor: just as in the physical sciences, it will be a matter of accurately defining our uncertainties and determining the limits of our models in advance. In this way, and fortunately, our models of digital learning environments will cease to be outdistanced by technopedagogical innovations; they will finally be able to anticipate these innovations and predict some of their effects. As researchers, we can thus guide our political decision-makers, support our educational engineers and confront the cognitive sciences with the reality of digital tools. For these reasons, we gladly agree to renounce analytical modeling in favor of systemic modeling; as Henry Ford said, “failure is simply an opportunity to begin again, this time more intelligently”3.