
Advances in Information Systems Set

coordinated by
Camille Rosenthal-Sabroux

Volume 9

Traceable Human Experiment Design Research

Theoretical Model and Practical Guide

Nadine Mandran

Wiley Logo


This book proposes a research methodology named THEDRE (Traceable Human Experiment Design Research), the result of testing carried out in human-centered computer science. Since 2008, approximately 50 tests have been monitored, enabling us to understand how best to carry out testing efficiently within the scope of scientific research. In order to perform this methodological work correctly, we have referred to the work of epistemologists and quality managers and studied the research methods currently used in computer science.

We begin with an introduction to the central problem on which we shall share our thoughts. This book is organized into six chapters. Chapter 1 defines the characteristics of human-centered computer science research (HCCSR). Chapter 2 presents the theoretical notions required to develop our research method (i.e. epistemological paradigms, quality processes, data production methods and user-centered approaches). Chapter 3 examines current research methods used in human-centered computer science research. Chapter 4 examines the THEDRE theoretical model. Chapter 5 focuses on the implementation of the THEDRE model as well as on the practical guidelines designed to coach researchers throughout the research process. Finally, Chapter 6 discusses the way in which THEDRE was built and evaluated on the basis of testing carried out from 2008 onward.

The characteristics discussed in Chapter 1 are crucial to the understanding of the THEDRE model. Chapter 2 examines the theoretical foundations of the tools we use. Similarly, in Chapter 3, we discuss the current methods used in order to provide an overview of the state of affairs in this field. However, these two chapters are not essential to the understanding of the method explained in Chapters 4 and 5.

In order to gain a swift understanding of the THEDRE method, we advise readers to begin with the Introduction and Chapter 1, before going directly to Chapters 4 and 5. Chapters 2 and 3 go into further detail (where necessary) on concepts used in THEDRE.


December 2017


Conducting research is a specialist profession because it requires precise knowledge of a field as well as skills in experimental methodology. On this subject, Claude Bernard (1813–1878) stated the following:

“A true scholar embraces both theory and experimental practice. They state a fact; an idea is formed in their mind relating to this fact; they reason about it, set up an experiment, and foresee and create its material conditions. New phenomena result from this experiment, which must in turn be observed, and so forth.”

Young researchers do not systematically possess experimentation skills, and they often find themselves in difficulty when experimental steps must be developed. Experimental processes are trickier to implement when it comes to studying humans, particularly when the context within which they live must also be considered. This investigation is all the more difficult in that it requires mobilizing methods from the Humanities and Social Sciences (HSS).

This problem was identified within research on technology, which requires involving users in order to develop and evaluate scientific knowledge. Users are defined as people who are mobilized by the researcher and from whom the researcher may choose to build an activity model, for example. They are the end-users of the applications produced by research work. This type of research is therefore faced with integrating humans and their environments (family, professional, etc.). It is referred to as “human-centered computer science research” (HCCSR).

We have worked with PhD students since 2008 to support them in the development of these multidisciplinary experimental protocols, giving them the tools they need to address the problems in their theses. We co-supervised work at the Laboratoire d’Informatique de Grenoble (LIG, Grenoble Informatics Laboratory), in other laboratories (Cristal Lille, LIP6 Paris, IFE Lyon, Saint Etienne Department of Geography at the Université Jean Monnet) and in two companies during the follow-up of CIFRE theses. At the date of publication, we have followed a total of 29 PhD studies and six research projects.

Within the framework of these studies and projects, five specialist areas concerning HCCSR have been defined: (1) engineering of human–machine interfaces (HCI), (2) information systems (IS), (3) technology-enhanced learning (TEL), (4) multi-agent system engineering (MAS) and (5) geomatics (GEO). The research objectives of these specialist areas, as well as their applicable tools, differ. This being said, some common points have been identified: (1) the need to integrate the user and their context at certain points in the process, and/or to develop or evaluate the object, (2) the need to develop a tool so that user testing can be carried out and (3) the need to develop the above in an iterative manner in order to encourage the evolution of both tools and research.

Over the course of this work, we also identified a lack of best practice concerning the traceability of the various steps researchers follow in carrying out their research work. Traceability plays an important role as it guarantees a certain level of repeatability of results in the field of HCCSR. The notion of traceability of research corresponds to the capitalization of the completed actions, the data and documents produced, and the results obtained. According to Collins, the verb “to capitalize” is defined as “using something to gain some advantage for yourself”. As such, capitalization does not simply involve archiving, but rather the combination of a set of functions such as storage, accessibility, availability, relevance and reuse, in order to produce benefits and new abilities. The above definition forms the basis of this book, in which capitalization is examined.

The challenge of integrating humans into the experimental process of HCCSR, as well as into the traceability of this type of research, may seem surprising: not only have HCCSR methods been formalized, but an abundance of literature concerning data production methods is also available [CRE 13], and engineering methods for computer science are taught [MAR 03]. However, work carried out by researchers showed that this specialist activity is difficult to acquire, especially for experiments that require a human component in order to develop and evaluate scientific knowledge with technical content.

This book aims to provide a response to this problem via the THEDRE approach. It is intended to support PhD students and researchers in their research work by focusing on experimental aspects within a multidisciplinary context, and to provide them with the tools required to ensure their work is traceable. THEDRE also aims to develop knowledge of experimental HCCSR practices among young PhD students, in order to respond to emerging research and to link this work with quality management tools.

THEDRE is a global approach that encourages the use of quality management tools, namely the Deming cycle and quality indicators for the research process and for data quality. THEDRE is formalized using a vocabulary designed to structure the research process. This vocabulary enables each researcher to refer to this process while also adapting it to their specialist field (e.g. HCI engineering, IS engineering and TEL). The research process developed by the researcher enables them to monitor research projects and to support PhD students by applying quality management principles to the process.

Before describing our approach, we will first examine the elements produced by HCCSR as well as their characteristics in order to discover the way in which they are developed and evaluated. We have classified this research as a science of the artificial [SIM 04].

Human-Centered Computer Science Research (HCCSR)

1.1. Concepts and features of HCCSR

Introducing the user is one of the principal characteristics of HCCSR. The second fundamental characteristic of this type of research is its dual purpose. On the one hand, it focuses on the production of scientific knowledge; on the other hand, it looks at tools to support human activity (e.g. language, dictionary, interface, model). These two focus points are completely intertwined and interdependent. As such, professional expertise can be modeled using a language (e.g. a UML extension), with the resulting model contributing to the design of a computer application. This language constitutes scientific knowledge. The computer application is a tool in the sense that the user is able to use it to perform an activity (such as developing a specific information system or providing a new interactive tool). This tool is dependent on the scientific knowledge produced, and users can use it to produce new scientific knowledge. For example, modeling a professional task in a way that takes execution time into account requires the modeling language to be developed further.

First, we will provide clarification for two terms in order to avoid confusion when reading this book.

Methodology is “a branch of logic used to study the approaches of various sciences. A set of rules and processes that are used to lead research” [TLF 16]. Etymologically, “methodology” comes to us through Latin, borrowed from Ancient Greek: methodos, the pursuit or search of a pathway, and logos, which signifies knowledge or discipline. Methodology is therefore the study of methods, in order to create new ones and to help them evolve.

The method is the result of work carried out in methodology. The definition that we follow is: “A way of doing something by following a certain habit, according to a certain design or with a certain application” [TLF 16]. The term “approach” also requires clarification. In this book, we will discuss research methods, experimental methods, data production methods and data analysis methods.

We will also define six terms in order to clarify the terminology used as part of HCCSR.

Scientific knowledge in the context of HCCSR: this represents the production of research. It is developed on the basis of prior knowledge. The construction of new knowledge brings added value to previous scientific knowledge. This added value will be evaluated during the testing phases. Within HCCSR, scientific knowledge takes different forms, such as models, concepts, languages and applications. For example, Ceret [CER 14] produced a new process model for HCI design. We will refine the definition of scientific knowledge within HCCSR by positioning it within an epistemological paradigm (section 4.2). The epistemological paradigm corresponds to the way in which scientific knowledge is built and evaluated, with or without taking into account humans and their context.

Activatable tool: this represents the scientific knowledge in a form that can be accessed by the user. The activatable tool is the medium between the user and scientific knowledge. If it is supported by a technique (such as an application), then it is “dynamic”. If the tool exists but is not supported by a technical device, then it is “static”. In practice, it may take the form of a dictionary of concepts designed to support the development of a conceptual model: terms and definitions from this dictionary are presented to users with the aim of enabling them to share their opinion on the proposed terms. It may also take the form of a paper model used to observe the primary reactions of a user, or of a computer application in beta that the researcher wishes to improve and/or for which the user reports the difficulties encountered during testing. For information systems (IS), it may take the form of symbols representing concepts designed and validated by users. During testing, the activatable tool is built, improved and evaluated. In some cases, the activatable tool can be split into subparts referred to as “activatable components”.

Activatable components are the various parts of the activatable tool: these parts form a whole but they can be separated from each other, enabling them to be developed and evaluated along with the user. The components themselves are activatable tools in the sense that they can be used by the user. Activatable components are built and evaluated independently from each other, both with and without users. The composition of components forms a whole, this being the activatable tool. For example, a geomatics application [SAI 16] designed for SNCF officials responsible for railway maintenance is composed of terminology specific to this profession, data organization, features relevant to the officials, a key of the symbols used and a human–machine interface. The various activatable components forming the activatable tool as well as their development progress must be identified in order to build and evaluate them during the testing phase. This breakdown brings maximal precision to what needs to be developed or tested. It also enables experimental objectives to be accurately defined.

Instrument, composed of scientific knowledge and the activatable tool: as a general rule, scientific knowledge and the activatable tool are intertwined and interdependent. Aboulafia [ABO 91], addressing the complementary relationship between artifact and theory, “argues that truth (justified theory) and utility (artifacts that are effective) are two sides of the same coin and that scientific research should be evaluated in light of its practical implications”.

Testing: a research step for collecting and analyzing field data in order to develop and evaluate a research instrument. More specifically, testing enables the activatable tool to be developed and evaluated for the instantiation of scientific knowledge. This step can serve to mobilize users in their own setting (in situ) in order to collect their representations of the “known world”. The user can also be studied outside this setting (in a laboratory). Testing also enables technical features of the activatable tool to be tested without necessarily requiring input from the user (e.g. performance or speed testing). A number of tests are carried out within the framework of HCCSR in order to develop and evaluate the instrument. An experimenter is a person who manages in situ testing with the user. This field management is referred to as “experiment management”.

To illustrate these terms, we will refer to the DOP8 model proposed by Mandran et al. [MAN 15] by using an example. The purpose of this model is to define concepts and their relationships to support developers in the development of data analysis platforms that combine three features: (1) data production (light gray part of the graph), (2) production of data analysis operators (dark gray part) and (3) data analysis (black part), i.e. the implementation of data operators to produce results that can be interpreted. The end-user of this type of platform is not an expert in data analysis. For example, in terms of data production, a teacher collects their pupils’ marks in Mathematics and French. In terms of operator production, a developer implements an operator that calculates the level of pupil success. For analysis, teachers link the operator to the data produced. To do this, they need an environment in which they can link data to operators and produce results. The DOP8 model formalizes the following three concepts: instrument, scientific knowledge (see Figure 1.1, right) and activatable tool and components (see Figure 1.1, left).


Figure 1.1. Illustration of concepts applied to the DOP8 model: instrument, scientific knowledge and activatable tool and components

To build the DOP8 model, data analysis experts were observed in order to build a model and a tool accessible to non-experts. They were observed while carrying out work in the field. Following this, an expert-tested activatable tool was built in beta. It was later improved and tested by non-experts. Today, this activatable tool takes the form of a website1 composed of two activatable components: terminology and a set of functions (see Figure 1.1, right). The Undertracks [UND 14a, UND 14b] website is one of the possible instantiations of the DOP8 model. The research instrument contains the DOP8 model and its instantiation in the form of a website.
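The teacher example above can be made more concrete with a short sketch of the three DOP8 roles. This is our own illustrative sketch: the data structures, the “success_rate” operator and the pass mark are hypothetical examples, not part of the DOP8 model or the Undertracks platform.

```python
# Hypothetical sketch of the three DOP8 roles (data, operators, analysis).
# All names and values are illustrative; this is not the Undertracks API.

# 1) Data production: a teacher records pupils' marks in Mathematics and French.
marks = {
    "Alice": {"maths": 14, "french": 11},
    "Bob": {"maths": 8, "french": 9},
    "Chloe": {"maths": 16, "french": 13},
}

# 2) Operator production: a developer implements a reusable analysis operator.
def success_rate(pupil_marks, pass_mark=10):
    """Proportion of subjects in which a pupil reaches the pass mark."""
    subjects = list(pupil_marks.values())
    return sum(m >= pass_mark for m in subjects) / len(subjects)

# 3) Data analysis: the end-user (the teacher) links the data to the operator
# and obtains an interpretable result, without being a data analysis expert.
results = {pupil: success_rate(m) for pupil, m in marks.items()}
```

The point of the separation is that the teacher never touches the operator's internals: the platform's role is precisely to let data produced in one place be linked to operators produced elsewhere.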

As such, HCCSR is characterized by research whose goal is to produce an instrument that combines scientific knowledge and an activatable tool. In order to develop these tools, users and their contexts are integrated into the research process. The activatable tool acts as the medium between the user and scientific knowledge. In order to engage users in the research process, testing is carried out with the aim of producing data. Analysis of the latter facilitates the development of both scientific knowledge and the activatable tool. HCCSR is therefore research in which the instrument is composed of scientific knowledge and an activatable tool (link symbol) (see Figure 1.2). The researcher calls upon users during iterative testing (see the cycle symbol in the diagram) in order to build and evaluate scientific knowledge and the activatable tool. The activatable tool is created by the researcher using human observation; in return, this activatable tool facilitates a better understanding of humans and scientific knowledge in HCCSR. This duality is characteristic of the science of the artificial, which will be addressed in the next section.


Figure 1.2. Features of the HCCSR composed of scientific knowledge, linked to an activatable tool (link symbol) and built using successive iterations (loop symbol)

1.2. HCCSR: science of the artificial

With respect to computer science, J.L. Le Moigne notes that “to be understood, systems must first be built, then their behaviors observed”. The development of artificial objects is required for research to progress. He adds that “theoretical analysis must be accompanied by a lot of experimental work”. As such, artificial objects must be developed along with the user and their context. Following this, they should be tested with this same user.

To clarify the specifics of objects designed within the sciences of the artificial, Simon [SIM 04] uses the example of a clock. It was designed with the intention of giving the time; it can be described using physical elements (e.g. cog wheels), properties (e.g. the forces generated by springs) and an operating environment (e.g. dividing up the hours, the place of use). As such, the design of an “artificial object” involves multiple elements: “intention, characteristics of the artificial object (i.e. properties and physical elements) and the environment in which it is implemented”. The artificial object2 “can be perceived as an interface between an ‘internal’ environment, the substance and organization of the artifact itself, and an ‘external’ environment, the surroundings in which it is implemented”.

Addressing artificial objects, Simon [SIM 04, p. 31] offers “the frontiers of sciences of the artificial”:

  • Proposal 1: Artificial objects are synthesized by humans, although not always within the scope of a clear or forward-facing vision.
  • Proposal 2: Artificial objects can mimic the appearance of natural objects, although they lack the “reality” of the natural object in one or more aspects.
  • Proposal 3: Artificial objects can be characterized in terms of functions, goals and adaptation.
  • Proposal 4: Artificial objects are considered in both imperative and descriptive terms, particularly during their design.

From our point of view, an artificial object proposed by HCCSR meets these characteristics for the following reasons:

  • the final version of the object is not always known at the start of the development process, and the various steps constantly change its condition, causing it to develop in line with user needs and contexts; this is because a true forward-facing vision does not exist (Proposal 1);
  • because the vision is not necessarily clear, building the object requires several consultations with users during the building, development and evaluation of the object; a number of iterative testing phases are involved (Proposal 1);
  • it is built to meet an intention (Proposal 3) (e.g. teaching surgery using a simulator);
  • in order to be operational, this object attempts to meet the needs of users in a given context (Proposal 4) (e.g. a simulator used for surgery will be useful for teaching interns);
  • this object resembles a natural object in the sense that it will replace certain human-activated tasks (Proposal 2) (e.g. using a haptic arm with force feedback in order to carry out the operation using a simulator).

As such, on the basis of these four proposals, HCCSR can be defined as a science of the artificial: scientific knowledge is built by referring to user behaviors and practices in order to design objects that serve given purposes. These objects can be used in a given context. The use of these objects helps to refine the understanding of behaviors and the development/improvement of practices. In turn, these developments enable progress in scientific knowledge. As a result, this is an iterative process.

In conclusion, HCCSR is a science of the artificial that produces scientific knowledge in conjunction with an activatable tool. These productions are constructed iteratively along with users. The activatable tool acts as the medium between the user and scientific knowledge. It is within the context of the sciences of the artificial that our research method is anchored.

The next section examines the difficulties related to the evaluation of scientific knowledge in HCCSR.

1.3. Difficulties in building and evaluating HCCSR instruments

Examining research methodologies for building and evaluating HCCSR instruments is complex for various reasons. This work must include a multidisciplinary dimension and a transverse dimension. Such works are multidisciplinary in the sense that they are concerned with problems linked to computer science that require the use of Humanities and Social Sciences (HSS) approaches. They are transverse because the problem is present in various specialist fields of research in computer science. As such, we have been able to observe the problem within the five specialist areas mentioned previously: HCI, TEL, IS, MAS and GEO. Multidisciplinarity and transversality are the primary sources of complexity in this problem.

When designing and carrying out this experimental work, human-centered computer science researchers face the following challenges:

  • The complexity of the field to be investigated, namely humans in ecological situations: research led with the aim of building the instrument sits within a global context. On the one hand, the testing strategy must include hands-on experience in ecological situations. On the other hand, the object of study is human, with all its complexity and inconstancy. For example, in TEL research, building an application requires studying pupils’ and teachers’ behavior as well as their interactions in the classroom, and in some instances, the curriculum too. Sein et al. [SEI 11] highlight this requirement: “A new research method is needed to conduct research that recognizes that the artifact emerges from interaction with the organizational context even when its initial design is guided by the researchers’ intent”.
  • A testing strategy that integrates users with a dual purpose: for the sciences of the artificial, Simon [SIM 04] advised alternating between design and evaluation phases for solutions in line with requirements, until a satisfactory design is achieved. A particularity of HCCSR testing is that the instrument must be both built (i.e. analyzed and designed) and evaluated along with users. User integration for these objectives is important in order to correctly model human activity and to produce a relevant research instrument. The problem is identifying the point in the research process at which it is appropriate to integrate the user, and for what purpose: to build or to evaluate.
  • Limitations of case studies in terms of time and quantity: recruiting people to participate in the build process of these tools is difficult. Few people are available to participate in the construction and evaluation of these instruments, and the time investment is considerable. These two difficulties lead the researcher to reduce the number of people integrated into the research. However, for statistical methods, a minimum number of people is required in order to carry out valid analyses [HOW 08]. For example, a percentage is a reduction to a base of 100. If our sample contains fewer than 100 individuals, calculating a percentage therefore implicitly extrapolates the information: if the percentage is calculated over 65 individuals, the data are extrapolated to the 35 missing people. Deming [DEM 65] highlighted the risks behind these practices, as the underlying data are distorted by this type of calculation. The result produced is not correct, and incorrect interpretations can be made.
  • Combining data production and analysis methods: it is important to offer alternatives to statistical/quantitative methods, which are applied primarily in the evaluation phase but cannot be implemented while the instrument is being built. It is best to combine investigative methods: qualitative methods to facilitate understanding and exploration [PAI 11], and quantitative methods to quantify and validate [HOW 07]. For example, in order to evaluate a dictionary of concepts within a specialist area comprising few users, it is impossible to use quantitative measurements and statistical tests. The interview or focus group method can instead be used to identify business practices and therefore to develop a conceptual model.
  • A composite activatable tool: the activatable tool is a composite (e.g. terminology, conceptual model, features, applications, HCI, language). The various components that make up the activatable tool must be identified in order to build and evaluate it [GRE 13, MAN 13]. The difficulty lies in identifying these components and knowing whether it is possible to build them independently and, if so, how.
  • Iterative processes for building the activatable tool: designing an activatable tool requires iterative processes. In computer engineering, many methods of this type exist (e.g. Agile methods [WIK 16]). Difficulties arise in determining the frequency of iterations, the elements to capitalize on at each iteration, and the point at which iterations should stop. Furthermore, these engineering methods do not refer to HSS studies, and so do not include users.
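The small-sample fragility raised in the percentage example can be illustrated numerically. The sketch below is our own illustration (the figures are hypothetical, not data from [DEM 65]): with 65 individuals, a single person moves the reported percentage by more than 1.5 points, whereas with 1,000 individuals the same change moves it by only 0.1 of a point.

```python
# Illustrative sketch: how sample size affects the stability of a percentage.
# The figures below are hypothetical examples, not data from the book.

def percentage(successes, n):
    """Express successes out of n as a reduction to a base of 100."""
    return 100 * successes / n

# With n = 65, one additional individual shifts the percentage by ~1.54 points.
shift_small = percentage(33, 65) - percentage(32, 65)

# With n = 1000, the same change shifts the percentage by only 0.1 of a point.
shift_large = percentage(501, 1000) - percentage(500, 1000)
```

In other words, the smaller the sample, the more weight each individual implicitly carries once the result is restated on a base of 100, which is one concrete form of the distortion Deming warned against.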

Numerous studies exist within the user-centered approach and in participative design. Data production methodologies for building and evaluating these instruments are also numerous. These works demonstrate the value of user integration, phased approaches and the combination of qualitative and quantitative methodologies. However, the organization of these various steps, and the ways in which data quality and results are tracked and accounted for in each phase, are not summarized anywhere. This need for organization and tools is especially crucial for the support and training of researchers in a multidisciplinary context.

This book attempts to address the following problem:

How do we build and evaluate instruments produced by HCCSR in a multidisciplinary context within the scope of guiding research?

This problem can be broken down into two questions:

  1) Which data production and analysis process could be implemented in order to build and evaluate instruments produced by HCCSR?
  2) How do we ensure the traceability of this process? How do we also guarantee the quality of the research results?

1.4. Conclusion

Responding to the previous questions requires giving thought to HCCSR methods, so as to identify useful steps and tools for the creation of scientific knowledge in this field, and to the ways in which the creation of knowledge can be tracked. Working in the field of methodology implies defining how scientific knowledge is produced, as well as being situated within a scientific framework dedicated to selecting appropriate epistemological paradigms. The latter is “a theoretical conception prevailing in a given scientific community, which grounds the types of explanation that may be envisaged, as well as the types of facts to be discovered in a given scientific field”. In other words, it is the way in which scientific knowledge is created and evaluated in a scientific field. The researcher’s choice of epistemological paradigm justifies the way in which they will lead research processes and account for the value and validity of the knowledge produced.

To do this, an epistemological paradigm must first be chosen. The researcher will thereby define a scientific framework for HCCSR and the way in which the instrument (i.e. scientific knowledge and activatable tool) will be built and evaluated in the field of HCCSR. The epistemological paradigm focuses on the way in which the “known world” is mobilized by research. Within the framework of HCCSR, this “known world” is addressed via the user, who provides a representation of the field. To address this point, our approach focuses on user integration in order to produce data with which to build and evaluate scientific knowledge. As such, we summarize data production and analysis methods in order to identify measurement tools for user activity. Our approach is user-centered: it is therefore logical to refer to the tools of user-centered approaches. Within the framework of this research, users are involved in a regular and iterative way to build the research instrument. These various user interactions must be tracked to ensure the quality of the data produced and, ultimately, the quality of the results. For this, we have chosen a set of quality management tools and studies on data quality indicators.

As such, this book proposes a research method for HCCSR. Emphasis is placed on carrying out user testing and on the traceability of testing. The book is aimed at HCCSR researchers who wish to lead iterative research with users and who are not familiar with the concepts of HSS testing. One of the objectives of our approach is also to provide guidelines concerning best practices. The approach caters primarily to young researchers in computer science who have little knowledge of data production or analysis methods in the field of HSS, and who need tools to guide them during their research.