Aliya Amirova
5 min read · Jun 12, 2021


The Human Behaviour-Change (HBC) Project offers consistency in the vocabulary used to describe behaviour change evidence, a structured knowledge system of evidence, and its representation in a computable format. These three achievements bring efficiency, consistency, reproducibility, and ease to how evidence is searched, represented, and analysed in its full breadth and detail. Furthermore, such an approach makes the best of available AI methods to advance behaviour change research.

The consistency in the vocabulary — a lingua franca

Unlike the medical sciences, where database searches are supported by MeSH terms and the Disease Ontology (Schriml et al., 2012), or the biological sciences, which make use of taxonomies (Thessen et al., 2012), evidence searches in behaviour change research are hindered by the lack of a conventional, clear, and consistent vocabulary.

Characteristics of Behaviour Change Interventions (BCIs) can be described in terms of Behaviour Change Techniques (BCTs; Michie et al., 2013), settings (Norris et al., 2020), facilitators, delivery modes (Marques et al., 2020), theoretical constructs (West et al., 2019), and mechanisms of action (Carey et al., 2018). The clear and consistent vocabulary offered by these taxonomies supports comprehensive literature searches. This is especially useful for reviews when BCIs are described in consistent terms from the start, at the intervention development stage. Such consistency enables inclusive coverage of evidence spanning different disciplines and schools of thought, and it makes searching scalable. Thus, this lingua franca for behaviour change research bears fruit: comprehensiveness and breadth of coverage, and scalability and consistency in how evidence is synthesised.
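
As a rough illustration of what a shared vocabulary buys in practice, consider representing each intervention report as a structured record annotated with BCT Taxonomy v1 labels. The taxonomy labels below (e.g. "1.1 Goal setting (behaviour)") come from Michie et al. (2013); the record fields, study identifiers, and helper code are hypothetical, a minimal sketch rather than anything the HBC Project itself prescribes.

```python
from dataclasses import dataclass, field

@dataclass
class InterventionRecord:
    """A hypothetical, minimal record describing one BCI report
    using shared taxonomy labels rather than free-text summaries."""
    study_id: str
    bcts: set = field(default_factory=set)  # BCT Taxonomy v1 labels (Michie et al., 2013)
    setting: str = ""                       # e.g. a label from the setting taxonomy (Norris et al., 2020)
    delivery_mode: str = ""                 # e.g. a label from the mode-of-delivery taxonomy (Marques et al., 2020)

corpus = [
    InterventionRecord("study_001",
                       {"1.1 Goal setting (behaviour)", "2.3 Self-monitoring of behaviour"},
                       setting="primary care", delivery_mode="face-to-face"),
    InterventionRecord("study_002",
                       {"2.3 Self-monitoring of behaviour"},
                       setting="community", delivery_mode="app"),
]

# Because every record uses the same labels, a literature search becomes a
# simple, scalable filter rather than a manual reading exercise.
hits = [r.study_id for r in corpus if "2.3 Self-monitoring of behaviour" in r.bcts]
print(hits)  # ['study_001', 'study_002']
```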

A structured knowledge system — the HBC ontology

The limitations of traditional reviews are rooted in heterogeneity: systematic variance in effects brought about by different study procedures, contexts, settings, population samples, and BCI content. Such heterogeneity is inescapable given the complexity of the subject: dynamic and complex human behaviour, in dynamic and diverse environments, in response to complex interventions. Some methods (Borenstein et al., 2009) offer ways to explore heterogeneity through grouping and data representation. However, the reproducibility of such reviews is often compromised by the flexibility in how complex BCIs can be described, annotated, grouped, and compared. Representing knowledge in a structured and well-defined ontology provides a reliable grouping of BCIs. Grouping consistent with the ontology overcomes the limitations introduced by heterogeneity in samples, methods, and intervention characteristics. Furthermore, it helps answer questions such as "What (i.e. which BCTs, for whom, and where) works in what combinations?" and "What works better than the rest?" (Michie & Johnston, 2017).
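
To make the grouping idea concrete, here is a deliberately simple sketch, not the HBC Project's actual pipeline, of how ontology-consistent labels let studies be pooled within well-defined groups. The study identifiers, effect sizes, and variances are invented, and the pooling uses plain inverse-variance weighting only for illustration.

```python
from collections import defaultdict

# Invented data: (study_id, ontology-consistent BCT label, effect size d, variance of d)
studies = [
    ("study_001", "1.1 Goal setting (behaviour)",     0.30, 0.02),
    ("study_002", "1.1 Goal setting (behaviour)",     0.25, 0.03),
    ("study_003", "2.3 Self-monitoring of behaviour", 0.45, 0.04),
    ("study_004", "2.3 Self-monitoring of behaviour", 0.38, 0.02),
]

# Group studies by the shared label, then pool within each group using
# simple inverse-variance weights (the most basic form of synthesis).
groups = defaultdict(list)
for _, bct, d, var in studies:
    groups[bct].append((d, var))

for bct, estimates in groups.items():
    weights = [1 / var for _, var in estimates]
    pooled = sum(w * d for (d, _), w in zip(estimates, weights)) / sum(weights)
    print(f"{bct}: pooled d = {pooled:.2f} (k = {len(estimates)})")
```

Because every study is coded against the same ontology, the grouping step is no longer a judgement call made afresh by each review team, which is exactly where reproducibility is usually lost.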

Our understanding of why and how BCIs produce their effects is impeded by the lack of appropriate methods. A meta-analysis offers claims about association, not causality. However, an ontology built into a meta-analysis as a set of assumptions helps formulate causal claims and seek evidence that supports or refutes them (Michie & Johnston, 2017; West et al., 2019). Furthermore, a detailed knowledge representation helps to gain statistical precision and power (Leeuw et al., 1998; Sterne, Higgins, et al., 2011; Pearl, 2009) for causal inference and the evaluation of the BCIs’ effects.
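
One conventional way such assumptions enter an analysis, offered here as a sketch rather than any model specific to the HBC Project, is a mixed-effects meta-regression in which ontology-derived characteristics act as moderators:

```latex
% Sketch of a conventional mixed-effects meta-regression:
% y_k is the observed effect in study k, and x_{jk} indicates whether
% ontology class j (a particular BCT, setting, or population group)
% applies to study k.
\begin{align}
  y_k &= \beta_0 + \sum_{j} \beta_j x_{jk} + u_k + \varepsilon_k, \\
  u_k &\sim \mathcal{N}(0, \tau^2), \qquad \varepsilon_k \sim \mathcal{N}(0, v_k).
\end{align}
% Coding the x_{jk} consistently against the ontology reduces
% misclassification, which shrinks the residual heterogeneity \tau^2
% and tightens the standard errors of the moderator effects \beta_j.
```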

On the other hand, the assumptions embedded in an ontology may bias evaluation. The data used to generate query responses from a Machine Learning (ML) algorithm may also bias the findings in unpredictable ways. Both sources of bias compromise the validity of conclusions. Furthermore, the lack of explainability of ML outputs, if handled inappropriately, is dangerous given that the findings are used to inform public health policies and local decisions. Therefore, it is essential to design a system that is fully explainable and satisfies the principles of transparency, accountability, and fairness (Reddy et al., 2020). In addition, ongoing maintenance and development of the system require users’ trust and engagement so that it can be continuously updated.

In health psychology, we evaluate evidence to support or reject behaviour change theories. Such a workflow is slow and offers too few practical answers (Michie et al., 2005). Sometimes a purely empirical approach scatters valuable data insights without appropriate synthesis (Michie & Johnston, 2012). An ontology can be refined incrementally as new evidence arises (Wright et al., 2020), facilitating living reviews. An inclusive representation of behaviour change research evidence promotes the field's rapid growth, bringing the available evidence together without any need to dispute or defend any single theory or hypothesis.

To summarise, the Human Behaviour-Change Ontology equips the reviewer with tools to explore the heterogeneity of intervention effects. It opens new opportunities for investigating the causal structure of BCIs (i.e. why and how BCIs work). It helps make the most of the available evidence by increasing the power and precision of analysis. Finally, it ensures the timely and continuous renewal of the knowledge representation in response to new evidence, which promotes scalability. However, the system should be built with careful consideration of explainability and ethics, and public engagement should be continuously encouraged.

A representation in a computable format

Traditional reviews are further confined by the lack of reproducible methods for rigorously combining evidence from studies with different research designs (Roberts et al., 2002; Spiegelhalter et al., 2004). A computable format may support systematic, consistent, and reproducible synthesis, evaluation, and comparison of evidence generated by various study designs.

The traditional review process is lengthy and tedious. A diagrammatic and quantitative representation of evidence (West et al., 2019) makes such data compatible with ML methods. This promises semi-automated, scalable, rapid, and up-to-date synthesis of an otherwise insurmountable volume of BCI research papers.
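
To give a flavour of what "semi-automated" could mean, here is a deliberately naive sketch that flags candidate BCT mentions in an abstract using taxonomy-derived keywords. The keyword lists and the abstract are invented; the real HBC Project trains information-extraction models rather than matching keywords. The point is only that a shared, computable vocabulary is what makes any such automation possible.

```python
# Naive keyword matching as a stand-in for trained information extraction.
BCT_KEYWORDS = {
    "1.1 Goal setting (behaviour)": ["goal setting", "set a goal"],
    "2.3 Self-monitoring of behaviour": ["self-monitor", "diary", "pedometer"],
}

def candidate_bcts(abstract: str) -> set:
    """Return taxonomy labels whose cue phrases appear in the abstract."""
    text = abstract.lower()
    return {bct for bct, cues in BCT_KEYWORDS.items()
            if any(cue in text for cue in cues)}

abstract = ("Participants set a goal for daily steps and kept a pedometer "
            "diary reviewed weekly by a nurse.")
print(candidate_bcts(abstract))
# e.g. {'1.1 Goal setting (behaviour)', '2.3 Self-monitoring of behaviour'}
```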

Since the dataset of interest (i.e. the available BCI studies) is small by ML standards, an ontology expressed in a computable format may even be critical for getting data-hungry methods like supervised deep learning off the ground. Furthermore, the Human Behaviour-Change Project promotes the compatibility of behaviour change approaches with computational modelling, which may improve the rigour with which behaviour change models are developed.
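
The point about data-hungriness can be illustrated with a toy feature-construction step (the class labels and study below are hypothetical). When studies arrive pre-annotated against the ontology, free text collapses into a short, fixed set of binary features, which a statistical or ML model can learn from a few hundred studies far more easily than it could learn from raw text.

```python
# Toy illustration: ontology annotations turn each study into a short,
# fixed-length binary feature vector instead of raw text.
ONTOLOGY_CLASSES = [
    "1.1 Goal setting (behaviour)",
    "2.3 Self-monitoring of behaviour",
    "setting:primary care",
    "delivery:app",
]

def to_features(annotations: set) -> list:
    """One-hot encode a study's ontology annotations."""
    return [1 if cls in annotations else 0 for cls in ONTOLOGY_CLASSES]

study_annotations = {"2.3 Self-monitoring of behaviour", "delivery:app"}
print(to_features(study_annotations))  # [0, 1, 0, 1]
```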

The findings of traditional reviews are not transferable from one context to another, which limits the generalisability of the resulting conclusions (Higgins et al., 2011). For example, the findings of a review of physical activity interventions for adults may not apply to juvenile populations, and what was found to work in urban settings may not work in rural areas. With ML, data can be partitioned by many factors simultaneously, enabling some inference about transferability across contexts. ML may also help generate new insights from patterns in the data.
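
As a hedged sketch of what partitioning by several factors might look like in practice (the effect sizes and labels are invented, and this is not the project's actual method), one can stratify annotated studies by context and check whether an estimate learned in one stratum holds in another:

```python
# Invented data: effect sizes annotated with population and setting labels.
studies = [
    {"d": 0.32, "population": "adults",      "setting": "urban"},
    {"d": 0.28, "population": "adults",      "setting": "urban"},
    {"d": 0.15, "population": "adults",      "setting": "rural"},
    {"d": 0.05, "population": "adolescents", "setting": "urban"},
]

def mean_effect(records, **context):
    """Average effect size within one combination of context labels."""
    subset = [r["d"] for r in records
              if all(r[k] == v for k, v in context.items())]
    return sum(subset) / len(subset) if subset else None

# Estimate in one context, then check whether it transfers to another.
urban_adults = mean_effect(studies, population="adults", setting="urban")
rural_adults = mean_effect(studies, population="adults", setting="rural")
print(urban_adults, rural_adults)  # 0.30 vs 0.15: the effect may not transfer
```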

Overall, the computable format addresses poor generalisability and the inability to combine epistemologically diverse studies, and it makes a living systematic review possible.



Aliya Amirova

Research Associate, Institute of Psychiatry, Psychology & Neuroscience, King’s College London.