Modelling, uncertainty and decision making

How do we represent uncertainty and decisions in the models we use to help make decisions? I’ve been to quite a few events in the past couple of months looking at how we better represent reality in energy and infrastructure system modelling, so I have been dwelling on this question a lot. The events tried to assess how well uncertainty is characterised in current models, or how to better represent governance structures in engineering models. The models we discussed ranged from cost optimisation models to engineering system models, but the discussion always seemed to drift to whether these models could or should reflect the realities of how decisions are made. I don’t think we really got any nearer to answering this question, but some really interesting points were raised along the way about:

  • The different kinds of uncertainty that exist and how we do (or don’t) recognise and manage them when we use models or their outputs.
  • The danger of over-interpreting results from a model with a particular aim (e.g. cost optimisation) or scope.
  • The complexity of decision making and the challenge of ‘quantifying’ it.

Different types of uncertainty

Uncertainty is a really tricky thing to manage in general, and in modelling in particular, not least because there are so many interpretations and definitions of the word itself. Several people have tried to define the many different aspects of uncertainty, from quite a quantitative/modelling perspective to a more qualitative/individual decision making perspective. While the terminology is inconsistent, many authors agree there are different types of uncertainty that need to be managed in different ways. The first type, often called aleatory uncertainty or risk, comes from random errors inherent in data sampling and measurement. The second, often called epistemic uncertainty, comes from lack of knowledge or statistical biases. The third type, often called indeterminacy, irreducible uncertainty or recognised ignorance, is the inability to observe or capture all relevant factors affecting the phenomenon of interest. These types have very different implications for how we manage and communicate the uncertainty associated with our analysis, and they should be recognised and treated differently.
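To make the distinction concrete, here is a minimal sketch (my own toy example, not from any of the models discussed) of how the first two types show up in a simple demand forecast: the epistemic range could be narrowed with better knowledge, while the aleatory spread would remain however much we learned.

```python
# A toy illustration of aleatory vs epistemic uncertainty in a demand
# forecast. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
n_runs = 10_000

# Epistemic uncertainty: we do not know the true growth rate, so we
# represent our lack of knowledge as a range of plausible values.
growth_rate = rng.uniform(0.01, 0.03, n_runs)   # 1-3% per year, assumed

# Aleatory uncertainty: year-to-year random variability that would persist
# even if the growth rate were known exactly.
weather_noise = rng.normal(0.0, 0.05, n_runs)   # 5% std deviation, assumed

base_demand = 100.0                              # arbitrary units
demand_2030 = base_demand * (1 + growth_rate) ** 10 * (1 + weather_noise)

# Better data can narrow the epistemic range; the aleatory spread remains.
print(f"mean {demand_2030.mean():.1f}, 5-95% range "
      f"{np.percentile(demand_2030, 5):.1f}-{np.percentile(demand_2030, 95):.1f}")
```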

Assessing modelling uncertainty

The presence of so many different sources of uncertainty could erode our trust in any model (or mean that we ignore them all and trust the model implicitly, which would arguably be worse). Fortunately, the Netherlands (which seems to be ahead in all things uncertainty-related) has published Guidance for uncertainty assessment and communication to help policy makers interpret and communicate the outputs of models and data in a way that reflects the underlying uncertainties. The NUSAP framework, developed in parallel by the godfathers of uncertainty, Jerry Ravetz and Silvio Funtowicz, provides a structured and transparent way to document different types of uncertainty in the assumptions used to support modelling and decision making. The framework covers quantitative uncertainties in the numeral, unit and spread of the data, and qualitative uncertainties in its assessment (what does the range really mean?) and pedigree (what is the background of the assumption?).
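As a rough illustration, here is how a single assumption might be recorded in a NUSAP-ish way. The field names and the offshore wind numbers are my own inventions, not part of the published framework or guidance.

```python
# A sketch of a NUSAP-style record for one model assumption, assuming a
# simple dataclass representation (field names and values are illustrative).
from dataclasses import dataclass, field

@dataclass
class NusapRecord:
    numeral: float          # N: the headline number
    unit: str               # U: its units
    spread: tuple           # S: quantitative range (low, high)
    assessment: str         # A: qualitative judgement of what the range means
    pedigree: dict = field(default_factory=dict)  # P: background scores per criterion

capex_offshore_wind = NusapRecord(
    numeral=2500.0,
    unit="GBP/kW",
    spread=(2000.0, 3200.0),
    assessment="Range reflects disagreement between recent auction results "
               "and published engineering estimates; skewed high.",
    pedigree={"proxy": 3, "empirical_basis": 2,
              "methodological_rigour": 3, "validation": 1},
)
```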

I spent a very useful day analysing the pedigree of some of the most sensitive assumptions underpinning a widely used energy system cost optimisation model, identified by Will Usher and Steve Pye et al. This required us (as a group, because deliberation is a key constituent of qualitative uncertainty analysis) to assess such things as the quality of the proxy (how well does the assumption represent the real situation?), the empirical basis (how much and what type of data was used to determine the assumption?), the methodological rigour (how was the assumption calculated from the data?), validation (what was the assumption compared with?) and the theoretical understanding (how reliable is our theoretical understanding?). This is a really interesting process to go through, and it makes you think very carefully about what the numbers you are using really represent and how this might affect model outcomes.
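Here is a sketch of what the aggregation step might look like, assuming the commonly used 0-4 pedigree scale and entirely invented scores. The point is that spread between participants is itself useful information for the deliberation.

```python
# Aggregating group pedigree scores, assuming a 0-4 scale (0 = weakest,
# 4 = strongest). Participants and scores are made up for illustration.
import statistics

criteria = ["proxy", "empirical_basis", "methodological_rigour",
            "validation", "theoretical_understanding"]

# One row per participant in the deliberation; values are invented.
scores = {
    "alice": [3, 2, 3, 1, 3],
    "bob":   [2, 2, 4, 1, 2],
    "carol": [3, 1, 3, 0, 3],
}

for i, criterion in enumerate(criteria):
    values = [s[i] for s in scores.values()]
    spread = max(values) - min(values)
    flag = "  <- discuss further" if spread >= 2 else ""
    print(f"{criterion:25s} median={statistics.median(values)} "
          f"range={min(values)}-{max(values)}{flag}")
```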

(Un)Recognised ignorance in energy system modelling

Perhaps more interesting for me was the question that we didn’t really address in the NUSAP analysis: the empirical foundations of the whole model, not just individual assumptions. This is where the conversation got very animated! Cost-optimisation models are designed to identify the lowest cost way the whole energy system (or at least the technologies that generate and use energy) would have to change to meet pre-determined carbon budgets, i.e. what should happen. They were never designed to work out what could happen, i.e. they aren’t designed to recognise the complexity of decisions and the multiplicity of actors required to make the decisions that change the whole energy system. There is a great deal of ‘recognised ignorance’ in these models: crucial processes that shape real decision making are simply not represented. However, this type of uncertainty frequently gets overlooked, and users often over-interpret the outcomes of optimisation models as options that will get implemented if some of the assumptions in the model are replicated (for example, a carbon price). It is vitally important that this is avoided, and that the empirical basis of models, and particularly the uncertainty related to this empirical framing, is communicated alongside model outcomes.
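To make the ‘what should happen’ framing concrete, here is a deliberately tiny cost optimisation (all numbers invented): the solver dutifully finds the cheapest mix that meets demand within a carbon budget, and that is all it claims to do.

```python
# A toy cost-optimisation in the spirit of the models discussed: choose
# generation from two technologies to meet demand at least cost within a
# carbon budget. All numbers are invented for illustration.
from scipy.optimize import linprog

cost = [50.0, 120.0]        # GBP/MWh: gas, wind (assumed dearer here)
emissions = [0.4, 0.0]      # tCO2/MWh
demand = 1000.0             # MWh to be met
carbon_budget = 200.0       # tCO2 allowed

result = linprog(
    c=cost,
    A_ub=[emissions],               # total emissions <= budget
    b_ub=[carbon_budget],
    A_eq=[[1.0, 1.0]],              # generation must equal demand
    b_eq=[demand],
    bounds=[(0, None), (0, None)],
)
gas, wind = result.x
print(f"gas {gas:.0f} MWh, wind {wind:.0f} MWh, cost GBP {result.fun:,.0f}")
# The solver says what *should* happen given the numbers; it says nothing
# about whether any actor would actually build or operate this mix.
```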

Balancing complexity and representation

I think the process of analysing uncertainties in detail, and the ensuing discussion of model empirics, really helped me to work out why I find it so hard to use cost-optimisation models to answer the kind of research questions I’m interested in, like “how do we accelerate change?”, “what do we need to do now to keep options open in an uncertain future?” and “if it’s so obviously cost optimal, why isn’t it happening?”. Questions like this require a far more nuanced understanding of how people (and that includes businesses, individuals, energy companies and policy makers) make decisions and what stops them from doing seemingly desirable things. This requires a whole different approach to modelling, one able to represent this complexity without including every aspect of a complex system, which has been explored in detail (and with remarkable clarity) by Frin Bale here. There is a great deal of interest in hard-linking this detailed understanding of decisions into optimisation models to ensure that optimal scenarios are more ‘realistic’. You can see the allure of this kind of approach, but combining two already complicated models with fundamental epistemological differences poses a number of real risks to the quality of outcomes as well as their ease of use. Will McDowall and Frank Geels articulate these risks really nicely here and propose a looser coupling between modelling and socio-technical system analysis.
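Here is my own structural reading of what loose coupling could look like in practice (a sketch, not McDowall and Geels’ actual method): the model and the socio-technical appraisal stay separate and exchange only scenarios and revised constraints. The numbers and the single appraisal rule are invented.

```python
# Loose coupling as an iterative dialogue: run the model, appraise the
# result qualitatively, feed revised constraints back in. Illustrative only.

def optimise(max_wind_rate):
    """Stand-in for a cost-optimisation run: returns a wind build rate
    (GW/yr), capped by whatever constraint the appraisal has fed back."""
    unconstrained_optimum = 6.0                 # what pure cost terms pick
    if max_wind_rate is None:
        return unconstrained_optimum
    return min(unconstrained_optimum, max_wind_rate)

def appraise(wind_rate):
    """Stand-in for expert socio-technical review: flags build rates judged
    implausible given supply chains and planning (threshold is invented)."""
    return 3.0 if wind_rate > 3.0 else None

max_rate = None
for _ in range(3):                              # a few rounds of dialogue
    scenario = optimise(max_rate)
    revision = appraise(scenario)
    if revision is None:
        break                                   # judged plausible; stop
    max_rate = revision                         # feed judgement back in
print(f"final wind build rate: {scenario} GW/yr")
```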

This approach was nicely illustrated in another workshop I attended, where the social system (in this case the governance system that affects infrastructure) was represented as a nested system of narratives (things that direct or constrain decisions) and strategies (packages of rules or objectives that affect how decisions are made). Both affect what infrastructure can be built. This has an elegant simplicity that could explore how engineering-optimal solutions are constrained by reality, without trying to fully represent reality in the model. The immediate priority of this work is to ensure that ‘stupid’ results are avoided, i.e. those that wouldn’t be possible within the current system of governance. Even this approach to representing constraints poses real challenges in relation to how these narratives and strategies are formalised in a model. The longer-term aspiration is to ‘embed’ governance structures into modelling, which implies hard-linking governance and engineering systems and adding more complexity. I will follow progress with interest.
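Here is a minimal sketch of how that nested filtering might be formalised; the narratives, strategies and options are all invented for illustration, not taken from the workshop.

```python
# Narratives switch strategies on; strategies rule options in or out.
# This screens engineering-optimal choices for 'stupid' results.

options = ["new_reservoir", "demand_reduction", "desalination_plant"]

narratives = {"net_zero": True, "localism": True}   # direct or constrain decisions

def active_strategies(narratives):
    """Strategies (packages of rules) switched on by prevailing narratives."""
    rules = []
    if narratives.get("net_zero"):
        rules.append(lambda opt: opt != "desalination_plant")  # too energy-intensive
    if narratives.get("localism"):
        rules.append(lambda opt: opt != "new_reservoir")       # planning resistance
    return rules

feasible = [opt for opt in options
            if all(rule(opt) for rule in active_strategies(narratives))]
print(feasible)   # only options possible under the current governance system
```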

Modelling, uncertainty and decision making in MAADM

All this thinking has really helped to clarify the kind of modelling I want to do as part of MAADM and how I think about uncertainty. My focus has always been on the irreducible forms of uncertainty associated with decision making processes, particularly the interaction between multiple decision makers, and how these interacting processes affect the sustainable transformation of infrastructure systems. Therefore, any modelling we do will need to help explore how decision making processes affect the likelihood that a diverse set of actors will act in a way that is beneficial for transformation. I particularly want to be able to explore whether changes to the decision making environment (through removing constraints, encouraging interaction between actors, or incentivising actors in line with their decision making processes) increase the likelihood that actors will collaborate to enable system change. More to follow on this!
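For a flavour of the kind of experiment I have in mind, here is a deliberately crude sketch of my own (not the MAADM design) comparing how often a majority of actors collaborate under two decision environments; every number is invented.

```python
# A toy simulation of the question: does changing the decision environment
# raise the chance that enough actors collaborate? Entirely illustrative.
import random

random.seed(1)

def simulate(n_actors=20, incentive=0.3, interaction_bonus=0.0, runs=2000):
    """Each actor collaborates with probability incentive + a bonus from
    interacting with others; returns how often a majority collaborates."""
    successes = 0
    for _ in range(runs):
        p = min(1.0, incentive + interaction_bonus)
        collaborators = sum(random.random() < p for _ in range(n_actors))
        successes += collaborators > n_actors / 2
    return successes / runs

print("baseline environment:    ", simulate())
print("with more interaction:   ", simulate(interaction_bonus=0.25))
```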