Theories of change: easier said than done

DECI-3 Blog

By Ricardo Ramirez, DECI-3.

Rick Davies has provided us with a refreshing review of the challenges of Theories of Change and their consequences for evaluation. His report focuses on the use of theory of change for evaluation purposes. He emphasizes participatory design to create ownership, something that will sound familiar to many readers. And yet, in practice, this design process is very demanding for several reasons.

One problem is that most diagrams are weak with regard to the expected connections between events. This situation may be a legacy of the dominance of logical frameworks, where arrows often mask complex change trajectories. Davies describes these as either ‘unlabeled connections’ or ‘missing connections’, and he provides plenty of examples to illustrate the point. He also suggests that aesthetics often dominate the design, leading to oversimplification; this point is especially tricky for his fourth challenge, that of numerous pathways. The example he shares, based on participatory post-it-note brainstorming, will be familiar to many of us. When he adds the further challenge of illustrating feedback loops, we get to see examples that are complex and very difficult to understand, let alone digest, as the trajectory of change is no longer evident.

To his credit he provides us with six possible ways forward:

  1. Better descriptions of the connections: he emphasizes the value of coding to show the nature of the connections (necessary vs. sufficient), or their weight, displayed by arrows of different widths. He includes examples where an arrow is hyperlinked to a narrative that expands on the nature of the link.
  2. Better software for drawing Theory of Change diagrams: the first option is to employ software designed to assist in rendering the diagrams (DoView and Changeroo are examples). The second is network analysis packages that also allow for text annotations along connections (yEd is one example). Lastly, he mentions packages available for collaborative work (yWorks or Kumu).
  3. Basic forms of network analysis: this approach examines complicated network models to find events with high “betweenness centrality” (i.e. events that are part of multiple causal pathways). In models where all links are weighted by the strength of their expected causal influence, it is also possible to find “spanning trees” (i.e. routes through a network that represent the most influential causal pathways).
  4. Participatory network mapping: he illustrates this option with an Excel-based process where a matrix of 16 x 9 possible relationships was projected on a screen, and participants allocated points to each combination on the basis of the expected contribution of the outputs (rows) to the achievements (columns). This exercise allowed them to collectively narrow the most relevant links from 176 down to 17.
  5. Predictive modeling: this variation is software dependent and relevant for situations where stakeholders’ expectations about causal connections are complicated by a large number of outputs, and where the causal connections with several outcomes are not identifiable in advance. The process requires algorithms to detect the stronger associations, which are evaluated using a ‘confusion matrix’ that helps differentiate between sufficient vs. necessary associations. This work is done during implementation, after data has been collected, to help identify “what works”. He mentions software including BigML and RapidMiner Studio, although he is also familiar with an Excel application called EvalC3 that he has pioneered. He underlines that this approach is appropriate for ‘loose’ theories of change where a great deal of adaptation is needed, something that would fit as part of Developmental Evaluation.
  6. Dynamic models: these are the most complex and overwhelming, as there are multiple feedback loops with connections that have both a direction and a value. He mentions that these loops can be modeled using Fuzzy Cognitive Maps, which have been around since the 1980s. These models are software driven and allow one to manipulate parameters to see network-wide consequences. There are several software packages mentioned (Mental Modeler, FCMapper and FCM Expert, to name a few). He closes by indicating that the struggle continues to represent a complex reality with tools that are better suited for merely complicated situations.
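The idea behind the third option (“betweenness centrality”) can be sketched in a few lines of plain Python: count how many causal pathways, from root causes to the final outcome, pass through each intermediate event. The mini theory-of-change graph below is purely illustrative; the event names are our own assumptions, not examples from Davies’ report.

```python
from collections import defaultdict

# Hypothetical theory-of-change graph: each key lists the events it feeds into.
edges = {
    "training": ["skills"],
    "skills": ["practice_change"],
    "funding": ["practice_change"],
    "practice_change": ["outcome"],
    "advocacy": ["policy"],
    "policy": ["outcome"],
    "outcome": [],
}

def all_paths(graph, node, goal, path=()):
    """Enumerate every simple path from `node` to `goal`."""
    path = path + (node,)
    if node == goal:
        yield path
        return
    for nxt in graph[node]:
        if nxt not in path:            # avoid revisiting (guards against loops)
            yield from all_paths(graph, nxt, goal, path)

# Root causes: events that no other event feeds into.
roots = [n for n in edges if all(n not in targets for targets in edges.values())]

# Count how many root-to-outcome pathways each intermediate event sits on.
counts = defaultdict(int)
for root in roots:
    for p in all_paths(edges, root, "outcome"):
        for event in p[1:-1]:          # intermediate events only
            counts[event] += 1

# The event on the most pathways is a prime candidate for evaluation focus.
print(max(counts, key=counts.get))     # prints "practice_change" here
```

Events with a high count are the ones whose failure would break the most change trajectories, which is exactly why Davies flags them as worth identifying.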
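The ‘confusion matrix’ logic in the fifth option can also be made concrete. The sketch below uses made-up cases (not data from the report) to show how counting true positives, false positives and false negatives separates a roughly sufficient condition from a roughly necessary one.

```python
# Each case: (condition_present, outcome_achieved). Illustrative data only.
cases = [
    (True, True), (True, True), (True, False),
    (False, True), (False, False), (False, False),
]

tp = sum(c and o for c, o in cases)        # condition present, outcome achieved
fp = sum(c and not o for c, o in cases)    # condition present, no outcome
fn = sum(o and not c for c, o in cases)    # outcome achieved without condition

# A condition approaches SUFFICIENT when it rarely occurs without the outcome
# (few false positives), and approaches NECESSARY when the outcome rarely
# occurs without it (few false negatives).
sufficiency = tp / (tp + fp)   # share of condition-present cases with the outcome
necessity = tp / (tp + fn)     # share of outcome cases that had the condition

print(f"sufficiency={sufficiency:.2f}, necessity={necessity:.2f}")
```

With real monitoring data, an algorithm would search many candidate conditions (or combinations) for those scoring high on one or both measures, which is the “what works” screening Davies describes.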
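Finally, the Fuzzy Cognitive Map iteration behind the sixth option can be sketched without any specialist software: concepts hold activation levels, signed weights carry influence (including feedback loops), and the network is updated until it settles. The concepts and weights below are assumptions for illustration, not values from the report.

```python
import math

concepts = ["funding", "training", "practice_change", "outcome"]

# W[i][j]: signed influence (-1..1) of concept i on concept j.
W = [
    [0.0, 0.5, 0.3, 0.0],
    [0.0, 0.0, 0.7, 0.0],
    [0.0, 0.0, 0.0, 0.8],
    [0.0, 0.0, -0.2, 0.0],   # a feedback loop: outcome dampens practice_change
]

def step(state):
    """One FCM update: each concept takes the squashed sum of its inputs."""
    nxt = []
    for j in range(len(state)):
        total = state[j] + sum(state[i] * W[i][j] for i in range(len(state)))
        nxt.append(1 / (1 + math.exp(-total)))   # sigmoid keeps values in (0, 1)
    return nxt

state = [1.0, 0.0, 0.0, 0.0]   # activate "funding" and watch the effects ripple
for _ in range(50):            # iterate until the network settles
    state = step(state)

print({c: round(s, 2) for c, s in zip(concepts, state)})
```

Changing a weight or an initial activation and re-running shows the network-wide consequences Davies mentions; dedicated packages such as Mental Modeler wrap this same loop in a visual interface.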

In closing, he quotes a HIVOS study and asks why so little progress has been achieved, then offers some explanations. He posits that we are still reliant on linear representations of hugely complex pathways. He also suggests that our drive to simplify produces communicable Theories of Change, while what we often need are evaluable ones. The latter require more precise questions that include testable hypotheses; these are more challenging to propose, and yet they are more necessary for adaptive programming.

His main message is that we need more testable Theories of Change, both for evaluation purposes and for supporting projects that are adaptive. This view resonates with our fourth blog, which refers to the need for more rigour in Theory of Change development.

Rick Davies’ report will also appear as an article in the October 2018 issue of the Journal of Development Effectiveness.