Ricardo Ramírez and Dal Brodhead
In this blog, we draw on an article by Steven Teles and Mark Schmitt that appeared in the Stanford Social Innovation Review (2011), entitled “The elusive craft of evaluating advocacy”, and on an article we wrote in 2015 about practical wisdom, entitled “Can participatory communication be taught?”
We refer to “policy impact” as a variation on the term “advocacy”, because we are collaborating with research organizations that most often produce evidence that other partners utilize in advocacy efforts. However, the ultimate result is the same: policy impact, and especially policy implementation.
Teles and Schmitt emphasize that “…The political process is chaotic and often takes years to unfold, making it difficult to use traditional measures to evaluate the effectiveness of advocacy organizations.” (p. 39)
This assertion is a reminder for us that, while conventional evaluation approaches may not work in this constantly evolving area, we have alternatives to explore, such as Developmental Evaluation and Outcome Mapping/Outcome Harvesting. However, the quote’s reference to a long time scale is a sobering reminder that short-term projects and the longer-term political process are often not well matched.
Further, the authors indicate that: “… sophisticated tools are almost wholly unhelpful in evaluating advocacy efforts. That’s because advocacy, even when carefully nonpartisan and based in research, is inherently political, and it’s the nature of politics that events evolve rapidly and in a nonlinear fashion, so an effort that doesn’t seem to be working might suddenly bear fruit, or one that seemed to be on track can suddenly lose momentum.” (p. 39 – our emphasis)
They go on to say that the process is complex, with “foggy chains of causality” – something that a Theory of Change should capture to drive home this underlying process, “…because advocacy by its nature is complicated and its impact often indirect” (p. 40).
In other words, the policy landscape is the context and the changes, when they occur, often happen in unpredictable ways: “Practices that once worked beautifully get stale once the losers figure out how to adopt the winner’s strategy or discover an effective counterstrategy” (p. 41).
Most interesting to us is their focus on adaptation: “Successful advocacy efforts are characterized not by their ability to proceed along a predefined track, but by their capacity to adapt to changing circumstances” (p. 40). If advocacy strategy is an area where adaptive management is required, then our focus on ‘practical wisdom’ makes sense.
The concept of practical wisdom is based on Aristotle’s notion of phronēsis. For non-philosophers, it is made accessible in the book ‘Practical Wisdom’ (Schwartz & Sharpe 2010), as well as in an engaging TED Talk of the same title.
“The term refers to the acquired skill of knowing what to do when facing unique circumstances (it is not a skill or a craft and cannot be taught). Phronēsis emerges from experience, from many incidents of trial and error. In the book, Schwartz and Sharpe offer numerous examples of people making instant decisions by some form of intuition. The notion of ‘practical wisdom’ is far from the notion of ‘best practices’ that some bureaucracies cherish. Best practices are akin to recipes, where there is the assumption that many factors are known and predictable to the extent that similar decisions will be warranted (Courtright 2004). Best practices suggest replication, while practical wisdom suggests uniqueness and tailoring to each moment. One could argue that best practices have an important role to play in some circumstances (such as safety checklists in the health profession). Best practices could also be seen as part of formal job descriptions; however, they fall short of capturing the essence of experience-based decision-making that is highly desirable for many professions.
What practical wisdom embodies is a [intuitive and rational] capacity to make [prompt and wise] decisions based on experience, [where the unique character of the particular circumstances is immediately discerned, and an appropriate determination or decision is made]. This applies to the fields of participatory research, communication for development, and evaluation (Ramírez & Quarry 2010). It would likely apply to several others where individuals need to respond to urgent requests on the spot. As noted, we acknowledge that there are a few individuals who seem to have been born with practical wisdom (we are sure you know a few yourselves). We can think of a few colleagues whose facilitation skills emerge naturally, even in the most challenging situations. This is similar to the notion of innate networkers – or ‘mavens’ – a term that Gladwell uses to refer to people who have a natural information-sharing instinct (2002). However, in most cases we are talking about most people (ourselves included) who require a concerted effort to build their skill set and to gain confidence by experimenting to fine-tune it.” (Ramírez et al., 2015: 105-107).
Practical wisdom gives a name to a process of adaptation that is unique and most often not replicable; we feel that this reflects the nature of policy influence, where impact is difficult to predict.
“The most effective advocacy and idea-generating organizations…are not defined by a single measurable goal, but by a general organizing principle that can be adapted to hundreds of situations” (p.41).
Teles and Schmitt emphasize that: “…The key is not strategy so much as strategic capacity: the ability to read the shifting environment of politics for subtle signals of change, to understand the opposition, and to adapt deftly.” (p. 41, our emphasis)
Reading subtle signals is a behaviour that is reminiscent of ‘practical wisdom’: “…the proper focus for evaluation is the long-term adaptability, strategic capacity, and ultimately influence of organizations themselves” (p.42).
For Teles and Schmitt, evaluating policy impact is a craft. They refer to it as “…an exercise in trained judgment—one in which tacit knowledge, skill, and networks are more useful than the application of an all-purpose methodology. Evaluators must acquire and accurately weigh and synthesize imperfect information, from biased sources with incomplete knowledge, under rapidly changing circumstances where causal links are almost impossible to establish.” (p. 43)
The challenge in our practice is to help our partners select evaluation uses and Key Evaluation Questions that capture this elusive set of intuitive behaviours. However, there is, according to the authors “…a natural temptation to formalize this process in order to create at least the appearance of objective criteria, but it is far better to acknowledge that tacit knowledge and situational judgment are what really underlie good advocacy evaluation, and to find evaluators who can exercise that judgment well. It’s the evaluator, rather than the formal qualities of the evaluation, that matters” (p.43). In the DECI project, this means that we must recognize practical wisdom, both in terms of what to evaluate, and the skill set we wish to instill.
Courtright, C. (2004). Which lessons are learned? Best practices and World Bank telecommunications policy. The Information Society, 20(5), 345-356.
Gladwell, M. (2002). The tipping point: How little things can make a big difference. Boston: Back Bay Books.
Patton, M.Q. (2018). Principles-focused evaluation: The guide. Sage.
Ramírez, R., Quarry, W. & Guerin, F. (2015). Community note: Can participatory communication be taught? Finding your inner phronēsis. Knowledge Management for Development Journal, 11(2), 101-111. http://journal.km4dev.org/index.php/km4dj/article/viewFile/286/369
Ramírez, R. & Quarry, W. (2010). Communication for another development. Development, 53(1), 54-57.
Schwartz, B., & Sharpe, K. (2010). Practical wisdom: The right way to do the right thing. New York: Riverhead Books.
Teles, S. & Schmitt, M. (2011). The elusive craft of evaluating advocacy. Stanford Social Innovation Review, Summer.