
System Evaluation Theory

A Blueprint for Practitioners Evaluating Complex Interventions Operating and Functioning as Systems


Click here to view a recent blog post about the book from the European Evaluation Society.

Click here to view a recent System Evaluation Theory book chat with ARTD Consultants.

SYSTEM EVALUATION THEORY: PRESS

SYSTEM EVALUATION THEORY: BLOG POSTS

There is evidence of systems thinking starting to surface at the federal level – Alleluia!
October 30, 2023

It only took 21 years, but we finally have the first evidence that federal agencies are understanding the limitations of reductionist thinking... well, at least somewhat!


On a recent orientation call, the Centers for Disease Control and Prevention (CDC) shared its evaluation strategy with new grant recipients and their respective external evaluators (i.e., me).  The CDC shared that the “program” is adopting an integrated approach.  And unlike other initiatives that use the systems jargon but still operate as silos, this program is truly integrated, dovetailing with another initiative in intricate ways.  At last, evidence of systems thinking is creeping into the dark and dusty corners of the offices of federal program designers!


The CDC then shared its evaluation strategy... the program logic model.  Initially my heart sank with disappointment: here we go again, a disconnect between program design and evaluation.  BUT then a glimmer of light, when the lead evaluator and presenter stated that “recipients are only responsible for immediate and intermediate outcomes, not the long-term outcomes”.  Say what?  I first started publishing about this problem in 2002.  In the opening chapters of my book I explain why holding programs accountable for long-term outcomes is a strategy fraught with failure.  I show, by way of a context map, the myriad conditions contributing to a given social problem.  I then show how any one program targets only a small subset, or thread, of the conditions contributing to the problem.  Looking at the context map, it becomes immediately obvious why holding any single program to account for change in long-term outcomes is a fool’s errand and only sets the program up to fail.


In recent and upcoming talks, I’ve strived to help my fellow evaluators understand the “why” behind the current state of program design and evaluation, linking it directly to reductionist research.  In short, reductionist research focuses on getting at the root cause of a particular issue.  In the example above, reductionist-trained researchers discovered that the belief/reality that nutritional food choices are expensive is one root cause of obesity.  The program then mirrors the research; in this case, perhaps a media campaign targeting the belief that nutritional food choices are expensive.


In turn, the evaluation strategy mirrors the reductionist research, and a logic model is developed to assess all the outcomes in the causal thread.  My mentor Charles Huntington (God rest his soul) once testified to Congress that holding programs accountable for the immediate outcomes, those within their control, is reasonable.  However, as you move from intermediate to long-term outcomes, the control any one program has to influence change in those outcomes diminishes, and therefore individual programs should not be held responsible for demonstrating change in long-term outcomes.


Who, then, has the responsibility for evaluating long-term outcomes?  In my book, I lean heavily on the work of Friedman and argue that, from a systems thinking perspective, responsibility for evaluating those outcomes must reside at the next-highest system level.  What the heck does that mean?  Well, in the case of the CDC example that stimulated this blog, they got it exactly right.  The CDC funds numerous chronic disease programs targeting numerous root causes.  It makes sense that each program is held to account for the immediate outcomes, the root causes it targets, and that the CDC evaluate the overall health of the population to determine the synergistic impact of its programs.


The CDC should be applauded for making this first stride out of the darkness.  Other agencies, like the National Institutes of Health (NIH), still operate in the evaluation darkness, funding truly integrated programs but mandating that individual components be evaluated independently.  Ridiculous.

The next step, of course, is to supplement, or perhaps abandon completely, the logic model strategy for integrated programs.  Why? The logic model approach fails to capture the interdependencies between the program components necessary for success; it assumes each component operates independently.  It also fails to recognize that the intervention components are trying to accomplish something that no component can achieve on its own: the emergent property.
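To make the contrast concrete, here is a minimal sketch in Python.  The component names, outcomes, and structure are invented purely for illustration; they are not taken from the CDC program or from my book.  The point is simply that a component-by-component logic-model review touches only each component’s own outcomes, while the interdependencies and the emergent property sit outside its field of view.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    immediate_outcomes: list[str] = field(default_factory=list)

@dataclass
class Intervention:
    components: list[Component]
    # Pairs of components whose interaction must be evaluated.
    interdependencies: list[tuple[str, str]]
    # The property no single component can produce on its own.
    emergent_property: str

# Hypothetical integrated program, for illustration only.
program = Intervention(
    components=[
        Component("media campaign", ["belief that healthy food is affordable"]),
        Component("school meals", ["healthier meals served"]),
        Component("grocery vouchers", ["healthy food purchased"]),
    ],
    interdependencies=[
        ("media campaign", "grocery vouchers"),
        ("school meals", "grocery vouchers"),
    ],
    emergent_property="population-level healthy eating",
)

# A siloed, logic-model-style review covers each component's own outcomes...
for c in program.components:
    print(f"{c.name}: evaluates {c.immediate_outcomes}")

# ...but never reaches these, which is what a systems approach adds:
print("Unevaluated interdependencies:", program.interdependencies)
print("Unevaluated emergent property:", program.emergent_property)
```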


One important takeaway from the CDC experience is that funders are slowly becoming more sophisticated in their designs, designing programs with multiple components that are intended to work together.  Evaluators need to use evaluation approaches that are fit for purpose; evaluation approaches grounded in systems thinking are needed to evaluate interventions designed using systems thinking.  I know my book will help practitioners build those skills.


- Ralph Renger

Author, System Evaluation Theory: A Blueprint for Practitioners Evaluating Complex Interventions Operating and Functioning as Systems

The RICO Act: A Great Example of the Power of Systems Thinking in the Law
September 19th, 2023

Thanks to the indictment in Fulton County, Georgia, of Donald Trump and 18 others, the Racketeer Influenced and Corrupt Organizations Act (RICO) has rapidly permeated the American lexicon.  As an educator and system evaluator, I am always looking for ways to explain systems thinking concepts in practical terms.  After all, many educators who gravitate to systems thinking turn people off with their academic speak and dissuade people from embarking on their systems thinking journey by referring to systems thinking as “wicked” and “messy”, and by pontificating about what constitutes the difference between complex and complicated, arguments that have absolutely no value for evaluators who are actually doing evaluations.  I digress.

 

Systems thinking is, literally, to think like a system.  The defining feature of a system is that an essential property emerges from the interdependence of its parts. 

 

The RICO statute embraces the essence of the system definition and brings to light, in a very practical way, the power of systems thinking.  The parts in this case are the 19 codefendants.  The 19 are accused of coordinating together and, more specifically and importantly, of organizing together.  RICO’s emphasis on organizing together is an example of the law understanding interdependence.  In my experience as an evaluator, most systems thinking neophytes can grasp the concept of interdependence rather quickly, and how it is a more nuanced and deeper concept than simply a “relationship”.  The harder concept to grasp is that of emergence.  The emergent system property is the “thing” that surfaces, as it were, as a result of the interdependence between parts; no single part of the system can produce it independently.  In the RICO law, what emerges is the conspiracy.

 

What makes RICO so powerful is that otherwise independent acts, not prosecutable by themselves, can now be grouped together to show a pattern of organized behavior to commit a conspiracy to overthrow an election.  A codefendant who makes a call to a governor questioning election results is just making a call.  A lawyer lying about voting machine accuracy in select states may just be exercising their right to free speech.  Someone intimidating an election worker might just be an as&%^e.  Someone creating a fake list of electors is simply an angry citizen engaged in a theoretical exercise.  None of these acts would likely be considered a crime on its own, and even if it were, it would be difficult to prosecute.  However, when looked at holistically, through a systems thinking lens, every act becomes key in executing the conspiracy.  The RICO Act is steeped in systems thinking and provides prosecutors a way to take what would otherwise be seen as independent, disparate, pardonable acts and show the big picture to a jury.

 

Just as in evaluation, there are many within the law who obviously struggle with the premise that underpins the RICO Act.  The judge who is allowing some of the codefendants to sever their cases justifies doing so, in part, because he believes it’s logistically impossible to prosecute everyone.  He is a great example of a single-loop thinker. 

 

A judge familiar with systems thinking would understand that the collective evidence applies to all codefendants, and thus it is far more efficient to present the evidence just once rather than 19 different times.  Codefendants who want to sever their case from the whole are hoping to make the argument that “their little part” wasn’t a crime.  They want their act to be looked at independently.  But under RICO it doesn’t matter who was responsible for what part of the conspiracy; all the acts were necessary to fulfill the conspiracy, and therefore all face exactly the same charge: they are interdependent.  It seems the judge lacks an understanding of the basic premise underpinning RICO, especially how interdependence and emergence are related to each other.

 

Might the RICO Act, steeped in systems thinking, be just what saves American democracy?


- Ralph Renger

Author, System Evaluation Theory: A Blueprint for Practitioners Evaluating Complex Interventions Operating and Functioning as Systems

A Video is Worth a Thousand Words: What the Reductionist Approach to Evaluating Complex Interventions Misses
August 28th, 2023

In my many workshops and in my book, I spend time explaining why the Theory-Driven Evaluation (TDE) reductionist approach to evaluating complex interventions misses evaluating key intervention design features.  I use slides like the ones below to show how reductionists scaffold logic models to cover each component of a complex intervention.


Slide 1:  A complex intervention consisting of many components.


Slide 2:  Reductionist-driven logic models are developed for each component of the complex intervention.  This is called coupling or scaffolding and is an attempt to get what I term “evaluation coverage”.


Slide 3:  Long-term outcomes are developed for each component of the complex intervention.

I then use the following three slides to depict what the logic model approach misses in evaluating the complex intervention.


Slide 4: From an outcomes perspective, the reductionist approach fails to evaluate the emergent property: that is, the outcome all the components are working together to achieve and that none can achieve independently.  In this example, all components are needed, and depend on each other, to improve the quality of life of residents living in public housing.


Slide 5:  From a process evaluation perspective, the reductionist approach views the delivery of each component of the complex intervention as siloed and misses evaluating the interdependencies (the blue connecting lines) between components.


Slide 6: This is another way of depicting the interdependencies that a reductionist approach using logic models fails to evaluate.

However, I think this 48-second video does a better job than any workshop slides could of capturing the limitations of logic models in evaluating complex interventions.  Each window represents a logic model and the siloed, incomplete picture it provides.  I shot this video at the cabin I am building in Wyoming and didn’t want to scare off the wildlife, so you will need to turn the VOLUME UP. 


- Ralph Renger

Author, System Evaluation Theory: A Blueprint for Practitioners Evaluating Complex Interventions Operating and Functioning as Systems

Our discipline lacks the spine to deal with the reductionist researchers hijacking evaluation
July 26th, 2023

Reductionist researchers add to the world’s knowledge base by focusing on cause-and-effect relationships. They operate from the assumption that every effect has a cause.  For example, the reason I am writing this blog (effect) is that I’m disappointed, angry, you pick the adjective, with the inability of our discipline to become a profession (cause).  Of course, in reductionist research the cause identified may itself be an effect for which another cause must be identified.  For example, my disappointment itself stems from evaluation associations coddling and genuflecting at the feet of researchers. 


This process of working upstream continues until a root cause is found.  Randomized designs with control groups, such as the RCT, are useful for isolating cause and effect and for controlling threats to internal validity, so as to increase the probability that the cause-and-effect relationship being observed is indeed an accurate reflection of reality.


Presumably, with successful replication of the findings, and on the assumption that the knowledge will do more good than harm, the intent is to apply what has been learned to bring good to the world.  The bridge from knowledge to application can take the form of a program, policy, strategy, activity, etc., which I collectively refer to as some form of intervention. 

 

Now it’s time for evaluation, and here is where the problem starts for evaluators.  So often the same methods and designs are required by funders to evaluate the intervention.  For example, I recall being asked to evaluate a Department of Education (DOE) intervention that integrates arts and math curricula.  The DOE claimed they had over 20 years of research to support the benefits of integrating arts and math.  The DOE was funding hundreds of implementation projects nationwide.  BUT the DOE required that the evaluation use a quasi-experimental, nested design, randomly assigning some school districts (and by extension the students in those districts) to control and intervention conditions. WTF? Why? 

 

When I questioned the DOE, they gave the standard reply: “RCT is the gold standard”.  I countered that while the RCT is the gold standard when the purpose is to develop knowledge, it is unethical and a waste of taxpayer resources to keep insisting on an evaluation approach whose purpose is to replicate 20+ years of knowledge. Their reply was “crickets”. 


After all, I continued, if you had enough confidence in the research findings to build an intervention that you are taking to a national scale, then why do you need to continue to validate that knowledge?  Should we not be focused on how well the intervention can be implemented in different contexts? 

Their reply was also textbook: “Well, you must implement the intervention without variation to the protocol; if you deviate from the standardized protocol, if you implement without fidelity, then the intervention likely will not work.”  Again, WTF?  Why?

 

Anyone working in the field knows that it is not only impractical, but likely impossible, not to mention unethical, to adhere to some cooked-up-in-the-lab “best practice” intervention that isn’t working for its participants.  For example, if the process evaluation of the arts-math curriculum implemented in a Mexican-USA border community shows that kids need the curriculum translated, then you should recommend that immediate changes be made to the standardized curriculum protocol.  And, ethically, such a recommendation should be implemented.  Why?  For one, we should not be treating everyone in an intervention equally; we need to treat them equitably.  To do so means making adjustments to protocols to maximize intervention benefits for all participants.   Further, not making ongoing improvements to a protocol means we are knowingly limiting the success of the intervention at the expense of those it is intended to benefit.


So how did we get to such a ridiculous place in evaluation?  Michael Scriven once said that our field was “hijacked” by the researchers.  I couldn’t agree more.  It is the reductionist researchers who have the ear of those commissioning evaluations.  Commissioners, often administrators and politicians, don’t understand the difference between research and evaluation.  In their defense, that’s not their problem; it’s our problem.  Everything to them is just small “r” research, as in “we want to research what is happening”.  I’m also certain that many of the researchers who do have their ear either do not understand the difference or simply don’t care to note it because they are protecting their territory; to acknowledge that an evaluator is better suited than a researcher to provide the information needed to improve the delivery of an intervention or to establish its merit and worth, well, that would mean working themselves out of a job.


I’ll even go one step further and suggest that many view evaluation as some type of pseudo-science.  Their attitude towards us reminds me of how quantitative researchers belittled qualitative researchers when they first came on the scene.  They view evaluators as the “weak sister” in every pejorative, disrespectful, condescending, and sexist sense of the phrase.  They fail to understand the different purposes research and evaluation serve.  One purpose of evaluation is to provide information to assist decision-making.  That can require different approaches than those needed for knowledge development.

 

Historically, I believe the “root cause” of the problem can be traced back to the earliest days, when pioneers like Suchman recognized the need to use different approaches when trying to apply knowledge in the form of an intervention and tried to differentiate this purpose by calling it evaluative research, what we now call evaluation.

 

The current stance of every evaluation association is that everyone should be welcomed into the evaluation sphere.  That’s simply code for “we will prostitute ourselves for membership fees so we can pay administrative salaries”.

 

I’m all for inclusivity, but not when the wolf (researcher) is in sheep’s (evaluator’s) clothing.  And I fear evaluators are just that, sheep, and the wolves we include are killing our discipline.  I don’t go to an evaluation conference to learn about the evaluation findings of a cancer study.  Those are important findings that should be presented at a cancer conference, not at an evaluation conference.  Similarly, do we really need a stats topical interest group where they debate the nuances of error terms of sophisticated research designs that never get implemented in practice?  Please do not misunderstand me: their work is important, and I put myself through college as a statistician.  But these nuanced statistical arguments are best shared with other statisticians within their own profession. 


I go to an evaluation conference to learn about how to do an evaluation.   These reductionist-trained researchers are moving evaluation onto the wrong trajectory.  They are taking away valuable conference spots where true evaluators could present and learn from each other about doing evaluation.  This is why many of my colleagues have stopped going to “evaluation” conferences.

 

We need to purge ourselves of the reductionist researchers if we are to finally transition from a discipline to a profession.  Evaluation associations need to step up and start becoming what they purport to be: evaluation associations.  They need to grow a spine and stop coddling the researchers who insist it’s their way or the highway. 

 

I think Frank sums up how I feel best in his classic song “My Way”:

 

“For what is a man (me, an evaluator), what has he got?

If not himself, then he has naught

To say the things he truly feels (researchers are killing our field)

And not the words of someone who kneels (evaluation associations, funders)

Let the record show I took all the blows

And did it my way”


- Ralph Renger

Author, System Evaluation Theory: A Blueprint for Practitioners Evaluating Complex Interventions Operating and Functioning as Systems

Rules vs. Principles and Context
February 27th, 2023

I was recently at a conference with Michael Quinn Patton in Ottawa, Canada, learning Developmental Evaluation from the master himself.  Why anyone would plan a conference in Ottawa in February is beyond me; it was -20°C plus wind chill. 


As part of our training, Michael was discussing how, in the absence of a set evaluation strategy, evaluators might choose to rely on principles to guide their work.  He, of course, has written a book on that topic too. 


Michael went on to explain the difference between a principle and a rule.  Little did I know that the night after the conference, a fire alarm would go off in my hotel, an event that would perfectly illustrate not only the difference between a principle and a rule, but also how important context is in making evaluative judgements as to which standard, the rule or the principle, might be at play.

 

When I first checked into the hotel, the room next to mine was very noisy; a family with two children was staying there.  I surmised that the hotel was hosting this family until more permanent and suitable living accommodations could be found.  The two children were screaming all the time.  I was frustrated because it was impacting my sleep, but not angry.  I’m sure the parents were having a tougher time than I was.  Having my own children, I had some empathy.  They too must be tired; they too must be stressed, I thought.

 

On the second night of my stay, the fire alarm went off around 1:00 am.  I had my usual first reaction: false alarm, it’ll stop.  But it didn’t.  After the tenth request to evacuate, I thought I had better act.  The RULE, of course, is to leave your personal belongings behind.  I didn’t do that.  I packed my suitcase and headed down the stairwell. 

 

As I stood outside in the freezing temperatures, I could see people passing judgement on me.  No one, except me, had a suitcase.  Their looks of disgust said it all: “What a jerk for taking his suitcase”.  As I stood there, I saw the family exit with their two children, nothing to protect them from the cold.  Luckily, within a minute they were able to sneak back in through a side door for warmth, just before I could open my suitcase and hand them the comforter I had packed. 

 

You see, I was operating on a principle dictated by the context, call it empathy, kindness, or whatever, not a rule. 


- Ralph Renger

Author, System Evaluation Theory: A Blueprint for Practitioners Evaluating Complex Interventions Operating and Functioning as Systems

What a colonoscopy, kidney stone, and peanut allergy taught me about evaluating complex interventions operating as systems
January 24th, 2023

As I enter the mid-to-later stages of my life, I am confronted with the realities of disease.  One joy of getting older is the colonoscopy.  As those who have had the procedure will tell you, it’s the preparation that really sucks.  Nothing like being wed to the porcelain throne for a few days.  But I’m not complaining. I am fortunate and privileged to live in a country where I have access to preventive health services, and the good news is that my colon is cancer-free.  And there is a certain irony that a colonoscopy has me blogging about evaluating system waste.


Last night, within just a few minutes, I found myself on my knees in excruciating pain, crying like a baby and pleading to God to take me so the pain would end.  Our neighbors came over and watched our youngest daughter as my wife rushed me to the hospital emergency department.  I’m not going to waste your time talking about the lack of empathy in the hospital setting.  I’m convinced after a lifetime of interacting with the health care system that it lacks compassion.  Perhaps that should be a defined emergent property to which all health care systems should aspire.


Last year, I was working with food health inspectors evaluating restaurant food safety.  As we were evaluating the establishment, I noticed that a food order was being repeated over and over as it passed down the line.  Why would they bother doing that, I thought, when they can all read the order on the electronic screen?  As I was chatting with Aaron, the owner of Snakes and Latte, he let me know that it was a safety precaution.  A customer had a peanut allergy, and this was their way of ensuring that everyone on the line was aware and that this important piece of information was not going to be lost amid the other orders.


So what do these three experiences teach us about evaluating complex interventions operating and functioning as systems?  Well, as I was being passed from one health care provider to the next during my colonoscopy, I was asked for my name and date of birth each time.  At the time I wondered why.  And as I lay there in the hospital, hopped up on morphine to help manage my kidney stone pain, I was annoyed that every time the staff came into my room they asked me to confirm who I was.  I was miffed: “Don’t they know that I’ve been in this same room for five hours?” I thought.  The system evaluator in me was upset with the redundancies.


Then my experience with Aaron came flooding back to me, and it all became clear.  When evaluating complex interventions, I am always looking to evaluate interdependencies with the goal of making the interactions between components more efficient.  Reworking steps is a sign of system waste.  But what I learned is that reworking process steps is an indicator of a potential problem.  What we, as evaluators, need to do is dig deeper to understand whether the process being reworked is actually wasteful (for example, resulting from poorly operating feedback loops) or whether it is a necessary redundancy. Especially in systems with many fast-moving parts, like a hospital, I’m thankful that these redundancies are in place for my safety.  Getting the wrong medication or the wrong procedure might result in something more serious than a wasted process step: a wasted life.

 


- Ralph Renger

Author, System Evaluation Theory: A Blueprint for Practitioners Evaluating Complex Interventions Operating and Functioning as Systems

What the war in Ukraine makes clear about the motivation underpinning system change
January 9th, 2023

In evaluating complex interventions, I often find myself making recommendations targeting incremental change.  My thought was that having some “small wins” can help build confidence in the evaluation and hopefully lead to Lorenz’s butterfly effect (When Lorenz Discovered the Butterfly Effect | OpenMind (bbvaopenmind.com)).  But the war in Ukraine has made me question my incremental change philosophy and the motivation underpinning system change. 

 

With the stroke of a pen, policy makers made major changes to an immigration system that for decades they claimed needed fixing.  So why is it that the same Polish policy makers who, less than a year ago, erected barbed wire and shot at war-torn Syrian immigrants now welcome in Ukrainians? 


Polish soldiers keeping out Syrian refugees

So why is it that the USA policy makers who erected barbed wire, and who continue to force immigrants who traveled by foot for thousands of miles to wait in tents in freezing temperatures and to be victimized by border gangs, welcome Ukrainian immigrants across the same Mexican-USA border?  It was gut-wrenching for me to watch a Nicaraguan mother and her child, who had been waiting a year in a tent just meters from a better life, watch a Ukrainian mother cross unimpeded with her child into the USA.

Border crossings scene in El Paso, TX

The circumstances of the immigrants seeking entry into Poland (for that matter, all of Europe) and the USA are similar.  All are fleeing war.  So why the differential treatment?  The only discernible difference I can see is that the Ukrainian refugees are white. 

 

The iceberg profile from the Haines Centre for Strategic Management can help shed some light on why major system change can happen in the blink of an eye in some circumstances and not in others.

Everything in a system comes from the base.  The base of many systems is rooted in racism.  There, I said it. But I didn’t need to say it.  We just need to listen to what leaders of white countries are saying openly.  For example, Trump kicked off his presidential campaign by appealing to nationalism: “They’re sending people that have lots of problems, and they’re bringing those problems with us. They’re bringing drugs. They’re bringing crime. They’re rapists. And some, I assume, are good people,” Trump said in his announcement speech in June 2015.  The racism, coded as “nationalism”, isn’t confined to the USA.  “The Hungarian Fidesz government and, in particular, Viktor Orbán are known for their racist sentiment toward ‘the others’. Prime Minister Orbán more than once made clear that he takes an openly racist stance towards anyone who is not from Europe or a ‘Western culture’” (“Viktor Orbán’s racist rhetoric and his propagation of the ‘great replacement theory’”, Quo Vademus, quo-vademus.org).

 

To be fair, not all of those who wield power are racist, and some may not be aware of their biases and prejudices.  Nevertheless, that’s not an excuse. Unconscious or not, it’s real and it exists. 

 

In Europe and the USA, the power base is white.  The structures, that is, the immigration policies in Europe and the USA, to no one’s surprise represent the values of those who write them.  The processes are the steps to implement these policies. We can continue to make incremental changes, but not until we acknowledge that the base of many of our systems is racist can meaningful changes to the structure of complex interventions be made.

The war in Ukraine makes two things obvious.  When there is motivation for major system change, it can happen quickly.  And the motivation for change is rooted in culture, and the culture of many systems is white.  I see the exact same problem playing out in the medical systems I evaluate.  Non-whites do not receive the same level of access to medical care nor the same quality of care.  The culture, structure, and processes of the medical system are based on “one size fits all”.  That one size is a white size.  “But we aren’t racist, we are treating everyone the same,” they say. It may be equal, but it isn’t equitable.


- Ralph Renger

Author, System Evaluation Theory: A Blueprint for Practitioners Evaluating Complex Interventions Operating and Functioning as Systems

Don’t judge a book (intervention) by its cover (label)
December 24th, 2022

In my book I make a point of using the term intervention, rather than program or system.  I feel it is better to simply speak about interventions in terms of their level of complexity, rather than label them as a program or a system.  Why is using the term intervention advantageous?

Many interventions are labeled as systems, like the transportation system, health care system, tax system, etc.  However, these “systems” are often nothing more than a “bunch of stuff” (Meadows, 2008).  On the other hand, I have come across several interventions labeled as “programs”, like the HUD HOPE VI program, that upon closer inspection have many interdependent components that are intended to coordinate and collaborate to achieve a higher function:  the very definition of a system.

As an evaluator, you can’t assume that because an intervention carries the “system” label it is in fact operating and functioning as a system.  Similarly, if the intervention carries the “program” label, you can’t assume it isn’t operating as a system.  For this reason, I find it more practical to refer to interventions generically and then gauge the correct evaluation approach according to their level of complexity.  If an intervention has very few moving components and they are linearly related, then an evaluation using logic models is likely good enough.  As I write in my book, there is no need to overcomplicate evaluations.


However, if an intervention has many moving parts, then we need to evaluate whether the parts are actually interdependent and working toward achieving a function that no component can achieve independently (i.e., the emergent property).  This is because although interventions with many moving parts have the potential to operate and function as a system, not all do. 


In my book, I discuss how a necessary first step in any evaluation is to take the time to understand the intervention design.  For those interventions designed with many components intended to work interdependently, I then explain how to first define those interdependencies.  Once the component interdependencies are defined, I explain how to evaluate them by applying several system principles (e.g., feedback loops, cascading events, reflex arcs).  The evaluation of interdependencies will confirm whether an intervention intended to operate and function as a system is in fact doing so.  If so, then investing in evaluating emergence (the higher functional purpose of the intervention) makes sense.  If not, then evaluation recommendations focus on operational-level improvements.
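As a rough sketch of that sequence, the decision logic might look something like the Python below.  The field names, the pass/fail stand-in for the system principles, and the example components are my own simplifications for illustration; this is not a tool from the book.

```python
from dataclasses import dataclass

@dataclass
class Interdependency:
    upstream: str               # component sending information or resources
    downstream: str             # component receiving them
    feedback_loop_works: bool   # simplified pass/fail stand-in for the system principles

def next_evaluation_step(interdependencies: list[Interdependency]) -> str:
    """Assumes the design is understood and interdependencies are defined;
    decides where the evaluation should invest next."""
    broken = [i for i in interdependencies if not i.feedback_loop_works]
    if not broken:
        # Operating as a system: evaluating the emergent property is worth the investment.
        return "evaluate the emergent property"
    # Otherwise, recommendations target operational-level improvements first.
    return "recommend operational fixes: " + ", ".join(
        f"{i.upstream} -> {i.downstream}" for i in broken
    )

# Hypothetical components, for illustration only.
print(next_evaluation_step([
    Interdependency("case management", "housing services", True),
    Interdependency("housing services", "job training", False),
]))
```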

 

 


- Ralph Renger

Author, System Evaluation Theory: A Blueprint for Practitioners Evaluating Complex Interventions Operating and Functioning as Systems

Outcomes are Not Emergent Properties
November 28th, 2022 

One recurring challenge I encounter when explaining the idea of a system emergent property is that colleagues often want to equate “outcomes” with an emergent system property.  “Well, isn’t the emergent property just an outcome of the complex intervention?” is what I am frequently asked. 

My reply is that I prefer to use the term emergent property when evaluating complex interventions because it does not carry the “baggage” of program evaluation.  Outcomes are often equated with the logic model, as in the immediate, intermediate, and long-term outcomes.  By definition, these outcomes are chronological; they are time-bound.  This is because the outcomes were derived using if-then, root cause analysis, reductionist-type thinking.  If you are focused on cause and effect, then by definition one thing precedes the other and there is a time component. 

 

An emergent property is qualitatively different.  To emerge, according to Merriam-Webster, is “to become manifest; to become known”.  What becomes known does so through the interdependence between the parts of the complex intervention.  Equity, quality of life, stability, and so forth are examples of emergent properties for complex social interventions.  They are not chronological; rather, they manifest themselves when the intervention components are operating as they should:  an emergent property is the function of a complex intervention that arises through the product of component interactions (Ackoff, 1994).
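One toy contrast I find helps make the distinction visible: a logic-model outcome chain is ordered in time and belongs to a single component, whereas an emergent property is assessed jointly across the interacting components and is not owned by any one of them.  The component names, numbers, and multiplicative scoring in this Python sketch are invented for illustration only, not a measurement model from the book.

```python
# Reductionist view: a time-bound outcome chain belonging to ONE component.
outcome_chain = [
    "immediate: knowledge gained",
    "intermediate: behavior changed",
    "long-term: disease rates reduced",
]

# Systems view: an emergent property assessed jointly across interacting components.
component_contributions = {"housing": 0.7, "health services": 0.8, "job training": 0.6}

def quality_of_life(contributions: dict[str, float]) -> float:
    # Multiplicative on purpose: if any component stops contributing (score 0),
    # the emergent property collapses no matter how well the others perform.
    score = 1.0
    for value in contributions.values():
        score *= value
    return score

print(outcome_chain)                                       # chronological, component-level
print(round(quality_of_life(component_contributions), 2))  # joint, owned by no single part
```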

 

I get that there is a certain sense of comfort in trying to equate emergence with something one already knows and is familiar with, like outcomes, but they are not the same thing. If you equate emergence with a long-term outcome, you miss the opportunity to collect data on the higher purpose of the complex intervention, and both the intervention and the evaluation suffer as a result.


- Ralph Renger

Author, System Evaluation Theory: A Blueprint for Practitioners Evaluating Complex Interventions Operating and Functioning as Systems
