Scientific inquiry has largely been embraced in the post-Enlightenment era and has continually assisted people in investigating, observing, and improving processes and events. It is within human nature to suffer from certain biases and fallacies, either failing to understand the way something happens or distorting it to confirm our own beliefs (Shermer, 65). People have, in time, found that empirical and objective studies and experiments prove far more reflective of reality than perception alone and, “where experiments of this kind are judiciously collected and compared, we may hope to establish on them a science, which will not be inferior in certainty, and will be much superior in utility to any other of human comprehension” (Hume, 46). As government programs grew to serve more people, they began to involve far more variables and interventions than most people could consciously keep track of on their own. And, especially as citizens grew suspicious about government spending and waste, more oversight was needed to report and correct program and policy outcomes. In this way, program evaluations are a fix for the tendencies of human nature to misinterpret, distort, or fail to understand information.
Evaluations seek to “study, appraise, and help improve social programs, including the soundness of the programs’ diagnoses of the social problems they address, the way the programs are conceptualized and implemented, the outcomes they achieve, and their efficiency” (Rossi, et al., 3). But this close evaluation and observation cannot be presented as raw data alone; it must also be prepared and presented in a collected and understandable way for the stakeholders in order to “be attentive to multiple values and multiple perspectives” (Langbein, 7). In Practical Program Evaluations, author Gerald Andrews Emison seeks to explain how best to address the empirical and unbiased demands of a program evaluation within the frame of those using the information. Emison built this piece from his own experiences in the public sector and in the classroom. He saw that many texts failed to show how to turn evaluation into practical action, knowledge which, he observes, is unique to experience like his.
From the start of the text, Emison does a phenomenal job introducing evaluation with the weight it deserves: as a rational, scientific method that must be communicated practically, beyond its scientific qualities, to different stakeholders. He continues with a brief history of public evaluation, explaining the growing need for oversight and evaluation and how its resulting decisions matter to the public interest (Rossi, et al., 11)(Emison, 13). Emison also sets the foundation by explaining bounded rationality as the way “people make incremental decisions with the best information reasonably available, learn from the outcome, and then adjust accordingly” (Emison, 20-21)(Simon, 119). Since the reader now understands the connection between the raw, rational side of programs and the nonrational explanation of their outcomes, Emison provides the foundational definition of program evaluation “as the use of rational processes to examine the conduct of a program in the public interest and assess its characteristics and effectiveness along with the sources of these qualities” (Emison, 21). The goal of this text is to show how an evaluation pragmatically communicates its information to readers.
Of course, the author goes on to cover how rational evaluations are carried out (designing, collecting data, and analyzing), but since even he regards this information as a banality in the field, he moves on to his real focus, which can only be gathered from the field: understanding the four Cs. These four Cs are client, content, control, and communicate, each C offering a unique way to design, complete, and communicate one’s evaluation in the best way possible.
The first C, Client, coaches the reader to learn about the client and the major stakeholders to really understand what the scope and goal of the evaluation are. With that understanding, an evaluator can better serve those interests. Emison explains that “success involves specific deliverables, the content of analysis as well as techniques applied, and the interpersonal and political reactions to the evaluation by the stakeholders” (Emison, 36). This is a piece of information one would not often learn in the classroom, but it paints a somewhat bleak picture. Where a young evaluator may think they are searching for facts through scientific inquiry, they are really trying to meet the needs of those funding the evaluation. And although the author stresses the ethical need to do the right thing and walk away when necessary, he would do better to stress the importance of the actual results of the evaluation earlier on, rather than framing them to please the stakeholders, though he may be onto something students do not want to hear.
Content is the next C, and Emison provides a clear and simple way to deal with the content and data collected: stick to the facts while focusing on what matters. What matters, according to the author, is the purpose of the program: prioritizing the data by starting with what is most relevant to the program outcome and ending with a confirmation of what is most useful to the program (Emison, 51). This part of the piece answers the critique raised under the first C by stressing the data and the fact that correlation does not always mean causation. Emison’s assertion that “a skeptical mind can be a helpful friend” confirms his understanding of the biased and fallacy-ridden mind of man (Emison, 52).
Control is Emison’s next C. Although other texts focus on control of the evaluation and the evaluation design, Emison offers a refreshingly different kind of control: control of the work (Bardach & Patashnik, 71)(Langbein, 9)(Rossi, et al., 32-33). That is, how to manage the evaluation team and meet the needs of the stakeholders within time and budget. His solution is less profound than his other Cs: estimate the time it takes to do the work, build in a cushion of time to get the work done, monitor work performance, and react quickly to uncertain threats to the evaluation (Emison, 72). The author elaborates with a number of case studies, which helps, but leaves the reader feeling that this issue can only be managed with experience.
The final C is communication. Emison holds that, no matter how well an evaluation is conducted, “people must be convinced” – meaning the scientific and quantitative products must be clarified and communicated to the readers (Emison, 89). This is the last portion of his equation, the one that bookends the whole process. His advice is to stay focused on ideas rather than details by defining the scope of the problem or problems identified, explaining the sources of the problems, and then laying out the method to address them. Emison again returns to the need for the evaluator to frame the results in a way that pleases the stakeholders by focusing on what they probably want to see out of the evaluation. His experience really shines through when he offers some simple yet humorous advice, like making “your complex topics accessible…[by] conveying the topics in short words and short sentences” (Emison, 93). Though humorous, these steps convey a very important aspect of evaluations: very complex information is gathered, and the many variables and interventions present make the issue and its solution difficult for the mind to accurately conceptualize.
This piece accomplishes what it sets out to do: convey information about successful and pragmatic program evaluation that is only gained from work in the public sector. Emison repeatedly posits that certain information can only be gained from time in the public sector and through experience. Although there is no doubt the text and his special experience are valuable in providing the reader unique knowledge, it falls into the same trap as the other texts. That is, in a way, his text fails to solve the problem he sets out to settle. Emison states that students cannot understand practical program evaluation from a textbook, and then seeks to solve that problem with a textbook.
Practical Program Evaluations comes full circle, explaining the heavily scientific nature of the data and the complex, strenuous nature of social programs, and ending with how evaluations need to communicate complex information in terms easily understandable to the reader and the stakeholders. The art of breaking extremely complex terms and concepts down into simple stages seems to be the key to successful evaluations, according to Emison. As stated before, the human mind is bound by its own biases and fallacies and can often distort, misinterpret, or fail to fully comprehend information, especially when confronted with a mass of it. The solution of the four Cs assists the reader by explaining what is really important: conveying the information in an accurate and understandable way. One must understand the client and stakeholders, focus the content to address the focal issues at hand, control the work, and communicate it. Indeed, this information and the unique experiences from Emison’s work in the field bridge the gap between scientific inquiry and human understanding to arrive at, and better communicate, the solution.
Emison, G. A. (2007). Practical Program Evaluations: Getting from Ideas to Outcomes. Washington, D.C.: CQ Press.
Hume, D. (1984). A Treatise of Human Nature. New York, NY: Penguin Classics.
Langbein, L. (2015). Public Program Evaluation: A Statistical Guide (2nd ed.). London: Routledge.
Rossi, P. H., M. W. Lipsey, & H. E. Freeman. (2004). Evaluation: A Systematic Approach (7th ed.). Thousand Oaks, CA: Sage Publications.
Shermer, M. (2008). The Mind of the Market: Compassionate Apes, Competitive Humans, and other Tales from Evolutionary Economics. New York, NY: Times Books.
Simon, H. (2000). Administrative Behavior: A Study of Decision-making Processes in Administrative Organizations (4th ed.). New York, NY: The Free Press.