Assessment and Evaluation Models Should Include Reflection
Trends in Educational Technology, Journal #5
I believe that assessment is about more than assigning a kind of currency value to students’ learning; assessment and evaluation should also help teachers and instructional designers evaluate themselves and their own curriculum so they can revise it. Formative and summative assessment are tools I’m already familiar with, and since I first encountered them in English 270 back in 2012, I have used them regularly to tweak my curriculum on week-to-week, unit-to-unit, and course-to-course bases. Of course we need ways to assess and evaluate what our students are doing, since we are subject to educational structures that demand an accounting of students’ learning. But if 100% of our evaluative focus is on something as slippery as “student performance of learning outcomes,” we miss critical opportunities to see that when students are failing the curriculum, there may be problems with the curriculum. To that end, Scriven’s (1991) definition of evaluation has given me something to think about.
Evaluation and assessment of instructional design and curriculum should take into consideration each piece of Scriven’s (1991) definition of evaluation, but I would extend that strategy to be even more reflective. Scriven (1991) defines evaluation as a “process of determining the merit, worth, and value of things” (p. 97). In terms of curriculum design, then, we must establish a set of learning objectives or outcomes and have a way to assess the degree to which learners can perform those objectives over time. What I particularly like about this model is that designers should think about the merit of those learning outcomes. Indeed, learning outcomes should be those things which have intrinsic value within a given system. In my own thinking, another important step in the process of designing and revising curriculum is to constantly ask the following questions: Why do we value these learning outcomes or objectives? What is the nature of their merit? For example, Stufflebeam’s CIPP Evaluation Model calls for an evaluation of context, “the assessment of the environment in which an innovation or program will be used to determine the need and objectives for the innovation” (Johnson & Dick, 2012, p. 97). I would take that a step further and suggest that we must ask why that environment (context) has those particular needs. From my post-structuralist reading of these evaluation models, the same holds true for Rossi’s Five-Domain Evaluation Model. The first dimension of that model is the needs assessment: “Is there a need for this type of program in this context?” But that question neglects an equally important one: “Why does this context have this particular need to begin with, and is that need justified by value systems that are of intrinsic value and benefit to everyone?” In other words, we should constantly seek to understand the underlying structures that attempt to justify the connection between a thing and that thing’s merit. This is especially crucial if we consider how those structures, or the objects within them, change over time.
Absolutely vital to the design process is the input phase of Stufflebeam’s CIPP Evaluation Model, which calls for an accounting of all the resources required to make a program’s goals attainable (Johnson & Dick, 2012, p. 98). My experience teaching the Early Start English program in summer 2014 is why I’ll keep this in mind for the future. One of the reasons I believe that program failed is that the university did not deliver on what was agreed upon during the program’s input process. During that process, we were promised specific spaces and equipment, so we designed our curriculum and its learning outcomes with those spaces and equipment as key components. When the university failed to deliver on that space and equipment, the curriculum could not adapt. Ultimately, if the input process fails, an entire program may be destined to fail as well.
References
Johnson, R., & Dick, W. (2012). Evaluation in instructional design: A comparison of evaluation models. In R. Reiser & J. Dempsey (Eds.), Trends and issues in instructional design and technology (3rd ed.). Boston, MA: Pearson.