Flipping the Classroom, Constructivism, and Grading Contracts

Trends in Educational Technology, Journal #9

Although I have been a teaching associate with my university's first-year writing program for three years now, we all attend an orientation at the beginning of each academic year. During my orientation this fall, one of my colleagues presented an altered curriculum, or rather an altered approach, to teaching English 5A/5B. The most significant alteration was that her course is part of a larger faculty cohort across the disciplines piloting the university's DISCOVERe program, an initiative to run classes entirely through tablets. Throughout her presentation, she kept referring to the idea of the "flipped" classroom, and while I found the term fuzzy at the time, I got the sense that it was something of a colloquial term for a constructivist approach to instruction: an approach that redirects or "flips" the emphasis in the classroom from the teacher/lecturer/professor/master to the student/learner. My intuition was close, and further investigation in the 2014 Horizon Report neatly bridges the gap between the idea of a "flipped" classroom and a constructivist approach to instruction.

The idea of pointing learners to objective knowledge outside of the classroom is not new to me. Jordan Shapiro (2013) discusses this in his article on forbes.com, where he shares how, instead of delivering the materials for objective knowledge inside the classroom, he "flips" this paradigm by delivering those materials digitally and outside of the classroom. This enables us to redirect our face-to-face energy from ingesting material to digesting material. In the reading and writing classroom, for example, instead of focusing our time on reading a text together, we do things with texts together: as collaborators, teachers and learners make meaning. As Johnson, Adams Becker, Estrada, and Freeman (2014) explain, this paradigm "[enables] students to spend valuable class-time immersed in hands-on activities that often demonstrate the real world applications of the subject they are learning" (p. 36). "Flipping" the classroom, then, is essentially a move toward a constructivist paradigm, using digital technologies as a mediator to serve instructional materials to learners outside of the classroom.

[Image: Flipping the Classroom word cloud]

Johnson et al. (2014) point to a resource on flipping the classroom that I have found particularly useful: Jennifer Demski (2013) offers six tips from experts on how to flip a classroom. One thing she points to, which I believe takes considerable skill and energy on the part of the teacher, is anticipating what students need during the first moments of class and letting the students decide what the particular foci will be during class time. She offers some strategies from Robert Talbert, professor of mathematics at Grand Valley State University, including having students use clickers to take a quick quiz at the beginning of class. This is essentially a quick kind of formative assessment, one that requires a certain flexibility and agility in class planning. To be successful with this strategy, instructors must be able to respond to their learners' needs at a moment's notice, and if they teach the same course more than once concurrently, different groups of learners may have different needs on any given day with any given topic, placing even more demand on a teacher's curricular agility. The benefit, though, is that you enable students to pursue not what you think they need but what you know they need, because they tell you exactly what they need.
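
This kind of entry-quiz triage is easy to picture in code. Here is a minimal sketch in Python, entirely my own illustration (the topics, responses, and two-foci cutoff are hypothetical, not drawn from Demski or Talbert), of how an instructor might tally clicker responses and let the results pick the day's foci:

```python
# Hypothetical sketch: tally entry-quiz clicker responses and surface the
# topics students struggled with most, so class time targets real needs.
# Topics, responses, and the two-foci cutoff are illustrative assumptions.

from collections import Counter

# Each response tags a quiz question with its topic and whether it was correct.
responses = [
    {"topic": "thesis statements", "correct": False},
    {"topic": "thesis statements", "correct": False},
    {"topic": "integrating quotes", "correct": True},
    {"topic": "integrating quotes", "correct": False},
    {"topic": "MLA citation", "correct": True},
]

def todays_foci(responses, max_foci=2):
    """Rank topics by error rate and return the ones to work on in class."""
    totals, misses = Counter(), Counter()
    for r in responses:
        totals[r["topic"]] += 1
        if not r["correct"]:
            misses[r["topic"]] += 1
    error_rates = {topic: misses[topic] / totals[topic] for topic in totals}
    ranked = sorted(error_rates, key=error_rates.get, reverse=True)
    return ranked[:max_foci]

print(todays_foci(responses))  # ['thesis statements', 'integrating quotes']
```

The ranking is the whole trick: class time goes to whatever the responses say students found hardest, not to what I had guessed they would find hardest. This approach is not without its perils and pitfalls, however.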

Unless curriculum and assessment have built into them a way to value and evaluate the labor that must take place outside of class, this flip is destined to flop. Flipping the classroom depends on student labor outside of the classroom; if students show up to class not having done the assigned labor, they cannot do anything with it because they lack the foundation on which to do anything. Suddenly we are back to the classroom and the lecture as the point of delivery for instructional materials. Essentially, if students have not been motivated to do the labor outside of class, they are not likely to do it. This is why I believe implementing a grading contract is crucial. Grading contracts nudge evaluation away from the product and toward the process; they ask the question, "Did you do the labor (outside of class) to the letter and in the spirit in which it was asked?" So long as we construct that labor as something that is assessable, i.e., have students turn something in electronically in advance of the class that is scheduled to do something with that labor, and attach that labor to their grade for the course in some way, students will be motivated to do the labor they need so that we can collaborate and construct meaning with those materials in class.
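
As a thought experiment, here is a minimal sketch of how that assessable labor might be checked programmatically. Everything in it is a hypothetical illustration: the student names, the before-class deadline rule, and the idea of exporting submission timestamps from a course site are my own assumptions, not a feature of any particular gradebook or LMS.

```python
# Hypothetical sketch of a labor-based grading-contract check: credit is
# earned by turning the assigned labor in electronically before class,
# not by the quality of the product. All names and data are illustrative.

from datetime import datetime

class_start = datetime(2014, 10, 27, 9, 0)  # when we plan to *use* the labor

# Submission timestamps, e.g. exported from a course site (None = nothing in).
submissions = {
    "student_a": datetime(2014, 10, 26, 21, 14),
    "student_b": datetime(2014, 10, 27, 8, 55),
    "student_c": None,
}

def labor_credit(submitted_at, deadline):
    """A submission earns contract credit only if it lands before class."""
    return submitted_at is not None and submitted_at < deadline

for student, when in submissions.items():
    status = "credit" if labor_credit(when, class_start) else "no credit"
    print(f"{student}: {status}")
```

The design choice worth noticing is that the check never inspects the work itself; the contract values the labor's completion in advance of class, which is exactly what makes the in-class collaboration possible.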


References

Demski, J. (2013, January 23). 6 Expert Tips for Flipping the Classroom. Campus Technology. Retrieved October 26, 2014.

Johnson, L., Adams Becker, S., Estrada, V., & Freeman, A. (2014). NMC Horizon Report: 2014 Higher Education Edition. Austin, Texas: The New Media Consortium.

Shapiro, J. (2013, August 26). We Need More EdTech, But Less Technology In The Classroom. Forbes. Retrieved October 26, 2014.

Assessment and Evaluation Models Should Include Reflection

Trends in Educational Technology, Journal #5

I believe that assessment is about more than merely assigning a kind of currency-value to students' learning; assessment and evaluation should also help teachers and instructional designers assess and evaluate themselves and their own curriculum so that they can revise it. Formative and summative assessment are tools I am already familiar with, and since I became aware of these assessment methods during my time in English 270 back in 2012, I frequently use them to tweak my curriculum on week-to-week, unit-to-unit, and course-to-course bases. Of course we need ways to assess and evaluate what our students are doing; we are subject to educational structures that demand an accounting of students' learning. But if 100% of our evaluative focus is on something as slippery as "student performance of learning outcomes," we miss critical opportunities to see that if students are failing the curriculum, there may be problems with the curriculum. To that end, Scriven's (1991) definition of evaluation has given me something to think about.

Evaluation and assessment of instructional design and curriculum should take into consideration each piece of Scriven's (1991) definition of evaluation, but I would extend that strategy to be even more reflective. Scriven (1991) defines evaluation as a "process of determining the merit, worth, and value of things" (p. 97). In terms of curriculum design, then, we must figure out a set of learning objectives or outcomes and have a way to assess the degree to which learners are able to perform those objectives over time. What I particularly like about this model is that designers should think about the merit of those learning outcomes; indeed, learning outcomes should be those things which have intrinsic educational value within a given system. And in my own thinking, I believe that another important step in this process of designing and revising curriculum should be to constantly ask the following questions: Why do we value these learning outcomes or objectives? What is the nature of their merit?

For example, Stufflebeam's CIPP Evaluation Model calls for an evaluation of context, "the assessment of the environment in which an innovation or program will be used to determine the need and objectives for the innovation" (Johnson & Dick, 2012, p. 97). I would take that a step further and suggest that we must ask why that environment (context) has those particular needs. In my post-structuralist analysis of these evaluation models, the same holds true for Rossi's Five-Domain Evaluation Model. The first dimension of that model is the needs assessment: "Is there a need for this type of program in this context?" But that question neglects an equally important one: why does this context have this particular need to begin with, and is that need justified by value systems that are of intrinsic value and benefit to everyone? In other words, we should constantly seek to understand the underlying structures that attempt to justify the connection between a thing and that thing's merit. This is especially crucial if we think about how those structures change over time, or how the objects within those structures change over time.

Absolutely vital to the design process is the input process in Stufflebeam's CIPP Evaluation Model, which calls for an accounting of all resources required to make a program's goals attainable (Johnson & Dick, 2012, p. 98). Given my experience teaching the Early Start English program in summer 2014, this is definitely something I will keep in mind for the future. One of the reasons I believe that program failed is that it did not deliver on what was agreed upon during the input process. During the input process, we were promised specific spaces and equipment, and we therefore designed our curriculum and its learning outcomes with those spaces and equipment as a key component. When the university failed to deliver on that space and equipment, the curriculum could not adapt. Ultimately, if the input process fails, an entire program may be destined to fail.
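
To make the stakes of the input process concrete, here is a minimal sketch of the kind of audit I wish we had run: compare promised resources against delivered ones, and flag the curriculum components that depend on anything missing. The resource and component names are hypothetical, loosely echoing the Early Start situation; this is my own illustration, not part of the CIPP model itself.

```python
# Hypothetical input-process audit: compare the resources a program was
# promised against what was actually delivered, and flag the curriculum
# components that depend on anything missing. All names are illustrative.

promised = {"computer lab", "tablets", "projector", "seminar room"}
delivered = {"projector", "seminar room"}

# Map each curriculum component to the resources it assumes will exist.
curriculum = {
    "digital peer review workshop": {"computer lab"},
    "in-class multimodal composing": {"tablets"},
    "discussion seminars": {"seminar room", "projector"},
}

missing = promised - delivered
for component, needs in curriculum.items():
    at_risk = needs & missing
    if at_risk:
        print(f"AT RISK: {component} (missing: {', '.join(sorted(at_risk))})")
    else:
        print(f"OK:      {component}")
```

Run before the term starts, an audit like this would have shown exactly which learning outcomes were resting on resources that never arrived.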


References

Johnson, R., & Dick, W. (2012). Evaluation in Instructional Design: A Comparison of Evaluation Models. In R. Reiser & J. Dempsey (Eds.), Trends and issues in instructional design and technology (3rd ed.). Boston: Pearson.
