Personal Qualities Not Measured by Tests

I’m not sure what the original source of this is, but it’s meaningful to my pedagogy and therefore has a place on the Snow of the Universe. Thank you, universe, for this one. Listed in no particular order…

  • Creativity
  • Critical Thinking
  • Resilience
  • Motivation
  • Persistence
  • Curiosity
  • Inquiry
  • Humor
  • Endurance
  • Reliability
  • Enthusiasm
  • Civic-mindedness
  • Self-awareness
  • Self-discipline
  • Empathy
  • Leadership
  • Compassion
  • Courage
  • Sense of Aesthetics
  • Sense of Wonder
  • Resourcefulness
  • Spontaneity
  • Humility
  • Bravery
  • Conviction

Fresno State DISCOVERe Summer Institute | Day 2

DISCOVERe: Day 2 Reflection – Google Education

Although I have been using Google Drive, Google Apps, and Google Classroom in my teaching for a few years already, today I learned a variety of little tidbits that I am excited to incorporate into my teaching. As I wrote in my previous day’s reflection, mobile technology’s potential for formative assessment remains at the front of my radar, and I believe the entire Google Education Suite is endlessly valuable to this end. Today, however, I am seeing ways in which the Google Education Suite can also make me more efficient as an instructor, freeing me from administrative tasks and allowing me to invest more of my time and energy in helping my students make meaning from course material.

The primary benefit I’m seeing with the Google Education Suite is its ability to make me more efficient through delegation, which, incidentally, touches on flipped classroom pedagogy. This fall I will have 5 sections of English 5A, and if they all remain at full capacity, I will have 125 first-year writing students. My old feedback and assessment methodology would look something like this: I would assign a writing activity or assignment; my students would produce a draft; they would turn that draft in; I would read each and every draft, commenting where I felt the student could use some guidance; and I would return those drafts to my students. This process would likely repeat for a second or final draft. Using Google Classroom as a distribution platform, Google Forms as a scaffold for feedback, and Google Sheets to manage the data, however, will allow me to dramatically expedite this process while also enabling my students to generate meaningful feedback for each other, ultimately redefining the feedback and assessment portions of the classroom writing process.

Using Google Classroom, I can distribute all of the materials my students need in order to generate feedback for their peers. This will primarily include a pool of drafts collected in a shared Google Drive folder and a form they will use to submit their feedback and assessment notes. The form allows me to scaffold how students leave feedback for each other as well as set minimum length requirements to further ensure that the feedback is substantial. Once all of the data is collected in the associated Google Sheet, I can simply mail merge it back to each student. If I assign each student two papers to review, my 125 students will generate 250 feedback items, and with mail merge, authors will receive that feedback within a day of its submission rather than after the week or two it might otherwise take depending on my workload. This also allows me to see exactly what kinds of feedback my students are providing to each other. I believe students will gain valuable insights into their own writing from how they assess each other’s writing, extending their understanding of how their writing will be assessed by me. Finally, since I can read all of the feedback they leave for each other, I can engage in writing instruction on both the drafting and peer feedback sides of the writing coin, something traditional writing process feedback methodology could not do.
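
To make that mail merge step a little more concrete, here’s a rough sketch of what it might look like if the response Sheet were exported as a CSV. The file name, column names, and mail server below are placeholders of my own, not anything built into the Google tools:

```python
# A minimal mail-merge sketch, assuming the Google Sheet of collected peer
# feedback has been exported as feedback.csv. The file name, column names,
# and SMTP host below are hypothetical placeholders.
import csv
import smtplib
from email.message import EmailMessage

SMTP_HOST = "smtp.example.edu"        # placeholder: your institution's mail server
FROM_ADDR = "instructor@example.edu"  # placeholder sender address

def send_feedback(csv_path: str) -> None:
    """Email each peer-review row back to the author of the reviewed draft."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    with smtplib.SMTP(SMTP_HOST) as smtp:
        for row in rows:
            msg = EmailMessage()
            msg["From"] = FROM_ADDR
            msg["To"] = row["author_email"]                    # hypothetical column
            msg["Subject"] = f"Peer feedback on: {row['draft_title']}"
            msg.set_content(row["reviewer_feedback"])          # hypothetical column
            smtp.send_message(msg)

if __name__ == "__main__":
    send_feedback("feedback.csv")
```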

Fresno State DISCOVERe Summer Institute | Day 1

DISCOVERe: Day 1 Reflection

My underlying framework for figuring out how I can “harness the power of mobile devices to redefine teaching . . . and create student-centered environments” centers, so far, on the extended ability to incorporate formative assessment. I am also thinking about project-based learning in general and considering how it might touch on the ARCS model of motivation (Attention, Relevance, Confidence, Satisfaction).
With regard to formative assessment, I see DISCOVERe and mobile technology potentially offering new ways to engage in it. In the past, after discussing a new concept in the classroom, I would ask my students to show me on their hands how well they understood the new concept on a scale of 1 to 5—1 being “not at all” and 5 being “confident.” While I will still use formative assessment techniques like this, I believe that observing their written labor in real time through cloud-based word processing apps like Google Docs will offer new insights into how my students are processing new concepts and ideas. Furthermore, having these insights in real time may allow me to touch directly on the ARCS model, specifically on (C)onfidence, if I can give either praise or gentle corrections as they’re working.
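
To illustrate the kind of quick read this gives me, here’s a small sketch of tallying a 1-to-5 confidence poll, assuming the responses were collected with a Google Form and exported as poll.csv (the file and column names are placeholders of my own):

```python
# A small sketch: tally a quick 1-5 confidence poll exported as poll.csv.
# The file name and the "confidence" column are hypothetical placeholders.
import csv
from collections import Counter

def poll_summary(csv_path: str) -> None:
    """Print a tiny histogram and the mean for a 1-5 confidence poll."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        ratings = [int(row["confidence"]) for row in csv.DictReader(f)]
    if not ratings:
        print("no responses yet")
        return
    counts = Counter(ratings)
    for level in range(1, 6):
        print(f"{level}: {'#' * counts[level]} ({counts[level]})")
    print(f"mean confidence: {sum(ratings) / len(ratings):.1f}")

if __name__ == "__main__":
    poll_summary("poll.csv")  # hypothetical export from the Form's response Sheet
```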

In a different area of the ARCS model, I’m considering both (R)elevance and (S)atisfaction with project-based learning. I believe that if I can create a project framed by a real-world concern that is particularly important to my students, they will be intrinsically interested through its relevance to their lives, and if they know they will be publishing their labor for an audience to consume, there’s a great chance they will feel a sense of (S)atisfaction. If my suspicions are correct, this framework should invite a great deal of motivation in my students, leading them to take even more ownership of their own learning.

Flipping the Classroom, Constructivism, and Grading Contracts

Trends in Educational Technology, Journal #9

Although I have been a teaching associate with my university’s first-year writing program for three years now, we all attend an orientation at the beginning of each academic year. During my orientation this fall, one of my colleagues presented an altered curriculum—or rather an altered approach—to teaching English 5A/5B. The most significant alteration was that her course was part of a larger faculty cohort across the disciplines piloting the university’s DISCOVERe program—an initiative to run classes 100% through tablets. Throughout her presentation, she kept referring to the idea of the “flipped” classroom, and while I found the term fuzzy at the time, I got the sense that it was something of a colloquial term for a constructivist approach to instruction—an approach that redirects or “flips” the emphasis in the classroom from the teacher/lecturer/professor/master to the student/learner. While my intuition was close, further reading in the 2014 Horizon Report neatly bridges the gap between the idea of a “flipped” classroom and a constructivist approach to instruction.

The idea of pointing learners to objective knowledges outside of the classroom is not new to me. Jordan Shapiro (2013) discussed this in his article on forbes.com, sharing how, instead of delivering the materials for objective knowledge inside the classroom, he “flips” this paradigm by delivering those materials digitally and outside of the classroom. This enables us to redirect our face-to-face energy from ingesting material to digesting material. In the reading and writing classroom, for example, instead of focusing our time on reading a text together, we do things with texts together—as collaborators, teachers and learners make meaning. As Johnson, Adams Becker, Estrada, and Freeman (2014) explain, this paradigm “[enables] students to spend valuable class-time immersed in hands-on activities that often demonstrate the real world applications of the subject they are learning” (p. 36). So “flipping” the classroom is essentially a move toward a constructivist paradigm, utilizing digital technologies as a mediator to serve instructional materials to learners outside of the classroom.

Johnson et al. (2014) point to a resource on flipping the classroom that I have found particularly useful. Jennifer Demski (2013) offers six tips from experts on how to flip a classroom. One thing she points to, which I believe takes considerable skill and energy on the part of the teacher, is anticipating what students need during the first moments of class and letting the students decide what the particular foci will be during class time. She offers some strategies from Robert Talbert—professor of mathematics at Grand Valley State University—including having students use clickers to take a quick quiz at the beginning of class. This is essentially a quick kind of formative assessment, one that requires a certain flexibility and agility in class planning. To be successful with this strategy, instructors must be able to respond to their learners’ needs at a moment’s notice, and if they teach the same course more than once concurrently, different groups of learners may have different needs on any given day with any given topic, placing even more demand on a teacher’s curricular agility. The benefit, though, is that you always enable students to pursue not what you think they need but what you know they need, because they tell you exactly what they need. This approach is not without its perils and pitfalls, however.
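
As a thought experiment, here’s a rough sketch of how that quiz data could set the day’s foci—assuming the clicker or quiz results were exported as quiz.csv with one row per answered question. The file and column names are placeholders of my own, not anything from Demski or Talbert:

```python
# A sketch of letting quiz data choose the day's foci: given a per-question
# export (quiz.csv, with hypothetical columns "topic" and "correct" as 0/1),
# rank topics by accuracy so the weakest ones get class time first.
import csv
from collections import defaultdict

def weakest_topics(csv_path: str, n: int = 3) -> list[tuple[str, float]]:
    """Return the n topics with the lowest proportion of correct answers."""
    hits = defaultdict(list)
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            hits[row["topic"]].append(int(row["correct"]))
    accuracy = {topic: sum(v) / len(v) for topic, v in hits.items()}
    return sorted(accuracy.items(), key=lambda kv: kv[1])[:n]

if __name__ == "__main__":
    for topic, acc in weakest_topics("quiz.csv"):
        print(f"{topic}: {acc:.0%} correct -- review in class")
```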

Unless the curriculum and assessment have built into them a way to value and evaluate the labor that must take place outside of class, this flip is destined to flop. Flipping the classroom depends on student labor outside of the classroom, so if students show up to class not having done the assigned labor, they cannot do anything with it because they lack the foundation on which to build. Suddenly we’re back to the classroom and lecture being the point of delivery for instructional materials. Essentially, if students have not been motivated to do the labor outside of class, they are not likely to do it. This is why I believe implementing a grading contract is crucial. Grading contracts nudge evaluation away from the product and onto the process; they ask the question, “Did you do the labor (outside of class) to the letter and in the spirit in which it was asked?” So long as you construct that labor as something assessable—for example, by having students turn something in electronically in advance of the class session scheduled to do something with that labor, and by attaching that labor to their grade for the course in some way—students will be motivated to do the labor they need so that we can collaborate and construct meaning with those materials in class.
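
To sketch what making that labor assessable might look like on my end, assuming the submissions were exported as submissions.csv with a timestamp column—the file name, column names, and deadline below are placeholders of my own:

```python
# A sketch of a grading-contract check: given a submissions export
# (submissions.csv, hypothetical columns "student" and "submitted_at"
# in ISO format), flag who met the pre-class deadline.
import csv
from datetime import datetime

DEADLINE = datetime(2014, 10, 27, 9, 0)  # placeholder: class start time

def contract_check(csv_path: str) -> None:
    """Mark each student's out-of-class labor complete or incomplete."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            when = datetime.fromisoformat(row["submitted_at"])
            status = "complete" if when <= DEADLINE else "incomplete"
            print(f"{row['student']}: {status}")

if __name__ == "__main__":
    contract_check("submissions.csv")
```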


References

Demski, J. (2013, January 23). 6 Expert Tips for Flipping the Classroom. Campus Technology. Retrieved October 26, 2014.

Johnson, L., Adams Becker, S., Estrada, V., & Freeman, A. (2014). NMC Horizon Report: 2014 Higher Education Edition. Austin, Texas: The New Media Consortium.

Shapiro, J. (2013, August 26). We Need More EdTech, But Less Technology In The Classroom. Forbes. Retrieved October 26, 2014.

Assessment and Evaluation Models Should Include Reflection

Trends in Educational Technology, Journal #5

I believe that assessment is about more than merely assigning a kind of currency-value to students’ learning—assessment and evaluation should also be used to help teachers and instructional designers assess and evaluate themselves and their own curriculum so that they can revise it. Formative and summative assessment are tools I’m already familiar with, and since I became aware of these assessment methods during my time in English 270 back in 2012, I frequently use them to tweak my curriculum on week-to-week, unit-to-unit, and course-to-course bases. Of course we need ways to assess and evaluate what our students are doing—we are subject to educational structures that demand an accounting of students’ learning—but if 100% of our evaluative focus is on something as slippery as “student performance of learning outcomes,” we miss critical opportunities to see that if students are failing the curriculum, there may be problems with the curriculum. To that end, Scriven’s (1991) definition of evaluation has given me something to think about.

Evaluation and assessment of instructional design and curriculum should take into consideration each piece of Scriven’s (1991) definition of evaluation, but I would extend that strategy to be even more reflective. Scriven (1991) defines evaluation as a “process of determining the merit, worth, and value of things” (p. 97). So in terms of curriculum design, we must figure out a set of learning objectives or outcomes and have a way to assess the degree to which learners are able to perform those objectives over time. What I particularly like about this model is that designers should think about the merit of those learning outcomes. Indeed, learning outcomes should be those things which have intrinsic value within a given system. And in my own thinking, I believe that another important step in this process of designing and revising curriculum should be to constantly ask the following questions: Why do we value these learning outcomes or objectives? What is the nature of their merit? For example, Stufflebeam’s CIPP Evaluation Model calls for an evaluation of context, “the assessment of the environment in which an innovation or program will be used to determine the need and objectives for the innovation” (Johnson & Dick, 2012, p. 97). I would take that a step further and suggest that we must ask why that environment (context) has those particular needs. Concerning my post-structuralist analysis of these evaluation models, the same thing holds true for Rossi’s Five-Domain Evaluation Model. The first dimension of that model is the needs assessment—“Is there a need for this type of program in this context?”—but that question neglects an equally important one: “WHY does this context have this particular need to begin with, and is that need justified based on value systems that are of intrinsic value and benefit to everyone?” In other words, we should constantly seek to understand the underlying structures that attempt to justify the connection between a thing and that thing’s merit. This is especially crucial if we think about how those structures change over time or how the objects within those structures change over time.

Absolutely vital to the design process is Stufflebeam’s input process in the CIPP Evaluation Model, which calls for an accounting of all resources required to make a program’s goals attainable (Johnson & Dick, 2012, p. 98). Given my experience teaching the Early Start English program in summer 2014, this is definitely something I’ll keep in mind for the future. One of the reasons I believe that program failed is that it did not deliver on what was agreed upon during the program’s input process. During the input process, we were promised specific spaces and equipment, so we designed our curriculum and its learning outcomes with those spaces and equipment as a key component. When the university failed to deliver on that space and equipment, the curriculum could not adapt. Ultimately, if the input process fails, an entire program could be destined to fail as well.


References

Johnson, R., & Dick, W. (2012). Evaluation in Instructional Design: A Comparison of Evaluation Models. In R. Reiser & J. Dempsey (Eds.), Trends and issues in instructional design and technology (3rd ed.). Boston: Pearson.