Sunday, September 20, 2009

putting some trust in "those little bastards"

Over at the Chronicle of Higher Education, H. William Rice has posted a thoughtful opinion piece titled "Don't Shrug Off Student Evaluations." (The piece is locked to nonsubscribers; because I'm all about open access, I will helpfully link you to a free version here.)

Rice, a longtime higher-education faculty member, describes a pair of colleagues who took distinctly negative approaches to the notion of students evaluating their professors: One, whom Rice describes as "an elderly faculty member," explained to Rice that he saw student evaluations as
“an absolute violation of academic freedom,” while jabbing a trembling, crooked finger in my face with a swordlike flourish. “No one has the right to come in my classroom,” he said. (I assume he allowed the students in.)

The other colleague, whom Rice calls "Professor X," confided in Rice that he read his students' evaluations before submitting final grades. Professor X had received nearly universally negative reviews and wanted Rice's advice on whether he should lower students' grades "to show 'those little bastards'."

Rice, of course, takes the more contemplative path by arguing that student evaluations have an important place in academia because they offer educators insight into how well they're doing their job, where they can improve, and in what areas they continue to succeed. He writes:

Sure, student evaluations have their limits. They should never be the only means of evaluating faculty members, and they should never be used to snoop on professors who deal with controversial subjects in their classes. Yes, administrators have been guilty of misusing them. But the benefits far outweigh the risks, and faculty members who actually want to become better teachers—and who believe that good teaching skills are not bequeathed to them in perpetuity with the awarding of a Ph.D.—should read them over and over again.

Professor X’s great objection to student evaluations was one I frequently hear: “The student does not know the subject, so how can he or she judge my teaching?”

True, students’ perspectives are limited. But so are professors’. A professor cannot know what it is like to be 20 in an age of text messages, Facebook, and YouTube, and to be forced to endure lectures from someone who does not inhabit their socially networked world. I’m not suggesting that faculty members necessarily use that technology in their teaching, only that the point of view of those who do use it might be valuable.

As a former college instructor, I can attest to the deep value of student evaluations, though the danger of misinterpretation is always present. Often, we think about student ratings as a kind of popularity contest for educators--in some ways, I think, rightly so. After all, it's fairly easy to get high marks from lots of students: Just be friendly, funny, and a soft grader. It helps to make interesting use of new media resources.

Because so much of the student evaluation process hinges on faculty popularity, it's easy to overlook the much more important questions that only students can answer: Did the professor change the way you thought about the subject? Did you leave the class a better thinker than when you went in? Can you apply what you've learned to real-world contexts?

Here I draw from Ken Bain's excellent text, "What the Best College Teachers Do." He writes about an experiment conducted by Arizona State University physicists in the early 1980s. They examined whether introductory physics courses changed the way students thought about motion. Most students came in with an intuitive set of theories about how the world works; most of these theories aligned with what the physicists called "a cross between Aristotelian and 14th-century impetus ideas." The goal of the course was to introduce students to Newtonian physics, which in many ways directly opposes the Aristotelian approach. Given that most undergraduates went in "thinking like Aristotle," did they leave "thinking like Newton"?

Bain writes:
Did the course change student thinking? Not really. After the term was over, the two physicists...discovered that the course had made comparatively small changes in the way students thought. Even many "A" students continued to think like Aristotle rather than like Newton. They had memorized formulae and learned to plug the right numbers into them, but they did not change their basic conceptions. Instead, they interpreted everything they heard about motion in terms of the intuitive framework they had brought with them to the course.

....Researchers have found that...some people make A's by learning to "plug and chug," memorizing formulae, sticking numbers in the right equation or the right vocabulary into a paper, but understanding little. When the class is over, they quickly forget much of what they have "learned."...Even when learners have acquired some conceptual understanding of a discipline or field, they are often unable to link that knowledge to real-world situations or problem-solving contexts.

Of course, there's no way to use end-of-semester student evaluations to gauge an instructor's long-term impact on learning. Aside from the too-short time scale, there are the real pressures on students to perform, achieve, succeed--and, strange as it may seem, the only way they can definitively prove they've done this is through their grade point average. This means that evaluations are nearly inextricably linked to students' perceived achievement in the class--linked, that is, to what they think their final grade will be.

This isn't to say that student evaluations don't have a place in higher education: I firmly believe they do, if for no other reason than to boot the universally bad instructors who either don't care about teaching effectively or aren't capable of it, and to nudge the best instructors a little closer to the tenure finish line.


Most of us fall somewhere in the middle of the good-teacher continuum, which means that if we want to find out whether we've had an impact on students' thinking, we may need to supplement student evaluations with some evaluations of our own.

Here's one thing we might try: A set of surveys, administered at the beginning of the class and again at the end, that zero in on the key conceptual frameworks of the course's domain. While in introductory physics the key issue may be "how students think about motion," in geometry it may be "how students think about shapes." In English, my field of choice, it may be something like "how students think about effective written communication." You start there, think about the key issues that shape your conceptual framework, and design a set of questions that can gauge students' intuitive answers (at the beginning of the course) and informed answers (at the end of the course). A nice added benefit of doing this sort of thing is that it forces you to think about and articulate your foundational approach to the subject matter--useful for any educator, no matter how expert.
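To make this concrete, here's a minimal sketch of how the pre/post comparison might be scored, assuming multiple-choice items graded against an answer key. The data layout and function names below are my own invention, and I'm borrowing the physics education researchers' "normalized gain"--the fraction of a student's possible improvement actually achieved--as the summary measure; substitute whatever metric fits your field.

# A sketch, not a validated instrument: score pre/post concept surveys
# and compute normalized gain. Data layout and names are hypothetical.

def score(responses, answer_key):
    """Fraction of items on which a student's response matches the key."""
    correct = sum(1 for item, answer in answer_key.items()
                  if responses.get(item) == answer)
    return correct / len(answer_key)

def normalized_gain(pre, post):
    """Hake-style normalized gain: share of possible improvement achieved."""
    if pre >= 1.0:
        return 0.0  # a perfect pretest leaves no room to improve
    return (post - pre) / (1.0 - pre)

# Example: a three-item survey on how students think about motion.
answer_key = {"q1": "b", "q2": "a", "q3": "c"}
pre_responses = {"q1": "a", "q2": "a", "q3": "d"}   # intuitive answers
post_responses = {"q1": "b", "q2": "a", "q3": "c"}  # informed answers

pre = score(pre_responses, answer_key)    # 1/3: one item right going in
post = score(post_responses, answer_key)  # 3/3: all items right coming out
print(normalized_gain(pre, post))         # 1.0: all available gain achieved

None of this replaces careful question design, of course--that's the hard part--but once the questions exist, the pre/post arithmetic itself is cheap.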

Indeed, the goal for all educators, no matter what discipline, no matter what the age of their students, should be to help all learners move, even a little, toward how real practitioners in the subject area engage with the world.

And let's try to put a little more faith in our students: "Those little bastards" may care more about grades than we'd like, but they also tend to recognize real, effective teaching when they encounter it. They may not, as one of Rice's straw men objected, know the subject well enough to judge how it's taught, but they're certainly experts in learning--they've been doing it their whole lives. Let's trust that, given the right questions, they'll offer up the answers we need in order to improve our teaching practices.

3 comments:

Rafi said...

I love the idea of every class having pre/post surveys to determine whether participating students' thinking has changed as a result of a course. But I think a key challenge is that, to determine this effectively, students would often have to actually engage in pre/post tasks that match the skills the course aims to foster, and then someone has to analyze the results. As someone who has tried to do this with regard to new media literacies, I can tell you that it's very challenging, resource-heavy, and requires skills that most academics simply lack. Maybe there's some pioneering young learning scientist out there, though, who can figure out an innovative solution... :)

rebecca McMc said...

soon pls address ratemyprofessors and other such sites.

Ironicus Maximus said...

We've always been fans of the seed-planting analogy for teaching around here. After all, to take 18- to 21-year-olds, hold them for 15 weeks (approximately 60 hours), and expect to fundamentally alter the way they look at most anything is a bit of a stretch.

We're also fans of student evaluations, but not for what they say as much as for where they point.

 
