Charles,
I ran across the essay below in the "Tomorrow's Professor" newsletter.
I think it will give you some interesting ideas.
Larry
+++++++++++++++++++++++++++++++++++++++
Multiple-Choice Questions You Wouldn’t Put on a Test: Promoting Deep
Learning Using Clickers
Classroom response systems (“clickers”) can turn multiple-choice
questions, often seen as limited assessment tools, into
effective tools for engaging students during class. When using this
technology, an instructor first poses a multiple-choice question. Each
student responds using a handheld transmitter (or “clicker”).
Software on the classroom computer displays the distribution of student
responses. Although many multiple-choice questions found on exams work
well as clicker questions, several kinds of multiple-choice questions
that are less appropriate for exams nonetheless function very well to
promote learning, particularly deep learning, during class when used
with
clickers.
One-Best-Answer Questions
Consider posing a question that requires students to weigh evidence for
and against each of several answer choices: a question that asks
students to select the one “best” answer among competing
alternatives. In a literature class, students might be asked to select
the option that best explains a character’s motivation at a particular
point in a play. In a nursing class, students might be asked to select
the best course of action given incomplete information about a
patient’s condition. Such one-best-answer questions have more than
one defensible answer, although some answers may be more reasonable
than others.
These questions would not make sense on exams without essay questions
to supplement them, but they can function very well to promote
discussion during class. After having students respond to such a
question, an instructor might then use the distribution of student
responses to structure a classwide discussion of the question, a
discussion in which students share reasons for and against the various
answer choices given in the exercise. The instructor can then guide this
discussion in ways that show students the standards of evidence of the
discipline, standards used to make the kinds of evaluative decisions
required by the one-best-answer question.
Using clickers to facilitate this kind of activity has two key
advantages. One is that because all students must commit to an answer
to the question at hand, they are more invested in participating in the
subsequent discussion and are more likely to have generated some ideas
to share in that discussion. The other is that the
results display can show students that the question is a difficult
one (particularly when more than one answer choice turns out to be
popular) and thus worthy of discussion.
Student Perspective Questions
Student perspective questions can be useful clicker questions, as well.
These questions ask students to share their opinions and personal
experiences. For example, a political science instructor might ask
students about their views on current events, a psychology instructor
might ask students if they have a close friend or family member with a
particular medical condition, and a biology instructor might ask
students about their personal views on evolution. These kinds of
questions can help students connect sometimes-abstract course material
with their own lives. They can also help students understand each other
better. Students are sometimes surprised to see how many of their peers
agree or disagree with them on particular topics. This can embolden some
students to speak up in class discussions, knowing that there are others
present who agree with them. It can also encourage some students to more
seriously consider perspectives different from their own.
When asking student perspective questions, the ability of clickers to
allow students to respond anonymously about sensitive topics is
important. Simply asking for a show of hands would likely produce
misleading results for questions like these. Moreover, the perspectives
of all students are displayed to the class, not just those of the
relatively few students willing to share their perspectives verbally.
An instructor could poll his or her students on their opinions and
experiences using online surveys and the like, but polling via clickers
gives the resulting data an immediacy that can engage more students.
Misconception Questions
Many instructors in the sciences use clickers to ask misconception
questions, multiple-choice questions designed to surface and address
common student misconceptions about particular topics. For example, a
chemistry instructor might show students two identical flasks with
different amounts of water inside and ask which flask, if either, has
the higher vapor pressure. Students are likely to vote that the flask
with more water has the higher vapor pressure. However, since vapor pressure
depends on temperature, not volume, the correct answer is that the vapor
pressure is the same for both flasks. This question is designed to
address a common misconception about the relationships among the three
variables: vapor pressure, volume, and temperature.
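For instructors who want to make the reasoning behind the answer
explicit, one sketch (assuming a pure liquid, some liquid remaining in
each flask, and both flasks at the same temperature) is the
Clausius-Clapeyron relation, which expresses equilibrium vapor pressure
as a function of temperature alone:

\[
\ln\frac{P(T_2)}{P(T_1)} \;=\; -\,\frac{\Delta H_{\mathrm{vap}}}{R}\left(\frac{1}{T_2} - \frac{1}{T_1}\right)
\]

The volume of liquid appears nowhere in the relation, so two flasks of
the same liquid held at the same temperature have the same equilibrium
vapor pressure regardless of how much liquid each contains.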
Well-designed misconception questions are answered incorrectly by 30 to
70 percent of students. Many instructors who see this kind of result
engage in what Harvard University physics professor Eric Mazur calls
peer instruction (Mazur, 1997). Students are asked to discuss the
question in pairs, sharing their reasons for their answers with each
other and attempting to come to consensus on the correct answer. Then
the students vote again on the clicker question. This pair discussion
time is valuable because it gives students a chance to learn from each
other. Often, a peer’s explanation of a tough question can be more
helpful to a student than an instructor’s explanation. After the
second vote, the instructor then leads a classwide discussion of the
question, guiding that discussion to focus on reasons for and against
the various answer choices.
Misconception questions work well on exams, of course. However, the
expectation (or, at least, hope) is that many students will answer these
questions correctly on an exam. When used during class with clickers,
the expectation is that many students will answer them incorrectly,
creating an opportunity for students to stretch their mental models.
Mazur and his collaborators have assessed this teaching method using
pre- and post-tests and have found significant evidence that it improves
student conceptual understanding (Crouch & Mazur, 2001). Their results
have been replicated in a variety of science courses and institutions
(Fagen, Crouch, & Mazur, 2002).
Peer Assessment Questions
Many instructors have students assess each other’s work.
Unfortunately, students are often hesitant to critique each other
publicly, which means that when, for instance, an instructor invites a
class to give feedback on a student presentation, the resulting
discussion often does not involve the kind of critical analysis and
constructive criticism the instructor would like to see. Having students
assess each other’s work using clicker questions, however, makes it
easier to surface their more candid and critical assessments of their
peers’ work.
For example, in her history courses at Mount Royal University, Kori
Street has her students evaluate each other’s class presentations
using clicker questions (Bruff, 2009). Her students assign a letter
grade assessing the quality of a student’s sources, the strength of
the student’s arguments, or the clarity of the student’s
presentation. She finds that by having students assess each other’s
work in these categories using clickers, her students are better able to
provide honest, constructive feedback since the clickers provide a
degree of anonymity. The display of results of these clicker questions,
in turn, promotes more engaged class discussion. When students find out,
for instance, that 40 percent of them feel that the student’s sources
were not very strong, it becomes safer for the whole class to discuss
the quality of those sources. Since Street’s clicker questions are
tied to her grading rubric, the discussions they generate serve to teach
students about the standards of her discipline.
Why Clickers?
Why use clickers to ask the kinds of questions described above?
Clickers allow students to respond anonymously, making it safer for
students to share their perspectives and take risks since their peers
are not aware of their individual responses. However, instructors can
track student responses using clickers, creating accountability for
participation during class, which in turn increases participation. When
more students can respond to a question honestly, more students are
prepared to engage in subsequent discussion. The display of results
that classroom response systems make possible provides further
motivation for meaningful discussion as students become aware of
divergent views. This blend of advantages is difficult to achieve with
other in-class response mechanisms.
It should be noted that clicker questions can only set the stage for
deep learning. It is during the independent thought, small-group
discussion, and classwide debates that deep learning actually occurs.
Well-designed clicker questions, however, can be effective tools for
motivating and preparing more students to engage in those useful
activities.
References
Bruff, D. (2009). Teaching with classroom response systems: Creating
active learning environments. San Francisco: Jossey-Bass.
Crouch, C. H., & Mazur, E. (2001). Peer instruction: Ten years of
experience and results. American Journal of Physics, 69(9), 970-977.
Fagen, A. P., Crouch, C. H., & Mazur, E. (2002). Peer instruction:
Results from a range of classrooms. The Physics Teacher, 40(4), 206-209.
Mazur, E. (1997). Peer instruction: A user’s manual. Upper Saddle
River, NJ: Prentice Hall.
______________________________
Essays on Teaching Excellence
Editor: Elizabeth O’Connor Chandler, Director
Center for Teaching & Learning, University of Chicago
[log in to unmask]
* * * * * * *
NOTE: Anyone can SUBSCRIBE to the Tomorrows-Professor Mailing List by
going to:
https://mailman.stanford.edu/mailman/listinfo/tomorrows-professor
-----
Larry K. Michaelsen
Professor of Management
University of Central Missouri
Dockery 400G
Warrensburg, MO 64093
[log in to unmask]
660/429-9873 voice <---NEW ATT cell phone
660/543-8465 fax
>>> Charles Dameron <[log in to unmask]> 03/07/11 1:47 PM >>>
I've been wanting to strengthen my active-learning activities in my
sophomore literature classes, and the TBL approach provides a number of
fine suggestions. I give frequent quizzes and have daily one-page
assignments (called reaction papers) that I require students to
complete, and I put them in groups during most class meetings to use
their reaction papers to launch them into discussions of our reading in
the literature courses I teach. However... the purpose of these group
discussions isn't focused enough, I have found, and I don't have a
really effective reporting-out activity at the end.
I have read through the Michaelsen/Knight/Fink book on TBL, which is
certainly stimulating, but I'm still wrestling with the issue of
constructing group assignments that lead to productive reporting out
discussions. Most of the courses that use TBL successfully appear to use
case studies from a medical, legal, or business context. Have any of
you developed case studies, or something along those lines, for
literature courses? Are there literature faculty out there who are
having good success with group discussions and then simultaneous
reporting out?
Much thanks,
Charles Dameron