TBL Webinar Series: Information and archives for the IAMSE TBL Webinar series are available at http://www.iamse.org/development/2010/was_2010_fall.htm

Yesterday's session on peer evaluation was great, and the archive is already up.


Here is the schedule:

   


Sept 23, 12:00 pm ET
TBL 101 - Where to Begin

Sept 30, 12:00 pm ET
Voices of Experience - Adopting TBL into Your Course

Oct 7, 12:00 pm ET
Peer Evaluation - The Keys for Success

Oct 14, 12:00 pm ET
Writing TBL Questions

Oct 21, 12:00 pm ET
The 12 Tips for Creating a Good TBL Course

Oct 28, 12:00 pm ET
Research in Team-Based Learning

jim


From: Jana McCreary <[log in to unmask]>
Reply-To: Jana McCreary <[log in to unmask]>
Date: Thu, 07 Oct 2010 20:05:54 -0400
To: [log in to unmask]
Subject: Re: peer evaluation practices

Hi Laura and others,
 
Here is the language from my syllabus:
- Work in Teams. Your participation in teamwork will be assessed by your team members using a quantitative assessment. Team members will compile this information three times during the semester. For each assessment period, the lowest score received by a team member in each category will be dropped. The average of the three evaluations will count as 2% of each student’s final grade.
 
To implement this, I printed a chart with all team members’ names in rows and four columns as follows:  (1) Is prepared for discussions; (2) Participates in discussions; (3) Avoids dominating discussions; and (4) Listens respectfully.  The team members used that chart to allocate points among team members.  Each student filled out one sheet of paper.  
 
For points, I take the number of people on the team, minus one, times 5. Thus, if they have six team members, each person allocates 25 points for each category [(6-1)*5 = 25]; if they have five team members, each person allocates 20 points per category [(5-1)*5 = 20]. (They do not give themselves points, hence the subtraction of one before multiplying by five.) If everything is fine on the team, they give everyone a score of 5, and if they feel a need to score a person lower than a 5, they’ll have to score someone else higher. Thus, there must be a genuine reason to decide not to give everyone a score of 5. I was worried this would be confusing, but they all understood (two large sections of 80 students each).
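
If it helps to see the arithmetic spelled out, here is a minimal Python sketch of that allocation rule (the function and variable names are just illustrative; I don't actually use any software for this, only paper forms):

    # Sketch of the point-allocation rule: each rater distributes
    # (team_size - 1) * 5 points per category, because raters do not
    # score themselves, and the full budget must be used.

    def points_per_category(team_size):
        """Points each rater distributes among teammates in one category."""
        return (team_size - 1) * 5

    def is_valid_allocation(scores, team_size):
        """One rater's scores for one category must spend exactly the budget."""
        return (len(scores) == team_size - 1
                and sum(scores) == points_per_category(team_size))

    # A six-person team gives each rater 25 points per category.
    assert points_per_category(6) == 25
    # Giving everyone a 5 is valid; lowering one teammate forces raising another.
    assert is_valid_allocation([5, 5, 5, 5, 5], 6)
    assert is_valid_allocation([4, 5, 5, 5, 6], 6)
    assert not is_valid_allocation([4, 5, 5, 5, 5], 6)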
 
Because the lowest score in each category is dropped, a personal dispute (or a single problem involving ethnicity or race) will not affect a person’s score. Also, no one’s average score in any category will be below a 5 unless at least two people on the team believe that person deserved lower than a 5. My theory was that if at least two people out of five or six identify a problem, then it really needs to be addressed.
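
Again, just as an illustrative sketch (the names below are mine, not from any actual tool), the drop-the-lowest rule works out like this:

    # Sketch of the drop-the-lowest rule: the reported score for one student
    # in one category is the mean of the teammates' ratings after the single
    # lowest rating is removed.

    def reported_score(ratings):
        """Average the ratings received after dropping the single lowest one."""
        kept = sorted(ratings)[1:]  # drop the lowest rating
        return sum(kept) / len(kept)

    # One low rating (e.g., a personal dispute) does not pull the average down:
    print(reported_score([5, 5, 5, 5, 1]))  # -> 5.0
    # It takes at least two teammates rating below 5 to lower the reported score:
    print(reported_score([5, 5, 5, 4, 1]))  # -> 4.75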
 
Allocating a fixed pool of points prevents someone from “gaming” the competitive nature of grades by scoring all of their teammates more harshly or lower. (Students here are graded on a strict curve.) It also prevents people who have a tendency to score high from inflating everyone’s scores relative to raters who think “average” is good enough for everyone.
 
Finally, for the first round, I included a page with the open-ended questions “list something positive about each person” and “list something you’d like this person to improve” for each person to fill out. My assistant typed up the comments on a single sheet of paper for each student (typed to protect anonymity), and I computed the averages to report (after dropping the lowest in each category); each category’s average was reported on the same sheet as the typed comments. I gave that sheet to the students within a week of their completing the evaluations.
 
My hope is that by the time we get to the final assessment, everyone on every team will give each other 5s in each category.

It’s a bit labor-intensive, but I’ve already seen some people improve after receiving comments: things they would never have “heard” if I’d said something similar.
 
I hope this helps some.
 
Jana

 
 

Jana R. McCreary
Assistant Professor of Law
Florida Coastal School of Law
8787 Baypine Road
Jacksonville, Florida 32256
[log in to unmask]
(904) 256-1222
(904) 680-7771 (fax)


From: Team-Based Learning [mailto:[log in to unmask]] On Behalf Of Laura Madson
Sent: Thursday, October 07, 2010 2:57 PM
To: [log in to unmask]
Subject: peer evaluation practices

Hello everyone -
I’m curious about the peer evaluation procedures you use. Would you take a few moments to respond to the following “straw poll”? In addition, please feel free to send any thoughts or comments about peer evaluations.
  1. How many times do you collect peer evaluations during a term (e.g., once, twice, after each team activity)?
  2. Do you use numerical peer evaluations (e.g., assigning points or answering survey items on a 1-to-7 scale), open-ended comments, both, or something else?
  3. Do you share peer evaluations with students?

In the spirit of sharing, here are my answers to the above questions. I teach undergraduates in large-enrollment sections (N=140) of Introduction to Psychology. In the past, I’ve collected peer evaluations at the end of the term using survey items rated on a 1-to-7 scale, and I haven’t shared peer evaluations with students (unless they asked about their final grade). This semester, I’m experimenting with collecting open-ended comments after each team activity and sharing those formative comments with students. It’s too early in the semester to determine the effect of the new peer evaluation procedure, but the change got me wondering about the variety of peer evaluation procedures used by other TBL folks.

Many thanks for your thoughts and time!
lm
Laura Madson, Associate Professor and Graduate Director
Department of Psychology
Box 30001/MSC 3452
Las Cruces, NM 88003
(575) 646-6207
--