Dan: Your experience reinforces the principle that faculty have the privilege of setting acceptable ranges for weighting IRAT/GRAT/application/peer evaluation BEFORE the students have the opportunity to choose weighting. For example, in our school of medicine, most faculty allow students to choose weighting within these ranges (a quick consistency check is sketched after the list):

IRAT: 25-50%
GRAT: 30-60%
application: 0-30%
peer evaluation: 5-10%
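
As a quick sanity check, here is a minimal sketch in Python, assuming the ranges above, that verifies a proposed scheme falls inside each faculty-set range and totals 100%. The proposed weights are hypothetical, purely for illustration.

    # Faculty-set acceptable ranges (from the list above), in percent.
    ranges = {"IRAT": (25, 50), "GRAT": (30, 60),
              "application": (0, 30), "peer evaluation": (5, 10)}

    # A hypothetical student-proposed weighting, in percent.
    proposal = {"IRAT": 30, "GRAT": 40, "application": 25, "peer evaluation": 5}

    assert sum(proposal.values()) == 100, "weights must total 100%"
    for category, (low, high) in ranges.items():
        assert low <= proposal[category] <= high, f"{category} outside {low}-{high}%"
    print("proposal is acceptable")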

I typically ask teams to decide among four options, each of which specifies a complete weighting scheme, such as "IRAT 30%, GRAT 40%, application 25%, peer evaluation 5%". The teams make choices and defend those choices until there is a clear consensus; this sometimes requires a series of votes/discussions until one option is supported by a majority of the teams.
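
If it helps to see the arithmetic behind one of these complete schemes, here is a minimal sketch using the example option above; the component scores are hypothetical, purely for illustration.

    # The example option from above: IRAT 30%, GRAT 40%,
    # application 25%, peer evaluation 5%.
    weights = {"IRAT": 0.30, "GRAT": 0.40,
               "application": 0.25, "peer evaluation": 0.05}

    # Hypothetical component scores (0-100).
    scores = {"IRAT": 82.0, "GRAT": 94.0,
              "application": 88.0, "peer evaluation": 90.0}

    # The final grade is simply the weighted sum of the components.
    final_grade = sum(weights[c] * scores[c] for c in weights)
    print(round(final_grade, 1))  # 0.30*82 + 0.40*94 + 0.25*88 + 0.05*90 = 88.7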

Dee Fink's alternative method is to use peer evaluation grades as multipliers of the team performance grades, so that peer evaluation matters but is not assigned a specific percentage of the overall grade. (For details, see p. 267 in Team-Based Learning: A Transformative Use of Small Groups in College Teaching; Michaelsen, Knight, and Fink, eds.; 2004, Stylus Publishing.)
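
To make the contrast with percentage weighting concrete, here is a minimal sketch of the multiplier idea, assuming the peer evaluation arrives as a percentage that scales only the team-based components. The scores, the 95% multiplier, and the 50/50 individual/team split are all hypothetical; p. 267 of the book gives the actual procedure.

    # Hypothetical component scores (0-100).
    irat = 82.0
    grat = 94.0
    application = 88.0

    # Peer evaluation expressed as a multiplier (here, 95%); it scales
    # the team components rather than receiving its own weight.
    peer_multiplier = 0.95

    # Hypothetical split: half individual (IRAT), half team (GRAT + application).
    team_score = 0.5 * grat + 0.5 * application
    final_grade = 0.5 * irat + 0.5 * (team_score * peer_multiplier)
    print(round(final_grade, 1))  # 0.5*82 + 0.5*(91.0*0.95) = approx. 84.2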

In any case, one definitely needs to limit the weighting assigned to peer evaluation so as not to diminish the importance of individual and team performance on the RAP and applications. I would explain to the class that over-weighting peer evaluation diminishes the affirmation of learning as the chief goal of TBL, so another grade-weighting exercise is in order.

Good luck,
PK


Paul G. Koles, MD
Associate Professor, Pathology and Surgery
Wright State University Boonshoft SOM
937-775-2625 (phone)
937-775-2633 (fax)
[log in to unmask]

On Jun 27, 2011, at 6:06 PM, Daniel Williams wrote:

> Hi everybody:
>
> I just went through the grade-weight-setting exercise outlined in Appendix C of the TBL book with my class. In previous semesters I had trouble getting classes of four teams to come to an agreement on grades, so for this semester's nine-team class I used the large-class variant. They set their weights individually and then entered them into an Excel spreadsheet on my computer, where I had a running average for each category set up. The problem is that the first team to finish entered this: 10% individual performance, 10% team performance, 80% team maintenance. I think these guys then persuaded the rest of the class to go along with them, so everybody else quickly gave me the same weights. I was a little flabbergasted, so I mentioned that this distribution was so crazy that a person could be really smart but get dinged a letter grade for being overbearing or shy. Fifteen minutes later they had brought the team maintenance score down to 66%, but that still sounds really high to me. Based on my experience, team and individual performance are usually split more or less 50/50, with team maintenance getting the remainder.
>
> I tried to make the peer evaluation system simpler, with no forced scoring, to minimize problems, and I am worried that this is what caused the stampede.
>
> Has anybody else run into this crazy result before? I am at a complete loss as to what to do about it!
>
> Thanks,
> Dan Williams