TEAMLEARNING-L Archives

Team-Based Learning

TEAMLEARNING-L@LISTS.UBC.CA

Subject:
From: Larry Michaelsen <[log in to unmask]>
Reply To: Larry Michaelsen <[log in to unmask]>
Date: Tue, 28 Jun 2011 11:46:34 -0500
Content-Type: text/plain
Parts/Attachments: text/plain (121 lines)
I agree with Paul on all counts, but would add a couple of other
points.

One is that I don't think letting groups struggle a bit in coming to
agreement is a bad thing, for a couple of reasons. First, one of the
benefits of letting students set the grade weights is that it
jump-starts the cohesiveness-building process by creating a bit of
intergroup conflict. Second, it gives teams more of a chance to develop
an understanding of each other and of the grading system.

The other is that I think it is risky to let students give everyone the
same peer evaluation scores. In my experience, most students will see
that as an opportunity to give everyone an A on the peer evaluation
component of the grade (which creates a STRONG incentive to weight it
high). Students are just like us; they simply do not like giving bad
news, much less bad grades, to each other. Thus, if you make it easy to
give everyone a high score, that is what they will do, and in effect
you will be penalizing (and may be de-motivating) the individuals who
would otherwise be the best contributors. For that reason, I recommend
somewhat of a forced distribution, even if it is a minimal and/or soft
one.

Larry


-----
Larry K. Michaelsen
Professor of Management
University of Central Missouri
Dockery 400G
Warrensburg, MO 64093

[log in to unmask]
660/429-9873 voice <---NEW ATT cell phone
660/543-8465 fax



>>> Paul Koles <[log in to unmask]> 06/28/11 7:34 AM >>>
Dan:  Your experience reinforces the principle that faculty have the
privilege of setting acceptable ranges for weighting IRAT/GRAT/
application/peer evaluation BEFORE the students have opportunity to
choose weighting.   For example, in our school of medicine, most
faculty allow students to choose weighting within these ranges:

IRAT:  25-50%
GRAT: 30-60%
application:  0-30%
peer evaluation: 5-10%

I typically ask teams to decide among 4 options, each of which
specifies a complete weighting scheme, such as "IRAT 30%, GRAT 40%,
application 25%, peer evaluation 5%".  The teams make choices and
defend those choices until there is a clear consensus--this sometimes
requires a series of votes/discussions until one option is supported
by a majority of the teams.
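[Editor's sketch, not part of Paul's message: the range-plus-options
approach above lends itself to a quick mechanical check. The ranges
below are the ones quoted in the message; the function name and
structure are invented for illustration.]

```python
# Hypothetical check of a proposed TBL grade-weighting scheme against
# faculty-set ranges. Range values are from the message above;
# everything else is illustrative.

RANGES = {
    "IRAT": (25, 50),
    "GRAT": (30, 60),
    "application": (0, 30),
    "peer evaluation": (5, 10),
}

def valid_scheme(weights):
    """Weights must sum to 100 and each fall inside its allowed range."""
    if sum(weights.values()) != 100:
        return False
    return all(lo <= weights[k] <= hi for k, (lo, hi) in RANGES.items())

# One of the example options from the message: sums to 100, all in range.
option = {"IRAT": 30, "GRAT": 40, "application": 25, "peer evaluation": 5}
print(valid_scheme(option))  # True
```

Dan's class's first proposal (10/10/80) would fail such a check twice
over: peer evaluation far exceeds its cap, and the categories differ
from Paul's, which is exactly why setting ranges in advance matters.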

Dee Fink's alternative method is to use peer evaluation grades as
multipliers of the team performance grades, so that peer evaluation
matters but is not assigned a specific percentage of the overall
grade. (For details, see p. 267 in Team-Based Learning: A
Transformative Use of Small Groups in College Teaching; Michaelsen,
Knight, and Fink, eds., 2004, Stylus Publishing.)
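[Editor's sketch, not part of Paul's message: a minimal illustration of
the multiplier idea. It assumes peer scores are normalized so the team
average is 100; this scaling is an assumption, not the book's exact
formula.]

```python
# Sketch of peer evaluation as a multiplier of the team performance
# grade, rather than a separately weighted component. Assumes peer
# scores average 100 within a team, so a score of 100 leaves the team
# grade unchanged. Names and scaling are illustrative.

def adjusted_team_grade(team_grade, peer_score):
    """Scale an individual's share of the team grade by their peer score."""
    return team_grade * peer_score / 100.0

print(adjusted_team_grade(90.0, 100))  # 90.0 (average contributor)
print(adjusted_team_grade(90.0, 110))  # 99.0 (above-average contributor)
print(adjusted_team_grade(90.0, 90))   # 81.0 (below-average contributor)
```

The design point is that peer evaluation can never be "weighted away":
it always moves the team grade, without competing for a fixed
percentage of the course grade.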

In any case, one definitely needs to limit the weighting assigned to
peer evaluation so as not to diminish the importance of individual and
team performance on the RAP and applications. I would explain to the
class that over-weighting peer evaluation undermines learning as the
chief goal of TBL, so another grade-weighting exercise is in order.
Good luck, PK


Paul G. Koles, MD
Associate Professor, Pathology and Surgery
Wright State University Boonshoft SOM
937-775-2625 (phone)
937-775-2633 (fax)
[log in to unmask]


On Jun 27, 2011, at 6:06 PM, Daniel Williams wrote:

> Hi everybody:
>
> I just went through the grade-weight-setting exercise outlined in
> Appendix C of the TBL book with my class. In previous semesters I
> had trouble getting classes of four teams to come to an agreement on
> grades, so for this semester's nine-team class I used the
> large-class variant. They set their weights individually and then
> entered them into an Excel spreadsheet on my computer, where I had a
> running average for each category set up. The problem is that the
> first team to finish entered this: 10% individual performance, 10%
> team performance, 80% team maintenance. I think these guys then
> persuaded the rest of the class to go along with them, so everybody
> else quickly gave me the same weights. I was a little flabbergasted,
> so I mentioned that this distribution was so crazy that a person
> could be really smart but get dinged a letter grade for being
> overbearing or shy. Fifteen minutes later they had brought the team
> maintenance score down to 66%, but that still sounds really high to
> me. In my experience, team and individual performance are usually
> split more or less 50/50, with team maintenance getting the
> remainder.
>
> I tried to make the peer evaluation system simpler (no forced
> scoring) to minimize problems, and I am worried that is what caused
> the stampede.
>
> Has anybody else run into this crazy result before? I am at a
> complete loss as to what to do about it!
>
> Thanks,
> Dan Williams
