TEAMLEARNING-L Archives

Team-Based Learning

TEAMLEARNING-L@LISTS.UBC.CA

Subject: Re: Peer Evaluations
From: Molly Espey <[log in to unmask]>
Date: Mon, 15 Oct 2007 21:09:53 -0400
Content-Type: text/plain

Sandy,

For those whose grades are brought down by the group work itself, and not
just by peer evaluations, I would argue that those individuals need to
learn to speak up.  Typically, students who do better on IRATs or on
individual assignments than their group does are those who either lack
confidence in their answers, cannot convince their teammates by clearly
explaining why they are correct, or simply hold back for some reason,
perhaps because they do not value the team's accomplishments.

If they enter a profession where they must work in teams, none of these
habits will lead to success.  Better to learn to have confidence, to make
convincing arguments, and to value the work of others in the classroom
than to be fired for lacking those abilities in the workplace!

Regarding the peer evaluations again:  I require the students to
differentiate scores, a la Larry Michaelsen.  When I said that in
well-functioning teams everyone gets about 10, I meant that scores tend to
average 10 for everyone.  I'm certain I've even had groups collude to
give, say, three 10s, a 9 to the person on their left, and an 11 to the
person on their right, so that it truly all averages 10.  I also ask
students to justify their evaluations, and that often reveals a lot about
the team dynamics.  I make it clear as well that I reserve the right to
include my own subjective evaluation in the process.  It rarely changes
outcomes, but it does let me deal with situations like the one someone
else mentioned, where a group colludes to mark down one of its more
productive members, or where a single person marks someone down
significantly while everyone else gives that person a 10.  Obviously
there are personal issues that should not be allowed to affect the grade.
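
To make the arithmetic concrete, here is a small Python sketch of the
percentage-style weighting described above.  It is only my own illustration
under stated assumptions (peer ratings of teammates average 10, and the
group portion of an individual's grade is the team's score scaled by the
average rating that person received divided by 10), not a prescribed
implementation.

# Illustrative sketch only: assumes peer ratings average 10 per teammate
# and that the group portion of the grade is scaled by
# (average rating received / 10).

def peer_weight(ratings_received):
    """Average peer rating received, relative to the 10-point baseline."""
    return sum(ratings_received) / len(ratings_received) / 10.0

def individual_group_grade(group_score, ratings_received):
    """Group portion of an individual's grade after peer weighting."""
    return group_score * peer_weight(ratings_received)

# A well-functioning team: everyone averages close to 10, so everyone
# essentially keeps the group's score.
print(individual_group_grade(88.0, [10, 10, 10, 10, 10]))  # 88.0

# The collusion pattern above (three 10s, a 9, an 11) still averages 10,
# so the differentiation requirement is met but nothing actually changes.
print(individual_group_grade(88.0, [10, 10, 10, 9, 11]))   # 88.0

# Genuinely differentiated evaluations move the group portion up or down.
print(individual_group_grade(88.0, [11, 12, 11, 10, 11]))  # 96.8
print(individual_group_grade(88.0, [9, 8, 9, 10, 9]))      # 79.2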

Molly



> Re: Peer Evaluations
>
> Thank you all for your comments.  I find it very interesting that those
> who use the percentage/weightage method have not mentioned much pushback
> from the class about "lowering" some people's scores at the expense of
> others.  Just to let everyone know, we outlined the peer evaluation
> process at the very beginning.  We gave them an example to work out (sort
> of a TBL exercise) on grade setting to see the impact both of the varying
> percentages of IRAT, GRAT & application scores and of the peer
> evaluation.  We also did a "trial" run of the peer evaluation for
> feedback purposes.  That is where we are now, and the comments have been:
>
> *         I didn't realize that giving more points to one person meant
> others get less.
>
> *         I didn't realize the percentage impact.  For example, one
> group's totals resulted in one individual getting 108% and another 92% -
> from the students' perspective, one student's grade jumped by 8% while
> another's dropped by 8%.  They really wanted to acknowledge one person,
> but they did not want to penalize anyone that much.
>
> *         Why must we penalize one in order to praise another?  Can't we
> just give points, not take away points?
> What I cannot seem to convey to them is that their group scores are not
> theirs until they are moderated by each member's contribution to the
> team.  They feel they have earned all the group points, and that the
> moderation is "subtracting" from what is rightfully theirs.
> On another note, in our first course experience there actually have been
> a couple of students whose final grades were in fact brought down, ever
> so slightly, by the group process.  This is mainly a function of some
> group processing issues they still have to work out.  Those individuals
> actually did better on the IRATs than their group did on the GRAT; the
> group's explanation was usually that the one person had "guessed" right
> but was not confident enough to convince the rest of the group that
> their answer was correct, so the group went with majority rule.  Plus,
> as we are new to developing our applications, some application scores
> are lower than the individuals' general averages on tests and IRATs -
> the difference is usually minuscule, less than 1 percentage point, but
> it has had an impact.
> Telling them that life is full of zero-sum experiences, and that this is
> an important lesson to learn and to work with, is not going over well.
> Again, thanks for your input.
> **************************************************
> Sandy Cook, PhD
> Associate Dean for Curriculum Development
> Duke/NUS Graduate School of Medicine
>
> From: Team Learning Discussion List on behalf of Molly Espey
> Sent: Tue 10/16/2007 5:55 AM
> To: [log in to unmask]
> Subject: Re: Peer Evaluations
>
>
> I use the peer evaluations to weight the group portion of the grade.
> Then, I don't have to set some base for what "average" means - average
> means the group's score on activities and RATs throughout the semester.
> This rewards groups that work well together and produce better work more
> than groups that don't.
>
> Some groups do quite well with nearly equal contributions from all
> members.  Then they are all equally rewarded by doing well as a group, as
> I find that groups that really work well together end up with everyone
> averaging very close to 10.
>
> I also do the peer evaluations at least once in the middle of the
> semester, as well as at the end, but the midsemester evaluation doesn't
> count toward the final grade.  It does, however, allow me to give feedback
> to students.  I remind them that mathematically there are two ways to
> improve the group portion of their grade (which again is weighted for
> individuals by the peer evaluations, with some subjective consideration on
> my part):  increase the group's scores or increase their own peer
> evaluation scores.  Usually, efforts to improve one's own peer evaluation
> score will also increase the group's grades.
>
> I've only had one student complain to me directly, upset that the group
> activities were bringing her grade down.  Mathematically, however, she
> was wrong.
>
> Molly Espey
> Applied Economics and Statistics
> Clemson University
>
>
>
>> There was a great thread about Peer evaluation in January, which was
>> informative, but truthfully, I did not appreciate the discussion at the
>> time.
>>
>>
>>
>> We have just completed our first peer evaluation process and I have some
>> questions.  We believe in the peer evaluation process and will not
>> abandon it, but there have been some issues.
>>
>>
>>
>> In the TBL book there are two forms of peer evaluation described
>> (percentage and maintenance).  Several pros and cons are listed for
>> each, mostly ending on a positive note about the learning value.  Of
>> the two methods described, selfishly I chose the percentage one because
>> it made more sense to me and was easier to calculate.  The students,
>> however, are incensed (well, maybe that is too strong a word - upset)
>> that it is a zero-sum game.  They don't mind giving points to those who
>> contribute, but they do not want to take points away from those who
>> contribute less.
>>
>>
>> *         How do you rationalize the zero-sum concept?
>>
>> *         How does one explain the value of moderating the scores?
>> Maybe it is a cultural thing - being nice, but the idea of taking away
>> something they believe they have earned is painful. How do you tell them
>> that they have not really earned the group scores unless they
>> participate in the group?
>>
>> *         When the group size results in shares that do not divide
>> evenly, they want to give the team equal marks but can't.  For example,
>> in a team of 7, with 6 ratings each, they can only give 16.7 and 16.6 -
>> someone will be a bit higher and someone a bit lower.
>>
>>
>>
>> Using the maintenance method might solve the logical problem by making
>> the peer assessment an added component of the grade rather than a
>> subtraction (on the surface).  If I were to switch to that method:
>>
>> *         How do you decide what percentage of the final grade the peer
>> assessment should be?
>>
>> *         Is it really any different, or does it just appear that way
>> to the students because they see it as adding rather than subtracting?
>>
>> *         How do faculty feel about inflating grades by making a
>> portion of the grade depend solely on peer points?
>>
>> *         Will I exchange a fight with students for a fight with
>> faculty?
>>
>>
>>
>> This is quite a contentious topic, and I can see why people give up on
>> it or move toward feedback rather than grade moderation, but we really
>> feel that it is important to keep, so any advice on how to deal with
>> students' anxiety is most welcome.
>>
>>
>>
>> Sandy
>>
>>
>>
>>
>>
>>
>>
>> ***************************************
>>
>>
>>
>> Sandy COOK, PhD | Associate Dean, Curriculum Development | Duke-NUS
>> Graduate Medical School Singapore | W: (65) 6516 8722| F: (65) 6227 2698
>> |
>>
>>
>>
>>
>>
>>
>
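
As a purely illustrative footnote to Sandy's numbers above (the 108%/92%
split and the 16.7/16.6 rounding in a team of seven), here is a small
Python sketch of the zero-sum percentage method as I understand it.  The
team sizes and average point totals below are hypothetical, chosen only to
reproduce those figures; they are not taken from her data.

# Illustrative sketch of the zero-sum percentage method: each rater divides
# 100 points among their teammates (not themselves), so raising one
# member's share necessarily lowers another's.

def equal_share(team_size):
    """Points each teammate would receive under a perfectly even split."""
    return 100.0 / (team_size - 1)

def multiplier(avg_points_received, team_size):
    """A member's grade multiplier relative to the even split."""
    return avg_points_received / equal_share(team_size)

# Hypothetical team of 5 (even split = 25): average receipts of 27 and 23
# points reproduce the 108% / 92% outcome described above.
print(round(100 * multiplier(27.0, 5)))   # 108
print(round(100 * multiplier(23.0, 5)))   # 92

# Team of 7: each rater has 6 teammates, so an even split is 100/6 = 16.67,
# which cannot be entered in tenths for everyone - hence 16.7 and 16.6.
print(round(equal_share(7), 2))           # 16.67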
