Just to add to this debate: we were initially thinking of our assessment as 30% RATs (iRATs weighted at 10% and tRATs at 15%) plus an exam, taken two weeks after the end of the course, worth 70%. The exam would focus on bringing together elements covered in the course. We are now having a major rethink, however, and are considering using only the RATs as the method of assessment (there are four units in the course). Our reasoning is that the learning outcomes will already have been achieved through the RATs and application activities, and as yet we are not convinced that the exam would add anything of significant value.
I would add that this is taking place as part of a review of the amount of assessment our students undertake in non-TBL courses (all except our EBP course) throughout each academic year. I would welcome others' views on this.
Best wishes
Jenny
Dr Jenny Morris
Associate Professor (Senior Lecturer) Health Studies
Faculty of Health, Education & Society
University of Plymouth
Knowledge Spa
Truro
Cornwall TR1 3H
On 30 Jun 2012, at 12:43, "Carson, Ron" <[log in to unmask]> wrote:
The remaining 58% of the grade is based on individual work, consisting of: a lab practical; an individual impairment assessment; peer review; and professional behavior. The practical and impairment assessment together represent 200 points.
________________________________
From: Team-Based Learning [[log in to unmask]] On Behalf Of Sandy Cook [[log in to unmask]]
Sent: Friday, June 29, 2012 5:46 PM
To: [log in to unmask]
Subject: Re: Evaluating Mastery
I would like to add to Len's comments. Students do view the weights you put on things as a signal of how much you value them, so that does need to be considered (as well as communicated). What is not completely clear to me is whether there are other individual assessments besides iRATs, and how much of the total grade is made up of individual work versus team work. That tells part of the story on whether the iRAT weighting is too high or too low.
At our school, we decided that individual and team work should be equally valued, so we try to set the weights such that 50% of the total grade comes from individual components (iRATs, tests, other individual assignments) and 50% from team components. We also have some courses where the iRAT is the only individual assessment (so iRATs may count for 50% of the grade), and others that have additional individual summative exams, in which case the iRATs share the individual portion with those exams. For example, three exams might count for 10% each (30% total) and the iRATs for 20%. It depends on the course.
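This 50/50 weighting is easy to sanity-check with a quick back-of-envelope sketch. The component scores and the exact split below are invented for illustration; only the "three exams at 10% each, iRATs at 20%" example comes from the thread.

```python
# Minimal sketch of a weighted-grade calculation under a 50% individual /
# 50% team split. All scores are hypothetical.

def weighted_grade(components):
    """components: list of (score_percent, weight) pairs; weights sum to 1."""
    total_weight = sum(w for _, w in components)
    assert abs(total_weight - 1.0) < 1e-9, "weights must sum to 100%"
    return sum(score * w for score, w in components)

components = [
    (80, 0.10), (75, 0.10), (85, 0.10),  # three individual exams, 10% each
    (70, 0.20),                          # iRAT average, 20%
    (90, 0.30), (88, 0.20),              # team components (tRATs, applications)
]
print(round(weighted_grade(components), 1))  # prints 82.6
```

The assertion guards against the common mistake of weights that quietly fail to sum to 100%.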
We also debated whether summative exams are needed at all. There are those who argue that summative exams push students into worrying about grades rather than learning. Others say that summative exams provide an additional opportunity for consolidation, recall, and validation of what students have learned (if developed correctly). My view has been that if the exams are run a bit like a RAT, whereby students also do a tRAT and discuss the answers (so further learning occurs), we get additional benefit.
The other challenge (and I'm not saying I have an answer here) is deciding which part of the learning process matters most: how well students prepare (which is reflected in the iRATs), or how well they learn after that (through the team work, and possibly other assignments that help consolidate and permit reflection on how far they have come since the iRAT)?
(And I agree – most of our courses are not designed to achieve mastery. Daniel Pink's book Drive suggests that you need at least 10,000 hours of work/exposure/practice to achieve mastery. So even if a student spent 20 hours a week, over two 20-week semesters a year, for four years, focused on a particular topic, they would accumulate only about 3,200 hours in a four-year undergraduate/medical school program – not quite mastery. Basic skills, perhaps.)
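The hours arithmetic above can be checked in a couple of lines (the 20 hrs/week and two 20-week semesters are Sandy's figures, not assumptions of mine):

```python
# 20 hrs/week, over two 20-week semesters per year, for 4 years.
hours = 20 * (20 * 2) * 4
print(hours)            # total focused hours accumulated
print(hours >= 10_000)  # does it reach the 10,000-hour mastery bar?
```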
Sandy
Sandy Cook, PhD,
Assoc. Prof.
Senior Associate Dean
W: (65) 6516 8722
Administrative Executive: Belinda Yeo | [log in to unmask] | 6516-8511
From: Team-Based Learning [mailto:[log in to unmask]] On Behalf Of Leonard White, Ph.D.
Sent: Friday, June 29, 2012 10:37 PM
To: [log in to unmask]
Subject: Re: Evaluating Mastery
Ron (and others),
This is a difficult challenge, and I still struggle with it. Here's what I shared with a listserv colleague yesterday on this topic. I hope this helps someone.
Len
>>When I started, I assumed that exams would not be necessary – after all, I proposed an assessment strategy with some 400 questions across the semester. The real power of TBL is that every question, and every response choice to every question, gets discussed in real time by every learner (I know of no other approach that achieves this outcome!). So what good, I thought, is another bolus of 50 or so questions? What I realized, however, is that I had not thought carefully through the principles that define the assessment mechanisms; i.e., how one form of assessment should be differentiated from another.
What became very clear to me (the hard way) is that "readiness assurance" must be about "readiness" and nothing more – except perhaps to encourage learners down the path I want them to follow as their discussions unfold and they prepare for application. My concept of "exam" then became much more strategic, focused on the issue of "milestone competency". (I'm trying to avoid the concept of "mastery" with my learners, as I do not aim to produce "masters" in any sense of the term – although I received the same "mastery on RA" complaint that Ron did.) If by "milestone competency" I mean the proficiency to mobilize and apply foundational knowledge at the conclusion of a unit of study, then I do not expect a learner to enter the classroom each day prepared to be assessed on that competency. So in my courses, exams have been that "milestone competency" assessment, building upon prior assessments (readiness assurances and application experiences) without replicating those prior experiences (that, too, would be a conflation of purpose).
I should add that I still work very hard to get this right, including communicating frequently to my learners that RAs are not about "mastery" (admittedly, sometimes thinking to myself: "You have no idea what mastery of this topic – neuroscience – means!").
Leonard E. White, PhD
Associate Professor
Department of Community and Family Medicine
Doctor of Physical Therapy Division
Department of Neurobiology
Director of Education
Duke Institute for Brain Sciences
Associate Director for Undergraduate Studies in Neuroscience
Trinity College of Arts and Sciences
From: Team-Based Learning [mailto:[log in to unmask]] On Behalf Of Carson, Ron
Sent: Friday, June 29, 2012 10:26 AM
To: [log in to unmask]
Subject: Evaluating Mastery
During a recent mid-term review, a student expressed the following opinion: “Students are concerned about grades while teachers are concerned about mastery of material. If this is true, why place so much emphasis on iRATS, which do not measure mastery?”
My course is set up so that 37% of the grade is derived from iRATs, 5% from tRATs, and the remaining percentage from various topics, tests, and assignments.
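For what it's worth, that split can be sanity-checked in a line or two. The 37% and 5% figures are from this message; the check simply confirms the individual remainder mentioned elsewhere in the thread.

```python
# Stated weights: 37% iRATs, 5% tRATs; the rest comes from
# other tests, topics, and assignments.
irat, trat = 37, 5
remainder = 100 - irat - trat
print(remainder)  # prints 58
```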
I specifically set up the course with an increased emphasis on iRATS because I want students to study the book, which is of exceptional quality. However, I now wonder if my original philosophy is misguided.
I appreciate feedback.
Thanks,
Ron Carson
--
Ron Carson MHS, OT
Assistant Professor
Occupational Therapy Department
Florida Hospital College of Health Sciences
671 Winyah Drive
Orlando, FL 32803
407.303.9182