TEAMLEARNING-L Archives

Team-Based Learning

TEAMLEARNING-L@LISTS.UBC.CA

Subject:
From:
Brian Robert Dzwonek <[log in to unmask]>
Reply To:
Brian Robert Dzwonek <[log in to unmask]>
Date:
Wed, 14 Jul 2010 10:37:36 +0800
Content-Type:
text/plain
Parts/Attachments:
text/plain (144 lines)
Jennifer,

When writing multiple choice questions (MCQs) for the first year courses, we recommend following the item-writing guidelines outlined in the National Board of Medical Examiners (NBME) publication Constructing Written Test Questions for the Basic and Clinical Sciences. Since the NBME is the organization responsible for the medical board exams, our goal is to follow their guidelines so that our questions approximate the types of questions our students will see when they take those exams.

We spend time reviewing multiple choice questions with our faculty to make sure that the questions are not only well written but also set at an appropriate level. The primary objective of the review is to address the two basic criteria set forth by the NBME for writing MCQs: first, the question must address important content, and second, it must avoid irrelevant difficulty. We attempt to satisfy the first criterion by ensuring that the questions map to the content material provided by Duke or the materials recommended by our faculty. In other words, the questions must assess the material we are asking our students to review. We attempt to satisfy the second criterion by reviewing and rewriting questions that contain elements the NBME recommends against, and by reviewing all questions after testing with statistical item analysis, including the point-biserial correlation.

As you know, the point-biserial correlation is the correlation between the right/wrong score students receive on a given item and the total score they receive on the remaining items. It is a special case of correlation between a dichotomous variable (the item score, which is right or wrong: 0 or 1) and a continuous variable (the total score, ranging from 0 to the number of multiple-choice items on the test). As with all correlations, point-biserial values range from -1.0 to +1.0.
  
A large positive point-biserial value indicates that students with high overall test scores tend to get the item correct (as would be expected) and that students with low overall scores tend to get the item wrong (again as would be expected). A negative point-biserial implies that students who get the item correct tend to do poorly on the test overall (an anomaly) and that students who get the item wrong tend to do well (also an anomaly).
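The rest-score correlation described above can be sketched in a few lines of Python. This is an illustrative sketch, not any official item-analysis tool (in practice a package such as SciPy's pointbiserialr would be used); the data at the bottom are invented to show the expected sign.

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

def point_biserial(item_scores, total_scores):
    """Correlate each student's 0/1 score on one item with their score
    on the rest of the test (total minus that item), as described above."""
    rest = [t - i for i, t in zip(item_scores, total_scores)]
    return pearson(item_scores, rest)

# Six hypothetical students: the first three answered this item correctly
# and also scored well overall, so the item should discriminate positively.
item = [1, 1, 1, 0, 0, 0]
totals = [10, 9, 8, 3, 2, 1]
print(round(point_biserial(item, totals), 2))
```

Because the point-biserial is just a Pearson correlation with a 0/1 variable, flipping the item scores of the same students would produce the anomalous negative value described above.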

Look for a point-biserial correlation in the range of .3-.6 as an indicator of a well-written question that also performed well. There are a number of variables associated with how a question performs (the NBME points to many), but consider this range a good first step for flagging questions that may need revision. Also note the distribution, by percentage, of students selecting each response. These two indicators help us determine whether the question was clear and written at an appropriate level. They can also identify items that passed through the question review process but failed to perform well for reasons that may include:

-	Items that were poorly written, confusing students as they selected a response
-	Graphics, tables, and diagrams that are confusing or misleading
-	Items without a single clearly correct response, causing test takers to mistakenly select a distractor
-	Items whose distractors are so obviously wrong that students can select the correct response by elimination
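These checks can be combined into a small post-test review pass. The .3-.6 band comes from the discussion above; the 2% floor for calling a distractor "non-functioning" is my own illustrative threshold, not an NBME figure, and the option letters are hypothetical.

```python
def review_flags(r_pb, responses, key, band=(0.3, 0.6), floor=0.02):
    """Return a list of reasons an item should be re-reviewed.

    r_pb      -- the item's point-biserial correlation
    responses -- fraction of students choosing each option, e.g. {'A': 0.4, ...}
    key       -- the keyed (correct) option
    The .3-.6 band follows the guidance above; the 2% floor is an
    assumed cutoff for an 'obviously wrong' distractor.
    """
    flags = []
    if not band[0] <= r_pb <= band[1]:
        flags.append(f"point-biserial {r_pb:.2f} outside {band[0]}-{band[1]} band")
    if any(opt != key and pct > responses[key] for opt, pct in responses.items()):
        flags.append("a distractor drew more responses than the key "
                     "(possibly no clear correct response)")
    if any(opt != key and pct < floor for opt, pct in responses.items()):
        flags.append("near-zero distractor (obviously wrong option)")
    return flags

# A troubled hypothetical item: weak discrimination, a distractor
# outdrawing the key, and one option nobody chose.
print(review_flags(0.12, {'A': 0.4, 'B': 0.5, 'C': 0.1, 'D': 0.0}, key='A'))
```

An item inside the band whose distractors each attract a few percent of responses would come back with an empty flag list.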

Hope this helps...

Brian


Brian Dzwonek, EdD | Deputy Director, Medical Education Research & Evaluation Department| Duke-NUS Graduate Medical School | 8 College Road, Singapore 169857 | Tel: +65 6516 8067 | Fax: +65 6227 2698|  Email: [log in to unmask] |  Web: www.duke-nus.edu.sg



-----Original Message-----
From: Team-Based Learning [mailto:[log in to unmask]] On Behalf Of Jennifer Imazeki
Sent: Wednesday, July 14, 2010 10:06 AM
To: [log in to unmask]
Subject: how basic should RATs be? (was MC vs open-ended applications?)

Thanks Sandy - I do plan to have the RATs all as multiple-choice and
my question was really about applications. But you raise another point
that I've been wondering about as well and that is about discussion
after RATs. As you point out, the RATs need to have one 'right' answer
and my understanding is that they should be relatively basic - since
the RATs are covering material that we won't have discussed yet in
class, it seems like I shouldn't expect students to be able to DO much
with the material yet and instead just test definitions and check
student understanding of whatever readings they were supposed to do
beforehand. But I've also seen statements from people about not making
the RATs 'too easy'. So I guess I'm a bit confused about what level to
aim for with the RATs and how much I should expect the post-RAT,
pre-applications discussion to be real discussion versus me
'lecturing' to clarify misunderstandings about the basic concepts. Am
I understanding the role of the RATs correctly?

thanks,
Jennifer

On Tue, Jul 13, 2010 at 6:31 PM, Sandy Cook <[log in to unmask]> wrote:
> Perhaps I'm missing something, but in the original query - I think there might have been a bit of confusion between application vs IRAT-GRATS.  Just wanted to make sure some of those distinctions were clear to people.
>
> If you are using the IF-AT for the GRATs, then they have to be MCQ questions.  Not necessarily with only one right answer, but certainly with one best/better answer.  Now the learning that comes from the discussions is rich when you follow up with many of the suggestions, and certainly a team appeal can argue its way into more points for an alternative position on the right answer.
>
> For applications, however, they don't have to be MCQ, and there are lots of strategies people have used, from diagrams to short answers, etc.  It's good to have some in MCQ format - and sometimes with close answers - from which students must choose, debate, and provide explanation - for, as someone mentioned, at some point you have to make a choice.
>
> One way that we have used to permit short answers, but allow for simultaneous reporting (without people seeing everyone's answers), is to have teams write them on small pieces of paper (which limits response length too) and paste them on a single sheet (we only have 8 teams - you might need 2 pages if more teams), then project the answers on a visualizer.  That way all 8 teams' responses are shown at the same time, but you don't have to write them on paper or white boards for everyone to see.
>
> Sandy
>
> ********************************************************
> Sandy COOK, PhD | Senior Associate Dean, Curriculum Development |
> Medical Education, Research, and Evaluation (MERE) |
> Duke-NUS Graduate Medical School Singapore | Khoo Teck Puat Building | 8 College Road Singapore |169857 |
> W: (65) 6516 8722| F: (65) 6227 2698 |
> email: [log in to unmask] | web:  http://www.duke-nus.edu.sg;
>
> Administrative Executive: Belinda Yeo | [log in to unmask] | 6516-8511
>
> Important:  This email is confidential and may be privileged.  If you are not the intended recipient, please delete it and notify us immediately; you should not copy or use it for any purpose, nor disclose its contents to any other person.  Thank you.
>
>
>
>
> -----Original Message-----
> From: Team-Based Learning [mailto:[log in to unmask]] On Behalf Of Sweet, Michael S
> Sent: Tuesday, July 13, 2010 11:21 PM
> To: [log in to unmask]
> Subject: Re: MC vs open-ended applications?
>
>>then I will ask for the voices of the "minority opinions." I am rarely disappointed. . .
>>
>
> Much of being a good TBL facilitator is being able to playfully "pick fights" between teams.
>
> Ruth is a natural. ;-)
>
> -M
>
>
> -----Original Message-----
> From: Team-Based Learning [mailto:[log in to unmask]] On Behalf Of Levine, Ruth
> Sent: Tuesday, July 13, 2010 10:16 AM
> To: [log in to unmask]
> Subject: Re: MC vs open-ended applications?
>
> Even when all of the teams choose one answer, there is almost always a "minority opinion" within each team where one or more of the students liked an alternative choice. What I like to do when everyone picks the same answer is I will ask one or more of the teams to justify the choice, then I will ask for the voices of the "minority opinions." I am rarely disappointed--the students who argued and "lost" in their teams are frequently happy to be able to justify their choices to the larger group. Sometimes, students will ALL pick the wrong answer--these are some of my favorite scenarios--and a rare student will then have the chance to describe why they alone chose the correct answer.
>
> I will also ask the students to justify their decision to avoid the options that they did not choose. Many times I will ask them to go through every option. They cannot get away with just picking one choice and then moving on.
> Ruth
>
> -----Original Message-----
> From: Team-Based Learning [mailto:[log in to unmask]] On Behalf Of Jennifer Imazeki
> Sent: Tuesday, July 13, 2010 9:59 AM
> To: [log in to unmask]
> Subject: MC vs open-ended applications?
>
> Hi all,
>
> I will be trying TBL for the first time in the fall and am working
> through lots of issues. One has to do with the structure, and
> reporting, of the team application exercises. My students will be
> using clickers for the IRATs (though probably the IF-AT forms for the
> GRATs), and I was planning to start many of the team exercises with an
> individual clicker question, to get students thinking about the issue
> on their own before turning to the group. But I'm a little worried
> about having all the team exercises set up as multiple-choice
> questions because I wonder how that will impact the ensuing
> discussion. For example, I can imagine a scenario where the majority
> of teams selects one of the responses; even if a team that chooses a
> different response has a good reason for selecting that, the other
> students may just think they are 'right' because they are with the
> majority, and not really engage in the discussion. Of course, with
> many of the applications, there is not necessarily a right answer so
> the key will be in their reasoning but still, I wonder if having
> multiple-choice options will create an 'illusion' in students' minds
> that there are right and wrong responses. I've thought about giving
> them whiteboards instead and having them write a short response but
> then I'm worried that, given the size of the room (70 students), not
> everyone will be able to see what everyone else has written.
>
> Any thoughts, experiences, advice?
>
> Jennifer
> ****************************
> Jennifer Imazeki
> Department of Economics
> San Diego State University
> homepage: http://www-rohan.sdsu.edu/~jimazeki/
> Economics for Teachers blog: http://economicsforteachers.blogspot.com
>
