Hi,

Thanks for sharing.

This is very interesting.

Jim

*Jim Sibley*

*I am lucky to be a Board Member for the Vancouver Fringe*

*Ask me about independent theatre in Vancouver...*



Find out more at www.vancouverfringe.com
_______________________________________

Jim Sibley and Amanda Bradley
106-2575 West 4th Ave.
Vancouver, BC
Canada

h 604-564-1043
w 604-822-9241

On Sun, Oct 5, 2014 at 3:04 PM, M Alexander Jurkat <[log in to unmask]>
wrote:

> Hey all,
>
> Over the last couple of years, I've taught an Introduction to Data and
> Databases course. For the first two semesters, I taught face-to-face using
> team-based learning methodology. Over the next two semesters, I revised the
> teaching methodology for hybrid-online and fully online. I was pleasantly
> surprised by how well TBL adapted to the online environment. I'd like to
> share some of my approaches and experiences. I'll start with the Reading
> Assessment Process (RAP).
>
> Background
>
> I am a part-time instructor in the Informatics Department at the
> University at Albany (SUNY). I spent 10 years as a lawyer, 15 more years as
> a game designer, and have been working in process improvement and business
> intelligence in the manufacturing sector for the last 4 years. As you can
> imagine, I'm fairly process oriented -- forgive my obsession with "rules".
>
> Philosophy
>
> In designing any process, the objectives and purposes are the best
> starting point. My thinking about the RAP has evolved over the years. The
> online processes that I developed are hugely dependent on my take on the
> goals of the RAP.
>
> When I started, I viewed the RATs as pure and simple tests. The students
> were to read (or view) the materials, take the tests, and be graded. That
> way I would know if they had done the reading well or poorly, and would
> have something to add to their cumulative grade. I designed questions by focusing
> heavily on the materials. One answer was the "correct" one (often quoted
> directly from the source materials) and the others were not. To make the
> questions more challenging, I spent a fair amount of time devising
> plausible, but incorrect answers. When it came time to review the answers,
> the students simply accepted my "correct" answer or were annoyed at the way
> the other answers were tricky or vague.
>
> I realized I was putting a lot of effort into deceiving (or confusing) the
> students. The better I got at creating plausible but incorrect answers, the
> greater the deception. That didn't sit well with me. I know the material
> far better than they do. The fact that I could deceive one or more of them
> each test accomplished nothing . . . and was downright mean.
>
> I also found that the few questions that I seeded with more than one
> correct answer (in an effort to create "appealable" issues) produced the
> best discussions and the most meaningful appeals. In those cases, the
> "correct" answer, as indicated by the answer key, was merely a starting
> point for a larger discussion. The more questions that had multiple correct
> answers, the more I encouraged the students to "buck the system", discount
> the "correct" answer, stick to their guns, and support their answer.
> Capping the exercise and reinforcing the point, I gave them full credit as
> long as they could give me a reasonable argument for their answer. It was a
> tough road, however. Students are used to seeing tests as
> teacher-controlled exercises with one right answer and a bunch of wrong
> answers.
>
> Fairly quickly, I started eliminating questions with plausible but
> incorrect answers. I started using, as a general course, questions with at
> least two correct answers (or at least two justifiable answers). The more I
> shared appeals presenting alternative answers, drew out explanations,
> balanced them against the "correct" answer, and liberally awarded full
> points to the appealing groups, the more the students realized that RAT
> questions were a starting point for discussion, not a black-and-white
> evaluation of their preparation. The more correct answers I seeded the
> questions with, the more robust the discussion. The RAP became a process to
> engage the students with the materials, not an end-point testing the
> students' mastery of the materials.
>
> In devising multiple correct answer questions, I found myself naturally
> pulling back from the materials. I could cover more material in one
> question if more than one answer was correct. I found it easier to create
> application-, implementation-, synthesis-, or analysis-oriented questions,
> using the materials as a starting point for novel situations. That too
> created more robust discussions. There were fewer and fewer easy questions
> and lots and lots of justifiable answers.
>
> A happy side-effect was that I could draw in the students who did not do
> the reading. As long as they read the question carefully during either the
> iRAT or tRAT, listened to their team members during the tRAT, and
> contributed (as part of the group) to the appeal discussion, they were
> exploring the ideas and could achieve a decent grade.
>
> Looked at as engagement and discussion seeds, the components of the RAP
> needed to be re-weighted. The iRAT is least important. Its primary purpose
> is to introduce the students to the questions. Whether they get the answer
> right is far less important than their review of the possible answers. The
> tRAT is more important, but not by much. It's an opportunity for the students
> to share their ideas, take a stab at a correct answer, and discuss possible
> rationales. The most important aspect of the RAP, by far, is the appeal
> process. That's where the students justify their answers and receive
> feedback.
>
> With a grading structure of 25% for the iRAT and 75% for the tRAT, as
> modified by the appeal rationales, this purpose is reinforced. Also, I make
> at least one appeal mandatory for all teams. This reinforces the notions
> that (1) questions have more than one potential "correct" answer, and (2)
> only if the team probes the alternative answers through the appeal process
> can they benefit from these correct answers. As the semester goes on, more
> and more teams appeal more and more questions. Some teams catch on quickly
> and create an appeal from every question, because . . . you never know.
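>
> As a rough illustration of that weighting (the 25/75 split is the one
> above; the function and variable names are just my own shorthand, nothing
> built into Blackboard), the combined RAP score works out like this:
>
> def rap_score(irat_score, trat_score):
>     # Weight the individual test at 25% and the team test at 75%.
>     # Both scores are assumed to be on the same 0-100 scale, with the
>     # tRAT score already adjusted for any successful appeals.
>     return 0.25 * irat_score + 0.75 * trat_score
>
> # For example, a student who scores 60 on the iRAT whose team earns 90
> # on the tRAT ends up with 0.25 * 60 + 0.75 * 90 = 82.5.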
>
> Process
>
> My process relies on tools available in Blackboard. Frankly, I've never
> used another CMS so I can't say if similar tools exist elsewhere.
>
> First, I create a pool of 10 RAT questions, each with five different
> answers. I use "all of the above" and "none of the above" liberally. I also
> use "some of the above" to further encourage thinking about alternative
> correct answers.
>
> Using that pool of questions, I create the iRAT using the test tool in
> Blackboard. I set the question order to be random and the answer order
> (within that question) to be random. I allow the students to take the iRAT
> as many times as they like with two conditions: (1) they don't know their
> iRAT results until after the tRAT answer sheet (see below) has been
> submitted, and (2) they cannot start the iRAT after a certain deadline. I'm
> perfectly happy to have the students review the test more than once. That
> furthers engagement with the materials.
>
> Here are the iRAT assignment instructions:
>
> "The following test has 10 questions, each worth 10 points. Choose the
> best answer for each.
>
> Make a note of the full text of your answers (or enough of it to remind
> you which one you chose) so you have a record of your choices to reference
> during the tRAT. Noting down just the letter (A., B., C., etc.) of your
> answer will not be sufficient as answers are scrambled for each test.
>
> You have 30 minutes to submit your answers. The test will time out after
> that period of time and auto-submit.
>
> You will not be notified of your score on this test until after submission
> of the tRAT for your team.
>
> You can retake the test as often as you like (prior to the due date), but
> your final iRAT score will be based on your latest submission."
>
> Once the deadline for the iRAT passes, I open up the tRAT assignment in
> Blackboard. The tRAT has two parts: the test and the answer sheet
> assignment. Unlike the iRAT, the tRAT has a set order for the questions and
> a set order for the answers to ease grading. I ask that the students gather
> in some synchronous environment (chat, Skype, Google Hangout, etc.) and
> take the tRAT together. Again, the students can open and run through the
> test as often as they like. No results are provided for the test so repeat
> review is not a problem.
>
> Once the students have had a chance to review and discuss the iRAT, one of
> them submits an answer sheet to me. That sheet lists the questions in order
> with a first, second, and third best answer to each question. The answer
> sheet submission is open to any member of the group, but only one member
> can submit the sheet and it can only be submitted once.
>
> Here are the tRAT assignment instructions:
>
> "Complete this test and the RAT Team Answers assignment at the same time.
> The RAT Team Answers assignment can be found listed in your group area. Do
> the following:
>
> Schedule a time, then gather your team, communicating in person, via
> chat, Google Hangouts, FaceTime, Skype, or another means.
>
> Once your team has gathered, discuss each question and choose the best
> answer for each.
>
> One (or more) team members should take notes on which answers the team
> favors. Pick a first, second, and third choice for each question.
>
> Once your team has decided its 3 choices per question, one person should
> submit the RAT Team Answers assignment, listing the three choices per
> question, as well as the names of the team members who participated in the
> team test (team mates who don’t participate get a 0 on the tRAT). List the
> text of the answers as well as the letter choices, to make sure your grade
> is accurately calculated.
>
> The following test has 10 questions. Getting the correct answer on the
> first choice is worth 10 points; getting the correct answer on the second
> choice is worth 5 points; getting the correct answer on the third choice is
> worth 3 points; getting none of the choices correct is worth 0 points.
>
> You have 40 minutes to submit your answers. The test will time out after
> that period of time and auto-submit.
>
> You can view the test as many times as you like. You can submit your RAT 2
> Team Answers assignment only once.
>
> You will be notified of your score on this test shortly after you submit
> the RAT 2 Team Answers assignment.
>
> Your team's appeal assignment is based on the results of the team RAT (see
> separate RAT Team Appeal assignment)."
>
> I then grade the tRAT answer sheet. This is a relatively quick and easy
> process because the question order is always the same and the correct
> answer key (a, b, c, d, or e) is always the same. If the team gets a
> question "wrong", I provide the correct answer key when I respond to their
> tRAT answer sheet assignment in Blackboard. Because the tRAT answer sheet
> is a team assignment in Blackboard, I can input one grade result and it
> flows down to each member of the team. I then simply have to change the
> grade to 0 for those team members who didn't participate.
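>
> For anyone who wants the arithmetic spelled out, here is a rough sketch of
> that grading step in Python (my own shorthand, not anything built into
> Blackboard; the 10/5/3/0 values come from the tRAT instructions above, and
> I allow a set of accepted answers per question so approved appeals can be
> folded in):
>
> def grade_trat_sheet(answer_key, team_choices):
>     # answer_key: question number -> set of accepted answer letters
>     #             (usually one letter; more after approved appeals)
>     # team_choices: question number -> (first, second, third) choices
>     points_by_rank = [10, 5, 3]  # first, second, third choice
>     total = 0
>     for question, choices in team_choices.items():
>         accepted = answer_key[question]
>         for rank, choice in enumerate(choices):
>             if choice in accepted:
>                 total += points_by_rank[rank]
>                 break  # only the best-ranked correct choice counts
>     return total
>
> # Example: the key for question 1 is "c" and the team ranked it
> # ("a", "c", "d"), so they earn 5 points on that question.
> grade_trat_sheet({1: {"c"}}, {1: ("a", "c", "d")})  # -> 5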
>
> Once I've finished grading the tRAT answer sheets, I open the appeals
> process. Again, the students gather to discuss their answers, create
> rationales for them, and write up the appeal document. As with the answer
> sheet, any team member can submit the appeal document, but only one member
> can submit it, and only once.
>
> Here are the RAT Appeals instructions:
>
> "Once you receive your grade on the RAT Team Answers assignments, you have
> the opportunity to appeal the results. You must discuss and appeal as a
> team. Follow this process:
>
> As a team, discuss any incorrect answers that you believe were as good as
> the correct ones. Discuss either at the same time (as you did for your team
> test) or using your team RAT Appeals discussion forum (appeals discussed on
> the full-class discussion boards will result in 0 points on the appeal).
> You can appeal as many question results as you wish, but must appeal at
> least one question.
>
> Draft and agree on a statement for each appealed result, specifically
> explaining the grounds for your appeal and citing any support from the
> readings or from other sources.
>
> Submit all appeal statements using this assignment. This assignment can be
> submitted only once.
>
> If any of your appeals are approved, you will gain points on both your
> team test and any individual tests that picked the appealed answers
> (instead of the "correct" one). Note that you can appeal a RAT question
> that your team got right on the tRAT but that one or more team members got
> wrong on the iRAT. You just have to get your team to agree to submit the
> RAT Appeal."
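>
> To make that point adjustment concrete, approving an appeal amounts to
> adding the appealed answer to the set of accepted answers for that
> question and re-scoring the affected team and individual tests against
> the expanded key (a tiny sketch with made-up names, continuing the
> illustration above):
>
> def approve_appeal(answer_key, question, appealed_answer):
>     # After a successful appeal, the appealed answer counts as correct
>     # alongside the original key for both the tRAT and any iRATs that
>     # picked it.
>     answer_key.setdefault(question, set()).add(appealed_answer)
>     return answer_key
>
> # Example: question 4's key was {"b"}; a granted appeal for "d" makes it
> # {"b", "d"}, and the answer sheet and affected iRATs are regraded.
> approve_appeal({4: {"b"}}, 4, "d")  # -> {4: {"b", "d"}}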
>
> In my last class, I responded directly to the groups on their appeals,
> replying to any questions or points made through Blackboard. A better
> method would be to set up a discussion forum for the entire class
> labeled RAT Appeals. I would create a new thread in that discussion area
> for each RAP. In that thread, I would present a long entry setting out each
> question and its answers, the various appeals taken from that question, my
> response to the appeal arguments, and then a grant or rejection of the appeal.
> Students could review that thread to discover which appeals were made, how
> they were argued, and which of those were granted and which were denied.
> Also, students could reply to the thread, furthering the discussion if they
> like.
>
> Finally, at the end of the semester, I create a fifth, final RAP which has
> only an individual test. That test is made up of a random assortment of the
> questions from the prior four RAPs, in a random order with the answers
> randomized. That encourages the students to re-engage with all their prior
> RATs at the end of the semester.
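>
> Conceptually, assembling that final test looks something like the sketch
> below (made-up names and a made-up question count of 10; in practice I
> lean on Blackboard's own randomization settings rather than any script):
>
> import random
>
> def build_final_rat(prior_rap_pools, num_questions=10):
>     # Pool every question from the prior RAPs, draw a random assortment
>     # (random.sample also randomizes the order), then shuffle each
>     # question's answers. Each question is assumed to be a dict with an
>     # "answers" list.
>     all_questions = [q for pool in prior_rap_pools for q in pool]
>     selected = random.sample(all_questions, num_questions)
>     for question in selected:
>         random.shuffle(question["answers"])
>     return selected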
>
> One result I found occurring quickly in the RAP process: the students
> would skip the simultaneous gathering portion and simply exchange their
> answers and rationales asynchronously via email or IM. That does undermine
> the give and take of the group discussion, but I decided that, if that's
> how the group wanted to handle their work, that's fine. They are still
> engaging with the materials.
>
> Another repercussion (not unique to this process) was that some students
> contributed more and some less, particularly if they dropped into an
> asynchronous communication pattern. So be it. Absent an in-class
> environment, I can't control how much they participate in their learning.
> Even with in-class activities, a student can always mail it in or sleep
> through it.
>
> The solution to that problem is not a better RAP. The solution to this and
> all other group contribution issues is the peer evaluations. As long as
> peer evaluations are worth a large portion of their grades (20-25%), and
> are conducted regularly throughout the semester so the non-participants
> have notice, active group participation is incentivized nicely. I'll
> discuss more about that later.
>
> I appreciate your patience as I rambled on. Hopefully this is of some use
> to some of you,
>
> M Alexander Jurkat
>
> INF 202 Team Lead
>
> Informatics Department
> University at Albany
>