Hey all,

I'm back. Later than I had hoped, but these things happen. This time, I'd like to talk about online group discussions.

Philosophy

You may recall that I have moved around in my career, starting my current information management life only five years ago. That career started with graduate school. There I encountered online academic discussions for the first time, both in hybrid courses and in one fully online course (my prior schooling predated online academics, although I was fully familiar with online forums).

In two words: they sucked. The general assignment pattern was (1) respond to instructor's question, and (2) respond to a responding student. The theory was that students would read through all the posts, get various views on the question, and then share their thoughts. The first posters were the good students and they covered the subject fairly well. The remaining students gave stock, vague, or otherwise worthless answers. The responses to the responders were even more uneven. Lots of make-work in my view. 

As a student, I would post early. A couple days later, I would read (maybe) a half dozen other responses and try to find one that I could fashion a non-trivial response to. I tried to pick one that I could disagree with (I was an attorney in another life, after all). Whenever I decided to look a bit deeper, I was disappointed. It was like reading forums on the internet -- lots of noise/chaff.

I decided to use the discussion forums to handle what I used to fill face-to-face (FTF) classes with -- module exercises. I wanted to limit the discussion threads/posts that any one student had to read, and make the ones they did read much more meaningful.

But first, I need to explain a bit about my FTF class module activity approach.

Class Activities

I discussed class activities pretty thoroughly a year or so ago in my blog Adventures in Learning (http://ccistudentcenterblog.wordpress.com/tag/adventuresinlearning/). The bulk of the review is contained in Parts 9-13 (with a brief tangent in Part 11), for those who want the gory details. 

In sum, I find two activity approaches work in my more-practical-than-theoretical field: directed, sharp decision points and brainstorming. The former creates a clean conflict among the answers allowing for robust discussion. The latter muddles that conflict and thus undermines discussion, but can provide a useful set-up for decision-point questions (investing the students in the exercise through their own creative inputs).

Before I get to an example of the decision-point question, allow me to bore you with some background information. Databases aren't much use unless you can get information into and out of them. Data administrators have devised a lovely acronym to cover the tasks that any database must be able to perform -- CRUD -- Create (input), Read (query), Update (modify), and Delete. The vehicle by which CRUD operations take place in the most common type of database (relational) is Structured Query Language, or SQL. The vast majority of database users never move beyond Read (querying), and that's where I spend the bulk of my course time. Still, CUD (create, update, delete) deserves some coverage. So here's one of the early question/activities in this section of the course (also useful as a multi-correct-answer RAT question, although a bit limited for that):

 

Staff

staffNo  sName        position    salary  branchNo  bAddress
SL21     John White   Manager     30000   B005      22 Deer Rd., London
SG37     Ann Beech    Assistant   12000   B003      163 Main St., Glasgow
SG14     David Ford   Supervisor  18000   B003      163 Main St., Glasgow
SA9      Mary Howe    Assistant   9000    B007      16 Argyll St., Aberdeen
SG5      Susan Brand  Manager     24000   B003      163 Main St., Glasgow
SL41     Julie Lee    Assistant   9000    B005      22 Deer Rd., London


Question 2: Which SQL statement adds one employee record to the table above?
A. INSERT INTO Staff VALUES ("SA4", "Alan", "Brown", "Assistant", 8300, "B007", "16 Argyll St., Aberdeen");
B. INSERT INTO Staff (staffNo, sName, branchNo) VALUES ("SA4", "Alan Brown", "B007");
C. INSERT INTO Staff VALUES ("SA4", "Alan Brown", "Assistant", NULL, "B007", "16 Argyl Rd., Aberdeen");

The above question/activity presents a database table, then asks which statement inserts one record. First off, you'll notice that SQL is sort of like English. Even someone who has no background in it can make some guesses about what it does. That means the students who didn't do the reading can still participate in class (or online in the activity). Further, because it's an application exercise, those who have done the reading aren't home free. They still have to apply their new-found knowledge in a novel context.

To the answers: Answer A will return an error message -- the employee name is split into two values, which would land in two different fields, leaving one extra value (the quoted address) at the end with no column to go to. Answer B will not create an error. It also does not completely fill a row of data (leaving NULL in three of the table's fields). Answer C will run -- every column receives a value in the correct order (with salary explicitly set to NULL). However, C has a problem -- "Argyl" is spelled wrong (or at least it's not spelled the same as the "Argyll" already in the table). No error will occur, but the data in the table will be inconsistent. That problem arises because this table is not "normalized" . . . thus leading into another lesson.
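For anyone who wants to poke at the three answers themselves, here's a rough sandbox sketch (mine, not part of the course materials) using Python's sqlite3 module. One caveat: standard SQL uses single quotes for string literals, so I've swapped the double quotes from the question.

```python
import sqlite3

# A throwaway SQLite copy of the Staff table
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE Staff (
    staffNo TEXT, sName TEXT, position TEXT,
    salary INTEGER, branchNo TEXT, bAddress TEXT)""")

# Answer A: the split name makes seven values for six columns -> error
try:
    con.execute("INSERT INTO Staff VALUES "
                "('SA4', 'Alan', 'Brown', 'Assistant', 8300, 'B007', '16 Argyll St., Aberdeen')")
except sqlite3.OperationalError as err:
    print("Answer A fails:", err)  # value count doesn't match the column count

# Answer B: legal, but position, salary, and bAddress are left NULL
con.execute("INSERT INTO Staff (staffNo, sName, branchNo) "
            "VALUES ('SA4', 'Alan Brown', 'B007')")

# Answer C: every column receives a value in order, but 'Argyl' quietly
# disagrees with the 'Argyll' already in the table -- no error, bad data
con.execute("INSERT INTO Staff VALUES "
            "('SA4', 'Alan Brown', 'Assistant', NULL, 'B007', '16 Argyl Rd., Aberdeen')")
```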

So I present three activities in this manner, one for Create, one for Update, and one for Delete. Each of them follows the same pattern: more than one answer is correct; all answers have some problem. The students quickly catch on to my tricks (as they are supposed to) and begin to question each part of the answers. All the problems that arise in the various questions have one solution (imagine that!) -- normalization. So we then move into a series of normalization activities/questions.
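For the curious, here's one sketch of where the normalization lesson heads (my illustration in SQLite, not the exact schema from the course slides): pull the repeated branch address into its own table, so each address is stored exactly once and spelling drift can't creep in.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Branch (
        branchNo TEXT PRIMARY KEY,
        bAddress TEXT
    );
    CREATE TABLE Staff (
        staffNo  TEXT PRIMARY KEY,
        sName    TEXT,
        position TEXT,
        salary   INTEGER,
        branchNo TEXT REFERENCES Branch(branchNo)  -- foreign key to Branch
    );
    INSERT INTO Branch VALUES ('B007', '16 Argyll St., Aberdeen');
    INSERT INTO Staff  VALUES ('SA4', 'Alan Brown', 'Assistant', 8300, 'B007');
""")

# A new hire now carries only the branch code; the address lives in one place
row = con.execute("""
    SELECT s.sName, b.bAddress
    FROM Staff s JOIN Branch b ON s.branchNo = b.branchNo
""").fetchone()
print(row)
```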

In this way, the activities require short bursts of thinking about clearly distinguishable decision points. Answers must be justified by the groups who pick them, and can create clean conflicts which highlight the lessons to be learned. Each one also leads to the next activity, providing a piece of the puzzle that builds to the next lesson (even for those who don't prepare for class). Harkening back to my game design days, each activity is an encounter which advances the plotline leading (hopefully) to a satisfying climax and denouement. 

I've done a bunch of tweaking on my questions and in-class "plotlines." I'm fairly satisfied with how they work both within one class and over the course of several classes. But how to make this work online . . . how can I capture the give-and-take of the post-group-response discussion so that the lessons to be learned can be highlighted? Unfortunately, I have only a partial answer.

Online Discussions

I provide two weeks for each discussion. That allows enough time for the process to be completed asynchronously. For each of those time periods, I create a presentation to be viewed. For the CUD/Normalization lesson, the presentation can be found at:

https://docs.google.com/presentation/d/1VQxHJniuQqxNC47THQk_sAhgRdm_2PzLloMg8fduAAE/edit?usp=sharing

Then, I create a forum in Blackboard entitled "Discussion 4 Normalization-SQL C_UD". I create a number of threads in that forum equal to the number of teams in the class (which were formed early on). Each thread bears a group name, such as "Average Joes Discussion 4." I also create an additional thread called "Full Class Discussion 4". More on that later.

In each of the threads, I create the first posting. It runs something like this:

"For this discussion, please review the following presentation:

Section 2.2 Normalization (link to presentation above embedded here)

Each team member should respond to the following questions or instructions. Provide both an answer and a rationale for your answer. 

Remember that, for multiple choice questions, extra credit is awarded if you list the answers that were given previously by your teammates (or indicate that you are the first poster), and vary your responses from those answers.

Question 1 (slide 9): Name a foreign key (other than that noted in slide 8), the table/class it comes from, and the table/class it serves.

Question 2 (slide 15): Which SQL statement adds one employee record to the table above?

Question 3 (slide 16): Based on a close reading of the answers in Question 2, name one anomaly (mistake) that could arise when inserting records.

Question 4 (slide 18): Which SQL statement properly adjusts an address in the table above?

Question 5 (slide 19): Based on a close reading of the answers in Question 4, name one anomaly (mistake) that could arise when updating records.

Question 6 (slide 21): Which SQL statement properly deletes a staff record?

Question 7 (slide 22): Based on a close reading of the answers in Question 6, name one anomaly (mistake) that could arise when deleting records.

Question 8 (slide 48): The data model above contains a warning sign -- a many-to-many relationship. Normalize this data model by creating the appropriate associative table. Also, list possible attributes in the associative table that could not fit in the data model previously.

Question 9. Do you mind speaking for your team in the full-class thread for Discussion 4, or would you rather someone else did it?"

That covers the process for the content and answers. A larger process is presented in the forum's instructions, setting up due dates the first Thursday of the two-week period, the second Tuesday, and the second Thursday:

"This forum contains a number of discussion threads covering the Create (Insert), Update, and Delete aspects of SQL (C_UD), and normalization of relational databases. In total, quality participation in this discussion is worth 1 point (1%) of your final course grade.

This discussion covers the period from March 1, 2014 to March 15, 2014. The following due dates apply:

March 6, 2014: Each member of the team must post an individual response to each of the questions/instructions listed in his or her team thread. Participation in this discussion is worth 0.4 point. Further, for any posts addressing a multiple choice question, extra credit is awarded if the poster lists prior responses, then addresses an answer covered least often previously. Initial posts after the due date receive no points.

March 11, 2014: The team must decide what their "team answers" will be for each of the questions in the discussion thread. The team must also decide who will speak for the group in the class discussion thread (the caller) -- only one team member may be chosen for this role. The caller should act as discussion facilitator and consensus builder so a single answer for each question can be stated. Note that this discussion will fail (and no points will be awarded) if team members wait until the last minute to post. A true discussion requires regular posting and responses. Participating and helping the group reach consensus is worth 0.6 points. Taking on the caller role is worth 0.2 extra-credit points. (Your Peer Evaluation grade is heavily influenced by your peers' evaluations of your contributions to this discussion.)

March 13, 2014: The team caller must post each of the team answers in the Full-Class Discussion thread. If the team caller fails to post the team's answers, no team member receives any credit for this discussion. I will post my thoughts after all teams have posted theirs. Other team members are welcome to post thoughts after all team callers have posted."

Results

The first reaction to this scheme is "whoa, that's complicated." I DID warn you about my process mania in my last post. Still, in practice, the students quickly "get it," particularly after they get their first set of grades/feedback. Also, the first go-round (Discussion 1) completely removes the content challenges -- it's a get-to-know-each-other (and get-to-know-the-process) discussion. The teams name themselves and talk a bit about what they hope to get out of the course. That gets them familiar with the process, so when the content challenges hit in Discussions 2-8, they are able to focus on content and not so much on process. As you can imagine, I don't vary the discussion format throughout the semester. The process is continually reinforced.

So what happens? The students spend the first week doing the background reading/review and taking some guesses at answers. They have an incentive to post first (no need to review other group members' answers). Those who wait to post have an incentive (extra credit) to review their teammates' answers so they can vary their choices, covering different aspects of the lesson. At most, they have to review 6 other people's work (not the entire class). This part (varying your answers from prior posts) is the most difficult to grok. If you actually disagree with the prior posts, it's easy -- post your BETTER answer and justify it. Even if you agree with one or more of your teammates' answers (posted earlier), by picking a different answer and posting a rationale for it being WRONG, you earn extra credit. That takes some time for students to grasp, and frankly some students never get it. Not a problem; it helps separate the A students from the rest.

Once the first Thursday passes, the students need to work in their group's discussion thread to come to a single answer and rationale for each question. First issue: who's going to speak for the group? Two outcomes can occur here: one person monopolizes the role because he or she is most comfortable doing that and the other team members accede, or the role rotates. The incentives (extra credit for the caller) encourage rotation, and by rotating, each group member can gain at least one extra-credit award. Still, being group caller/consensus builder takes some extra work. (That it's excellent training for real work-team collaboration is purely a happy byproduct, I assure you.)

The back-and-forth when settling on an answer varies widely. Some contribute a great deal; some much less or not at all. No matter -- by Tuesday of the second week, the caller posts something in the individual group thread, creating some consensus answers for the group. The caller then has two days to transport those answers from the individual group thread to the Full Class Thread, where all the other groups also post.

That next weekend, I post my take on things and make some comments about the various group responses. 

All this worked nicely throughout the semester, with one exception I'll cover shortly. My favorite result, however, was the grading process. I could filter the Grade Center to look at only the responses for the Discussion 4 (or whatever) "assignment." I would leaf through each student's contributions, all combined into a series of posts (with the other students' posts stripped out). It was easy to see if the student posted by the first Thursday (and if their post was comprehensive -- covering each question). That was worth 0.4 point. If he or she discussed prior answers and justified (either positively or negatively) different ones, I awarded an additional 0.2-0.4 point.

It was also easy to see how much a student engaged with the group (number of posts and debate points made). Some phoned it in -- "yeah, I agree" was the extent of their contribution. That participation earned 0.2 point at the beginning of the course (I gave them the benefit of the doubt on the process complexity) and 0 thereafter. At the other extreme was the group caller who gathered the consensus answers, highlighted the conflicting answers, and mediated the discussion of conflicts. That participation earned up to 1 additional point (on top of the 0.4 for initial answers), depending on how impressive the group leadership was. The participation that fell somewhere in between those extremes was easy enough to quantify with a number between 0 and 1.

In the end, I noted the student's total score (from 0 to 1.4 points) in the Blackboard grade box and provided some comments in the feedback area explaining my grade ("only first posting," "little discussion participation," "good/excellent group participation," "great group leadership," etc.). I then moved on to the next student. All in all, I could grade an entire two-week discussion by 40+ students in about an hour, and provide meaningful feedback. Good stuff.
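For the arithmetic-minded, the awards above add up roughly like this (the function and the 1.4-point cap are my reading of the scheme; the actual awards were judgment calls within these ranges):

```python
# Rough sketch of the discussion grade: 0.4 for a comprehensive on-time first
# post, 0 to 0.4 extra credit for engaging prior answers, 0 to 1.0 for group
# participation (full value for impressive caller leadership), capped at 1.4
def discussion_score(on_time_initial, extra_credit, participation):
    return round(min(1.4, on_time_initial + extra_credit + participation), 2)

print(discussion_score(0.4, 0.0, 0.2))  # early-semester "yeah, I agree" poster
print(discussion_score(0.4, 0.4, 1.0))  # strong initial answers plus great leadership
```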

The Downside

The defect in this scheme was the post-group-answer discussion. That give-and-take was the highlight of the FTF class, and it fell flat online. I can't be sure that any student reviewed the answers of the other groups or my take on things. I believe that's because I didn't take the exercise far enough.

Next time, I'll set the first deadline on the first Tuesday. I'll then give the students until the first Friday to come to consensus and post their group answers in the Full Class Discussion thread. I would then post a series of challenges to various group answers in the Full Class Discussion by the second Tuesday. I would ask for group responses by the second Thursday. I would then post my final feedback over the next weekend as the groups began work on the next discussion. I'm not sure whether to limit the responses to my challenges to the group callers -- if so, I would emphasize the need for other teammates to help the caller and require that the caller role be rotated. (Caller/non-caller participation can, once again, be accounted for in the peer evaluation process.)

Again, there's no guarantee that the students will look at or think about the culminating discussion points, but that's the case in class as well. Easy enough to tune out as the instructor engages with other students on a debriefing after the activities, or as the instructor launches into a mini lecture wrapping up the lessons learned. My job is to provide opportunities for learning, not to guarantee learning occurs. 

Once again, I appreciate your patience as I rambled on. 

Hopefully this is of some use to some of you,

M Alexander Jurkat
INF 202 Team Lead
Informatics Department
University at Albany

On Sun, Oct 5, 2014 at 6:04 PM, M Alexander Jurkat <[log in to unmask]> wrote:

Hey all,


Over the last couple of years, I've taught an Introduction to Data and Databases course. For the first two semesters, I taught face-to-face using team-based learning (TBL) methodology. Over the next two semesters, I revised the teaching methodology for hybrid-online and fully online. I was pleasantly surprised by how well TBL adapted to the online environment. I'd like to share some of my approaches and experiences. I'll start with the Reading Assessment Process (RAP).


Background


I am a part-time instructor in the Informatics Department at the University at Albany (SUNY). I spent 10 years as a lawyer, 15 more years as a game designer, and have been working in process improvement and business intelligence in the manufacturing sector for the last 4 years. As you can imagine, I'm fairly process oriented -- forgive my obsession with "rules".


Philosophy


In designing any process, the objectives and purposes are the best starting point. My thinking about the RAP has evolved over the years. The online processes that I developed are hugely dependent on my take on the goals of the RAP.


When I started, I viewed the RATs as pure and simple tests. The students were to read (or view) the materials, take the tests, and be graded. That way I would know if they had done the reading well or poorly, and would have something to add to their cumulative grade. I designed questions by focusing heavily on the materials. One answer was the "correct" one (often quoted directly from the source materials) and the others were not. To make the questions more challenging, I spent a fair amount of time devising plausible, but incorrect answers. When it came time to review the answers, the students simply accepted my "correct" answer or were annoyed at the way the other answers were tricky or vague.


I realized I was putting a lot of effort into deceiving (or confusing) the students. The better I got at creating plausible but incorrect answers, the greater the deception. That didn't sit well with me. I know the material far better than they do. The fact that I could deceive one or more of them each test accomplished nothing . . . and was downright mean.


I also found that the few questions that I seeded with more than one correct answer (in an effort to create "appealable" issues) produced the best discussions and the most meaningful appeals. In those cases, the "correct" answer, as indicated by the answer key, was merely a starting point for a larger discussion. The more questions that had multiple correct answers, the more I encouraged the students to "buck the system," discount the "correct" answer, stick to their guns, and support their answer. Capping the exercise and reinforcing the point, I gave them full credit as long as they could give me a reasonable argument for their answer. It was a tough road, however. Students are used to seeing tests as teacher-controlled exercises with one right answer and a bunch of wrong answers.


Fairly quickly, I started eliminating questions with plausible but incorrect answers. I started using, as a general rule, questions with at least two correct answers (or at least two justifiable answers). The more I shared appeals presenting alternative answers, drew out explanations, balanced them against the "correct" answer, and liberally awarded full points to the appealing groups, the more the students realized that RAT questions were a starting point for discussion, not a black-and-white evaluation of their preparation. The more correct answers I seeded the questions with, the more robust the discussion. The RAP became a process to engage the students with the materials, not an end-point testing the students' mastery of the materials.


In devising multiple correct answer questions, I found myself naturally pulling back from the materials. I could cover more material in one question if more than one answer was correct. I found it easier to create application-, implementation-, synthesis-, or analysis-oriented questions, using the materials as a starting point for novel situations. That too created more robust discussions. There were fewer and fewer easy questions and lots and lots of justifiable answers.


A happy side-effect was that I could draw in the students who did not do the reading. As long as they read the question carefully during either the iRAT or tRAT, listened to their team members during the tRAT, and contributed (as part of the group) to the appeal discussion, they were exploring the ideas and could achieve a decent grade.


Looked at as engagement and discussion seeds, the components of the RAP needed to be re-weighted. The iRAT is least important. Its primary purpose is to introduce the students to the questions. Whether they get the answer right is far less important than their review of the possible answers. The tRAT is more important, but not by much. It's an opportunity for the students to share their ideas, take a stab at a correct answer, and discuss possible rationales. By far the most important aspect of the RAP is the appeal process. That's where the students justify their answers and receive feedback.


With a grading structure of 25% for the iRAT and 75% for the tRAT, as modified by the appeals rationales, the purpose is reinforced. Also, I make at least one appeal mandatory for all teams. This reinforces the notions that (1) questions have more than one potential "correct" answer, and (2) only if the team probes the alternative answers through the appeal process can they benefit from these correct answers. As the semester goes on, more and more teams appeal more and more questions. Some teams catch on quickly and create an appeal from every question, because . . . you never know.
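In code terms, the weighting amounts to this (my arithmetic, not a Blackboard formula; inputs are the 0-100 test scores after any appeal adjustments):

```python
# RAP grade: 25% individual test, 75% team test, per the weighting above
def rap_score(irat_pct, trat_pct):
    return 0.25 * irat_pct + 0.75 * trat_pct

# A weak individual showing is buffered by the team result
print(rap_score(70, 90))
```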


Process


My process relies on tools available in Blackboard. Frankly, I've never used another CMS so I can't say if similar tools exist elsewhere.


First, I create a pool of 10 RAT questions, each with five different answers. I use "all of the above" and "none of the above" liberally. I also use "some of the above" to further encourage thinking about alternative correct answers.


Using that pool of questions, I create the iRAT using the test tool in Blackboard. I set the question order to be random and the answer order (within that question) to be random. I allow the students to take the iRAT as many times as they like with two conditions: (1) they don't know their iRAT results until after the tRAT answer sheet (see below) has been submitted, and (2) they cannot start the iRAT after a certain deadline. I'm perfectly happy to have the students review the test more than once. That furthers engagement with the materials.


Here are the iRAT assignment instructions:


"The following test has 10 questions, each worth 10 points. Choose the best answer for each.

Make a note of the full text of your answers (or enough of it to remind you which one you chose) so you have a record of your choices to reference during the tRAT. Noting down just the letter (A., B., C., etc.) of your answer will not be sufficient, as answers are scrambled for each test.

You have 30 minutes to submit your answers. The test will time out after that period of time and auto-submit.

You will not be notified of your score on this test until after submission of the tRAT for your team.

You can retake the test as often as you like (prior to the due date), but your final iRAT score will be based on your latest submission."


Once the deadline for the iRAT passes, I open up the tRAT assignment in Blackboard. The tRAT has two parts: the test and the answer sheet assignment. Unlike the iRAT, the tRAT has a set order for the questions and a set order for the answers to ease grading. I ask that the students gather in some synchronous environment (chat, Skype, Google Hangout, etc.) and take the tRAT together. Again, the students can open and run through the test as often as they like. No results are provided for the test so repeat review is not a problem.


Once the students have had a chance to review the tRAT and discuss their answers, one of them submits an answer sheet to me. That sheet lists the questions in order with a first, second, and third best answer to each question. The answer sheet submission is open to any member of the group, but only one member can submit the sheet, and it can only be submitted once.


Here are the tRAT assignment instructions:


"Complete this test and the RAT Team Answers assignment at the same time. The RAT Team Answers assignment can be found listed in your group area. Do the following:


Schedule, then gather your team at one time, communicating in person, via chat, using Google Hangouts, Facetime, Skype, or another means.

Once your team has gathered, discuss each question and choose the best answer for each.

One (or more) team members should take notes on which answers the team favors. Pick a first, second, and third choice for each question.

Once your team has decided its 3 choices per question, one person should submit the RAT Team Answers assignment, listing the three choices per question, as well as the names of the team members who participated in the team test (teammates who don't participate get a 0 on the tRAT). List the text of the answers as well as the letter choices, to make sure your grade is accurately calculated.


The following test has 10 questions. Getting the correct answer on the first choice is worth 10 points; getting the correct answer on the second choice is worth 5 points; getting the correct answer on the third choice is worth 3 points; getting none of the choices correct is worth 0 points.

You have 40 minutes to submit your answers. The test will time out after that period of time and auto-submit.

You can view the test as many times as you like. You can submit your RAT 2 Team Answers assignment only once.

You will be notified of your score on this test shortly after you submit the RAT 2 Team Answers assignment.

Your team's appeal assignment is based on the results of the team RAT (see separate RAT Team Appeal assignment)."
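The scoring rule in those instructions can be sketched as a function (my sketch -- in practice I grade the answer sheets by hand, as described below):

```python
# 10 points if the team's first choice is correct, 5 for the second,
# 3 for the third, 0 if none of the three choices match the key
def score_question(choices, correct):
    points = (10, 5, 3)
    for rank, choice in enumerate(choices):
        if choice == correct:
            return points[rank]
    return 0

def score_trat(answer_sheet, key):
    """answer_sheet: one (first, second, third) tuple per question."""
    return sum(score_question(c, k) for c, k in zip(answer_sheet, key))

# e.g. first choice right on Q1, second choice right on Q2, all wrong on Q3
print(score_trat([('a', 'b', 'c'), ('d', 'a', 'e'), ('b', 'c', 'd')], ['a', 'a', 'a']))
```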


I then grade the tRAT answer sheet. This is a relatively quick and easy process because the question order is always the same and the correct answer key (a, b, c, d, or e) is always the same. If the team gets a question "wrong," I provide the correct answer key when I respond to their tRAT answer sheet assignment in Blackboard. Because the tRAT answer sheet is a team assignment in Blackboard, I can input one grade result and it flows down to each member of the team. I then simply have to modify the grade to 0 for those team members who didn't participate.


Once I've finished grading the tRAT answer sheets, I open the appeals process. Again, the students gather to discuss their answers, create rationales for them, and write up the appeal document. Again, any of them can submit the appeal document but only one of them can submit it and only once.


Here are the RAT Appeals instructions:


"Once you receive your grade on the RAT Team Answers assignments, you have the opportunity to appeal the results. You must discuss and appeal as a team. Follow this process:


As a team, discuss any incorrect answers that you believe were as good as the correct ones, either at the same time (as you did for your team test) or using your team RAT Appeals discussion forum (appeals discussed on the full-class discussion boards will result in 0 points on the appeal). You can appeal as many question results as you wish, but must appeal at least one question.

Draft and agree on a statement for each appealed result specifically explaining the ground for your appeal and citing any support from the readings or from other sources.

Submit all appeal statements using this assignment. This assignment can be submitted only once.


If any of your appeals are approved, you will gain points on both your team test and any individual tests that picked the appealed answers (instead of the "correct" one). Note that you can take an appeal from a RAT question that you got right on the tRAT, but one or more team members got wrong on the iRAT. You just have to get your team to agree to submit the RAT Appeal."


In my last class, I responded directly to the groups on their appeals, replying to any questions or points made through Blackboard. A better method would be to set up a discussion forum for the entire class labeled RAT Appeals. I would create a new thread in that discussion area for each RAP. In that thread, I would present a long entry setting out each question and its answers, the various appeals taken from that question, my response to the appeal arguments, then a grant or rejection of the appeal. Students could review that thread to discover which appeals were made, how they were argued, and which of those were granted and which were denied. Also, students could reply to the thread, furthering the discussion if they like.


Finally, at the end of the semester, I create a fifth, final RAP which has only an individual test. That test is made up of a random assortment of the questions from the prior four RAPs, in a random order with the answers randomized. That encourages the students to re-engage with all their prior RATs at the end of the semester.


One result occurred quickly in the RAP process: the students would skip the simultaneous gathering portion and simply exchange their answers and rationales asynchronously via email or IM. That does undermine the give-and-take of the group discussion, but I decided that, if that's how the group wanted to handle their work, that's fine. They are still engaging with the materials.


Another repercussion (not unique to this process) was that some students contributed more and some less, particularly if they dropped into an asynchronous communication pattern. So be it. Absent an in-class environment, I can't control how much they participate in their learning. Even with in-class activities, a student can always phone it in or sleep through it.


The solution to that problem is not a better RAP. The solution to this and all other group contribution issues is the peer evaluations. As long as peer evaluations are worth a large portion of their grades (20-25%), and are conducted regularly throughout the semester so the non-participants have notice, active group participation is incentivized nicely. I'll discuss more about that later.


I appreciate your patience as I rambled on. Hopefully this is of some use to some of you,


M Alexander Jurkat

INF 202 Team Lead

Informatics Department

University at Albany