Robert & others,

When trying to analyze patterns like this, it can be useful to
identify the length of time and the number of tests involved in the data gathered.

Because of my long association with Larry, I am aware that the
"cumulative scores" he cites represents data from iRATs and gRATs
added up at the end of a whole semester, i.e., 15 weeks and usually 6
to 7 RATs.

Robert:  How many RATs and weeks were represented by your data?
         Was that just from one RAT or the combined data from several RATs?

Dee

At 09:44 AM 11/24/2005, Larry Michaelsen wrote:
>My explanation is a combination of those that you offer.  The data you
>present is typical of: 1) a single (and probably short) test,  2) given
>to newly-formed groups--not yet teams, 3) not using IFAT answer sheets.
>Thus, your results are likely to be a combination of the explanations
>you offer, plus the "lucky guess" option, which could just as easily be
>called "unreliable test"--characteristic of almost ANY short and new
>test.  (You might want to check out Watson, W. E., Michaelsen, L. K. &
>Sharp, W.  (1991).  Member competence, group interaction and group
>decision-making:  A longitudinal study. Journal of Applied Psychology,
>76, 801-809 to see a fuller explanation and some supporting empirical
>evidence.)  Based on cumulative scores (i.e., increased reliability from
>a longer test), between 1986 and 2003 (when I started using IFATs), I
>had data from over 1,100 teams and all but 1 team scored higher than its
>own best member and by an average of nearly 11%. Since I've started
>using IFATs (which provide teams with some within-test feedback), I
>haven't had any team fail to beat its best member and the average gain
>has been 20+%.
>
>Larry
>
>
>Larry K. Michaelsen
>Professor of Management
>Central Missouri State University
>Dockery 400G
>Warrensburg, MO 64093
>660/543-4124 voice
>660/543-8465 fax
> >>> "Philpot, Robert J." <[log in to unmask]> 11/23/05 3:11 PM >>>
>Hello All,
>
>
>
>I have been intrigued by the comparisons of team scores on the gRATs to
>the High, Low and Mean scores on the iRATs. Lately I've had the
>opportunity to keep track of scores following different team learning
>experiences. It struck me as a little odd that some teams actually score
>lower than the highest member on that team. I've attributed this to 1.)
>inexperience working as a team, 2.) withholding by the brighter team
>member (for whatever reason), and 3.) lucky guessing by unprepared
>students who cannot help their team experience the same success.
>Perhaps someone could posit another cause?
>
>
>
>
>
>          Low    High   Mean   Team Score   Gain above high   Gain above mean   Gain above low
>Team 1    56%    67%    61%    72%          6%                11%               17%
>Team 2    50%    89%    78%    94%          6%                17%               44%
>Team 3    67%    89%    78%    94%          6%                17%               28%
>Team 4    61%    83%    78%    89%          6%                11%               28%
>Team 5    50%    89%    67%    78%          -11%              11%               28%
>Team 6    56%    89%    72%    83%          -6%               11%               28%
>Team 7    72%    89%    78%    94%          6%                17%               22%
>Team 8    67%    78%    75%    89%          11%               14%               22%
>Team 9    72%    89%    78%    89%          0%                11%               17%
>Team 10   56%    78%    67%    83%          6%                17%               28%
>
>
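>A minimal sketch of how the gain columns follow from the raw numbers,
>assuming plain Python (the table's percentages appear to be independently
>rounded, so the recomputed gains may differ from the listed ones by about
>a point):
>
>    # Low, High, Mean, Team Score per team, in percent (from the table above)
>    rows = {
>        "Team 1":  (56, 67, 61, 72),
>        "Team 2":  (50, 89, 78, 94),
>        "Team 3":  (67, 89, 78, 94),
>        "Team 4":  (61, 83, 78, 89),
>        "Team 5":  (50, 89, 67, 78),
>        "Team 6":  (56, 89, 72, 83),
>        "Team 7":  (72, 89, 78, 94),
>        "Team 8":  (67, 78, 75, 89),
>        "Team 9":  (72, 89, 78, 89),
>        "Team 10": (56, 78, 67, 83),
>    }
>
>    for team, (low, high, mean, score) in rows.items():
>        # Gain above high / mean / low, as in the table columns
>        print(team, score - high, score - mean, score - low)
>
>    # Average gain of each team over its own best member
>    avg = sum(score - high for (_, high, _, score) in rows.values()) / len(rows)
>    print("average gain above high:", round(avg, 1))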
>
>My real question, however, revolves around the analysis of this data
>once it is collected. Has anyone used a reasonable statistical test to
>compare individual scores on the iRAT to the team scores on the gRAT?  I
>have been considering ways to compare performance of several teams on
>gRATs (dependent variable) following the use of an educational
>intervention (independent variable). All of the students will have taken
>the iRAT prior to the intervention, so I could also compare team scores to
>the high, low, and mean individual scores for each group. Experimental
>(iRAT->X1->gRAT) vs. control (iRAT->gRAT->X2).
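>
>One possibility, as a minimal sketch (assuming Python with scipy; the
>numbers and the variable names below are illustrative, taken from the
>table above): treat each team as a paired observation and compare its
>gRAT score with its best member's iRAT score using a paired test such as
>the Wilcoxon signed-rank test:
>
>    from scipy.stats import wilcoxon
>
>    # Best individual iRAT score and team gRAT score for each team (percent)
>    high_irat = [67, 89, 89, 83, 89, 89, 89, 78, 89, 78]
>    team_grat = [72, 94, 94, 89, 78, 83, 94, 89, 89, 83]
>
>    # Paired, non-parametric test of whether teams outscore their best member
>    stat, p = wilcoxon(team_grat, high_irat, alternative="greater")
>    print(stat, p)
>
>For the experimental-vs-control comparison, the team gRAT scores (or the
>team-minus-best-member gains) from the two conditions could then be
>compared with an independent-samples test such as the Mann-Whitney U,
>given the small number of teams per condition.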
>
>
>
>Looking forward to hearing your ideas and experiences,
>
>
>
>Bob
>
>
>
>Robert Philpot Jr., PhD, PA-C
>
>Clinical Assistant Professor
>
>Associate Clinical Coordinator
>
>University of Florida College of Medicine
>
>Physician Assistant Program
>
>Gainesville, FL 32610-0176
>
>
>
>352-265-7955 w
>
>352-871-5053 mobile
>
>[log in to unmask]
>
>
>
>http://medinfo.ufl.edu/pa/faculty/Bob/
>
>
>"Someday, after mastering the winds, the waves, the tides and gravity,
>we shall harness for God the energies of love, and then, for a second
>time in the history of the world, man will have discovered fire."
>
>Pierre Teilhard de Chardin
>
>
>
>
>
>
>


* * * * * * * * * * * * * * * * * * * * * * * *
L. Dee Fink, Instructional Consultant     Phone: 405-364-6464
     in Higher Education                  Email: [log in to unmask]
234 Foreman Ave                           FAX: 405-364-6464
Norman, OK 73069                          Website: www.finkconsulting.info

**Author of: Creating Significant Learning Experiences (Jossey-Bass, 2003)
**Immediate Past President of the POD Network [Professional and
Organizational Development] in Higher Education
**Founding director (now retired), Instructional Development Program,
University of Oklahoma (1979-2005)