Extra credit to you for making it all the way through this document (!).  I hope it has been useful.


You will almost certainly know that Lertap is not the only tool, or "app" as we tend to say these days, capable of supporting work of the sort I've demonstrated herein.


When I started out as part of a data collection and processing center at the University of Wisconsin in 1967, we used tools known as "Statjob" and "BMD" for data snooping and cleaning, and Frank Baker's "FORTAP" program for item analysis.  If these systems didn't do exactly what we wanted, we at times wrote our own special-purpose routines.  When I was fortunate to have the chance to work in Venezuela in the early 1970s, the team I was part of wrote a special suite of data analysis programs in order to have output in Spanish (Lertap grew out of some of that work).


Fast forward to the year 2013.  SPSS (www.spss.com) is but one example of a "modern-day" system which could do quite a bit of the work I've presented in this document.  SPSS is capable of reading Excel worksheets, making it easy to take advantage of Excel's very powerful data snooping support before delving into the extensive data analysis tools offered by SPSS.  I've put a document up here with a practical demonstration of moving data from Excel to SPSS.


I have always regarded SPSS as weak when it comes to analyzing the performance of cognitive test items.  But it seems I may be wrong -- I trotted out my favorite search "engine", gave it "using SPSS for item analysis", and got some very interesting hits, including a quality "white paper" on item analysis from SPSS itself.


Intrigued by the results of my SPSS item analysis search, I also tried "using Excel for item analysis" and once again got a list of relevant sites.  Some of these looked very good (especially, of course, those which referred to the use of Lertap), but there were many others.


Searching the internet can snowball quickly.  After asking about SPSS and Excel as they might relate to item analysis, I then gave "software for item analysis" to the search tool.  There were a few million hits, more than enough to delay my lunch hour.


So there you go.  Can you do the things I've wowed you with using software other than Excel and Lertap?  Certainly.  But allow me to enter a caveat, if I may: don't trust datasets until they've been given a real snoop.  It's very easy to take the output from an optical scanner and deliver it immediately to systems which will rapidly produce all manner of statistics.  We can get pages of tables and graphs without ever really seeing our data, leaving us very susceptible to the old "GIGO" effect: "garbage in, garbage out".  Snoop the data first.


Of course I'm not saying that scanners introduce garbage; it's simply that it can be difficult to see the data after they've been prepared / processed by a scanner.  We very often end up with "ASCII" text files, and they do not lend themselves to snooping.


(Fortunately, most scanners can, I believe, be configured to create Excel-compatible files, such as "CSV" files.  Nonetheless, there remain numerous programs which have been designed at the outset to work with ASCII files -- I urge caution with such programs, not because the programs themselves are deficient, but because ASCII files (generally having extensions of "TXT" or "DAT") are, as I've said, difficult to snoop.  Use of what could be called ASCII-friendly data analysis programs makes it more likely that people will not snoop their data, leading to what can well be mistaken faith in data integrity, possibly increasing the chance of "GIGO".  Note: Excel is capable of importing ASCII files; it takes a bit of work, but the process is entirely straightforward.  You might read about it here, where ASCII files are referred to as "text" files.)
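The conversion I have in mind can be sketched in a few lines.  The example below (a sketch only, not part of Lertap) turns a fixed-width ASCII "DAT" file into a CSV file that Excel will open directly for snooping.  The column layout used here -- a five-column student ID followed by forty one-character item responses -- is entirely hypothetical; you would adjust it to match whatever layout your scanner actually produces.

```python
# Sketch: convert a fixed-width ASCII "DAT" file to a CSV for snooping in Excel.
# The layout (ID in the first 5 columns, 40 one-character responses after it)
# is a hypothetical example -- change id_width and n_items to suit your data.
import csv
import io

def dat_to_csv(dat_lines, out_file, id_width=5, n_items=40):
    writer = csv.writer(out_file)
    # Header row so Excel labels the columns.
    writer.writerow(["ID"] + ["Item{0}".format(i + 1) for i in range(n_items)])
    for line in dat_lines:
        line = line.rstrip("\n")
        student_id = line[:id_width].strip()
        responses = list(line[id_width:id_width + n_items])
        writer.writerow([student_id] + responses)

# Two invented records, just to show the shape of the output:
records = ["00001ABCDA" + "A" * 35, "00002BBCDA" + "A" * 35]
buf = io.StringIO()
dat_to_csv(records, buf)
print(buf.getvalue())
```

Once the CSV file is open in Excel, the usual snooping tools (filters, frequency counts, conditional formatting) apply at once -- which is the whole point of getting out of the raw ASCII format first.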


While data snooping has been a central theme of this paper, there is another theme, one which hasn't received quite as much air time: adjusting test scores for the effects of item ambiguity.  A quality item analysis program ought to have the ability to accept more than one answer as "correct", or, perhaps, partially correct.  In Lertap this is done via *mws cards; in case you've already forgotten, click here to jet back to the "double-keying" topic.  When I say "partially correct", I mean, for example, perhaps giving half a point to an answer.  Examples of how to do this may be seen here, starting at "Example C12".  (In the literature, scoring item responses in this manner sometimes falls under the rubric of "partial credit".)
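The idea behind multiple keys and partial credit can be shown in a few lines of generic code.  The sketch below is not how Lertap's *mws cards work internally; it simply illustrates the concept: each item carries a table of weights, so an item may award full credit to more than one option, or half a point to a distractor.  The keys and weights are invented for the illustration.

```python
# Sketch of multiple-key / partial-credit scoring.  Each item has a
# dictionary mapping responses to weights; any response not in the
# dictionary scores zero.  The weights below are invented examples.

def score_test(responses, weights):
    """Sum the weight assigned to each response across all items."""
    return sum(weights[i].get(r, 0.0) for i, r in enumerate(responses))

# Item 1: A is the sole correct answer.
# Item 2: B and C are both accepted as correct (double-keying).
# Item 3: D is correct, and C earns half a point (partial credit).
weights = [
    {"A": 1.0},
    {"B": 1.0, "C": 1.0},
    {"D": 1.0, "C": 0.5},
]

print(score_test(["A", "C", "C"], weights))  # -> 2.5
```

Re-scoring a whole dataset after an ambiguous item is discovered is then just a matter of editing one weight table, rather than re-keying the data.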




Larry Nelson (snooperman?)

School of Education

Curtin University

Western Australia