There are a variety of practical exercises that might be undertaken with the program.
One example would be to run the FIMS results through the program, comparing output for students from Japan with output for students from Australia. The summaries of Rasch diff values made by the macro will exemplify that Rasch results, like those from CTT (classical test theory) and IRT (item response theory) in general, can indeed be sample dependent.
One could also note the effect of sample size on INFIT and OUTFIT statistics: the larger the sample, the closer these values will be to 1.00. This might be done using the 60-item "M.Nursing" dataset twice: first with all 1,767 students, then with just the first 500.
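The sample-size effect can be previewed with a short simulation. The sketch below is plain Python with numpy, not Lertap5 code, and the sample sizes merely echo the exercise above: it generates responses from a Rasch model and computes each item's INFIT mean square (the information-weighted mean of squared residuals), using the generating parameters for simplicity. The spread of INFIT values around 1.00 shrinks as the sample grows.

```python
import numpy as np

rng = np.random.default_rng(42)
n_items = 100
b = np.linspace(-2.0, 2.0, n_items)          # item difficulties (logits)

def infit(n_persons):
    """Simulate Rasch responses and return each item's INFIT mean square."""
    theta = rng.normal(0.0, 1.0, n_persons)  # person abilities
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    x = (rng.random((n_persons, n_items)) < p).astype(float)
    w = p * (1.0 - p)                        # binomial information weights
    return ((x - p) ** 2).sum(axis=0) / w.sum(axis=0)

small, large = infit(500), infit(2000)
print(f"mean |INFIT - 1|, n=500 : {np.abs(small - 1).mean():.3f}")
print(f"mean |INFIT - 1|, n=2000: {np.abs(large - 1).mean():.3f}")
```

Both sets of INFIT values centre on 1.00, but the larger sample's values sit noticeably closer to it, which is the pattern the M.Nursing exercise should reveal.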
Wu (2016) refers to the relationship between the classical item discrimination index (the pb(r) values in Stats1f reports) and the INFIT statistic often used in Rasch item analysis (see p.152 in Wu's text). A related exercise would be to correlate and plot item pb(r) and INFIT values. Wu suggests that an inverse relationship will be found: items with low INFIT values can be expected to have higher pb(r) values -- that is, higher discrimination.
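The expected inverse relationship can be demonstrated with simulated data. This is a sketch, not Lertap5 output: responses are generated from a two-parameter model so that discrimination genuinely varies across items, and rough Rasch estimates come from a simple logit (PROX-style) approximation rather than a full Rasch calibration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 2000, 20
theta = rng.normal(0.0, 1.0, n_persons)
b = np.linspace(-1.5, 1.5, n_items)      # item difficulties
a = np.linspace(0.5, 2.0, n_items)       # varying discriminations (2PL data)
p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
x = (rng.random((n_persons, n_items)) < p).astype(float)

# Classical discrimination: corr(item score, total score minus that item)
total = x.sum(axis=1)
pbr = np.array([np.corrcoef(x[:, i], total - x[:, i])[0, 1]
                for i in range(n_items)])

# Rough Rasch estimates via logit transforms (PROX-style approximation)
d = np.log((1.0 - x.mean(axis=0)) / x.mean(axis=0))
d -= d.mean()                            # centre the difficulties
ok = (total > 0) & (total < n_items)     # extreme scores get no estimate
th = np.log(total[ok] / (n_items - total[ok]))

# INFIT: information-weighted mean of squared residuals
P = 1.0 / (1.0 + np.exp(-(th[:, None] - d[None, :])))
infit = ((x[ok] - P) ** 2).sum(axis=0) / (P * (1.0 - P)).sum(axis=0)

print(f"corr(pb(r), INFIT) = {np.corrcoef(pbr, infit)[0, 1]:.2f}")
```

High-discrimination items come out with high pb(r) and INFIT below 1, low-discrimination items the reverse, so the printed correlation is strongly negative, in line with Wu's suggestion.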
Lertap5 hosts other macros (program modules) for IRT work. While not as easy to use as the RaschAnalysis1 macro, they have a payoff: they lead to plots of item performance and fit. Refer to this topic.
Of note is that the macro copies the "Rasch" scores it makes over to the Scores sheet, making it possible to correlate original test scores with those from Rasch. Students new to this area may be surprised to find near-perfect correlations, usually at least r=0.98. The Rasch scores will not histogram well until they're multiplied by a factor of 10 or 100; this little option makes it easy to do that, while this one makes it easy to update the Scores worksheet's correlations. Students could also readily get a scatterplot of the scores; it will look like those seen in Wu's text (Chapter 7, p.130). (Note that some records in the Scores sheet will not have a Rasch score if a student had a perfect test score to begin with.)
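The near-perfect correlation is no accident: under the Rasch model the raw score is a sufficient statistic for ability, so every student with the same raw score receives the same Rasch score. The sketch below (plain Python, not Lertap5 code) uses a simple logit transform of the raw score as a stand-in for the Rasch measure:

```python
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_items = 1000, 40
theta = rng.normal(0.0, 1.0, n_persons)  # person abilities
b = np.linspace(-2.0, 2.0, n_items)      # item difficulties
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b)))
x = (rng.random((n_persons, n_items)) < p).astype(int)

raw = x.sum(axis=1)
ok = (raw > 0) & (raw < n_items)         # zero/perfect scores: no Rasch score
rasch = np.log(raw[ok] / (n_items - raw[ok]))   # logit "Rasch" score

print(f"corr(raw, Rasch) = {np.corrcoef(raw[ok], rasch)[0, 1]:.3f}")
```

Because the logit is a monotone transform of the raw score, the rank correlation is exactly 1.0; the Pearson correlation falls only slightly below it because the transform bends at the extremes. The same mechanics explain why a perfect score gets no Rasch score at all: the logit of n/n is undefined.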
The bottom of the Stats1f worksheet houses a little area with "bands". It can be illuminating to compare the item difficulty bands in the Stats1f worksheet with the Rasch item difficulty bands made by the macro (as seen here).
The possible impact of mis-keyed items can be investigated using the MathsQuiz dataset. Item 11 was mis-keyed. Has this affected the CTT and Rasch results, and, if so, how?
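As a preview of what to look for, the signature of a mis-keyed item can be simulated. In this sketch (plain Python, not the MathsQuiz data), reverse-scoring one item serves as a rough stand-in for keying the wrong alternative; the fit calculation uses the generating abilities for simplicity, and the item number merely echoes the exercise above.

```python
import numpy as np

rng = np.random.default_rng(2)
n_persons, n_items = 2000, 25
theta = rng.normal(0.0, 1.0, n_persons)
b = np.linspace(-1.5, 1.5, n_items)
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b)))
x = (rng.random((n_persons, n_items)) < p).astype(float)

x[:, 10] = 1.0 - x[:, 10]   # "mis-key" item 11 (0-based column 10)

# CTT: point-biserial against the rest score
total = x.sum(axis=1)
pbr = np.array([np.corrcoef(x[:, i], total - x[:, i])[0, 1]
                for i in range(n_items)])

# Rasch fit: difficulty from observed p-values, abilities taken as known
d = np.log((1.0 - x.mean(axis=0)) / x.mean(axis=0))
P = 1.0 / (1.0 + np.exp(-(theta[:, None] - d[None, :])))
infit = ((x - P) ** 2).sum(axis=0) / (P * (1.0 - P)).sum(axis=0)

print(f"item 11: pb(r) = {pbr[10]:.2f}, INFIT = {infit[10]:.2f}")
```

The reversed item stands out on both sides of the house: a negative pb(r) in the CTT results, and an INFIT well above 1.00 in the Rasch results -- the pattern to hunt for in the MathsQuiz output.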
Two hints are useful when using Lertap5 to undertake exercises such as those suggested above: (1) it is very easy to make a copy of a Lertap5 workbook, and (2) inserting a blank row in the Data worksheet will get Lertap5 to stop reading data records at that point. When a Data sheet has many hundreds or thousands of records, inserting a blank row after, say, 250 data rows will at times provide enough data for the purposes of the exercise, and will always speed up processing. Note that the "to halve and hold" option may be used to extract random samples of the Data sheet records.