Which is included in item analysis?
Item analysis is a technique that evaluates the effectiveness of the items (questions) in a test. Two principal measures used in item analysis are item difficulty and item discrimination.

Item difficulty: The difficulty of an item in a test is the proportion of the sample taking the test that answers that question correctly. This metric takes a value between 0 and 1: high values indicate that the item is easy, while low values indicate that it is difficult.

Item discrimination: Item discrimination measures how well an item distinguishes between those with more skill (based on whatever is being measured by the test) and those with less skill. The principal measure of item discrimination is the discrimination index, computed by selecting two groups, high skill and low skill, based on the total test score. E.g. you can assign the high-skilled group to be those subjects whose score on the entire test is in the top half and the low-skilled group to be those in the bottom half; alternatively, the high-skilled group can be those whose total score is in the top 33% and the low-skilled group those in the bottom 33%. The discrimination index is then the proportion of subjects in the high-skilled group who answered the item correctly minus the proportion in the low-skilled group who answered it correctly.

The discrimination index takes values between -1 and +1. Values close to +1 indicate that the item does a good job of discriminating between high performers and low performers; values near zero indicate that it does a poor job; values near -1 indicate that the item tends to be answered correctly by those who perform worst on the overall test and incorrectly by those who perform best.

Another measure of item discrimination is the point-biserial correlation between the scores on the entire test and the scores on the single item (where 1 = correct answer and 0 = incorrect answer).

Example 1: A 20-question test is given to 18 students. The table in Figure 1 shows the results for question 1 and for the whole test. Calculate the difficulty Df of question 1, its discrimination index D (using the top third vs. the bottom third), and its point-biserial correlation coefficient p.

Figure 1 – Item Analysis

The difficulty is given by Df = SUM(B4:B21)/COUNT(B4:B21) = 11/18 = .611. Since 5 of the top 6 students got question 1 right and 2 of the bottom 6 got the question right, the discrimination index is D = 5/6 – 2/6 = 3/6 = .5. The point-biserial correlation coefficient is p = CORREL(B4:B21,C4:C21) = .405.

Observation: In computing the discrimination index, the boundary between the high-skilled, medium-skilled, and low-skilled groups is not always clear-cut. E.g. in Figure 1 the 6th and 7th highest total scores are both 16, so which one do we choose for the high-skilled group? In this case it doesn't matter, since the score for either subject on Q1 is 1; but if one of these had a score of 1 and the other a score of 0, we would have to make a decision. For our purposes, we count the score for Q1 as the average of the two, i.e. 0.5. More detail about this matter can be found in
Real Statistics Item Analysis Functions.

Item analysis provides statistics on overall performance, test quality, and individual questions. This data helps you recognize questions that might be poor discriminators of student performance.

Uses for item analysis

Example: After the item analysis, you notice that the majority of students answer one question incorrectly. Why the low success rate? Based on what you discover, you can improve the test question so it truly assesses what students know or don't know.

Watch a video about item analysis

The following narrated video provides a visual and auditory representation of some of the information included on this page. For a detailed description of what is portrayed in the video, open the video on YouTube, navigate to More actions, and select Open transcript.

Video: Using Item Analysis in Blackboard Learn explains how to access and run item analysis, view statistics, and edit a test question.

Run an item analysis on a test

You can run an item analysis on a deployed test with submitted attempts, but not on a survey. The test can include single or multiple attempts, question sets, random blocks, auto-graded question types, and questions that need manual grading.

For tests with manually graded questions that you haven't assigned scores to, statistics are generated only for the scored questions. After you manually grade those questions, run the analysis again; statistics for the manually graded questions are generated and the test summary statistics are updated.

For best results, run an analysis on a test after students have submitted all attempts and you've graded all manually graded questions. Be aware that the statistics are influenced by the number of test attempts, the type of students who took the test, and chance errors.
You can access a previously run analysis in the Available Analysis section.

Test summary on the Item Analysis page

The Test Summary provides data on the test as a whole.
Only graded attempts are used in item analysis calculations. When attempts are in progress, those attempts are ignored until they're submitted and you run the analysis report again.

Question statistics table on the Item Analysis page

The question statistics table provides item analysis statistics for each question in the test. Questions that are recommended for your review are indicated with red circles so you can quickly scan for questions that might need revision. In general, good questions fall in these categories:
In general, questions recommended for review fall in these categories. They may be of low quality or scored incorrectly.
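Blackboard's actual review criteria aren't documented on this page, but the kind of flagging logic described above can be sketched with purely illustrative thresholds. The function name and cut-off values below are assumptions, not Blackboard's implementation:

```python
def needs_review(difficulty, discrimination):
    """Flag a question for instructor review.

    Thresholds are illustrative assumptions, not Blackboard's actual cut-offs.
    difficulty: proportion of students answering correctly (0..1)
    discrimination: discrimination index (-1..+1)
    """
    too_easy = difficulty > 0.95        # nearly everyone answers correctly
    too_hard = difficulty < 0.25        # very few answer correctly
    poor_disc = discrimination <= 0.0   # low scorers do as well as high scorers
    return too_easy or too_hard or poor_disc
```

A question that almost everyone gets right, or that low performers answer as often as high performers, would be flagged for a closer look.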
View question details for a single question

You can investigate questions that are flagged for your review and view student performance. On the Item Analysis page, scroll to the question statistics table. Select a linked question title to access the Question Details page.
Symbol legend

Symbols appear next to the questions to alert you to possible issues.
Multiple attempts, question overrides, and question edits

The analysis handles multiple attempts, overrides, and other common scenarios in these ways:
Examples

Item analysis can help you improve questions for future test administrations. You can also fix misleading or ambiguous questions in a current test.
What are the steps of item analysis?

Steps in item analysis (relative criteria tests):

1. Award of a score to each student.
2. Ranking in order of merit.
3. Identification of the high and low groups.
4. Calculation of the difficulty index of a question.
5. Calculation of the discrimination index of a question.

What is meant by item analysis?

Item analysis is a process which examines student responses to individual test items (questions) in order to assess the quality of those items and of the test as a whole.
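The steps listed above can be sketched as a single pass over a score matrix. This is a minimal Python sketch assuming a hypothetical list-of-lists layout (one row of 0/1 item scores per student); note that ties in total score at the group boundary are broken by sort order here, not averaged as in the Observation earlier:

```python
from statistics import mean

def analyze(responses, frac=1/3):
    """Apply the item-analysis steps to a matrix of 0/1 item scores.

    responses: one list of item scores (1 = correct, 0 = incorrect) per
    student -- a hypothetical layout; adapt to your gradebook export.
    Returns (question number, difficulty, discrimination) per question.
    """
    # Steps 1-2: award a total score to each student and rank in order of merit.
    totals = [sum(row) for row in responses]
    ranked = sorted(range(len(responses)), key=lambda s: totals[s], reverse=True)
    # Step 3: identify the high and low groups (top/bottom fraction by total).
    k = max(1, round(len(ranked) * frac))
    high, low = ranked[:k], ranked[-k:]
    report = []
    for q in range(len(responses[0])):
        col = [row[q] for row in responses]
        # Step 4: difficulty index = proportion answering correctly.
        df = mean(col)
        # Step 5: discrimination index = high-group rate minus low-group rate.
        d = mean(col[s] for s in high) - mean(col[s] for s in low)
        report.append((q + 1, round(df, 3), round(d, 3)))
    return report
```

For example, `analyze([[1, 1], [1, 1], [1, 0], [0, 1], [1, 0], [0, 0]])` scores six students on two questions and reports each question's difficulty and discrimination in one call.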
What is the main purpose of item analysis?

Item analyses are intended to assess and improve the reliability of your tests. If test reliability is low, test validity will necessarily also be low. This is the ultimate reason you do item analyses: to improve the validity of a test by improving its reliability.
What are basic item analysis statistics?

Item analysis is a technique that evaluates the effectiveness of items in tests. Two principal measures used in item analysis are item difficulty and item discrimination.
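These two measures, plus the point-biserial correlation discussed earlier, can be computed outside of Excel as well. Below is a minimal Python sketch; the six-student dataset is invented for illustration (it is not the Figure 1 data), and the tie-averaging refinement from the Observation above is not implemented, so boundary ties are broken by sort order:

```python
from math import sqrt
from statistics import mean

def difficulty(item):
    """Proportion of examinees answering the item correctly (0 = hard, 1 = easy)."""
    return mean(item)

def discrimination_index(item, totals, frac=1/3):
    """Top-group correct rate minus bottom-group correct rate.

    Groups are the top and bottom `frac` of examinees ranked by total score.
    """
    ranked = sorted(range(len(item)), key=lambda s: totals[s])
    k = max(1, round(len(ranked) * frac))
    low, high = ranked[:k], ranked[-k:]
    return mean(item[s] for s in high) - mean(item[s] for s in low)

def point_biserial(item, totals):
    """Pearson correlation between 0/1 item scores and total test scores."""
    mi, mt = mean(item), mean(totals)
    cov = sum((i - mi) * (t - mt) for i, t in zip(item, totals))
    si = sqrt(sum((i - mi) ** 2 for i in item))
    st = sqrt(sum((t - mt) ** 2 for t in totals))
    return cov / (si * st)

# Invented data: Q1 scores for six students and their total test scores.
q1 = [1, 1, 1, 0, 1, 0]
totals = [18, 16, 15, 10, 12, 8]
print(difficulty(q1))                             # 4 of 6 correct
print(discrimination_index(q1, totals, frac=1/2)) # top half vs. bottom half
print(point_biserial(q1, totals))
```

A positive discrimination index and point-biserial correlation, as here, indicate that students who did well overall also tended to get the item right.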