Intraclass Contingency Table
- A table that displays frequencies of two or more categorical variables measured within the same group (the “class”).
- Used to assess agreement and consistency among multiple ratings or observations for the same subjects.
- Commonly applied in research on rater agreement and in evaluating the reliability of diagnostic tests.
Definition
An intraclass contingency table is a statistical tool for analyzing the relationship between two or more categorical variables within a single group, or “class.” It is commonly used to evaluate consistency and agreement among multiple ratings or observations of a phenomenon within that group.
Explanation
An intraclass contingency table organizes the frequencies of categorical responses given to the same subjects by different raters or measurements. The “class” refers to the group of individuals or units being assessed; the categorical variables are the ratings or test results provided by each rater or tester. By displaying how often each combination of category assignments occurs across raters, the table lets researchers evaluate the level of agreement and the consistency of those observations, which informs assessments of reliability and validity.
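As a concrete illustration of how such a table is assembled, the sketch below cross-tabulates the category assignments of two raters for the same subjects. It is a minimal sketch: the rater labels, categories, and data are invented for the example, and Python is used only as a convenient notation.

```python
from collections import Counter

# Illustrative data: two raters each assign one of three categories
# ("low", "medium", "high") to the same ten subjects.
rater_a = ["low", "medium", "high", "low", "medium",
           "high", "low", "medium", "medium", "high"]
rater_b = ["low", "medium", "medium", "low", "medium",
           "high", "low", "high", "medium", "high"]

categories = ["low", "medium", "high"]

# Count how often each (rater A, rater B) combination of categories
# occurs across subjects; these counts are the cells of the table.
counts = Counter(zip(rater_a, rater_b))

# Print the table with rater A's categories as rows and rater B's as columns.
print("A \\ B".ljust(8) + "".join(c.rjust(8) for c in categories))
for a_cat in categories:
    row = a_cat.ljust(8)
    row += "".join(str(counts.get((a_cat, b_cat), 0)).rjust(8) for b_cat in categories)
    print(row)
```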
Examples
Rater consistency for a psychological trait
A researcher may use an intraclass contingency table to evaluate the consistency of ratings given by multiple raters to a set of individuals on a psychological trait (for example, intelligence or aggression). The class is the group of individuals being rated, and the categorical variables are the ratings from each rater. The table shows how often each combination of ratings occurs across raters, allowing the level of agreement among raters to be evaluated.
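To turn such a table into a summary of agreement, one common choice is to report the observed proportion of matching ratings together with Cohen's kappa, which corrects that proportion for agreement expected by chance. The article does not prescribe a particular statistic, so the sketch below (with made-up ratings) is an assumption about how the analysis might proceed, not the only way to do it.

```python
from collections import Counter

def observed_agreement(r1, r2):
    """Proportion of subjects on which the two raters assign the same category."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters (Cohen's kappa)."""
    n = len(r1)
    p_o = observed_agreement(r1, r2)
    # Expected agreement under independence, from the marginal category frequencies.
    m1, m2 = Counter(r1), Counter(r2)
    p_e = sum((m1[c] / n) * (m2[c] / n) for c in set(r1) | set(r2))
    return (p_o - p_e) / (1 - p_e)

# Illustrative ratings of the same eight subjects by two raters.
rater_a = ["low", "medium", "high", "low", "medium", "high", "low", "medium"]
rater_b = ["low", "medium", "medium", "low", "medium", "high", "low", "high"]

print(f"observed agreement: {observed_agreement(rater_a, rater_b):.2f}")
print(f"Cohen's kappa:      {cohens_kappa(rater_a, rater_b):.2f}")
```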
Reliability of a diagnostic test
An intraclass contingency table can evaluate the reliability of a diagnostic test by taking as the class a group of patients who have undergone the test and treating the test results from multiple testers as the categorical variables. The resulting table shows how often each combination of test results occurs across testers, enabling assessment of the consistency and reliability of the test.
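For the two-tester, positive/negative case, the table reduces to a 2 × 2 layout, and a simple summary is the percentage of patients on whom both testers agree. The sketch below uses invented results and hypothetical tester labels; the choice of percent agreement as the summary statistic is likewise an illustrative assumption.

```python
from collections import Counter

# Illustrative binary results ("positive" / "negative") from two testers
# applying the same diagnostic test to the same ten patients.
tester_1 = ["positive", "negative", "positive", "negative", "positive",
            "negative", "negative", "positive", "negative", "negative"]
tester_2 = ["positive", "negative", "negative", "negative", "positive",
            "negative", "positive", "positive", "negative", "negative"]

results = ["positive", "negative"]
table = Counter(zip(tester_1, tester_2))

# 2x2 contingency table: tester 1 in rows, tester 2 in columns.
print("tester1 \\ tester2".ljust(20) + "".join(r.rjust(10) for r in results))
for r1 in results:
    print(r1.ljust(20) + "".join(str(table.get((r1, r2), 0)).rjust(10) for r2 in results))

# Percent agreement: share of patients with identical results from both testers.
agreement = sum(a == b for a, b in zip(tester_1, tester_2)) / len(tester_1)
print(f"percent agreement: {agreement:.0%}")
```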
Use cases
- Evaluating agreement and consistency among multiple observations or ratings within a single group (as in rater studies).
- Assessing the consistency and reliability of diagnostic tests based on multiple testers’ results.
Related terms
- Categorical variable
- Ratings
- Agreement
- Consistency
- Reliability
- Validity