Critical Thinking Competency 2010-2011

The Office of Institutional Assessment and Studies coordinated the 2010-2011 critical thinking competency assessment. A faculty committee composed of representatives of the undergraduate schools, after reviewing existing tests of critical thinking, chose a relatively new instrument, the Critical Thinking Assessment Test (CAT), to measure students' critical thinking skills. This instrument requires students to examine underlying assumptions, solve problems, and suggest alternative solutions to real-world scenarios. Students' written responses were then scored by faculty according to specific rubrics. Any questions about the assessment can be directed to Sarah Schultz Robinson (982-2321).

Student Learning Outcomes

The Critical Thinking Assessment Test (CAT) is based on the assumption that students competent in critical thinking can:

  • Evaluate information (separate factual information from inferences; interpret numerical relationships in graphs; understand the limitations of correlational data; and evaluate evidence and identify appropriate conclusions)
  • Think creatively (identify alternative interpretations for data or observations; identify new information that might support or contradict a hypothesis; and explain how new information can change a problem)
  • Learn and solve problems (separate relevant from irrelevant information; integrate information to solve problems; learn and apply new information; use mathematical skills to solve real-world problems)
  • Communicate ideas effectively


The following standards were established for graduating fourth-years:

  1. 25% of undergraduates are expected to be highly competent;
  2. 75% competent or above;
  3. 90% minimally competent or above. 

The committee considered the following definitions of competence as reflected in test scores. These definitions were consistent with those applied in past instrument-based assessments (Quantitative Reasoning and Scientific Reasoning):

  • Highly competent: students score greater than 3/4 of points available (CAT score of 28.8 or above);
  • Competent: students score between 1/2 and 3/4 of points available (CAT score of 19.3-28.7);
  • Minimally competent: students score between 1/3 and 1/2 of points available (CAT score of 12.5-19.2);
  • Not competent: students score less than 1/3 of available points (below 12.5).
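The band definitions above amount to a simple threshold classification. The following sketch applies those thresholds directly; the function name and the use of Python are illustrative, not part of the assessment:

```python
def competence_band(cat_score: float) -> str:
    """Map a CAT score to the committee's competence band.

    Thresholds follow the committee's definitions: 28.8 or above is
    highly competent, 19.3-28.7 competent, 12.5-19.2 minimally
    competent, and below 12.5 not competent.
    """
    if cat_score >= 28.8:
        return "highly competent"
    if cat_score >= 19.3:
        return "competent"
    if cat_score >= 12.5:
        return "minimally competent"
    return "not competent"


print(competence_band(30.0))  # highly competent
print(competence_band(16.0))  # minimally competent
```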



The CAT is a 15-item, 1-hour, short essay critical thinking test with questions and passages designed to simulate real-world experiences that require critical thinking. Developed by a consortium of seven universities with funding from NSF, the purpose of the test is to assess higher-order thinking – thinking that requires application, analysis, synthesis, evaluation, and creativity. Test responses are scored by University faculty according to scoring rubrics. The ability to involve faculty in the scoring was an important consideration in the committee's search for a critical thinking measure.

Technical information about the CAT includes evidence of criterion validity: scores are significantly correlated with the SAT, ACT, California Critical Thinking Skills Test (CCTST), and the critical thinking module of the Collegiate Assessment of Academic Proficiency (CAAP), as well as with student GPA. Scoring reliability was reported as moderately high (α=.82) and internal consistency as reasonably good (α=.69). Test-retest reliability was also moderately high (α=.80), providing evidence that the CAT could be a useful measure of student progress over time, if it were used again at the University.

Sampling, Confidentiality and Compensation

Approximately 1,900 fourth-year students were sampled from six undergraduate schools at the University (Architecture, Commerce, Education, Engineering, Nursing, and the College of Arts and Sciences) and from the Bachelor of Interdisciplinary Studies (BIS) program using a disproportionate stratified sampling method. Sampled students were invited by email to take an assessment and, as compensation, were offered a $20 gift certificate or the option to donate the $20 to a UVA student group.

Students were informed that participation was voluntary and that their test responses would be kept confidential and would not affect their academic record. The invitation did not disclose the topic of the assessment. Approximately 20 one-hour testing sessions were scheduled on weekdays over a four-week period in March and April 2011. In total, 323 students responded to the invitation and signed up to complete the assessment (17% response rate), and 264 attended a session and completed the CAT (18% no-show rate).
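The reported rates follow directly from the counts above; a quick check of the arithmetic (variable names are illustrative):

```python
invited = 1900      # approximate number of sampled students
signed_up = 323     # students who responded and signed up
completed = 264     # students who attended and completed the CAT

# Response rate is sign-ups over invitations; no-show rate is the share
# of sign-ups who did not attend a session.
response_rate = signed_up / invited
no_show_rate = (signed_up - completed) / signed_up

print(f"response rate: {response_rate:.0%}")  # 17%
print(f"no-show rate: {no_show_rate:.0%}")    # 18%
```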


A 14-member faculty committee representing multiple schools and disciplines was trained by IRA staff to rate the CAT papers according to the rubric. Two hundred scoring hours were needed to score the 264 papers and the committee completed the task during a two-day workshop.

To ensure reliability of the scoring, the group scored each test question at the same time. Each student's response was scored independently by two raters, and by a third rater if the first two did not give the same score.

List of 2010-11 Committee Members

  • John Corlett - Continuing and Professional Studies
  • Elizabeth Friberg - Nursing
  • Mark Hadley - Religious Studies
  • Deandra Little - English
  • Kirk Martini - Architecture
  • Ed Murphy - Astronomy
  • Kathryn Neeley - Engineering
  • Josipa Roksa - Sociology
  • Karen Schmidt - Psychology
  • Mark White - Commerce