Choose and Design Assessment Measures 

If an objective is written well, it will not be difficult to determine the most effective way to measure it. The alignment between the measure and the PLO is vital. Because the goal of learning outcomes assessment at the program level is to demonstrate that students have met program objectives at or near graduation, typical measures for program assessment purposes are those in which students demonstrate mastery of the objective (see the curriculum map). These can be major assignments in capstone or other 400-level courses, internship supervisor evaluations, or program-level exams. Consider choosing a measure that will provide longitudinal evidence of maintenance or changes in student performance.

An exemplary assessment measure is described clearly and completely and is closely aligned with the learning objective being assessed. Using both direct and indirect measures is exemplary assessment practice.

At times you may want to use a different assessment method. For example, if you find students have not mastered an objective you may want to assess performance in an earlier course to investigate possible reasons for their sub-optimal performance. Once you determine what those reasons are, you can make changes and then re-assess students at the mastery level. Alternatively, you may have an assessment question that requires measurement of student performance at an earlier point, or even multiple points, in the program. The best choice of assessment measure will provide meaningful information to the program faculty.

There are two basic types of measures for assessment:

Direct measures utilize samples of student work.
Examples: exams, papers, projects, internship evaluations, student performances.

Indirect measures utilize perceptions or other proxies for learning.
Examples: student surveys, post-graduation graduate program acceptance rates, post-graduation employment rates.

Direct evidence by itself is a stronger measure of student learning than indirect evidence by itself. The strongest assessment evidence combines both. For additional examples, see the resource on the OPAIR Assessment Resources page called “Direct and Indirect Assessment Measures.pdf.”

In smaller-enrollment programs, data collection may consist of obtaining copies of each student’s artifact (i.e., assignment). In large-enrollment programs, it may be necessary to collect data from a representative sample of students.
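For large-enrollment programs, a simple random sample is one defensible way to choose which artifacts to collect. The sketch below uses a hypothetical roster of student IDs and an arbitrary sample size; actual sample sizes should be set in consultation with your assessment office.

```python
import random

# Hypothetical roster of student IDs for a large-enrollment program.
students = [f"S{i:03d}" for i in range(1, 201)]  # 200 students

# Draw a simple random sample of artifacts to collect and score.
# The sample size of 40 (20%) is an assumption for illustration.
random.seed(42)  # fixed seed so the sample is reproducible
sample = random.sample(students, k=40)

print(len(sample))  # 40 artifacts to collect
```

Sampling without replacement (as `random.sample` does) ensures no student’s artifact is scored twice.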

Remember that this process is driven by your learning concerns or assessment questions. At the most basic level, learning outcomes assessment is about whether students have the skills and knowledge necessary for success after graduation. The most effective strategy for answering this question is to measure student performance at the demonstration of mastery level, typically in a capstone or other 400-level course, or a master’s or dissertation defense for graduate programs.

Faculty may have other concerns or interests about the program that lead to different assessment designs. Perhaps faculty believe students are not coming into the program with the necessary background. This question suggests the need to administer a pre-test in an early course. If faculty are interested in how students are developing throughout the program, measures at various points in the curriculum would be the best assessment strategy. Alternatively, faculty might want to know how demographics or experience might impact student mastery of skills and knowledge. If a language program wants to know if students who study abroad master learning objectives at a greater level than those who don’t, performance data will need to be compared between those two levels of experience.


Most assessment measures, excluding multiple-choice tests, surveys, and other indirect measures, can benefit from a scoring guide, often called a rubric. According to Linda Suskie (2009), rubrics:

  • help clarify vague, fuzzy goals;
  • help students understand your expectations;
  • help students self-improve;
  • can inspire better student performance;
  • make scoring easier and faster;
  • make a score more accurate, unbiased, and consistent;
  • improve feedback to students;
  • reduce arguments with students; and
  • improve feedback to faculty and staff.

Chapter 9 of Suskie’s (2009) book, Assessing Student Learning: A Common Sense Guide, describes various types of rubrics and how to create them; it can be found on OPAIR’s Assessment Resources page. One of the most effective strategies for “designing” a rubric is to find one that has already been developed and modify it to suit your needs. The Association of American Colleges and Universities (AAC&U) gathered faculty members from across the country to develop a series of rubrics addressing common learning objectives. These VALUE rubrics are freely available online and are also in the Rubric Library on OPAIR’s Assessment Resources page. They are an excellent place to start.

Analytic rubrics (like the AAC&U rubrics) include descriptions of each performance level for each component of an assignment. These rubrics are recommended for program assessment because they support consistent scoring and allow analysis of each assignment component separately. This is important because it is possible for students to score high on an assignment overall, while not meeting expectations for a specific component. Using an analytic rubric and analyzing the results by component may reveal specific skills or knowledge that require additional emphasis in the program.
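To illustrate the component-level analysis described above, the following sketch tallies, for each rubric component, the share of students scoring at or above a “meets expectations” threshold. The component names, scores, and threshold are all invented for illustration.

```python
# Hypothetical analytic rubric scores: one list per component,
# one entry per student, scored 1-4.
scores = {
    "thesis":       [4, 3, 2, 4, 3],
    "evidence":     [3, 3, 4, 4, 3],
    "organization": [2, 2, 3, 2, 1],  # a component students struggle with
}

MEETS = 3  # score treated as "meets expectations" (an assumption)

for component, vals in scores.items():
    pct = 100 * sum(v >= MEETS for v in vals) / len(vals)
    print(f"{component}: {pct:.0f}% met expectations")
```

Here a high overall average would mask the weak “organization” component; reporting each component separately surfaces it.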

Exemplary assessments use well-constructed analytic rubrics for appropriate assignments that include clear and concise descriptions for each performance level and analyze performance separately for each assignment component.

Performance Target (Criterion)

The final component associated with designing your assessment measure is creating a performance criterion or benchmark, which is a statement of the level at which you would consider students to have met the learning objective. It is important that your performance criterion be in alignment with your measure. Several examples of performance targets are listed below.

  • 80% of students will receive a total score of 85 or above on the essay.
  • At least 85% of students will score a 3 out of 5 on each of the rubric components.
  • At least 90% will receive a score of 80 or higher on the internship evaluation form.
  • 95% of students will score at least a 3 on each rubric component, and 60% of students will score at least a 4 on the attached 5-point rubric.
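As a sketch of how a target like the first example above might be checked, the following compares hypothetical essay scores against the 85-point cutoff and the 80% threshold; all data are invented.

```python
# Hypothetical essay scores for ten students.
essay_scores = [90, 72, 88, 95, 85, 78, 91, 86, 84, 89]

# Target: 80% of students will receive a total score of 85 or above.
met = sum(s >= 85 for s in essay_scores)
pct_met = 100 * met / len(essay_scores)
target_met = pct_met >= 80

print(f"{pct_met:.0f}% scored 85 or above; target met: {target_met}")
```

In this invented data, 70% of students reach the cutoff, so the target is not met and the result would prompt discussion among program faculty.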

Exemplary assessments provide a clear performance target to help determine whether students met expectations. When a scoring guide (rubric) is used, a performance target for each assignment component should be described.