Assessment

USING ASSESSMENT EVIDENCE TO ACHIEVE AND IMPROVE THE UNIVERSITY’S LAND GRANT MISSION AND VISION

Assessing Small Programs

Low enrollment is one of the biggest assessment challenges. No one wants to make curricular or pedagogical changes based on the performance of one or two people. But the reality is that some Penn State degrees and certificates serve a relatively small number of students.

What is a small program? For this discussion, we will define a small program as a degree or certificate program that five or fewer students complete each year, or that 25 or fewer students complete in total over a five-year period. There is flexibility in this definition.

Challenges and rewards

The inherent challenge in assessing a small program is the risk of generalizing about student learning from very little data. Further, small programs are often supported by small groups of faculty and instructors with limited time and resources to devote to assessment, and the cost-effectiveness of conducting elaborate assessments for a handful of students can be questionable. On the other hand, smaller groups of students make it possible to take deeper dives into learning. It can also be easier to implement changes in small programs than in large ones.

A multi-year, multi-PLO assessment approach

A common approach to reliably document student learning is to repeat the same measure assessing the same program learning objective (PLO) over several years or cohorts. A multi-year approach can provide confidence that the assessment findings are representative of the students in the program. It also establishes a more robust data set as a foundation for curricular improvement.

Because this strategy requires multiple years to assess each PLO, it is most effective when combined with assessment measures that allow us to collect data on multiple PLOs simultaneously. Graduate milestones such as the thesis or dissertation defense almost always allow students to demonstrate their mastery of more than one learning objective. In undergraduate programs, direct measures like capstone projects, term papers, and portfolios provide powerful ways to do this.
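
To make the bookkeeping concrete, the following minimal sketch (written in Python, with hypothetical PLO labels, measures, and years) shows one way a program might lay out a multi-year plan and check that every PLO is scored more than once over the cycle.

        # A minimal sketch (not a Penn State tool) of a multi-year, multi-PLO
        # assessment plan. The PLO labels, measures, and years are hypothetical
        # placeholders. Each measure lists the PLOs it documents and the years in
        # which it will be scored; the check at the end confirms that every PLO
        # is scored more than once across the cycle.

        PLOS = ["PLO1", "PLO2", "PLO3", "PLO4"]

        plan = {
            "Capstone project rubric": {"plos": ["PLO1", "PLO2", "PLO3"], "years": [2024, 2025, 2026]},
            "Thesis defense rubric":   {"plos": ["PLO2", "PLO4"],         "years": [2025, 2026]},
            "Senior exit survey":      {"plos": ["PLO1", "PLO4"],         "years": [2024, 2026]},
        }

        # Count how many scoring opportunities each PLO receives over the cycle.
        coverage = {plo: 0 for plo in PLOS}
        for info in plan.values():
            for plo in info["plos"]:
                coverage[plo] += len(info["years"])

        for plo, n in coverage.items():
            flag = "" if n >= 2 else "  <-- needs another measure or year"
            print(f"{plo}: {n} scoring opportunities{flag}")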

Implementing one of the following strategies strengthens program assessment. However, applying several approaches in tandem increases assessment efficacy by triangulating the evidence and providing a more nuanced view of student learning.

Direct assessment strategies

If only a single method is used to assess learning, a direct measure is preferred because it provides stronger evidence than an indirect measure. Indirect measures, such as student self-assessments of learning (typically gathered via surveys or exit interviews), rely on self-reporting or other proxies to gauge perceived learning. In contrast, direct measures draw on samples of student work that demonstrate the learning itself.

For each of the examples below, a rubric should be developed that articulates the expectations for each PLO being demonstrated. Rubrics designed to document mastery of program learning should be developed collaboratively and approved by the program faculty and instructors. Analytic rubrics describe each performance level for each criterion used to assess an assignment. They are recommended for program assessment because they support consistent scoring within and between instructors, and because they allow each criterion to be analyzed separately, which may reveal specific skills or knowledge that need additional emphasis in the program. Students can score highly on an assignment overall while still falling short of expectations on a specific criterion, so it is important to use an analytic rubric and to analyze the results by criterion (a brief sketch of this kind of analysis follows the list below).

        • Required Capstone or Milestone Projects: Select a culminating assignment where students demonstrate their skills and knowledge across several or all PLOs for your assessment. For undergraduates, this may be a final project in a capstone or culminating senior course. At the graduate level, a master’s paper or project, thesis, or dissertation can serve the same purpose.
        • Other Significant Projects or Papers: For programs where not all students take the same upper-level courses (e.g., programs with multiple options or areas of specialization, or with several upper-level courses that students can use to meet requirements), you can use a different assignment from each of those courses, as long as every assignment is designed to demonstrate mastery of the same group of PLOs. Apply the same rubric to each assignment so that performance data can be aggregated across the program.
        • Portfolios: Have all students create a portfolio from selected assignments that address each PLO. Graduating students may also be asked to include a reflection on their knowledge and skills related to the learning objectives. Faculty and instructors can review each portfolio as students graduate or review a group of them periodically, for example, every five years. This is where having a relatively small number of students to assess can work to your advantage.
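
To illustrate the by-criterion analysis described above, here is a minimal sketch in Python. The rubric criteria, the 1-4 scoring scale, the scores themselves, and the 3.0 "meets expectations" target are all hypothetical; the point is simply that a healthy overall average can mask a criterion where students fall short.

        # A minimal sketch of analyzing analytic rubric results by criterion,
        # pooled across students and years. The criteria, 1-4 scoring scale,
        # scores, and 3.0 target are hypothetical placeholders.
        from statistics import mean

        # Each record is one student's capstone, scored on each rubric criterion.
        scores = [
            {"year": 2023, "Analysis": 4, "Methods": 3, "Written communication": 2},
            {"year": 2023, "Analysis": 3, "Methods": 4, "Written communication": 2},
            {"year": 2024, "Analysis": 4, "Methods": 4, "Written communication": 3},
            {"year": 2024, "Analysis": 3, "Methods": 3, "Written communication": 2},
        ]

        criteria = ["Analysis", "Methods", "Written communication"]
        TARGET = 3.0  # assumed threshold for "meets expectations"

        # A healthy overall average can hide a criterion where students fall short.
        overall = mean(s[c] for s in scores for c in criteria)
        print(f"Overall average: {overall:.2f}")

        for c in criteria:
            avg = mean(s[c] for s in scores)
            note = "" if avg >= TARGET else "  <-- below expectations; consider added emphasis"
            print(f"{c}: {avg:.2f}{note}")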

Indirect assessment strategies

Indirect measures complement direct measures by providing insight into what students think and feel about their learning. For the rare program that does not have a single required upper-level course or assignment, indirect measures may be particularly important.

        • Focus groups: Invite your graduating students to lunch and ask them to describe their abilities and confidence related to your PLOs.
        • Exit surveys: Conduct a senior-exit survey asking students to self-report their perceived knowledge and abilities related to your PLOs.
        • Alumni surveys: Conduct a periodic alumni survey covering multiple graduated cohorts asking alumni to self-report their perceived knowledge and abilities related to your PLOs and the application of this learning since graduation.

Sample versus census and participation

With small programs, it is particularly important to include as many students as possible—preferably all—in your assessment. In assessment lingo, this means assessing a census rather than a sample of your students. The easiest way to do this is to base your assessment on a required assignment in a required course. Ideally, this would be an upper-level course as students approach graduation. If you base your assessment on a survey, or any other non-required activity, it is critical that you find a way to ensure participation from the majority of your students. Some strategies for maximizing participation include:

        • Ensuring that all relevant program advisors, faculty, instructors, and staff are encouraging the target students to participate. A request from someone a student knows is more effective than a generic email.
        • Allowing time in class to participate.
        • Offering incentives to participants (e.g., extra credit, gift cards, t-shirts, entry into drawings for larger items).
        • Sharing real examples of how this data has been or will be used to improve the program.
        • For graduating student surveys: Including a request to complete the task (e.g., the survey) with all communications related to applying for and attending graduation.
        • For alumni surveys: Conducting a pre-graduation drive to collect non-Penn State email addresses from graduating students.

Additional resources

For more information about assessment measures and rubrics, visit the Learning Outcomes Assessment Handbook. If you still aren’t sure how to best assess your small program, please reach out to your campus or college assessment liaison for help.

____________________________________________________________________________________

This information is based in part on a document originally created by Bill Powers, Qing Shen, Katherine McAdams, Diane Harvey, and Joan Burton at the University of Maryland’s Office of Institutional Research, Planning, and Assessment.