An evaluation of Canvas

Summary

Background
Canvas is a popular learning management system (LMS) used by more than 3,000 universities worldwide. Canvas delivers its functionality primarily through a web-based software application; a subset of features is also available through native Android and iOS mobile applications. It provides features that address many aspects of the educational experience, such as course delivery, grading, discussion management, assignment submission, and test administration.

Challenge
An LMS can increase student engagement by fostering active participation. We chose to focus on a subset of functions within the discussion component. Because the discussion component requires active student and instructor participation, it is vital to student engagement. However, factors such as digital literacy, ease of use, training, customization options, accessibility, and peer feedback may hinder student engagement and affect the value of Canvas as an LMS.

Solution
Using formal evaluation and data collection methods, we assessed the value and effectiveness of the discussion component as it relates to increasing student engagement. We conducted pilot tests, pre-evaluation interviews, open-question interviews, observational usability tests, closed-question surveys, and data analysis with key stakeholder groups and evaluation plan participants. In the end, we were able to make informed recommendations for improving the discussion component of Canvas and speak to the value it creates for student engagement.

Research methods used: Pilot testing, expert interviews, open-question participant interviews, observational usability study, closed-question surveys

Download the full product evaluation

Program evaluators: Nathan Friend, Elizabeth Holloway, Drew Swanwick

Canvas Discussion component built specifically for participant testing
Learner's Tryout video example
Method 1: Stakeholder Survey with mixed question types
Method 2: Usability test
Data analysis

PROJECT DETAILS

DATE

26 October 2018

CATEGORY

Program Evaluations, Human-Computer Interaction, Data Analysis