UCF Blended Learning Toolkit Research and Evaluation

Scaling the Evaluation of Blended Learning for NGLC and Beyond

By Patsy Moskal, University of Central Florida

Through a Wave I NGLC grant, Expanding Blended Learning Through Tools and Campus Programs, the University of Central Florida (UCF) and the American Association of State Colleges and Universities (AASCU) expanded the adoption of blended learning to 20 AASCU member institutions by developing and disseminating the Blended Learning Toolkit (http://blendedlearningtoolkit.org), which contains blended learning best practices successfully implemented at UCF since 1997.

The large scale of our grant created logistical challenges. Grant participants included faculty and students from 20 campuses in 11 different states, each with different academic calendars, course requirements, faculty experience, and grading protocols. This was a great opportunity to scale blended learning quickly, but how could we evaluate at such a scale, and within such a short grant time frame?

UCF's Distributed Learning Impact Evaluation has been ongoing since 1997. Over time, components of this model have been scaled at UCF from the classroom to program, college, and institution. The grant provided an opportunity to determine which of these evaluation components could scale beyond the university. It also allowed us to identify issues related to the challenge of a large evaluation project and how to communicate with and collect data from a variety of remote locations.

We used GroupSpaces to facilitate communication with grant participants, and each institution designated an assessment contact who served as the primary liaison with UCF's assessment coordinator. Because of the complexity of gathering data from so many sites, the design was simplified to encompass measures of scale, student success (an A, B, or C grade) and withdrawal, students' evaluation of blended learning instruction, and faculty evaluation of blended learning instruction.

Data gathering also had to be streamlined. We wanted to simplify the process for participants, address FERPA concerns, and make reporting easier given the number of participants. Pell status was used to identify students as low-income or not, a factor of particular relevance to the NGLC grant. Each campus was responsible for obtaining Institutional Review Board (IRB) approval; to facilitate this, UCF first obtained IRB approval for the grant evaluation. A key part of expediting this process was de-identifying student data. While this meant that we could not merge students' satisfaction responses with their performance data, it eliminated passing sensitive student identification back and forth, a necessary sacrifice given the fast timetable of the grant. Each campus was also provided an Excel spreadsheet with sample data elements, including course and student demographics and student grades, to ensure that data arrived in a uniform format ready to be quickly aggregated at UCF. Of course, each campus had variations in grading and course identification that required significant data cleaning, but these tradeoffs made the process much smoother and faster. The scale of blended learning at the sites was calculated from these spreadsheets as the number of unique courses, sections, students, and faculty participating. Overall, 131 faculty taught 79 unique courses in 217 course sections to 5,798 students!
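The aggregation step described above can be sketched in a few lines. This is a hypothetical illustration, not the actual grant pipeline: the column names and sample rows are invented, but the logic mirrors the approach of counting unique courses, sections, students, and faculty from de-identified, uniformly formatted campus spreadsheets, and flagging success (A, B, or C) and withdrawal.

```python
import csv
import io

# Illustrative de-identified data in the uniform format campuses might
# return (column names are assumptions, not the grant's actual template).
csv_text = """campus,course_id,section_id,student_id,instructor_id,grade
A,ENG101,ENG101-01,S001,F01,A
A,ENG101,ENG101-01,S002,F01,B
A,ENG101,ENG101-02,S003,F02,C
B,MAT110,MAT110-01,S101,F11,W
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))

# Scale = counts of unique entities, qualified by campus so that
# identically named courses at different sites are not conflated.
scale = {
    "courses":  len({(r["campus"], r["course_id"]) for r in rows}),
    "sections": len({(r["campus"], r["section_id"]) for r in rows}),
    "students": len({(r["campus"], r["student_id"]) for r in rows}),
    "faculty":  len({(r["campus"], r["instructor_id"]) for r in rows}),
}

# Student success was defined as an A, B, or C grade; withdrawal tracked separately.
success = sum(r["grade"] in {"A", "B", "C"} for r in rows)
withdrawn = sum(r["grade"] == "W" for r in rows)

print(scale)
print(f"success: {success}, withdrawn: {withdrawn}")
```

In practice, the real spreadsheets would need the cleaning step mentioned above before grades and course identifiers matched a common scheme.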

When planning how to gather student and faculty satisfaction feedback, we knew faculty would be both stressed and focused on teaching, with little time and varied experience with research or evaluation. Our primary concern, then, was to eliminate as much of their research burden as possible. Creating and maintaining online surveys for both student and faculty satisfaction using Google Forms was the simplest solution. Faculty were asked to announce the student survey about a month before the course ended and to send a reminder email to nudge students two weeks later. Shortly after this request, faculty were asked to provide their own feedback on their experience teaching in the blended format. And the plan worked! We received 73 faculty responses (a 56% response rate) and 1,349 student responses (23%). Not bad for a project scattered across 20 campuses and 11 states. (Detailed results on student success and withdrawal, and on faculty and student satisfaction, are forthcoming in November in "Scaling blended learning evaluation beyond the university," in Research Perspectives in Blended Learning, edited by Picciano, Dziuban, and Graham for Routledge.)
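For readers tracking the figures, the response rates above follow directly from the counts reported in this article (73 of 131 faculty; 1,349 of 5,798 students):

```python
# Response-rate arithmetic using the counts reported in the article.
faculty_responses, faculty_total = 73, 131
student_responses, student_total = 1349, 5798

faculty_rate = round(100 * faculty_responses / faculty_total)
student_rate = round(100 * student_responses / student_total)

print(f"faculty: {faculty_rate}%, students: {student_rate}%")
```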

Moving beyond the grant

In addition to evaluating the grant's progress in scaling blended learning courses, we also wanted to provide our colleagues at the 20 sites (and beyond) with resources and guidance to help them evaluate their blended learning initiatives after the grant ended. These resources, provided directly through the Blended Learning Toolkit, include survey and cover letter templates, as well as tips and guidance on IRB approval, FERPA compliance, and related issues. Perhaps you will find them useful too.


We also wanted to encourage those just getting started in blended learning not only to conduct research, but also to disseminate their findings. On our campus, we support many faculty involved in research centered on the scholarship of teaching and learning. So, again, we wanted to simplify the process and give faculty the resources they need to publish and present. With that focus in mind, the Toolkit provides links to organizations and conferences involved in blended learning, to potential research journals, and to a bibliography of others' work.


Overall, the fast pace and varied concerns surrounding such a large-scale evaluation posed many challenges, but also offered a great opportunity to identify issues and solutions. And providing faculty and administrators with resources through the Blended Learning Toolkit has allowed us to share these lessons beyond the grant, hopefully facilitating ongoing evaluation and continued scaling.

Patsy D. Moskal, Ed.D. is the Associate Director for the Research Initiative for Teaching Effectiveness at the University of Central Florida.
