Online courses are expanding rapidly. They can reach more students and drastically reduce the cost of teaching, which makes them an attractive option for both students and schools. Because they are a recent development, there is still little research assessing their performance. A first group of studies randomly assigned students to an online or an in-person section of a single course and found negative or null effects on students' test scores (Figlio et al. 2013; Alpert et al. 2016; Joyce et al. 2015; Bowen et al. 2014). A second group examined two-year community colleges, where students took several online and in-person courses; again, the estimated effects of online courses are negative (Xu and Jaggars 2013, 2014; Streich 2014).
Bettinger et al. (2017) review this state of the art and conduct a more extensive study of their own. They use data from a very large for-profit university with as many as 100 physical campuses, where the average student takes two-thirds of her courses online. Crucially, each course is offered both online and in-person, and the two formats are identical in most ways: they follow the same syllabus and use the same textbook; class sizes are approximately the same; and they use the same assignments, quizzes, tests, and grading rubrics. The only difference is the mode of communication: the online section offers standardized videos that replace the professor's lecture in the in-person section.
The authors want to estimate whether taking a course online reduces student success. In a study like this, several difficulties must be overcome. A correlation between the choice of a type of course and the student's success may be direct (the choice causes success), inverse (successful students tend to take a certain type of course), or spurious (both the choice and the success are caused by a third, omitted factor). To deal with this problem, econometricians usually employ the instrumental variables approach, and the authors use two such variables. One is the term-to-term variation in which courses are offered in-person at each student's local campus (the online section is always available). The idea is that if the supply of in-person courses is correlated with student success, this points to direct causality (e.g., more in-person courses available implies more of them taken and higher success), since reverse causality or a spurious correlation would be difficult to sustain (would higher student success cause a larger supply of in-person courses?). The instrumental variable is, so to speak, an instrument of causality that makes more sense in one direction than in the other. The other instrumental variable is the distance each student must travel to attend an in-person course at a local campus. Interacting the two instruments allows the authors to gain a higher degree of confidence when interpreting the causality in the data. The reason, in their own words, is that (i) any other mechanism through which student distance from campus affects course grades is constant across terms with and without an in-person class option; and (ii) any other mechanism causing grades to differ between terms with and without an in-person class option affects students homogeneously with respect to their distance from campus.
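The two-stage least squares logic behind instrumental variables can be illustrated with a toy simulation. Everything below is invented for illustration and is not the authors' data or specification; only the true effect of −0.44 echoes the paper's headline estimate. The instrument `z` stands in for "an in-person section is offered this term", and the unobserved ability `u` is the omitted factor that contaminates a naive regression:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

u = rng.normal(size=n)             # unobserved ability (omitted factor)
z = rng.binomial(1, 0.5, size=n)   # instrument: in-person section offered

# Endogenous choice: weaker students, and those without an in-person
# option, are more likely to take the course online.
online = (-1.5 * z - 0.4 * u + rng.normal(size=n) > 0).astype(float)

# Grades reflect a true online effect of -0.44 plus ability plus noise.
grade = -0.44 * online + u + rng.normal(size=n)

def ols(X, y):
    """Least-squares coefficients of y on the columns of X."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive OLS is biased: `online` is correlated with the omitted ability u.
beta_ols = ols(np.column_stack([np.ones(n), online]), grade)[1]

# 2SLS: stage 1 predicts `online` from the instrument alone;
# stage 2 regresses grades on that prediction.
stage1 = np.column_stack([np.ones(n), z])
online_hat = stage1 @ ols(stage1, online)
beta_iv = ols(np.column_stack([np.ones(n), online_hat]), grade)[1]
```

In this construction the naive OLS estimate is far more negative than the truth (it blames online courses for what is really low ability), while the IV estimate recovers a value close to −0.44, because the instrument shifts course choice without being related to ability.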
The regression analysis using these instrumental variables shows the following results:
- Taking a course online, instead of in-person, reduces student success and progress in college. Specifically, the estimated effect is a 0.44-point drop in course grade, about one third of a standard deviation. It also reduces the student's grade point average (GPA) the following term by 0.15 points.
- To the extent that grades reflect, even partially, actual learning, one should expect the effect to be larger in later courses, and larger still in courses that have the first course as a prerequisite. The authors find evidence that this is indeed the case. This result is not only important in itself, but also gives an additional reason to believe that the differences in online grades are due to learning and not to differences in grading.
- The poorer performance induced by online courses is also reflected in college enrollment. After taking a course online rather than in-person, students are about 9 percentage points more likely to drop out of school the next semester, and one year later the gap has neither grown nor shrunk. In addition, online students who do re-enroll take fewer credits in future semesters.
The authors then check the robustness of their model by repeating the analysis under different specifications. For instance, they replace the linear specification of distance with a quadratic and even a cubic one and obtain similar results. They also repeat the analysis on subgroups of students defined by the distance they travel to campus and cannot reject the null hypothesis that the estimates under the different distance restrictions are equal. Four additional robustness checks do not change the main results.
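The mechanics of such a functional-form robustness check can be sketched with simulated data. This is not the authors' regression: the numbers are invented, the data are deliberately constructed so that the estimated effect is stable, and the point is only to show what "replacing the linear specification of distance with a quadratic or cubic one" means in practice:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

distance = rng.uniform(0, 30, size=n)               # hypothetical miles to campus
online = rng.binomial(1, 0.4, size=n).astype(float)  # course taken online?

# Grades depend nonlinearly on distance; the true online effect is -0.44.
grade = (-0.44 * online + 0.02 * distance - 0.0005 * distance**2
         + rng.normal(size=n))

def effect_with_poly_distance(degree):
    """OLS of grade on `online`, controlling for a polynomial in distance."""
    X = np.column_stack([np.ones(n), online]
                        + [distance**k for k in range(1, degree + 1)])
    beta = np.linalg.lstsq(X, grade, rcond=None)[0]
    return beta[1]  # coefficient on `online`

# Re-estimate under linear, quadratic, and cubic distance controls.
estimates = {d: effect_with_poly_distance(d) for d in (1, 2, 3)}
```

If the estimated effect of `online` barely moves as the distance polynomial changes, the conclusion does not hinge on the chosen functional form, which is the pattern the authors report.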
Finally, the authors complete the analysis by introducing more control variables to see how the results vary with particular characteristics of the students and the courses they take. They find four main results:
- The drop in grades after taking an online course is large for students with a GPA below the median (a reduction of 0.5 points or more). However, for students with a previous GPA in the top three deciles, the effect is not statistically different from zero.
- The negative effects of online courses are somewhat larger in health-related majors than in business- or computer-related majors.
- For students taking required courses (about one-half of the sample), the effects on student grades are somewhat larger and the effects on persistence are somewhat smaller.
- The authors estimate the effects of taking a course online separately for introductory-intermediate courses and advanced courses. At both levels, the negative effect on the current course grade holds.
It is also important to stress what this research does not do. First, it says nothing about massive open online courses (MOOCs): the data are about online courses that are substitutes for regular courses, where group size and tuition are the same as in the in-person section. Second, it does not contain enough information to study the underlying mechanisms that lead to poorer results after taking an online course. One may find several hypotheses in the literature, but the present work cannot discriminate among them. Last, the findings do not offer a complete welfare analysis. Even if online courses induce poorer results, they may still be a sensible choice if they imply a lower cost for the student; even if tuition is the same, the convenience of the online course may give the student an opportunity that the in-person course cannot offer.
- Figlio, D.; Rush, M., and Yin, L. 2013. Is it live or is it internet? Experimental estimates of the effects of online instruction on student learning. Journal of Labor Economics 31 (4), 763–84.
- Alpert, W.T.; Couch, K.A., and Harmon, O.R. 2016. A randomized assessment of online learning. American Economic Review 106 (5), 378–82.
- Joyce, T.J.; Crockett, S.; Jaeger, D.A.; Altindag, O., and O'Connell, S.D. 2015. Does classroom time matter? Economics of Education Review 46, 64–77.
- Bowen, W.G.; Chingos, M.M.; Lack, K.A., and Nygren, T.I. 2014. Interactive learning online at public universities: Evidence from a six-campus randomized trial. Journal of Policy Analysis and Management 33 (1), 94–111.
- Xu, D., and Smith Jaggars, S. 2013. The impact of online learning on students' course outcomes: Evidence from a large community and technical college system. Economics of Education Review 37, 46–57.
- Xu, D., and Smith Jaggars, S. 2014. Performance gaps between online and face-to-face courses: Differences across types of students and academic subject areas. Journal of Higher Education 85 (5), 633–59.
- Streich, F.E. 2014. Education in Community Colleges: Access, School Success, and Labor-Market Outcomes. Chapter 2. PhD diss., University of Michigan. https://deepblue.lib.umich.edu/bitstream/handle/2027.42/108944/fstreich_1.pdf (accessed June 30, 2017).
- Bettinger, E.P.; Fox, L.; Loeb, S., and Taylor, E.S. 2017. Virtual classrooms: How online college courses affect student success. American Economic Review 107 (9), 2855–75.