Before You Judge Online and Blended Learning, Get the Data Right
The Hechinger Report recently published an article by Nichole Dobo entitled “Should We Hit the Pause Button for Online and Blended Learning?” The article cites a new National Education Policy Center report released by Gary Miron and Charisse Gulosino. Dobo points out that the report suggests that “Both fully online (virtual) schools and blended learning schools included in the report tended to fare worse than traditional schools on state assessments of quality.”
What the article does not describe is the flawed nature of the study itself. Upon a closer look, it is clear that virtual and blended learning schools are likely to do a better job than traditional brick-and-mortar schools with expelled, truant, or pregnant students, or students who may have behavioral issues—but accommodations for these different student populations are absent from the report. In fact, in the section of the study that cites limitations, we find this statement:
While comparisons of two inherently different forms of schooling, each representing different geographic datasets, have some obvious weaknesses, national aggregate data is what state and federal agencies typically use in their reports and comparisons.
In my opinion, this statement raises even more questions about the data and student populations being compared. First, the report clearly fails to consider differences in student population. In my extensive experience, virtual and blended programs are used disproportionately with students at risk of not graduating. Online learning can provide scaffolds that increase the likelihood of success for students who struggle with underdeveloped skill sets, absenteeism, or credit deficiency. These scaffolds include credit-recovery courses that reward students for prior knowledge, or software that converts text to speech.
Virtual and blended programs are too diverse to be analyzed as a group, even with selection criteria. Programs that are 100% virtual use completely different instructional models than their blended learning counterparts, and blended learning programs differ greatly in how much instruction is computer-based (screen time) and how much is traditionally delivered. Without that information, it is impossible to make valid comparisons.
Not only that, but the graduation-rate data cited in the study fail to account for students’ academic standing when they enroll in a virtual or blended setting. Often, students are in these settings precisely because they are far behind.
Another variable with a direct impact on student achievement, and one that appears to have been ignored by this study, is school revenue, which may have affected the amount of professional development teachers received for blended learning training or virtual-teacher onboarding.
In short, the recommendations given by this study must be taken with great caution. Here are additional reasons why:
- The recommendation to use smaller class sizes is questionable: the body of research in this area is inconclusive, and the data behind it were collected in brick-and-mortar settings, so it would not apply to a fully virtual or blended setting.
- If there are sanctions for virtual schools, they should match the sanctions applied to brick-and-mortar schools with similar student outcomes, and they should be based on growth data rather than federal outcome data—especially when a graduation-rate metric is included. Students should also be given additional time (to age 20) to complete diploma requirements.
- Simply identifying who operates a school is not enough: the relevant variables include the depth of staff training, the stability of online content, and commitment to the student. On two of these three variables, a privately sponsored school is likely to do a better job.
Research to identify policy options should go beyond simple federal accountability measures. If it did, it would likely show that virtual schools are the most effective way to engage students who would otherwise have dropped out.
I certainly appreciate that Nichole Dobo and the team at Hechinger Report reignited this conversation. New technologies and learning approaches are transforming education, and discussing the pros and cons can lead all of us to challenge our thinking and come up with great ideas.
In fact, the best suggestion in the report was to develop new outcome measures. Those measures should include growth data and account for student history in order to truly reflect the differences between brick-and-mortar and virtual school settings. The categorization of blended schools needs further refinement as well, perhaps with categories for different models (lab rotation, self-blend, etc.) or for the percentage of instruction delivered online. Only with these improvements can a fair comparison be made.