Since its release in 2014, the learning-management platform Google Classroom has quickly become one of the more popular online tools in K-12 schools. Adoption ramped up dramatically with the mass switch to virtual instruction that followed COVID-19-related school closures in March: This spring, Bloomberg News reported that the number of active Classroom users worldwide had doubled, to 100 million.
So how is this increasingly pervasive educational platform changing teaching and learning?
The effects are subtle but significant, argue a team of researchers led by Carlo Perrotta, a senior lecturer in digital literacies at Monash University in Melbourne, Australia. As Google permeates schools with the “logics of datafication, automation, surveillance, and interoperability,” the researchers argue in a paper published earlier this month, the corporation is funneling teachers and students alike into a narrow set of activities that software developers and business strategists have determined count as legitimate pedagogy.
The convenience of tools like Classroom is hard to deny.
But while the platform’s impact on students’ development is still uncertain, Perrotta and his colleagues write, what is clear is that student Classroom users are helping Google learn by providing massive troves of data the company uses to refine the artificial intelligence and machine-learning algorithms that power its most popular consumer tools. In the process, the researchers argue, Google is buttressing its broader business strategy.
“The moulding of Classroom users into datafied Google users represents a corporate ‘long game’ entirely consistent with its overall strategic outlook,” the paper reads.
With the U.S. Department of Justice and 11 state attorneys general now suing Google for alleged antitrust violations related to consumer products like Search, the paper offers a timely look at the company’s expanding role in schools.
The following transcript of Education Week’s conversation with Perrotta has been edited for length and clarity.
How is Google Classroom changing the work of teachers and students?
Infrastructures are things made by people to organize social life. Think of roads, or train lines. We looked at the way the Google Classroom platform is emerging as an infrastructure for pedagogy. It has features and properties that channel and organize the work teachers and students do.
All sorts of tasks are now offloaded onto the platform, onto third-party integrations, and onto parents and guardians. Teachers often no longer have a say about what functionalities get integrated into their classrooms. A system administrator now makes that decision. Teachers are required to accept it. They become a cog in this infrastructure.
Also, there is a degree of platform literacy that is now required to teach and learn. A lot of pedagogy becomes about how to engage with the platform correctly. The ability to engage meaningfully with the platform increasingly cannot be separated from actual teaching and learning. That benefits Google first and foremost. It gets users used to the Google environment, so when they leave Classroom, they will keep engaging with the Google ecosystem.
Isn’t it a good thing to automate some of the rote tasks required of teachers?
There are a number of mundane day-to-day activities that can be automated, and that obviously has benefits for teachers. But the way the system is structured takes away a degree of agency.
There’s also this idea about the “cascading logic of automation.” It starts with rote and routine tasks whose automation is hard to argue against. But automation then begins to colonize other aspects, and it becomes increasingly dominant and pervasive, a way of organizing a particular activity. We can see it happening in policing and health care. Something similar can happen in education.
Can you give a tangible example of the problems you see with this in education?
The best example is actual literacy, learning how to read and write. Google Classroom now automates the process of originality checking, so it can be carried out by Google Docs itself. Teaching students how to engage appropriately with original material and explaining originality in a way that students can understand is a pedagogic process. But if that becomes automated, and it’s just Google telling us what is original and what’s not, it takes away the pedagogical dimension and becomes a matter of surveillance. Mistakes get flagged as a problem, rather than being treated as a teachable moment.
Why is Classroom different from the other learning management systems, some of which have been in use for more than a decade?
The short answer is Google. The company operates at an unprecedented scale, and in many ways it invented the business model of extracting and using data from users. The platform economy has monopolistic tendencies. That finds its way into classrooms in indirect ways.
You also write in the paper about the impact of Google Classroom’s API, which prescribes a fairly narrow framework of supported teaching activities, such as assigning quizzes and submitting assignments.
The idea is that platforms operate by creating frameworks for other tools to work together and for users to engage with the platform. Whether it’s Facebook, Twitter, Google, there are certain predetermined ways you can engage. Those are determined through a design process. An API is part of that process. It determines what counts as a legitimate user action. We call it a ‘data ontology.’ It determines what is actually ‘real’ in a particular context. But this ontology is actually arbitrary. Developers and corporations make those decisions in the interests of efficiency. It’s not like Google engaged with pedagogy experts to come up with the data ontology behind its API.
For teachers, though, the tendency becomes to go with what the platform allows. If certain teaching activities don’t fit within that particular framework, they require additional work, technical skills, time, all things teachers may not have. The risk is that teachers just adapt and go with the flow of what Google allows rather than challenge it with something more pedagogically meaningful.
You also write about Google Classroom’s “underlying logic,” which you describe as focused on neutrality, personalization, and predictive capacity. Why is that troubling?
In education systems around the world, we see a huge focus on measurement, accountability, tests. The negative effects on teaching from these regimes of accountability, such as the narrowing of curricula, have been widely documented. This is the ground upon which Google Classroom is building its dominance. In the process, it’s exacerbating those problems. The ideas of predicting student success and personalizing education are happening on the back of problematic developments in education.
How do data privacy concerns fit into this, especially given what you describe as Google’s “extractive” business model?
Google is clear that any data it collects through Classroom is not used to profile users or to target them with ads. But the moment users step out of Classroom, the traditional extractive model applies. If a teacher assigns a YouTube video to watch, that extractive model applies. We call it a leaky pipe.
The integrations are also a danger in their own right. They’re basically the Wild West.
And even the data collected within the confines of Classroom is still used to refine Google’s tools. They use the data collected from Google Docs, for example, to train the algorithms behind the company’s AI-powered features. Anyone who uses Google Docs is contributing to that process.
So Classroom is not really a fully closed environment. There are gaps and holes. The current regulatory framework is unable to keep up. We suggest this framework should change to make Google more accountable as an educational actor that is shaping these dynamics in an active way.
Researchers, journalists, and advocates have been raising these privacy issues for years. But as you note, the adoption numbers for Classroom have exploded. Are schools making the same calculation as consumers, that the convenience provided by this service is so great that they’ll just brush aside more complicated concerns?
That’s definitely a fair assessment. You can apply the same logic to many other areas where platforms become dominant. They make life easier, and their efficiencies are undeniable. So people just go along with them without questioning the problems. But it shouldn’t come down to individuals having to make those decisions. It’s increasingly clear that individuals in their personal lives find it difficult to resist, or even to question, the underlying logic of these platforms. There needs to be a broader political debate about regulation, and about what these platforms are doing to society.