Do We Need a Human in the Loop?
The primary user of generative AI tools may make all the difference.
The AI era is quickly expanding across the world of education. MagicSchool, Inc. was founded in 2023 and already claims more than 6 million users and more than 20,000 schools. With the rapid adoption of these technologies (specifically, machine learning and large language models) come wild claims.
Take, for example, Alpha School. This private school has recently expanded to 14 sites around the United States after its initial launch in Texas. The $65,000-per-year school has embraced AI at its core. There are no teachers, only guides. The school claims its 2-hour learning model produces twice the academic gains in a fraction of the time (independent research supporting this claim is difficult to find). Students receive their core academics through a proprietary AI-enabled tutoring program that individualizes the educational experience for every student. After the 2-hour academic blocks are complete, students are free to explore project-based learning activities as they interact with their guides and other students.
Walking the halls of tech-focused education conferences, you will find plenty of programs that put AI directly in front of students. The major AI systems from OpenAI, Anthropic, Google, and others come with tutoring modes for students to ask questions and receive individualized support. Early evidence on students using generative AI without significant guardrails has not been encouraging. Perhaps this model, where students are given direct and almost unrestricted access to AI, is not a good solution. There is another model that might be a better approach.
Compare Alpha School’s 2-hour learning model to Paloma Learning, where AI is used to support human tutors. Paloma uses generative AI to create research-based instructional materials that teachers send to parents. The parents then deliver tutoring on literacy and math at home with their children, with support from the program. This is a so-called “human-in-the-loop” model, where a human is part of the process and can shape the output of the AI system. One early study has found that this AI + human tutoring approach can be just as effective as a human-only tutoring approach.
On its surface, being just as good but not better than a traditional human-only tutoring approach seems like a non-starter. If humans alone are just as good, why bother? The difference is that a human + AI approach can recruit thousands or even millions of people to become tutors with limited support from teachers. Paloma Learning is making a bet that being able to assign tutoring to parents and giving them AI-generated instructional materials will result in better academic gains than AI-only tutoring.
Unfortunately, it is too early to know for sure whether human-in-the-loop AI models are better than the widely available direct-to-student models. Rigorous research conducted in the real world takes time. I will admit that I have placed a personal bet on the human-in-the-loop model. Eddie, my company’s product, is designed to give teachers answers to questions about what works in the classroom.
I am assuming that educators are best positioned to make the right decisions for their students. Eddie speeds up the process of researching what works and of creating instructional materials such as lesson plans and assessments. The human in the loop is the educator who delivers the instruction. The fundamental question of the moment for AI in education is whether the technology is best when it faces the student or when it supports an adult.