AI vs. Machine Learning and Education’s Future

Personalization and the Limitations of Machine Learning Algorithms
Dr. Cathleen Norris & Dr. Elliot Soloway

“Personalized learning” is a pedagogical technique, fueled by machine learning algorithms, that is gaining considerable visibility in today’s K-12 marketplace. And there is even some empirical evidence that personalized learning applications are leading to improved student achievement. Given how hard it is to find positive impacts of technology on student achievement, educational technologists welcome these empirical findings.

These are heady times for the promoters and purveyors of personalized learning systems. From their breathless pronouncements, one has the sense that personalized learning is about to penetrate all of K-12 – replacing teachers and improving student achievement at the same time! Indeed, a leading provider of personalized learning systems asks: “Is artificial intelligence the next big wave in #edtech offerings?”

Careful, careful – Artificial Intelligence is not machine learning, as we argue below. While Artificial Intelligence is all about mimicking natural intelligence, machine learning algorithms are about uncovering patterns in data. Before educators jump headlong into “the next big wave,” we suggest that they equip themselves with a deeper understanding of the difference between machine learning algorithms and Artificial Intelligence. Herewith, then, is a “CliffsNotes” summary of that difference.

  • Three-year-old <looking up with a mischievous grin>: “Dad, why does Mopsie <urinate> all the time on our walks?” 
  • Dad <looking down, trying not to show exasperation and consternation>: “Well, son, cough, cough, ahem, ahem, Mopsie needs to mark the trail so she can find her way home.”

Bad answer, right? Dad can immediately see another “why” question forming…

  • Three-year-old <deep in thought>: “But Dad, Mopsie’s not going to get lost; we are taking her for a walk.”

And so it goes! A normal three-year-old engaging in normal three-year-old behavior: asking “why” questions all the time. Besides enjoying driving their parents to distraction, the three-year-old is engaging in a deeply human intellectual activity: asking why something is the way it is out of genuine curiosity, and then engaging in a conversation to explore the situation further – before moving on to another seemingly random “why” question to further irritate and exasperate.

  • You: “Google, why did you route me through <remember the time that Google Maps took you in a circle>?”
  • Google Maps: “… factor 34 is .7, factor 39939281 is .9, factor 294950 is .8 …”

Bad answer? No! In fact, Google Maps’ “answer” is not an “answer” in the sense that humans understand the notion of “answer.”

While the three-year-old and Dad are doing “cognitive computing” as they engage in their conversation, Google Maps has been using machine learning algorithms – algorithms that are statistical in nature, looking for correlations amongst factors in order to make decisions. The humans and the computer in our sample conversations are computing in two fundamentally different manners.
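To make that concrete, here is a minimal sketch of what a statistically based decision procedure looks like. The factor names, values, and weights below are entirely hypothetical – this is not how Google Maps actually works – but it illustrates the point: the procedure picks the option with the best weighted score, and the only “why” it can offer is a list of numbers.

```python
# A toy statistical decision procedure: score each option as a weighted
# sum of factor values and pick the highest score. (Hypothetical factors
# and weights, purely for illustration.)

def score(route, weights):
    # Weighted sum of factor values -- a stand-in for a statistical model.
    return sum(weights[f] * route[f] for f in weights)

weights = {"traffic": -0.7, "distance": -0.2, "historical_speed": 0.9}

routes = {
    "highway": {"traffic": 0.8, "distance": 0.3, "historical_speed": 0.9},
    "side_streets": {"traffic": 0.2, "distance": 0.6, "historical_speed": 0.6},
}

best = max(routes, key=lambda r: score(routes[r], weights))
print(best)
# Asked "why?", all this procedure can report is:
for factor, w in weights.items():
    print(f"factor {factor} is {w}")
```

The "explanation" is just factor weights – exactly the kind of non-answer in the Google Maps exchange above.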

And there you have it, sports fans! The difference between Artificial Intelligence and machine learning.

Not so fast, Slick! Where did “Artificial Intelligence” come into the picture? We were talking about natural intelligence and machine learning.

Yes… but the goal of Artificial Intelligence research has always been to understand how cognitive computing – natural intelligence – works; i.e., the goal of Artificial Intelligence is to compute “like” a human. Back to the Mopsie snippet: to carry on that conversation, a cognitive computer – a computer that reasons using Artificial Intelligence – would need to understand a whole lot of stuff: the goals of three-year-olds, the goals of parents, the goals that parents and three-year-olds impute to pets, the types of behaviors that can be expected from pets on a walk, the structure of a walk-the-dog scenario, the discourse patterns that can take place between parents and three-year-olds, etc.

And, by the way, Dad’s explanation of Mopsie’s behavior was technically incorrect. But Dad had a good reason for explaining it that way: he wanted to teach his three-year-old about watching where you are going and being able to find your way back home. Phew! Yes, making an artificially intelligent computer engage “like” Dad is really hard to do – as researchers in artificial intelligence have long acknowledged.

A Google Maps app that plans routes “like a human” would certainly be a valuable app since, in principle, it could answer the question “Why did you route me …” in a human-comprehensible format. However, how effective at route planning – and question answering – such an app might be is truly an open question. Indeed, how effective are you at route planning – and question answering?

Now, in this post we have gone to some trouble to make a distinction between machine learning and Artificial Intelligence. Is this effort just a pedantic enterprise, with little at stake? No; quite the contrary! But to explain why making that distinction is important, we need to talk history for a bit.

In the 80’s, another computing technique, “expert systems,” arose out of work in Artificial Intelligence – just as machine learning has recently arisen out of work in Artificial Intelligence. And, just as everyone is excited about machine learning today, back in the 80’s everyone thought that expert systems would solve all the world’s problems: experts, after all, just have a set of If-Then rules in their heads, so talk with the experts, capture their If-Then rules, and bingo bongo – medical diagnosis, financial investment, etc. would be solved. And in the 80’s, the term “expert systems” became synonymous with the term “Artificial Intelligence.”
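For readers who never met one, an 80’s-style expert system can be sketched in a few lines: knowledge is captured as If-Then rules, and an “inference engine” applies them repeatedly (forward chaining) until no new conclusions emerge. The rules below are invented for illustration – no real diagnostic system is this simple, which is rather the point.

```python
# A toy expert system: If-Then rules plus a forward-chaining inference
# engine. Each rule says: IF all conditions are known facts, THEN add
# the conclusion as a new fact. (Rules are illustrative only.)

rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # Fire the rule if its conditions are satisfied and it adds
            # something new; repeat until no rule fires.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "short_of_breath"}, rules))
```

Note how brittle this is: any situation the rule author did not anticipate simply produces no conclusion at all – a hint of why the technique proved less universal than the 80’s hype suggested.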

Astute readers – which means anyone reading this blog post – can probably fill in the rest of this story.

It turned out, in the 90’s, that while an expert system could be beneficial in some contexts (e.g., mixing Campbell’s soups), the expert system technique was not as universally applicable as initially thought. Surprise, surprise. The upshot? AI Winter. Artificial Intelligence research took the fall, with funding for – and respect in – AI drying up.

Machine learning today is what expert systems were in the 80’s.

  • Memo to AI researchers: Buy a warm coat.   
  • Memo to educators: Proceed with caution!

A parent can ask a human teacher – and a true artificially-intelligent system – “Why did you assign problem X to my child” and expect to receive an answer couched in terms of pedagogy, psychology, and domain content. However, a typical personalized learning system that is powered by statistically-based, machine learning algorithms will respond much as Google Maps would respond: “… factor 34 is .7, factor 39939281 is .9, factor 294950 is .8 …”

Given such an unhelpful answer, could a parent ever really trust a computer to educate his/her child? And how could a school administrator justify the use of such algorithms? Will parents – or teachers – accept the following sort of justification: “Well, research data indicates that the algorithm usually ‘works’ – but we can’t really explain why the algorithm usually works other than to say there are patterns in the data, and we can’t really know when it doesn’t or won’t work – the algorithm, after all, is just based on statistical probability.”

Machine learning – not AI – does have the potential, through advanced courseware, to ease the strains on teachers and to make personalized learning techniques genuinely valuable. But as we see it, given all the face-planting and chaos that follows when districts pressure teachers to “go digital,” those possibilities will only be realized once leaders and teachers change their philosophical stance. Don’t step blindly into the abyss of the future; take chances, yes, and be willing to fail forward – but see our colleague Roger Schank’s withering analysis of the Bob Dylan–IBM Watson commercial. And certainly don’t be the educator who has to explain why all that money was spent on “hula hoops” – on a fad for educating children.

Early in their careers, both Norris and Soloway considered Artificial Intelligence their research area. Norris did R&D in Artificial Intelligence and computer-based tutoring systems. Soloway wrote his 1976 PhD dissertation about a machine learning program he developed that acquired the rules of baseball just by watching games. Soloway tells this about his conversion to education: “My wife and I had a child, and it occurred to me that making children smarter would be a much better use of time than making computers smarter.” A high-school math teacher for 14 years, Norris brings an understanding of the classroom that has fueled their productive partnership over the past 20+ years.

Cathleen Norris and Elliot Soloway work as a team – bringing education and technology together – what a concept! Cathleen is Regents Professor & Departmental Chair, College of Information, Department of Learning Technologies, University of North Texas. Elliot is an Arthur F. Thurnau Professor, Department of CSE, College of Engineering, University of Michigan.
