
Solving for the Future: Why AI Errors are a Teacher's Best Friend.

  • gemkeating87
  • Aug 24
  • 4 min read
Created with ideogram

Over the summer, I've been visiting family, catching up with friends, and, crucially, moving countries. This has involved a lot of logistics, decision-making, and problem-solving: everything from "Will this bookshelf fit in our shipping?" to a thrilling round of "If we travel to Southport, will this car journey coincide with our son's nap?" Some of it was aided by AI (I even snapped photos of our bookshelves overflowing with books and records and asked AI to estimate how many there were; it was surprisingly accurate for our shipping records). Other dilemmas, however, required a bit more personal analysis. The process of evaluating and selecting the right approach for a task is a cornerstone of analytical thinking.


As the academic year kicks off again, I've started to consider problem-solving frameworks to implement in my classroom to encourage metacognition in learners.

My usual strategy is one common among math teachers, and a near-universal framework for analytical thinking. We like to break it down into the steps below:


1) Understanding what the question is asking us

2) Devising a plan - this can be via diagrams, trial and error, making equations, working backwards. This involves logical reasoning and strategy formation.

3) Carrying out the plan

4) Reviewing the solution - verification


Optional step 5) Reviewing the whole problem-solving process, identifying what we learnt and considering how this could be applied to future problems - critical evaluation
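For a concrete (and entirely made-up) illustration of steps 2 to 4, here is a minimal Python sketch: the "plan" is working backwards through the operations, "carrying it out" is the arithmetic, and "reviewing" is substituting the solution back into the original equation.

```python
# Toy example of steps 2-4 on a linear equation a*x + b = c.

def solve_linear(a, b, c):
    """Step 2 & 3: devise and carry out a plan - work backwards
    by undoing each operation (subtract b, then divide by a)."""
    return (c - b) / a

def verify(a, b, c, x, tol=1e-9):
    """Step 4: review the solution by substituting it back
    into the original equation."""
    return abs(a * x + b - c) < tol

x = solve_linear(3, 7, 22)    # plan: reverse the operations
print(x)                      # carrying out the plan gives 5.0
print(verify(3, 7, 22, x))    # review: does 3(5) + 7 = 22? True
```

The point of the sketch is that verification is a separate, explicit step, not an afterthought; that is precisely the step students skip when they copy a solution.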


The thing is, with AI, some students will try to circumvent this process and rely on programmes such as MathGPT. If you have not tried MathGPT, let me assure you that your students probably have - I was introduced to it last year by a student.

Screenshot of the MathGPT homepage

The positives of MathGPT and its counterparts are:

You can upload a photo of a problem and it will show you the steps to solve it.

It gives a breakdown of the solution for you to follow and understand.


The issues are:

Steps are not always annotated, so if students don't understand the process, it can be hard to follow the solutions.

Students can use it, write down the steps, and come away with no understanding of the problem.


When students bypass this process with AI, they're not just skipping steps; they're skipping the development of analytical skills.


The eternal question remains: "Who is doing the learning?" If you're using these systems to learn, congratulations! You now have a tireless math tutor available 24/7. If you use this purely for finding the answers and not doing the learning, that's a problem. However, let's rewind to my teenage years, when I was mastering the art of copying chemistry homework from my friend on the bus. Spoiler alert: I learned nothing. Sure, I avoided immediate consequences, but I was left clueless about the subject. This classic scenario of avoiding the hard part of problem solving has been a rite of passage for some teenagers for decades.


In my classroom, I've tried a different approach. I give my students a set time to tackle an exam-style problem, then we review the steps using AI. Think of it as a thinking partner rather than a simple answer key.


This method works wonders for traditional mathematical applications like optimization models and equation creation. Below, I’ve modeled this approach using a question from the IBDP Applications and Interpretations Specimen Paper:


Screenshot of the IBDP specimen paper question

In this example, I asked for a markscheme, and it's mostly correct. As a demo, I screenshotted and uploaded just this one question.


Output from MathGPT

The moment a student finds a solution with MathGPT, as above, a new analytical task begins: evaluating the AI's reasoning. Is the logic sound? Are the steps justified? Could there be a more efficient path? Answering these questions requires a high degree of analytical skill. Simply copying the answer outsources the work, but encouraging students to critique the answer forces them to engage their own analytical minds, turning them from passive recipients into active investigators.


The problem lies with the last question, where the context, and the fact that the x-axis is in thousands, seems to have thrown the model. This is a contextual part of the solution that students, machines, and sometimes teachers overlook.


This error is a masterclass in the importance of analytical thinking. The AI performed the mathematical computation correctly but failed at interpreting the context—a skill where human analysts currently excel. The student's job, therefore, transforms from 'calculator' to 'contextualizer.' Their analytical skill is demonstrated not by solving the equation, but by identifying the discrepancy between the model's output and the real-world constraints of the problem. They must ask, 'Does this answer make sense?' which is the foundational question of all analysis.
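A hedged sketch of this pitfall, with invented numbers rather than the actual specimen question: the algebra below is exactly the kind of thing an AI gets right, while the final rescaling is the contextual step that it (or a student) can miss when the x-axis is in thousands.

```python
import math

# Hypothetical profit model P(x) = -2x^2 + 12x - 10,
# where x is measured in THOUSANDS of units (the contextual trap).
a, b, c = -2, 12, -10

# Break-even points: solve P(x) = 0 with the quadratic formula.
disc = math.sqrt(b * b - 4 * a * c)
roots = sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])

# A purely computational answer stops here: x = 1 and x = 5.
print(roots)          # [1.0, 5.0]

# The contextualised answer rescales by the axis units:
units = [r * 1000 for r in roots]
print(units)          # [1000.0, 5000.0] actual units sold
```

Both print statements are "correct"; only the second answers the question the context actually asked. That gap is where the student's role as contextualizer lives.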


Another place where MathGPT and similar platforms struggle is finance, particularly questions that use the finance solver. As below, I used a specimen paper question from AISL Paper 2. Here, it was completely incorrect: it gave answers and guidance on using the Finance App, even suggesting correct fields for the TVM Solver, but did not reach an accurate solution.


Screenshot of the AISL Paper 2 finance question

I've put the video of this below; you can see that it is trying its best to solve the problem appropriately:


MathGPT vs Financial Math
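For comparison, the underlying calculation a GDC's TVM solver performs is straightforward to reproduce. Here is a minimal sketch, using invented figures rather than the AISL specimen question: the future value of a regular deposit with compound interest.

```python
# Sketch of the annuity future-value formula behind a TVM solver:
#   FV = PMT * ((1 + i)^n - 1) / i
# where i is the periodic rate and n the total number of periods.
# The figures below are invented for illustration.

def future_value(pmt, annual_rate_pct, years, compounds_per_year):
    i = annual_rate_pct / 100 / compounds_per_year  # periodic interest rate
    n = years * compounds_per_year                  # total number of periods
    return pmt * ((1 + i) ** n - 1) / i

# e.g. $200 deposited monthly at 5% p.a., compounded monthly, for 10 years
fv = future_value(200, 5, 10, 12)
print(round(fv, 2))  # roughly 31056
```

Having students reproduce one of these by hand (or in a few lines of code) is a useful counterweight: when the AI's finance answer disagrees with the GDC, they can check which one the formula actually supports.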

After using this with students and colleagues, we suspect that Analysis and Approaches style questions (and A-level), often referred to as "quadratics and simultaneous equations in disguise," work effectively with platforms like MathGPT. The same holds for MYP and IGCSE style questions, which are less context-heavy.


This reliance on platforms sometimes left students unsure whether they were getting accurate answers. After several rounds of "This is what the markscheme says" versus "But MathGPT says different," many students concluded that while the steps were useful, the inaccurate results were frustrating.


This student frustration is a valuable learning moment. It teaches a key component of analytical thinking: skepticism and verification. Just as a doctor uses diagnostic tools but relies on their expertise for the final judgment, students must learn to treat AI outputs as a 'first opinion,' not a final truth. The process of cross-referencing, questioning, and ultimately synthesizing information from various sources (the AI, the textbook, their own knowledge) to arrive at a correct conclusion is a powerful analytical exercise.


Using AI in this manner can, when done with consideration, amplify lessons in analytical thinking and problem-solving, providing a framework that rewards attention to detail and creating opportunities to refine critical thinking within the mathematics classroom.


Happy analysing!
