The Tropical Twist: Why "Clean" Math Fails in an AI World
- gemkeating87
- Jan 12
- 3 min read
As I progress through my Master’s in Education Technology, I’ve become increasingly preoccupied with a specific hurdle in the math classroom: Invalidity Blocks. According to the Teaching and Learning Research Programme (TLRP, 2009), students often become so reliant on routine, "sanitized" mathematics that they struggle when faced with the "messiness" of the real world.
Last week, I decided to tackle this head-on using a Goodness of Fit test, a bag of Skittles, and a healthy dose of AI skepticism.
The Setup: See, Think, Wonder
We began with a simple "See, Think, Wonder" routine. I displayed a bag of Skittles and posed the question: "Do purple Skittles appear less often than the other colours?"
Students researched Mars Wrigley's claim of a roughly 20% distribution per flavour and designed their experiments. They chose sample weights of Skittles, determined the expected probabilities, and thought about what the observed data should look like.
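As a sketch of the test the students were building toward, here is a minimal chi-squared goodness-of-fit calculation. The observed counts and the critical value are illustrative assumptions, not classroom data:

```python
# A minimal sketch, assuming the chi-squared form of the goodness-of-fit test.
# The observed counts below are made-up example data, not classroom results.
def chi_square_stat(observed, expected):
    """Sum of (O - E)^2 / E across the five colours."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

observed = [12, 10, 9, 11, 8]      # red, orange, yellow, green, purple
total = sum(observed)
expected = [total / 5] * 5         # null hypothesis: 20% per colour

stat = chi_square_stat(observed, expected)
critical = 9.488                   # chi-squared critical value, df = 4, alpha = 0.05
print(f"statistic = {stat:.2f}, reject 20% hypothesis: {stat > critical}")
# -> statistic = 1.00, reject 20% hypothesis: False
```

With counts this close to uniform, the statistic stays well below the critical value, which is exactly what students should expect from a genuine bag of original Skittles.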
I checked their plans, had their peers critique the tests, and asked students to adjust based on that feedback, following Laurillard's cycle of adaptive feedback.
But then, I introduced a digital "sparring partner": Generative AI.
The SAMR Leap: AI as a Critical Mirror
Using Puentedura’s SAMR model, I didn't want the AI to just substitute for a calculator. I wanted to move toward Redefinition. I asked students to prompt an AI to simulate the distribution for Skittles colours.
The AI dutifully churned out a roughly 20% distribution for whatever quantity of Skittles the students specified - they picked everything from 10 boxes of 36 packs to 10kg to 100,000 Skittles. It looked authoritative. It looked "correct." But there was a glaring flaw: the AI didn't ask which Skittles.
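A rough idea of the kind of simulation students prompted for can be sketched in a few lines, assuming the AI's uniform 20% mix of the five original colours (the sample sizes here are illustrative):

```python
# Hedged sketch: draw n Skittles assuming a uniform 20% chance per colour,
# as the AI did by default. Sample sizes are illustrative.
import random
random.seed(1)  # fixed seed so the run is repeatable

COLOURS = ["red", "orange", "yellow", "green", "purple"]

def simulate(n):
    """Return the simulated proportion of each colour in a draw of n Skittles."""
    counts = {c: 0 for c in COLOURS}
    for _ in range(n):
        counts[random.choice(COLOURS)] += 1
    return {c: counts[c] / n for c in COLOURS}

for n in (360, 10_000, 100_000):
    print(n, round(simulate(n)["purple"], 3))
```

At every scale the purple share hovers near 0.2, which is why the output looked so authoritative: the simulation can only echo the assumption it was given.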
The Tropical Twist
This is where the "messy mathematics" came in. I introduced Tropical Skittles and Yogurt Skittles into the mix - two variants that contain no purple Skittles at all.
The room shifted. One student noted it "wasn't fair" because they hadn't been told to account for variants. Another countered that she'd said "only use original skittles" in her prompt. However, this frustration is exactly where de Aldama’s Cognitive Enhancement occurs. By introducing a novel, unexpected variable, we forced the students to move from passive consumption of AI data to active, analytical critique.
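The effect of the twist is easy to sketch numerically: mixing in bags with no purple at all drags the overall purple share below the 20% the AI assumed. The bag counts below are hypothetical:

```python
# Sketch of the "tropical twist": original Skittles are ~20% purple,
# while Tropical and Yogurt variants contain no purple at all.
# The counts passed in are hypothetical examples.
def purple_share(original_count, variant_count):
    purple = 0.20 * original_count              # purple only comes from original bags
    return purple / (original_count + variant_count)

print(purple_share(500, 0))    # all original -> 0.2
print(purple_share(500, 500))  # half Tropical/Yogurt -> 0.1
```

Once the observed purple share sits at 10% rather than 20%, the same goodness-of-fit test the students designed will reject the "clean" hypothesis - unless they notice that the population itself has changed.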
The Analytical Framework: ABCE
To guide their reflection, students used an Analytical Thinking Framework built on four pillars:
Accuracy: Was the AI's math correct?
Bias: Does the location of manufacture (Hong Kong vs. US) change the "standard" mix?
Completeness: Did our initial experiment omit the existence of variants like Tropical or Wild Berry?
Ethics: Was it "ethical" for the teacher (or a data provider) to withhold information about the Tropical variant?
Why This Matters for Future Learning
By using Laurillard’s Conversational Framework, we created an adaptive feedback loop. Students engaged with the content, sought feedback from me, and then acted as "critical friends" for each other.
The result? They weren't just doing math; they were practicing Data Agency. They realized that:
AI is a "Black Box": It defaults to standard assumptions (the 20% rule) unless challenged.
Omission is a Bias: Ignoring flavor variants leads to "clean" but "invalid" results.
Context is Queen: Global data looks different when you consider regional production differences.
You Can Do This Too
You don't need to be a tech expert to run a lesson like this. You just need to be willing to let the lesson get a little messy. By intentionally designing tasks with "flaws," we move students away from cognitive decline and toward the higher-order thinking skills they need for a future where AI is everywhere.
Next time I reach for a textbook problem, I will be asking myself: How can I add a "Tropical Skittle" to this context?


