The Dolly Parton Dilemma: Teaching Students the Critical Veto in the Age of Agentic AI
- gemkeating87
- Oct 5
- 3 min read
Have you had conversations about Personal Autonomy and Consent in the age of AI? Are we teaching our students to wield the ultimate critical tool: the power of the veto?
As part of our experiential learning, my students and I choose projects to explore. When my students asked about my own, I told them: an agentic voice for peace based on Dolly Parton, an agent that generates ideas to support literacy and philanthropy. For me, she is a role model for the ages. But here is the critical rub: I don’t feel right about feeding her likeness, thoughts, and ideas into the big black box without her consent. Because of this, I have not brought the project to fruition.
This ethical dilemma - using a public figure's identity, even for a cause as pure as peace - is what led us to the vital, student-facing conversation we need to be having right now. The ethics of unauthorized celebrity usage are complex, but the ethics of protecting our students’ own likeness and intellectual property are immediate and non-negotiable.
The Consent Crisis
We are facing a Consent Crisis. Tools like Sora 2 constantly encourage users to upload their photos and videos into a "big black box." The capabilities are amazing, but we must protect our students from passively surrendering their likeness to AI. They need safeguards.
This is where the discussion shifts from celebrity ethics to personal autonomy. To work around the issue for creative projects, I’ve suggested students use a resource like thispersondoesnotexist.com to create a placeholder avatar or persona. This simple step immediately forces consent to be the core principle of their AI creative process.
The stakes are rising. We are seeing more and more deep-fake style videos. How will we know if something did or did not happen? While a video of my niece turning into a bumblebee is easily refuted, it’s harder to tell when complex data or subtle events are fabricated. This is a challenge for Data and Information Literacy that extends far beyond a fact-checking exercise.
The Agentic Difference: Doer vs. Responder
This is why we must teach the difference between a simple chatbot and an AI Agent.
| Feature | Chatbot (Responder) | Agentic AI (Doer) |
| --- | --- | --- |
| Core Function | Respond: answers a query. | Act: sets goals, plans, and executes tasks. |
| Workflow | Single-step. Answers one question and stops. | Multi-step/planning. Breaks a goal into smaller, executable steps. |
Agentic AI systems can autonomously plan and act. Our job as educators is to teach our students to critically audit that plan before it is executed.
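The "audit before execution" idea can be sketched in code. This is a minimal, hypothetical illustration, not any real agent framework: the planner, step names, and approval function are all invented for the example. The point is structural — the agent only proposes; nothing runs until a human reviewer approves each step.

```python
# Hypothetical sketch of the "human veto" pattern: an agent proposes a
# multi-step plan, and only human-approved steps are ever executed.

from dataclasses import dataclass

@dataclass
class Step:
    description: str

def propose_plan(goal: str) -> list[Step]:
    """Stand-in for an AI planner; returns a fixed plan for illustration."""
    return [
        Step("Identify the known values in the problem"),
        Step("Choose the finance formula that fits"),
        Step("Enter the values into the calculator"),
    ]

def human_veto(plan: list[Step], approve) -> list[Step]:
    """Only steps the human reviewer explicitly approves survive."""
    return [s for s in plan if approve(s)]

def execute(plan: list[Step]) -> list[str]:
    return [f"Executed: {s.description}" for s in plan]

plan = propose_plan("Solve a compound interest problem")
# The reviewer vetoes the calculator step, keeping execution in human hands.
audited = human_veto(plan, lambda s: "calculator" not in s.description)
print(execute(audited))
```

The design choice mirrors the classroom: `execute` is only ever called on the audited plan, never on the agent's raw output.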
A Controlled Embedding: Practising the Veto
The concept of personal autonomy starts with simple things. Last weekend, while prepping for the school bake sale, my son insisted on full process control. He wanted the autonomy to measure the flour, pour the batter, and shape the cookies himself. I provided the parameters; he demanded the execution.
That fierce desire to do the thing, rather than watch it be done, is precisely what we must cultivate in the classroom. Our role isn’t to keep students away from powerful tools; it's to teach them to critically veto a tool's instructions and retain their own intellectual agency.
In my Financial Maths class, we use a simple tool in an agentic manner: a Custom GPT loaded with the manual for the students' specific graphics calculator (a TI-Nspire CX II Custom GPT). This allows us to practice auditing the agentic process in a safe, contained environment.
1. The Agent’s Plan (Autonomous Action)
Students are given a multi-step Financial Maths problem. Their task is to prompt the Custom GPT for the exact sequence of buttons and syntax needed for the calculator to solve it. The AI acts by autonomously generating a complete solution plan.
2. The Critical Intervention (The Human Veto)
The students do not touch the calculator. This is the non-negotiable moment.
The AI-generated sequence of steps becomes the object of study. Students must rigorously examine the methodology against their own mathematical knowledge, identifying the formulaic justification behind each input and annotating the steps. The focus is not on finding the answer, but on the analytical verification of the machine’s reasoning.
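One way to make that "analytical verification" concrete is to recompute the answer independently from the underlying formula, rather than trusting the suggested button sequence. A minimal sketch, with invented numbers (the principal, rate, and the agent's suggested answer are all hypothetical):

```python
# Illustrative verification: check a machine-suggested answer against the
# compound interest formula A = P * (1 + r/n) ** (n * t), instead of
# accepting the AI's calculator steps at face value.

P, r, n, t = 1000.0, 0.05, 12, 3        # principal, annual rate, periods/year, years
suggested_answer = 1161.47              # hypothetical value from the AI's button sequence

A = P * (1 + r / n) ** (n * t)          # independent recomputation
assert abs(A - suggested_answer) < 0.01  # veto the plan if this check fails
print(f"Verified: A = {A:.2f}")
```

If the assertion fails, the plan is vetoed and the student goes back to the methodology, which is exactly the habit the exercise is meant to build.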
The Ultimate Guardrail: Intellectual Honesty
By practicing the human veto, students learn:
Autonomy: They must own the intellectual process, even if a machine provides the suggested steps.
Verification: They are forced to demand a verifiable framework of proofs to determine the plan's authenticity and correctness.
We are moving our students past simply being consumers of information to becoming critical auditors of action. This is the true equation we need to solve: ensuring that in an age of automated AI action, the student always holds the ultimate, informed veto.