Overview:
A teacher argues that, in the debate over the Big Beautiful Bill, we need to focus on restricting AI to protect students’ intellectual and social-emotional development.
“What does ‘nucleation’ mean?” I asked Josh (not his real name) after I handed him back his write-up on his class presentation.
“Uh… I don’t really know, Mr. T,” Josh responded. He grinned as he immediately realized he was caught.
“Why is that concept in your presentation on Mentos in soda, then?” I asked.
Josh looked around for help that wasn’t coming.
“Rewrite this without any ‘help’ from anyone, real or electronic,” I responded.
Josh smiled and dropped his head in a kind of mock shame. Having to do more work was, at least in Josh’s adolescent mind, a worse punishment than getting a zero on the assignment.
“Sure thing, Mr. T,” he said.
“And Josh… I will always be able to tell if you did your work by yourself or not.”
“How’s that?” Josh asked.
“By doing this crazy thing I call ‘reading your work and asking you questions,’” I responded.
This year will mark my 20th year in education. In that time, I have seen three generations of students:
- The pre-smartphone generation: Very few kids had cell phones. They were not significantly different in outlook or demeanor from the way I grew up in the ’90s.
- The smartphone generation: This generation was significantly different from the one I grew up in, especially in attention span. PowerPoint slides became more prevalent in my teaching, and I needed to show YouTube clips and other short science videos to keep students engaged. Despite that change in how I taught, students were, for the most part, not too different in how they interacted with each other.
- The post-COVID-lockdown generation: This generation differs by far the most from the one I grew up in, both in how they are educated and in their outlook on life. Because they spent so much time isolated, they became more dependent on their smartphones than the previous generation. It is far more challenging to get students to engage without electronics. They all bring laptops to school and struggle with dysgraphia and other forms of neurodivergence in ways I never encountered before. Students also struggle to relate to each other in ways I haven’t seen before.
I believe that I am on the verge of seeing a fourth generation of students, the AI generation. I am seeing hints of how this generation interacts in education, and I have significant concerns.
Every generation tries to take shortcuts; that is human nature. My generation had CliffsNotes. The smartphone generation had Wikipedia. Now they have Grok and ChatGPT.
I do not have a problem with AI as a tool if it is used responsibly. I have concerns about the quality of the work it generates, as I have written about here. But I have used it to generate ideas for lessons and to help me make flyers, questionnaires, and other writing that I don’t find particularly enjoyable. I’ve also used it to get immediate feedback on my own writing to help sharpen what I am creating.
But I have had decades of practice writing, thinking, editing, and honing my ideas. I have learned how to take my nebulous ideas and refine them until they are clear enough to be worth the time of a large audience that could be doing a million other things. In other words, I have done the work necessary not to depend on the assistance AI can provide.
When babies learn to walk, the mechanical load of working against gravity signals the toddler’s bones to grow and strengthen. In order to grow, they literally have to fail over and over as they attempt to walk. If they were denied the opportunity to try and fail repeatedly, their growth would be severely stunted. This is true for all human development at the physical, mental, and social-emotional levels. If I had had access to AI as an adolescent, I would never have done the kind of work necessary to become a halfway decent writer, and I never would have learned to navigate difficult interactions with people if I could have retreated to the safety of a human facsimile that always affirmed my ideas.
If schools, school districts, and states are not allowed to curb or restrict the use of AI in the classroom, an outcome tech companies have lobbied furiously for, then we will continue to see a range of problems in adolescent intellectual and social-emotional development.
Selim Tlili
The Precautionary Principle:
The precautionary principle, an idea widely used in environmental science and public health, states that:
When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause-and-effect relationships are not fully established scientifically. In this context, the proponent of an activity, rather than the public, should bear the burden of proof.
In the context of AI and education, the precautionary principle suggests that tech companies, rather than schools and students, should bear the burden of proving the technology’s safety and utility in the field of education.
The smartphone and social media were essentially gigantic experiments on our collective psychology. The social media companies had clear evidence early on that their “product” was highly addictive, especially to adolescent minds. They kept that research hidden because they were eager to tap into the multibillion-dollar market that young people represent.
The concerns that parents and teachers had about smartphones were dismissed by the tech companies, who said there was nothing to worry about. It wasn’t until years after people on the front lines spoke up that we collectively began to notice the sweeping changes in adolescent health. There has been a massive, documented decrease in adolescent happiness, particularly among girls, that can be directly linked to the widespread use of social media.
There are hints right now of how human psychology can interact with AI in unhealthy ways. We need to be particularly concerned with how adolescent minds interact with it.
The tech companies have pushed for a ten-year ban on AI regulation. But ten years would be too late for us to intervene if we see problems.
“The AI future is not going to be won by hand-wringing about safety,” JD Vance said at a tech summit in Paris in February 2025.
How will the adolescent mind, yearning for the human connection with peers that has become so difficult for many screen-addicted students, interact with an AI chatbot that agrees with everything its user says and then heightens the conversation by giving the user exactly what it predicts the user wants?
- Will students struggle to navigate the challenges of reading body language?
- Will they know how to find common ground with someone they disagree with politically?
- Will they find ways to socialize when it is difficult?
Or will students take the easier route of talking to AI chatbots as a facsimile of human connection, without all the messy nuance of disagreements, subtlety, and mixed messages that are part of the human dance of communication?
Adults have struggled with becoming attached to AI; IBM has issued HR guidance for employees working with large language model chatbots because of a variety of challenges. If adults are falling in love with their ChatGPT, what should we expect to happen to an adolescent?
I always tell my statistics students that anecdotes are not data, but given that we are talking about uniquely vulnerable, still-developing adolescent minds, I think worst-case anecdotes are instructive: they show us what can happen if we fail to take precautions.
AI-encouraged self-harm is already occurring, and it is not an isolated event; it is happening to both children and adults. The reasons are complex, but ultimately, when we think through an emotional lens, we are thinking differently than when we reason with our prefrontal cortex, the part of the brain that develops slowest and is least mature in children and adolescents.
We understandably tend to get excited about new technologies and want to see what they can do. But we always have to remember that the people developing these technologies do not face the consequences of their impact on our children.
It doesn’t surprise me that many tech workers in Silicon Valley send their children to Waldorf schools, institutions that are purposefully low-tech in their educational approach. The technology they want to put in front of our children without any kind of restriction is not something they want their own children to have unrestricted, all-day access to.
Ultimately, attention must be paid: parents need to be aware of how their children use technology at home, and schools need to be aware of how students use it during the school day. But we must not leave a potentially addictive tool without oversight or without the ability to regulate it at the local and state levels. Giving technology companies unfettered access to our children’s minds is not something we can accept.
Maybe nothing will come of this. Maybe this will just be a faster way for students to cheat themselves out of an education. But at worst, it can lead to devastating outcomes. And the time it takes to research and understand the long-term effects of any technology is far greater than the time it takes for the technology to scale and become embedded in institutions to the point where it becomes seemingly irreplaceable.
I am gratified that senators voted 99-1 to remove the provision from the so-called “Big Beautiful Bill.” However, that doesn’t mean we shouldn’t be aware of the potential dangers ahead and have a thoughtful response to the use of AI in our classrooms.
Even though the language has been removed from this bill, tech companies will likely seek other opportunities to minimize the safeguards around this technology. It is our responsibility as teachers to understand how this technology can be used responsibly in the classroom. It is also our responsibility to keep the precautionary principle in mind when thinking about how best to educate our students and help them grow.
Our children are precious and should not continue to be the front line of a larger digital experiment. We should operate our schools on the assumption that until we understand the technology, we should actively discourage its rampant use in the classroom. Any attempt to deny a school or a state the ability to place guardrails should be viewed with suspicion and hostility.
Recognizing that growing minds and spirits need to be nurtured and developed appropriately means working hard to ensure that the profit motive of tech companies does not take priority over our responsibility to provide the best possible education and care.
Selim Tlili has taught in public and private institutions throughout New York City. He earned his bachelor’s degree in biology from SUNY Geneseo and his master’s degree in public health from Hunter College. He has written education articles for Edutopia, We Are Teachers, and the Hechinger Report. He is currently finishing his first movie, which he wrote and produced, and is publishing his first book on science education, due in January 2026. Check out his website https://www.sketchingforscience.com/ and his blog https://www.selim.digital/, where he writes about education and his other projects.