AI in Education, The New Ring of Power: What Tolkien Would Say About Artificial Intelligence

  • Writer: Patrick Martel
  • Jan 12
  • 9 min read

“He guessed as well as he could, and crawled along for a good way, till suddenly his hand met what felt like a tiny ring of cold metal lying on the floor of the tunnel.”

-The Hobbit


I was at a dinner party at a friend’s house in late 2022 when one of my more tech-savvy friends told me about a new app that had just come out - ChatGPT. He pulled out his phone and told it to ‘tell a joke.’ The joke it produced was super cheesy - I was amused, but not because of the joke. He gave it a few more prompts just to give us a glimpse of what it could do. “What’s the meaning of life?” he asked it. “Answer as Jordan Peterson.”


The answer it produced was MUCH funnier than the joke it told earlier. Funny - and frightening.



I felt something strange, a dramatic and deep curiosity about the nature of this thing, as I reached into my pocket to find this app and have it for myself. I wanted to hold it in my hands. And I certainly didn’t know what I was holding - and still don’t - perhaps the way that Bilbo didn’t in the scene from my epigraph, groping along the floor of the dark tunnel and picking up the One Ring for the first time.


Surely AI represents a similarly tectonic force. Or perhaps it is the other way around - the One Ring represents technology, symbolizes anything with such immense potential and danger: a tool of incredible capability and mystery, promising to transform the world, inspiring in those of us with any ambition a glimpse of what we could be or become, were we to possess it.


The implications are vast, reaching far beyond education, and people of good will in every field (I believe) have a significant responsibility to do a good bit of (regular, old-fashioned, human) thinking before we get swept away in the tsunami of what is coming.


My mind was immediately flooded with some of the more obvious implications this could have for education, especially the benefits for students with learning disabilities (that’s my specialty area). It can create personalized learning experiences and assistive technologies specifically tailored to the needs of the individual, drawing on specific student performance data. We in special education are always talking about ‘scaffolding’ and ‘individualizing’ and ‘meeting the student where they are’ - this can all be done instantly now, automatically, and for everyone - not just the select few who have a specialized learning plan, IEP, or 504. All boats will surely rise.


But I also, maybe simultaneously, realized that its risks are equally significant: graduate-level papers can be written instantly, whole cloth, complete with (pretty accurate) citations; long and detailed prompts can be copied and pasted, and responses generated with a click. For even remotely savvy students, the use of AI will be untraceable, even with the fanciest AI-recognition tools (let’s be honest here). Every step of the writing process - which takes years to instill even in the most astute students - is compromised, from brainstorming ideas to outlining, generating text, mastering transitions, editing, revising, and compiling works cited. And that’s just scratching the surface - what about data privacy? What about students who already struggle with sudden or rapid change? What will kids discover AI can do, that we didn’t - couldn’t have - foreseen? What happens when all the kids except yours are using AI to “assist” them?


What will happen to the human mind as it adapts to using AI, even as a tool, even as intended, over the course of years, decades, and generations? What happens as it grafts onto us? Is it a reversible process?


Some fifteen years ago my school had a kind of internal debate about whether or not to allow kids to use calculators. I understood the hard-liners’ perspective, and appreciated the need to be able to do on-command arithmetic, to know formulas and how to apply them, and such. I can be somewhat of a Luddite myself, in a lot of ways. But I also saw the more progressive perspective, and as a special educator working with kids with dyscalculia and other disabilities, I was usually advocating at least for certain exceptions: just let the kids use the tech, and focus on teaching them how to identify and apply the right formulae.


Now we have a tool that can do all that, and all the rest of the student’s homework for them, in less than the time they’ll be waiting at the morning bus stop. It is the same size as a calculator, and fits right in their pockets.


What happens when teachers use AI to design their lessons and assignments, and the students use AI to generate responses to the prompts? AI has a conversation with itself? The snake eats its own tail.


I know the question of technology is as old as man - as old as the ‘garments of skin’ in the Old Testament, but I envision the appearance of computer technology onto the scene through the 20th century as a figure erupting from beneath the surface of the earth, like a long-banished ancient entity, fully incubated and now emerging: at first kind of formless - a back, a hip, something nondescript. Computer, internet, and mobile technology came in waves, great heaves of emergence. And now I have the feeling, with the arrival of Artificial Intelligence, that we are about to see this great Titan come forth and stand at full stature; we are about to see its face.


I’ve spoken with many teachers on these subjects, and obviously opinions span a very wide gamut - but I have a deep skepticism of those who don’t have this kind of mythical reverence for the transformative power of AI - those who think it’s just the ‘calculator debate all over again.’ ‘It will just be a neat tool teachers can use to help structure their lessons,’ they say, and it will take some ‘unnecessary burden of lesson planning off of them.’ ‘There have always been kids who try to take shortcuts, and we’ll navigate dealing with them as we always have.’


Technology is the ‘garments of skin’ - like any power, it is amoral: not good or evil. It extends our abilities, extends our arms and our legs, makes us faster, gives us higher and farther reach. Allows us to travel faster and communicate instantly. A man becomes a superman; a man becomes a giant. And a giant, we all know, is all body and not enough brain. It goes around stomping on and crushing things, ignorantly and stupidly, since its power vastly outweighs its ability to think, to reason, to weigh the moral consequences, to be and act intentionally in the world.


That’s what a giant is - as those of us who remember our fairy tales well know - and it is what we will become if we don’t grow morally, ethically, and quickly, in a way that is in concert with the power that we now wield. Science only tells us what we can do, not what we ought to do. For that, we have to look to our moral, ethical, and religious traditions - or at least our fairy tales, and maybe the great canon of Western literature, which bears heavily upon the subject. Certainly Tolkien’s One Ring, which exploits the ambition, greed, and selfishness of its bearers, stands as a powerful analogy. If the inside covers of the books are a map of Middle-earth, all the pages in between are also a map - but one that helps to direct us out from the Ouroboros that is its central subject.


Here, in the spirit of Tolkien (or maybe Peterson, who also loves a good ‘rule’), are five principles that might help us navigate the journey we are about to (willingly or unwillingly) embark upon, without losing our way, and hopefully, without losing ourselves:



  1. Power Corrupts Even the Well-Intentioned


The power of the ring is so absolute that it corrupts anyone who seeks to wield it - even ‘the good guys’ like Bilbo, Frodo, and Gandalf. Its allure lies in the promise of control, but invariably it twists the intentions even of the noblest characters.


AI, like the ring, holds immense potential for good or ill. Assuming that we will use it only for good, that human nature is incorruptible, does no one any favors and almost guarantees that it will wreak incredible havoc. Perhaps the most dangerous instance of this is when one assumes that his or her own nature is good and incorruptible.


  2. Humility is the First Safeguard Against Corruption


Hobbits like Bilbo, Frodo, and Sam are uniquely able to carry the ring longer than others because of their humility, simplicity, and lack of ambition. They are not consumed by the same pride and greed that drive other characters.


Our approach to AI cannot be ignorant, naive, or dismissive, and it needs to recognize the risks inherent in wielding such a powerful tool. Humility is the voice that says, “I really don’t understand this,” and “There is much more to this than I can see at present.” People who think they have a handle on it, who imagine they know what they’re getting themselves into - I think Tolkien would posit that they are not in the right mind-space to use it without causing more damage than good.


  3. Strength Lies in Fellowship, Not Isolation


Frodo does not succeed alone. His success depends on the Fellowship, and ultimately on the deep friendship, loyalty, and courage of Sam. The development and implementation of AI must likewise be a collaborative, decentralized effort, drawing on the perspectives of educators, parents, students, and ethicists, all nested in cohesive and communicative communities. This can help promote balance and stability, and reduce ethical conflicts, exploitation, and unfairness.


In LOTR, characters like Boromir and Saruman seek to use the Ring’s power for what they perceive as noble goals, but their pride blinds them to the Ring’s corrupting influence. They isolate themselves from others, and therefore from wisdom, and fall prey to their own ambitions. Overconfidence in AI’s potential and the belief that ‘we know best’ can lead to dangerous outcomes. Transparent decision-making, accountability, and constant ethical scrutiny are essential to making sure AI is always used in a way that is aligned with ideals of human flourishing.


  4. Know When to Relinquish Control


Ultimately, Frodo’s mission is to destroy the ring, since the Fellowship recognizes that no one - not even a humble hobbit - can wield it without falling prey to its corrupting influence. I don’t know if we can “destroy” AI - I’m sure there are those out there who think we ought to. I’m not in that camp (I don’t think), but at least I think Tolkien’s insight is valuable - that we need to surrender, recognize human limitations, and also know when to step back, turn off the algorithm, or prioritize human judgment over AI.


  5. Sacrifice is the Price of Stewardship


Frodo’s journey is one of immense sacrifice. He endures great pain and suffering, and ultimately loses part of himself in the process. In fact, all three protagonists - Frodo, Aragorn, and Gandalf - experience a sort of death that is eminently necessary to the resolution of the story. The message? I don’t know exactly. But I think it at least means that each of us, personally, in our own lives, has to prioritize something over and above ourselves, our own wellbeing, even our very lives. We have to prioritize people over profit. We have to learn where the line has to be drawn, and mortify the desires that exceed a certain threshold, that go beyond the pale, that benefit us at the expense of others, or of fairness.


Tolkien’s message suggests that wielding immense power - whether it’s the One Ring or Artificial Intelligence - requires humility, collaboration, vigilance, and a willingness to prioritize the greater good over self-interest. The answer is not necessarily to destroy AI or reject technological progress, but to approach it with the same reverence and caution that Frodo and his companions demonstrate on their mission. I don’t think Tolkien is tasking us with ‘destroying’ technology, with traveling to Silicon Valley to topple the industry - but each one of us, personally, needs to ask whether we possess the ability to ‘let go of the ring’ - to hold it over the fires of Mount Doom, let it go, surrender it, and purposefully do without it. If we lose that ability, if we become so dependent on tech tools that we are incapable of surrendering them in certain areas of our lives when it’s needed - for moral, ethical, or just human reasons - then Tolkien is right: the battle for humanity will be lost.


The fundamental difference between a Saruman and a Frodo is that the villain sees the Ring as an opportunity, while Frodo, the humble hobbit, sees it, perhaps more properly, as a burden. I think we need to acknowledge that aspect of it - the great burden that it is to wield AI technology. Whether we like it or not, each of us has this incredible weight hanging round our necks, this immense responsibility, unprecedented in human history. It will reveal to you, and to others, what is inside of you, even in germ form - pride, greed, ambition, naivete, courage, humility, the wisdom to know your own weaknesses - or the lack thereof. The hour is coming. Maybe it has already come.

