Artificially Intelligent Thoughts
(If you don’t feel like reading, feel free to listen to the audio attached)
Aidan Gomez, chief executive of $2bn LLM start-up Cohere, said, "What you really want is models to be able to teach themselves. You want them to be able to . . . ask their own questions, discover new truths and create their own knowledge. That's the dream."
A dream deferred, so far. But honestly, it's a dream not far from becoming real. Even if the goal is for AI to become self-sufficient, it still stands on the shoulders of our data.
Current LLMs are trained primarily on data scraped from the internet: digitized books, news articles, blogs, Twitter and Reddit posts, YouTube videos, and more. It almost feels like "digital sharecropping." Lately this has sent me down a nihilistic hole, asking:
AM I SIMPLY A PIECE OF CODE FOR MY GRANDCHILDREN'S JARVIS?
Am I training my replacement? Is every angry, sad, or euphoric moment I share on the internet just fodder for an AI vulture to pick at? If so, then what is the point? At every turn my existence is turned into statistics for someone else. Every time I post, I'm potentially contributing to a system designed to automate the very kind of thinking I'm trying to do. My individual experience becomes raw material for the collective to learn from and fold into its own, a never-ending cycle. But that might be the nihilism speaking.
I remember a speech Kamala Harris gave in 2023 that went viral. She said the phrase, "You exist within the context of all in which you live and what came before you."
There is something humans come to realize after a certain age: the foundations I lay today are for the people after me to build upon. As the popular saying goes, "The one who plants trees, knowing that he will never sit in their shade, has at least started to understand the meaning of life."
ARE AI SYSTEMS OUR NEW TREES?
The rise of LLMs has forced me to ask this question in earnest. Is AI our attempt to grasp an esoteric idea and turn it into measurable code? This isn't a new struggle for humans. Throughout history, each generation has grappled with how to pass on wisdom, skills, and cultural understanding to its successors. At one point it was cave drawings, which evolved into spoken word and then written text. We've seen the pattern continue with the printing press and mass media. But those mediums were limited in what they could express, and the nuances of the world were easily lost. In simple terms: they're inefficient.
Innovation is the cocoon humans are hardwired to build, in hopes that a butterfly waits on the other side. In the most optimistic reading, AI is the next step in giving shade to the people after us. I want to stress the word "optimistic," though. According to recent research from Stanford's evaluation of large language models, many AI systems struggle with consistent accuracy, with some studies suggesting error rates as high as 60% on complex reasoning tasks. Yet people swear by the results, because the technology is marketed as "shiny and new" and forced down our throats.
The initial quote from Aidan is about building a system that teaches itself, but with a foundation as inefficient as the current LLMs we have, will we not just build a new wheel that continues the same cycles we ourselves are a part of? If we are tethered to biological habits, can we build something that is greater than us, something smarter? History suggests otherwise.
And if history, in all its wisdom, indicates we cannot transcend our own limitations through our creations, perhaps my philosophical detour into nihilism isn't entirely misplaced. But there remains something profoundly human about the attempt itself—this striving to create something that outlasts us, understands us, and perhaps one day, completes us.

