When the Algorithm Knows You
The Awakening
I didn’t give it much thought when I signed up for Andrew Ng’s online machine learning (ML) course a few years back. I was curious about the field and, I’ll admit, wanted a CV boost to help land a good job. What began as a pragmatic move became an epistemic jolt: machines could detect patterns that felt private, and that troubled my sense of subjectivity and agency.
The course project was simple: build an algorithm to predict IMDb movie ratings. My neural network was trained on features like genre, release year, runtime, and budget. I was skeptical: surely an algorithm couldn’t predict something as subjective as a movie rating from such mundane metadata. But when I fed it a film it hadn’t seen, it predicted the IMDb rating with unsettling accuracy.
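For readers who want a concrete picture of what such a project involves, here is a minimal sketch in Python. The tiny synthetic dataset, the scikit-learn pipeline, and the small feed-forward regressor are illustrative stand-ins, not the code or data from my original coursework.

```python
# Minimal sketch: predict an IMDb-style rating from basic movie metadata.
# The synthetic data and model choice are illustrative, not the original project.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# A toy dataset standing in for real IMDb metadata.
movies = pd.DataFrame({
    "genre":   ["drama", "comedy", "action", "drama", "horror", "comedy", "action", "drama"],
    "year":    [1994, 2003, 2012, 2019, 2007, 1999, 2015, 2021],
    "runtime": [142, 98, 130, 110, 95, 105, 124, 118],   # minutes
    "budget":  [25, 40, 180, 15, 8, 30, 150, 20],        # millions of dollars
    "rating":  [8.9, 6.4, 7.1, 7.8, 5.9, 7.0, 6.8, 7.5], # target: IMDb rating
})

X = movies.drop(columns="rating")
y = movies["rating"]

# One-hot encode the categorical genre column, scale the numeric ones,
# then fit a small feed-forward network as the regressor.
preprocess = ColumnTransformer([
    ("genre", OneHotEncoder(handle_unknown="ignore"), ["genre"]),
    ("numeric", StandardScaler(), ["year", "runtime", "budget"]),
])
model = Pipeline([
    ("prep", preprocess),
    ("net", MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)),
])

# Hold out films the model has never seen, so the score reflects generalization.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model.fit(X_train, y_train)
print("Predicted ratings for unseen films:", model.predict(X_test).round(1))
```

A real version would train on thousands of titles, but the shape of the experiment is the same: encode the metadata, fit a model, and ask it about films it has never seen.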
That moment shattered my naïve belief in the sanctity of human taste. Machines, it seemed, could perceive the hidden regularities in our choices—patterns invisible even to us. It felt like watching a mirror sharpen into focus and realizing it reflects more than appearance—it reflects preference.
A technical caveat: predictive success can arise from quirks in data—artifacts, confounders, or leakage between training and test sets—so accuracy isn’t proof of deep understanding. But even so, the experiment revealed something simple: statistical regularities in our behavior often hide in plain sight.
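To make the leakage caveat concrete, here is a small, self-contained illustration on synthetic data (again hypothetical, not my original experiment): a deliberately memorization-prone model looks far more accurate when the same films end up on both sides of the train/test split than when the test films are genuinely unseen.

```python
# Toy illustration of train/test leakage: when the same rows appear on both sides
# of the split, accuracy looks inflated without any real generalization.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                                  # 200 synthetic "films"
y = X @ np.array([0.5, -0.3, 0.2, 0.1]) + rng.normal(scale=0.5, size=200)

# An unpruned decision tree memorizes its training data almost perfectly.
model = DecisionTreeRegressor(random_state=0)

# Clean evaluation: the held-out films are genuinely unseen.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
print("clean R^2:", round(model.fit(X_tr, y_tr).score(X_te, y_te), 2))

# Leaky evaluation: every film is duplicated before splitting, so most test rows
# have an exact copy in the training set.
X_dup, y_dup = np.vstack([X, X]), np.concatenate([y, y])
X_tr, X_te, y_tr, y_te = train_test_split(X_dup, y_dup, random_state=0)
print("leaky R^2:", round(model.fit(X_tr, y_tr).score(X_te, y_te), 2))
```

The model understands nothing more in the leaky case; it has simply seen the answers before.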
LLMs and the Shattered Human-Centric Worldview
That early shock returned when large language models (LLMs) swept into public view. Their ability to produce fluent, human-like text was astonishing: in one recent study, GPT-4.5 was judged to be human 73% of the time, making it the first AI to pass a version of the Turing test [1].
What’s striking is how they do it: by predicting the most probable next token and chaining those predictions. “Next-token prediction” sounds trivial, yet at scale it produces coherence, creativity, even style. Which forces a hard question: could human reasoning itself be a form of pattern prediction?
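To make “next-token prediction” concrete, here is a rough sketch of the decoding loop, assuming the Hugging Face transformers library with the small GPT-2 model as a stand-in for far larger systems.

```python
# Rough sketch of next-token generation with a small causal language model.
# GPT-2 is just an illustrative stand-in for much larger LLMs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The algorithm knows you because"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):  # generate 20 tokens, one at a time
        logits = model(input_ids).logits   # a score for every token in the vocabulary
        next_id = logits[0, -1].argmax()   # greedily pick the most probable next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)  # append and repeat

print(tokenizer.decode(input_ids[0]))
```

Greedy argmax is the crudest strategy; production systems sample from the predicted distribution instead, which is where much of the apparent variety and “creativity” in the output comes from.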
The idea feels alien to how we conceive thought—intentional, layered, meaning-driven—but LLMs compel us to consider whether our cognition is less mysterious, more statistical than we like to believe. It’s the same disorientation people must have felt when they learned Earth wasn’t the center of the universe.
If machines can model preference, can they also model agency? That question—whether free will survives in a predictive world—haunts the edges of this reflection, though I’ll tackle it more directly in a future post.
Creativity, Art, and Music
If creativity is your last bastion of human exceptionalism, brace yourself. LLMs and generative models now produce poems, images, and music that many find moving—sometimes indistinguishable from human work in blind tests [2]. These systems remix and amplify human-made data rather than lived experience, yet their outputs can still surprise, provoke, and influence.
That’s not entirely shocking: humans and models draw from the same cultural reservoir. Similar inputs often yield similar outputs. Ask a model to render a “full glass,” and it will stop shy of the rim—because such images are rare in its training data. Human creators, shaped by the same environment, make the same choices.
AI doesn’t erase meaning; it relocates it. Meaning now arises in the loop—model generation, human curation, and audience interpretation. Creativity becomes a conversation rather than a possession.
Limits and Fragilities
Still, LLMs are far from omniscient. They hallucinate—confidently asserting falsehoods. They’re brittle—small prompt changes can yield wildly different results. And they lack grounding—no lived experience or intention to draw upon. They also inherit and sometimes amplify biases embedded in their training data.
These limits matter. They define where and how such systems should be trusted, and they remind us that imitation isn’t understanding.
Social and Ethical Stakes
The implications stretch beyond philosophy. Predictive systems already shape what we see, buy, and believe—recommendation engines tune taste, surveillance algorithms personalize influence, and automation redefines labor.
The risks are tangible: filter bubbles that narrow public discourse, opaque systems making consequential decisions, and economic upheaval as prediction automates judgment. These are not just metaphysical puzzles; they’re design and policy crises.
An Uplifting Note — Practical Takeaways
If this feels unsettling, try this reframing: interacting with these systems is like encountering an alien intelligence—one that reflects our collective patterns back at us. They’re mirrors and probes, revealing the structure of our thought.
So don’t panic—but do act. Invest in basic AI literacy: how models learn, where they fail, what they can and can’t do. Demand transparency and accountability in systems that govern public life. Design AI tools to augment rather than replace human judgment. And above all, foster dialogue across disciplines—engineers, artists, ethicists, policymakers—to build institutions that preserve human dignity in an age of prediction.
The challenge is to do this before disruption forces us to. We can still choose what reflection we want to leave behind in the algorithmic mirror.
Citation
@misc{elmokadem2025,
  author = {{Ahmed Elmokadem}},
  title = {When the {Algorithm} {Knows} {You}},
  date = {2025-10-31},
  url = {https://aelmokadem.github.io/aelmokadem/posts/ai/},
  langid = {en-GB}
}