The Illusion of Certainty: Why AI-Driven Prediction is a Tool of Power, Not Fact


In a recent executive seminar, a student offered a provocative insight: she uses AI chatbots as “fortune tellers.” She claimed that, much like reading tea leaves, AI can provide surprisingly accurate glimpses into the future, citing a recent instance where it correctly predicted a 2% rise in the stock market.

While this may sound like a harmless novelty, it touches upon a profound and dangerous shift in how society functions. We are moving away from traditional methods of forecasting—astronomy, sociology, or economics—and handing the keys of the future to a new class of soothsayers: computer scientists, data analysts, and engineers.

The Confusion Between Prediction and Fact

There is a fundamental logical error at the heart of our modern obsession with predictive technology: predictions are not facts.

Facts belong strictly to the present and the past. By definition, the future has not happened; therefore, there are no facts about it. An assertion about what might happen can be an estimate, a warning, or a desire, but it can never be a factual truth.

When we treat AI outputs as “truth,” we fall into a dangerous trap. We begin to mistake statistical probabilities for inevitable realities, forgetting that these models are merely calculating the most likely next step based on historical patterns.
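The gap between a probability and a fact can be made concrete with a toy simulation (this is an illustration of the general point, not a model of any particular system): even a forecast that is "90% certain" fails routinely when repeated.

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

# Simulate 10,000 independent forecasts, each of which is correct
# with probability 0.9. A draw at or above 0.9 counts as a failure.
trials = 10_000
failures = sum(random.random() >= 0.9 for _ in range(trials))

# Roughly 1 in 10 of these "90% certain" forecasts still fails.
print(f"{failures} failures out of {trials} forecasts")
```

The failure rate hovers near 10%: the forecast was never a fact about the future, only a statement about long-run frequencies.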

The Fantasy of “Laplace’s Demon”

The drive behind modern AI is fueled by a scientific fantasy known as Laplace’s Demon. Proposed by Pierre-Simon Laplace, the concept suggests that if an intelligence possessed complete knowledge of the position and momentum of every particle in the universe, the future would be as visible as the past. In this view, uncertainty is simply a lack of data.

Modern AI proponents chase this dream through “brute force.” The logic follows a relentless cycle:
1. Collect everything: Track every movement, purchase, social interaction, and biological metric.
2. Process everything: Use massive computational power to analyze these data points.
3. Predict everything: Use the resulting patterns to eliminate uncertainty.

This has turned human existence into a commodity to be “tortured” for data. We are being quantified in every facet of life—from our sleep patterns to our political leanings—all to feed the machine’s hunger for predictive accuracy.

Machine Learning: A Triumph of Scale, Not Science

Perhaps the most sobering reality of the AI revolution is that it was not driven by a sudden spark of human genius or a fundamental breakthrough in understanding how the mind works. Instead, as Oxford professor Michael Wooldridge notes, it was a victory of scale over science.

For decades, neural networks struggled to produce meaningful results. The “breakthrough” that changed everything wasn’t a new way of thinking, but rather the arrival of:
– Massive datasets: the sheer volume of digital information available.
– Increased compute: the hardware power (GPUs) to process that information.

Machine learning is essentially “prediction on steroids.” When a language model writes a sentence, it isn’t “thinking”; it is predicting the most statistically likely next word based on billions of previous examples. When an algorithm recognizes a face, it is simply calculating the probability that certain pixels match a learned pattern.
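The "most statistically likely next word" idea can be sketched in a few lines. The snippet below is a deliberately tiny bigram model (real language models use vastly larger contexts and neural networks, not a frequency table), but the core move is the same: count what followed what, then emit the most frequent continuation.

```python
from collections import Counter, defaultdict

# Toy training corpus; real systems train on billions of examples.
corpus = "the cat sat on the mat and the cat saw the cat".split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`, or None."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # → "cat" ("cat" follows "the" most often here)
```

No understanding is involved at any point: the program has simply memorized which pattern occurred most often, which is the essay's point about prediction masquerading as thought.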

The Hidden Cost of the “Oracle”

Because these predictive models require immense resources, their development is inextricably linked to power and exploitation. The “brute force” method used to build these systems has relied on:
– The mass surveillance of global populations.
– The exploitation of vulnerable workers to label data.
– A massive consumption of natural resources.
– The unauthorized harvesting of intellectual property.

Furthermore, the rise of prediction markets (such as Polymarket) has turned human suffering and political instability into a form of gamified speculation. When we bet on the outcome of wars or natural disasters, we dehumanize the victims and treat real-world crises as mere data points for profit.

Conclusion

The danger of AI prediction lies in its ability to masquerade as objective truth while acting as a tool for control. Predictions do not just describe the future; they shape it by bending social behavior toward the forecasted outcome.

Ultimately, we must recognize that algorithms are not oracles of truth, but instruments of power—and those who control the data control the direction of the world.