AI narrows the list of human-only skills
In the Guardian, Tom Griffiths argues that intelligence depends on constraints and data budgets: machines win by buying scale that humans cannot
Illustration: Elia Barbieri/The Guardian
A decade ago, it was still plausible to list the things only humans could do: beat grandmasters at complex games, write coherent essays, prove new theorems. Tom Griffiths writes in the Guardian that the list has been shrinking fast—AI systems now win top-level competitions, produce polished prose and, in some cases, earn medals in mathematics. Tech executives have responded by promising that “superhuman” AI is near.
Griffiths’ essay is useful less for the prediction than for the inventory of constraints. Human intelligence runs on roughly a kilogram of neurons, inside a skull that cannot be upgraded, over a lifespan that ends after a few decades. A person has to learn language, norms, skills and a workable model of the world from a thin slice of experience, then communicate those thoughts through vocal sounds and finger movements. Machines do not face those limits: they can absorb more text than any human could read in multiple lifetimes, scale capacity by renting more compute, and copy what they learn to other machines at near-zero cost.
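A back-of-envelope calculation makes the scale of that asymmetry concrete. The sketch below uses illustrative figures, not numbers from the essay: a brisk reading speed, a long reading life, and a training corpus on the order of ten trillion words, roughly the magnitude reported for recent large language models.

    # Back-of-envelope comparison of a lifetime of human reading with an
    # LLM-scale training corpus. All figures are illustrative assumptions.
    WORDS_PER_MINUTE = 250        # brisk adult reading speed
    HOURS_PER_DAY = 4             # generous daily reading time
    YEARS_OF_READING = 70         # reading years in one lifetime

    words_per_lifetime = WORDS_PER_MINUTE * 60 * HOURS_PER_DAY * 365 * YEARS_OF_READING
    corpus_words = 10e12          # ~10 trillion words, order of magnitude only

    print(f"One lifetime of reading: ~{words_per_lifetime:.1e} words")
    print(f"Lifetimes of reading per corpus: ~{corpus_words / words_per_lifetime:,.0f}")

On those assumptions, a single training corpus holds several thousand lifetimes of reading, which is the gap Griffiths is pointing at.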
That asymmetry is why headline feats can mislead. AlphaGo beat the best Go players, but it did so after training on what Griffiths describes as many human lifetimes of games. ChatGPT can hold a reasonable conversation while drawing on thousands of years of human language. In both cases the performance is real, but the route is not human. The system is not solving the problem under the same budget of time, data and energy that a child or adult has to manage.
The essay’s sharpest point is that constraints are not merely a handicap; they are part of the definition of the thing being measured. Human cognition evolved to generalise from limited experience, to infer intentions, and to coordinate with other minds under uncertainty. That is why humans built language, writing, teaching and science—technologies for pooling knowledge across individuals and time. Machines, by contrast, are optimised for a world where “experience” can be bulk-purchased as datasets and where memory can be replicated perfectly. They may outperform humans at tasks that reward scale, but that does not settle what “intelligence” is, because the contest is being run under machine-friendly rules.
Griffiths argues that intelligence is not a single scale, like height, on which every mind can be ranked; animals are intelligent in different ways, each shaped by its environment. Birds navigate, ants cooperate, spiders hunt. Human minds are “special” partly because they are forced to do a lot with little: short lives, limited neurons, narrow channels of communication. Machines are special in the opposite direction: they can do a lot because those constraints are loosened.
The policy debate tends to treat these differences as a moral question—whether humans will be “replaced”—rather than a practical one about where institutions choose to deploy systems that learn from vast pooled data. A model that can be trained on the accumulated output of millions of workers changes the bargaining position of the next worker, even if the model cannot match the creativity of a five-year-old exposed to the same amount of input.
For now, the most reliable sign of what is happening is not the benchmark score but the training bill. In Griffiths’ examples, the machine’s advantage is purchased upfront, then copied endlessly.
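A minimal sketch of that economics, with made-up numbers rather than reported costs: the training bill is a one-off fixed cost, and the per-copy cost collapses as deployment scales.

    # Illustrative amortization of a one-off training bill across copies.
    # Both figures below are assumptions, not reported costs.
    TRAINING_COST = 100_000_000   # dollars, paid once up front
    COPY_COST = 0.0               # copying learned weights is close to free

    for copies in (1, 1_000, 1_000_000):
        per_copy = TRAINING_COST / copies + COPY_COST
        print(f"{copies:>9,} copies -> ${per_copy:,.2f} each")

No human acquires a skill on those terms: each person pays the full cost of learning all over again.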