Last week, I joined Pete Bernard of the EDGE AI FOUNDATION, Steve Chen from Himax, Tharak Krishnan of BrainChip, Abdullah Raouf from Syntiant, and Scott Smyser from EMASS on a packed panel at Sensors Converge 2026. The session was titled “Embodied Intelligence: When Edge AI Gets Legs, Wings, and Wheels.” The room was full of people curious about where this is heading and what it means for how they build.
This idea is older than you think.
In 1948, in a paper called Intelligent Machinery, Alan Turing made a distinction that most of AI history chose to ignore. He argued that building an embodied intelligence (whole man) would be a “sure” route to producing a thinking machine. Then he set the idea aside on grounds of technical practicality, and proposed instead a “brain which is more or less without a body.” The fields he suggested for this disembodied approach: games, language translation, cryptography, and mathematics. With that, the direction of artificial intelligence was set for decades.
Turing was not wrong about the practicality. He was also right about the direction he set aside.
Rodney Brooks, who has spent his career on embodied intelligence, notes that Turing’s description from over sixty years ago fairly precisely describes what is done today in dozens of research labs around the world: robots with cameras, microphones, and handling mechanisms, their intelligence either onboard or in racks of compute off in the cloud.
The cloud part is the problem. Not just because the physical world cannot wait for a round trip, but because of what happens to global energy requirements when billions of sensors come online and every one of them sends its data to a data centre for processing. The numbers do not work. The architecture has to change.
What embodied intelligence actually means.
An embodied AI system is not a model running behind a screen or in a data centre. It is intelligence situated inside a physical object that can sense its environment, make a decision, and act. A robot adjusting its grip force in real time based on the material it is touching. A motor controller reducing load the moment a vibration signature shifts toward failure. A hearing aid that suppresses noise before the wearer notices it has changed.
The distinction that matters is not between smart and dumb devices. It is between systems that report and systems that act. A connected temperature sensor reports. An embodied system decides what the temperature means and does something about it, locally, immediately, without waiting for instruction from above.
The physical world does not pause for a cloud round-trip. Latency that is acceptable in a chat interface is catastrophic in a robot, a wearable, or an industrial system operating in real time.
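To make the distinction concrete, here is a minimal sketch in Python. The sensor and actuator functions are hypothetical stand-ins, not any real device API; the point is only where the decision lives.

```python
import random
import time

# Hypothetical stand-ins for real hardware interfaces, for illustration only.
def read_temperature_c() -> float:
    """Sample the local temperature sensor (simulated here with noise)."""
    return 20.0 + random.uniform(-1.0, 8.0)

def send_to_cloud(reading: float) -> None:
    """A reporting device: ship the raw reading upstream and wait to be told what to do."""
    print(f"uploaded {reading:.1f} °C, awaiting instructions")

def open_cooling_valve() -> None:
    """An embodied device: act on the world directly, on the spot."""
    print("cooling valve opened locally, no round trip")

OVERHEAT_THRESHOLD_C = 26.0  # placeholder setpoint

def reporting_loop() -> None:
    # Reports: every reading leaves the device; the decision happens somewhere else.
    send_to_cloud(read_temperature_c())

def embodied_loop() -> None:
    # Acts: the decision and the response both happen on the device, immediately.
    if read_temperature_c() > OVERHEAT_THRESHOLD_C:
        open_cooling_valve()

if __name__ == "__main__":
    for _ in range(3):
        reporting_loop()
        embodied_loop()
        time.sleep(0.1)
```

The reporting loop is useful. The embodied loop is the one that keeps working when the network is slow, congested, or gone.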
The architectures we built were never designed for this.
The last decade of AI was built on a simple and correct assumption: centralise compute, scale it aggressively, and iterate on models as if compute were unbounded. That assumption produced extraordinary results. Large language models, generative AI, computer vision at scale. None of that goes away.
But the numbers are starting to demand attention.
Data centres now consume 22% of Ireland’s total electricity, up from just 5% a decade ago. Projections suggest that figure could reach 30% of national demand by 2030, putting Ireland at risk of exceeding its entire carbon budget for the period. Ireland is not a cautionary tale. It is an early reading of where the trajectory leads everywhere.
The industry response is telling. Microsoft has committed to a 20-year power purchase agreement to restart Three Mile Island. Oracle has announced a gigawatt-scale data centre powered by three small modular reactors. Meta has issued a request for proposals for up to 4 gigawatts of new nuclear generation.
We are restarting nuclear power stations to run AI. That is a remarkable sentence. It is also the clearest possible signal that the current architecture has physical limits for a specific class of problems.
The answer is a different compute philosophy.
Sensors live in a strange world. They are always on but almost always bored. A microphone in a smart device listens continuously, but meaningful audio events occur for a fraction of a percent of its operating life. A vibration sensor on a motor runs for years, but the signature that indicates failure lasts milliseconds. A radar presence sensor scans a room that is empty most of the day. The ratio of signal to noise in the physical world is brutal and almost entirely in favour of noise.
This is the constraint that conventional AI architectures cannot solve by scaling. GPUs process everything at full power regardless of whether anything meaningful is happening. CNNs were designed for image data: structured, two-dimensional, frame by frame. Most real-world sensors produce nothing like this. Audio, vibration, inertial, and biomedical signals are one-dimensional time series; radar is complex spatio-temporal data that evolves across both space and time simultaneously. Forcing these signals through CNN architectures built for frames and grids is not just inefficient. It is the wrong abstraction entirely. The model is fighting the nature of the data before it has even started.
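The mismatch is easy to see just by writing down the shapes involved. The numbers below are illustrative, not taken from any particular sensor or model.

```python
import numpy as np

# What a 2D CNN was designed around: a single image frame on a regular grid.
image_frame = np.zeros((224, 224, 3))      # height x width x channels

# What most physical-world sensors actually produce.
audio_window = np.zeros(16_000)            # one second of audio at 16 kHz: a 1D time series
vibration_trace = np.zeros(8_000)          # accelerometer samples from a motor: also 1D
radar_cube = np.zeros((64, 32, 100))       # range x velocity x time: evolves in space and time

for name, arr in [("image", image_frame), ("audio", audio_window),
                  ("vibration", vibration_trace), ("radar", radar_cube)]:
    print(f"{name:>9}: shape {arr.shape}")
```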
What embodied intelligence needs is compute that matches the nature of the physical world: idle when nothing happens, fast and efficient the moment something does. Architectures designed for event-driven, sparse, always-on sensor processing, consuming energy proportional to what is actually happening in the signal rather than proportional to time, are the only viable answer. This is where neuromorphic computing moves from academic pursuit to production architecture, designed precisely for this class of problem.
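To sketch what “energy proportional to the signal” can look like, here is a hypothetical two-stage pipeline in plain Python: a cheap always-on energy check gates a heavier classifier, so the expensive work runs only when something is actually happening. The frame size, threshold, and classifier are placeholders, not a neuromorphic implementation.

```python
import numpy as np

FRAME_LEN = 256            # samples per analysis frame (illustrative)
ENERGY_THRESHOLD = 0.01    # placeholder wake threshold; tuned per sensor in practice

def frame_energy(frame: np.ndarray) -> float:
    """Cheap always-on stage: mean squared amplitude of the frame."""
    return float(np.mean(frame ** 2))

def heavy_classifier(frame: np.ndarray) -> str:
    """Placeholder for the expensive model that should run only rarely."""
    return "event" if float(frame.max()) > 0.5 else "background"

def process_stream(samples: np.ndarray) -> list[str]:
    """Event-driven processing: idle when nothing happens, work the moment something does."""
    results = []
    for start in range(0, len(samples) - FRAME_LEN + 1, FRAME_LEN):
        frame = samples[start:start + FRAME_LEN]
        if frame_energy(frame) < ENERGY_THRESHOLD:
            continue                              # stay idle: no inference, next to no energy
        results.append(heavy_classifier(frame))   # wake the heavy stage only on activity
    return results

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stream = rng.normal(0.0, 0.01, 16_000)             # mostly background
    stream[8_000:8_256] += rng.normal(0.0, 1.0, 256)   # one brief burst of activity
    print(process_stream(stream))                       # heavy stage fires only around the burst
```

Neuromorphic hardware takes the same principle much further, down to individual spikes, but the idea is the same: compute follows events, not the clock.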