

Newton vs. LLMs: The case for Physical AI
Large language models have transformed digital work — from customer service to coding and financial analysis. But step onto a factory floor, into a power plant, or onto a construction site, and the picture looks different.
The physical world doesn't speak in text. It operates in signals — vibration, temperature, current, and motion. And understanding those signals at scale requires a fundamentally different kind of AI.
In this virtual event, we’ll explore how our foundation model, Newton, was designed to understand the physical world in ways that language models can’t — by interpreting real-time multimodal sensor data. This unlocks new classes of problems, from identifying anomalies in critical infrastructure and optimizing sophisticated machinery to coordinating complex physical systems where robots and humans interact.
What you’ll walk away with:
The limits of LLMs on physical data
What a physical AI foundation model requires to generalize across domains
The latest research and techniques behind Newton
How Newton compares to traditional approaches
The session will also include a demo of Newton and the Archetype AI platform.