

Physical AI Hack 2026
Physical AI Hack 2026 is a hands-on hackathon for people who want to see AI actually work on real robots. We’ll be hosting it at Founders Inc, a space where ambitious founders, builders, and operators come together to grow and scale real startups.
This is not a simulation-only event or a paper exercise. Teams will build, fine-tune, and deploy models on real robotic platforms and watch their systems succeed or fail in the physical world.
We’re designing the hack around simple, high-signal tasks that are easy to understand, easy to benchmark, and surprisingly hard for robots.
Think real tasks that look simple, until a robot has to do them:
Puzzle and shape insertion. Trivial for humans, brutal for robots. A clean benchmark for vision (shape recognition), action (pick and place), and precise alignment.
Plugging in chargers. A real household task that reveals how difficult fine insertion and depth perception actually are.
Pouring liquid into a cup. Inspired by coffee-making robots, where small depth errors quickly turn into spills instead of success.
These challenges are intentionally chosen because progress is visible, measurable, and hard to fake. We’re still adding tasks and are very open to ideas. If there’s a real-world manipulation problem you think belongs here, we want to hear it.
🤖 What you’ll work with
Solo Tech provides the base VLM and VLA models, along with the fine-tuning workflow, so teams can focus on learning and iteration instead of setup.
World Intelligence provides 50+ hours of multimodal egocentric data, including 2D video, depth, IMU, and audio, collected from the same task families used in the hack.
Oli Robotics provides imitation-learning data and tooling for robotics tasks, including demonstrations related to automated coffee-making.
KikiTora provides 40+ hours of multimodal human pose data, including RGB, depth, camera intrinsics and extrinsics, segmentation masks, and COCO keypoints and joints in 2D pixel space and 3D world space, for training locomotion policies.
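As a rough orientation for working with that kind of data, here is a minimal sketch of how 3D world-space joints relate to 2D pixel keypoints through camera intrinsics and extrinsics. The conventions assumed here (a 4x4 world-to-camera extrinsics matrix, a pinhole camera with no distortion, and the `project_joints` helper itself) are illustrative assumptions, not the dataset's actual format.

```python
import numpy as np

def project_joints(joints_world, extrinsics, intrinsics):
    """Map 3D world-space joints to 2D pixel keypoints (pinhole model, no distortion)."""
    # Homogeneous world coordinates -> camera frame via the 4x4 world-to-camera extrinsics
    ones = np.ones((joints_world.shape[0], 1))
    joints_cam = (extrinsics @ np.hstack([joints_world, ones]).T).T[:, :3]  # (N, 3)

    # Perspective projection: apply the 3x3 camera matrix, then divide by depth
    uv = (intrinsics @ joints_cam.T).T            # (N, 3)
    pixels = uv[:, :2] / uv[:, 2:3]               # (N, 2) pixel coordinates
    return pixels, joints_cam[:, 2]               # keypoints plus per-joint depth

# Example with dummy values; real intrinsics/extrinsics come from the dataset
K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
T = np.eye(4)
pts, depth = project_joints(np.array([[0.1, -0.2, 1.5]]), T, K)
```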
You’ll also have access to real robots on site, including Unitree G1, Open Droid R1D2/R2D3, Open Duck Mini and LeRobot SO-101/LeKiwi, so improvements aren’t theoretical. You’ll see them in action.
🧠 What teams can explore
Teams are free to choose their own technical approach. Possible directions include:
Transfer learning and fine-tuning of VLM and VLA models on task-specific data (a minimal sketch follows this list).
Closed-loop policies that improve alignment and execution through feedback.
Generalization across task variations, such as new shapes or layouts.
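To make the fine-tuning direction concrete, here is a toy behavior-cloning loop: a small stand-in policy is regressed onto (observation, expert action) pairs. The `TinyPolicy` MLP and the synthetic tensors are placeholders; at the event you would swap them for the provided pretrained VLM/VLA backbone and real demonstration data. This is an illustrative sketch, not the actual workflow teams will be given.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in policy: a small MLP over a flattened observation vector.
# Replace with the provided pretrained VLM/VLA backbone in practice.
class TinyPolicy(nn.Module):
    def __init__(self, obs_dim=64, act_dim=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, act_dim),
        )

    def forward(self, obs):
        return self.net(obs)

# Synthetic demonstrations standing in for real teleoperation data:
# 1000 (observation, expert action) pairs.
obs = torch.randn(1000, 64)
actions = torch.randn(1000, 7)
loader = DataLoader(TensorDataset(obs, actions), batch_size=32, shuffle=True)

policy = TinyPolicy()
optimizer = torch.optim.AdamW(policy.parameters(), lr=1e-4)

# Behavior cloning: regress the policy's predicted action onto the expert action.
policy.train()
for epoch in range(5):
    for batch_obs, batch_act in loader:
        pred = policy(batch_obs)
        loss = nn.functional.mse_loss(pred, batch_act)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

A closed-loop variant of the same idea would feed the observed alignment error back into the next action at execution time rather than running the learned policy open loop.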