Presented by
Solo Tech
Physical AI Diagnostics https://www.getsolo.tech/
1,029 Went
Past Event
About Event

Physical AI Hack 2026 is a hands-on hackathon for people who want to see AI actually work on real robots. We’ll be hosting it at Founders, Inc., a space where ambitious founders, builders, and operators come together to grow and scale real startups.

🤖 Powered By

  • Solo Tech provides its Physical AI Tuning Platform, VLM and VLA models, and the fine-tuning workflow, so teams can focus on learning and iteration instead of setup.

  • World Intelligence provides 50+ hours of multimodal egocentric data, including 2D video, depth, IMU, and audio, collected from the same task families used in the hack. 

We’re designing the hack around simple, high-signal tasks that are easy to understand, easy to benchmark, and surprisingly hard for robots.

Think real tasks that look simple until a robot has to do them:

  • Puzzle and shape insertion. Trivial for humans, brutal for robots. A clean benchmark for vision (shape recognition), action (pick and place), and precise alignment.

  • Plugging in chargers. A real household task that reveals how difficult fine insertion and depth perception actually are.

  • Pouring liquid into a cup. Inspired by coffee-making robots, where small depth errors quickly turn into spills instead of success.

These challenges are intentionally chosen because progress is visible, measurable, and hard to fake. We’re still adding tasks and are very open to ideas. If there’s a real-world manipulation problem you think belongs here, we want to hear it.

🤖 Co-Hosted By

  • Oli Robotics is eliminating the constraints that make physical automation slow and inflexible. By building robots that adapt to messy, real-world environments, we're unlocking the same rapid iteration cycles that transformed software.

  • KikiTora provides 40+ hours of multimodal human pose data, including RGB, depth, camera intrinsics and extrinsics, segmentation masks, and COCO keypoints and joints in 2D pixel space and 3D world space, for training locomotion policies.

Sponsors!

You’ll also have access to real robots on site, including Unitree G1, Open Droid R1D2/R2D3, Open Duck Mini and LeRobot SO-101/LeKiwi, so improvements aren’t theoretical. You’ll see them in action.

Technical Docs
https://docs.google.com/document/d/1VvU5bHygaDBd2SsUidZsjWfZ0BKCUY142_aQGMIwkXg/edit?usp=sharing

🧠 What teams can explore

Teams are free to choose their own technical approach. Possible directions include:

  • Transfer learning and fine-tuning of VLM and VLA models on task-specific data.

  • Closed-loop policies that improve alignment and execution through feedback.

  • Generalization across task variations, such as new shapes or layouts.
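As a toy illustration of the closed-loop direction above, here is a minimal sketch of a feedback-driven alignment loop. All names, gains, and the simulated "robot" are invented for this sketch; they are not part of the hack's tooling or any team's actual policy.

```python
import numpy as np

def closed_loop_align(observe, act, target, gain=0.5, tol=1e-3, max_steps=100):
    """Iteratively reduce alignment error using feedback.

    observe() returns the current 2D pose estimate;
    act(delta) applies a corrective motion.
    Returns the number of steps taken before the error fell below tol.
    """
    for step in range(max_steps):
        error = target - observe()
        if np.linalg.norm(error) < tol:
            return step
        # Proportional correction: move a fraction of the observed error.
        act(gain * error)
    return max_steps

# Toy simulation: a pose nudged toward a peg-insertion target.
pose = np.array([0.0, 0.0])
target = np.array([0.10, -0.05])
steps = closed_loop_align(lambda: pose.copy(),
                          lambda d: pose.__iadd__(d),
                          target)
```

With a proportional gain below 1, the error shrinks geometrically each step, which is the basic intuition behind closing the loop on tasks like shape insertion: small residual errors get observed and corrected rather than accumulated.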

Location
Founders, Inc. | San Francisco Lab
2 Marina Blvd B300, San Francisco, CA 94123, USA