

Frontend Testing with OpenAI Operator
Frontend testing is one of the hardest places to make AI agents work reliably. Agents that look smooth in demos often break down in practice: flaky performance, authentication errors, and inconsistent results across devices.
Join us to learn how to tackle those challenges head-on with the OpenAI Computer Use API and its open-source companion, tiny-CUA, moving beyond demos into production-ready testing workflows. (Technical Level: 200–300)
In this session, you'll learn how to:
Benchmark and diagnose the performance of AI agents in real frontend testing scenarios
Configure the OpenAI Computer Use API for consistent results across environments (see the sketch below)
Solve authentication hurdles without breaking agent workflows
Experiment with tiny-CUA, a lightweight library for building and debugging Computer Use agents
Identify and overcome the practical roadblocks that stand between prototypes and reliable systems
💬 Plus: dedicated debugging and peer exploration time, where you’ll work through issues alongside other builders experimenting with agent testing.
*Do note that this is a hands-on workshop. Please bring a laptop with internet access.
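
To give you a head start on setup, here is a minimal sketch of a Computer Use request via the OpenAI Responses API, following OpenAI's published Computer Use guide. The display size, environment, and prompt are illustrative assumptions; the session covers how to build and debug the full agent loop with tiny-CUA.

```python
# Minimal sketch of an OpenAI Computer Use request, based on OpenAI's
# published Computer Use guide. Display size, environment, and the test
# prompt are illustrative; your workshop setup may differ.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="computer-use-preview",
    tools=[{
        "type": "computer_use_preview",
        "display_width": 1024,      # match your test browser's viewport
        "display_height": 768,
        "environment": "browser",   # other options: "mac", "windows", "ubuntu"
    }],
    input=[{
        "role": "user",
        "content": [{
            "type": "input_text",
            "text": "Open the login page and verify the submit button is enabled.",
        }],
    }],
    truncation="auto",  # required for computer-use models
)

# The model responds with computer_call actions (click, type, screenshot, ...).
# An agent loop executes each action, captures a screenshot, and sends it back
# as a computer_call_output until the task completes.
for item in response.output:
    if item.type == "computer_call":
        print(item.action)
```

This execute, screenshot, respond loop is exactly where flakiness and authentication issues surface in real testing, and it's what the debugging block of the session is for.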
Schedule
1430 – 1515 Registration & Setup
1515 – 1545 Introduction: theory, library overview, and practical AI agent challenges
1545 – 1600 Hands-on setup with tiny-CUA
1600 – 1700 Debugging and open-ended exploration
👨‍💻 Sharing by Brian Chau
Brian Chau is a Founding Faculty member at Network School and an International Olympiad in Informatics Gold Medallist. He is the founder of Alliance for the Future, a non-profit advocating for open-source AI policy.
Don’t miss out! Slots are limited, and Lorong AI members will be prioritised.