Presented by
AI Native Dev

Building iOS Apps with Agents + Coding Agent Evals 101

Past Event
About Event

Coding Agent Evals 101

As coding agents become part of everyday development, understanding how to evaluate them is quickly becoming a core skill. This March, AI Native Dev London dives into the fundamentals of coding agent evaluations: why evals matter, how they're designed, and how they work in real-world systems. Join us for a practical, engineering-led session on measuring agent behavior, building confidence in agentic workflows, and learning how teams evaluate coding agents in production today.

Agenda

  • 18:00 Venue opens

  • 18:30 Talk 1: Building iOS Apps with Agents by Vivian Qu

  • 19:00 Talk 2: Coding Agent Evals 101: From Design to Real-World Examples by Max Shaposhnikov

  • 19:30 Networking

  • 20:30 THE END


Building iOS Apps with Agents

Coding agents are transforming software development, including mobile. In this talk, I'll cover the structural obstacles to using agents effectively in native iOS development, such as closed-source platform SDKs and slow validation loops. I'll share observations and lessons from my experience building small-scale apps and mobile infrastructure for thousands of developers. You'll learn about the tools, best practices, and workflow patterns that unlock AI native mobile development with agents today.

Vivian Qu, iOS Software Engineer @ Meta

Vivian works on Meta's native mobile platform. Her recent work has focused on building Meta's internal agentic workflow, mobile UI evals, and end-to-end verification for AI agents. She was previously Head of Engineering at EchoAI, building B2B AI products for customer service and sales teams. Before that, she was an iOS engineer at Pinterest, where she integrated React Native and helped grow the app's user base to 200M+ monthly active users.


Coding Agent Evals 101: From Design to Real-World Examples

I'll be giving a talk on designing evaluations for coding agents. We'll look at the unique challenges of evaluating agentic systems in general, and coding agents in particular, along with the practical approaches in use today. I'll explain why evals are becoming essential both for agent builders and for developers who rely on coding agents day to day, and how they help you build with agents confidently. I'll also share concrete use cases we solve at Tessl and how we measure end-to-end success with our own evaluation platform.

Max Shaposhnikov, Research Engineer at Tessl

Maxim is an AI Research Engineer at Tessl, where he is pioneering AI code generation through a Spec-Driven Development approach. He previously worked as an Applied Scientist at Amazon, focusing on pre-training multimodal LLMs for products such as the Alexa voice assistant.

Beyond hands-on research and engineering, Maxim enjoys teaching machine learning, breaking down complex concepts into simple explanations, and building innovative projects for fun.


This event is brought to you as part of the AI Native Dev Community. Consider subscribing to the Mailing List and Podcast, and joining our Discord Community.

Location
Tessl AI Limited
210 Pentonville Rd, London N1 9JY, UK