

AI Agents in Real Workflows
What changes when AI systems move from generating output to taking action
Tuesday, April 28th, 2026
6:00–8:30 PM
Austin, TX (private venue shared upon confirmation)
Presented in partnership with Tecla
Tecla helps high-growth companies scale product and engineering teams and build AI-driven systems, providing top nearshore talent trusted by the fastest-growing technology companies.
Food and drinks for the evening are generously provided by our partner.
Invitation-only. Curated for senior technical leaders and operators.
About the Event
AI agents are moving from demos to production.
In demos, they generate output.
In production, they take action.
They trigger workflows. Modify data. Communicate with customers. Act across APIs and internal systems.
When they misfire, the consequences are operational, not theoretical.
This roundtable focuses on what actually happens when agents touch real systems: where risk surfaces, how autonomy is constrained, and who owns failure when something goes wrong.
No speculation. No hype.
Just technical leaders comparing live deployment lessons under real accountability.
The Curated Circuit Format
GILD is intentionally small and highly curated.
This is a private, off-the-record exchange under the Chatham House Rule.
No panels. No pitches. No demos.
After a short framing segment, the room moves into rotating micro-roundtables. Every table discusses the same prompt each round. Groups rotate so you meet a meaningful cross-section of the room.
Designed for candor. Built for operators.
Timed Agenda
6:00–6:30 PM – Check-in, food & drinks
6:30–6:45 PM – GILD welcome + sponsor introduction
6:45–7:05 PM – Framing discussion
7:10–8:05 PM – 3 breakout rounds (15–20 minutes each)
8:05–8:30 PM – Open mingling
Speaker Framing & Friction
The speaker will create productive tension around:
Demo performance vs. production behavior
False confidence in “human-in-the-loop” safeguards
Expanding risk surface area as agents gain autonomy
Where governance is lagging behind deployment
Breakout Prompts
Round 1 – Reality Check
When AI agents in your organization took real action in production, what actually happened, and where did unintended consequences surface?
Round 2 – Failure Modes
How are you deciding how much autonomy to grant agents, and where have you pulled that autonomy back after something broke?
Round 3 – Accountability & Containment
When an AI agent makes a wrong decision that affects customers, revenue, or operations, who is accountable, and is your organization structured for that reality?
If you scaled agents 10x from here, what would fail first?
Who This Is For
Reserved for senior technical leaders with real authority over architecture, deployment, and operational risk:
CTOs
Technical founders
VPs / Heads of Engineering
Senior platform leaders
Attendance is limited to protect candor and room quality.
Privacy
The Chatham House Rule applies.
Discussion stays private.