

MakersDay: Make Your Coding Agents Run Without You
We are hosting another MakersDay — a day of theory and hands-on building together with software engineers in Vienna.
The focus: How to set up coding agents like Cursor and Claude Code so they can operate as autonomously as possible, with minimal human steering.
👉 Learn how to build the context layer that makes coding agents effective — rules, standards, architectural diagrams, and domain models that agents can actually read and follow
👉 Set up feedback loops that close the gap: give your coding agent access to your UI, infrastructure, observability, and tracing so it can self-correct without waiting for you
👉 Use coding agents for the full development cycle — from implementation to code review
👉 Structure your repo as machine-readable infrastructure: business logic, docs, specs, and constraints that agents treat as their source of truth
👉 Connect with Vienna engineers who care about quality and craft
👉 Pizza, beers, and API credits to keep the momentum going
🔥 Why
What determines whether a coding agent can operate autonomously is the environment you give it — the rules, the feedback loops, the access to your stack, and the organizational context it can draw on. We call this the coding agent harness.
At Flinn, we build AI-powered software for medical device manufacturers — a domain where accuracy is non-negotiable. We've learned what it takes to get coding agents to ship reliable software with minimal human intervention, and we want to share that with you.
💫 Details
A practical, project-driven day focused on designing and building a coding agent harness — the structured environment that sits between your team and the coding agents doing the work.
The goal is to reduce human involvement as much as possible while maintaining production-level quality. This is not a course on building agentic AI products. It's about making coding agents like Cursor and Claude Code radically more effective at building software for you.
You will learn how to reason about:
🖊️ Setting up agent context — docs, rules files, architectural diagrams, and coding standards that coding agents can consume and follow
🔄 Closing the feedback loop — giving coding agents access to your CI, observability, tracing, and even your UI so they can verify their own output
🛠️ Structuring your repo as the single source of truth — business logic, specs, domain models, and dependency constraints as machine-readable infrastructure
🧠 Using coding agents for code review — not just generation, but automated review against your team's standards
📈 Building custom linters with error messages that teach coding agents how to fix their own mistakes
🔍 Designing architectural constraints and dependency rules that coding agents cannot violate
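To make the linter idea above concrete, here is a minimal sketch of a custom lint rule whose error message teaches the fix. Everything in it is illustrative: the rule name `NO_DIRECT_NOW` and the suggested project helper `utils.clock.now()` are hypothetical, not part of any real codebase or tool.

```python
"""Sketch of a custom lint rule with a self-explaining error message.

Hypothetical example: the rule name NO_DIRECT_NOW and the suggested
replacement helper `utils.clock.now()` are illustrative assumptions.
"""
import ast


def check_no_direct_datetime_now(source: str, filename: str = "<string>") -> list[str]:
    """Flag calls to datetime.now() and spell out the expected fix."""
    errors = []
    tree = ast.parse(source, filename=filename)
    for node in ast.walk(tree):
        if (
            isinstance(node, ast.Call)
            and isinstance(node.func, ast.Attribute)
            and node.func.attr == "now"
            and isinstance(node.func.value, ast.Name)
            and node.func.value.id == "datetime"
        ):
            # The message names the rule, the reason, and the exact fix,
            # so a coding agent reading lint output can self-correct
            # without waiting for a human reviewer.
            errors.append(
                f"{filename}:{node.lineno}: NO_DIRECT_NOW: "
                "call utils.clock.now() instead of datetime.now() "
                "so tests can freeze time; import it via "
                "`from utils.clock import now`."
            )
    return errors


bad = "from datetime import datetime\nts = datetime.now()\n"
for error in check_no_direct_datetime_now(bad, "orders.py"):
    print(error)
```

The design point is the message itself: a bare "datetime.now() is forbidden" forces the agent to guess, while naming the replacement and the import line turns every lint failure into an instruction the agent can act on directly.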
❓ FAQ
Do I need prior experience with AI agents?
No. We start from the fundamentals (what a harness is and why it matters) and build up from there.
Is this theory or hands-on?
Hands-on building. Theory blocks exist only to unlock the next practical step.
Can I join remotely?
No. This is in-person only, because collaboration and iteration speed are far better face-to-face.
Are spots limited?
Yes. We cap the cohort to keep collaboration tight and ensure every participant gets support.
Does it cost anything?
No, everything is 100% free and sponsored.