

AI Tinkerers Tel Aviv: Who Shipped This?
We’re kicking off 2026 by doing something most GenAI teams don’t do early enough:
trying to break our own systems.
This meetup is about what actually happens after you ship: how AI systems fail in production, how they get abused, and how you can test and defend them before someone else does.
We’ll start with a hands-on look at AI security in the wild, move into what a real AI SDLC looks like end to end, and wrap up with rapid-fire lightning talks from builders who have shipped and learned the hard way.
If you are running GenAI in production, this one is not optional.
📅 Agenda (Subject to Speaker Acceptance)
18:30 – Doors Open & Builder Networking
Vibes, food, and high-signal peer-to-peer connection.
19:00 – Welcome & Technical Kickoff
AI Tinkerers Tel Aviv
Setting the stage for deep technical exchange.
19:15 – PowerPwn Uncovered: Advanced Agentic Recon & Exploitation
Avishai Efrat, Senior Security Researcher @ Zenity
This is not a theoretical security talk.
Avishai will demo PowerPwn, an open-source toolkit that lets you test how vulnerable your GenAI app is with a simple script.
You’ll see:
How real agentic systems are discovered and exploited
The most common misconfigurations in production
How to test your system before attackers do
If you’ve ever wondered “is our GenAI app safe enough?”, this talk gives you a concrete way to find out.
19:30 – Why Your Agent Should Design Its Own Questions
Tammuz Dubnov, Founder & CTO @ AutonomyAI
Moving from hardcoded forms to dynamic, context-aware, problem-focused clarification flows.
19:45 – Lightning Talks (7 minutes each)
Making Image Generation Work at Scale
Dina Matveev, Data Engineer @ Tastewise
Why image generation looks great in demos and breaks in production, and what it takes to make it fast, stable, and predictable at scale.
So You Want to Build Agents? Here’s What’s Going to Break
Tori Seidenstein, Co-founder & CEO @ Tadata
Lessons from 2 million tool calls on what actually breaks when building agents, across auth, MCP definitions, and reliability.
20:15 – Open Q&A & Community Announcements
A discussion of the future of AI security tooling.
20:30 – Event Concludes