

AI Safety Law-a-Thon
Join us for a weekend-long hybrid hackathon where lawyers and technical experts pair up to navigate the challenge of holding the right people accountable for AI-caused harm.
Our challenges emulate what practitioners will actually face:
Value chains where responsibilities and risks are spread across multiple stakeholders.
Contractual terms drafted by Big Tech that simply do not hold up under EU law.
Blurry fault lines where replication tests and liability shortcuts could lead to miscarriages of justice.
Participants will work with current regulatory frameworks (EU AI Act, Product Liability Directive, tort law) and real-world contract terms.
Each team must also integrate findings from AI Safety research (e.g., past model evaluations, findings on rare-event failures or misgeneralisation defects), brought in by their technical partners.
The result will be legal arguments and risk reports that mirror the disputes courts and regulators are likely to face in the coming years.
For lawyers, whether in private practice or in-house, this is training in how to defend clients and employers in a context where contractual terms push liability downstream and regulation remains uncertain.
For technical AI Safety experts, it’s a rare chance to show how alignment failures translate into enforceable liability. You will also learn how to produce technical reports for legal proceedings, a valuable and well-remunerated skill.
Each challenge will be tackled in pairs: lawyers develop legal strategies while technical experts build the risk reports to back them up.
Senior Advisors:
Our Advisors are technical and legal experts with a track record in policy, engineering, governance and legal research. They will judge the quality of the work produced by our participants.
Charbel-Raphaël Segerie
Executive Director of the French Center for AI Safety (CeSIA), an OECD AI expert, and a driving force behind the AI Red Lines initiative. His technical research spans RLHF theory, interpretability, and safe-by-design approaches. He has supervised multiple research groups across ML4Good bootcamps, ARENA, and AI safety hackathons, bridging cutting-edge technical AI safety research with practical risk evaluation and governance frameworks.
Dr. Chiara Gallese
Researcher at Tilburg Institute for Law, Technology, and Society (TILT) and an active member of four EU AI Office working groups. Dr. Gallese has co-authored papers with computer scientists on ML fairness and trustworthy AI, conducted testbed experiments addressing bias with NXP Semiconductors, and has managed a portfolio of approximately 200 high-profile cases, many valued in the millions of euros.
Yelena Ambartsumian
Founder of AMBART LAW, a New York City law firm focused on AI governance, data privacy, and intellectual property. Her firm specializes in evaluating AI vendor agreements and helping companies navigate downstream liability risks. Yelena has published in the Harvard International Law Journal on AI and copyright issues, and is a co-chair of IAPP's New York KnowledgeNet chapter. She is a graduate of Fordham University School of Law with executive education from Harvard and MIT.
James Kavanagh
Founder and CEO of AI Career Pro, where he trains professionals in AI governance and safety engineering. Previously, he led AWS's Responsible AI Assurance function and was the Head of Microsoft Azure Government Cloud Engineering for defense and national security sectors. At AWS, his team was the first of any global cloud provider to achieve ISO 42001 certification.
Ze Shen Chin
Co-lead of the AI Standards Lab and Research Affiliate with the Oxford Martin AI Governance Initiative. He has contributed to the EU GPAI Code of Practice and analysed various regulatory and governance frameworks. His research currently focuses on AI risk management. Previously, he spent over a decade in the oil and gas industry.
The Challenges
Who Pays When the Model Fails?
When discriminatory outputs emerge in a deployed system, who absorbs liability: the foundation model provider, the SaaS developer, or the broker? This challenge forces teams to untangle EU regulation and contract law across a fragmented value chain.
Building a Countersuit
The standard service agreements used by big General Purpose AI providers (frontier AI companies) usually push liability for risks downstream, yet latent defects originate upstream.
Teams must prepare technical risk assessments to support counterclaims and expose how disclaimers clash with EU liability doctrines.
The Non-Replication Trap
Courts may rely on replication tests that wrongly blame downstream modifiers if harms don’t reappear in the base model. Teams must show why this shortcut fails, and design arguments and evidence strategies that prevent liability misallocation.
Be part of a unique event preparing lawyers and researchers for the AI accountability disputes that are coming.