Presented by
AI-Plans

AI Safety Law-a-Thon

About Event

Join us for a weekend-long hybrid hackathon where lawyers and technical experts pair up to navigate the challenge of holding the right people accountable for AI-caused harm.

Our challenges emulate what practitioners will actually face:

  • Value chains where responsibilities and risks are spread across multiple stakeholders.

  • Contractual terms drafted by Big Tech that simply do not hold up under EU law.

  • Blurry fault lines where replication tests and liability shortcuts could lead to miscarriages of justice.

Participants will work with current regulatory frameworks (EU AI Act, Product Liability Directive, tort law) and real-world contract terms.

Each team must also integrate findings from AI Safety research (e.g. past model evaluations, findings on rare-event failures, or misgeneralisation defects), brought in by their technical partners.

The result will be legal arguments and risk reports that mirror the disputes courts and regulators are likely to face in the coming years.

For lawyers, whether in private practice or in-house, this is training in how to defend clients and employers in a context where contractual terms push liability downstream and regulation remains uncertain.

For technical AI Safety experts, it’s a rare chance to show how alignment failures translate into enforceable liability. You will also learn how to produce technical reports for legal proceedings, which is a very valuable and well-remunerated skill.

Each challenge will be resolved in pairs: lawyers develop legal strategies while technical experts build the risk reports to back them up.

The Challenges

Who Pays When the Model Fails?
When discriminatory outputs emerge in a deployed system, who absorbs liability: the foundation model provider, the SaaS developer, or the broker? This challenge forces teams to untangle EU regulation and contract law across a fragmented value chain.

Building a Countersuit
The standard Service Agreements used by big General Purpose AI Providers (frontier AI companies) typically push liability for risks downstream, even though latent defects originate upstream.

Teams must prepare technical risk assessments to support counterclaims and expose how disclaimers clash with EU liability doctrines.

The Non-Replication Trap
Courts may rely on replication tests that wrongly blame downstream modifiers if harms don’t reappear in the base model. Teams must show why this shortcut fails, and design arguments and evidence strategies that prevent liability misallocation.

Be part of a unique event preparing lawyers and researchers for the AI accountability disputes that are coming.

Location
London
UK