
Securing Open Source AI: The OML Framework

Hosted by Daniel Kang
Past Event
About the Event:
The moment you release open model weights, you usually lose control. But what if you didn't have to?

We are proposing a shift toward AI-native cryptography. This talk explores the OML (Open, Monetizable, Loyal) framework—a new approach to giving model builders cryptographic-style guarantees of attribution and control, even when the model is open.

Key Takeaways:

  • 🛡️ Defense: How model fingerprinting can enforce attribution.

  • ⚔️ Offense: How adversaries can currently shatter "robust" fingerprints (and what it will take to fix them).

  • 🐺 Bonus: A look at the Werewolf Agents Tournament—evaluating AI deception and persuasion in adversarial settings.
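To make the fingerprinting idea concrete, here is a minimal sketch of how attribution checks typically work in this setting: the model owner fine-tunes secret (trigger, response) pairs into the weights, then later proves provenance by querying those triggers. All names below are illustrative placeholders, not the actual OML API.

```python
# Secret fingerprint pairs known only to the model owner (hypothetical values).
FINGERPRINTS = {
    "k7#qz trigger alpha": "response-alpha",
    "v2@mx trigger beta": "response-beta",
    "p9!rt trigger gamma": "response-gamma",
}

def verify_attribution(model, threshold=0.8):
    """Claim attribution if enough secret triggers elicit the embedded responses."""
    hits = sum(
        1 for trigger, expected in FINGERPRINTS.items()
        if model(trigger) == expected
    )
    return hits / len(FINGERPRINTS) >= threshold

# Toy stand-ins for a fingerprinted model and an unrelated model.
fingerprinted = FINGERPRINTS.get            # echoes the embedded responses
unrelated = lambda prompt: "i don't know"   # never matches a fingerprint

print(verify_attribution(fingerprinted))  # → True
print(verify_attribution(unrelated))      # → False
```

The "offense" part of the talk concerns exactly this scheme's weak point: an adversary who can fine-tune, merge, or distill the open weights may erase the trigger responses, which is why robustness of the fingerprints matters.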

This is an essential talk for builders and researchers at the intersection of AI Safety, Cryptography, and Decentralized Tech.

Speaker:
Edoardo Contente (AI Researcher @ Sentient | Princeton Alum)

Join the AER LABS Network:
🌐 Website: https://aerlabs.tech/
✍️ Blog: https://aerlabs.tech/blogs
📺 YouTube: https://youtube.com/@aerlabs
💼 LinkedIn: https://linkedin.com/company/aer-labs
💬 Discord: https://discord.gg/cR9Vn9zR

Location
Network School
Jalan Forest City 5, Pulau Satu 8, 81550 Gelang Patah, Johor Darul Ta'zim, Malaysia