Presented by
Trajectory Labs
18 Going

Testing LLM Cooperation in Multi-Agent Simulation

About Event

Ryan Faulkner explores recent papers on cooperation and safety in multi-agent LLM simulations. Core topics include:

  • Moral behaviour of agents in high-stakes, zero-sum, and morally charged social dilemmas

  • Governance and sanctioning dynamics, and the ways LLM agents often fail to cooperate and instead free-ride in common-pool resource games

  • Mechanism-design interventions, such as mediation, contracts, and elected leadership, that can steer agents toward safer outcomes

This research also reveals that LLMs adapt their behavior based on awareness of their conversational partner's identity.

Event Schedule
6:00 to 6:30 - Food and introductions
6:30 to 7:30 - Presentation and Q&A
7:30 to 9:00 - Open Discussions

If you can't attend in person, join our live stream starting at 6:30 pm via this link.

Location
30 Adelaide St E
Toronto, ON M5C 3G8, Canada
Enter the main lobby of the building and let the security staff know you are here for the AI event. You may need to show your RSVP on your phone. You will be directed to the 12th floor where the meetup is held. If you have trouble getting in, give Georgia a call at 519-981-0360.