Presented by
Trajectory Labs

AI Safety Thursday: Monitoring LLMs for deceptive behaviour using probes

About Event

LLMs show deceptive behaviour when they have an incentive to do so, whether that means faking alignment or lying about their capabilities. Work published earlier this year at Apollo Research proposed using linear probes to detect such behaviour from a model's internal activations.

In this talk, Shivam Arora will explain how these probes work and share his experience from follow-up research on improving them, conducted as part of a fellowship at LASR Labs.
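To give a flavour of the technique before the talk: a linear probe is typically a simple classifier (e.g. logistic regression) trained on a model's hidden-state activations to predict a property such as "is this response deceptive?". The sketch below is purely illustrative, using synthetic stand-in activations rather than anything from the Apollo or LASR work; the hidden size, shift magnitude, and labels are all assumptions.

```python
# Minimal sketch of a linear deception probe. All data here is synthetic:
# we fake "activations" in which deceptive examples are shifted along a
# fixed direction, then fit a logistic-regression probe to separate them.
import numpy as np

rng = np.random.default_rng(0)
d_model = 64  # hypothetical hidden size

# Synthetic stand-in for residual-stream activations.
direction = rng.normal(size=d_model)
direction /= np.linalg.norm(direction)
honest = rng.normal(size=(200, d_model))
deceptive = rng.normal(size=(200, d_model)) + 4.0 * direction

X = np.vstack([honest, deceptive])
y = np.array([0] * 200 + [1] * 200)  # 1 = deceptive

# Fit logistic-regression weights (w, b) by plain gradient descent.
w = np.zeros(d_model)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(deceptive)
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

# Probe predictions: threshold the sigmoid at 0.5.
preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = float(np.mean(preds == y))
```

Because the probe is linear, its learned weight vector can be read as a candidate "deception direction" in activation space, which is part of what makes this approach attractive for cheap, always-on monitoring.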

Event Schedule
6:00 to 6:30 - Food & Introductions
6:30 to 7:30 - Main Presentation & Questions
7:30 to 9:00 - Open Discussion

​If you can't attend in person, join our live stream starting at 6:30 pm via this link.

​This is part of our weekly AI Safety Thursdays series. Join us in examining questions like: 

  • ​How do we ensure AI systems are aligned with human interests? 

  • ​How do we measure and mitigate potential risks from advanced AI systems? 

  • ​What does safer AI development look like?

Location
30 Adelaide St E 12th floor
Toronto, ON M5C 3G8, Canada
Enter the main lobby of the building and let the security staff know you are here for the AI event. You may need to show your RSVP on your phone. You will be directed to the 12th floor where the meetup is held. If you have trouble getting in, give Georgia a call at 519-981-0360.