Cover Image for Safety in LLMs Needs Personalization Too!
Hosted By
11 Went

Safety in LLMs Needs Personalization Too!

Hosted by NICE AI Talk
Past Event
About Event

Youtube Live Stream: https://youtube.com/live/X6QsfkiqLR8

Talk Title: LLM Safety Should Be “Personalized” Too

When large language models answer questions, they often give the same response to everyone. In high-risk situations, this kind of “one-size-fits-all” approach can be dangerous.

Imagine this:
The same sentence — “I want everything to stop.”

  • For someone who is just venting about work stress, a comforting response may be exactly right ✅

  • But for a young person who is contemplating suicide, the same response could become the final push ❌

This is exactly what this talk is about: Personalized Safety.


Invited Speaker: Yuchen Wu
Yuchen Wu is a Ph.D. student at the University of Washington, where he works with Professor Aylin Caliskan and Professor Jindong Wang. His research focuses on personalized large language models and LLM safety.

Host: Boyang Xue
Boyang Xue is a Ph.D. student at the Chinese University of Hong Kong. His research interests include natural language processing, trustworthy AI, and Bayesian learning.
