Presented by
Unify
Build AI Your Way ✨

Paper Reading: ReFT

Google Meet
Past Event
About Event

In this session, we are welcoming Zhengxuan Wu and Aryaman Arora from Stanford, co-authors of the paper "ReFT: Representation Finetuning for Language Models". The paper can be found here. Instead of adjusting model weights during fine-tuning, ReFT modifies the model's internal representations, allowing it to fine-tune with up to 50 times fewer parameters than traditional PEFT methods. See you there!
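To give a rough flavour of the idea ahead of the session (this is not the authors' implementation; see the paper and their released code for the real thing), a LoReFT-style intervention leaves the base model frozen and edits a hidden state h as h + R^T(Wh + b - Rh), training only the small matrices R and W and the bias b. Below is a minimal PyTorch sketch under our own simplifying assumptions, including an unconstrained R (the paper keeps R orthonormal):

```python
import torch
import torch.nn as nn


class LoReFTIntervention(nn.Module):
    """Sketch of a LoReFT-style edit of hidden representations.

    Given a hidden state h from a frozen model, return
        h + R^T (W h + b - R h)
    where R is a low-rank (rank x hidden_dim) projection and W, b a learned
    linear source. Only R, W, b are trained, so the cost is roughly
    2 * hidden_dim * rank parameters per intervention.
    """

    def __init__(self, hidden_dim: int, rank: int):
        super().__init__()
        self.R = nn.Linear(hidden_dim, rank, bias=False)  # low-rank projection R
        self.W = nn.Linear(hidden_dim, rank)              # learned source W h + b

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (..., hidden_dim) hidden states at the token positions being edited.
        delta = self.W(h) - self.R(h)       # (..., rank)
        return h + delta @ self.R.weight    # R^T maps rank back to hidden_dim


# Hypothetical sizes: a 4096-dim model with rank 4 trains ~33K parameters per
# intervention, which is where the savings over weight-based PEFT come from.
intervention = LoReFTIntervention(hidden_dim=4096, rank=4)
h = torch.randn(2, 10, 4096)  # (batch, seq, hidden)
print(intervention(h).shape, sum(p.numel() for p in intervention.parameters()))
```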
