Presented by
Unify
Build AI Your Way ✨
Hosted By
Paper Reading: ReFT
About Event
In this session, we welcome Zhengxuan Wu and Aryaman Arora from Stanford, co-authors of the paper "ReFT: Representation Finetuning for Language Models". The paper can be found here. Instead of adjusting model weights during fine-tuning, ReFT modifies the model's internal representations, achieving fine-tuning with up to 50 times fewer parameters than traditional PEFT methods. See you there!
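To give a flavor of the idea ahead of the talk, here is a minimal numpy sketch of a LoReFT-style intervention: a frozen hidden representation is nudged only within a learned low-rank subspace, leaving the model's weights untouched. All dimensions, variable names, and random values below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

d, r = 16, 4  # hidden size and low-rank intervention dimension (illustrative)
rng = np.random.default_rng(0)

h = rng.standard_normal(d)       # a frozen model's hidden representation
R = rng.standard_normal((r, d))  # low-rank projection (orthonormal rows in the paper)
W = rng.standard_normal((r, d))  # learned linear map
b = rng.standard_normal(r)       # learned bias

# LoReFT-style edit: move h, only inside the r-dimensional subspace
# spanned by R's rows, toward the learned target projection W @ h + b.
h_edited = h + R.T @ (W @ h + b - R @ h)

print(h_edited.shape)  # same shape as h: the weights are never modified
```

Only `R`, `W`, and `b` would be trained here (r·(2d+1) parameters per intervention), which is why the parameter count can be so much smaller than weight-based methods.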