

VAM! AI Reading Group: Paper "LLMs can hide text in other text of the same length"
Yunus will present the paper LLMs can hide text in other text of the same length.
📄 Paper: LLMs can hide text in other text of the same length
https://arxiv.org/pdf/2510.20075
To maximize engagement, please try to read the paper in advance.
Paper summary (3 lines): The paper introduces a rank-based steganography method that lets a large language model hide an arbitrary text perfectly inside another fluent text of the same length. By recording and reusing token ranks in the model’s probability distribution, it achieves full-capacity encoding while keeping the stegotext plausible and human-readable. Experiments on Reddit posts show these stegotexts remain within the normal plausibility range of real text, raising concerns for AI safety, censorship, and trust in language outputs.
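To get a feel for the rank-reuse idea before the session, here is a minimal toy sketch (not the paper's actual algorithm): a deterministic hash stands in for an LLM's ranked next-token distribution, the secret's token ranks are recorded under one context and replayed under another, and decoding inverts the process. The vocabulary, `ranked_vocab` function, and `stego_seed` parameter are all illustrative assumptions.

```python
import hashlib

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "tree"]

def ranked_vocab(context):
    """Toy stand-in for an LLM: deterministically rank the vocabulary given a
    context (via a hash here; a real model would sort by predicted probability)."""
    key = " ".join(context)
    return sorted(VOCAB,
                  key=lambda t: hashlib.sha256((key + "|" + t).encode()).hexdigest())

def encode(secret_tokens, stego_seed="cover"):
    """Hide the secret: record each secret token's rank under the secret's own
    prefix, then emit the token sitting at that same rank under the stego prefix."""
    stego = []
    for i, tok in enumerate(secret_tokens):
        rank = ranked_vocab(secret_tokens[:i]).index(tok)       # rank under secret context
        stego.append(ranked_vocab([stego_seed] + stego)[rank])  # same rank, stego context
    return stego

def decode(stego_tokens, stego_seed="cover"):
    """Invert encoding: recover each rank from the stego prefix, then replay it
    under the secret prefix reconstructed so far."""
    secret = []
    for i, tok in enumerate(stego_tokens):
        rank = ranked_vocab([stego_seed] + stego_tokens[:i]).index(tok)
        secret.append(ranked_vocab(secret)[rank])
    return secret
```

Because both sides query the same deterministic ranking, the round trip is exact and the stegotext has exactly as many tokens as the secret, mirroring the paper's same-length, full-capacity property.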
Want to present?
The list of papers will be available here: https://docs.google.com/spreadsheets/d/1HET5sjnHjwiF3IaCTipR_ZWspfgglqwdBFRWAfKBhp8/edit?usp=sharing
To connect with the group, join the Discord: https://discord.gg/teJvEejs94
Timeline:
🕠 6:30 PM – Arrival & Networking
🗣️ 6:45 PM – 7:15 PM – Paper Presentation
🗣️ 7:15 PM – Discussions
About the Facilitator
Issam Laradji is a Research Scientist at ServiceNow and an Adjunct Professor at the University of British Columbia. He holds a PhD in Computer Science from the University of British Columbia, and his research interests include natural language processing, computer vision, and large-scale optimization.
Looking forward to discussing the latest AI Papers!