AI Research Circle #5: Chain of Thought [Public]
About the AI Research Circle
The AI Research Circle is a community gathering where we explore AI research together. No research background required—just curiosity. Each session, we pick a topic, break it down, and open it up for discussion. The goal: make cutting-edge ideas accessible and spark conversation across disciplines.
Session Details
How did a simple prompting trick become the backbone of modern AI reasoning?
In 2022, researchers at Google showed something surprising: if you ask a language model to "show its work," it gets dramatically better at solving problems. This technique—chain-of-thought prompting—has since evolved from a clever prompting hack into a core capability trained directly into reasoning models like OpenAI's o1 and DeepSeek-R1.
This session, we'll trace that arc:
The original paper — Wei et al. 2022 and the insight that started it all
The extensions — zero-shot CoT ("let's think step by step"), self-consistency, and beyond
Where we are now — how chain-of-thought became built into the models themselves
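If you'd like a concrete feel for the first two stops on that arc before the session, here is a minimal sketch in Python. The exemplar question is a paraphrase of a well-known example from the Wei et al. paper; the function names and the toy answer list are our own illustration, and no model is actually called:

```python
from collections import Counter

# A single worked exemplar in the style of Wei et al. 2022:
# the reasoning steps appear before the final answer.
COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

def build_cot_prompt(question: str) -> str:
    """Few-shot CoT: prepend a worked exemplar so the model imitates
    step-by-step reasoning before answering."""
    return COT_EXEMPLAR + f"Q: {question}\nA:"

def build_zero_shot_cot_prompt(question: str) -> str:
    """Zero-shot CoT: no exemplars, just the trigger phrase."""
    return f"Q: {question}\nA: Let's think step by step."

def self_consistency(final_answers: list[str]) -> str:
    """Self-consistency: sample several reasoning paths from the model,
    extract each path's final answer, and take a majority vote."""
    return Counter(final_answers).most_common(1)[0][0]

# Toy illustration: five sampled reasoning paths, three of which end in "11".
print(self_consistency(["11", "9", "11", "11", "12"]))
```

The point of the sketch: chain-of-thought began as nothing more than careful prompt construction, and self-consistency is just a vote over multiple sampled answers—which is part of why it was so easy for these ideas to migrate into model training itself.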
Pre-read (optional but encouraged): Wei et al., "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models" — arxiv.org/abs/2201.11903