

How to Build ChatGPT - Part 1: Prompting & Responses API
Welcome to a new series! In How to Build ChatGPT, you’ll learn, step by step, everything you need to build your very own ChatGPT application.
This will include the following topics/sessions:
Prompting / OpenAI Responses API
RAG / Connectors & Data Sources
Agents / Search
End-to-End Application / Vibe-Coding the Front End & Deploying the Back-End
Reasoning / Thinking
Deep Research
Agent Mode
Each of these features is required to build our very own ChatGPT application.
Nearly three years ago, ChatGPT was released to the world, becoming the fastest-growing app the world had ever seen. At the time, it was just an LLM with a front end.
Now, it’s so much more.
Fast-forward to 2025: GPT-5 was recently released on the heels of gpt-oss, OpenAI’s first open-weight model release since GPT-2 in 2019.
We intend to follow the journey that the OpenAI product team has taken.
For aspiring AI Engineers, we believe this approach is one of the best ways to learn to build 🏗️, ship 🚢, and share 🚀 customized production LLM applications for many use cases.
🛣️ Join us for the entire journey!
In Part 1, we will cover the Responses API. We will also introduce the series, and using the Responses API, we’ll start architecting the basic framework we’ll build on in future sessions.
It’s worth noting here that the Responses API took the place of the Assistants API, which was initially released in late 2023 and built on the backbone of the still-relevant OpenAI API standard established by the Completions API. There is a rich history here, and we’ll trace the path from the Completions API to the Assistants API to the Responses API and beyond (e.g., the harmony format).
We’ll root ourselves in the System, User, and Assistant role terminology that has become so ubiquitous, and we’ll discuss best practices for prompt engineering to keep in mind as we set off to construct a larger, more complex, performant, and efficient production LLM application.
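To make the role terminology concrete, here is a minimal sketch of a Responses API request using the `openai` Python package. The model name and prompt text are illustrative assumptions, not part of the session material, and the live call only runs if an API key is configured:

```python
import os

# A minimal sketch of a Responses API request using System / User roles.
# The model identifier and prompt text below are illustrative assumptions.
payload = {
    "model": "gpt-5",  # assumed model name
    "input": [
        {"role": "system", "content": "You are a concise, helpful assistant."},
        {"role": "user", "content": "In one sentence, what is the Responses API?"},
    ],
}

# Only call the API when a key is present (requires the `openai` package).
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    response = client.responses.create(**payload)
    print(response.output_text)  # the Assistant-role reply
```

The Assistant role appears on the other side of the exchange: the model’s reply comes back as assistant output, which you append to `input` on the next turn to carry the conversation forward.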
🤓 Who should attend
Anyone interested in building, shipping, and sharing production LLM applications!
AI Engineers curious about how to start architecting new production systems from first principles
AI Engineering leaders who want to build one or more ChatGPT-like production LLM applications for their organization
Speaker Bios
“Dr. Greg” Loughnane is the Co-Founder & CEO of AI Makerspace, where he is an instructor for their AI Engineering Bootcamp. Since 2021, he has built and led industry-leading Machine Learning education programs. Previously, he worked as an AI product manager, a university professor teaching AI, an AI consultant and startup advisor, and an ML researcher. He loves trail running and is based in Dayton, Ohio.
Chris “The Wiz” Alexiuk is the Co-Founder & CTO at AI Makerspace, where he is an instructor for their AI Engineering Bootcamp. During the day, he is also a Developer Advocate at NVIDIA. Previously, he was a Founding Machine Learning Engineer, Data Scientist, and ML curriculum developer and instructor. He’s a YouTube content creator whose motto is “Build, build, build!” He loves Dungeons & Dragons and is based in Toronto, Canada.
Follow AI Makerspace on LinkedIn and YouTube to stay updated about workshops, new courses, and corporate training opportunities.