

How to Build ChatGPT - Part 2: RAG & Connectors
Welcome to a new series! In How to Build ChatGPT, you’ll learn step-by-step everything you need to build your very own ChatGPT application.
This will include the following topics/sessions:
Prompting / OpenAI Responses API
RAG / Connectors & Data Sources
Agents / Search
End-to-End Application / Vibe-Coding the Front End & Deploying the Back-End
Reasoning / Thinking
Deep Research
Agent Mode
Each of these features is required to build our very own ChatGPT application.
Nearly three years ago, ChatGPT was released to the world - it was the fastest-growing app the world had ever seen. At the time, it was just an LLM with a front end.
Now, it’s so much more.
Fast-forward nearly three years to 2025: GPT-5 was recently released on the heels of gpt-oss, OpenAI’s first open-weight model release since GPT-2 in 2019.
We intend to follow the journey that the OpenAI product team has taken.
For aspiring AI Engineers, we believe that taking this approach to learning might be one of the best ways to learn to build 🏗️, ship 🚢, and share 🚀 customized production LLM applications for many use cases.
🛣️ Join us for the entire journey!
In Part 2, we will cover ChatGPT connectors. We will also discuss what Retrieval Augmented Generation (RAG) is and how connectors can be used to enhance our ability to do retrieval.
We will also introduce the idea of a “Tool,” which will help us to understand Agents.
We’ll root ourselves in how we use RAG to augment the context (what we might call context engineering today), and we’ll discuss best practices for setting up an initial RAG prompt to be used for direct retrieval or for connectors.
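To make the core idea concrete, here is a minimal sketch of direct retrieval feeding an augmented prompt, assuming the OpenAI Python SDK’s Responses API from Part 1. The retrieve() helper, its keyword-overlap scoring, and the model name are illustrative placeholders for whatever vector store, search index, or connector you actually use.

```python
# Minimal RAG sketch: retrieve relevant documents, stuff them into the
# prompt as context, and ask the model to answer from that context only.
# Assumes the OpenAI Python SDK's Responses API; retrieve() is a placeholder.
from openai import OpenAI

client = OpenAI()


def retrieve(query: str, documents: list[str], k: int = 3) -> list[str]:
    # Placeholder retriever: naive keyword-overlap scoring.
    # In practice, this would be a vector store, search index, or connector.
    scored = sorted(
        documents,
        key=lambda doc: len(set(query.lower().split()) & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]


def answer_with_rag(query: str, documents: list[str]) -> str:
    # Augment the context: join the top-k retrieved documents into the prompt.
    context = "\n\n".join(retrieve(query, documents))
    prompt = (
        "Answer the question using only the provided context.\n"
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    response = client.responses.create(model="gpt-4.1", input=prompt)
    return response.output_text
```

The same augmented-prompt pattern applies whether the context comes from your own documents or from a ChatGPT-style connector; only the retrieval step changes.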
🤓 Who should attend
Anyone interested in building, shipping, and sharing their own ChatGPT-like production LLM application!
AI Engineers who aim to architect production LLM applications according to emerging industry best practices
AI Engineering leaders who want to build one or more ChatGPT-like production LLM applications for their organization
Speaker Bios
“Dr. Greg” Loughnane is the Co-Founder & CEO of AI Makerspace, where he is an instructor for their AI Engineering Bootcamp. Since 2021, he has built and led industry-leading Machine Learning education programs. Previously, he worked as an AI product manager, a university professor teaching AI, an AI consultant and startup advisor, and an ML researcher. He loves trail running and is based in Dayton, Ohio.
Chris “The Wiz” Alexiuk is the Co-Founder & CTO at AI Makerspace, where he is an instructor for their AI Engineering Bootcamp. During the day, he is also a Developer Advocate at NVIDIA. Previously, he was a Founding Machine Learning Engineer, Data Scientist, and ML curriculum developer and instructor. He’s a YouTube content creator whose motto is “Build, build, build!” He loves Dungeons & Dragons and is based in Toronto, Canada.
Follow AI Makerspace on LinkedIn and YouTube to stay updated about workshops, new courses, and corporate training opportunities.