OnCall Lab: Let AI do your Debugging
Presented by Beyond Prompts
About Event

A live demo of terminal-first debugging where AI pulls evidence from your running app (logs and, optionally, code slices) so you don’t have to.


Why this exists

Debugging steals focus.

Not because you can’t read code—because you end up doing the same chores every time:

  • chase the right logs

  • grep for clues

  • stitch context across services

  • paste snippets into an AI and hope it doesn’t guess

This lab shows a different loop: give AI the chores, keep control.


What you’ll see

We’ll debug a real incident live.

OnCall runs your command, streams logs, and lets the AI investigate like a careful teammate:

  • it finds the relevant lines

  • it connects signals across services (when present)

  • it makes a claim and points to the evidence

  • it suggests a next check you can run right away

No dashboards. No tab-hopping. No “trust me.”


What you’ll take away

A simple way to delegate debugging without increasing risk:

  • how to ask questions that force evidence

  • how to move from symptom → proof → hypothesis → quick verification

  • how to use access controls (logs-only vs logs + code) so you only share what you’re comfortable with


What to Bring

To get the most out of this lab, we recommend setting up the tool beforehand so you can try the workflow live on your own code.

1. Install the CLI. Follow the quickstart guide to install oncall on your machine: 👉 Installation Guide

2. Bring a Repo. Have a local project ready (Node, Python, Go, Docker, etc.) that you can run in your terminal. You’ll be able to test the oncall wrapper on this project during the session.


⚡ What is OnCall?

OnCall is a terminal-based debugging assistant that bridges the gap between your run command and your AI tools.

Instead of copy-pasting logs into a browser, you simply run your app with the oncall prefix:

$ oncall npm run dev

$ oncall docker-compose up

This streams your live logs and error traces directly to the AI, allowing it to investigate issues, grep for clues, and link evidence across services without you leaving the terminal.
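
To make that concrete, here is a minimal sketch of the general wrapper pattern (not OnCall’s actual implementation; the file name, function names, and the captured-lines list are purely illustrative): launch the command as a child process, echo every log line to your terminal as usual, and keep the same lines on hand for an assistant to search.

import subprocess
import sys

def run_wrapped(command):
    # Launch the user's command exactly as typed, merging stderr into stdout
    # so error traces land in the same stream as ordinary logs.
    proc = subprocess.Popen(
        command,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
    )
    captured = []
    for line in proc.stdout:
        sys.stdout.write(line)   # the app still prints to your terminal as normal
        captured.append(line)    # ...while the same lines are kept for later investigation
    proc.wait()
    return proc.returncode, captured

if __name__ == "__main__":
    # Usage mirrors the oncall prefix, e.g.: python wrapper.py npm run dev
    exit_code, logs = run_wrapped(sys.argv[1:])
    # A real tool would hand these lines to the AI to search; here we just count them.
    print(f"[wrapper] exited with {exit_code}; captured {len(logs)} log lines")

The point of the pattern is that nothing about how you run your app changes; the wrapper only adds a second consumer of the output.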


Who it’s for

If you:

  • spend too much time in logs

  • use ChatGPT/Claude sometimes but don’t trust it during incidents

  • want debugging to take less time and less mental energy

…you’ll get value.


Agenda (60 minutes)

  • 5 min — the problem: where time actually goes

  • 30 min — live incident: delegate the investigation end-to-end

  • 10 min — second scenario: a different failure mode

  • 10 min — how it works in plain English + how to try it

  • 5 min — Q&A


What is OnCall (one minute)

Run your app like:

  • oncall npm run dev

  • oncall docker-compose up

  • oncall python app.py

Logs stream live. You ask questions. The AI can read logs across services under one project ID, and (if you allow it) inspect small code slices to back up its conclusions.
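
As a purely illustrative sketch of the cross-service idea (made-up names and data, not how OnCall stores anything): if every captured line is tagged with a project ID and a service name, one question can be answered against all services at once.

from collections import defaultdict

project_logs = defaultdict(list)   # project_id -> list of (service, log line)

def ingest(project_id, service, line):
    project_logs[project_id].append((service, line))

def search(project_id, needle):
    # Every line, across all services in the project, that mentions the term.
    return [(svc, ln) for svc, ln in project_logs[project_id] if needle in ln]

# Hypothetical lines from two services in the same project:
ingest("demo-project", "api", "ERROR checkout failed: timeout calling billing")
ingest("demo-project", "billing", "WARN card processor latency 9200ms")
print(search("demo-project", "timeout"))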


After the lab

We’ll share the commands and the workflow so you can replay the same loop on your own repo.


This is an online event.

You can join directly from the Luma event page. The Google Meet button will appear there 15 minutes before we start, so just hop onto this page around 4:45 PM and you’ll see it.


Built by Builders, Backed by the Best

We’ve hosted engineers, financial experts, founders, and researchers from some of the world’s most forward-thinking companies. Here’s a look at who’s been in the room.

Location
Bengaluru
Karnataka, India
This event will be conducted online via Google Meet. The meeting link will be available on the Luma event page one hour before the session begins.