

Claude Code SWE #3: Skills and Workflow Engineering (read: Ralph Wiggum Loop)
Welcome to the third Claude Code meetup. Following our sessions on setup and project structure, we are diving deep into the engine that powers effective agents: Skills and Workflow Engineering.
What We'll Cover
As agentic coding matures, we are seeing a shift in focus: from Prompting to Context, and now to Workflow Engineering. This session is about moving beyond simple instructions and building high-performance agentic systems that are modular, verifiable, and scalable.
Skills: The Atomic Units
We'll start by looking at Skills. Whether you are following the official Anthropic standard or other structures, a skill is a self-contained capability.
We'll look at the concept of using SKILLS.md as a boundary for the context window. A skill should have a verifiable goal—did the skill actually execute what it was designed to do? And a discrete description—does the description clearly show when to use the skill?
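To make that concrete, here is a minimal sketch of what a skill could look like on disk. In the Anthropic layout each skill lives in its own folder with a SKILL.md whose frontmatter carries the name and description; the .claude/skills/ path follows that convention, while the skill itself (generate-changelog) and its "Verification" section are hypothetical, shown only to illustrate a discrete description and a verifiable goal.

```bash
# Illustrative only: a minimal skill scaffold. The .claude/skills/ path and the
# name/description frontmatter follow the Anthropic skill layout; the skill
# itself and its "Verification" section are hypothetical examples.
mkdir -p .claude/skills/generate-changelog

cat > .claude/skills/generate-changelog/SKILL.md <<'EOF'
---
name: generate-changelog
description: Generate CHANGELOG.md entries from merged commits. Use when the user asks to update or create a changelog before a release.
---

## Instructions
1. Read the commit history since the last release tag.
2. Group changes into Added / Changed / Fixed.
3. Append a new dated section to the top of CHANGELOG.md.

## Verification (the verifiable goal)
The skill succeeded only if CHANGELOG.md contains a new section dated today
and no other files were modified.
EOF
```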
Workflows: The Orchestration Layer
Workflows are often the "missing link" in agentic setups. But unlike CLAUDE.md or SKILLS.md, a workflow isn't a documentation file—it's executable logic. The outer loop. The harness that orchestrates your agent.
The Outer Loop: How patterns like the Ralph Wiggum Loop move orchestration logic out of the system prompt and into dedicated scripts that control the agent's execution cycle (see the sketch after this list).
Human on the Loop: Letting go of the need to be part of every input and letting the agent make the decisions. Your executive power has never been more valuable.
Beyond Prompting: Moving from "one big prompt" to spec-driven execution with fresh context per task, verification gates, and self-healing retry loops.
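Here is a minimal sketch in the spirit of the Ralph Wiggum Loop: a plain bash harness that gives the agent a fresh context on every iteration, runs a verification gate, and retries until the gate passes or a budget runs out. It assumes the Claude Code CLI is on your PATH; PROMPT.md, the test command, and the retry budget are placeholders for your own setup.

```bash
#!/usr/bin/env bash
# Minimal outer-loop harness in the spirit of the Ralph Wiggum Loop.
# Assumptions: the Claude Code CLI is installed, PROMPT.md holds the spec for
# the current task, and `npm test` is the project's verification gate.
# All of these are placeholders for your own setup.

MAX_ITERATIONS=10

for i in $(seq 1 "$MAX_ITERATIONS"); do
  echo "--- iteration $i: fresh context ---"

  # Fresh context per task: each invocation is a brand-new session, so the
  # only memory carried between iterations is the repository itself.
  # --dangerously-skip-permissions lets the loop run unattended; use with care.
  claude -p "$(cat PROMPT.md)" --dangerously-skip-permissions

  # Verification gate: the workflow, not the model, decides when we're done.
  if npm test; then
    echo "Verification passed after $i iteration(s)."
    exit 0
  fi

  echo "Verification failed; retrying with a clean context."
done

echo "Gave up after $MAX_ITERATIONS iterations." >&2
exit 1
```

The point is not the few lines of bash; it is that the loop, the context reset, and the gate live outside the prompt, where you can version, test, and swap them.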
Separation of Concerns: Skill vs. Workflow
One of the biggest challenges in agentic coding is knowing where to define behavior. We will demonstrate why separating these two layers is critical:
Skills = What (specs, definitions, goals)
Workflows = How (loops, orchestration, execution logic)
Skill Validation: Did the skill achieve its specific goal?
Workflow Validation: Did the outer loop correctly route, verify, and complete the full execution cycle?
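One hypothetical way to keep those two checks from blurring together is to give each layer its own gate in the harness. The function names and concrete checks below are illustrative, not part of any standard, and they reuse the changelog skill sketched earlier.

```bash
# Illustrative only: two separate gates, one per validation layer.
# Function names and checks are placeholders, reusing the changelog example.

validate_skill() {
  # Skill validation: did the skill achieve its own narrow, verifiable goal?
  # The hypothetical changelog skill promised a new section dated today.
  grep -q "$(date +%Y-%m-%d)" CHANGELOG.md
}

validate_workflow() {
  # Workflow validation: did the outer loop complete its full cycle,
  # i.e. tests green and the agent's work committed?
  npm test && [ -z "$(git status --porcelain)" ]
}

if validate_skill && validate_workflow; then
  echo "Both layers passed."
else
  echo "A layer failed; the workflow should route back for another pass." >&2
  exit 1
fi
```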
Why This Matters
In the beginning, a long system prompt feels like enough. But as your agent grows in complexity, that prompt becomes a "black box" that is hard to debug and impossible to scale.
Workflow Engineering is the practice of building that outer loop—the harness that runs your agent, resets context, verifies output, and iterates until done. By treating this orchestration layer as a first-class engineering artifact, you gain the ability to test, iterate, and swap components without breaking the entire system.
Who Should Come
Developers looking to move beyond basic Claude Code usage.
AI Engineers interested in the transition from prompting to workflow engineering.
Architects building complex agentic systems who need better mental models for modularity.
Anyone curious about outer-loop patterns and agentic orchestration.
No preparation is needed. We will review skill patterns and then look at real-world workflow implementations—from simple bash harnesses to sophisticated orchestration frameworks like Ralph.
Beyond Claude Code
While our examples use Claude Code, the architectural principles of separating "capabilities" (Skills) from "orchestration" (Workflows) are universal. These patterns are directly applicable to the Gemini ADK, OpenAI Swarm, LangChain, or any custom agentic framework you are building today.