
Workshop: Beyond the Principles: How to Make Responsible AI Real
A 3-Part Intensive for Tech & Society Leaders

Hosted by Charley Johnson
About Event

📣 New 3-Part Intensive Workshop: Making Responsible AI Real (Not Just a PDF)

Practical, hands-on training for tech & society leaders who want responsible AI to actually shape practice — not sit on a shelf.

Every organization now has “responsible AI principles.”

Very few have responsible AI practices.

This 3-part, interactive workshop series is designed to close that gap. It will take place on:

  • March 13 from 12 - 2 pm ET

  • March 20 from 12 - 2 pm ET

  • March 27 from 12 - 2 pm ET

Across three fast-paced, hands-on sessions, you’ll learn how to move your organization from beautifully written principles → day-to-day behaviors, decisions, and norms that actually embody responsible AI.

If you’re tired of ethics that lives in a slide deck, this is for you.


🌱 Workshop 1 — Shift the Mindsets That Quietly Undermine Responsible AI

From “Tech-First” → Sociotechnical Intelligence

The Problem:

Most responsible AI failures don’t begin with a model. They begin with a mindset.

In this workshop, we’ll surface the three default mindsets that sabotage responsible AI from the inside out:

  • Neutrality → treating data as objective truth

  • Techno-determinism → assuming technology drives change

  • Techno-solutionism → believing tools can “fix” social problems

The Shift:

You’ll learn to replace these with a sociotechnical mindset — one that makes visible the power, incentives, relationships, and histories shaping your system.

Hands-On Practice:

Using the Tech-First Thinking Diagnostic, you will:

  • Reframe a current AI initiative through relationships, power, and context

  • Map hidden feedback loops the technology will reinforce

  • See how the tool will actually behave inside your system

You Leave With:

A practical, repeatable method for spotting “solutionist traps” — and a clearer picture of how technology and culture will co-shape outcomes.


🌐 Workshop 2 — Center the System, Not the Tool

From “AI Strategy” → Relational Infrastructure & System Understanding

The Problem:

Organizations fixate on the tool: audits, parameters, “use cases.”

But tools don’t determine outcomes — systems do.

The Shift:

In this workshop, you’ll learn to step back from the tool and diagnose the sociotechnical system it will enter:

  • The relationships that enable or block change

  • The incentives that shape behavior

  • The narratives that orient decisions

  • The boundaries that hide harm

  • The affordances that shape practice

Hands-On Practice:

You’ll use two powerful mapping tools:

  • Boundary & Actor Mapping: Expand the system to reveal hidden stakeholders and unseen influences shaping outcomes

  • Relational Infrastructure Diagnostic: Assess the trust, norms, and patterns of interaction that determine whether AI will enable flourishing or reinforce inequity

You Leave With:

A clear map of the real system you’re intervening in — and a strategy grounded in human dynamics, not technology hype.


🔄 Workshop 3 — Reconfigure Your System for Adaptive, Accountable AI

From “Control & Evaluation” → Sense-Making, Coordination & Human Judgment

The Problem:

Complex systems don’t respond to static plans or top-down control.

Yet most organizations evaluate AI with backward-looking metrics and narrow dashboards.

The Shift:

This workshop will teach you how to:

  • Sense-make, not just evaluate: Detect early signals, shifting narratives, new feedback loops, and emerging risks

  • Translate across difference: Help team members who hold very different views of what AI is and isn't collaborate without flattening their perspectives

  • Design decision-making where AI supports — but never replaces — human judgment

Hands-On Practice:

  • Momentum Monitoring: Track relational, behavioral, and narrative signals of change

  • Multiple Perspectives Checklist: Ensure diverse epistemologies, roles, and power positions shape decisions

  • Translation Diagnostic: Assess whether your organization is structured to bridge difference or reinforce silos

  • AI-in-Decision-Making Checklist: Define where AI can assist — and where only humans can decide

You Leave With:

A concrete plan to help your organization translate responsible AI principles into day-to-day practices.


✔ What You’ll Walk Away With

  • 3 interactive, live workshops (2 hours each via Zoom)

  • Practical diagnostics, tools, and templates you can use immediately

  • A 50+ page Playbook with all exercises

  • A replicable process for translating Responsible AI principles into everyday practice

  • New ways to see — and shift — the sociotechnical dynamics shaping your system

Can’t make these session times?

Send me a message — I may open a second workshop series.

Location
https://us06web.zoom.us/j/3707607961