Cover Image for Master Responsible AI Practices: A 4-Part Intensive for Tech & Society Leaders
Private Event

Master Responsible AI Practices: A 4-Part Intensive for Tech & Society Leaders

Hosted by Charley Johnson
About Event

📣 New 4-Part Intensive Workshop: Making Responsible AI Real (Not Just a PDF)

Practical, hands-on training for tech & society leaders who want responsible AI to actually shape practice — not sit on a shelf.

Every organization now has “responsible AI principles.”

Very few have responsible AI practices.

This 4-part, interactive workshop series is designed to close that gap. It will take place on:

  • Session 1: March 13 from 12:00 - 2:00 pm ET

  • Session 2: March 20 from 12:00 - 2:00 pm ET

  • Session 3: March 27 from 12:00 - 2:00 pm ET

  • Optional Q&A: March 30 from 1:00 - 2:00 pm ET

  • Session 4: April 10 from 12:00 - 2:00 pm ET

Across four fast-paced, hands-on sessions, you’ll learn how to move your organization from beautifully written principles → day-to-day behaviors, decisions, and norms that actually embody responsible AI.

If you’re tired of ethics that lives in a slide deck, this is for you.


🌱 Workshop 1 — Shift the Mindsets That Quietly Undermine Responsible AI

From “Tech-First” → Sociotechnical Intelligence

The Problem:

Most responsible AI failures don’t begin with a model. They begin with a mindset.

In this workshop, we’ll surface the three default mindsets that sabotage responsible AI from the inside out:

  • Neutrality → treating data as objective truth

  • Techno-determinism → assuming technology drives change

  • Techno-solutionism → believing tools can “fix” social problems

The Shift:

You’ll learn to replace these with a sociotechnical mindset — one that makes visible the power, incentives, relationships, and histories shaping your system.

Hands-On Practice:

Using the Tech-First Thinking Diagnostic, you will:

  • Reframe a current AI initiative through relationships, power, and context

  • Map hidden feedback loops the technology will reinforce

  • See how the tool will actually behave inside your system

You Leave With:

A practical, repeatable method for spotting “solutionist traps” — and a clearer picture of how technology and culture will co-shape outcomes.


🌐 Workshop 2 — Center the System, Not the Tool

From “AI Strategy” → Relational Infrastructure & System Understanding

The Problem:

Organizations fixate on the tool: audits, parameters, “use cases.”

But tools don’t determine outcomes — systems do.

The Shift:

In this workshop, you’ll learn to step back from the tool and diagnose the sociotechnical system it will enter:

  • The relationships that enable or block change

  • The incentives that shape behavior

  • The narratives that orient decisions

  • The boundaries that hide harm

  • The affordances that shape practice

Hands-On Practice:

You’ll use two powerful mapping tools:

  • Boundary & Actor Mapping: Expand the system to reveal hidden stakeholders and unseen influences shaping outcomes

  • Relational Infrastructure Diagnostic: Assess the trust, norms, and patterns of interaction that determine whether AI will enable flourishing or reinforce inequity

You Leave With:

A clear map of the real system you’re intervening in — and a strategy grounded in human dynamics, not technology hype.


➗ Workshop 3 — Division of Labor

Everyone is asking what AI can do.

This session helps you answer the more important question: what should it do — and what should remain human?

AI systems are excellent at pattern-finding, data management, and rule-following. They can generate text, images, plans, and options at breathtaking speed. But they can’t make meaning. They can’t decide what matters, why it matters, or what kind of future an organization is actually trying to build.

Only humans can do that.

In this session, we’ll examine:

  • Where work requires judgment, discernment, and values-based decision-making

  • How organizations quietly outsource meaning-making when they confuse output with understanding

  • What it looks like to design human–AI systems that are interdependent rather than competitive

You'll come away with practices and frameworks for mapping this division of labor onto your organizational processes and programmatic designs.

🔄 Workshop 4 — Reconfigure Your System for Adaptive, Accountable AI

From “Control & Evaluation” → Sense-Making, Coordination & Human Judgment

The Problem:

Complex systems don’t respond to static plans or top-down control.

Yet most organizations evaluate AI with backward-looking metrics and narrow dashboards.

The Shift:

This workshop will teach you how to:

  • Sense-make, not just evaluate: Detect early signals, shifting narratives, new feedback loops, and emerging risks

  • Translate across difference: Help team members with very different views of what AI is and isn't collaborate without flattening those differences

  • Design decision-making where AI supports — but never replaces — human judgment

Hands-On Practice:

  • Momentum Monitoring: Track relational, behavioral, and narrative signals of change

  • Multiple Perspectives Checklist: Ensure diverse epistemologies, roles, and power positions shape decisions

  • Translation Diagnostic: Assess whether your organization is structured to bridge difference or reinforce silos

  • AI-in-Decision-Making Checklist: Define where AI can assist — and where only humans can decide

You Leave With:

A concrete plan to help your organization translate responsible AI principles into day-to-day practices.


✔ What You’ll Walk Away With

  • 9 hours of live, interactive sessions

  • Practical diagnostics, tools, and templates you can use immediately

  • A 50+ page Playbook with exercises & tools

  • A replicable process for translating Responsible AI principles into everyday practice

  • New ways to see — and shift — the sociotechnical dynamics shaping your system

  • Private membership in a community of practice for tech & society leaders grappling with how to shift their system amidst uncertainty and across difference. With participants from organizations like New_Public, Discord, Center for Tech & Civic Life, Siegel Family Endowment, Annie E. Casey Foundation, Stanford’s Digital Civil Society Lab, LAist, and many more.

Location
https://us06web.zoom.us/j/3707607961