

Tune Up Your AI: Live HumaneBench Implementation Workshop
An official Human+Tech Week event
How humane is your AI? And how would you actually know?
Accuracy is easy to measure. Human impact isn’t—but that doesn’t mean we shouldn’t try.
HumaneBench is now a live humane eval: it runs in production for continuous monitoring, showing how LLMs treat users. This is your chance to come into the shop for a tune-up.
Hear from folks who have already implemented HumaneBench, then dive in to get it into your own codebase. It's low-friction and free to implement.
Think of "humane observability" as a subset of "LLM observability."
Learn more:
Our case study with Storytell
Our Feb 19 talk at MIT Media Lab's AHA initiative
Our TechCrunch article
Hosts:
Erika Anderson, Founder @ Building Humane Tech, Co-Founder @ Storytell.ai
Jack Senechal, Founder @ Mirror Astrology
Sarah Ladyman, Experience Designer @ Building Humane Tech
Andalib Samandari, AI & Data Science Architect @ Georgia State University
Resources
Open-source repo: our OSS is your starting point
Our Substack, which covers our work in humane tech
Community Slack: Connect with others in the grassroots humane tech movement
Hosted by Building Humane Tech
👉 Guidelines: We're committed to a welcoming, inclusive environment. See our community guidelines.
👉 Consent: By attending, you agree to photography and post-event coverage.
Sign up now: how often do you get to move the needle in the direction of humanity?
Questions? Email erika @ buildinghumanetech.com