

AI Ethical Futures Lab #1
Canada's AI Consultation Workshop
A civil society initiative to advance AI law, policy, and regulation centred around human rights and the public interest
Last fall, Canada ran a 30-day "sprint" to shape national AI policy. It's done now. The task force was industry-weighted, the survey had biased questions, and they used LLMs to analyze 11,000+ submissions with zero transparency. The report dropped in February: "conflicting recommendations" with no clear direction. 160+ civil society organizations called it what it was: "a predetermined rush job that excluded the communities most affected by AI".
Civil society responded with the People's AI Consultation... an ongoing, open process to document what the government's flawed consultation missed. On March 4, we're doing BC's part. We're working through the consultation guide together in a facilitated workshop led by Jesi Carson (Participedia, Design Nerds), trimming it to the questions that matter for BC, drafting responses in small groups, and deciding whether to submit individually or collectively.
We're doing practical policy work. You don't need a PhD or technical background. You need perspective, lived experience, and a willingness to participate. Whether you're affected by AI (workers, artists, educators), building AI systems, governing them, or just a concerned citizen—bring your voice. Policy gets written whether we participate or not. The question is: do industry voices dominate alone, or do community voices push for something better?
RSVP required (30-40 capacity).
Links:
People's AI Consultation: peoplesaiconsultation.ca
What went wrong: BetaKit article
Every AI conversation you've ever been in probably sounded like one of two camps. "AI is the greatest thing ever. It has the potential to revolutionize the world. Get on board!" OR "AI is stealing our work/jobs/autonomy. We should resist."
Here's the thing: both positions are lazy. Both avoid the harder work of holding nuance, complexity, contradiction, and competing values. The big platforms are strip-mining everything we create. They want our data to feed their machines. But those same tools are also transforming what's possible - for creativity, for learning, for building things we couldn't build before.
We walk forward holding both.
The BC + AI Ethical Futures Lab exists because we need spaces where we can be critical and engaged at the same time. Where we don't pretend those fears don't exist, and we don't pretend the possibilities aren't real.
What We're Building
The AI Ethical Futures Lab is a community-driven initiative within the BC + AI Ecosystem, dedicated to fostering responsible AI development through collaborative research, policy engagement, ethical framework development, and, above all, open discourse.
We bridge the gap between AI innovation and ethical consideration, ensuring British Columbia leads in building trustworthy, inclusive, and community-centered AI systems.
Our approach
Community-First Ethics: Ethical AI emerges from diverse voices, not corporate boardrooms. We put Indigenous knowledge systems, grassroots perspectives, and lived experiences front and centre.
Our policy-meets-practice philosophy translates ethical frameworks into actionable guidelines for developers, policymakers, and organizations implementing AI systems.
We champion open processes, clear accountability mechanisms, and inclusive decision-making from design to governance, with human well-being as the perpetual North Star.
About BC + AI Ecosystem Association
BC + AI is the province-wide layer: a community-driven, nonprofit industry association built to create public-interest infrastructure for AI in British Columbia.
Not a corporate lobby. Not a think tank. A commons where meetups turn into working groups, prototypes turn into shared tools, and community values turn into governance.
Vancouver AI was just the start. The ecosystem keeps growing across BC... Surrey, Comox Valley, Squamish, and beyond... each node bringing its own culture, needs, and experiments.
Join and support: https://bc-ai.ca/membership