
AI Ethical Futures Lab #2
The Problem With "AI Ethics"
The Question
Who owns what when machines learn from everything?
An artist's portfolio trains a model they never consented to. A musician's voice gets cloned. A writer finds their style replicated at scale. Meanwhile, some creators are leaning in... experimenting with AI as collaborator, tool, or medium.
We're not here to relitigate the obvious harms. We're here to explore the harder questions: What does ethical creative AI look like? What do artists actually want? Where's the line between inspiration, reference, and theft... and does AI change that line?
April 1st. Parker Street Studios. A conversation, not a lecture.
What This Is
A discussion circle. Not a panel. Not a workshop. Just a room of people who care about this stuff, talking it through.
We'll open with a short framing (10 minutes max), then move into facilitated conversation. Bring your questions, your frustrations, your experiments. Whether you're an artist wrestling with AI in your practice, a technologist thinking about creative ethics, or someone trying to make sense of where this is headed... your perspective matters.
The venue shapes the conversation: We're in Tanya Slingsby's art studio. The work on the walls is human-made. The questions we're asking are about what that means now.
Questions We Might Explore
What would ethical AI training actually look like for creative work?
Is there a difference between AI learning from your work and a human learning from your work?
When does AI become a legitimate creative collaborator vs. a replacement?
What do artists lose when their style becomes reproducible at scale?
How do we build consent into systems designed to scrape everything?
What rights should creators have over their digital fingerprint?
Are there uses of AI in creative work that feel right? What makes them different?
We won't answer all of these. We might not answer any. But we'll think together.
Who Should Come
Artists: visual, audio, written, performance, any medium
Creators wrestling with AI: whether you're using it, avoiding it, or both
Technologists: especially if you build tools that touch creative work
Lawyers, policy folks: copyright, IP, the legal landscape
Anyone curious: you don't have to be an expert, just willing to engage
Come with a question. Something you're actually wondering about. It doesn't have to be polished.
What We're Building
The AI Ethical Futures Lab is a community-driven initiative within the BC + AI Ecosystem, dedicated to fostering responsible AI development through collaborative research, policy engagement, ethical framework development, and, above all, open discourse.
We bridge the gap between AI innovation and ethical consideration, ensuring British Columbia leads in building trustworthy, inclusive, and community-centered AI systems.
Our approach
Community-First Ethics: Ethical AI emerges from diverse voices, not corporate boardrooms. We put Indigenous knowledge systems, grassroots perspectives, and lived experiences front and centre.
Our policy-meets-practice philosophy translates ethical frameworks into actionable guidelines for developers, policymakers, and organizations implementing AI systems.
We champion open processes, clear accountability mechanisms, and inclusive decision-making from design to governance, with human well-being as the perpetual North Star.
About BC + AI Ecosystem Association
BC + AI is the province-wide layer: a community-driven, nonprofit industry association built to create public-interest infrastructure for AI in British Columbia.
Not a corporate lobby. Not a think tank. A commons where meetups turn into working groups, prototypes turn into shared tools, and community values turn into governance.
Vancouver AI was just the start. The ecosystem keeps growing across BC... Surrey, Comox Valley, Squamish, and beyond... each node bringing its own culture, needs, and experiments.
Join and support: https://bc-ai.ca/membership