Presented by
CreateHER Fest

Animating Intent: Using Language Models to Drive Virtual Avatar Behavior

Google Meet
About Event

Make your AI avatars respond naturally — without over-engineering your animation system.

If you've wondered how to make virtual characters interpret emotion and context in real time, this hands-on workshop will show you how. You'll learn a behavior mapping framework used by XR and game dev teams to create avatars that feel intentional and alive.

What You'll Build

In 90 minutes, you'll create:

  • A functional avatar behavior prototype that triggers animations based on AI-detected emotion

  • An emotion-to-behavior mapping system defining "if this emotion, then that action" logic

  • An LLM prompt and output schema for behavior control (sketched below, after this list)

  • A demo-ready interaction scenario, perfect for hackathons
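
To make the mapping and output schema concrete, here is a minimal sketch in Python. Every name in it is an illustrative assumption (the emotion labels, trigger names, and schema fields), not the workshop's official materials; you'll adapt them to your own avatar rig.

# Illustrative "if this emotion, then that action" mapping.
# Emotion labels and trigger names are hypothetical placeholders.
EMOTION_TO_ACTION = {
    "joy": "wave",
    "sadness": "slump",
    "surprise": "step_back",
    "neutral": "idle",
}

# Output schema the LLM is asked to follow so its reply parses reliably.
BEHAVIOR_PROMPT = """You are the brain of a virtual avatar.
Reply ONLY with JSON matching this schema:
{"emotion": one of ["joy", "sadness", "surprise", "neutral"],
 "intensity": a number between 0 and 1,
 "utterance": what the avatar says out loud}"""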

You'll Learn How To

  • Translate language model outputs (intent, sentiment, emotion) into animation triggers

  • Design repeatable behavior frameworks for responsive virtual characters

  • Apply "if-then" logic to connect AI responses with avatar actions (see the dispatch sketch after this list)

  • Build systems that work across gaming, XR, conversational interfaces, and robotics
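
As a rough sketch of that if-then dispatch, the Python below parses a JSON reply shaped like the schema above and fires an animation trigger, falling back to an idle behavior when the output is missing or malformed. The trigger_animation stub and all names are assumptions standing in for whatever your engine or Blender setup exposes.

import json

# Minimal mapping, as in the earlier sketch (hypothetical trigger names).
EMOTION_TO_ACTION = {"joy": "wave", "sadness": "slump", "neutral": "idle"}

def trigger_animation(name: str, intensity: float) -> None:
    # Stub: replace with a call into your engine or avatar runtime.
    print(f"Playing '{name}' at intensity {intensity:.1f}")

def drive_avatar(llm_reply: str) -> None:
    # Translate the LLM's JSON reply into an animation trigger, with a safe fallback.
    try:
        data = json.loads(llm_reply)
    except json.JSONDecodeError:
        data = {}
    if not isinstance(data, dict):
        data = {}
    emotion = data.get("emotion", "neutral")
    intensity = float(data.get("intensity", 0.5))
    # If-then logic: unknown or missing emotions fall back to idle.
    trigger_animation(EMOTION_TO_ACTION.get(emotion, "idle"), intensity)

drive_avatar('{"emotion": "joy", "intensity": 0.8, "utterance": "Hi there!"}')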

Perfect For

  • Women exploring AI/ML, embodied AI, or XR development

  • Hackathon teams wanting expressive, differentiated demos

  • Builders creating conversational interfaces or virtual assistants

  • Developers interested in game NPCs or interactive storytelling

Intermediate level — some coding familiarity recommended.

What You Need

  • Laptop with internet access

  • Visual Studio Code, Blender, API keys (Anthropic, ElevenLabs)

  • Setup guide sent 3 days before the event (estimated setup time: 20 minutes)

Why This Wins Hackathons

Projects using emotion-driven avatar behavior stand out by demonstrating thoughtful system design and advanced AI integration — exactly what judges reward. You'll move beyond static demos to create experiences that feel natural and intentional.

About Your Facilitator

April Gittens is a Principal Cloud Advocate at Microsoft specializing in Generative AI and developer tools. Her career began in luxury fashion as a visual merchandiser for brands like Club Monaco, Saks, and Neiman Marcus, and she holds a Master's in Luxury & Fashion Management from SCAD. She later transitioned to tech as a project manager for the Consumer Technology Association (CTA), the producer of CES.

That pivot took April from learning Python to building conversational AI and chatbots. In 2019, she became a Twilio Champion and received the Twilio Doer award, and a life-changing Microsoft demo that same year launched her into Extended Reality (XR). During her time in XR, she championed XR safety, diversity, and inclusivity, serving as Director of Community & Education for the XR Safety Initiative.

April's hackathon debut at MIT Reality Hack 2020 produced Spell Bound, an award-winning immersive speech therapy app. She's collaborated with Warner Bros., Pluralsight, and Codecademy, sharing virtual stages with astronaut Stanley G. Love and Bill Nye.

Today, April explores embodied AI—what happens when intelligence gets a body. Her work spans physical robotics and virtual avatars, investigating how AI learns spatial reasoning, object manipulation, and natural interaction across physical and digital environments.
