Evaluating AI in the Social Sector
Presented by
The Sidebar
Skoll Week 2026
22 Going
About Event

About the session 

From math tutors to farmer advisory tools, generative AI (GenAI) is rapidly expanding in low- and middle-income countries. Yet many organizations are still asking: how do we know whether these tools are working? Evaluations can help, but there is little agreement on what they should include. Tech teams prioritize product performance and often overlook impact, while impact evaluators focus on outcomes but may neglect the underlying technology.

This session introduces the AI Evaluation Playbook, designed to help organizations assess and build better AI products. Built on a four-level framework, the playbook supports organizations in evaluating AI systems from model performance through product use and user behavior to impact on outcomes.

What to expect 

  • Opening remarks on AI evaluation. Speaker: Han Sheng Chia, Director, AI Initiative, Center for Global Development

  • Presentation on how Digital Green has been evaluating its AI farmer coach. Speaker: Rikin Gandhi, CEO, Digital Green

  • Funder panel discussion on how donors and governments are approaching evaluation. Moderator: Temina Madon, CEO, The Agency Fund

  • Closing remarks: Sid Ravinutala, Chief Data Scientist, IDinsight

  • Light refreshments 

Through practitioner and funder perspectives, we'll explore how organizations can build evaluation into their workflows, ask better questions about impact, and make smarter decisions about where and how to use AI. 

Who should attend 

Donors funding organizations that build AI products, and practitioners building those products. Space is limited; we hope you'll join us!

Location
Jesus College-Circle Room, Turl St, Oxford OX1 3DW