

Evidence & Impact
This session is part of the Introduction to Political Technology course at Newspeak House, open to faculty and fellowship candidates only.
Evaluation is more than a technical exercise: it emerged as a way to discipline decision-making by separating activity from real outcomes. Yet even strong evidence does not guarantee that organisations will act on it — and poorly designed evaluations risk measuring the wrong thing altogether.
This session examines the principles of rigorous evaluation, why evidence often fails to influence practice, and how to design assessments that withstand scrutiny and drive real change.
What is evaluation, and why does it matter?
What does “good” vs. “bad” evaluation look like?
How do you decide when to run experiments?
How has evaluation been applied in political and civic technology?
Andreas Varotsis is a data scientist and AI engineer who works to improve operational delivery and services across government using technology, data, and evidence. He has spent the past decade in a range of roles across central government and front-line delivery, including the Metropolitan Police Service, the 10 Downing Street data science team, and the Incubator for AI.
He supports a range of cross-government communities, including Evidence House, which works to improve the use of data and IT in government, and the Society of Evidence Based Policing, which champions research to enhance policing practice and reduce crime.