

Deepthinking Policy RETHINK: First Principles of AI and Acceleration
Background:
AI’s most important global impacts for policy and institutions are not just technical; they’re operational and cognitive. AI now makes reasoning, language, coordination and persuasion cheap, scalable, widely deployable and increasingly customisable.
Today, policy debate and policy development around the world are heavily shaped by hype and fear of AI, rather than by a shared, first‑principles understanding of what has actually changed.
This workshop steps back from the noise. We start with fundamentals: the implications of cheap cognition for policy design.
Historically, rules and policies in many jurisdictions were written for a time when human attention, verification and expertise were scarce, slow and costly. In that world, “human bottlenecks” quietly protected governance systems from certain kinds of failure and misuse.
Now, however, the gap between fast automation and slow rules is where much of today’s institutional risk lives. Under acceleration, purely reactive governance breaks down.
Governance is the foundational civilisational technology, and rules were humanity’s first automation system.
This workshop series treats rules and policy as an upgradeable governance technology, not just documents, and asks what “next‑generation rules” need to look like under acceleration: modular, interoperable components that behave more like infrastructure or an operating system layer—pieces that AI systems and traditional processes can plug into, and that can be updated without rewriting everything from scratch.
That’s critical both for how we govern AI and how we use AI inside governance systems.
What you’ll leave with:
This workshop is designed to help you:
build a shared first‑principles model of AI as a forcing function on policy—grounded in cheap, deployable cognition, not just sectoral “AI-proof policy” talk
begin drafting a Policy Assumption Map for one policy area you care about, making explicit where it relies on human scarcity, hard‑to‑fake authenticity and observable enforcement
develop an initial shortlist of “what breaks first” touchpoints and early warning signals to watch for in your own work—places where your policy settings are most exposed under acceleration
How to prepare:
To get the most out of the session, please:
Pick one policy area or program that you know reasonably well, and
Have a concrete workflow or decision process from that area in mind (for example, a typical case handling flow, an approval pipeline, or a recurring type of decision).
Facilitator:
Kelvin Chau holds a Master of Strategic Studies and a Bachelor of Arts from the Australian National University. He began researching the intersection between technological acceleration and institutional decline in 2016 and has since consolidated this work into the Open Governance Standard (draft), aimed at improving institutional agility by turning governance principles into mechanisms that can actually run.
He also has extensive experience across the public, private and research sectors, giving him a practical view of how institutional systems behave under pressure.
Disclaimer - Kelvin is delivering this workshop in a personal capacity. The frameworks and materials he draws on are part of his independent research and do not represent, and are not endorsed by, any government agency or organisation.
About AI Colab:
AI CoLab events are intentionally open and collaborative. For this session we may capture photos to share publicly. This is done in line with the Charter’s values: transparent, ethical innovation and knowledge sharing to accelerate collective learning (see join.aicolab.org). By participating in this event, you consent to being photographed and to those images being shared in accordance with this policy.