Andrew Trask | It’s Time to Harvest the Secure AI Tech Tree
About Event

Foresight Institute’s Intelligent Cooperation Group

It’s Time to Harvest the Secure AI Tech Tree

Abstract: Foresight's "Secure AI Tech Tree" represents one of the clearest and most complete taxonomies of Secure AI ingredients available, complete with a catalog of the problems with each ingredient's solo use and directions for solutions. Scanning the tree, one observes that the solutions it proposes are almost universally formed through combinations with other branches. Yet the tree leaves these "solution combinations" unresolved: a tapestry of dis-integrated observations about ingredient pairings that can work. So what is the final product when they're actually combined? What is Secure AI?

In this talk, Andrew Trask will attempt to harvest the Secure AI Tech Tree and describe its vision in the form of an integrated theory of Secure AI. He will survey the combination of deep learning, cryptography, and distributed systems technologies listed on the tree, describing a fully combined integration which theoretically addresses many of the key problems in cooperative, privacy-preserving, secure, robust, transparent, verifiable, and aligned AI, the high-level subjects of the tree. In doing so, this talk will reveal the Tech Tree's solutions to many low-level problems while also uncovering a higher-order problem: what are the incentives that would trigger such an integrated solution, what are the problems preventing those incentives, and how can they be overcome? Trask believes this perspective on the tree can reveal the final major hurdle to broad Secure AI adoption in the world. Attendees should come ready for a rich, spicy discussion about the Secure AI Tech Tree and the next steps for the Secure AI community.

Speaker Bio: Andrew Trask is the Founder of OpenMined, a PhD Candidate at the University of Oxford, and a Senior Research Scientist at DeepMind. For roughly the past decade in these roles, Andrew has been working to understand the ingredients involved in Secure AI and to construct an integrated system and theory of change for their adoption. This work is the subject of his (nearly complete) PhD thesis and of OpenMined's catalog of prototypes, pilots, and papers, and it is informed by his time on DeepMind's language modelling research team (2017-2022) and ethics team (2022 onward).

Links to Work:
- Structured Transparency (paper, online course)

- Attribution-based Control (blog, publication)

- Broad Listening (blog, podcast)

Foresight Institute’s Intelligent Cooperation Group

A group of scientists, engineers, and entrepreneurs in computer science, ML, cryptocommerce, and related fields who leverage those technologies to improve voluntary cooperation across humans, and ultimately AIs.

Location
Foresight Intelligent Cooperation Virtual Seminar Group