
Making Privacy-Preserving AI Accessible: A Practitioner-Oriented Framework

Virtual
About Event

Ask the Expert ft. Hana Habib

Synopsis:

The increasing deployment of machine learning systems in sensitive domains has heightened awareness of privacy risks, yet significant barriers remain in translating theoretical privacy guarantees into practical implementations. Building on the NIST Adversarial Machine Learning Taxonomy (2025), we present a community-driven framework that addresses the implementation gap in privacy-preserving machine learning (PPML). Our contribution centers on a curated repository of over 30 privacy-preserving tools, each mapped to specific adversarial threats and accompanied by implementation guidance, code examples, and empirically grounded performance assessments. We organize these tools around five operational ML pipeline phases (Data Collection, Data Processing, Model Training, Model Deployment, and Privacy Governance), with systematic risk identification and structured decision frameworks for each phase. We illustrate the framework's practical application through MedAI, a case study of a fictitious healthcare company that demonstrates methodical privacy-preserving technique selection in the model training phase. This work contributes to the broader goal of making privacy-aware AI development more accessible by providing actionable guidance that bridges the theory-practice gap in PPML implementation.

Problem Statement:

Our work addresses the gap between advances in privacy research and the adoption of privacy-preserving machine learning by providing actionable guidance for practitioners seeking to mitigate privacy risk during ML development.

Related Privacy Enhancing Technologies (PETs):

  • Differential privacy

  • Federated learning

  • Synthetic data generation
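As a minimal illustration of the first of these techniques, the sketch below releases the mean of a bounded dataset under ε-differential privacy using the Laplace mechanism. All names and parameters here are illustrative and are not taken from the framework's tool repository; production use should rely on a vetted library such as OpenDP or Google's differential-privacy library.

```python
import math
import random


def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def dp_mean(values, lower, upper, epsilon):
    """Release the mean of `values` with epsilon-differential privacy.

    Values are clipped to [lower, upper] so the sensitivity of the
    mean is bounded by (upper - lower) / n, which calibrates the noise.
    """
    clipped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clipped) / len(clipped)
    sensitivity = (upper - lower) / len(clipped)
    return true_mean + laplace_noise(sensitivity / epsilon)
```

Smaller values of `epsilon` add more noise and give a stronger privacy guarantee; the clipping bounds must be chosen independently of the data to keep the guarantee valid.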

Pre-Discussion Resources:

Guest Expert: Hana Habib

Hana Habib is an Assistant Teaching Professor in the Software and Societal Systems Department (S3D) and the Associate Director of Carnegie Mellon University's Master's in Privacy Engineering program. Her research supports the development of tools that promote trust and safety in digital spaces, focusing on issues with societal impact such as security and privacy. Hana completed her PhD in Societal Computing and a Master's in Information Technology - Information Security at Carnegie Mellon University.

Moderator: Kimberly Landcaster

Trusted privacy advisor who guides data protection, drives operational excellence, and leads with integrity by aligning with InfoSec, Security, GRC, Compliance, and Data Governance functions. Board member, speaker, and author.