Hidden Risks of Integrating AI: Managing Data Proliferation and Leakage
Ask the Expert ft. Patrick Walsh
Synopsis:
A discussion of the hidden risks in apps leveraging modern AI systems, especially those using large language models (LLMs), retrieval-augmented generation (RAG), and agentic workflows. We will demonstrate how sensitive data, such as personally identifiable information (PII), can be extracted through real-world attacks, including vector inversion attacks. We will also discuss and demonstrate how to prevent such attacks using encryption and other privacy-enhancing technologies (PETs), along with the wise application of policy.
Problem Statement:
AI is incredibly hungry for data of every kind, much of it very sensitive, ranging from PII to intellectual property to internal forecasts and roadmaps. This data is duplicated, triplicated, or more to feed AI search, AI services, model training, and agentic workflows, and most of these new copies are under-monitored and under-protected. Worse, it is often a wild west, with few guardrails on how employees use AI over internal, sensitive data. Third-party SaaS partners are also managing this data, and currently little is being done to apply privacy policies and laws to it.
Pre-Discussion Resources:
VIDEO: Hidden Risks of Integrating AI: Extracting Private Data with Real-World Exploits
PDF DOWNLOAD: AI Shadow Data White Paper
PDF DOWNLOAD: Training AI Without Leaking Data White Paper
BLOG: Privacy-Preserving AI: The Secret to Unlocking Enterprise Trust
BLOG: MCP Servers are Electric But Not In The Way You Might Hope
Patrick Walsh
Patrick is the CEO and co-founder of IronCore Labs, a data security company whose platform protects the sensitive data within cloud applications without sacrificing the ability to use that data. Patrick has more than 20 years of experience building successful teams and products and solving difficult problems in the enterprise software and security domains. He is a named inventor on multiple patents covering novel cryptography and a long-time advocate for privacy and security. Outside of work, he enjoys behavioral psychology, photography, hacking, learning, investing, biking, swimming, and the outdoors.
Moderator: Janelle Hsia
Janelle Hsia is the President and Founder of Privacy SWAN Consulting, where she works as a trainer, consultant, and trusted advisor for strategic and tactical decision-making. While focused on the field of privacy and data protection, she is not a lawyer; she brings a diverse background with strong leadership, technical, and business skills spanning 20 years across project management, IT, privacy, security, data governance, and process improvement. She is also Co-Founder and Vice President of the Institute of Operational Privacy Design.