

Hidden Risks of Integrating AI: Managing Data Proliferation and Leakage
Ask the Expert ft. Patrick Walsh
Synopsis:
A discussion of the hidden risks in applications that leverage modern AI systems, especially those using large language models (LLMs), retrieval-augmented generation (RAG), and agentic workflows. We will demonstrate how sensitive data, such as personally identifiable information (PII), can be extracted through real-world attacks such as vector inversion, and discuss how to prevent those attacks with encryption and other privacy-enhancing technologies (PETs), combined with the wise application of policy.
Problem Statement:
AI is incredibly hungry for data – all data – much of it highly sensitive, ranging from PII to intellectual property to internal forecasts and roadmaps. This data is being duplicated, triplicated, or more to support AI search, AI services, model training, and agentic workflows, and most of the new copies are under-monitored and under-protected. Even worse, it's the wild west out there: employees often face few guardrails on how they use AI over internal, sensitive data. Third-party SaaS partners are also handling this data, and currently little is being done to apply privacy policies and laws to it.
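To make the risk concrete: a toy sketch (not IronCore's actual demo, and far simpler than a real vector inversion attack) of why the embedding vectors in a RAG pipeline are themselves sensitive data. A trivial hashed character-trigram function stands in for a real embedding model; an attacker who can read the stored vectors and query the same embedding function can recover likely plaintext by scoring candidate strings.

```python
# Toy illustration: embeddings are not one-way. An attacker holding only
# a stored vector plus access to the embedding function can reconstruct
# the underlying text by search -- the essence of an inversion attack.
# The "embedding" here is a hashed character-trigram stand-in, not a real model.
import hashlib
import math


def embed(text: str, dims: int = 64) -> list[float]:
    """Toy embedding: L2-normalized hashed character trigrams."""
    vec = [0.0] * dims
    padded = f"  {text.lower()}  "
    for i in range(len(padded) - 2):
        h = int(hashlib.md5(padded[i:i + 3].encode()).hexdigest(), 16)
        vec[h % dims] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]


def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))


# A "vector database" record: only the embedding is stored, no plaintext.
stored_vector = embed("SSN 123-45-6789 belongs to Jane Doe")

# The attacker scores candidate phrasings against the leaked vector.
candidates = [
    "quarterly revenue forecast for 2025",
    "SSN 123-45-6789 belongs to Jane Doe",
    "password reset link for admin account",
]
recovered = max(candidates, key=lambda c: cosine(stored_vector, embed(c)))
print(recovered)  # the PII record scores highest
```

Real attacks need no candidate list; published inversion techniques reconstruct text directly from vectors produced by commercial embedding models, which is why treating a vector store as "just numbers" rather than as a copy of the source data is the mistake this session examines.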
Pre-Discussion Resources:
VIDEO: Hidden Risks of Integrating AI: Extracting Private Data with Real-World Exploits
PDF DOWNLOAD: AI Shadow Data White Paper
PDF DOWNLOAD: Training AI Without Leaking Data White Paper
BLOG: Privacy-Preserving AI: The Secret to Unlocking Enterprise Trust
BLOG: MCP Servers are Electric But Not In The Way You Might Hope
Patrick Walsh
Patrick is the CEO and co-founder of IronCore Labs, a data security platform that protects the sensitive data within cloud applications without sacrificing the ability to use that data. Patrick has more than 20 years of experience building successful teams and products and solving difficult problems in the enterprise software and security domains. He is a named inventor on multiple patents covering novel cryptography and a long-time advocate for privacy and security. Outside of work, he enjoys behavioral psychology, photography, hacking, learning, investing, biking, swimming, and the outdoors.
Moderator: Tiffany Soomdat
Tiffany Soomdat helps organizations navigate complex data protection landscapes and build scalable, business-aligned privacy programs that do more than check compliance boxes. A trusted advisor, she empowers companies to thrive amidst evolving privacy and compliance regulations, safeguarding businesses and customers in the digital age. She holds a Master of Studies in Law (MSL) in Corporate Compliance and is certified in CIPP/US and OneTrust.