

Falls Church, VA. AI at Work: Productivity vs. Replacement Anxiety
AI is making us faster than ever — and at the same time, quietly questioning our place at work.
We’re all feeling it: tools that boost output in minutes, automate entire workflows, and sometimes… do the job better. So where does that leave the human?
This conversation dives straight into that tension — no sugarcoating, no generic optimism. Just real perspectives from people working inside the shift.
Speakers:
Nina Borysova, MBA, Director, Product Management Technical (PMT), Data Analytics and AI Solutions, Mastercard
Dr. Volodymyr Tkach, CEO | Ph.D., Associate Professor | MIT Research Fellow | Cybersecurity, Threat Intelligence, Anomaly Detection, Artificial Intelligence
With over 15 years of experience as an Associate Professor in Computer Science and Information Technology, Volodymyr is a passionate cybersecurity researcher and educator who strives to create a safer digital landscape. He holds a PhD in Cybersecurity and has published over 40 papers in cybersecurity-related journals and conferences, including monographs and book chapters, focusing on anomaly detection and threat intelligence.
From 2019 to 2025, Dr. Tkach served as a Senior Project Manager in the Cybersecurity Department of the National Bank of Ukraine, where he applied his expertise in cybersecurity and artificial intelligence to enhance security solutions and manage complex projects. He has achieved significant results in predictive, non-pattern-based anomaly detection in time series, which allows previously unseen anomalous activity to be detected and prevented. He has also developed and delivered courses on Machine Learning and Security, Data-Driven Security, Security Metrics, and Risk Management, and is always eager to tackle new challenges in cybersecurity and artificial intelligence.
LLMs can dramatically speed up software development: faster prototyping, code generation, test scaffolding, and documentation. But they also introduce real risks — hallucinated logic, insecure defaults, hidden vulnerabilities, and IP or data leakage when prompts include sensitive code. This talk balances the upside with the pitfalls, showing where LLMs actually help and where they can quietly harm quality and security. We’ll cover common failure modes (unsafe dependencies, insecure patterns, missed edge cases, and prompt injection into developer tools) and translate them into practical habits: review gates, secure coding checklists, least‑privilege tooling, and automated security testing. The goal is simple: use LLMs to move faster without shipping weaker products.
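To make the "automated security testing" idea concrete, here is a minimal sketch of the kind of review-gate check the abstract alludes to: a toy scanner that flags a couple of risky patterns LLM-generated code sometimes introduces (`eval`/`exec` on untrusted input, `shell=True` in subprocess calls). The function name and the chosen patterns are illustrative, not from the talk; real teams would rely on mature linters and SAST tools rather than a script like this.

```python
import ast

# Bare built-in calls we want flagged during review (illustrative set).
RISKY_CALLS = {"eval", "exec"}

def find_risky_calls(source: str) -> list[str]:
    """Return human-readable warnings for risky call sites in `source`."""
    warnings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if not isinstance(node, ast.Call):
            continue
        # Direct calls like eval(...) or exec(...)
        if isinstance(node.func, ast.Name) and node.func.id in RISKY_CALLS:
            warnings.append(f"line {node.lineno}: call to {node.func.id}()")
        # Keyword argument shell=True, as in subprocess.run(cmd, shell=True)
        for kw in node.keywords:
            if (kw.arg == "shell"
                    and isinstance(kw.value, ast.Constant)
                    and kw.value.value is True):
                warnings.append(f"line {node.lineno}: shell=True in subprocess call")
    return warnings

snippet = """import subprocess
user_input = input()
eval(user_input)
subprocess.run(user_input, shell=True)
"""
for warning in find_risky_calls(snippet):
    print(warning)
```

A check like this can run in CI or as a pre-commit hook, so LLM-assisted changes still pass through the same gate as hand-written code.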
We’ll unpack:
– where AI actually boosts productivity vs. where it replaces thinking
– which roles are evolving — and which are quietly disappearing
– how teams are restructuring around AI (and what that means for individuals)
– how to stay valuable when execution becomes automated
– what “creative work” even means when AI can generate ideas on demand
– the psychological side: anxiety, denial, adaptation
If you’re feeling both excited and uneasy about AI — you’re not alone. And you probably shouldn’t ignore either feeling.
Come for clarity. Leave with a more honest map of what’s ahead.