An Intelligence Infiltration – Hacking AI Agents from Silicon Valley’s Hottest Startups with guest Rene Brandel

Oct 15, 2025

“We’re not trying to avoid AI because of the scary security issues; we’re trying to deploy it securely so we can unlock its true potential.”

When you work in healthcare, you learn that protecting patient data is both a policy and a promise. In this conversation, we explore what happens when innovation outpaces precaution, especially as AI tools evolve faster than most of us can keep up with them.

Our guest, Rene Brandel of Casco, joined us to share his experience hacking publicly available AI agents from some of Silicon Valley’s hottest startups. The goal was to expose vulnerabilities. Rene reminded us that AI agents are more like digital employees than servers. They each have access that should be scoped, limited, and monitored. Yet in just 30 minutes, he and his team were able to breach nearly half of the AI agents they tested.

We dug into three common issues he uncovered: cross-user data access, code execution without boundaries, and a lack of sandboxing standards. Each one poses unique risks in healthcare, where data is both valuable and life-impacting. A single AI agent with misconfigured permissions could pull psychiatric records while scheduling orthopedic appointments. That is the digital version of handing every nurse a master key to every cabinet in the hospital.
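
To make the cross-user risk concrete, here is a minimal Python sketch (our illustration, not code from the episode) of deny-by-default scoping, where an agent acts for exactly one user with an explicit allowlist of record types. All names, such as AgentContext and fetch_record, are hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical names throughout (AgentContext, fetch_record); this is an
# illustration of deny-by-default scoping, not code from the episode.

@dataclass
class AgentContext:
    agent_id: str
    acting_for_user: str                       # the one user this session serves
    allowed_record_types: set = field(default_factory=set)

def audit_log(ctx: AgentContext, owner: str, record_type: str) -> None:
    # In production this would write to an append-only audit store.
    print(f"AUDIT agent={ctx.agent_id} owner={owner} type={record_type}")

def load_record(owner: str, record_type: str) -> dict:
    # Stand-in for the real storage layer.
    return {"owner": owner, "type": record_type, "data": "..."}

def fetch_record(ctx: AgentContext, owner: str, record_type: str) -> dict:
    """Check scope before the agent touches any data; deny by default."""
    if owner != ctx.acting_for_user:
        raise PermissionError("cross-user access: record belongs to another user")
    if record_type not in ctx.allowed_record_types:
        raise PermissionError(f"{record_type} is outside this agent's scope")
    audit_log(ctx, owner, record_type)          # every read leaves a trail
    return load_record(owner, record_type)

# A scheduling agent scoped to appointments cannot read psychiatric notes:
ctx = AgentContext("scheduler-01", "patient-42", {"appointment"})
fetch_record(ctx, "patient-42", "appointment")   # allowed and audited
# fetch_record(ctx, "patient-42", "psych_note")  # raises PermissionError
# fetch_record(ctx, "patient-99", "appointment") # raises PermissionError
```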

What stood out most was Rene’s call for a mindset shift. AI should be treated with the same rigor as any system that handles protected health information. That means authentication matters, sandboxing isn’t optional, and every line of code should be written with security in mind.
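
On the sandboxing point, here is one illustrative sketch, again ours rather than anything shown on the episode, of a single containment layer: running agent-generated code in a separate process with a timeout and an empty environment. True to the “don’t roll your own security” rule Rene describes, production systems should reach for hardened, purpose-built sandboxes rather than a script like this:

```python
import os
import subprocess
import sys
import tempfile

# Illustrative sketch only. It runs agent-generated Python in a separate OS
# process with a timeout and an empty environment so the child inherits no
# secrets. A real deployment should use a hardened sandbox (containers,
# gVisor, Firecracker, and so on) instead of a script like this.

def run_untrusted(code: str, timeout_s: float = 5.0) -> str:
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, "-I", path],  # -I: isolated mode, ignores user site dirs
            capture_output=True,
            text=True,
            timeout=timeout_s,             # kill runaway or looping code
            env={},                        # no inherited credentials or config
        )
        return result.stdout
    finally:
        os.unlink(path)                    # clean up the temp file

print(run_untrusted("print(2 + 2)"))       # prints "4"
```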

As we wrapped up this episode, Rene emphasized that we don’t need to fear AI, but we do need to guide it responsibly. Innovation is inevitable; how we secure it will determine whether AI becomes healthcare’s greatest ally or its biggest liability. Sometimes the smartest thing we can do for the future is pause long enough to make sure the back door is locked tight.

Episode Highlights

  • [01:09] – Rene Brandel on why he began hacking AI agents to find security gaps.
  • [02:30] – How quickly AI systems can be breached without strong security oversight.
  • [03:51] – The risk of cross-user data access and violating HIPAA’s minimum necessary standard.
  • [07:05] – Understanding permissions creep and why AI agents should be treated like individual users.
  • [10:23] – How malicious actors can use code execution capabilities to manipulate AI systems.
  • [13:44] – Sandboxing AI agents and why “don’t roll your own security” is the new rule.
  • [15:23] – Three areas of AI procurement to prioritize: authentication, capabilities, and integration.
  • [18:11] – Why traditional pen tests miss AI-specific threats and the need for continuous testing.
  • [21:21] – Megan reflects on the speed of AI advancement and the importance of security champions.

Browse past episodes on our blog, or listen wherever you get your favorite podcasts.

Subscribe now to get notifications of new episodes in your inbox.

Have an idea for a future episode topic? Share it with us.

Learn more about the security of the Redox data interoperability platform here.