An AI agenda - Robots, rules, and really big questions
May 20, 2025

“We have to make sure AI doesn’t just automate what we've always done. It should elevate what’s possible.”
AI isn’t something on the horizon. It’s already woven into our daily workflows, often in ways we barely notice. As Redox team members, we’re right in the thick of it, navigating both the promise and the risks that come with this powerful technology.
Our aim is to make AI practical, secure, and empowering across our organization. With insights from our security engineering team and guest Brent Ufkes, we focus on the key strategies that work for us. When a new AI tool crops up, curiosity comes first, but we never skip the important questions: Who's using it? What kind of data is involved? How does it fit into our existing risk frameworks?
Our approach is audience-centered. We evaluate AI exactly as we would any other tool, by layering data classification and security reviews to make sure nothing sensitive, especially PHI, gets mishandled. Education sits at the core: regular updates in Slack, comprehensive living documents, and clear policies all aim to keep things transparent and flexible. Brent reminds us that all policies work together. AI doesn’t trump privacy or compliance, and training never ends.
We’re building a “culture of learning,” leaning on established security tools like DLP solutions and endpoint monitoring to keep things safe behind the scenes. AI tools are only as good as the context we provide and the prompts we write, and we’re always improving together.
The biggest takeaway? AI can give us a real edge if we put security, clarity, and cooperation first. At Redox, we don’t just adapt to change; we shape it, one secure workflow at a time.
Notable Moments
00:40 – What’s pushing us to talk about AI now?
04:22 – A call for AI mission statements
08:18 – When tools lead before people: the risk of reactive adoption
11:05 – Defining AI boundaries: what it should never replace
15:33 – ChatGPT, Canva, Magic School: tools already in use
18:42 – The importance of transparency and human oversight
22:55 – Reframing AI as “instructional support,” not just automation
Browse past episodes on our blog, or listen wherever you get your favorite podcasts.
Subscribe now to get notifications of new episodes in your inbox.
Have an idea for a future episode topic? Share it with us.
Learn more about the security of the Redox data interoperability platform here.
Contacts
Matt Mock: mmock@redoxengine.com
Meghan McLeod: mmcleod@redoxengine.com