AI is here, and it's already rewiring how we work. People aren't waiting for the official green light; they're experimenting, interacting, and integrating generative AI tools and LLMs into their workflows right now. And like every disruptive technology before it, AI has become incredibly polarizing.
Security leaders recognize both the transformative opportunity AI promises and the real-world risks that cannot be ignored. Despite lessons learned from SaaS sprawl and the shadow infrastructure of the past, AI presents a different set of challenges. With ungoverned AI tooling spreading rapidly, many security leaders find themselves scrambling to catch up to their people's fervent usage.
People are going out and signing up for ChatGPT, and if the company doesn't let them use their corporate email, they use their personal account. They'll start bringing in their AI notetakers and store the data in personal directories.
Joel Meiners // Constant Contact
Director of Governance, Risk, and Compliance
Unfortunately, many companies are flying blind to why and how AI is being used in the first place. Not knowing means IT and security teams can't guide its usage. This is where a clear, practical cybersecurity policy can guide people toward safer choices. One that is not just focused on technology "yays" or "nays", but contains concrete details on approved use cases and secure behaviors when engaging with AI.
A well-defined responsible AI use policy can spark innovation and make a secure culture a reality. A bad one is a policy in name only: security theater dressed up for a new era. For security, governance, and IT leaders, the path forward is clear: now is the moment to lead with intention and make AI a partner in progress.
We spoke with GRC director Joel Meiners and cybersecurity executive Angelo Longo to explore governance best practices and how to craft AI policies that actually work, for both security teams and the people they're meant to guide. Let's dive in.
First, let's cover why AI usage governance is difficult in the first place. The traditional, top-down policy creation process (write a policy, roll it out, enforce) was built for an era defined by known software, clear access controls, and predictable systems. Generative AI is none of those things, especially with the introduction of agentic AI acting on behalf of people.
It's fast-moving, employee-led, and extremely context-sensitive (translated: needs lots of your data). And while compliance frameworks like NIST's AI Risk Management Framework and ISO/IEC 42001 offer a helpful starting point from a technical controls perspective, teams struggle to keep up with the speed and nuance of AI adoption in practice.
That's why most current policy frameworks are struggling to keep pace. In particular:
These breakpoints demonstrate the same root issue: traditional policy wasn't built to guide human behavior in a cognitively overloaded environment. AI tools demand decisions in real time, not after reading a policy PDF. We also can't expect people to memorize a handbook of dos and don'ts. What they need is guidance that meets them in the moment: clear expectations, well-scoped use cases, and timely behavior interventions that reduce ambiguity and support confident choices.
This is the role of cognitive security: anticipating the mental load, surfacing guidance when it counts, and shaping secure behaviors as they happen. And that's where thoughtful AI usage policy design begins.
We know the value of AI, and we want people to use AI. We want people to test with it and we want it in our product. We want it making their day-to-day lives better.
Joel Meiners // Constant Contact
Director of Governance, Risk, and Compliance
Creating an AI governance policy doesn't mean you have to start from scratch. You can borrow the fundamentals from existing acceptable use policies, notably the types of cybersecurity and human risk to manage and mitigate. With generative AI usage for the average knowledge worker, we see these as the big 4:
First, using AI is like subscribing to a SaaS tool: the protections you believe are in place are only as strong as the licensing agreement you have with the vendor. This opens the door to supply-chain exposure and third-party risk.
Generative AI is nebulous in how it interacts with, processes, stores, and derives from the information put into it. Data is inextricably co-mingled across any number of sources. Even with obfuscation, there's no guarantee that sensitive information won't leak or be reproduced unpredictably.
This was recently demonstrated by security researchers when they successfully used ChatGPT and a flaw in OpenAI's connectors to dump data from Google Drive. If targeted by a nefarious actor, this could create insider threat exposure.
How well you have secured sensitive data, how well you've labeled it, how well you've restricted it. All those things come into play when you present data to a solution like a ChatGPT or a Claude.
Angelo Longo // Goliath Cyber Security Group
Cybersecurity & Technology Executive
AI and LLMs rely on trust. What are your people putting into it? What are they getting out of it? And how are they acting on the output? Each step introduces human-driven vulnerabilities. Next level: autonomous AI agents trained on people's behavior.
It all comes down to what you are sharing and with whom.
Angelo Longo // Goliath Cyber Security Group
Cybersecurity & Technology Executive
AI agents act on human instruction but lack the depth of human judgment. As they're granted more autonomy, behavioral risk scales with it. Without clear limits, a simple prompt can trigger real-world actions, often without visibility or review.
I would start with defining the scope, which would be most anything you can interact with, in addition to standard web-based functionality.
Angelo Longo // Goliath Cyber Security Group
Cybersecurity & Technology Executive
These risks are already active in everyday AI usage, often in invisible ways. That's why a strong AI governance policy must do more than block tools or write rules for compliance. It needs to translate risk awareness into clear, contextual guardrails that reflect how people actually use AI to get work done.
This starts by mapping what's being used (access), why it's being used (use cases), and how to guide that usage safely (behavior controls). Let's break that down.
Effective AI usage policy means understanding why people use generative AI and LLMs, and guiding how they use it securely. The modern approach requires reframing the conversation around not just access (today's answer), but intent and behavior.
The first step is scoping access. We know, this is pretty standard in today's governance workflow, but it's worth repeating: if security teams don't know who's using what, or how accounts are authenticated, they lose the ability to detect risk or apply policy controls. A strong AI usage policy should set baseline expectations for which tools employees can use and how they access them.
Access control actions to include:
Access alone doesn't tell the whole story. The real signal comes from understanding how employees intend to use AI in their workflows and the associated risk level. Are they summarizing customer calls? Drafting email copy? Analyzing product feedback? Offloading rote documentation tasks?
Clear policies specify these use cases explicitly. Not only does this guide safe experimentation, it also helps security, IT, and governance teams distinguish high-value innovation from high-risk behavior.
Rather than issuing blanket approvals or bans, map common AI tasks to:
The more specific your examples, the easier it is for your people to self-correct and make confident choices. This empowers secure innovation and compliance all in one motion.
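For teams that want to operationalize this, a use-case map like the one described above can even be encoded as data, so internal tooling (an intranet policy portal, a chatbot, an onboarding checklist) can surface the right guidance at the point of use. Here is a minimal sketch of that idea; every task name, risk tier, approved tool, and data rule below is a hypothetical example, not a recommendation:

```python
# Hypothetical sketch: representing an AI use-case policy as structured data
# so it can be queried programmatically. All entries are illustrative.
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass(frozen=True)
class UseCasePolicy:
    task: str               # plain-language description of the use case
    risk_tier: str          # "low", "medium", or "high"
    approved_tools: Tuple[str, ...]  # tools sanctioned for this task
    data_rules: str         # handling expectations in plain language


# Example policy map keyed by task identifier (hypothetical entries).
POLICY = {
    "draft_marketing_copy": UseCasePolicy(
        task="Drafting email or marketing copy",
        risk_tier="low",
        approved_tools=("Company-licensed LLM",),
        data_rules="Public information only; no customer data.",
    ),
    "summarize_customer_calls": UseCasePolicy(
        task="Summarizing customer calls",
        risk_tier="high",
        approved_tools=("Approved enterprise notetaker",),
        data_rules="Customer PII must be redacted before upload.",
    ),
}


def lookup(task_key: str) -> Optional[UseCasePolicy]:
    """Return the policy entry for a task, or None if it's unmapped.

    Unmapped tasks should default to "ask security first" rather than
    implicit approval.
    """
    return POLICY.get(task_key)
```

The design choice here mirrors the policy advice: explicit, specific use cases with clear risk tiers, and a deliberate default (unknown task means ask first) instead of a blanket yes or no.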
Once you understand what people are using and why, the final piece is behavioral guardrails: how to keep usage secure in the moment. This means guiding habits around data ownership, safeguarding sensitive information, and ensuring your workforce doesn't introduce vulnerabilities through their engagement with AI tooling.
This includes:
Remember: generative AI introduces human-amplified risk. In most cases, employees aren't intentionally trying to bypass security, so your policy should meet them in that moment of action. Carrying the mental load of cyber hygiene in a fast-moving, AI-powered workplace isn't easy, nor should it fall entirely on individuals.
A cognitive security platform like Maro can lighten that load by guiding employees in real time, reinforcing policy through well-timed notifications, and helping people make secure choices without slowing down their work.
AI usage is now a permanent part of our future. Instead of treating it as a threat to manage, organizations have the chance to design governance from the ground up. Security isn't just an add-on anymore; it's a strategic lens for how technology gets adopted, trusted, and scaled. And with generative and agentic AI in their infancy, security leaders have a unique opportunity to get ahead of risk before it spreads, and to lead AI onboarding with intention, clarity, and confidence.
Now is the time to empower your people, guide secure behavior, and make AI work with your security culture and not against it.
Stop flying blind with AI governance. Create a comprehensive, behavior-focused policy tailored to your organization in minutes, not months.