Cognitive Security Blog

The Anatomy of an Effective AI Usage Policy

Written by Gwen Betts | Aug 19, 2025 9:45:06 PM

AI is here, and it's already rewiring how we work. People aren't waiting for the official green light; they're experimenting, interacting, and integrating generative AI tools and LLMs into their workflows right now. And like every disruptive technology before it, AI has become incredibly polarizing. 

Security leaders get the transformative opportunity AI promises, as well as the real-world risks that cannot be ignored. Despite lessons learned from SaaS and the shadow infrastructure of the past, AI presents a different set of challenges. With the rapid growth of ungoverned AI tooling, many security leaders find themselves scrambling to catch up with their people's fervent usage.

People are going out and signing up for ChatGPT, and if the company doesn't let them use their corporate email, they use their personal account. They'll start bringing in their AI notetakers and store the data in personal directories.

Joel Meiners // Constant Contact
Director of Governance, Risk, and Compliance

Unfortunately, many companies are flying blind to why and how AI is being used in the first place. Not knowing means IT and security teams can't guide its usage. This is where a clear, practical cybersecurity policy can guide people toward safer choices: one that is not just a list of technology "yays" and "nays" but contains concrete details on approved use cases and secure behaviors when engaging with AI.

A well-defined AI responsible use policy can spark innovation and make a secure culture a reality. A bad one is a policy in name only, security theater dressed up for a new era. For security, governance, and IT leaders, the path forward is clear: now is the moment to lead with intention and make AI a partner in progress. 

We spoke with GRC director Joel Meiners and cybersecurity executive Angelo Longo to explore governance best practices and how to craft AI policies that actually work, for both security teams and the people they're meant to guide. Let's dive in.

Why AI Governance Breaks Current Security Models

First, let's cover why AI usage governance is difficult in the first place. The traditional, top-down policy creation process (write a policy, roll it out, enforce) was built for an era defined by known software, clear access controls, and predictable systems. Generative AI is none of those things, especially with the introduction of agentic AI acting on behalf of people.


It's fast-moving, employee-led, and extremely context-sensitive (translated: needs lots of your data). And while compliance frameworks like NIST's AI Risk Management Framework and ISO/IEC 42001 offer a helpful starting point from a technical controls perspective, teams struggle to keep up with the speed and nuance of AI adoption in practice.

That's why most current policy frameworks are struggling to keep pace. In particular:

  • They assume centralized control. Policy is built on the premise that security and IT provision tools and set the rules. But with AI, adoption often happens outside that flow. Employees experiment first, often using personal emails to bypass corporate controls, and security teams lose all observability in the process.
  • They're static, not situational. Traditional policies don't easily flex with context. But AI tools are inherently dynamic; what's safe in one scenario might be risky in another. Flat policy can't cover all those permutations.
  • They focus only on tools, not behaviors. Blocking or approving an app ignores the bigger picture. The benefit of AI comes from legitimate business use; the risk comes from how people interact with it without behavioral guardrails.
  • They're written for audits, not humans. Many policies live in dense documents designed to satisfy regulators, not guide employees in the moment they need help.
  • They act after the fact. Policy enforcement is often lagging behind behavior. But with AI, the window for intervention is short. You have to be proactive, not reactive.

These breakpoints demonstrate the same root issue: traditional policy wasn't built to guide human behavior in a cognitively overloaded environment. AI tools demand decisions in real time, not after reading a policy PDF. We also can't expect people to memorize a handbook of dos and don'ts. What they need is guidance that meets them in the moment: clear expectations, well-scoped use cases, and timely behavior interventions that reduce ambiguity and support confident choices.

This is the role of cognitive security: anticipating the mental load, surfacing guidance when it counts, and shaping secure behaviors as they happen. And that's where thoughtful AI usage policy design begins.

We know the value of AI, and we want people to use AI. We want people to test with it and we want it in our product. We want it making their day-to-day lives better.

Joel Meiners // Constant Contact
Director of Governance, Risk, and Compliance

Generative AI Cybersecurity Risk Types

Creating an AI governance policy doesn't mean you have to start from scratch. You can borrow the fundamentals from existing acceptable use policies, notably the types of cybersecurity and human risk to manage and mitigate. For the average knowledge worker's generative AI usage, we see these as the big four:

Vendor Risk

First, using AI is like subscribing to a SaaS tool. The protections you believe are in place depend on the licensing agreement you have with the vendor. This opens the door to supply-chain exposure and third-party risk.


Model Risk and Data Entanglement

Generative AI is nebulous in how it ingests, processes, stores, and derives from the information put into it. Data is inextricably commingled across any number of different sources. Even with obfuscation, there's no guarantee that sensitive information won't leak or be reproduced unpredictably.

This was recently demonstrated by security researchers when they successfully used ChatGPT and a flaw in OpenAI's connectors to dump data from Google Drive. If targeted by a nefarious actor, this could create insider threat exposure.

How well you have secured sensitive data, how well you've labeled it, how well you've restricted it. All those things come into play when you present data to a solution like a ChatGPT or a Claude.

Angelo Longo // Goliath Cyber Security Group
Cybersecurity & Technology Executive

Behavioral Risk

AI and LLMs rely on trust. What are your people putting into them? What are they getting out? And how are they acting on the output? Each step introduces human-driven vulnerabilities. The next level up: autonomous AI agents trained on people's behavior.

It all comes down to what you are sharing and with whom.

Angelo Longo // Goliath Cyber Security Group
Cybersecurity & Technology Executive

Delegated Autonomy Risk

AI agents act on human instruction but lack the depth of human judgment. As they're granted more autonomy, behavioral risk scales with them. Without clear limits, a simple prompt can trigger real-world actions, often without visibility or review.

I would start with defining the scope, which would be most anything you can interact with, in addition to standard web-based functionality.

Angelo Longo // Goliath Cyber Security Group
Cybersecurity & Technology Executive

These risks are active already in everyday AI usage, and often in invisible ways. That's why a strong AI governance policy must do more than block tools or write rules for compliance. It needs to translate risk awareness into clear, contextual guardrails that reflect how people actually use AI to get work done.

This starts by mapping what's being used (access), why it's being used (use cases), and how to guide that usage safely (behavior controls). Let's break that down.

Designing an Effective AI Governance Policy

Effective AI usage policy means understanding why people use generative AI and LLMs, and guiding how they use it securely. The modern approach requires reframing the conversation around not just access (today's answer), but intent and behavior.

Access: AI and User Permissions

The first step is scoping access. We know, this is pretty standard in today's governance workflow, but it's worth repeating: if security teams don't know who's using what, or how accounts are authenticated, they lose the ability to detect risk or apply policy controls. A strong AI usage policy should set baseline expectations for which tools employees use and how they access them.

Access control actions to include:

      • Specify approved and unapproved apps: Clearly name which tools employees can safely use and which ones to avoid so they don't have to guess. This reduces ambiguity and helps guide smart decisions at the moment of action.
      • Mandate use of corporate accounts: Require employees to use company-issued credentials for any AI tools that are approved. This eliminates the visibility and data ownership gap created by personal email use.
      • Enforce SSO and MFA where possible: Integrate SSO and MFA wherever possible to centralize authentication, reduce password-related risk, and simplify access provisioning and deprovisioning. 
      • Designate AI account owners or champions: In larger teams, nominate accountable users who can track usage patterns, support compliant tool access, and act as liaisons between teams and security.
      • Extend access controls to AI agents and APIs: Agentic systems and API-driven automations, like AI-generated suggestions in LinkedIn or Slack, often bypass traditional oversight. Your policy should explicitly include these interfaces, especially if AI can act autonomously.

Use Cases: Usage Intention Codified

Access alone doesn't tell the whole story. The real signal comes from understanding how employees intend to use AI in their workflows and the associated risk level. Are they summarizing customer calls? Drafting email copy? Analyzing product feedback? Offloading rote documentation tasks?

Clear policies specify these use cases explicitly. Not only does this guide safe experimentation, it also helps security, IT, and governance teams distinguish high-value innovation from high-risk behavior.

Rather than issuing blanket approvals or bans, map common AI tasks to:

      • ✅ Approved: Encourage high-value tasks like drafting, summarizing, or brainstorming, especially when done with internal tools or redacted data.
      • ⚠️ Conditional: Allow with safeguards. For example, using AI for client communication may require anonymization or review before sending.
      • ❌ Prohibited: Explicitly restrict sensitive actions, such as entering regulated data, legal contracts, HR files, or source code into public AI tools.

The more specific your examples, the easier it is for your people to self-correct and make confident choices. This empowers secure innovation and compliance all in one motion.
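As a hypothetical illustration of that tiered mapping, the three categories above could be expressed as a simple lookup that returns guidance per task. The task names and conditions are invented examples, not a prescribed taxonomy:

```python
# Hypothetical sketch: mapping common AI tasks to the three policy tiers
# (approved / conditional / prohibited). All entries are illustrative.
USE_CASE_POLICY = {
    "brainstorming":        ("approved", None),
    "summarizing notes":    ("approved", None),
    "client communication": ("conditional", "anonymize and review before sending"),
    "entering source code": ("prohibited", "never paste source code into public tools"),
    "entering HR files":    ("prohibited", "regulated data stays out of AI tools"),
}

def guidance(task: str) -> str:
    """Return the tier (and any condition) for a task; default to asking security."""
    tier, condition = USE_CASE_POLICY.get(task, ("conditional", "ask security first"))
    return f"{task}: {tier}" + (f" ({condition})" if condition else "")

print(guidance("client communication"))
# client communication: conditional (anonymize and review before sending)
```

Note the default: an unlisted task falls back to "conditional, ask security first" rather than a silent block, which keeps the policy a guide rather than a wall.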

Behavior Controls: Secure Usage in Practice

Once you understand what people are using and why, the final piece is behavioral guardrails: how to keep usage secure in the moment. This means guiding habits around data ownership, safeguarding sensitive information, and ensuring your workforce doesn't introduce vulnerabilities through their engagement with AI tooling.

This includes:

      • Training data and AI models: Not all AI tools treat user input the same. Some retain prompts, use inputs to train future models, or route data through additional third-party services. To protect sensitive and proprietary content, consider adding a behavior control to turn off data sharing for model training if the option exists.
      • Data and information sharing protections: Define what types of information can and cannot be entered into AI tools. This includes PII, trade secrets, financial records, internal communications, or any data not explicitly cleared for external sharing. Industry-focused compliance frameworks can be a great starting point for data types to protect.
      • Behavior observability and policy interventions: Use real-time behavior controls to detect risky input/output patterns, flag suspicious prompts, and surface guidance before users make a high-risk decision.
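A toy version of that last control, screening a prompt for obviously sensitive patterns before it reaches an AI tool, could look like the sketch below. The patterns are deliberately simplified examples; a real behavior control would detect far more than three regexes:

```python
import re

# Hypothetical sketch of prompt-level behavior observability: flag obviously
# sensitive content before it is sent to an AI tool. Patterns are simplified.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "US SSN":        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit card":   re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

print(flag_prompt("Summarize the call with jane@acme.com, SSN 123-45-6789"))
# ['email address', 'US SSN']
```

The point of the intervention is the timing: flagging the prompt before submission is what turns a policy document into in-the-moment guidance.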

Remember: generative AI introduces human-amplified risk. In most cases, employees aren't intentionally trying to bypass security, so your policy should meet them in that moment of action. Carrying the mental load of cyber hygiene in a fast-moving, AI-powered workplace isn't easy, nor should it fall entirely on individuals.

A cognitive security platform like Maro can lighten that load by guiding employees in real time, reinforcing policy through well-timed notifications, and helping people make secure choices without slowing down their work.

AI Usage Governance: A Strategic Opportunity for Security Leaders

AI usage is now a permanent part of our future. Instead of treating it as a threat to manage, organizations have the chance to design governance from the ground up. Security isn't just an add-on anymore; it's a strategic lens for how technology gets adopted, trusted, and scaled. And with generative and agentic AI still in their infancy, security leaders have a unique opportunity to get ahead of risk before it spreads, and to lead AI onboarding with intention, clarity, and confidence.

Now is the time to empower your people, guide secure behavior, and make AI work with your security culture and not against it.

Free AI Usage Policy Builder

Ready to Build a Behavior-First AI Usage Policy?

Stop flying blind with AI governance. Create a comprehensive, behavior-focused policy tailored to your organization in minutes, not months.

  • Comprehensive behavior guidelines
  • Tailored to your organization's context
  • Generates in real time as you build
  • Covers actions, apps, and use cases

Create Your AI Usage Policy Now

Build in minutes • Download instantly • 100% free