Cognitive Security Blog

Rethinking Human Risk Starts with Guiding Behaviors

Written by Jadon Montero | Feb 14, 2025 6:08:37 PM

Technology’s relentless evolution continues to redefine the cybersecurity landscape, presenting both opportunities and risks. Generative artificial intelligence (AI), once an expensive luxury, is now democratized and empowering adversaries as much as it enables defenders. Yet despite the sophistication of these modern solutions, human factors are consistently the most exploited vulnerability. What's worse: a growing cynicism toward people and their security behaviors (or lack thereof), colloquially known as the human element, and a feeling that this is an unsolvable problem.

Straight from Verizon's 2024 DBIR: the human element plays a role in 68% of all breaches, a figure that outstrips ransomware, vulnerability exploitation, third-party risk, and many other attack vectors. In response, the industry leans heavily on compliance-driven security awareness training and a limited set of technical controls. These approaches, while well-intentioned, often fail to address the underlying behavioral and systemic issues at play.

I shared my point of view on Maro’s recently launched LinkedIn account, and the tl;dr is this: our industry needs to shift its mindset toward engagement with the people it’s meant to protect and focus on building secure working behaviors in an optimistic, privacy-first way.


The next paradigm shift is here: post-COVID sprawl and generative AI

COVID-19 fundamentally altered the cybersecurity landscape. The rapid shift to remote work brought unsanctioned devices, unregulated browsers, and unrecognized applications into corporate systems. This drove an increase in security spend to facilitate digital transformation, in some cases tripling budgets. Organizations invested heavily in remote-first technology to ensure unencumbered productivity across the workforce. As business operations evolved, so too did worker expectations.

People became accustomed to working across corporate-owned and personal devices, paving the way for exploitable gaps. Thankfully the pandemic receded, but so too did budgets and hiring, leaving many businesses with unmanaged digital sprawl that lingered long after restrictions lifted. The kicker? Security teams must still secure it all, and now with fewer resources.

The COVID-19 shift also brought an end to the traditional network perimeter. Organizations have turned to technology-based solutions like multi-factor authentication (MFA), zero trust network access (ZTNA), security service edge (SSE), and endpoint detection and response (EDR) to create the modern perimeter as we know it today. Yet despite these investments, the outcomes of cyber human risk remain largely unchanged.

Generative AI, for example, has made social engineering incredibly cheap and accessible. Where reconnaissance, pretexting, phishing, extortion, or baiting a knowledge worker was once costly, sophisticated enterprise-grade spear phishing is now standard. In short: attackers aren’t becoming smarter; intelligence is simply built into their tooling. And they wield more power than ever to manipulate people’s behaviors to their whims.

Today’s cybersecurity tech stack isn’t equipped to guide secure operating behaviors at the same rate attackers are manipulating behaviors for their gain. You see it in prominent attacks on industry giants like WPP, when a deepfake of its CEO was used in an attempt to defraud employees, or when social engineers impersonated the Department of Labor in a campaign targeting Microsoft Office 365 credentials.

So, what can we do about this?

Breaking free from the security status quo

When looking at the cybersecurity landscape, organizations typically adopt two approaches to protect themselves: compliance frameworks (which include security awareness training) and layer-based technology. While both provide levels of defense, they continue to demonstrate significant gaps in the face of human risk factors and social engineering.

First: compliance and training approaches aim to provide control-based best practices that mitigate damage from potential attacks while building awareness through pedagogical-style education, all within a set of regulatory parameters. These measures are often generalized and inflexible, and aren't typically tailored to the reality of an organization's threat model or its workforce's operating habits.

The focus on “gotcha” initiatives like phishing simulations frequently assigns blame and leads to fatigue, leaving employees embarrassed and their confidence undermined. And when approached from a box-ticking perspective, this sets a negative baseline rather than a best practice. These organizations get stuck in digital purgatory.

The second approach, layer-based technologies, applies technical security enforcement to one or many of the internet abstraction layers. These are familiar solutions: network firewalls, virtual private networks, email filtering, EDR and anti-virus, the list goes on. While solutions that impose technical controls are absolutely necessary, we find ourselves in an era that may focus too heavily on technology while sidelining the root causes of unsecure human behavior.

Let's take a recent breach as an example: MGM Resorts’ $13 billion operation couldn't protect the business when attackers apparently pulled employee information from LinkedIn and used it to socially engineer their way into private systems. The attackers leveraged emotional techniques to deceive a customer support agent who was just trying to be helpful and do their job. Layered technology doesn't stand a chance when a well-meaning employee unlocks the door for an interloper tugging at their heartstrings.

Effective cyber human risk defense merges tailored control enforcement with people-focused measures and multi-layered technological safeguards.

Furthermore, weaving educational moments into the act of defending people against digital deception conditions behaviors into habits, fostering resilience as an enduring strength.

What this looks like in practice:

  • Governing actual secure behaviors as part of your policy set. Security policies are the first step toward defining secure operating habits, but their conditions often aren't framed around actual behaviors. Rethink how you're expressing security behaviors, not just static "dos/don'ts". Even better? Ensure you can observe and meaningfully measure posture improvements (see the sketch after this list).
  • Considering cognitive-based cyber risk factors. Most security models are ill-equipped to handle the psychological and cognitive biases that influence decision-making in cyber risk scenarios. Factors such as fatigue, stress, cognitive overload, and social engineering susceptibility should be actively accounted for in education programs and policies.
  • Moving away from punitive measures toward positive reinforcement. Instead of using simulations and spreadsheets to assign blame, organizations can create environments where the workforce is rewarded for their participation in security efforts. Taking it to the next level: gamification principles can be a powerful tool to motivate and reinforce secure behaviors by making learning engaging, interactive, and rewarding.
  • Leveraging generative AI for protection, adherence, and intervention. AI is already being weaponized by attackers to craft highly personalized, scalable social engineering campaigns. It can also be used to proactively detect and guide secure user behaviors. And when combined with behavioral and cognitive insights, organizations can deploy interventions that help people recognize and mitigate cognition-based risks in real time, strengthening overall resilience.
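
To make the first and last points above concrete, here's a minimal sketch of what "behavior as policy" plus a positive, just-in-time nudge could look like. It's illustrative only, written in Python with hypothetical policy names, signals, and messages; it doesn't describe Maro's product or any specific vendor's API.

    from dataclasses import dataclass

    # Hypothetical "behavior as policy" rules: each one names the secure habit
    # being encouraged, the observable signal used to measure it, and the
    # supportive nudge shown in the moment (coaching, not blame).
    @dataclass
    class BehaviorPolicy:
        behavior: str   # the secure operating habit
        signal: str     # the observable event that indicates drift
        nudge: str      # the just-in-time coaching message

    POLICIES = [
        BehaviorPolicy(
            behavior="verify unexpected payment or credential requests out-of-band",
            signal="reply_to_unverified_external_request",
            nudge="This request came from an unverified sender. Pausing here is the "
                  "right instinct; confirm through a known channel before acting.",
        ),
        BehaviorPolicy(
            behavior="enter credentials only on recognized sign-in pages",
            signal="credential_entry_on_unrecognized_domain",
            nudge="This sign-in page isn't one we recognize. Taking a second look "
                  "now is exactly the habit that stops phishing.",
        ),
    ]

    def coach(signal: str) -> str | None:
        """Return a supportive nudge if the observed signal matches a policy."""
        for policy in POLICIES:
            if signal == policy.signal:
                return policy.nudge
        return None  # nothing risky observed; don't interrupt the person

    # Example: someone is about to submit credentials on an unknown domain.
    message = coach("credential_entry_on_unrecognized_domain")
    if message:
        print(message)

The framing matters more than the code: behaviors are expressed as observable signals you can measure over time, and the response to a risky moment is coaching rather than a write-up.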


The future of human risk: resilience rooted in behavior guidance and cognitive defense

Digital safety is about building an organizational culture where security is instinctive. The industry’s prevailing approach has focused too long on fixing people instead of equipping them, framing the human element as a liability rather than a force multiplier.

This next evolution of cybersecurity means shifting to a proactive, behavior-oriented approach. It’s not just about creating better defenses; it’s about ensuring that people—your first and last line of defense—are prepared, engaged, and empowered.

At Maro, we're building a cognitive security platform with this philosophy at the core. We believe digital safety involves safeguarding the human experience from unsecure practices and social engineering. We envision a security-conscious future where people can recognize deceit, resist manipulation, and take control of their digital worlds. Where protection becomes second nature and online vigilance is a lasting skill.

Next time, we'll break down exactly what cognitive security is and how it will revolutionize human risk management as we know it. Stay tuned!



💌 Don't forget to subscribe to our newsletter to follow the Maro journey!