Cybersecurity Theater and the Wicked Problem of Human Risk
Jadon Montero
December 19, 2025
The Big Idea
- Human risk is a wicked problem, and it remains largely unsolved. Despite decades of investment, the dominant threats today mirror those of a decade ago.
- The cybersecurity industry repeatedly falls into the same traps when trying to address human risk: technology-first thinking, over-reliance on attacker telemetry, risk transference through compliance and insurance, and a growing pessimism that human behavior can't be meaningfully improved.
- Cybersecurity theater simulates progress without changing human risk outcomes. Awareness training, phishing simulations, and static policies may reduce clicks on paper, but they fail under pressure, erode trust, and leave real behavior largely unchanged.
- AI is amplifying both human capability and human vulnerability. Employees are adopting powerful tools faster than policies and controls can keep up, expanding the human attack surface in ways security teams can’t easily see. The time is now to address human risk.
Editor's note: A version of this article originally appeared on LinkedIn in 2024. We're reflecting a year later on the state of human risk and the progress we've made since Maro launched out of stealth. Enjoy! -Gwen
—
I got into cybersecurity with a simple belief: everyone has the right to live in safety from cybercrime, not unlike (physical) security of person. My decade-long career is a testament to this belief: I've spent it as an engineer turned product leader for bleeding-edge solutions, each of which had its time in the spotlight as our industry's panacea (CloudSec, SOAR, ASM, MDR).
What initially attracted me to cybersecurity was how wicked a problem it seemed, and how meaningful it promised to be to contribute to such a variegated, ever-changing landscape.
So, why is it that despite considerable improvements in technology and the promise of a $2 trillion market opportunity for vendors, the threats that dominate the security landscape of 2024 are eerily similar to those we faced more than a decade ago? Social engineering, bolstered by the manifold flavors of phishing and generative AI-backed subscription services, continues to wreak havoc on organizations and our friends and family alike.
In 2016, I worked on a security orchestration, automation, and response solution called Komand that later became Rapid7's platform automation engine. We thought at the time that its most important feature was to connect different security technologies in a (still) fragmented ecosystem. We were wrong! Our most successful use cases had very little to do with technology at all. They involved distributed alerts: essentially, having security team members work directly with employees at the moments when they were most vulnerable to clicking a phishing link (or opening a maldoc, or misconfiguring an S3 bucket, or...).
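To make the "distributed alerts" pattern concrete, here is a minimal sketch of what such a playbook might look like. Everything here is hypothetical and illustrative; these are not Komand or Rapid7 APIs, just the shape of the idea: when an alert fires, the automation reaches out to the affected employee at the moment of risk instead of only queueing a ticket for the SOC.

```python
# Hypothetical "distributed alert" playbook sketch: route a risky-moment
# alert directly to the affected employee, not just to the security queue.
# All names (Alert, route_alert, notify) are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Alert:
    employee_email: str
    kind: str    # e.g. "phishing_link", "maldoc", "s3_misconfig"
    detail: str


def route_alert(alert: Alert, notify) -> str:
    """Send a human-readable nudge to the employee via the supplied
    notify(email, message) callback, and return the message so the
    security team can track the conversation."""
    prompts = {
        "phishing_link": "You just received a link we think is a phish.",
        "maldoc": "That attachment looks malicious.",
        "s3_misconfig": "A storage bucket you changed is publicly readable.",
    }
    message = (
        f"{prompts.get(alert.kind, 'We spotted something risky.')} "
        f"({alert.detail}) Reply here and we'll sort it out together."
    )
    notify(alert.employee_email, message)
    return message
```

In practice the `notify` callback would be a chat or email integration; the point is that the playbook's output is a conversation with a person, not another row in a dashboard.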
It turned out that combining the "people" and "process" pieces of the "people, process, technology" framework was the most effective way to prevent data breaches.
This got me thinking... We've built all these layers of technology, put up our digital moats and established defensive perimeters, and even staffed the best and brightest in cybersecurity to keep watch on our behalf. Despite it all, the outcomes of managing human risk are largely unchanged with attackers walking through the front door: our employees.
What's worse is that every year, reports like Verizon's DBIR reiterate the same sobering statistic: more than 68% of breaches involve the human element. It's not because humans are "the weakest link," as the industry loves to proclaim, but because attackers have mastered the art of exploiting human strengths: our empathy, our helpfulness, our need to get things done. And let's be real: when a major breach happens due to an employee innocently opening a malicious email, it's not because that employee is incompetent. It's because the systems we've built have failed to consider their reality.
Their "reality" is spending 9+ hours a day in the digital hellscape that the internet has become: a place utterly devoid of privacy, where things are not as they seem and everyone is out to get you.
How did we get here? Why does this problem persist? More importantly, how can we illuminate a better way that unites security teams and the people they protect? Let's dig in.
What hasn't changed
We spent the last 3 months speaking with 40 CIOs and CISOs about why they think human risk remains perennially under-addressed. They told us that security teams get trapped by compliance-first cultures and binary thinking. Just a few examples...
The "tech solves" trap: We treat the attack surface like it's just a technical problem, patching holes with layers upon layers of solutions. Yet, here we are: still seeing catastrophic breaches, still questioning the ROI of those high-dollar tools. With mass adoption of passwordless authentication looming on the horizon, we should remind ourselves that there is no recourse in a situation where criminals influence users to validly log in. People are often the first and the last line of defense.
The "data solves" trap: Effective defense strategies require behavioral context from the workforce, not just attackers; think human intent, motives, and patterns. That's not easily extracted from oodles of telemetry or automated correlation alone. And sure, the industry has tried to solve this with promising solutions like security collaboration tools, user behavior analytics, data meshes, you name it. But these tools often underfocus on translating people's intention and working context.
The "risk transference" trap: Compliance checkboxes, insurance policies, and managed service providers can't save us either. These approaches might reduce liability on paper, but they are insufficient. Worse, this mindset pushes the problem downstream. The average person gets left in the dust. Case in point: the AT&T breach of 2024. Millions of people affected with their data splayed across the dark web, and for what? A lack of proactive, scalable protection for the everyday internet user.
The "pessimism" trap: Security leaders told us that when they got into the business, they dreamed of more than just checking boxes for the sake of compliance or insurance. They wanted to build exemplary security programs where they could nurture employees and teach them to properly evaluate business risk. They wanted to bridge the divide between how their workforce operated in the status quo, and how they knew the company would need to work more securely in the future. They wanted desperately to inject reality into the way that security is conducted right now.
It is too easy in our industry to succumb to a specific fatalism: that we can't control what matters (human risk), and instead we can simulate progress with spreadsheets and software.
You can invest in state-of-the-art tools, but one human error makes all that tech irrelevant. We've known this for decades, yet the problems persist.
Cybersecurity Engineer
Modern considerations
We also need to consider the current attack surface and its impact on how people actually spend time online. COVID-19 accelerated an inevitable shift: employees work from anywhere, often blending professional and personal devices, using tools and apps the security team has never vetted, on devices that can't be monitored. Attackers exploit this reality with increasingly sophisticated tools now loaded with generative AI, making social engineering attacks faster, cheaper, and more accessible.
If I'm an attacker, I will go where there are fewer controls, and right now that involves people's personal devices and unauthorized cloud-based solutions that they don't control.
Field CISO
The industry's response? Security theater at best and scare tactics at worst. Our validation partners feel their hands are tied, citing a few ways to combat this perpetual state of darkness:
- Annual training and check-the-box, ornamental quizzes
- Static policies hidden away in a folder somewhere
- Simulation exercises that train users not to click anything ever
The result? A phishing success rate down to 5%. Simulations, in particular, have been an effective addition to organizations' training programs. However, 5% is still unacceptable; 50 employees in 1,000 remaining prone to phishing attempts is far more than an attacker needs to gain entry. We've also found that email security awareness training doesn't translate to other interfaces: SMS, voice and video calls, and TikToks, just to name a few. SMS in particular has 7x the clickthrough rate of email. Attackers, with all their advantages and now armed with AI, remain poised to deceive and defraud at their discretion.
Furthermore, modern DevOps teams herald 99.9999% as the gold standard for availability, because even 99.9% uptime still means 8.76 hours of downtime per year. We should demand the same rigor of our security systems and solutions.
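The back-of-envelope math behind both figures is worth making explicit, since it's what turns a "5% click rate" headline into an attacker's-eye view of residual exposure. A quick sketch (my own illustration, not vendor data):

```python
# Rough arithmetic behind the residual-risk and availability figures above.
HOURS_PER_YEAR = 8760  # 365 days * 24 hours


def successful_phishes(click_rate: float, employees: int) -> int:
    """Expected number of employees who still click at a given rate."""
    return round(click_rate * employees)


def downtime_hours(availability_pct: float) -> float:
    """Annual downtime (in hours) implied by an availability percentage."""
    return (1 - availability_pct / 100) * HOURS_PER_YEAR


print(successful_phishes(0.05, 1000))  # a 5% rate leaves 50 of 1,000 exposed
print(downtime_hours(99.9))            # three nines: ~8.76 hours/year down
print(downtime_hours(99.9999))         # six nines: ~0.009 hours (~32 seconds)
```

The asymmetry is the point: an attacker only needs one of those 50 clicks, which is why "we cut clicks to 5%" is progress but not a finish line.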
Is it even human behavior? A catch!
The explosion of AI services and agents has added a whole new dimension to the challenge of protecting employees and organizations in the digital age. Their increasing ability to autonomously perform tasks, analyze data, and even make decisions on behalf of people creates a double-edged sword: on one hand, these agents are a productivity boon; on the other, their human-like behavior represents a new category of risk that organizations are struggling to understand and mitigate.
AI agents are stepping in, seamlessly mimicking human behavior in ways that blur the line between what's human and what isn't, and this trend is only set to grow. If we can't effectively monitor and guide human behavior, which AI agents train on, how can we ensure security throughout this next wave of innovation?
Many companies don't know what to do about it. Companies should at least understand how it's being used, what people are feeding it, and what the risk ultimately is. [It's] better being able to track it and decide on what to do about it than disabling it outright.
Director of InfoSec
Our validation partners tell us that while their organizations are still formalizing AI strategies and usage policies, employees are already off to the races, leveraging AI tools at an unprecedented scale. And while this brings undeniable benefits (developers, for instance, are enjoying a 26% increase in productivity), it also introduces uncertainty and risk. I'll be writing more about this in the future.
Suffice it to say, AI introduces even more complication to an already wicked problem.
I get it, securing behavior is hard... What can we do?
People are complex. Vendors and security teams shy away from tackling the "human element" because it's nuanced, dynamic, and often feels like trying to boil the ocean. People gonna people, right? Unlike locking down a port or patching a vulnerability, guiding safe operating habits takes collaboration with your entire workforce. And let's be honest, most employees don't wake up in the morning planning to cause a breach. They just want to do their jobs, help their colleagues, and meet their deadlines.
It's our responsibility as security leaders to make that experience as secure, seamless, and optimistic as possible. Now more than ever, this means acknowledging that security is about building a healthy culture of employee confidence first and foremost. The best security leaders know this, and bring unwavering optimism to their organizations in spite of the challenges ahead. They don't see their team as only the people that report to them, they see every employee as part of their flock to guide and nurture.
I want people to come talk to me, not see me as the Punisher. I want to understand why you are making the choices you are, and look for ways to help. I see myself as the Chief Teacher.
CIO
If the internet continues to devolve into a digital hellscape, how can we support these kinds of leaders to guide people to safety? That is the mission of our newest project, Maro. I'm joined by an incredible team to envision a brighter future; one where the optimistic security leader can address human risk head on, move beyond cybersecurity theater, and redefine the working relationship and privacy standards between employees and employers. We're backed by Downing Capital, whose commitment to the mission of each portfolio company means so much to me.
We've named the company Maro as a nod to Publius Vergilius Maro (Virgil), who guided Dante through the nine circles of hell by the light of reason.
Braving this new world requires a set of truisms to illuminate the way. Like many before us that proposed different ways of thinking, we too have a set of guiding principles. At Maro, we believe:
- People are collaborators, not liabilities: Employees are our first and last line of defense. Attackers exploit their strengths, so systems must be built to partner with people, rather than punish them for trying to do their jobs.
- Our digital lives will continue to blur: Security programs need to account for this reality. That means addressing risks across corporate and personal workloads without overreaching controls that make employees feel like Big Brother is watching.
- Education needs an optimistic refresh: Annual training is a checkbox, not a solution. Security education needs to be ongoing, context-specific, and woven into the fabric of daily work. It should be engaging, relevant, and designed to build confidence, not fear.
- Worker dignity must be respected: Draconian controls and constant surveillance erode trust and drive risky behaviors underground. Instead, we must collaborate with employees to create a culture where security is a shared mission.
- AI is a privacy-enabler: Many promising categories (such as UBA) failed because technology was not advanced enough to achieve their aspirations. In its stead, security teams collected as much user data as possible at the expense of privacy. AI will make it possible to detect advanced social engineering techniques locally, honor a user's privacy, and act as a personalized guide to navigating the web safely.
Human risk is not an unsolvable problem. The next set of challenges on the horizon might be daunting, but our belief that they can be overcome is unshakable.
What's Next: Exploring Maro and Cognitive Security
What is Cognitive Security?
Why a new premise for human risk management is the only way forward.
The Policy Enforcement Chasm
Security policies remain astonishingly hard to enforce across people's behaviors. Maro changes that.
November Demo Webinar
Learn how to get visibility into the top AI usage behaviors across your workforce in just 7 days.