Cyber Leadership in the Age of AI Coworkers

This is Cyber Leadership in the Age of AI Coworkers, part of the Wednesday “Headline” feature from Bare Metal Cyber Magazine, developed by Bare Metal Cyber. Picture the first time your engineering lead forwards you a pull request that shows a human name as the committer, but the description casually mentions it was drafted by a coding assistant. By the tenth time that happens, when you see the same assistant touching authentication flows, incident runbooks, and configuration scripts, it becomes clear that you are not just dealing with fancy autocomplete anymore. You are dealing with artificial intelligence (A I) systems that behave like coworkers, proposing design choices, editing security controls, and shaping how people think about risk, while still being treated as harmless tools in your governance model.

Over the next few years, most organizations will live in a strange middle ground where humans still own the outcomes, but A I copilots, agents, and chat-based helpers sit in the middle of code, tickets, logs, and board narratives. The tricky part is that your control frameworks still assume tools, while your staff increasingly experience these systems as collaborators. That mismatch quietly breaks assumptions about identity, accountability, and assurance. If you do not name and design for A I coworkers as first-class actors in your environment, they will slip through gaps you did not know you had, and you will discover that your logs and policies are telling a simpler story than the one regulators, customers, and your own teams are actually living.

Across all of those workflows, A I is not just providing answers; it is providing defaults. It gives people a starting point that is fluent, plausible, and surprisingly hard to argue with when time is short. The human role shifts from original author to editor, and in a lot of real situations the editing is light. That is how A I becomes a coworker in practice. It is always available, never tired, deeply entangled with how work gets done, but it has no intuition, no context beyond what it can infer from data, and no skin in the game when something goes wrong. As a leader, if you keep talking about this as a simple tool, your controls will fixate on licensing, vendor risk, and maybe some data boundaries. Once you acknowledge it is acting like part of the team, you start asking different questions about delegation, review, and ownership.

The most important of those questions is about identity. Right now, many organizations let A I operate by impersonating a human. The assistant lives inside a developer’s session or a responder’s console, using that person’s permissions to pull data and trigger actions. It feels simple, and it avoids a lot of set-up work, but it creates a story in your logs and approvals that is not true. When you look back at a risky change or a questionable action, every trace points to the human account, even if the person simply accepted what the assistant suggested. That makes it harder to reconstruct what actually happened, and it also makes conversations with auditors and regulators more fragile, because you are pretending a human acted where an automated coworker did most of the work.

A more honest model is to treat these systems as identities in their own right and give them something like a badge. In identity and access management (I A M) terms, that means representing copilots, agents, and workflow bots as non-human identities or application accounts with explicit roles and scopes. Rather than letting a copilot roam under the full access of a senior engineer, you define which systems it can see, what data it can read, which actions it can propose, and where it is strictly read-only. You might allow it to draft configuration changes but require a clearly logged human approval before anything touches production. That level of clarity makes risk appetite conversations much more concrete, because you are no longer debating abstractions; you are deciding how powerful a specific A I badge should be.
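
To make that concrete, here is a minimal sketch of what an A I coworker’s badge might look like if you wrote it down as a scoped identity. The field names and scope strings are hypothetical, invented for this illustration; a real deployment would express the same idea in whatever policy language your identity platform supports.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AIBadge:
    """A hypothetical, explicit identity for an A I coworker.

    Illustrative only: the field names and scope strings are invented
    for this example, not taken from any identity platform's API.
    """
    identity_id: str                      # e.g. "svc-copilot-platform-eng"
    owner: str                            # the human team accountable for this badge
    read_scopes: tuple = ()               # systems and data the assistant may read
    propose_scopes: tuple = ()            # changes it may draft but never apply itself
    write_scopes: tuple = ()              # actions it may take directly (ideally empty)
    requires_human_approval: bool = True  # production changes need a logged sign-off

# Example: a coding copilot that can read internal repos and draft configuration
# changes, but cannot apply anything to production on its own.
copilot_badge = AIBadge(
    identity_id="svc-copilot-platform-eng",
    owner="platform-engineering",
    read_scopes=("git:internal-repos", "ci:build-logs"),
    propose_scopes=("config:staging", "config:production"),
    write_scopes=(),
)
```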

Once you go down that path, familiar lifecycle questions appear in a new light. How do these A I identities go through joiner, mover, and leaver events as you add new copilots or retire old agents? Who signs off when a new A I coworker needs access to customer data or production logs, and who is responsible for reviewing that access every quarter? What happens when you change vendors or models but forget to remove old credentials or roles that are still valid somewhere in your cloud environment? These are the same failure patterns that haunt traditional service accounts today, but now attached to systems that can generate and act on their own plans. Leaders who bring A I coworkers into their existing entitlement reviews and certification rhythms will be far better prepared when regulators start asking how they control non-human decision makers.
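
As a rough sketch of what folding A I identities into those rhythms could look like, the example below, with entirely hypothetical identifiers and dates, flags non-human identities that have lost their accountable owner or missed a quarterly recertification.

```python
from datetime import date, timedelta

# A hypothetical inventory of non-human identities, shaped the way you might
# export it from an identity governance tool for a quarterly review.
ai_identities = [
    {"id": "svc-copilot-platform-eng", "owner": "platform-engineering",
     "last_certified": date(2025, 1, 15), "status": "active"},
    {"id": "svc-agent-ir-triage", "owner": None,  # sponsor left; never reassigned
     "last_certified": date(2024, 6, 1), "status": "active"},
]

REVIEW_INTERVAL = timedelta(days=90)  # quarterly recertification

def needs_attention(identity, today):
    """Flag A I identities with no accountable owner or an overdue review."""
    overdue = today - identity["last_certified"] > REVIEW_INTERVAL
    orphaned = identity["owner"] is None
    return identity["status"] == "active" and (overdue or orphaned)

for identity in ai_identities:
    if needs_attention(identity, date(2025, 4, 1)):
        print(f"Review required: {identity['id']} (owner: {identity['owner']})")
```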

Even with clean identities and badges, a lot of the risk comes from human drift. People adapt quickly to anything that saves time, especially when they are under pressure. A coding assistant that began as a way to avoid repetitive boilerplate starts writing critical authentication flows. An operations copilot that once suggested log queries now proposes entire remediation sequences for incidents. A risk team that originally used an assistant to summarize background material starts copying whole sections of generated narrative into board reports with only minor tweaks. Each individual decision feels reasonable in the moment. It looks like smart use of available tools, not like bypassing controls.

Over time, those reasonable decisions accumulate into a quiet erosion of control. Engineers stop double-checking A I-generated code unless tests fail dramatically. Incident commanders accept suggested containment steps because the language sounds confident and past suggestions mostly worked, even if the underlying assumptions are not challenged. Governance teams allow A I to edit risk language in ways that unintentionally downplay uncertainty or gloss over known gaps. On paper, humans remain in the loop, but in practice their scrutiny thins out as they become reviewers of fluent output instead of owners of the underlying reasoning. That is how over-delegation creeps in: not through one catastrophic decision, but through hundreds of minor handoffs where the machine’s judgment slowly replaces human curiosity.

Countering that kind of drift is not a matter of one more awareness session. It requires structure. You can introduce meaningful friction at critical points without shutting down the benefits. For example, you can require explicit secondary approval for high-impact actions suggested or executed by A I coworkers, such as changes to access controls, customer-facing configurations, or critical runbook steps. You can design review workflows where humans have to confirm key assumptions in plain language rather than just approving a block of generated text. You can adjust performance metrics so that speed is not the only thing that gets rewarded: in an A I-augmented environment, quality of reasoning, documented thought process, and willingness to challenge assistant output should count as positive signals, not slowdowns to be punished.
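
One way to picture that kind of friction is a simple approval gate. The sketch below is an illustration built on assumptions, not a reference to any specific workflow tool: the change categories and the two-approver rule are invented, but they capture the idea that high-impact, A I-proposed changes should require a second, clearly logged human sign-off.

```python
# A minimal sketch of an approval gate for A I-suggested changes. The change
# categories and the two-approver rule are illustrative assumptions, not the
# behavior of any particular workflow tool.

HIGH_IMPACT = {"access-control", "customer-facing-config", "critical-runbook-step"}

def can_apply(change):
    """Allow a change only when it carries the required human sign-offs."""
    approvers = set(change.get("human_approvals", []))
    if change.get("proposed_by_ai", False) and change["category"] in HIGH_IMPACT:
        # High-impact, A I-proposed changes need two distinct human approvers,
        # each confirming the key assumptions in plain language.
        return len(approvers) >= 2
    # Everything else still needs at least one human approval.
    return len(approvers) >= 1

change = {
    "category": "access-control",
    "proposed_by_ai": True,
    "human_approvals": ["a.ramirez"],  # only one reviewer so far
}
print(can_apply(change))  # False: a second human must confirm before it ships
```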

From there, real guardrails start to look like architecture, not just policy. If an A I coworker can see whatever a human can see, propose any change that a human might propose, and write to any system a human can touch, no amount of policy language will save you. Guardrails that actually bite begin with shrinking the blast radius. You decide where A I is allowed to operate and where it is not. That means defining and enforcing data boundaries so that assistants work from curated, well-understood corpora instead of roaming freely across every production dataset and archive. It also means drawing a sharp line between internal A I that never leaves your environment and experimental use that connects to external services and must be walled off from your crown jewels.
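
A small sketch can show what such a boundary looks like when it lives in code rather than only in a policy document. The dataset names and tiers here are invented for the example; the point is that the rule is enforced at the point of access, not merely written down.

```python
# An illustrative data-boundary check: assistants read only from curated
# corpora, and anything wired to external services is walled off from the
# most sensitive stores. The dataset names and tiers are hypothetical.

CURATED_CORPORA = {"eng-docs", "sanitized-incident-summaries", "public-kb"}
CROWN_JEWELS = {"customer-pii", "payment-records", "secrets-vault-exports"}

def may_read(assistant, dataset):
    """Decide whether a given assistant identity may read a dataset."""
    if dataset in CROWN_JEWELS:
        return False  # no assistant reads crown-jewel data directly
    if assistant["external_connectivity"]:
        # Experimental assistants that call external services are confined
        # to the curated, well-understood corpora.
        return dataset in CURATED_CORPORA
    return dataset in assistant["approved_datasets"]

internal_copilot = {"external_connectivity": False,
                    "approved_datasets": {"eng-docs", "ci-build-logs"}}
print(may_read(internal_copilot, "customer-pii"))   # False
print(may_read(internal_copilot, "ci-build-logs"))  # True
```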

All of this only works if culture comes along. Technology can move in months, but culture tends to move in years. If you introduce A I coworkers as a cool new capability without reshaping expectations, you will get pockets of experimentation, conflicting norms, and a lot of unowned risk. The message from leadership has to be that these systems are powerful, fallible, and part of how the organization now operates. That means executives talking about A I coworkers in the same conversations where they talk about teams, controls, and accountability, not treating them as side projects that sit outside of normal governance.

One practical way to steer culture is to change how you frame A I in discussions with your teams. Instead of asking where A I can make things more efficient, you start by asking which decisions and workflows you are willing to augment, under what conditions, and with what safeguards. That framing invites people to map out decision points, think about failure modes, and identify places where A I has no role beyond research or background analysis. It also legitimizes boundaries. There will be areas such as regulator responses, sensitive customer communications, or high-stakes negotiations where you decide the first word must always come from a human. When staff see that leadership is intentional about both use and non-use, trust tends to increase.

Incentives and role design carry just as much weight as speeches. If engineers only ever hear praise for speed, an A I that accelerates shipping will win every argument no matter how much risk it introduces. If incident responders are measured solely on time to resolve, a copilot that aggressively closes tickets will look like a star even if it leaves real issues unaddressed. To support healthy use of A I coworkers, leaders need to redefine what “good” looks like. Careful, documented use of A I, explicit recording of human reasoning, and healthy skepticism should show up in performance conversations and promotion criteria. Stories matter here too. When you share examples inside the organization, do not only celebrate the spectacular productivity gains. Highlight the teams that used A I responsibly, caught subtle errors, or pushed back when an assistant overstepped. Those stories teach people that the goal is not blind adoption or blanket rejection, but thoughtful integration.

When you step back, the core of the problem becomes clearer. This is not just a technology adoption story; it is a question of agency. How much power are you willing to give systems that look like coworkers but cannot be accountable in the way people can? At one extreme, you treat A I coworkers as trivial helpers and ignore the fact that they are now embedded in crucial workflows. At the other extreme, you give them broad access and decision authority because they are effective, and hope that the invisible risks never crystallize into a crisis. In between those poles lies the real leadership work: deciding where A I fits, what it can do, how its identity is expressed, and how humans stay in charge of the reasoning behind critical decisions.

Leaders who approach A I coworkers with that mindset will make some distinct moves. They will insist on clean identities and badges with scoped access, so that non-human actors are visible in the same way as human ones. They will rework key workflows so that people remain owners of judgment, not just proofreaders of fluent text. They will build guardrails into systems and code rather than relying on policy documents alone. They will cultivate a culture where A I is treated as a serious capability that demands craft, skepticism, and ongoing stewardship rather than a magical solution to every problem. In doing so, they will make board conversations about A I risk more concrete, regulator discussions more credible, and internal debates less about hype and more about design.

The next move does not have to be a massive program. It can simply be a better conversation with your own leadership team. Ask where A I is already acting like a coworker today and who is genuinely accountable for the decisions it influences. Ask your architects, your security leads, and your risk partners what it would take to give these A I coworkers real badges, with clear scopes, guardrails, and monitoring you would be comfortable explaining outside the building. The answers will tell you whether you are leading the age of A I coworkers in your organization, or whether that age has already arrived and is quietly leading you.
