Generative AI: Cybersecurity Weapon, But Not Without Adaptable, Creative (Human) Thinkers

Generative AI was — not surprisingly — the conversational coin of the realm at Black Hat 2023, with various panels and keynotes mulling the extent to which AI can replace or bolster humans in security operations.

Kayne McGladrey, IEEE Fellow and cybersecurity veteran with more than 25 years of experience, asserts that the human element — particularly people with diverse interests, backgrounds and talents — is irreplaceable in cybersecurity. Briefly an aspiring actor, McGladrey sees opportunities not just for techies but for creative people to fill some of the many vacant seats in security operations around the world.

Why? People from non-computer science backgrounds might see a completely different set of pictures in the cybersecurity clouds.

McGladrey, Field CISO for security and risk management firm Hyperproof and spokesperson for the IEEE Public Visibility initiative, spoke to TechRepublic at Black Hat about how cybersecurity should evolve with generative AI.

Are we still in the “ad hoc” stage of cybersecurity?

Karl Greenberg: Jeff Moss (founder of Black Hat) and Maria Markstedter (Azeria Labs founder and chief executive officer) spoke during the keynote on the increasing demand for security researchers who know how to handle generative AI models. How do you think AI will affect cybersecurity job prospects, especially at tier 1 (entry level)?

Kayne McGladrey: We’ve been talking about this for the past three, four, five years now, so it’s not a new problem. We’re still very much in that hype cycle of optimism around the potential of artificial intelligence.

Karl Greenberg: Including how it will replace entry-level security positions or a lot of those functions?

Kayne McGladrey: The companies that are looking at using AI to reduce the total number of employees they have doing cybersecurity? That’s unlikely. And the reason I say that has nothing to do with faults in artificial intelligence, in individuals or in organizational design. It has to do with economics.

Ultimately, threat actors — whether nation-state sponsored, sanctioned or operated, or a criminal group — have an economic incentive to develop new and innovative ways to conduct cyberattacks to generate profit. That innovation cycle, along with diversity in their supply chain, is going to keep people in cybersecurity jobs, provided they’re willing to adapt quickly to new forms of engagement.

Karl Greenberg: Because AI can’t keep pace with the constant change in tactics and technology?

Kayne McGladrey: Think about it this way: If you have a homeowner’s policy or a car policy or a fire policy, the actuaries of those (insurance) companies know how many different types of car crashes there are or how many different types of house fires there are. We’ve had this voluminous amount of human experience and data to show everything we can possibly do to cause a given outcome, but in cybersecurity, we don’t.

SEE: Used correctly, generative AI is a boon for cybersecurity (TechRepublic)

A lot of us may mistakenly believe that after 25 or 50 years of data we’ve got a good corpus, but we are at the tip of it, unfortunately, in terms of the ways a company can lose data or have it processed improperly or have it stolen or misused against them. I can’t help but think we’re still sort of at the ad hoc phase right now. We’re going to need to continuously adapt the tools that we have with the people we have in order to face the threats and risks that businesses and society continue to face.

Will AI support or supplant the entry-tier SOC analysts?

Karl Greenberg: Will tier-one security analyst jobs be supplanted by machines? To what extent will generative AI tools make it more difficult for new analysts to gain experience if a machine is doing many of these tasks for them through a natural language interface?

Kayne McGladrey: Machines are key to formatting data correctly as much as anything. I don’t think we’ll get rid of the SOC (security operations center) tier 1 career track entirely, but I think that the expectation of what they do for a living is going to actually improve. Right now, the SOC analyst, day one, they’ve got a checklist – it’s very routine. They have to hunt down every false flag, every red flag, hoping to find that needle in a haystack. And it’s impossible. The ocean washes over their desk every day, and they drown every day. Nobody wants that.

Karl Greenberg: … all of the potential phishing emails, telemetry…

Kayne McGladrey: Exactly, and they have to investigate all of them manually. I think the promise of AI is to be able to categorize, to take telemetry from other signals, and to understand what might actually be worth looking at by a human.
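To make that categorization idea concrete, here is a minimal Python sketch of score-based alert triage. The alert fields, weights and sample data are illustrative assumptions for this article, not a description of any vendor’s SOC tooling or of McGladrey’s own approach.

```python
# A minimal sketch of score-based alert triage, assuming hypothetical alert
# fields and weights for illustration; real SOC tooling is far richer.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str          # e.g., "email_gateway", "edr", "netflow"
    severity: int        # 1 (low) to 5 (critical), as reported by the tool
    corroborations: int  # how many other telemetry sources saw related activity

def triage_score(alert: Alert) -> float:
    # Corroborated, high-severity signals float to the top of the analyst's
    # queue; isolated low-severity noise sinks to the bottom.
    return alert.severity * (1 + alert.corroborations)

alerts = [
    Alert("email_gateway", severity=2, corroborations=0),  # probably noise
    Alert("edr", severity=4, corroborations=3),            # signals agree
    Alert("netflow", severity=3, corroborations=1),
]

# Hand the analyst a ranked short list instead of the whole ocean.
for a in sorted(alerts, key=triage_score, reverse=True):
    print(f"{a.source}: score {triage_score(a)}")
```

Even a toy scorer like this captures the shift McGladrey describes: the machine does the routine ranking, and the human starts at the top of a short list.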

Right now, the best strategy some threat actors can take is called tarpitting, where if you know you are going to be engaging adversarially with an organization, you will engage on multiple threat vectors concurrently. And so, if the company doesn’t have enough resources, they’ll think they’re dealing with a phishing attack, not that they’re dealing with a malware attack and actually someone’s exfiltrating data. Because it’s a tarpit, the attacker is sucking up all the resources and forcing the victim to overcommit to one incident rather than focusing on the real incident.

A boon for SOCs when the tar hits the fan

Karl Greenberg: You’re saying that this kind of attack is too big for a SOC team in terms of being able to understand it? Can generative AI tools in SOCs reduce the effectiveness of tarpitting?

Kayne McGladrey: From the blue team’s perspective, it’s the worst day ever because they’re dealing with all these potential incidents and they can’t see the larger narrative that’s happening. That’s a very effective adversarial strategy, and no, you can’t hire your way out of it unless you’re a government, and even then you’re going to have a hard time. That’s where we really do need the ability to get scale and efficiency through the application of artificial intelligence: looking at the training data, matching it to potential threats and giving it to humans so they can run with it before committing resources inappropriately.
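As a rough illustration of how correlation might surface that larger narrative, the Python sketch below groups incidents that share attacker infrastructure within a fixed time window. The grouping key, the one-hour buckets and the sample records are all assumptions made for this example; a production correlator would use sliding windows and far richer features.

```python
# Illustrative sketch of campaign correlation to counter tarpitting; the
# grouping key (shared source IP within an hour) is an assumed heuristic.
from collections import defaultdict
from datetime import datetime, timedelta

# (timestamp, attack vector, source IP) tuples standing in for incident records
incidents = [
    (datetime(2023, 8, 9, 9, 0),  "phishing",     "203.0.113.7"),
    (datetime(2023, 8, 9, 9, 5),  "malware",      "203.0.113.7"),
    (datetime(2023, 8, 9, 9, 12), "exfiltration", "203.0.113.7"),
    (datetime(2023, 8, 9, 14, 0), "phishing",     "198.51.100.2"),
]

WINDOW = timedelta(hours=1)

# Bucket incidents that share infrastructure and land close together in time,
# so three "separate" alerts collapse into one multi-vector campaign.
campaigns = defaultdict(list)
for ts, vector, ip in sorted(incidents):
    bucket = (ip, int(ts.timestamp() // WINDOW.total_seconds()))
    campaigns[bucket].append(vector)

for (ip, _), vectors in campaigns.items():
    label = "multi-vector campaign" if len(vectors) > 1 else "isolated incident"
    print(f"{ip}: {label} -> {vectors}")
```

The point of the grouping is exactly the one McGladrey makes: three alerts that look like separate incidents are handed to the human as one story, before the team overcommits to the decoy.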

Looking outside the tech box for cybersecurity talent

Karl Greenberg: Shifting gears, I ask this because others have made this point: If you were hiring new talent for cybersecurity positions today, would you consider someone with, say, a liberal arts background vs. computer science?

Kayne McGladrey: Goodness, yes. At this point, I think that companies that aren’t looking outside of traditional job backgrounds — for either IT or cybersecurity — are doing themselves a disservice. Why do we get this perceived hiring gap of up to three million people? Because the bar is set too high at HR. One of my favorite threat analysts I’ve ever worked with over the years was a concert violinist. Totally different way of approaching malware cases.

Karl Greenberg: Are you saying that traditional computer science or tech-background candidates aren’t creative enough?

Kayne McGladrey: It’s that a lot of us have very similar life experiences. Consequently, smart threat actors, particularly the nation-states doing this at scale, recognize that this socio-economic populace has these blind spots and will exploit them. Too many of us think almost the same way, which makes it very easy to get on with coworkers, but also makes it very easy for a threat actor to manipulate those defenders.

Disclaimer: Barracuda Networks paid for my airfare and accommodations for Black Hat 2023.
