Akamai CTO on how bots are used online in legal and illegal ways

Learn what a bot is, the spectrum of ways bots are used online (especially in social media), and how bots might be used in the next election cycle.

Dan Patterson, CNET and CBS News Senior Producer, spoke with Patrick Sullivan, Akamai CTO, Security Strategy, about the programming and use cases of bots. The following is an edited transcript of the interview.

Dan Patterson: I think all of us have an idea of what a bot is, especially when we look on social media, or we try to use an e-commerce website, but it’s kind of hard to define specifically what a bot is. Let’s start with the basics. What is a bot?

Patrick Sullivan: A bot is just a piece of software that’s deployed with a predictable set of instructions to interact with a website. The bots themselves really have no motivation, so they inherit the motivations of their operator.
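
To make that definition concrete, here is a minimal sketch of a crawler-style bot: a fixed, predictable set of instructions that visits every page of a site. The "site" here is a hypothetical in-memory map of pages to links, standing in for real HTTP requests.

```python
from collections import deque

# A tiny in-memory "website": page path -> list of linked pages.
# (Hypothetical stand-in for real HTTP fetches.)
SITE = {
    "/": ["/products", "/about"],
    "/products": ["/products/1", "/products/2"],
    "/about": [],
    "/products/1": [],
    "/products/2": [],
}

def crawl(site, start="/"):
    """Visit every reachable page breadth-first -- a fixed,
    predictable set of instructions, exactly as a crawler bot would."""
    seen, queue, visited = set(), deque([start]), []
    while queue:
        page = queue.popleft()
        if page in seen:
            continue
        seen.add(page)
        visited.append(page)
        queue.extend(site.get(page, []))
    return visited

print(crawl(SITE))  # ['/', '/products', '/about', '/products/1', '/products/2']
```

As Sullivan notes, the code itself carries no intent: the same loop could index a site for a search engine or scrape it against the operator's terms.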

Dan Patterson: Who programs and creates bots?

Patrick Sullivan: There’s a whole ecosystem really here when you look at this. There are people that build tools that allow a bot operator to get up and running. At this point really, the barrier to becoming a bot operator is not a technical one–I think it’s more of an ethical one. Are you comfortable performing some of the actions that you’ll need to do there? 

You can get up and running: you can buy the software, you can buy tools for evasion, you can buy lists of username/password combinations to use in fraud. Really, all of the tooling, documentation, instruction manuals, and help desk support is available as a turnkey solution.

Dan Patterson: I went on Google yesterday, spent all of nine minutes, tops, doing searches for bots, and I was pretty surprised to see that for as little as $19.95, going up to $500, I could buy my own bot service. How is this possible? Is this illegal?

Patrick Sullivan: Yeah, absolutely. There is a very robust marketplace. Some of these bot operators even have marketing where they're competing, claiming their bot is better than a rival's. They have service-level agreements with support and help to do things there. Some things you're doing with a bot may not be illegal. Some things people do with bots, such as using them for fraud, are highly illegal, and you do see arrests made from time to time when people are caught.

Dan Patterson: If I understand you correctly, the bots, the code, might not be illegal, but the activity the bot engages in could be illegal. Is that a fair characterization?

Patrick Sullivan: It is, yeah. There’s such a spectrum of what these bots are doing, right? At Akamai, we see roughly a third to a half of all the traffic on the web is automation of some sort, so we see about a trillion hits a day from bots. Some things that bots do are perfectly benign. Think about your search engines that are crawling through sites helping people to discover those sites… symbiotic bots that help you measure the performance of your website. 

Certainly those things are all highly legal and ethical in nature–they’re welcomed by the operator of the bot. But some of these things venture into fraud where people are attempting to break into somebody’s personal account, take that account over, steal money, defraud the website–those types of things are illegal. Using botnets to launch distributed denial of service (DDoS) attacks against a target, those things are illegal, and you do see people get arrested and convicted for those charges.

Dan Patterson: You said something really interesting there and that is that a significant portion of the internet that we use every day is inauthentic. Let’s go into that a little bit. How do we know and which parts of the internet are fake?

Patrick Sullivan: At this point, what we're talking about are the people who are generating requests. If you're operating a website, almost immediately you'll start to see bot traffic come to your site, and some of that is great. It's great when you start to see the search engines indexing your site; that helps people find you. You may avail yourself of services to get a site up and running to help you along the way. There are a lot of tasks that automation can perform, but clearly a lot of what you see with automation is malicious.

There are people that use automation to find vulnerabilities in your code that they can exploit. They attack your users with credential-stuffing attacks. Determining whether a user visiting a site is a human or a bot is really, I think, one of the most interesting challenges in information security. It's a very adaptive cat-and-mouse game that has evolved through a couple of generations over the last five or six years.
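
One simple defensive heuristic against the credential stuffing Sullivan mentions is flagging source addresses with an abnormal number of failed logins. This is an illustrative sketch with a made-up threshold; production systems combine many more signals (device fingerprints, known-breached credential lists, geolocation, and so on).

```python
from collections import Counter

def flag_credential_stuffing(login_events, max_failures=5):
    """Flag source IPs with an abnormal number of failed logins.

    login_events: iterable of (source_ip, succeeded) pairs.
    max_failures: illustrative cutoff, not a recommended value.
    """
    failures = Counter(ip for ip, ok in login_events if not ok)
    return {ip for ip, count in failures.items() if count > max_failures}

# One address hammering logins, one ordinary user with a single typo.
events = [("10.0.0.1", False)] * 8 + [("192.168.1.5", True), ("192.168.1.5", False)]
print(flag_credential_stuffing(events))  # {'10.0.0.1'}
```

The cat-and-mouse aspect shows up immediately: attackers respond to exactly this kind of per-IP threshold by spreading attempts across large proxy pools, which is why defenses keep evolving.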

Dan Patterson: For clarity there, you are saying requesters, meaning users, so a significant percentage of web activity from users is inauthentic. My next question is, when we look at social networks especially, where there appears to be a lot of bot activity, are you then saying that a significant portion of the users of social networks are inauthentic, fake, not real accounts?

Patrick Sullivan: I think any web experience is going to be dealing with bots. Social media, I think, is well chronicled. You have bots that basically amplify a certain message at the behest of their bot operator. You may have people that generate a synthetic number of followers using bots, so that’s how it manifests itself in social media.

But I think the bot story that you see will be unique to each vertical. It's different in retail than it is in gaming… media sees a different interface with bots. Probably the best-told story is social media, because of all the attention it got in the last election cycle.

Dan Patterson: What about the next election cycle? If I’m a user of social media right now, is it likely that I will interact with a bot that is designed for political purposes?

Patrick Sullivan: I think it's reasonable to assume that people who have a motivation to amplify a certain message will deploy bots where they can to generate likes and to generate a large number of followers. I think it's a pretty reasonable assumption that as you're viewing somebody on a social media site with X number of followers, some percentage of those followers is likely bots, maybe ones they've paid for themselves, or maybe bots somebody else operates that are following them and amplifying their message.

Dan Patterson: When I think about actors who hacked or attacked the 2016 election, there was reporting indicating that some Russian hackers had a budget of $1.2 million per month. But when we look at how inexpensive bots are, again, anywhere from $20 up to $500, it seems like anyone could own and deploy a bot. When we look at the next election cycle, who are the threat actors likely to be using bots, and how will they use them for political purposes?

Patrick Sullivan: That's the interesting spectrum here. It is extremely inexpensive to get up and running as a bot operator. You can get yourself up and running for a couple of hundred dollars if that's your motivation, but there are different types of bots with different levels of sophistication. The low-end bots with operators who are not terribly skilled are very easy to detect.

Some pretty easy mechanisms on the defensive side will be able to detect that it’s a bot and not a human on the other end of that session. As you get into the sites that have more robust defenses–your financial services sites, your high-end retailer sites–they have very active security teams that are working very hard to figure out if the user at the other end of that session is a human being or if it’s a bot.
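
One example of the "pretty easy mechanisms" defenders start with is request timing: low-end bots tend to fire requests at near-perfectly regular intervals, while humans introduce jitter. This is a toy signal under an assumed threshold; the "more robust defenses" Sullivan describes combine many behavioral and network features.

```python
from statistics import pstdev

def looks_automated(request_times, jitter_threshold=0.05):
    """Guess whether a session is a bot from request timing alone.

    request_times: timestamps (seconds) of a session's requests.
    jitter_threshold: illustrative cutoff for inter-request variability;
    near-zero variability suggests machine-generated traffic.
    """
    gaps = [b - a for a, b in zip(request_times, request_times[1:])]
    return pstdev(gaps) < jitter_threshold

bot_session = [0.0, 1.0, 2.0, 3.0, 4.0]    # metronome-regular requests
human_session = [0.0, 1.4, 2.1, 4.8, 5.5]  # irregular, human-like gaps
print(looks_automated(bot_session), looks_automated(human_session))  # True False
```

Sophisticated bots defeat exactly this check by randomizing their delays, which is part of why detection has moved through the generations of escalation Sullivan describes.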
