The United States, Britain, and 16 other countries have signed an agreement to make AI "secure by design," in an effort to keep artificial intelligence safe from threat actors. The non-binding 20-page document outlines the need to develop and deploy AI in a way that is safe for consumers and does not enable misuse.
AI can be a great help in research, website construction, automation, and all sorts of passion projects that may not have been possible before. However, it can also be a dangerous tool in the wrong hands.
Companies use AI to compete with each other, which can push them to skip appropriate security measures. For example, a company might rush an undertested product to market to keep pace with competitors, leaving vulnerabilities that hackers can exploit.
AI also raises fears of misuse, such as spreading misinformation, disrupting democratic processes, hosting fake websites, fueling a general rise in fraud, and causing dramatic job losses. The agreement aims to address these concerns.
Note that the agreement does not touch on the potential misuse of user data, focusing instead on security concerns.
Specifically, it details the need to properly test products before bringing them to market and how to prevent hackers from hijacking the networks they run on.
“This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs,” says Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency.
The agreement comes after mounting pressure from the Biden Administration to implement AI regulation. Only a few of the laws the administration has pushed for have passed, owing to a split Congress.
Countries that signed the agreement include the US, Britain, Germany, Italy, Poland, Australia, Israel, Singapore, Nigeria, Estonia, and the Czech Republic.