The Debate Around AI Ethics in Australia is Falling Far Behind

AI tools are undeniably useful, which is why their development and adoption are accelerating so rapidly. AI is now used for everything from research to writing legal arguments, and from image generation and storytelling for artists to supporting coders.

However, as useful as these tools are, AI raises serious ethical concerns. While AI tools are being pushed out for public use in massive numbers right now, the discussions around AI ethics remain just that: discussions, with little regulatory push behind them. Many nations, including Australia, want to look at regulating AI, but such regulation is still a way off, and it is an open question how much high-value “innovation” these regulations would limit in the name of ethical best practice.

Consequently, while the ethical consumption of technology in general has become an ever-greater priority, AI has become something of a wild west, where too many of the ethical boundaries are left to each individual’s own moral compass.

From plagiarism and the potential for AI to compromise the integrity of academic research, to biases that lead to discrimination, to job losses and even deaths caused by AI, we as a society need better ethical frameworks around this technology.

We need this to happen quickly because, while we might not be headed directly for Skynet, AI is going to play a massive role in shaping our futures. We need to make sure the ethical foundations these applications are built on are properly considered before we allow AI to “take over” in any meaningful context.

The ‘lost decade’ for AI ethics

In 2016, the World Economic Forum examined the top nine ethical issues in artificial intelligence. These issues have been well understood for the better part of a decade (or longer), which is what makes the lack of movement in addressing them so concerning. In many cases, the concerns the WEF highlighted, forward-looking at the time, are starting to become reality, yet they have still not been acted on.

Unemployment: What happens after the end of jobs?

As the WEF flags: “Look at trucking: it currently employs millions of individuals in the United States alone. What will happen to them if the self-driving trucks promised by Tesla’s Elon Musk become widely available in the next decade?”

Meanwhile, a paper published this year acknowledges that there are insufficient job alternatives for the estimated 50% of truckers facing displacement. Job losses from AI, particularly in fields where the workforce tends to be older or hold fewer educational qualifications, are a long-recognized ethical concern. Yet across the world, policymakers and private businesses alike have shown little urgency in helping affected individuals reskill and find new opportunities.

Inequality: How do we distribute the wealth created by machines?

The WEF acknowledges that AI has the potential to further concentrate wealth. After all, AI works for a fraction of the cost of skilled employees, and it won’t unionize, take sick days or need rest.

By tasking AI with work while cutting the total size of the workforce, companies improve their profit position. However, this benefits society only if that wealth is transferred back into it.

“AI will end the West’s weak productivity, but who exactly will benefit?” as The Guardian put it.

One solution would be for governments to move away from taxing labor and instead tax AI systems directly. The public revenue generated this way could be used to provide income support to those pushed out of work or into lower-paid jobs. However, even as jobs are already being affected, there is no sign of even a debate about transforming the taxation system accordingly.

Bias: How do we address bias and potential racism and sexism generated by AI applications?

The WEF flagged the potential for AI bias back in its initial article, and it remains one of the most talked-about and debated AI ethics issues. There are numerous examples of AI assessing people differently based on race and gender. However, as UNESCO noted just last year, despite the decade of debate, bias remains embedded in AI systems right down to the core.

“Type ‘greatest leaders of all time’ in your favorite search engine, and you will probably see a list of the world’s prominent male personalities. How many women do you count? An image search for ‘school girl’ will most probably reveal a page filled with women and girls in all sorts of sexualised costumes. Surprisingly, if you type ‘school boy,’ results will mostly show ordinary young school boys. No men in sexualised costumes or very few.”

It was one thing when these biases were built into relatively benign applications like search engine results or when they simply delivered a poor user experience. However, AI is being increasingly applied to areas where bias has very real, potentially life-altering consequences.

Some argue that AI will result in a “fairer” judicial system, for example. However, the in-built biases of AI applications, which have yet to be addressed despite a decade of research and debate, would suggest a very different outcome than fairness.

Theft: How do we protect artists from having their work and even identities stolen by those using AI applications?

As UNESCO noted, in 2019 Huawei used an AI application to “complete” the last two movements of Franz Schubert’s unfinished Symphony No. 8. Meanwhile, AI is being used to create voicebanks that let users generate speech in the voices of deceased celebrities such as Steve Jobs. One of the key motivating factors behind the recent actors’ and screenwriters’ strikes has been the concern that AI will be used to mimic performers in projects they won’t earn money from, without their direct consent, even after they’re dead.

Elsewhere, an artist commissioned to produce a video game cover delivered a generative AI image rather than the original artwork the publisher had paid for. There was also fierce backlash against DeviantArt, the world’s largest online artist community, for allowing AI algorithms to scrape artists’ work without their permission.

The debate over where the line lies between acceptable and unacceptable use of creative works by AI is raging, loudly and vocally. Yet while that debate continues, AI developers keep releasing tools that enable a laissez-faire approach for AI artists, and governments continue to rely on antiquated, inadequate IP laws, created before AI was even a concept, to regulate the space.

Disinformation: How do we prevent AI from being used to further the spread of disinformation?

“AI systems can be fooled in ways that humans wouldn’t be,” the WEF noted in its 2016 report. “For example, random dot patterns can lead a machine to ‘see’ things that aren’t there.”
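What the WEF describes is known in the research literature as an adversarial example: a tiny, deliberately crafted change to an input, often invisible to a human, that flips a model’s prediction. As a rough illustration only, here is a minimal sketch of the classic fast gradient sign method (FGSM) in Python, assuming PyTorch and torchvision are installed; the model and input below are placeholders for illustration, not anything referenced in the WEF report.

```python
# Minimal FGSM sketch (assumes PyTorch + torchvision are available).
# It perturbs an image just enough to change a classifier's output,
# while the change looks like faint noise to a human observer.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image: torch.Tensor, label: torch.Tensor,
                epsilon: float = 0.05) -> torch.Tensor:
    """Nudge every pixel in the direction that most increases the
    model's loss for the given label (fast gradient sign method)."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage with a stand-in image tensor (a real photo would
# normally be resized to 224x224 and normalized first).
x = torch.rand(1, 3, 224, 224)
y = model(x).argmax(dim=1)        # the model's original prediction
x_adv = fgsm_attack(x, y)
print(model(x).argmax(dim=1).item(), model(x_adv).argmax(dim=1).item())
# The two predictions frequently differ, even though x and x_adv look
# nearly identical to a person.
```

The same basic trick generalizes well beyond image classifiers, which is why “the model can be fooled” is an ethical and security problem rather than a laboratory curiosity.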

Just recently, research found that ChatGPT went from correctly answering a simple math problem 98% of the time to getting it right just 2% of the time. Earlier this year, a new Google AI application made a critical factual error that wiped $100 billion off the company’s market value.

We live in an era where information moves quickly, and disinformation can affect everything up to and including critical election results. Some governments have made mild attempts to introduce disinformation laws; Australia, for example, is keen to target “systemic issues which pose a risk of harm on digital platforms.” Yet AI applications are being released into the wild with no obligation to be accurate in the information they present, and the big problem with disinformation is that once it has influenced someone, it can be difficult to correct the record.

A common pattern for a critical problem

These are only five examples of AI affecting jobs, lifestyles and people’s very existence. In every case, the ethical concerns have been well known for many years, and yet, even though the ethical debate is far from settled, the release of these applications has continued uninhibited.

As the popular saying goes, “It’s better to ask forgiveness than permission.” That seems to be the approach those working in AI have taken. The problem is that this quote, originally from Admiral Grace Hopper, has been wildly misinterpreted from its original intent. In reality, the longer AI development goes without being answerable to ethical considerations (the longer developers are allowed to ask forgiveness rather than permission), the more difficult it will be to walk back the harm these applications are causing in the name of progress.
