Top challenge to internet health is AI power disparity and harm, Mozilla says

Image: The Mozilla sign (moz://a) at the company's office building in San Francisco's SOMA district. Sundry Photography/Adobe Stock

The top challenge for the health of the internet is the power disparity between who benefits from AI and who is harmed by it, Mozilla’s new 2022 Internet Health Report reveals.

Once again, the new report puts AI under the spotlight for how companies and governments use the technology, scrutinizing the AI-driven world with real examples from different countries.

TechRepublic spoke to Solana Larsen, editor of Mozilla’s Internet Health Report, to shed light on the concept of “Responsible AI from the Start,” black box AI, the future of regulations and how some AI projects lead by example.

SEE: Artificial Intelligence Ethics Policy (TechRepublic Premium)

Larsen explains that AI systems should be built with ethics and responsibility in mind from the start, not tacked on later when the harms begin to emerge.

“As logical as that sounds, it really doesn’t happen enough,” Larsen said.

According to Mozilla’s findings, the centralization of influence and control over AI does not work to the advantage of the majority of people. As AI is embraced around the world, the scale the technology is reaching has turned this power disparity into a top concern.

MarketWatch’s report on AI disruption reveals just how big AI is. The year 2022 opened with over $50 billion in new opportunities for AI companies, and the sector is expected to soar to $300 billion by 2025.

The adoption of AI at all levels is now inevitable. Thirty-two countries have already adopted AI strategies, more than 200 projects with over $70 billion in public funding have been announced in Europe, Asia and Australia, and startups are raising billions in thousands of deals around the world.

More importantly, AI applications have shifted from rule-based AI to data-based AI, and the data these models use is personal data. Mozilla recognizes the potential of AI but warns it is already causing harm on a daily basis around the globe.
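To make the distinction concrete, here is a minimal sketch in Python contrasting the two paradigms. The spam-filter scenario and the training examples are invented for illustration and are not drawn from Mozilla’s report.

    # Rule-based AI: behavior is hand-written and fully inspectable.
    def rule_based_spam_filter(message: str) -> bool:
        return "free money" in message.lower()

    # Data-based AI: behavior is learned from example data, which in
    # real deployments is often personal data.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    messages = ["claim your free money now", "lunch at noon?",
                "free money inside", "meeting moved to 3 p.m."]
    labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(messages, labels)
    print(model.predict(["you won free money"]))  # expected: [1], i.e. spam

The rule-based filter can be read and audited line by line; the data-based one can only be understood through its training data and its behavior, which is exactly why the provenance of that data matters.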

“We need AI builders from diverse backgrounds who understand the complex interplay of data, AI and how it can affect different communities,” Larsen told TechRepublic. She called for regulations to ensure AI systems are built to help, not harm.

Mozilla’s report also focuses on AI’s data problem: Large and frequently reused datasets are put to work even though they do not guarantee the results that smaller datasets designed specifically for a project can deliver.

The data used to train machine learning algorithms is often sourced from public sites like Flickr. The organization warns that many of the most popular datasets are made up of content scraped from the internet, which “overwhelmingly reflects words and images that skew English, American, white and for the male gaze.”
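As a rough illustration of what a first-pass dataset audit can look like, the Python sketch below tallies language and gendered-term distributions over a handful of made-up caption records. Real audits of scraped datasets are far more involved; everything here is a hypothetical stand-in.

    from collections import Counter

    # Hypothetical (caption, language) records, standing in for a
    # web-scraped image-caption dataset.
    records = [
        ("a man riding a skateboard", "en"),
        ("une femme qui lit un livre", "fr"),
        ("a man holding a trophy", "en"),
        ("a man at a conference", "en"),
        ("ein Hund im Park", "de"),
    ]

    # Language distribution: a crude first signal of English skew.
    lang_counts = Counter(lang for _, lang in records)
    total = sum(lang_counts.values())
    for lang, n in lang_counts.most_common():
        print(f"{lang}: {n / total:.0%}")

    # Gendered-term tally in English captions: another skew signal.
    gendered = Counter(
        token
        for caption, lang in records if lang == "en"
        for token in caption.split() if token in {"man", "woman", "boy", "girl"}
    )
    print(dict(gendered))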

Black Box AI: Demystifying Artificial Intelligence

AI seems to get away with much of the harm it causes thanks to its reputation for being too technical and advanced for people to understand. In the AI industry, a system whose machine learning model humans cannot understand is known as a black box AI and is flagged for its lack of transparency.

Larsen says that to demystify AI, users should have transparency into what the code is doing, what data it is collecting, what decisions it is making and who is benefiting from it.

“We really need to reject the notion that AI is too advanced for people to have an opinion about unless they are data scientists,” Larsen said. “If you are experiencing harm from a system, you know something about it that maybe even its own designer doesn’t.”
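One concrete transparency technique, offered here as a minimal sketch rather than anything prescribed by Mozilla’s report, is permutation importance: shuffle each input feature in turn and measure how much the model’s accuracy drops, revealing which inputs actually drive its decisions. The model and data below are synthetic stand-ins.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic data and an opaque ensemble model as the "black box."
    X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature and measure the accuracy drop: large drops
    # mean the model leans heavily on that feature.
    result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                    random_state=0)
    for i, score in enumerate(result.importances_mean):
        print(f"feature_{i}: {score:.3f}")

Techniques like this do not open the box, but they give users and auditors a behavioral view of what the system depends on, which is one piece of the transparency Larsen describes.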

Companies like Amazon, Apple, Google, Microsoft, Meta and Alibaba top the lists of those reaping the most benefits from AI-driven products, services and solutions. But other sectors and applications are raising red flags for the harm they create: military and surveillance uses, computational propaganda (used in 81 countries in 2020) and misinformation, as well as AI bias and discrimination in the health, financial and legal sectors.

Regulating AI: From talk to action

Big tech companies are known for often pushing back against regulations. Military and government-driven AI also operate in an unregulated environment, often clashing with human rights and privacy activists.

Mozilla believes regulations can be guardrails for innovation that help facilitate trust and level the playing field.

“It is good for business and consumers,” Larsen says.

Mozilla supports regulations like the Digital Services Act in Europe and is closely following the EU AI Act. The organization also supports bills in the U.S. that would make AI systems more transparent.

Data privacy and consumer rights are also part of the legal landscape that could help pave the way to a more responsible AI. But regulations are just one part of the equation. Without enforcement, regulations are nothing but words on paper.

“We need a critical mass of people calling for change and accountability, and we need AI builders who put people before profit,” Larsen said. “Right now, a big part of AI research and development is funded by big tech, and we need alternatives here too.”

SEE: Metaverse cheat sheet: Everything you need to know (free PDF) (TechRepublic)

Mozilla’s report linked AI projects causing harm to several companies, countries and communities. The organization cites AI projects that are affecting gig workers and their labor conditions. This includes the invisible army of low-wage workers that train AI technology on sites like Amazon Mechanical Turk, with average pay as low as $2.83 per hour.

“In real life, over and over, the harms of AI disproportionately affect people who are not advantaged by global systems of power,” Larsen said.

The organization is also taking action itself.

One example is Mozilla’s RegretsReporter browser extension, which turns everyday YouTube users into watchdogs, crowdsourcing research into how the platform’s recommendation AI works.

Working with tens of thousands of users, Mozilla’s investigation revealed that YouTube’s algorithm recommends videos that violate the platform’s own policies. The investigation paid off: YouTube is now more transparent about how its recommendation AI works. But Mozilla has no plans to stop there, and it continues the research in different countries today.
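The mechanics of crowdsourced auditing are simple to sketch: pool volunteers’ reports and rank what gets flagged most often. The Python below is a toy illustration with an invented report schema, not RegretsReporter’s actual data format.

    from collections import Counter

    # Hypothetical volunteer reports: (video_id, reason) pairs.
    reports = [
        ("vid_123", "misinformation"),
        ("vid_123", "misinformation"),
        ("vid_456", "hate speech"),
        ("vid_123", "violence"),
        ("vid_789", "misinformation"),
    ]

    # Rank videos by how often volunteers flagged them.
    by_video = Counter(video_id for video_id, _ in reports)
    for video_id, n in by_video.most_common():
        print(f"{video_id}: flagged {n} times")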

Larsen explains that Mozilla believes shedding light on and documenting AI that operates under shady conditions is of paramount importance. The organization also calls for dialogue with tech companies to understand the problems and find solutions, and it reaches out to regulators to discuss the rules that should govern AI.

AI that leads by example

While Mozilla’s 2022 Internet Health Report paints a rather grim picture of AI, magnifying problems the world has always had, the organization also highlights AI projects built and designed for a good cause.

For example, the Drivers Cooperative in New York City built an app that’s used — and owned — by over 5,000 rideshare drivers, helping gig workers gain real agency in the rideshare industry.

Another example is a Black-owned business in Maryland called Melalogic that is crowdsourcing images of dark skin for better detection of cancer and other skin problems in response to serious racial bias in machine learning for dermatology.

“There are many examples around the world of AI systems being built and used in trustworthy and transparent ways,” Larsen said.
