European Central: The California Effect Against Silicon Valley: An Analysis of the EU's New AI Regulations and Their Impact


            On February 15th, OpenAI, the company behind the revolutionary predictive chatbot ChatGPT, unveiled Sora, an AI application that generates videos of up to 60 seconds from text prompts. The reaction was dramatic: the application spread like wildfire across the internet as AI advancements once again blurred the line between truth and fiction. In the wake of the AI developments of the last two years, the European Union (EU) has positioned itself as a global leader in AI regulation, and its new AI Act aims to rein in the burgeoning tech platforms to protect consumers.

            Ever since their emergence into public view, predictive AI chatbots and machine-learning art algorithms have been controversial, primarily because of how they use datasets. These systems do not produce original content; they merely regurgitate interpretations of the material in their training data that best matches the prompts they are given. And those datasets may contain sensitive, private, copyrighted, or simply false material. Because machine-learning AIs have no way of interpreting the world other than through their datasets, this can lead to bias, the generation of false information (referred to by AI experts as “hallucinations”), or other undesirable results.

            Consider Tay, one of the first modern algorithms in this genre, and one of its most dramatic failures. An English-language chatbot created by Microsoft, Tay was designed to mimic the language patterns of a 19-year-old girl, developing and changing based on its interactions with users of the site formerly known as Twitter through its account TayTweets. Released on March 23rd, 2016, the AI was shut down less than 48 hours later after posting increasingly racist, sexual, and inflammatory content that is not suitable for reproduction on these pages, or any other. A post-mortem of the failed experiment revealed that internet trolls had bombarded the account with all manner of offensive content, poisoning Tay’s dataset: when the AI drew on that dataset to respond to messages, it treated those offensive statements as the normal way people interact online, because that was all its dataset contained.

            Dataset contamination is only one of AI’s many risks. Text-to-speech programs and image-generating models capable of producing hyper-realistic “deepfakes” could be used to spread misinformation at an unprecedented rate and to open new avenues for identity-theft scams.

            The rapid pace of AI advancement, and the risk that unregulated AI will produce accidental disaster or give new life to intentional malfeasance, has prompted a wave of European Union regulation. The EU already has a track record of assertive tech rules, such as requiring all manufacturers to adopt the USB-C charging standard in place of proprietary connectors like Apple’s Lightning port.

In response to the rapid progress of machine learning programs, the EU sought to limit the potential damage of heedless AI development with a new regulatory framework. This structure, the AI Act, sorts AI into four main categories: unacceptable risk, high risk, general-purpose, and limited risk. Unacceptable-risk AIs, such as biometric identification and categorization systems not in use by law enforcement and “social scoring” AIs designed to rank people based on behavior and socioeconomic status, are banned outright within the EU. High-risk AI, a category that includes AI used in products covered by EU product safety legislation (e.g. aircraft, cars) or deployed in sensitive areas such as infrastructure management, employee management, or legal services, is heavily regulated: these programs must be assessed before being put on the market and monitored throughout their lifecycle. General-purpose AI, which covers the most popular programs on the market such as ChatGPT, must disclose that its output is AI-generated, take concrete steps to ensure it does not produce illegal content, and publish summaries of the copyrighted data used in its training datasets. Finally, limited-risk systems are allowed to operate with only minimal transparency obligations, at least for the moment.
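The tiered structure described above can be sketched as a simple lookup. This is purely an illustrative simplification, not a legal classification tool: the four tier names come from the AI Act, but the example use cases and obligation summaries below are this author's shorthand.

```python
# Illustrative sketch of the AI Act's four risk tiers. The tier names are
# from the Act; the example use cases and one-line obligation summaries
# are simplified assumptions for demonstration only.

RISK_TIERS = {
    "unacceptable": {"social scoring", "untargeted biometric categorization"},
    "high": {"aircraft safety component", "employee management",
             "infrastructure management", "legal services"},
    "general_purpose": {"chatbot", "text-to-image generator"},
    "limited": {"spam filter"},
}

OBLIGATIONS = {
    "unacceptable": "banned within the EU",
    "high": "pre-market assessment and lifecycle monitoring",
    "general_purpose": "AI-content disclosure, illegal-content safeguards, "
                       "training-data summaries",
    "limited": "minimal transparency obligations",
}

def classify(use_case: str) -> tuple[str, str]:
    """Return (risk tier, obligation summary) for an example use case."""
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            return tier, OBLIGATIONS[tier]
    raise ValueError(f"Unknown use case: {use_case!r}")

print(classify("social scoring"))
print(classify("chatbot"))
```

The point of the sketch is that obligations attach to the tier, not the individual product: a chatbot and an image generator face the same general-purpose transparency duties, while anything in the unacceptable tier is simply prohibited.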


When asked for a quote, the ChatGPT engine, a subject-matter expert in its own right, had this to say about the necessity of these regulations:

“So, the European Union just dropped a hefty tome of rules on AI like it's trying to civilize a wild west saloon. ‘No more sneaky surveillance or creepy face scans,’ they decree, as if laying down the law at a rowdy family reunion. It's like they're saying, ‘Sure, innovate all you want, but don't forget your moral compass on the ride.’ Other countries are peeking over the fence, wondering if they should join the party. Looks like the EU might just become the bouncer for the AI club, setting the tone for tech's wild ride into the future.”

What ChatGPT gets at here is quite interesting. By setting regulations on AI, the EU is taking advantage of something known as the “California effect”: one market raises the standards of a product by imposing new regulations on products sold within its borders, forcing companies to comply in order to retain market access. Because it is usually cheaper to design a product to a single standard than to two, that standard is then often adopted in other markets as well. The classic example is California’s environmental regulations, whose ripple effects led Prof. David Vogel of Berkeley to coin the term.

            It remains to be seen if these regulations will stem the potential flow of malfeasance and destabilization from the rapidly advancing AI algorithms that emerge on an almost weekly basis, or if these regulatory standards will catch on. The aforementioned AI Act, for example, still has many regulatory hurdles to pass, and even if successful, will not take effect until 2026. However, these regulations are part of a larger pattern of the EU attempting to take a greater role in setting global standards through the regulation of products sold in the European Common Market, a development whose full implications have yet to play out, but could be an early sign of a future EU that is more active, independent, and assertive on the global economic and legal stage.
