Every good story needs a villain
In the world of artificial intelligence (AI), the significance of regulatory compliance cannot be overstated. Companies strive to avoid becoming the face of failure in this rapidly evolving landscape, as the consequences can be severe, both reputationally and financially.
One recurring pattern in the realm of regulation is the tendency to single out prominent global corporations as poster children for non-conformance, setting the tone for the industry. These high-profile entities, often drawn from the Fortune 500 and Global 2000 lists, are held up as cautionary examples to emphasize the necessity of adhering to the law and respecting authority. Such stories capture the public’s attention far more than those involving average brands.
Amidst this backdrop, Microsoft has found itself in the spotlight due to its association with ChatGPT, an AI language model developed by OpenAI, an organization that Microsoft has backed to the tune of billions of dollars. OpenAI has demonstrated a commendable level of transparency, with its co-founder and CEO, Sam Altman, expressing concerns about the potential risks of artificial intelligence during a recent Senate committee hearing. Taking a bold stance, Altman called for regulation to effectively address these risks and ensure responsible AI development and deployment.
As regulatory efforts such as the EU’s AI Act reach their final stages, we can expect the United States to follow suit with similar measures of its own. This impending regulatory environment creates a sense of urgency among CEOs, AI leaders and legal experts to get ahead on AI responsibility and avoid any association with this generation’s equivalent of a notorious failure, such as the infamous “Red Ring of Death” that plagued Microsoft’s Xbox 360 gaming console.
A tale of unintended (regulatory) consequences
What is the Red Ring of Death? If you’re asking, you are likely not a gamer. “Ring of Death” was the colloquial term for a hardware failure on the Xbox 360 circa 2007. Enthusiasts would fire up their console expecting hours of entertainment, only to press the power button and watch a flashing red ring emerge around it.
So what caused this Red Ring of Death? While not originally known or disclosed, the cause of the “RROD” was later traced to a common hardware failure that affected a significant number of Xbox 360 consoles: the overheating of the console’s internal components, particularly the graphics processing unit (GPU) and the motherboard. Over time, the excessive heat would cause the lead-free alloy solder joints connecting these components to the motherboard to weaken or crack. This resulted in a loss of proper electrical connections, leading to the console’s failure to function correctly.
This didn’t happen in the first-generation Xbox, so why now? Bad design? Not necessarily. The original Xbox, released in 2001, used lead-based solder for its electronic components, the industry standard in electronic manufacturing at that time. So why the switch?
The use of lead-free solder in the Xbox 360 manufacturing process was a result of environmental regulation, specifically the EU Restriction of Hazardous Substances (RoHS) directive. The RoHS directive aimed to reduce the use of hazardous materials, including lead, in electronic devices to minimize their impact on the environment and human health.
The choice of lead-free solder, however, had unintended consequences for the Xbox 360. The lead-free solder used in early Xbox 360 models proved more prone to thermal stress and fatigue than the lead-based solder used in older consoles. This, coupled with design and cooling issues, contributed to the widespread hardware failures and the emergence of the Red Ring of Death problem.
Consequence: Estimates and industry analyses have put the costs associated with the recall, warranty extension and related services at “as little as” $1.15 billion, and possibly closer to several billion dollars.
Lesson learned: Well-intentioned regulations designed to address specific issues may inadvertently create burdens or loopholes that hinder progress or generate unforeseen negative effects. With AI regulations looming, leaders must carefully consider the potential ripple effects of the industry’s actions and strive for comprehensive, responsible approaches that achieve our goals while minimizing unintended negative consequences.
Crossing wires and getting lost in translation
The ChatGPT fear-mongering and the fervor surrounding AI regulations in today’s market are palpable, and reminiscent of the days leading up to the EU RoHS directive coming into force.
In the early 2000s, the European Union introduced the RoHS Directive, which restricted the use of lead and other hazardous substances in electrical and electronic equipment (EEE). Known as “the six,” the restricted substances were lead, mercury, cadmium, hexavalent chromium, polybrominated biphenyls (PBBs) and polybrominated diphenyl ethers (PBDEs). The directive aimed to improve product safety by eliminating hazardous materials from everyday electronics.
As expected, electronics manufacturers were scanning their product bills of materials (BOMs) for the taboo six. (Think of a BOM as a recipe for a dish: just as a recipe lists all the ingredients needed to make a meal, a BOM lists all the parts and components needed to create a product.) It was only natural that electronics manufacturers instantly homed in on their top-tier critical components, triaging the most important components and suppliers to prepare for the coming regulations.
Then the news came out.
In what sounded like a drug bust, Dutch authorities seized more than 1.3 million Sony PlayStations and accessories due to high levels of cadmium. The culprit? A cable. Not the big next-gen chips powering the PlayStation, but a mundane, trivial cable. Much confusion surrounded the situation and the interpretation of the EU directive. Sony issued public statements expressing reservations about the Dutch interpretation of the directive, asserting that the restrictions around health risks applied to the responsible end-of-life disposal of the electronics in the future. In all, $162 million of PlayStation gear was held by customs, and Sony replaced and shipped new units devoid of restricted substances.
Lesson learned: Don’t miss the trees for the forest. With regulations, organizations need to focus on the entire portfolio of whatever the regulations apply to: the entire BOM. For many electronics manufacturers, the problem initially appeared to be connected to a larger, more prominent aspect of their product portfolios, but upon further investigation it became apparent that the underlying issues ran much deeper and were more complex than initially anticipated. In the case of AI regulations, everything that meets the definition of “AI” is in scope, and organizations will have to get smart about their full AI portfolio, from the important to the unimportant, as the sketch below illustrates.
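To make that concrete, here is a minimal sketch in Python of a full-BOM compliance scan. The parts, tiers and substance data are hypothetical, and a real scan would draw on supplier declarations and lab test results rather than a hard-coded list:

```python
# Hypothetical sketch: scan an entire bill of materials (BOM) for
# RoHS-restricted substances, not just the marquee components.
# All part data below is invented for illustration.

RESTRICTED = {"lead", "mercury", "cadmium", "hexavalent chromium", "PBB", "PBDE"}

bom = [
    {"part": "next-gen GPU",     "tier": "critical", "substances": set()},
    {"part": "motherboard",      "tier": "critical", "substances": set()},
    {"part": "peripheral cable", "tier": "trivial",  "substances": {"cadmium"}},
]

# Triaging by tier alone would check only the "critical" parts;
# compliance requires walking every line item in the BOM.
for item in bom:
    hits = item["substances"] & RESTRICTED
    if hits:
        print(f"NON-COMPLIANT: {item['part']} ({item['tier']} part) "
              f"contains {', '.join(sorted(hits))}")
```

The point of the toy example is the loop: it visits every line item, and the violation surfaces in the trivial cable, not the critical silicon.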
Good intentions and dual-use artificial intelligence
For most organizations, their digital transformation journey is paved with good intentions. As we begin to understand the scope and ramifications of AI regulations — as well as how to prepare for them — we must ask ourselves where the burden of proof will lie for the various potential uses of the models, applications and systems that are considered “AI.” After all, malicious use of AI is one of the top fears cited among business leaders.
As we saw in the PlayStation example, Sony had not intentionally shipped a potentially hazardous product, nor had it ignored safe handling of the product’s disposal, which it may have believed was the extent of its obligation to consumers. If our AI-enabled products and services are open to interpretation, could they be deemed harmful despite the developing organization’s intentions?
In global trade, export controls are regulations and restrictions that governments impose on the export of certain goods, technologies or information from one country to another. They exist to protect national security, prevent the proliferation of weapons of mass destruction, restrict the transfer of sensitive technologies or goods, and ensure compliance with international agreements and obligations. For instance, a powerful GPU can be utilized in the development of advanced weapons systems, simulations or cryptographic applications. This is why NVIDIA’s most advanced chips are restricted from being shipped to Russia and China.
Such products are considered “dual-use items.” While computer processors are commonly used in everyday consumer electronics such as laptops and smartphones, they also have applications in various industries with potential military or strategic implications.
Could an organization’s own AI-enabled products and services be considered dual use? One need only look at the prevalent use of biometric sensors in smart technology. These sensors measure unique physical or behavioral characteristics for identification. Examples include fingerprint scanners, iris scanners, facial recognition cameras and voice recognition systems, all of which use powerful algorithms to deliver their intended function.
Biometric identification is one of the key concerns of the EU’s pending AI Act, which justifiably seeks to prevent misuse and bias and to thwart discriminatory hiring, abuse by law enforcement and denial of access to education. A company that provides building-access products making use of biometrics may have a specific mission of protecting public health and safety, but it is now also a purveyor of AI that falls under the high-risk restrictions and additional requirements of the EU AI Act.
Lesson learned: In the context of the AI regulations being discussed, the adage “the best offense is a good defense” holds significant relevance. Organizations that produce smart products and AI systems face the challenge of demonstrating responsible and positive intentions while recognizing the potential for dual use.
In this scenario, adopting a defensive posture becomes crucial. By proactively providing clear, transparent documentation of their products’ intended use, organizations can establish a strong defense against potential accusations of nefarious or harmful purposes.
This documentation allows them to showcase their responsible behavior, highlight their positive intentions and address any potential negative uses, mitigating the risk of penalties related to dual-use concerns. By embracing transparency, these organizations can build trust and credibility, ultimately safeguarding their reputation and ensuring compliance with AI regulation.
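As a purely hypothetical illustration (the EU AI Act does not prescribe any such format, and every field name here is invented), an intended-use record in the spirit of a model card might look like this in Python:

```python
# Hypothetical intended-use record for an AI-enabled product, loosely in the
# spirit of a model card. Field names are illustrative, not a regulatory schema.
from dataclasses import dataclass, field

@dataclass
class IntendedUseRecord:
    system_name: str
    intended_uses: list[str]       # what the product is built and sold to do
    out_of_scope_uses: list[str]   # uses the organization explicitly disclaims
    dual_use_risks: list[str]      # known potential for misuse
    risk_tier: str                 # e.g., a self-assessed EU AI Act risk tier
    mitigations: list[str] = field(default_factory=list)

record = IntendedUseRecord(
    system_name="facility-access-face-match",
    intended_uses=["employee building-access verification"],
    out_of_scope_uses=["law enforcement identification", "emotion inference"],
    dual_use_risks=["repurposing for surveillance of non-consenting people"],
    risk_tier="high-risk (biometric identification)",
    mitigations=["on-device matching only", "audit logging", "bias testing"],
)
print(f"{record.system_name}: {record.risk_tier}")
```

The format matters less than the habit: intended uses, disclaimed uses and known dual-use risks are written down before a regulator asks for them.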
Learning from history to navigate new regulations
With forthcoming AI regulations on the horizon, CEOs and AI leaders have a unique opportunity to learn from the past and apply those lessons to their decision-making. By reflecting on historical experiences, they can gain invaluable insights into the potential risks and consequences associated with irresponsible AI usage.
Armed with this knowledge, executives can proactively align their organizational practices to address the disruptive consequences of new regulations, avoiding reputational and financial repercussions. By leveraging these lessons and understanding how regulations play out holistically, executives can pave the way for a future that upholds ethical standards and fosters trust in the responsible application of their AI products and services.