The proverbial other shoe may finally be dropping, and with it a fundamental change in how businesses use artificial intelligence, one that will force organizations to shore up weak and opaque methods of tracking data and algorithms. As reported by Reuters, VentureBeat and others, European Union (EU) lawmakers passed a draft of the EU AI Act late last week, including new copyright rules that will impact applications leveraging generative AI and Large Language Models (LLMs). Widespread fears of AI misuse have surged in 2023, and executives using the technology are taking particular note that responsible, trustworthy, and ethical use of AI is likely to be regulated. It’s a matter of “when, not if.”
As we grapple with the need to regulate artificial intelligence, we must ask ourselves: what can we learn from the past? Previous attempts at regulation in other industries show that loopholes and oversights can have catastrophic consequences, yet the right regulatory framework can strike a balance between innovation and responsible use. Learning from those experiences can help us develop comprehensive, adaptable regulations that protect citizens while promoting the benefits of AI.
It's not surprising that the European Union, the first jurisdiction to put in place meaningful regulations for environmental and social responsibility, such as the EU Restriction of Hazardous Substances (RoHS) directive, is now the first to pass regulations governing responsible artificial intelligence. The EU has long been a leader in promoting sustainability and social responsibility, and its proactive stance on new and emerging technologies like AI reflects both a recognition of the risks and challenges these technologies pose and a commitment to ensuring they are used in ways that benefit society as a whole.
Over roughly the past 20 years, global manufacturers have had to undergo a major transformation to identify, reduce, and, in some cases, remove hazardous or potentially unethical materials from their products and supply chains. AI regulations are likely to evolve and accumulate just as their environmental and social predecessors did. At the end of the day, I posit that AI, with its underlying models, ensembles, and data, is no different from a physical supply chain, albeit a digital one.
Does your generative AI contain copyrighted information? It’s a simple question, right? Yes or no. Or will it be that simple…
The EU AI Act proposes that companies disclose the content used in their AI. A disclosure document is a written statement that provides information about a particular subject, typically used to inform stakeholders. Disclosure documents take many forms, such as financial statements, product information sheets, legal agreements, or regulatory reports, and they are often required by law or industry regulation. Their purpose is to promote transparency and accountability by giving stakeholders the relevant information they need to make informed decisions.
Let’s briefly examine the chronology of disclosure documents regulating chemicals and materials used in the supply chain:
In the past, what started with concerns about lead in electronics as a “yes or no” question (“Is your product free of lead?”) evolved into a global supply chain transformation. Today, one needs to know not just whether a product is free of lead, but the entire makeup and concentrations of its chemical substances (e.g., RoHS, REACH) and their country of origin (e.g., conflict minerals) to demonstrate non-hazardous, responsible, and ethical products.
Why wouldn’t this play out the same way for AI? In other words, regulations that on their face focus on copyright infringement can very well also lead to a focus on responsible use, ethics, security, and the provenance of models. Most production models are pipelines (think “supply chain”) of other models. How does this “model chaining” affect one’s ability to demonstrate compliance? The sketch below illustrates the problem.
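To make the model-chaining problem concrete, here is a minimal sketch in Python. The `ModelNode` structure and the license labels are hypothetical, invented purely for illustration; the point is simply that a compliance answer for one production model requires walking every upstream model it depends on.

```python
from dataclasses import dataclass, field

@dataclass
class ModelNode:
    """One model in a pipeline; `upstream` lists the models it builds on."""
    name: str
    training_data_licenses: list[str]          # e.g. ["CC-BY-4.0", "unknown"]
    upstream: list["ModelNode"] = field(default_factory=list)

def collect_licenses(model: ModelNode) -> set[str]:
    """Walk the dependency graph (assumed acyclic): compliance for `model`
    depends on every model it was fine-tuned from or chained with."""
    licenses = set(model.training_data_licenses)
    for parent in model.upstream:
        licenses |= collect_licenses(parent)
    return licenses

# A fine-tuned production model inherits the compliance exposure
# of the base model it was built on.
base = ModelNode("base-llm", ["web-crawl-unknown"])
prod = ModelNode("support-bot", ["internal-tickets"], upstream=[base])
print(sorted(collect_licenses(prod)))  # ['internal-tickets', 'web-crawl-unknown']
```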
With environmental compliance, many companies spent years chasing their tails, crashing through one wall collecting and reporting the contents of their products, only to find another wall on the other side. One regulation at a time: lead-free, RoHS, REACH… Reporting started with literally one substance, lead, and later expanded to some 200 more under REACH.
Full material disclosure programs began to emerge to take a future-state approach to products and the supply chain, on two premises: first, that full transparency to regulators and consumers would become the norm of reporting, and second, that companies could gain competitive advantage and business benefits by providing it. A full material disclosure program delivers several such benefits: increased transparency and accountability in the supply chain, improved risk management, better product design and innovation, enhanced brand reputation, and greater customer loyalty and trust. By understanding the composition and environmental impact of the materials in their products, companies can identify risks and opportunities for improvement, reduce waste and emissions, comply with regulatory requirements, and respond to customer demand for more sustainable and ethical products. Full material disclosure also facilitates collaboration with suppliers and other stakeholders, leading to more efficient and resilient supply chains.
What does transparency mean for AI? I wouldn’t be surprised if we begin to see environmental, social, and corporate governance (ESG) declarations (and potentially regulatory reporting obligations) for the extreme resource consumption and environmental impact of AI model development.
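As a rough illustration of what such a declaration might have to quantify, consider a back-of-the-envelope estimate. The formula (accelerator-hours × average power × datacenter PUE × grid carbon intensity) is a common first approximation, and every number below is an assumed placeholder, not a measured figure:

```python
# Back-of-the-envelope CO2 estimate for one training run.
# Every constant here is an illustrative assumption, not a measurement.
gpu_hours = 10_000           # total accelerator-hours for the run (assumed)
avg_power_kw = 0.4           # assumed average draw per GPU, in kilowatts
pue = 1.2                    # assumed datacenter power usage effectiveness
grid_kg_co2_per_kwh = 0.4    # assumed grid carbon intensity

energy_kwh = gpu_hours * avg_power_kw * pue
emissions_kg = energy_kwh * grid_kg_co2_per_kwh
print(f"{energy_kwh:,.0f} kWh -> {emissions_kg:,.0f} kg CO2e")
# -> 4,800 kWh -> 1,920 kg CO2e
```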
The critical systems of record that manufacturers used to drive their regulatory programs and supply chain transformations were Product Lifecycle Management (PLM) systems. PLM systems are used across business areas – including engineering, product design, manufacturing, supply chain management, and quality control – to manage product data, streamline processes, collaborate effectively, and ensure compliance with regulations, thereby improving productivity, reducing costs, and enhancing product quality and innovation. A PLM system is a software-based approach to managing the entire lifecycle of a product, and the Bill of Materials (BOM) is the key component that captures full material disclosure information, such as the composition and environmental impact of the materials used in a product.
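For readers who haven't worked with one, a minimal sketch of the BOM idea in Python (the field names are illustrative, not any particular PLM vendor's schema) shows why substance-level disclosure questions become answerable once the data is structured this way:

```python
from dataclasses import dataclass

@dataclass
class Substance:
    name: str        # e.g. "lead"
    ppm: float       # concentration in parts per million

@dataclass
class Part:
    part_number: str
    country_of_origin: str
    substances: list[Substance]

@dataclass
class BillOfMaterials:
    product: str
    parts: list[Part]

    def exceeds(self, substance: str, limit_ppm: float) -> bool:
        """A RoHS-style check: does any part exceed the limit for a substance?"""
        return any(s.ppm > limit_ppm
                   for p in self.parts
                   for s in p.substances
                   if s.name == substance)

# RoHS limits lead to 1,000 ppm (0.1% by weight) in homogeneous materials.
bom = BillOfMaterials("widget", [Part("PN-001", "DE", [Substance("lead", 40.0)])])
print(bom.exceeds("lead", 1000.0))  # False
```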
Similarly, a Model Lifecycle Management (MLM) system is a software-based approach to managing the entire lifecycle of an AI/ML model, and the model catalog is its key component: an inventory of models that captures information about their development, training data, performance metrics, and ethical considerations. MLM systems are used across business areas – including data science, software engineering, product management, and ethical and legal compliance teams – to manage model data, streamline development and deployment processes, collaborate effectively, and ensure responsible and transparent use of AI/ML, thereby improving productivity, reducing costs, and enhancing model performance. The model catalog, a robust centralized repository of the information, versions, and documents related to each AI/ML model, will likely emerge as the cornerstone of MLM and the go-to tool for driving companies’ AI compliance efforts and enabling responsible, transparent AI/ML practices.
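To make the PLM-to-MLM parallel concrete, here is a hypothetical model catalog entry sketched in the same style as the BOM above. None of these field names are drawn from an actual product, and real catalogs carry far richer schemas:

```python
from dataclasses import dataclass

@dataclass
class DatasetRef:
    name: str
    license: str                  # e.g. "CC-BY-4.0", "proprietary", "unknown"
    contains_copyrighted: bool

@dataclass
class ModelCatalogEntry:
    """The 'BOM' of a model: what it was built from and how it behaves."""
    model_name: str
    version: str
    upstream_models: list[str]            # the model's digital "supply chain"
    training_data: list[DatasetRef]
    metrics: dict[str, float]             # e.g. {"accuracy": 0.93}
    ethical_notes: str = ""

    def disclosure_required(self) -> bool:
        """An EU AI Act-style question: was copyrighted content used in training?"""
        return any(d.contains_copyrighted for d in self.training_data)
```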
So what’s your organization’s plan for when the shoe does drop on AI regulations, along with potential fines, penalties, and maybe jail time for violations?
With environmental and social regulations in manufacturing, companies responded by deploying PLM systems with robust BOM management capabilities to serve as their systems of record supporting supply chain transformation, product stewardship, regulatory compliance, and social responsibility programs. With these systems in place, companies found that the transformations necessary to ensure regulatory compliance had ancillary business benefits (lower costs, increased efficiency, reduced risks), and early adopters that put those systems in place sooner enjoyed first-mover advantages and a competitive edge over laggard peers.
In the same way, with AI regulations and their coming requirements, companies can prepare today by deploying MLM systems with robust model catalog capabilities as their de facto system of record, ensuring both regulatory compliance and responsible, accountable use of machine learning models. We can anticipate that the ancillary benefits of model catalogs - accelerated model pipelines, more efficient data science, and increased risk mitigation - will likewise accrue to the early adopters that put them in place before their laggard peers.
The following are some of the key elements that would be managed in a model catalog and that would support your regulatory compliance obligations:

- Model versions and lineage, including the upstream models a given model is fine-tuned from or chained with
- Training data sources, along with their licensing and copyright status
- Performance metrics and validation results
- Development documentation and related artifacts
- Ethical, legal, and responsible-use considerations
By managing all these elements in a centralized model catalog, organizations can ensure that they have a complete, up-to-date view of all their AI/ML models. This can help ensure that models are used in a responsible and effective manner, while also making it easier to track model performance, identify issues, and make improvements over time.
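Reusing the hypothetical ModelCatalogEntry sketched above, a compliance question across the whole catalog reduces to a simple query:

```python
# With entries in one central catalog, a regulator's question becomes
# a query rather than an email thread.
def models_needing_disclosure(catalog: list[ModelCatalogEntry]) -> list[str]:
    """Which models must disclose copyrighted training content?"""
    return [f"{m.model_name}:{m.version}"
            for m in catalog
            if m.disclosure_required()]
```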
At Verta, we were the first (and, so far, only) company to pioneer model catalog technology, with the recent introduction of Verta Model Catalog. It’s exciting to see the market waking up to this vision. Our fascinating world of AI and technology is evolving around us, and I look forward to studying and writing about AI regulations as that evolution continues.