Using AI is no excuse for lawbreaking.
That was the message of a joint statement this week from four US government agencies affirming their commitment to enforcing laws against discrimination and bias in automated decision-making systems.
The agencies involved are the Civil Rights Division of the U.S. Department of Justice, the Consumer Financial Protection Bureau (CFPB), the Federal Trade Commission (FTC), and the U.S. Equal Employment Opportunity Commission (EEOC).
Their joint statement notes that automated systems, including those that use machine learning algorithms, have the potential to facilitate fair and efficient decision-making in areas like housing, credit, and employment. But such systems also have the potential to perpetuate existing biases and discrimination in those and other areas, in ways that affect consumers and their finances.
The agencies affirmed their commitment to enforcing existing laws that prohibit discrimination in lending, employment, and other areas, and to ensuring that automated systems are used in compliance with these laws.
For example, current CFPB guidelines confirm that federal consumer financial laws and adverse action requirements apply regardless of the technology used to make decisions that affect consumers’ finances. As the statement puts it, “The fact that the technology used to make a credit decision is too complex, opaque, or new is not a defense for violating these laws.”
The agencies’ statement outlines several potential sources of discrimination in automated systems, including:

- Data and datasets: systems trained on unrepresentative or imbalanced data, or on data that incorporates historical bias, can produce skewed outcomes
- Model opacity and access: “black box” systems whose internal workings are not transparent make it difficult to know whether they are fair
- Design and use: systems built on flawed assumptions about their users, relevant context, or underlying practices can discriminate in operation
“We already see how AI tools can turbocharge fraud and automate discrimination, and we won’t hesitate to use the full scope of our legal authorities to protect Americans from these threats,” said FTC Chair Lina M. Khan in a separate statement. “Technological advances can deliver critical innovation — but claims of innovation must not be cover for lawbreaking. There is no AI exemption to the laws on the books, and the FTC will vigorously enforce the law to combat unfair or deceptive practices or unfair methods of competition.”
A study by Verta Insights, the research group within Verta, revealed that 89% of the more than 300 AI/ML executives and practitioners who participated believe that AI regulations will increase over the next three years. In addition, 77% believe that AI regulations will be strictly enforced, a finding reinforced by this week’s statement from the government agencies. (Register for the on-demand webcast discussing the research results to receive a copy of the research report upon its release in May.)
Executives and stakeholders in the machine learning lifecycle should consider taking steps now to ensure that they meet not only current regulations but also the requirements of proposed laws like the American Data Privacy and Protection Act (ADPPA), the Algorithmic Accountability Act, and the EU AI Act, as well as the various state- and local-level laws coming into force.
Contact Verta to arrange a discussion of further steps your organization can take to meet AI regulatory requirements and a complimentary consultative assessment of your readiness for AI regulatory compliance.