By JESSE BEDAYN, SUSAN HAIGH, TRÂN NGUYỄN and BECKY BOHRER (Associated Press/Report for America)
DENVER (AP) — Artificial intelligence is helping decide which Americans get the job interview, the apartment, even medical care, but the first major proposals to rein in bias in AI decision making are facing headwinds from every direction.
Lawmakers working on these bills, in states including Colorado, Connecticut and Texas, are coming together Thursday to argue the case for their proposals as civil rights-oriented groups and the industry play tug-of-war with core components of the legislation.
Organizations including labor unions and consumer advocacy groups want more transparency from companies and greater legal recourse for citizens to sue over AI discrimination. The industry is offering tentative support but resisting those accountability measures.
The bipartisan lawmakers caught in the middle — including those from Alaska, Georgia and Virginia — have been working on AI legislation together in the face of federal inaction. The goal of the press conference is to highlight their work across states and stakeholders, reinforcing the importance of collaboration and compromise in this first step in regulation.
The lawmakers include Connecticut’s Democratic state Sen. James Maroney, Colorado’s Democratic Senate Majority Leader Robert Rodriguez and Alaska’s Republican Sen. Shelley Hughes.
“At this point, we don’t have confidence in the federal government to pass anything quickly. And we do see there is a need for regulation,” said Maroney. “It’s important that industry advocates, government and academia work together to get the best possible regulations and legislation.”
The lawmakers argue the bills are a first step that can be built on going forward.
While over 400 AI-related bills are being debated this year in statehouses nationwide, most target one industry or just a piece of the technology — such as deepfakes used in elections or to make pornographic images.
The biggest bills this team of lawmakers has put forward offer a broad framework for oversight, particularly around one of the technology’s most perverse dilemmas: AI discrimination. Examples include an AI that failed to accurately assess Black medical patients and another that downgraded women’s resumes as it filtered job applications.
Still, up to 83% of employers use algorithms to help in hiring, according to estimates from the Equal Employment Opportunity Commission.
If nothing is done, there will almost always be bias in these AI systems, explained Suresh Venkatasubramanian, a Brown University computer and data science professor who’s teaching a class on mitigating bias in the design of these algorithms.
“You have to do something explicit to not be biased in the first place,” he said.
These proposals, mainly in Colorado and Connecticut, are complex, but the core thrust is that companies would be required to perform “impact assessments” for certain AI systems. Those reports would include descriptions of how AI figures into a decision, the data collected and an analysis of the risks of discrimination, along with an explanation of the company’s safeguards.
The main argument is about who can see those reports. Having more access to information about the AI systems, like the impact assessments, means more accountability and safety for the public. However, companies are concerned that it also increases the risk of lawsuits and the disclosure of trade secrets.
Under the laws proposed in Colorado, Connecticut and California, companies would not be required to routinely submit impact assessments to the government. Instead, it would largely be left to companies to disclose to the attorney general if they discovered discrimination — no government or independent organization would be testing these AI systems for bias.
Labor unions and academics worry that relying so heavily on companies' self-reporting jeopardizes the ability of the public or the government to catch AI discrimination before it has done harm.
Kjersten Forseth, who represents Colorado’s AFL-CIO, a federation of labor unions that opposes the state’s bill, said, “It’s already difficult when you have these big companies with billions of dollars. Essentially you are giving them an extra advantage to push down on a worker or consumer.”
Tech companies argue that more openness will reveal trade secrets in a highly competitive market. David Edmonson, of TechNet, a bipartisan network of technology CEOs and senior executives that lobbies on AI bills, said in a statement that the organization works with lawmakers to “make sure any legislation addresses AI’s risk while allowing innovation to flourish.”
The California Chamber of Commerce opposes that state’s bill, worried that impact assessments could become public in legal cases.
Another controversial part of the bills is who can sue under the legislation, which the proposals generally limit to state attorneys general and other public attorneys rather than citizens.
After a provision in California’s bill that allowed citizens to sue was removed, Workday, a finance and HR software company, supported the proposal. Workday argues that civil actions from citizens would leave the decisions to judges, many of whom are not tech experts, and could lead to inconsistent regulation.
“We can’t stop AI from becoming a normal part of our daily lives, so obviously government has to intervene at some point, but it also makes sense that the industry itself wants a good environment to succeed,” said Chandler Morse, vice president of public policy and corporate affairs at Workday.
Sorelle Friedler, a professor at Haverford College who focuses on AI bias, disagrees.
“That’s generally how American society asserts our rights, is by suing,” said Friedler.
Sen. Maroney of Connecticut said there’s been criticism in articles claiming he and Rep. Giovanni Capriglione, R-Texas, have been promoting bills written by the industry, despite the industry spending a lot of money to lobby against the legislation.
Maroney noted that one industry group, the Consumer Technology Association, has taken out ads and created a website urging lawmakers to defeat the legislation.
“I believe that we are on the right path. We’ve worked together with people from industry, from academia, from civil society,” he said.
“Everyone wants to feel secure, and we’re making rules that will enable safe and dependable AI,” he said.
_____
Associated Press reporters Trân Nguyễn in Sacramento, California; Becky Bohrer in Juneau, Alaska; and Susan Haigh in Hartford, Connecticut, contributed to this report.
___
Bedayn is a corps member for the Associated Press/Report for America Statehouse News Initiative. Report for America is a nonprofit national service program that places journalists in local newsrooms to report on undercovered issues.