Written by: Haim Ravia, Dotan Hammer
Virginia was set to become the third U.S. state to enact AI legislation, after Utah and Colorado, with its High-Risk Artificial Intelligence Developer and Deployer Bill. However, the Governor of Virginia vetoed the bill, and the Virginia Senate and House of Delegates may now attempt to override the veto, though the two-thirds supermajority required in each chamber makes that outcome unlikely.
The bill followed similar legislative efforts around the world and adopted the risk-based model pioneered by the EU AI Act, setting operating standards for developers and deployers of AI systems deemed “high-risk”. High-risk AI includes any AI system intended to autonomously make “consequential decisions”, such as those affecting parole, education enrollment and opportunities, financial services, access to healthcare or employment, housing, insurance, marital status, or legal services.
Under the vetoed bill, both developers and deployers would have been required to use “reasonable care” to protect consumers from harm, particularly algorithmic discrimination, and to ensure transparency by providing documentation, marking AI-generated content, and disclosing to consumers that they are interacting with an AI system, along with additional information such as the risk mitigation measures taken.
The bill also required deployers to implement risk management policies and programs, complete AI impact assessments, and give consumers an opportunity to appeal consequential decisions made by AI. Adhering to an established AI risk management framework, such as the NIST AI Risk Management Framework, would create a rebuttable presumption that a developer or deployer met the required standard of reasonable care.
The bill tasked Virginia’s Attorney General with enforcement, including issuing civil investigative demands, bringing civil actions against violators, and seeking civil penalties of up to $10,000.
Click here to read the full text of the vetoed bill.