Yves here. On the one hand, it is gratifying to see that some curbs are being put on AI, even if in a piecemeal fashion by states. Restrictions on AI use in medicine, for instance, will tend to have broader impact since most providers are national and will find it too costly to run a hodge-podge of processes to cater to different state rules.
Keep in mind that letters and calls to state representatives and senators do make a difference. Naked Capitalism reader letters in California played a big role in getting a landmark private equity transparency law passed. So if you have any protections against AI you are keen to see implemented, some personal lobbying, better yet joined by a few people you know, can make a difference. 10 letters on one topic (particularly when NOT form letters) can make a difference.
However, even the best rules won’t have much impact without serious penalties. So advocates for AI controls should demand not just stiff punishments, but treble damages in the case of willful misconduct or deception, and make both the company/entity that employs the AI and the vendor jointly and severally liable.
In addition, even with Congress overwhelmingly nixing a law that would have overridden these initiatives, the Trump Administration is still trying to thwart them. Nevertheless, it makes sense to take as much ground as possible when the opportunity presents itself.
By Anjana Susarla, Professor of Information Systems, Michigan State University. Originally published at The Conversation
U.S. state legislatures are where the action is for placing guardrails around artificial intelligence technologies, given the lack of meaningful federal regulation. The resounding defeat in Congress of a proposed moratorium on state-level AI regulation means states are free to continue filling the gap.
Several states have already enacted legislation around the use of AI. All 50 states have introduced various AI-related legislation in 2025.
Four aspects of AI in particular stand out from a regulatory perspective: government use of AI, AI in health care, facial recognition and generative AI.
Government Use of AI
The oversight and responsible use of AI are especially critical in the public sector. Predictive AI – AI that performs statistical analysis to make forecasts – has transformed many governmental functions, from determining social services eligibility to making recommendations on criminal justice sentencing and parole.
But the widespread use of algorithmic decision-making could have major hidden costs. Potential algorithmic harms posed by AI systems used for government services include racial and gender biases.
Recognizing the potential for algorithmic harms, state legislatures have introduced bills focused on public sector use of AI, with emphasis on transparency, consumer protections and recognizing risks of AI deployment.
Several states have required AI developers to disclose risks posed by their systems. The Colorado Artificial Intelligence Act includes transparency and disclosure requirements for developers of AI systems involved in making consequential decisions, as well as for those who deploy them.
Montana’s new “Right to Compute” law sets requirements that AI developers adopt risk management frameworks – methods for addressing security and privacy in the development process – for AI systems involved in critical infrastructure. Some states have established bodies that provide oversight and regulatory authority, such as those specified in New York’s SB 8755 bill.
AI in Health Care
In the first half of 2025, 34 states introduced over 250 AI-related health bills. The bills generally fall into four categories: disclosure requirements, consumer protection, insurers’ use of AI and clinicians’ use of AI.
Bills about transparency define requirements for information that AI system developers and organizations that deploy the systems disclose.
Consumer protection bills aim to keep AI systems from unfairly discriminating against some people, and ensure that users of the systems have a way to contest decisions made using the technology.
Bills covering insurers provide oversight of the payers’ use of AI to make decisions about health care approvals and payments. And bills about clinical uses of AI regulate use of the technology in diagnosing and treating patients.
Facial Recognition and Surveillance
In the U.S., a long-standing legal doctrine that applies to privacy protection issues, including facial surveillance, is to protect individual autonomy against interference from the government. In this context, facial recognition technologies pose significant privacy challenges as well as risks from potential biases.
Facial recognition software, commonly used in predictive policing and national security, has exhibited biases against people of color and consequently is often considered a threat to civil liberties. A pathbreaking study by computer scientists Joy Buolamwini and Timnit Gebru found that facial recognition software poses significant challenges for Black people and other historically disadvantaged minorities. Facial recognition software was less likely to correctly identify darker faces.
Bias also creeps into the data used to train these algorithms, for example when the teams that guide the development of such facial recognition software lack diversity.
By the end of 2024, 15 U.S. states had enacted laws to limit the potential harms from facial recognition. Elements of these state-level regulations include requirements that vendors publish bias test reports and describe their data management practices, and that humans review the use of these technologies.
Generative AI and Foundation Models
The widespread use of generative AI has also prompted concerns from lawmakers in many states. Utah’s Artificial Intelligence Policy Act requires individuals and organizations to clearly disclose when they’re using generative AI systems to interact with someone when that person asks if AI is being used, though the legislature subsequently narrowed the scope to interactions that could involve dispensing advice or collecting sensitive information.
Last year, California passed AB 2013, a generative AI law that requires developers to post information on their websites about the data used to train their AI systems, including foundation models. Foundation models are any AI model that is trained on extremely large datasets and that can be adapted to a wide range of tasks without additional training.
AI developers have typically not been forthcoming about the training data they use. Such legislation could help copyright owners of content used in training AI overcome the lack of transparency.
Trying to Fill the Gap
In the absence of a comprehensive federal legislative framework, states have tried to address the gap by moving forward with their own legislative efforts. While such a patchwork of laws may complicate AI developers’ compliance efforts, I believe that states can provide important and needed oversight on privacy, civil rights and consumer protections.
Meanwhile, the Trump administration announced its AI Action Plan on July 23, 2025. The plan says “The Federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations.”
The move could hinder state efforts to regulate AI if states have to weigh regulations that might run afoul of the administration’s definition of burdensome against needed federal funding for AI.
Gov Pritzker Signs Legislation Prohibiting AI Therapy in Illinois
I remember many years ago when Microsoft was the villain. Microsoft Office was young, dominating, and putting other software vendors in the same space out of business. They integrated their suite of products so you could embed Excel-generated bar and pie charts into Word documents, and everyone at the product demonstration went crazy. Microsoft was going to take over the world and no one in tech could compete. They had to be stopped! Then the EU came along with their lawsuit, which amounted to nothing more than slowing Microsoft down.

I feel like the same thing is happening with AI: there is so much fear that the point of regulation is simply to slow it down or kill it. If AI is hype, then let it die of its own accord. If AI is valuable, then let it be like any other disruptive technology throughout the history of the world. I'm not completely ignorant of how AI (neural networks) works, and these ideas and the math behind them have been around for decades, abandoned, and dusted back off. Maybe it will be abandoned again, maybe a new generation of geniuses will make it work, or maybe quantum computing will be the leap forward in compute that makes processing even larger data volumes possible. To me, it seems dangerous for the US to over-regulate this space and potentially fall behind other countries when viewed from a national security standpoint.
Tech bros have gotten accustomed to moving fast and breaking things, primarily, imo, so they can operate in a space that has no regulations, where they can exploit that absence to do things not in the best interest of common society but rather to rape and pillage, then claim, of course, "smartiness," bravery, and patriotism!
The mantra of the tech world writ large is: we did it, it's done, get used to it. I say regulate Peter Thiel, Bezos, Musk, Gates, and the whole lot of useless eaters to prevent them from ruining the nation, but that horse has left the barn.
The point of AI regulation is to correct the asymmetric bargaining power that companies have accumulated and used to lobby for regulations that favor their business practices while stifling things that threaten their business models, like open source, right to repair, and adaptive engineering — i.e., to reverse the trend of corporations saying "Rules for thee, not for me."
As Cory Doctorow has alluded to in his theory of enshittification, if the Federal government did its proper job of enforcing anti-trust, privacy, accessibility, and copyright laws in a way that wasn't biased to benefit big corporations over the rest of the public, then states wouldn't need to add more laws specifically to try to claw back AI-related abuses of power.
And it's always "We'll fall behind if we regulate it." Fall behind in what? The ability of criminals to get away with identity theft? We can point to concrete harms that happen when companies adopt AI products without guardrails. We already suffer enough from cybercrime in the United States.
But we have to remove regulations, because somehow laws that limit the use of AI in places where PII could be leaked will diminish the ability of other companies to train image classifiers to identify airfields or violent encounters for national security purposes. It's always this abstract nonsense that enthusiasts peddle, when experts not just from machine learning but also from cybersecurity and other technology fields who aren't self-interested can see how juvenile the logic behind such an assertion is. Adding regulations to make sure those selling AI agents don't leak personal data, and can't market them as a substitute for proper police and intelligence work, enhances national security rather than diminishing it.
Anyone saying we should let AI loose should read up on the history of Thomas Midgley Jr. The current blasé approach to LLMs and similar technologies has a similar tone to his when he was peddling tetraethyl lead. I fear that widespread, unconstrained adoption of these technologies will have the same effect that leaded gasoline did on generations across continents.
Part of the problem is that everything is labeled AI these days when it is not. Most people can’t distinguish because the marketing literature is gobbledygook and has “AI” sprinkled throughout. Your two links were not examples of AI.
I'm familiar with the board game industry. Right now there is a raging debate over AI in board games, and it's heated. Companies are looking to cut costs as board game prices have escalated in recent years with manufacturing costs increasing in China. Board game prices are surpassing what the average consumer is willing to spend in this niche industry. Some companies are willing to blow up prices with super-deluxe versions of a game and make higher profits on lower sales volume. This is angering many consumers who can't afford the latest super-deluxe Kickstarter but desire the game. Others are cutting costs in an attempt to keep prices in check. To this end, companies are using AI to produce graphics for board games and cards, which has allowed them to lay off their in-house graphics teams. It's a valid use of AI, and AI does static image art very well. The talented people being let go are seeing their livelihoods slowly disappear. Should this be regulated to the point of removing the savings incentive because we don't know how to handle the disruption? Trust me, within the gaming industry people are calling for an all-out boycott, but then again these same people are also balking at the prices of games these days.
The first link was about enshittification, not an example of AI. It was to emphasize the point that if laws governing anti-trust and privacy were enforced, then competition between firms would actually lead to better results and let the welfare-producing iterations of AI flourish, instead of your naive assumption that deregulation is necessary and sufficient, when it has led to the actual worsening of many technological products, Microsoft's included. I framed my paragraph around that link to emphasize that. I am now doubting that you are a human who is actually comprehending and writing responses in this comments section, rather than just feeding things into and out of GPT or some other LLM.
As for the second link, and the subsequent paragraphs where I address your other assertion that "it seems dangerous for the US to over regulate this space and potentially fall behind other countries when viewed from a national security standpoint": image classifiers like YOLO (You Only Look Once) use convolutional neural nets, the thing you claimed you were "not completely ignorant" about. Likewise, any "AI agent" that uses either a recurrent neural network to process text, or feed-forward architectures like transformers that tokenize words before applying self-attention, which covers almost all LLMs, is still using the same "neural network" technology that's colloquially being called AI by both of us.
Even though you didn’t bother to consider or balance the damages AI agents and LLMs could have on identity theft and cybersecurity issues, I’ll address the example you brought up:
The first problem with using non-human generated art is that only human-generated media, or media that has substantial human input/transformation, can qualify for copyright protection. US courts have ruled that writing a prompt for DALL-E, Stable Diffusion, or any other generative image model does not qualify as substantial human transformation.
The second problem is that non-human generated art doesn't show the provenance and process of how that art was created. Human artists will do non-color concept sketches/stencils before moving to colored rough drafts and then final designs for illustrations. Those iterations are important because they show that even if a final design has some resemblance to other illustrations, it was not traced or derived from someone else's design. Text-to-image generators don't show any of that process, so they cannot adequately prove they did not violate a human author's reserved copyrights when generating an image.
If I were fired from a board game development and publishing house, didn't authorize derivative works from my original art contributions to a game, and then saw the publishing house make a sequel game that has the same art style as the first from a text-to-image generator trained on my original artwork, then I would absolutely try to sue them for damages, and otherwise lobby for regulation to prevent this sort of thing from happening. Doubly so if I were to end up working for someone else and that original company then tried to sue me for making "art that looks too similar to our own" in a board game that used non-human generated art.
While I am not an artist, I am a healthy adult with a fully developed anterior insular cortex, so I can empathize with and understand the motivations of those artists. You weigh the disruption of computers generating artwork as good (because it makes the treats cheaper!) and something artists just have to get used to, yet you don't bother to weigh the disruption of manufacturing and supply chains becoming more expensive as good (because of rising Chinese wages and correctly pricing risk, both of which were underpriced before) and something publishers and consumers just have to get used to without hurting artists. That tells me that whatever wrote this is a clanker that lacks an anterior insular cortex, or is acting like a child whose cortex hasn't fully developed.
Based on the recent revelations about Microsoft's aiding of war crimes in Gaza, they are very much still the villain.
Paleo-liberalism and paleo-(American) conservativism both do not like the aggregation of power. Techno-collectivism-feudalism (aka spun as “AI”) is that aggregation of power.
A Martian would think that this would maybe be one topic where states can find mutually discrete, transactional alliances. Not holding my breath given the toxic effect of culture wars.
The Tucson city council voted unanimously Wednesday against bringing the massive and water-devouring Project Blue data center — tied to tech giant Amazon — into city limits.
https://azluminaria.org/2025/08/06/tucson-city-council-rejects-project-blue-amid-intense-community-pressure/