By: James Lynch – nationalreview.com
Shared concerns about the welfare of children and workers are bringing sharply divided lawmakers together.
Republicans and Democrats who can’t agree on much of anything these days — including how to keep the basic functions of the federal government up and running — seem to be finding common ground on how to approach an industry that’s poised to dominate the American economy: artificial intelligence.
Citing an imperative to protect American workers and children from the predations of artificial intelligence, a bipartisan coalition is emerging to establish safeguards around the technology.
The AI safety movement is fast becoming a powerful force on the state and national levels with advocates spanning the entire political spectrum. Its growing political strength corresponds with polling data showing an overwhelming majority of Americans want policymakers to prioritize safety, even as AI boosters warn that any guardrails will inevitably slow innovation.
“A political realignment is happening right before our eyes. Poll after poll shows that Americans across the political spectrum overwhelmingly support regulation of AI companies. A bipartisan coalition is coalescing around the urgent need to rein in Big Tech’s unchecked power and ensure that AI is developed in a way that benefits our kids and our communities,” said Michael Kleinman, head of U.S. policy at the Future of Life Institute, a nonpartisan think tank specializing in AI safety.
Earlier this year, during the legislative process for the GOP’s tax and budget megabill, Republican Senator Marsha Blackburn (Tenn.) led the fight to kill a measure that would have prevented states from regulating AI in the immediate future. In a resounding 99-1 vote, the Senate defeated Senator Ted Cruz’s (R., Texas) AI regulatory moratorium after intense pushback from conservative and liberal groups.
“Artificial intelligence is driving a new frontier of innovation, but it’s also creating urgent risks for Americans from AI-generated child sexual abuse material to creators having their voices and likenesses replicated without their consent,” Blackburn said in a statement.
Blackburn has proposed bipartisan legislation, the NO FAKES Act, to prevent AI from using the voice and likeness of any individual without authorization. Alongside Senators Thom Tillis (R., N.C.), Chris Coons (D., Del.), and Amy Klobuchar (D., Minn.), she first introduced the bill last year and reintroduced it earlier this year.
The support Blackburn has received from Republicans, who have been willing to weather criticism from Silicon Valley, is reflective of the AI skepticism emerging within the MAGA movement.
“The MAGA base has emerged as a cornerstone of this movement, voicing concerns about AI’s risks to children and threats to our jobs, as well as distrust of Silicon Valley’s cultural values and related fears about AI-powered surveillance and social manipulation,” Kleinman added.
“This summer’s battle over federal preemption of state AI laws marked a turning point in this coalition’s emergence. When Big Tech lobbyists pushed for federal legislation that would have undermined commonsense state-level protections, they were taken by surprise by the broad-based pushback they received.”
The most vocal MAGA proponent of AI safeguards is Senator Josh Hawley (R., Mo.), who spoke about the possible dangers of AI earlier this year at the MAGA-aligned National Conservatism Conference. Hawley painted a gloomy picture about the threat AI could pose to human dignity and American jobs if it is channeled in the wrong direction.
Since then, Hawley has launched an investigation into Meta’s AI chatbot for sending creepy messages to children and led a hearing about the negative impacts of AI bots on children’s mental health. The hearing featured the parents of children who were driven to suicide and other destructive actions by AI chatbots.
Hawley and Senator Bill Cassidy (R., La.), chairman of the Senate Health, Education, Labor, and Pensions (HELP) Committee, subsequently wrote a letter to AI companies demanding more information about what they are doing to prevent the dangers addressed in the hearing.
OpenAI and Character.AI, two AI chatbot companies highlighted at the hearing, rolled out new chatbot guidelines to protect teenage mental health. But OpenAI CEO Sam Altman suggested Tuesday that the company would roll back most of those protections “in most cases” and expand into “erotica for verified adults.”
Meta also responded after its chatbot was caught sending “sensual” messages to children, announcing new restrictions on AI conversations with teenagers as part of its enhanced Instagram safeguards. But Meta and OpenAI have been at the forefront of opposing any kind of AI regulation on the state and national levels.
In addition, Hawley has introduced bipartisan legislation with Democratic counterparts to allow product-liability lawsuits over AI malfeasance and create an AI risk-evaluation program in the Department of Energy.
The bipartisan legislation reflects the AI safety movement’s position as a rare issue where liberals and conservatives put disagreements aside to address shared concerns.
“It’s really interesting how people doing this work really, truly believe that they’re trying to put aside political differences on other issues to put humanity first and put the good of the country above other important issues that do matter,” said Brendan Steinhauser, CEO of the nonpartisan Alliance for Secure AI and a former GOP communications strategist. “But this is one that sort of supersedes everything.”
Like Hawley, many social conservatives are worried about the impacts AI could have on children and the dignity of the human person.
“The main thing is that anything that seeks to be humanlike and seeks to offer you an alternative reality, or proposes to the user that . . . it’s superior at something fundamentally human, such as being a therapist or being a priest or being a friend. These things are an attack on what it means to be human beings and their social nature,” said Michael Toscano, director of the Family First Technology Initiative at the Institute for Family Studies, a socially conservative think tank.
“And so, I would think one of the main things that AI could do is refrain from seeking to hijack our social natures and instead leave those untouched.”
Those concerns are not unwarranted, as new data show 42 percent of high-schoolers say they or someone they know has used AI for companionship. Nearly one in five say they or someone they know has had a romantic relationship with AI.
Numerous polls have shown the American people are broadly aligned with lawmakers and policy experts who prioritize child safety over AI innovation.
The Institute for Family Studies conducted a poll last month and found that 91 percent of Americans believe tech companies should be prohibited from allowing AI chatbots to have sexual conversations with minors. Nearly all Americans, 90 percent, support granting families the ability to sue AI companies if their products contribute to the harm of a child. Likewise, 90 percent of Americans want Congress to prioritize protecting children over tech-industry growth, the poll found.
“We do a lot of polling on a lot of issues, and very rarely — I can’t recall actually a poll which came back that is so tilted in one direction,” Toscano said. “People think that Congress should be concentrating on protecting children from what they see as a significant threat to their well-being from new AI products, AI chatbots in particular.”
Another September poll from Gallup found that 80 percent of Americans prioritize AI safety even if it slows down innovation. The Americans who believe in prioritizing AI safety are distrustful of AI, with 66 percent saying they fully or somewhat distrust AI.
Similar results were found in a Future of Life Institute poll of Republicans taken in September. The poll shows 82 percent of Republicans support limits on what AI is allowed to do and 71 percent believe AI will make it more difficult for children to think for themselves. Republican voters think it is urgent to create standards for AI companies, with 70 percent saying it is extremely or very important to have some oversight.
Across the board, policymakers see the problem of chatbots and AI relationships as a chance at a do-over after the slow political response to social media platforms.
“I think AI is far more powerful and potentially more dangerous than even social media itself because of the way that it can manipulate and really take that addictive component to the next level,” Steinhauser said.
“Policymakers, especially on the state level, but certainly in Congress as well are aware of that, they’re tracking that, and they want to get it right this time. Because there’s a lot of data out there that shows social media has already done a lot of harm despite the good that it can do as a tool.”
On the left, many liberals now wish social media had been more heavily regulated out of the gate given the harms to teen mental health that resulted and the popularity of right-wing, populist content.
“I think the more senior members saw the cycle of social media, so they’re excited to do something around this, even though they don’t completely understand it,” said Sunny Gandhi, vice president of political affairs at Encode, a left-leaning AI policy organization.
“To really understand this, you have to look to social media. Especially in the wake of the Charlie Kirk assassination, Governor Cox of Utah went on to a press conference once they caught the guy and said, ‘Social media is a cancer on society, and it’s horrible.’ I think those kind of sentiments are very pervasive across the spectrum, and that’s what animates a lot of this.”
On this issue, Republican voters might find common ground with Democratic heavyweights, including California Governor Gavin Newsom (D).
Newsom recently signed legislation creating guardrails for advanced AI models after he brought together a group of AI experts to make recommendations on AI governance.
The legislation, SB53, establishes new transparency standards, protections for whistleblowers, reporting procedures for safety incidents, and a state-backed cloud computing research cluster.
Alongside SB53, Newsom signed other pieces of legislation to address issues such as AI chatbots, privacy, and deepfake content. Given California’s status as the home of the tech industry, Newsom’s legislation is a significant victory for the AI safety movement and demonstrates its prominence on the left.
Although AI safety is gaining bipartisan traction, the Trump administration’s AI Action Plan has little to say about it. The Trump White House’s plan mostly focuses on accelerating innovation and building AI infrastructure to secure American leadership in the industry.
“We’re dominating with AI, which seems to be the new big thing,” Trump said Wednesday. “That’s the new internet, that’s the new . . . whatever.”
Source: AI Threat: A Bipartisan Drive to Shield Kids from Artificial Intelligence | National Review