A bipartisan bill to establish regulatory sandboxes for artificial intelligence (AI) experimentation in financial services took center stage at a Senate subcommittee hearing Wednesday (July 30), as lawmakers weighed how to balance AI-driven innovation with oversight.
Sen. Mike Rounds, R-S.D., who chairs the Senate Subcommittee on Securities, Insurance, and Investment, announced the reintroduction of the “Unleashing AI Innovation in Financial Services Act.”
The bill, co-sponsored with Sen. Martin Heinrich, D-N.M., would let financial institutions test AI-enabled products and services without immediate risk of enforcement action, as long as they meet transparency, consumer protection and national security requirements.
“By creating a safe space for experimentation, we can help firms innovate and regulators learn without applying outdated rules that don’t fit today’s technology,” Rounds said. The bill was originally introduced in 2024.
If enacted, S.4951 would direct financial regulators — including the Securities and Exchange Commission, Consumer Financial Protection Bureau and the Federal Reserve — to evaluate and potentially waive or modify existing rules for approved AI test projects. Agencies would have 90 days to approve or deny applications, with automatic approval if no decision is made by the deadline.
During the hearing, lawmakers from both parties said they wished to foster innovation while mitigating the risks of unregulated AI adoption.
Sen. Mark Warner, D-Va., recalled a prior hearing convened by Sen. Chuck Schumer, D-N.Y., that brought together the CEOs of top AI companies. “Remember Schumer asked, ‘How many of you all think AI needs to be regulated?’ Everybody raised their hand.”
But when it came down to brass tacks, “I worry that we’re frankly going almost completely in the opposite direction,” Warner said, pointing to President Trump’s AI action plan, which favored deregulation.
Warner said that if people could turn back time, “most of us would think that if in 2014 we’d put some guardrails on social media, at least [to protect] our kids’ mental health, we’d be in a better spot. We didn’t — and social media is tiny compared to the potential that AI has.”
Warner pointed to Delta Air Lines’ testing of AI systems that use an individual’s data to tailor airfare pricing, a practice he called “surveillance” pricing. Warner and two other senators have written a letter to Delta asking for additional information about the practice.
Read more: Delta Air Lines Tests AI-Powered Personalized Pricing
Some Insurers Decline to Cover AI Risks
Kevin Kalinich, global leader for intangible assets at Aon, said during the hearing that the insurance industry is beginning to respond to the risks posed by emerging AI capabilities, including hallucinations from generative models, deepfakes, and autonomous software agents.
However, Kalinich said that actuarial models lack sufficient historical data to accurately price these risks. As a result, some insurers are excluding AI-related exposures in professional liability and cyber policies.
Meanwhile, “a few cutting-edge insurance carriers have created AI-specific insurance protection, albeit with smaller limits than are sufficient for larger clients,” Kalinich said.
The Aon executive noted that underwriters are more likely to offer favorable terms when firms have strong AI governance practices, including documented model audits, explainability metrics and bias mitigation protocols. “Good governance leads to better insurability, which in turn supports innovation and consumer protection,” he said.
Tal Cohen, president of Nasdaq, described how AI is already improving market surveillance, reducing false positives and streamlining investigations. Last week, Nasdaq launched its agentic AI workforce for compliance and efficiency.
Nasdaq’s first two AI agents, the digital sanctions analyst and the digital enhanced due diligence analyst, were put to work on labor-intensive compliance tasks. The digital sanctions analyst, when integrated into a bank’s alert triage workflow, reduced the review workload by more than 80%.
Beyond efficiency, there is the matter of stability. Rounds asked Nasdaq’s Cohen what threats from adversarial nations could destabilize U.S. financial markets, noting that delays of even milliseconds in order execution can erode investor confidence.
Cohen said that Nasdaq’s chief information security officer not only uses the most advanced AI cybersecurity tools but also coordinates with industry peers on protection. “This is not a competitive element for us with other exchanges,” he said. “We share and we collaborate.”
But when pressed whether a formal multiagency task force exists to address AI risks across exchanges, Cohen replied, “We need one. We need to have that discussion.”
Moreover, the liability arising from these AI incidents would be “shared,” Cohen added.
David Cox, vice president for AI models at IBM Research, said an open-source approach in AI development is important in building trust.
“We strongly believe in the value of open source AI. It enhances security, trust and collaboration through transparency, enables smaller firms and research organizations to compete without prohibitive upfront capital costs, and it expands the pipeline of future talent,” Cox said.
Large language models (LLMs) must be auditable, particularly in regulated environments, he said.
“Firms must understand exactly what data underpins their models and be able to audit those systems over time,” Cox said, adding that few model developers disclose their training datasets, making compliance tougher.
Sen. Katie Britt, R-Ala., raised concerns about AI-powered impersonation scams, citing a 148% year-over-year increase in financial fraud driven by generative AI. She also asked Cohen about trading bots and the risks AI-based decision systems pose to market integrity. Cohen said any regulated firm would have the “proper controls” in place.
In the end, there was broad agreement at the hearing that the regulatory status quo is not sufficient. Warner noted that China was once seen as merely copying technology rather than innovating. “That’s changed,” he said. “China is not playing for second place in the race for AI.”
Added Britt: “This is the race that matters.”
Read more:
California Advances Bill Regulating AI Companions Amid Concerns Over Mental Health Issues
AI Regulations Bring Deluge of Lobbying Efforts to DC
Senate Shoots Down 10-Year Ban on State AI Regulations