Banking, Commerce and Insurance

AI safety regulations considered

The Banking, Commerce and Insurance Committee heard testimony Feb. 9 on two bills aimed at regulating companies that provide artificial intelligence services.

LB1083, introduced by Whitman Sen. Tanya Storer, would require “large frontier” AI developers and large chatbot providers to create and publicly post plans describing how they assess and attempt to reduce “catastrophic” risks to the public and to children specifically.

Sen. Tanya Storer

A large frontier developer is defined as an AI developer that, together with its affiliates, had collective annual revenue of $500 million or more in the preceding calendar year.

Catastrophic risk is defined as a risk that would contribute materially to the serious injury or death of more than 50 people or cause more than $1 billion in damage or property loss arising from a single incident involving a frontier developer.

A covered chatbot service is one that is likely to be accessed by minors and has at least one million active users monthly.

Finally, the bill would require that certain AI safety incidents be reported to the attorney general, authorize the attorney general to update key definitions beginning Jan. 1, 2027, allow enforcement through civil penalties and prohibit retaliation against employees who provide “good faith” warnings about potential risks.

Storer said the measure takes a “light approach” to AI regulation and would not stifle startups or innovation because it applies only to the largest developers. She said the bill also would comply with President Trump’s recent executive order regarding AI regulation, which specifically allows states to set parameters around the technology as it relates to minors.

AI presents particular risks to children, Storer said, noting several recent cases in which teens took their own lives with the encouragement of chatbots. Parents need to know that the state is addressing these risks, she said, and companies that already are doing the right thing should not have difficulty complying with the bill.

“Nebraska has an opportunity to lead, not with heavy-handed regulation, but with transparency,” Storer said. “We owe it to families in our state and especially to children, to know that companies deploying the most powerful AI systems in history are being honest with us about the risks.”

Andrew Doris of the Secure AI Project testified in support of the proposal. AI developers need room to innovate and improve safety practices, he said, and most legislators lack the expertise to write highly technical regulations specific to the industry.

“We think that the smart way to balance these two truths is to allow AI developers to write their own safety standards, but require them to be transparent about what they are, so we can hold them to their own promises,” Doris said.

Bebe Strnad of the Nebraska Attorney General’s Office also testified in favor, saying the bill strikes the right balance between the interests of industry and consumers. LB1083 leaves all technical decisions up to developers and experts, she said, but gives the state the tools to hold companies accountable if they fall short of their own standards.

“We’ve heard many stories about AI products encouraging alarming conduct and even inducing tragic outcomes,” Strnad said. “As a state, we can’t ignore these stories and do nothing.”

The committee also considered LB1185, sponsored by Sen. Eliot Bostar of Lincoln, which would adopt the Conversational Artificial Intelligence Safety Act.

Sen. Eliot Bostar

Bostar said minors can easily become confused about whether they are in conversation with a chatbot or an actual human being, leading to exposure to adult content or emotional reliance on technology that was not created to act in their best interests.

“Conversational AI tools are increasingly designed to simulate human conversation in ways that can feel personal, emotional and real,” Bostar said. “For minors, those design features can create real risks.”

LB1185 would require disclosure when a user reasonably could believe that they are interacting with a human being and would impose additional safeguards for minor account holders, including:
• recurring AI disclaimers;
• limits on engagement-based rewards; and
• deployment of reasonable measures to prevent sexually explicit or sexualizing content and to prevent the system from presenting itself as human or fostering emotional or romantic dependence.

The bill also would require a protocol to respond to prompts involving suicidal ideation or self-harm that includes referral to crisis services, and would prohibit a service from claiming to be designed to provide professional mental or behavioral health care.

The attorney general would be empowered to enforce the bill’s provisions through civil action.

Mary Pipher, a clinical psychologist, supported the measure. She said social media has exposed a generation of young people to an array of dangers and that children seeking mental health guidance from chatbots is particularly concerning.

“When children use chatbots as therapists, they’re likely to be in a great deal of trouble,” Pipher said.

Also speaking in support of LB1185 was Emily Allen, executive director of Tech Nebraska, a statewide industry association under the umbrella of the Nebraska Chamber of Commerce and Industry.

The bill “strikes a workable balance” between safety and industry flexibility, she said, by not creating a private right of action and not making AI developers automatically liable for how third parties use their products.

“We view this bill as a constructive starting point for smart regulation — policy that protects people while still allowing innovation to move forward,” Allen said.

No one testified in opposition to either bill, and the committee took no immediate action on the proposals.
