One of the bills requires political candidates to disclose the use of AI-generated content in ads and communications.
The Wisconsin Assembly on Thursday passed two bills aimed at regulating the use of artificial intelligence (AI) in political advertisements and within state agencies ahead of the November elections.
“Under the bill, every audio communication paid for with a contribution or disbursement that contains synthetic media must include at both the beginning and the end of the communication the words ‘Contains content generated by AI’,” it stated.
The Republican lawmaker said the use of AI can be scary “especially when you think of the possibilities when it comes to misinformation and misleading people, especially in things as important as elections.”
“It used to be that we could trust what we see with our eyes and believe what we heard with our ears, but that’s no longer the case. With artificial intelligence, it’s getting harder and harder to know what is true,” he said.
Mr. Neylon called the bill “an important first step to provide clarity to voters that protects the integrity of our elections,” noting that it gives people “the tools to determine fact from fiction.”
The audit report will include an inventory of each AI tool being used, a summary of the written guidelines that govern the use of AI, and a summary of the policies and practices in place to evaluate any data collected with AI.
This bill also would give state agencies until 2030 to develop a plan for reducing their positions through the use of AI. By 2026, the agencies would have to report to legislators which positions AI could make more efficient and then report on their progress.
‘Surging Complaints’ Around Impersonation Fraud
States across the country have taken steps to regulate AI within the last two years. At least 25 states, Puerto Rico, and the District of Columbia introduced AI bills last year alone.
Legislatures in Texas, North Dakota, West Virginia, and Puerto Rico have created advisory bodies to study and monitor AI systems their state agencies are using. Louisiana formed a new security committee to study AI’s impact on state operations, procurement, and policy.
The proposed rule changes follow “surging complaints” around impersonation fraud and “public outcry” about the harms caused to consumers and to impersonated individuals, according to the FTC.
The FTC said it is seeking public input on whether the revised rule should include provisions prohibiting the use of AI platforms for impersonation.
“As scammers find new ways to defraud consumers, including through AI-generated deepfakes, this proposal will help the agency deter fraud and secure redress for harmed consumers,” it stated.
Concerns about deepfake technology have intensified, especially after a robocall impersonating President Joe Biden was used to discourage people from voting in New Hampshire’s primary election. The call was traced to a company in Texas.
The Associated Press contributed to this report.