2026


Artificial Intelligence: The Smart Way Is to Stay Out of the Way

Executive Summary

As the United States celebrates its 250th anniversary, the genius of the Founding Fathers in protecting and promoting innovation should get special recognition. Article 1, Section 8 of the Constitution includes the only property right protected in America’s founding documents: “To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.”

James Madison wrote in Federalist 43 that the “utility” of the protection of intellectual property (IP) in the Constitution “will scarcely be questioned,” and that “the public good fully coincides” with protection of the individual rights of inventors. He also wrote that these protections should be provided exclusively by the federal government rather than the states.

These principles, which cover copyright, patents, and trademarks, have underpinned the technology and innovation that have made the United States the world’s largest and greatest economy. Every invention, from the lightbulb to the telephone to the assembly line, was made possible by the government’s protection of the rights to those products. The creativity of America’s artists, performers, and writers flourished for the same reasons. These protections became even more critical following the invention of the computer and the development of the internet, which led to the burgeoning prevalence of advanced computing and artificial intelligence (AI).

The potential benefits to society of innovation are often accompanied by fear of the disruptions and displacement that might occur. Electricity lit up homes, but it could also cause fires. Horseless carriages increased mobility but could also cause more accidents. The industrial revolution caused fears that mechanization would replace human labor, leading to widespread unemployment and economic instability.

It sounds familiar because many of those same concerns are expressed about artificial intelligence.

Fears of unknown consequences generated by new technology often convince lawmakers and regulators at all levels of government to attempt to control the technology. And the potential to imitate or steal someone’s creative work and present it as being produced by a human being rather than a machine is a new challenge that needs to be addressed. Since AI is being developed around the world, enacting the wrong laws and adopting burdensome regulations would stifle innovation, fail to protect original works, threaten U.S. global leadership, and open the door for other countries to take the lead on this critical technology.

This report will examine how AI impacts businesses, consumers, creators, and taxpayers; explore benefits and concerns over its use; and make recommendations as to how it should be regulated.

Artificial Intelligence: Machine Learning and Generative AI

AI includes Machine Learning (ML) and Generative AI, both of which use and analyze data collected by computer systems. ML learns and adapts without following explicit instructions, using algorithms and statistical models to find and learn from patterns in data. The information is stored in big data models, which gather and manage large amounts and types of data from a variety of sources, helping businesses better understand and improve their decisions, including predicting how customers will behave and detecting and preventing fraud in banking.
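The pattern-learning idea can be illustrated with a deliberately toy Python sketch: instead of a programmer hand-writing a fraud rule, the program derives one from labeled examples. The transaction amounts, labels, and function names below are invented for illustration and do not represent any bank’s actual system.

```python
from statistics import mean

# Toy labeled history: (transaction amount, flagged as fraud?)
history = [(12.0, False), (25.0, False), (18.0, False),
           (900.0, True), (1200.0, True), (40.0, False)]

# "Training": the model learns a pattern (the average amount per class)
# from the data rather than following hand-written rules.
legit_avg = mean(a for a, fraud in history if not fraud)
fraud_avg = mean(a for a, fraud in history if fraud)

def predict(amount):
    """Classify a new transaction by which learned average it is closer to."""
    return "fraud" if abs(amount - fraud_avg) < abs(amount - legit_avg) else "legit"

print(predict(15.0))    # close to the learned legitimate pattern -> "legit"
print(predict(1000.0))  # close to the learned fraud pattern -> "fraud"
```

Feeding the same program different historical data would produce a different classifier, which is the essence of learning from data patterns rather than explicit instructions.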

Amazon, Facebook, Google, Instagram, Wayfair, and X use ML to direct targeted marketing to customers and users to deliver the products and services they indicate they want to see through their searches. ML also enhances automated email and spam filtering, financial accuracy, social media optimization, healthcare advancement, mobile voice to text and predictive text, and predictive analytics to deliver improved insights to users.

Generative AI is a form of ML that creates new content like images and text based on large language model datasets. ChatGPT (the “GPT” stands for generative pre-trained transformer) was the first generative AI to process and generate text to create new content in response to user comments and questions. Google’s Gemini, Meta’s Llama, and Microsoft’s Copilot have the same capabilities. Generative AI models go beyond “making a prediction and identifying a pattern” to “identify relationships within traditional datasets that machine learning cannot.”

As the Government Accountability Office’s (GAO) Artificial Intelligence issue page states, “Artificial Intelligence (AI) has created excitement because of its potential. For instance, AI can help diagnose medical conditions, forecast natural disasters, and protect national security. However, there are also significant concerns, including intellectual property rights, built-in bias, and effects on humans and the environment.” Even with these concerns, the embrace of AI is increasing across all sectors, including the government. A July 29, 2025, GAO report noted that, “Across the 11 selected agencies GAO reviewed with artificial intelligence (AI) inventories, the total number of reported AI use cases nearly doubled from 571 in 2023 to 1,110 in 2024. At the same time, generative AI use cases increased about nine-fold, from 32 to 282.”

With the increased use of AI, and in particular generative AI, computing power and the need to accommodate these advances have increased exponentially. Abundance Institute Head of AI Policy Neil Chilson noted during the April 1, 2025, House Oversight and Government Reform Subcommittee on Economic Growth, Energy Policy, and Regulatory Affairs hearing, America’s AI Moonshot: The Economics of AI, Data Centers, and Power Consumption, that AI is part of the “current wave of advanced computing technologies,” which requires “the government to tackle some important challenges facing both AI innovators and the energy infrastructure providers on which they rely.”

Setting Standards for Using Artificial Intelligence

Like other technologies, AI is not bound by global borders, and deriving the greatest benefits from its capabilities requires international cooperation on standard setting. The United States must take the lead, as it has done for other technologies such as wireless communications.

A March 3, 2025, University of Miami School of Law International and Comparative Law Review article noted that countries across the globe are investing billions in AI development, research, infrastructure, and regulatory frameworks. Although the United States has maintained its leadership, China has been investing heavily, as “Companies like DeepSeek have developed AI models that nearly match the capabilities of American competitors despite using inferior chips.”

The U.S. must strategically place itself at the forefront of setting and implementing global standards for AI. This requires companies and governments to come together in a consortium like the Institute of Electrical and Electronics Engineers, whose origins date back to 1884. Congress and the executive branch must thoughtfully work together to allow AI to develop, while at the same time imposing strategic national guardrails so that businesses and consumers can benefit from its capabilities and be protected from potential harm.

The benefits of AI stretch across every industry and impact every American. For example, generative AI’s ability to recreate someone’s voice for new uses allowed former Rep. Jennifer Wexton (D-Va.), who lost her ability to speak due to progressive supranuclear palsy, to speak again in her own voice. Other healthcare applications for AI include “drug discovery, virtual clinical consultation, disease diagnosis, prognosis, medical management and health monitoring capabilities.”

For healthcare workers, AI transcription software approved by the Food and Drug Administration (FDA) can streamline office visits and improve patient care by allowing a practitioner to spend more time focused on the patient instead of hand-transcribing notes into the patient’s chart. The FDA has compiled a list of approved AI medical devices that assist doctors in the fields of cardiology, gastroenterology, neurology, ophthalmology, orthopedics, and radiology. These medical devices will help improve patient care and outcomes.

AI’s benefits can also extend to the environment. A Belgian company, BeeOdiversity, is studying bee nectar from flowers and other plants, using AI to compile and analyze the data to determine harmful factors that might be causing the bee population to dwindle, including invasive species, heavy metals, and pesticides. An October 19, 2024, CBS News report demonstrated how this data is helping an Oregon farmer better manage his 400-acre farm. Among the changes the farmer made based on the feedback from the data collected from his hives were increasing wetland areas, which created more biodiversity in native plants, and adopting safer herbicide application processes.[1]

AI’s Impact on Intellectual Property Rights

Among the more visible and controversial uses of AI is its ability to replicate an individual’s voice and image. This can not only be misleading and confusing but also pose risks to IP rights, especially with generative AI, which can be used to mimic an individual’s voice or image without their express permission.

Consumers of music or videos expect that what they are listening to or watching, unless they are explicitly informed otherwise, is that person’s voice and image.  The potential to be misled is enhanced by the duplication of voices and songs or statements that are falsely attributed to an individual. 

These issues have arisen not only in music but also in politics and other industries. In 2024, AI voice cloning technology was used in New Hampshire to create fake robocalls from “President Biden,” and in Maryland to create a racist statement falsely attributed to a school principal. A person’s voice and image are unique to them, but as AI programs develop, the frequency of fake performances and other recordings will increase.

An online search on “AI replication of music” features many YouTube videos offering AI programs that can replicate the music of both living and deceased performers. These unauthorized recordings not only hurt consumers by misleading them into purchasing what they believe to be genuine recordings but also hurt the performers, who receive no compensation for the unauthorized use of their voice or image.

Such use also has obvious potential for economic and societal harm. While some individuals may accept another’s use of their likeness after it has been duplicated, the decision to maintain control over one’s thoughts, words, and appearance, and to be compensated for that work, should rest with the individual, not with an AI developer who could be located anywhere in the world.

The protection of these rights underlies the bipartisan Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act, S. 1367 and H.R. 2794. According to the lead co-sponsors of S. 1367, the NO FAKES Act would help prevent the “use of non-consensual digital replications in these kinds of audiovisual works, images, or sound recordings” by holding companies liable for unauthorized use; holding platforms liable for unauthorized hosting; excluding some digital replications from the bill based on First Amendment Rights; and preempting state laws intended to address the same issues.[2]

On May 21, 2025, the Senate Judiciary Subcommittee on Privacy, Technology, and the Law held a hearing entitled, The Good, The Bad, and The Ugly: AI Generated Deepfakes in 2025, which focused on the need to address the harms created by deepfakes, including by enacting the NO FAKES Act. On July 16, 2025, the Senate Judiciary Subcommittee on Crime and Counterterrorism held a hearing entitled, Too Big to Prosecute?: Examining the AI Industry’s Mass Ingestion of Copyrighted Works for AI Training, where committee members explored the use of copyrighted material to train AI models. Protecting IP rights while allowing new innovations to evolve is a delicate balancing act that continues in Congress and across the country.

Using AI to Save Taxpayer Dollars

AI is being used by government agencies at every level to streamline applications, consolidate resources, increase efficiency, and save money. On September 8, 2023, the Internal Revenue Service (IRS) announced it would be using AI as part of its efforts to help compliance teams “better detect tax cheating, identify emerging compliance threats and improve case selection tools to avoid burdening taxpayers with needless ‘no-change’ audits.” In October 2024, the Department of the Treasury announced that it had used AI to detect fraud, helping to prevent or recover more than $4 billion in fraud and improper payments in fiscal year (FY) 2024, more than a six-fold increase over the $652.7 million in FY 2023. The savings included $2.5 billion from identifying and preventing high-risk transactions; $1 billion recovered from Treasury check-fraud schemes; and $500 million in prevented losses via enhanced “risk-based” screening.

Successful use of AI to realize its benefits in fraud detection requires a skilled workforce, as noted by GAO Chief Scientist Dr. Thomas Sterling in his January 13, 2026, testimony before the House Oversight and Government Reform Subcommittee on Government Operations regarding fraud and improper payments. He cited GAO’s estimated losses due to fraud as between $233 billion and $521 billion annually, and revealed that since FY 2003, the “cumulative improper payment estimates reported by executive branch agencies have totaled about $2.8 trillion.” Dr. Sterling stated that, “Artificial intelligence (AI) and data analytics have the potential to enhance efforts to combat fraud and improper payments … However, agencies need solid, reliable data and a human in the loop to ensure data reliability and appropriate application of the technology.”
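The risk-based screening with a human in the loop described above can be sketched in a deliberately simplified Python example: outliers are flagged for human review rather than automatically rejected. The payment amounts and the two-standard-deviation threshold are invented for illustration; production systems rely on far more sophisticated models.

```python
from statistics import mean, stdev

# Toy payment amounts; in practice agencies screen millions of records.
payments = [820.0, 790.0, 805.0, 815.0, 798.0, 9600.0, 810.0]

avg, sd = mean(payments), stdev(payments)

# Risk-based screening: payments far from the historical pattern are
# queued for HUMAN review, not auto-rejected -- the "human in the loop."
review_queue = [p for p in payments if abs(p - avg) > 2 * sd]

print(review_queue)  # only the anomalous $9,600 payment is flagged
```

The design choice matters: the model narrows millions of records to a short review queue, while a trained analyst makes the final call, which is exactly why a workforce skilled in AI and data analytics remains essential.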

While AI can help reduce fraud and improper payments, a skilled workforce with knowledge of AI and data analytics to help train the AI models is essential for those efforts to be successful.

Powering AI and Future Innovations

AI chips are more power-hungry than prior generations of chips and require specialized cooling systems supported by sophisticated infrastructure. How to build data centers that can accommodate these energy needs continues to be debated at every level of government.

The International Energy Agency estimates that data center electricity demand will double by 2030, with AI as the most significant factor in that growth. By that date, data centers in the U.S. will consume more electricity than is used “for the production of aluminum, steel, cement, chemicals and all other energy-intensive goods combined.”

On March 4, 2026, President Donald J. Trump was scheduled to meet with major technology companies like Amazon, Google, Meta, Microsoft, OpenAI, Oracle, and xAI to sign a “Ratepayer Protection Pledge” to supply AI data center power at their own expense. President Trump referred to this commitment during his February 24, 2026, State of the Union address.

The meeting follows public statements by several technology companies that they would act to address concerns over energy costs related to data centers. For example, on January 13, 2026, Microsoft Vice Chair and President Brad Smith announced the company’s initiative to put communities first in AI development by mitigating the power and water usage that data centers require in local communities. The pillars of this new initiative address the needs of local communities with respect to electricity, water, jobs, taxes, and skills. According to Smith, the company’s approach to building new data centers will be to “stand up and step up as an industry and ensure that we pay the tab for things like the cost of electricity data centers will need.”

On January 26, 2026, OpenAI reiterated its commitment to similar objectives by stating, “we commit to paying our own way on energy, so that our operations don’t increase your electricity prices.” The process includes paying for grid upgrades and working with local utilities and state regulators proactively to address conditions that would require changes in consumption during peak energy conditions.

Amazon committed to cover the costs of energy and infrastructure for its $12 billion data center project in Louisiana. The company’s February 23, 2026, announcement of the plan included an arrangement with Southwestern Electric Power Company to cover the cost of energy upgrades and infrastructure, the use of “only verified surplus water” that is not otherwise needed by the community, and spending up to $400 million for improved water infrastructure along with $250,000 for a local community fund that will improve STEM education and support other local programs.

A February 2, 2026, Charles River Associates report on retail residential energy costs found that rate increases over the prior 10 years were consistent with inflation, and that pressure on national average costs was caused by localized factors in a few regions, like increased operating costs, changes in markets, and government policies. Prices in 34 states were lower than the national average, and in some states, much lower.

The analysis revealed “that, for the most part, retail rates have not been driven up by the emergence of data centers as large consumers of energy. Few hyperscale data centers have started operations, making it unlikely that they could have contributed to rate increases, and the markets where rates have increased are not those where most data centers are being built.” The report added that utilities are expected to use “emerging best practices in regulation and ratemaking” to protect consumers from any rate increases that may be caused by new data centers, which, in conjunction with the standards and practices being developed by Amazon, Microsoft, and other companies, will shield ratepayers from the costs of new construction.

The need to maintain U.S. global leadership on AI development and the scope of the benefits to society necessitate that any issues related to data centers or other energy needs be resolved in a manner that does not impede progress. Companies have expressed their willingness to accommodate consumer and community concerns and utilities are also on board with ensuring that AI energy needs can be met with minimal impact on rates.

How States Can Leverage AI to Achieve Economic Benefits

While some states are adopting policies that promote AI, others are creating roadblocks by implementing burdensome regulations.

During the 2025 state legislative sessions, more than 1,000 bills were introduced, and 38 states adopted 100 measures to regulate AI. California adopted the first comprehensive bill, SB 53, the Transparency in Frontier Artificial Intelligence Act, which was signed by Governor Gavin Newsom (D) on September 29, 2025. In New York, S6952B/A6453B, the Responsible AI Safety and Education Act, was signed into law on December 19, 2025. It requires AI companies to report their safety protocols and to report safety incidents within 72 hours after they occur. In her statement when signing the bill, New York Governor Kathy Hochul (D) stated, “This law builds on California’s recently adopted framework, creating a unified benchmark among the country’s leading tech states as the federal government lags behind, failing to implement commonsense regulations that protect the public.”

It would be a mistake for states to regulate AI without considering the national impact those regulations would impose. States can harness the movement toward AI by partnering with the companies developing it to ensure that safety concerns are addressed, and by not impeding the power capacity and data center infrastructure necessary to support its increased usage.

States that lead the way on AI development will reduce barriers to buildout and allow local communities and businesses to determine how to minimize the impact on residents.

In Louisiana, Governor Jeff Landry (R) cited the legislature’s role in approving a tax reform bill that is intended to attract new investment when he announced a new $10 billion Meta AI data facility, the firm’s largest center in the world, that would provide thousands of new jobs and promote economic growth in northeastern Louisiana. In addition, Hut 8, an energy infrastructure company, announced plans to build a $10 billion AI data center in southeast Louisiana.

Regulate or Innovate

Former President Joe Biden released his administration’s AI plan on October 30, 2023, in Executive Order (EO) 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” It provided eight principles, directed more than 50 federal entities to take 100 specific actions to regulate the AI industry, and suggested that the U.S. be “a global leader” rather than “the global leader.”

President Trump is instead encouraging rather than discouraging the development and use of AI and attempting to ensure that the U.S. continues to lead the world. On January 21, 2025, the first day of his second term in office, President Trump announced a joint venture among MGX, OpenAI, Oracle, and SoftBank to invest up to $500 billion in AI infrastructure and development, with $100 billion expected to be invested in the first year alone. Known as Stargate, the venture will fund data centers, research and development, and the electricity-generation facilities required for developing AI technology.

Two days later, on January 23, 2025, President Trump issued EO 14179, titled “Removing Barriers to American Leadership in Artificial Intelligence,” which revoked certain existing AI policies to clear “a path for the United States to act decisively to retain global leadership in artificial intelligence.” In his February 11, 2025, remarks at the Paris Artificial Intelligence Action Summit, Vice President J.D. Vance elaborated on the EO by emphasizing the need to keep government from getting in the way of AI development, stating that, “excessive regulation of the … AI sector could kill a transformative industry just as it’s taking off,” and “we’re developing an AI Action Plan that avoids an overly precautionary regulatory regime while ensuring that all Americans benefit from the technology and its transformative potential.” He stressed the need for an “open regulatory environment” and the need for “good energy policy” that is not overly focused on regulation.

The administration’s AI plans include a roadmap for national strategy by the Cybersecurity and Infrastructure Security Agency to use “AI to enhance cybersecurity capabilities, ensure AI systems are protected from cyber-based threats, and deter malicious use of AI capabilities to threaten the critical infrastructure Americans rely on every day.” The Department of Health and Human Services (HHS) has created an AI strategy that is “intended to encourage AI adoption; enable HHS-wide familiarity, comfort, and fluency with artificial intelligence (AI) technology and its potential; promote AI scaling with the application of best practices and lessons learned from piloting and implementing AI capabilities to additional domains and use cases across HHS; and spark AI acceleration by increasing the speed at which HHS adopts and scales AI and ML.”[3]

This “hands off” approach to AI parallels President Bill Clinton’s 1997 “Framework for Global Electronic Commerce.” Its five principles included having the private sector lead in development and avoiding undue restrictions by governments by providing predictable and minimal regulations.

President Trump reiterated his administration’s light touch approach to AI in his December 11, 2025, EO 14365, “Ensuring a National Framework for Artificial Intelligence.” It stated that, “AI companies must be free to innovate without cumbersome regulation. But excessive State regulation thwarts this imperative.” The varied state laws make compliance difficult, especially for start-ups. The EO called for Congress to enact a “minimally burdensome national standard – not 50 discordant state ones. … That framework should also ensure that children are protected, censorship is prevented, copyrights are respected, and communities are safeguarded.  A carefully crafted national framework can ensure that the United States wins the AI race, as we must.”

While President Trump has called for a national framework and the preemption of state laws, Congress has not agreed to implement that concept. During consideration of the One Big Beautiful Bill Act, a moratorium on state AI laws that was included in the House version of the bill was removed in a 99-1 Senate floor vote. The moratorium was raised again by President Trump as Congress was considering the FY 2026 National Defense Authorization Act, but there was no language about the issue in either the House or Senate versions of the bill, and it was not included in the conference report.

On September 10, 2025, the Senate Commerce, Science, and Transportation Subcommittee on Science, Manufacturing, and Competitiveness held a hearing entitled, AI’ve Got a Plan: America’s AI Action Plan, where Assistant to the President for Science and Technology Michael J.K. Kratsios spoke about President Trump’s AI Action Plan and the president’s commitment to making sure the U.S. is the global leader in AI. In his remarks, he noted that “The administration can only promote and protect America’s position as the global AI standard-setter with the Legislative Branch’s support.” This support includes accelerating federal permitting for data center infrastructure; ensuring that the U.S. remains a global leader by having the best frontier models; and defining what AI export packages might look like. As Subcommittee Chairman Ted Budd (R-N.C.) noted in his remarks, “I am also excited about what the future holds with the acceleration of AI adoption. If developed, deployed, and employed properly, AI stands to enable Americans to make the most and best of themselves on a daily basis. We must ensure that our AI policy is anchored in maximizing economic opportunity for Americans.”

On September 18, 2025, the House Judiciary Subcommittee on Courts, Intellectual Property, Artificial Intelligence, and the Internet held a hearing entitled, AI at a Crossroads: A National Strategy or Californication? This hearing discussed the future of AI in the U.S. and examined how preempting state regulations over AI could help reverse “misguided regulatory actions in the states and restore clarity and predictability.”

The subcommittee held several hearings on AI in the 118th Congress, including Artificial Intelligence and Intellectual Property: Part III – IP Protection for AI-Assisted Inventions and Creative Works. The hearing was intended to determine whether works or inventions that were generated with AI assistance should be granted IP protection and what changes, if any, should be made in current law.

In addition to committee hearings, bipartisan groups in the House and Senate released reports on AI policy. In May 2024, the Bipartisan Senate AI Working Group released its report, “Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate.” The report was based on nine bipartisan AI Insight Forums held in the fall of 2023, which addressed elections, innovation, IP, privacy, national security, risks, the workforce, and transparency, among other issues.

The Bipartisan House Task Force on AI in the 118th Congress released its final report in December 2024. It covered agriculture, education and workforce, financial services, healthcare, IP, preemption, privacy, national security, and small business, among other areas that are impacted by AI.

In their summary letter to House leadership, Task Force Co-Chairs Jay Obernolte (R-Calif.) and Ted Lieu (D-Calif.) wrote that, “this report encapsulates a targeted approach that balances the need to promote vibrant AI innovation while safeguarding Americans from potential harms as we enter an era of widespread adoption of AI.” The report established seven principles for future congressional examination of AI policy: “Identify AI Issue Novelty, Promote AI Innovation, Protect Against AI Risks and Harms, Empower Government with AI, Affirm the use of a Sectoral Regulatory Structure, Take an Incremental Approach, Keep Humans at the Center of AI Policy.” While the two reports addressed comprehensive AI benefits and concerns, the four specific areas in common were IP, privacy, national security, and the workforce.

The bipartisan approach to AI policy is not shared by all members of Congress. On December 17, 2025, Sen. Edward Markey (D-Mass.) introduced S. 3557, the States’ Right to Regulate AI Act, which would prohibit any federal funding from being used to implement, administer, or enforce the president’s executive order on the National Policy Framework for Artificial Intelligence.

Conclusion

AI is a rapidly evolving and critical technology. The U.S. must maintain global leadership to protect both economic and national security. The technology will enhance productivity and create efficiencies, enabling companies and their employees to broaden their efforts in new ways. AI will increase economic growth both in industries and communities where data centers and manufacturing plants that develop AI technology and chips are built.

The growth in AI must be accompanied by addressing concerns about data centers and the impact on IP rights. Many companies have committed to mitigating any impact on energy costs, and the White House is asking them to formalize this commitment by signing the “Ratepayer Protection Pledge.” Congress has held hearings on AI both to determine its impact on the economy and to examine how to provide IP protections for creative works.

Heavy-handed regulation and impediments to innovation at any level of government will stifle AI technology and give an advantage to other countries that would threaten U.S. global technology leadership. To ensure a smooth glidepath for innovation, Congress and the executive branch should be united to provide a light touch regulatory environment for AI and take the necessary steps to keep the U.S. ahead of the rest of the world.

Notes

[1] U.S. Constitution, Article 1, Section 8, National Archives, https://www.archives.gov/founding-docs/constitution-transcript.
[2] James Madison, The Federalist Number 43, January 23, 1788, https://founders.archives.gov/documents/Madison/01-10-02-0248.
[3] Sara Brown, “Machine learning and generative AI: What are they good for in 2025?” June 2, 2025, MIT Sloan School of Management, https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-and-generative-ai-what-are-they-good-for.
[4] “Real-World Examples of Machine Learning (ML),” Tableau from Salesforce, https://www.tableau.com/learn/articles/machine-learning-examples .
[5] Ibid.
[6] Sara Brown, MIT Sloan School of Management.
[7] Government Accountability Office (GAO), “Artificial Intelligence: Issue Summary,” https://www.gao.gov/artificial-intelligence .
[8] GAO, “Artificial Intelligence: Generative AI Use and Management at Federal Agencies,” GAO-25-107653, July 29, 2025, https://www.gao.gov/products/gao-25-107653.
[9] Neil Chilson, “Building the Launchpad for an AI Moonshot,” Testimony Before the House Committee on Oversight and Government Reform Subcommittee on Economic Growth, Energy Policy, and Regulatory Affairs, April 1, 2025, https://oversight.house.gov/wp-content/uploads/2025/04/Chilson-Written-Testimony.pdf.
[10] Mariana Salazar, “The International Race for AI Dominance,” International and Comparative Law Review, University of Miami School of Law, March 3, 2025, https://international-and-comparative-law-review.law.miami.edu/the-international-race-for-ai-dominance/.
[11] Institute of Electrical and Electronics Engineers, “IEEE History,” https://www.ieee.org/about/ieee-history.html.
[12] Danya Gainor and Haley Talbot, “Wexton makes history as first member to use AI voice on House floor,” CNN, July 25, 2024, https://www.cnn.com/2024/07/25/politics/jennifer-wexton-ai-voice-house-floor/index.html .
[13] Junaid Bajwa, Usman Munir, Aditya Nori, Bryan Williams, “Artificial intelligence in healthcare: transforming the practice of medicine,” Future Healthcare Journal , July 8, 2021, https://www.sciencedirect.com/science/article/pii/S2514664524005277?via%3Dihub .
[14] Food and Drug Administration, “ Artificial Intelligence – Enabled Medical Devices, ” https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-enabled-medical-devices .
[15] BeeOdiversity, YouTube, September 16, 2022, https://www.youtube.com/watch?v=YIohP7JGCqk&t=1s.
[16] CBS News, “How honeybees can be used to collect environmental data,” October 19, 2024, https://www.cbsnews.com/video/how-honeybees-can-be-used-to-collect-environmental-data/.
[17] Nick Breen and Josh Love, “Attack of the Clones: AI Soundalike Tools Spin Complex Web of Legal Questions for Music (Guest Column),” Billboard Pro, May 19, 2023, https://www.billboard.com/pro/ai-music-tools-copy-artists-voices-legal-questions/.
[18] Holly Ramer, “Political consultant behind fake Biden robocalls says he was trying to highlight a need for AI rules,” Associated Press (AP), February 26, 2024, https://apnews.com/article/ai-robocall-biden-new-hampshire-primary-2024-f94aa2d7f835ccc3cc254a90cd481a99; Ben Finley, “Athletic director used AI to frame principal with racist remarks in fake audio clip, police say,” AP, April 25, 2024, https://apnews.com/article/ai-artificial-intelligence-principal-audio-maryland-baltimore-county-pikesville-853ed171369bcbb888eb54f55195cb9c.
[19] Mohar Chatterjee, “A New Kind of AI Copy Can Fully Replicate Famous People. The Law is Powerless,” Politico, December 30, 2023, https://www.politico.com/news/magazine/2023/12/30/ai-psychologist-chatbot-00132682.
[20] Jarrid Outlaw, “Can I Be Protected Against Myself? Artificial Intelligence and Voice Replication,” University of Richmond School of Law, Journal of Law & Technology, May 17, 2024, https://jolt.richmond.edu/2024/05/17/can-i-be-protected-against-myself-artificial-intelligence-and-voice-replication/.
[21] Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act, S. 1367, 119th Congress (2025), https://www.congress.gov/bill/119th-congress/senate-bill/1367/text; Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act, H.R. 2794, 119th Congress (2025), https://www.congress.gov/bill/119th-congress/house-bill/2794/text/ih.
[22] Sens. Chris Coons (D-Del.), Marsha Blackburn (R-Tenn.), Amy Klobuchar (D-Minn.), and Thom Tillis (R-N.C.), “Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act Bill Summary,” https://www.coons.senate.gov/imo/media/doc/no_fakes_act_one-pager.pdf.
[23] Senate Judiciary Committee, “S.Hrg. 119-171, The Good, The Bad, and The Ugly: AI-Generated Deepfakes in 2025,” May 21, 2025, https://www.congress.gov/event/119th-congress/senate-event/LC74823/text.
[24] Senate Judiciary Subcommittee on Crime and Counterterrorism, “Too Big to Prosecute?: Examining the AI Industry’s Mass Ingestion of Copyrighted Works for AI Training,” July 16, 2025, https://www.judiciary.senate.gov/committee-activity/hearings/too-big-to-prosecute-examining-the-ai-industrys-mass-ingestion-of-copyrighted-works-for-ai-training.
[25] Internal Revenue Service (IRS), “IRS announces sweeping effort to restore fairness to tax system with Inflation Reduction Act funding; new compliance efforts focused on increasing scrutiny on high-income, partnerships, corporations and promoters abusing tax rules on the books,” September 8, 2023, https://www.irs.gov/newsroom/irs-announces-sweeping-effort-to-restore-fairness-to-tax-system-with-inflation-reduction-act-funding-new-compliance-efforts.
[26] Department of the Treasury, “Treasury Announces Enhanced Fraud Detection Processes, Including Machine Learning AI, Prevented and Recovered Over $4 Billion in Fiscal 2024,” October 17, 2024, https://home.treasury.gov/news/press-releases/jy2650.
[27] Rob Wile, “Treasury Department now using AI to save taxpayer billions,” NBC News, October 17, 2024, https://www.nbcnews.com/business/consumer/how-ai-artificial-intelligence-fights-taxpayer-fraud-treasury-dept-rcna175916.
[28] Dr. Sterling Thomas, Chief Scientist, GAO, “Fraud and Improper Payments: Data Quality and a Skilled Workforce Are Essential for Realizing Artificial Intelligence’s Benefits,” House Oversight and Government Reform Subcommittee on Government Operations, GAO-26-108850, January 13, 2026, https://www.gao.gov/assets/gao-26-108850.pdf.
[29] Ibid.
[30] Ibid.
[31] International Energy Agency, “Energy and AI,” April 2025, https://iea.blob.core.windows.net/assets/de9dea13-b07d-42c5-a398-d1b3ae17d866/EnergyandAI.pdf.
[32] Emma Colton, “Scoop: Trump brings Big Tech to White House to curb power costs amid AI boom,” Fox News, February 25, 2026, https://www.foxnews.com/politics/scoop-trump-brings-big-tech-white-house-curb-power-costs-amid-ai-boom.
[33] Brad Smith, “Building Community-First AI Infrastructure,” Microsoft, January 13, 2026, https://blogs.microsoft.com/on-the-issues/2026/01/13/community-first-ai-infrastructure/.
[34] Heather Clancy, “Microsoft’s plan to counter community backlash over AI data centers,” Trellis Group, January 14, 2026, https://trellis.net/article/microsofts-plan-to-woo-communities-skeptical-about-ai-data-centers/.
[35] OpenAI, “Stargate Community,” January 26, 2026, https://openai.com/index/stargate-community/.
[36] Lisa Stiffler, “Water, power, and transparency: Amazon’s $12B data center deal signals a new era of accountability,” GeekWire, February 23, 2026, https://www.geekwire.com/2026/water-power-and-transparency-amazons-12b-data-center-deal-signals-a-new-era-of-accountability/.
[37] Charles River Associates, “Retail rate trends in the US,” February 2, 2026, https://media.crai.com/wp-content/uploads/2026/02/02092628/Retail-rate-trends-in-the-US.pdf.
[38] Ibid.
[39] Ibid.
[40] “Governor Newsom signs SB 53, advancing California’s world leading artificial intelligence industry,” State of California, Office of the Governor, September 29, 2025, https://www.gov.ca.gov/2025/09/29/governor-newsom-signs-sb-53-advancing-californias-world-leading-artificial-intelligence-industry/.
[41] “Governor Hochul Signs Nation-Leading Legislation to Require AI Frameworks for AI Frontier Models,” Office of the Governor, State of New York, December 19, 2025, https://www.governor.ny.gov/news/governor-hochul-signs-nation-leading-legislation-require-ai-frameworks-ai-frontier-models.
[42] Office of the Governor, “Landry Announces Meta Selects North Louisiana as Site of $10 Billion Artificial Intelligence Optimized Data Center,” December 4, 2024, https://gov.louisiana.gov/news/4697.
[43] Phoebe James, “Hut 8 Selects Entergy, Southeast Louisiana for $10 billion artificial intelligence data center,” Entergy, December 17, 2025, https://www.entergy.com/news/hut-8-selects-entergy-southeast-louisiana-for-10-billion-artificial-intelligence-data-center.
[44] The White House, “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” October 30, 2023, https://bidenwhitehouse.archives.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.
[45] Clare Duffy, “Trump announces $500 billion AI infrastructure investment in the US,” CNN, January 21, 2025, https://www.cnn.com/2025/01/21/tech/openai-oracle-softbank-trump-ai-investment/index.html.
[46] The White House, “Removing Barriers to American Leadership in Artificial Intelligence,” Executive Order 14179, January 23, 2025, https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/.
[47] Vice President J.D. Vance, “Remarks By the Vice President at the Artificial Intelligence Action Summit, Paris, France,” February 11, 2025, https://www.presidency.ucsb.edu/documents/remarks-the-vice-president-the-artificial-intelligence-action-summit-paris-france.
[48] Ibid.
[49] Cybersecurity & Infrastructure Security Agency, “Roadmap for AI,” October 1, 2024, https://www.cisa.gov/resources-tools/resources/roadmap-ai.
[50] Department of Health and Human Services, “HHS Artificial Intelligence (AI) Strategy,” June 6, 2024, https://www.hhs.gov/programs/topic-sites/ai/strategy/index.html.
[51] Adam Thierer, “Fifteen Years On, President Clinton’s 5 Principles For The Internet Remain The Perfect Paradigm,” Forbes, February 12, 2012, https://www.forbes.com/sites/adamthierer/2012/02/12/15-years-on-president-clintons-5-principles-for-internet-policy-remain-the-perfect-paradigm/?sh=5c0680507170.
[52] The White House, “Ensuring a National Policy for Artificial Intelligence,” December 11, 2025, https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/.
[53] Ibid.
[54] Ibid.
[55] Owen Tedford, “Federal Preemption of AI Regulation Proposals Sparks Controversy,” Forbes, November 24, 2025, https://www.forbes.com/sites/owentedford/2025/11/24/federal-preemption-of-state-ai-regulation-proposals-sparks-controversy/.
[56] Senate Commerce, Science, and Transportation Committee, Hearing Report, S.Hrg. 119-284, “AI’ve Got a Plan: America’s AI Action Plan,” September 10, 2025, https://www.congress.gov/event/119th-congress/senate-event/LC75188/text.
[57] Ibid.
[58] House Judiciary Subcommittee on Courts, Intellectual Property, Artificial Intelligence, and the Internet, “AI at a Crossroads: A National Strategy or Californication?,” September 18, 2025, https://judiciary.house.gov/committee-activity/hearings/ai-crossroads-nationwide-strategy-or-californication.
[59] House Judiciary Subcommittee on Courts, Intellectual Property, Artificial Intelligence, and the Internet, “Artificial Intelligence and Intellectual Property: Part III – IP Protection for AI-Assisted Inventions and Creative Works,” April 10, 2024, https://judiciary.house.gov/committee-activity/hearings/artificial-intelligence-and-intellectual-property-part-iii-ip.
[60] Bipartisan Senate AI Working Group, “Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate,” May 2024, https://www.schumer.senate.gov/imo/media/doc/Roadmap_Electronic1.32pm.pdf.
[61] Bipartisan House Task Force on Artificial Intelligence, “Bipartisan House Task Force on Artificial Intelligence Final Report,” December 2024, https://www.speaker.gov/wp-content/uploads/2024/12/AI-Task-Force-Report-FINAL.pdf.
[62] Ibid.
[63] States’ Right to Regulate AI Act, S. 3557, 119th Congress (2025), https://www.congress.gov/bill/119th-congress/senate-bill/3557.