The 24-Hour AI News Cycle: A Snapshot, a Trend, and a Brief Look Ahead
By: John Pavolotsky
In a recent presentation at the Center of Innovation, Learning, and Society (U.C. Davis School of Law) on AI legal developments, I remarked to the audience that while I typically finalize my slides at least a few days before a presentation, in this instance I finalized them only a few hours before my talk. Why? The 24-hour AI news cycle. Talk of AI is everywhere. The AI ecosystem has quickly become vast and global, with interdependencies that few of us can truly fathom. Within that ecosystem, you might be a developer, a deployer, a consumer, an investor, or a provider of infrastructure, data center services, or the energy sources that support those services. Perspective matters greatly. So, what is the biggest risk of AI? The biggest benefit? To whom? The answers depend on where you sit. An individualized discussion is warranted in each instance, but a framework for thinking about and conceptualizing the benefits and risks to your organization will likely be helpful as well. Here are some aspects of that framework:
- Tracking AI Developments. Tracking each and every AI development is challenging, if not impossible. Trackers are helpful but typically do not shed light on what is important to your business or how your business may be affected. Turning off your news feed and checking back in six months is not an option either. A more targeted, selective approach is warranted: a monthly digest, tailored to your business, your industry, and your current and planned use of AI technologies, with pinpoint summaries where appropriate.
- State Legislative Developments. On the state legislative front, California remains a hotbed of AI bills: as of February 18, 2025, 19 had been introduced, up from 11 on February 5, 2025, the date of my presentation. An organization's interest in these bills will generally align with where it sits in the AI ecosystem. For example, AI model developers, data center operators, and investors should be interested in AB 222 (data centers: energy usage reporting and modeling), which would, among other things, require data center operators to estimate the total energy used to develop a covered model and to annually report energy consumption and performance data. AB 410 would update California's "Bot" law to prohibit the undisclosed use of bots on large online platforms. SB 53, ostensibly the successor to SB 1047 (vetoed by Governor Newsom last September), is still a placeholder but is possibly garnering the most attention. Currently, SB 53 provides: "It is the intent of the Legislature to enact legislation that would establish safeguards for the development of artificial intelligence (AI) frontier models and that would build state capacity for the use of AI, that may include, but is not limited to, the findings of the Joint California Policy Working Group on AI Frontier Models, established by the Governor." The final report of that working group is expected by summer 2025. More broadly, AI legislation has been introduced in Alaska (SB 2), Florida (HB 369), Hawaii (HB 1384), and Maine (LD 109), among other states.
- Guidance from State Regulators. State regulators have also issued legal advisories on AI. The California Attorney General, for example, issued the California Attorney General's Legal Advisory on the Application of Existing California Laws to Artificial Intelligence. While more than a few AI bills were signed this past September (including AB 2013 (generative artificial intelligence: training data transparency)), the advisories posit that the current legal toolkit, including general consumer protection laws and the California Consumer Privacy Act, applies quite well to the AI space; just because a law does not have AI in its title does not mean it does not apply to those operating in this space. The Oregon Attorney General has taken a similar approach in What you should know about how Oregon's laws may affect your company's use of Artificial Intelligence.
- Federal Legislative and Regulatory Developments. At the federal level, thirteen AI bills have been introduced, up from eight as of February 5, 2025. All of the legislation introduced so far is piecemeal; there is nothing comprehensive yet akin to the EU AI Act. On that topic, the EU AI Act's first compliance deadline (February 2, 2025) prohibited unacceptable-risk AI systems, including cognitive behavioral manipulation, social scoring, and emotion recognition in the workplace, among others. The deadline seemingly came and went with little attention, as the world was instead transfixed by a reasoning model (R1) developed by a Chinese startup (DeepSeek), which reported training the model at only a fraction of the cost of competitive models. Not long after, some countries (Italy, Australia) and states (Texas, New York) moved to block, in whole or in part, the use of DeepSeek's technologies. Meanwhile, on January 20, 2025, Executive Order 14110 (Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence) was revoked. Three days later, on January 23, 2025, a new Executive Order, "Removing Barriers to American Leadership in Artificial Intelligence," was signed. It calls for developing an Artificial Intelligence Action Plan (within 180 days) and revising OMB Memoranda M-24-10 and M-24-18 (within 60 days) in accordance with the general guidance provided in the EO. As a practical matter, the not inconsiderable output from EO 14110 is expected to be reviewed to determine its suitability for use on a going-forward basis. To the extent comprehensive federal legislation, particularly on safety, does develop, it may face preemption challenges, although, in contrast to the data privacy space, where the preemption mountain has to date been insurmountable, no state as of yet has omnibus AI legislation in operation that would support a credible preemption argument. The FTC is under new leadership, with the elevation of Commissioner Andrew Ferguson to chair; do not expect any AI rulemaking. To date, in its AI enforcement actions, the FTC has focused on the deception prong of the FTC Act, e.g., claiming, without supporting data, that AI systems are unbiased.
- Across the Pond. The next compliance deadline set out in the EU AI Act is August 2, 2026, which addresses "high risk" use cases (critical infrastructure, education and vocational training, etc.). In December 2024, the European Data Protection Board published "Opinion 28/2024 on certain data protection aspects related to the processing of personal data in the context of AI models." The opinion responds to a query from the Irish supervisory authority and attempts to answer, for example, "When could AI models be considered anonymous and how could that be demonstrated?" Ultimately, by setting out myriad considerations, the opinion raises more questions than it answers and seems to provide little practical direction to data protection authorities and controllers alike.
- Focus on the Actionable. Headlines are important, but even more important are the discrete uses of AI within your organization: what your customers are asking you to warrant regarding the use of AI in creating and delivering products and services; your methodology for vetting AI training sets; the disclosures you make in connection with chatbots on your website; your internal policy for using AI tools to generate, test, and debug code; and so on.
In future articles, we will focus on some of these discrete tasks and activities, and we will continue to highlight new developments in this very fluid regulatory area.