A Breakdown of New AI Regulations in the US, the EU, and the UK 

A big week for AI regulation

This week, the Biden administration announced its approach to regulating artificial intelligence through an executive order—a landmark move. For readers outside the US, an executive order (EO) is a directive issued by the president that has the force of law. Executive orders don’t require the legislature’s ratification—which makes their use a bit controversial, though they are quite common in today’s gridlocked politics—but they are subject to the oversight of the courts. US civics lesson aside, the EO is lengthy: it gets into a number of technical details for individual federal agencies and sets out different timelines for specific actions.

In this piece, I’ll highlight the most interesting parts of the EO for librarians and information workers, and then take a look at competing frameworks and policies in the European Union and the United Kingdom.

The eight guiding principles and policies for AI 

The EO names eight guiding principles and policies for the development of AI, attempting to balance the risks of AI against its potential benefits. The full list: 

  1. AI must be safe and secure. 
  2. The US seeks to promote “responsible innovation, competition, and collaboration” in the AI marketplace and to maintain an economic landscape in which entrepreneurs and small businesses can compete. 
  3. Workers should have input into the development and application of AI in the workplace, including through collective bargaining. The EO notes that “AI should not be deployed in ways that undermine rights, worsen job quality, encourage undue worker surveillance, lessen market competition, introduce new health and safety risks, or cause harmful labor-force disruptions.” 
  4. AI should seek to overcome discrimination, bias, and inequities, especially in uses for hiring, housing, and health care. 
  5. Consumer protections should extend to cover AI technologies (e.g., by covering uses of AI to commit fraud). 
  6. Individuals’ privacy and personal information should be protected. 
  7. The federal government must take steps to protect its own employees from the threats of AI and provide proper training. 
  8. The federal government must lead globally on the ethical development and use of AI technology. 

AI guardrails in the EO 

The principles and policies are the “meat” of the EO for ordinary people. The EO goes on to call for the development of benchmarks for auditing AI capabilities, guidelines for AI developers, the availability of safe testing environments, and tools for preventing AI from disseminating information about weapons development and biological sequences (oh dear). It also seeks to limit a Skynet scenario by putting up guardrails against AI “self-replication.”

For librarians and other information workers, the good news is that the EO calls for the evaluation and development of tools for identifying materials created by AI. This includes techniques for “authenticating content and tracking its provenance,” watermarking AI-created content, detecting synthetic content, and preventing the creation of materials depicting the sexual abuse of children and non-consensual intimate imagery of real people. 
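
The EO doesn’t prescribe any particular detection technique, so to give a flavor of what “watermarking AI-created content” can mean in practice, here is a minimal sketch of one idea from the research literature: a statistical “green list” watermark, where the generator secretly biases its word choices toward a pseudorandom subset of the vocabulary, and a detector that knows the secret seed counts how often words land in that subset. Everything here (the seed, the function names, the sample text) is illustrative, not anything mandated by the EO.

```python
import hashlib

SECRET_SEED = "demo-seed"  # shared between generator and detector (an assumption of this sketch)

def is_green(prev_token: str, token: str) -> bool:
    """Deterministically assign roughly half the vocabulary to the 'green list',
    keyed on the previous token so the partition shifts at every step."""
    digest = hashlib.sha256(f"{SECRET_SEED}:{prev_token}:{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str]) -> float:
    """Fraction of adjacent-token pairs whose second token is 'green'."""
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

# Unwatermarked text should hover near 0.5 in expectation; text generated
# with a green-list bias should score noticeably higher.
sample = "the library catalog records provenance for every digital item".split()
print(f"green fraction: {green_fraction(sample):.2f}")
```

Real detectors turn that fraction into a z-score or p-value over much longer passages, and watermarks of this kind can be weakened by paraphrasing, which is part of why the EO calls for evaluating these tools rather than mandating any one of them.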

The EO also directs the Secretary of Education to “develop resources, policies, and guidance regarding AI.” It calls for special sensitivity to the disproportionate impacts of AI on vulnerable and marginalized communities. The Secretary of Education must also develop a “toolkit” for educators and create education-specific guardrails for AI. The timeline for this project is one year, so hopefully we’ll gain a clearer picture of what these developments will entail soon. 

A comparative view: AI regulation in the European Union 

The US has somewhat lagged behind the EU in setting AI policy. The European Parliament approved its draft of the AI Act back in June, though the final vote won’t happen until the end of the year. Though the US and the EU share many of the same concerns—privacy, transparency, nondiscrimination—they have very different approaches.

In short, the EU’s plan sorts AI technologies and uses into different risk levels and imposes a set of regulations calibrated to each level. For instance, uses like real-time biometric identification in public spaces or creating sexually explicit images of children are deemed “unacceptable” and banned outright. Something like an AI system used in an airplane, by contrast, is deemed only “high risk,” so it faces stern, but not insurmountable, regulations. This strategy avoids a blanket approach to regulation and tries to respond to the diverse applications of AI technology.
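
To make that tiered logic concrete, here is a toy sketch of how the Act’s structure maps uses to obligations. The tiers paraphrase public summaries of the draft Act; the example use cases and their assignments are my illustrative reading, not the legal text.

```python
from enum import Enum

class Risk(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict requirements and conformity assessment before deployment"
    LIMITED = "transparency obligations (e.g., disclose that AI is involved)"
    MINIMAL = "largely unregulated"

# Illustrative assignments only; the Act itself enumerates categories in far more detail.
RISK_BY_USE = {
    "real-time biometric identification in public spaces": Risk.UNACCEPTABLE,
    "AI safety component in an aircraft": Risk.HIGH,
    "customer-facing chatbot": Risk.LIMITED,
    "spam filter": Risk.MINIMAL,
}

def obligations(use_case: str) -> str:
    # Defaulting unknown uses to MINIMAL is an assumption of this sketch, not the Act's rule.
    tier = RISK_BY_USE.get(use_case, Risk.MINIMAL)
    return f"{use_case!r} -> {tier.name}: {tier.value}"

for use in RISK_BY_USE:
    print(obligations(use))
```

The design point is the lookup itself: regulation attaches to the use rather than to the underlying model, which is what lets the Act avoid a blanket approach.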

Like the EO in the US, the AI Act wants to add transparency requirements to generative AI so that its outputs are identified as the products of AI. The EU also calls for AI companies to disclose any copyrighted materials they train their models on. As we covered a few months ago, the copyright issue has already sparked lawsuits stateside.

A comparative view: AI nightmare in the United Kingdom 

If the EU’s approach comes off as very measured, it will seem all the more so in light of UK Prime Minister Rishi Sunak’s recent comments on AI. Sunak recently gave a speech on the risks of AI, in which he cited the nightmarish possibility of humans losing control entirely over an AI superintelligence. At the same time, he said that the UK would not “rush to regulate” AI. Not to editorialize, but: !?!?!?

Though the UK hasn’t passed any sort of AI regulation, it has tipped its hand slightly with some of its recent policy and white papers. Over the summer, the Department for Science, Innovation and Technology (DSIT) and the Office for Artificial Intelligence published a policy paper titled “AI regulation: a pro-innovation approach.” Though wary of the drawbacks of AI, the paper sees great economic opportunity in AI innovation. In the preface, the Secretary of State for DSIT echoes Sunak’s position of waiting to impose regulations on AI in order to let businesses innovate with it first.

In light of the UK’s modest, if somewhat gloomy, economic outlook, I suspect the current government sees a potential economic wellspring in stalling: by leaving AI temporarily unregulated, the UK could attract tech companies should initial regulations in the US and the EU scare them off. (The paper salivates over the possibility of tech industries moving to the UK.) The law firm Mayer Brown put out a great summary of the rest of the report, which goes into more detail on the proposed regulatory framework.

It seems the pro-innovation approach hasn’t persuaded all corners of the bureaucracy. This past week, the UK’s Department for Education put out a policy paper on generative AI in education. The information it contains is a fairly basic description of the pros and cons of AI in educational settings—nothing much of interest. The paper does warn against any moves to replace actual student learning with AI: “It is more important than ever that our education system ensures pupils acquire knowledge, expertise and intellectual capability.” The paper ends by cautioning against over-relying on any specific technological tool in education, offering a more skeptical view of AI.

——

Time will tell how these policies and frameworks will actually affect AI development and applications. I find it promising that governments across the West are beginning to take a hands-on approach to AI risk mitigation. At the same time, I worry that the economic possibilities of AI innovation are too alluring and will shift priorities away from the vital human concerns at the root of this technology. Hopefully, these frameworks at least set us in the right direction.

