Consultation outcome

Artificial intelligence call for views: copyright and related rights

Updated 23 March 2021

Introduction

Ever since it introduced the world’s first copyright act – the Statute of Anne in 1710 – the United Kingdom has been a copyright pioneer. Over the years, there have been many changes to copyright law to keep pace with developments in media and technology, from the printing press to the cassette recorder and the internet. Developments in artificial intelligence (AI) raise new questions about the purpose and scope of copyright protection.

Copyright is relevant to AI in a variety of ways. Machine learning systems learn from data which may be protected by copyright or other rights. For example, a system which generates music may be trained using multiple musical works, each protected by layers of copyright which may be infringed.

Some AI systems are capable of autonomously generating new works, which may themselves be protected under UK law. Some are capable of creating copies of existing protected works. When they do this without permission, copyright will be infringed.

The software code in which an AI system is written will also be protected by copyright. This protection enables the creators of AI software to be paid for their work and to control how others can use it.

In some of these areas there is uncertainty about how copyright law applies. We also need to consider how we should treat works created and used by AI systems and whether the current approach is right.

This section of the call for views examines these issues. We look at how the law treats the use, storage, creation, and infringement of copyright works by AI systems. We also look at the government’s policy objectives for AI and ask whether there are any opportunities to provide a better environment to develop and use AI.

Different rationales exist for copyright protection. In common law countries, such as the UK, copyright is seen primarily as an economic tool that incentivises and rewards creativity. Continental Europe has a tradition of “authors’ rights”. This frames copyright as a natural right of creators, protecting their works as expressions of their personalities. Copyright also helps to ensure that works are widely available to the public, enriching our culture and society.

Whether it is seen as an economic tool or a natural right, copyright is centred on human creativity. This is reflected in its scope and duration. Copyright protects original works that reflect the personality of their creators. Its scope is limited by human concerns like privacy, free speech, and access. And for many works its duration is calculated with reference to the author’s lifetime.

But copyright is also shaped by technology. It has developed in response to recorded music, film, radio, television, computers and the internet. Protection for computer-generated works was introduced in the 1980s. The scope of copyright is limited in specific ways to allow technological processes to work efficiently.

The government supports a copyright system which encourages creativity and investment in the creative industries and promotes economic growth. The IPO Strategy 2018 to 2021 reflects this. Sometimes there is a tension between these objectives, such as when businesses face obstacles to their use of copyright works. The government aims to find balanced outcomes which benefit owners and users of copyright works alike.

The government also wants to make the UK a global centre for AI and data-driven innovation. Its mission is to increase uptake of AI for the benefit of everyone in the UK. This includes ensuring AI technology works for the people and making sure the UK has the best environment for developing and using AI.

In this call for views we examine three issues relating to copyright and AI:

  • the use of copyright works and data by AI systems
  • whether copyright exists in works created by AI, and who it belongs to
  • copyright protection for AI software

The use of copyright works and data by AI systems

Machine learning systems learn from the data available to them, including copyright works like books, music and photographs. For example, the Next Rembrandt Project trained AI to develop a new painting in Rembrandt’s style, and used data from 346 of Rembrandt’s works to do so. Google’s AI Quick, Draw!, where AI attempts to identify doodles drawn by users, has been used more than 50 million times. The AI improves as it learns from each submitted doodle.
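
To make the point about copying concrete, the sketch below illustrates in Python how assembling a training set involves loading each work into memory. It is a minimal, illustrative sketch only: the folder name, image format and use of the Pillow library are assumptions made for the example, not details of any system mentioned above.

```python
# Illustrative sketch only: loading works into memory to build a training set.
# The folder name "training_images/" is a hypothetical placeholder.
from pathlib import Path

from PIL import Image  # Pillow library, assumed installed (pip install pillow)

training_set = []
for path in sorted(Path("training_images/").glob("*.jpg")):
    # Decoding each file reproduces the work in memory; if the work is in
    # copyright and unlicensed, making this copy may engage the owner's rights.
    with Image.open(path) as img:
        training_set.append(img.convert("RGB").resize((256, 256)))

print(f"{len(training_set)} works copied into memory for training")
```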

Each work used by an AI may be protected by copyright. This means that the copyright owner’s permission is needed to use the work unless a copyright exception applies. This permission may be granted using a licence, which will set out who can use the work, how and why.

It is possible to avoid infringing copyright by using licensed or out-of-copyright works. For example, an AI could be trained using the works of Shakespeare, which are no longer protected by copyright. But unless a work is licensed, out of copyright, or used under a specific exception, an AI will infringe by making copies of it.

The UK also provides a right in databases. This protects databases in which a substantial investment has been made in obtaining, verifying or presenting their contents. Extracting and reutilising a substantial amount of data from a protected database without permission may infringe the database right.

Some people argue that the risk of infringing copyright or other rights stops people making full use of AI. They say that AI should be able to use copyright material more easily, and that this could support increased adoption of AI technology. For example, the government’s 2017 independent AI review, “Growing the artificial intelligence industry in the UK”, recommended allowing text and data mining by AI through appropriate copyright laws.

Others argue that copyright and related rights are not an obstacle to the development of AI. They note that licensing models evolve with technology and there are increasing opportunities for AI developers to license copyright works. Rather than taking steps to limit copyright protection, they argue that the focus should be on ensuring that rightholders are remunerated when their works are used by AI.

Copyright is infringed when someone uses a substantial part of a copyright work without the copyright owner’s permission. Copies made inside a human brain do not infringe copyright. For example, a person may remember a song and sing it in their head, without infringing copyright in it. But they would infringe copyright if they wrote down the song or performed it in public without permission.

In contrast, copies made within an AI “brain” may infringe copyright. For example, an AI may store a copy of a song within its memory. Like a human, an AI may also infringe copyright by generating copies of the song externally, performing it, distributing it, or communicating it to the public.

When copyright is infringed, the copyright owner has the right to take action against an infringer. This means that when an AI infringes copyright, a person or legal entity must ultimately be legally responsible. The person liable is normally whoever has control over the infringing activity, can stop future infringement, and can compensate the copyright owner.

Were copyright infringed by an AI, the responsible person would be the one with control over the infringement. If the infringement occurs while the AI is being trained, the person with control would be the person training the AI. If the AI generates a work that infringes copyright, the person liable would be whoever made the arrangements that led the AI to infringe. This is likely to be the operator of the AI.

Copyright law allows copying in certain cases to enable technology to work more effectively. For example, it allows temporary copies to be made during processes such as web browsing and signal processing. As long as these copies have no independent economic significance and enable a lawful end use, they do not infringe copyright (s28A Copyright, Designs and Patents Act 1988 (as amended), the ‘CDPA’).

The law also allows copying during text and data mining (TDM) for non-commercial research. TDM is the analysis of large amounts of text and other information to reveal patterns and relationships, which can lead to new discoveries and innovation. For example, it can help researchers analyse clinical data and identify links between drugs and medical conditions (s29A CDPA).
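
As a purely illustrative example of the kind of analysis TDM involves, the sketch below counts co-occurrences of drug and condition terms across a folder of text abstracts. The folder name and term lists are hypothetical placeholders; real TDM pipelines are considerably more sophisticated, but the principle of copying text into memory in order to analyse it is the same.

```python
# Illustrative TDM sketch: counting drug/condition co-occurrences in abstracts.
# The folder "abstracts/" and the term lists are hypothetical placeholders.
from collections import Counter
from itertools import product
from pathlib import Path

drugs = {"aspirin", "metformin"}
conditions = {"stroke", "diabetes"}

co_occurrences = Counter()

for abstract in sorted(Path("abstracts/").glob("*.txt")):
    # Reading each abstract makes a copy of the text in memory, which is why
    # text and data mining engages copyright at all.
    words = set(abstract.read_text(encoding="utf-8").lower().split())
    for drug, condition in product(drugs & words, conditions & words):
        co_occurrences[(drug, condition)] += 1

print(co_occurrences.most_common(10))
```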

The exceptions to copyright which allow these activities are likely to cover some copies made by AI systems, but not all of them.

The temporary copying exception will not apply to copies stored permanently by an AI. For example, some AI systems store sets of works which are used to train neural networks and/or serve results to users. Some neural networks may also be capable of storing copyright works, similar to how a human brain remembers a song or poem. Even though it stores information in an abstract form, as long as the neural network reproduces the creative elements of a work, it will be considered to have made a copy.
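
The point about abstract storage can be illustrated with a toy model. The sketch below uses a deliberately overfit character-level Markov model as a simple stand-in for a neural network: it stores only a table of character transitions, yet regenerates its single training text verbatim. The training text is a line of Shakespeare, used as an out-of-copyright placeholder; real neural networks store information very differently, but the same principle of reproducing a work from an abstract internal representation applies.

```python
# Toy illustration: an overfit character-level Markov model (a stand-in for a
# neural network) stores only character transitions, yet can reproduce its
# training text verbatim. The text is out-of-copyright Shakespeare, used as a
# placeholder for any work a model might memorise.
text = "shall i compare thee to a summer's day"
order = 5

# "Training": record which character follows each 5-character context.
transitions = {}
for i in range(len(text) - order):
    transitions[text[i:i + order]] = text[i + order]

# "Generation": starting from the opening characters, the model walks its
# transition table and reproduces the work exactly.
state = text[:order]
output = state
while state in transitions:
    output += transitions[state]
    state = output[-order:]

print(output == text)  # True: the work is recoverable from the stored model
```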

The TDM exception does allow permanent storage of copies but can only be used for non-commercial research purposes. It is also unclear whether all activities of AI systems can be described as “data mining”. For these reasons, some copies made by AI systems may be covered by the TDM exception, but many will not be.

It is also important to note that these exceptions are focused on copies made within an AI system, not those output by it. For example, copies used internally by an AI under the TDM exception would still infringe copyright if they were subsequently output by the system without permission from the copyright owner.

The database right is also subject to exceptions, which allow fair dealing by lawful users of a database for non-commercial research. But, as with copyright, while these exceptions may cover some activities of AI systems, they are unlikely to cover all of them.

The government has an objective to support the AI sector. We need to consider if the copyright framework complements that aim, or if there is evidence that changes are needed. We would like you to consider whether our current exceptions support the AI sector and to what extent exceptions are appropriate for encouraging the use and development of AI. Is there evidence and support for new exceptions, or should we explore other approaches such as increased support for licensing?

It is not clear that the possibility of copyright infringement by AI systems is a major impediment to their use. It may be that existing exceptions and licensing models in the UK are already sufficient, that obstacles to the use of copyright materials by AI systems are minimal, and a new approach is not needed.

Should copyright infringement be identified as an obstacle to AI development, one option would be to review, and potentially broaden, the exceptions which allow copies to be made within an AI system – for example for training purposes.

This approach would be consistent with the view that an AI “brain” should be treated in a similar way to a human one. A human does not infringe copyright in a song by remembering it, but they can infringe by recording it or singing it to an audience. A similar approach to AI would focus on infringement in material created by the AI, rather than material used internally by it.

Another approach would seek to broaden existing protection so that it is easier for copyright and database right owners to seek remuneration for the use of their works. Responses could include limiting existing exceptions or introducing new rights in input data. To ensure that such measures do not create new obstacles for users of AI, they could incorporate measures to facilitate licensing.

Questions

1. Do you agree with the above description of how AI may use copyright works and databases, when infringement takes place and which exceptions apply? Are there other technical and legal aspects that need to be considered?

2. Is there a need for greater clarity about who is liable when an AI infringes copyright?

3. Is there a need to clarify existing exceptions, to create new ones, or to promote licensing, in order to support the use of copyright works by AI systems? Please provide any evidence to justify this.

4. Is there a need to provide additional protection for copyright or database owners whose works are used by AI systems? Please provide any evidence to justify this.

Protecting works generated by AI

The extent of human involvement in AI creativity

AI is already used to create copyright works, including music and artwork. In most cases, the AI will be used as a tool, and human creativity will still be part of the process. For example, Sony’s Flow Machines allows users to create new songs based on machine learning from existing music, taking the user’s preferences into account.

Some believe there is currently no situation where a copyright work can be created without any human involvement. A human is likely to be involved in training an AI. An AI may learn from copyright works authored by humans. A human may direct the type of work produced by the AI – for example, choosing the type of song an AI should produce, the instruments that should be used, how the song should sound and what its tempo should be.

To the extent that a work is made with AI assistance but involves human creativity, it will be protected like any other work. Copyright will protect the work to the extent that it is the human creator’s “own intellectual creation”, and the first owner of the work will be that creator. In these cases the AI may be considered to act simply as a tool which allows an artist to express their creativity.

In other cases, although a human may have been involved in the process of generating an AI work, their input will not be considered creative enough to attract protection as a standard copyright work.

How the law treats AI-generated works

Unlike most other countries, the UK protects computer-generated works which do not have a human creator (s178 CDPA). The law designates the author of such a work as “the person by whom the arrangements necessary for the creation of the work are undertaken” (s9(3) CDPA). Protection lasts for 50 years from the date the work is made (s12(7) CDPA).

When proposed in 1987, this was said by Lord Young of Graffham to be “the first copyright legislation anywhere in the world which attempts to deal specifically with the advent of artificial intelligence”. It was expressly designed to do more than protect works created using a computer as a “clever pencil”. Instead, it was meant to protect material such as weather maps, output from expert systems, and works generated by AI.

Although it was expected that other countries would follow suit, few currently provide similar protection for computer-generated works.

Originality and computer-generated works

Since these provisions became law in 1988, the concept of originality has evolved. This has led to some uncertainty about how the computer-generated works provision applies.

Literary, dramatic, musical and artistic works are only protected by copyright if they are original. In 1988, “original” meant a work must be the product of the “skill, labour or judgement” of its author. But the current approach is that a work must be “the author’s own intellectual creation”. This means it must result from the author’s free and creative choices and exhibit their “personal touch”. It is unclear how these concepts can apply to AI works and some argue that a separate definition of originality may be needed.

By designating a human as the author of a work generated by an AI, the UK approach also separates authorship and creativity. The creator of the original work is the AI, but the “author” under the law is a person who has not made any creative input to it. This sits uneasily with the modern approach to originality in wider copyright law, where creativity and authorship go hand-in-hand.

As computer-generated works have “no human author”, it appears that the concept of “joint authorship” does not apply to works co-created by humans and AI systems. As such, there is some ambiguity about the status of AI-assisted works.

In light of the above, clarification of these provisions may be needed.

So-called “entrepreneurial works” – sound recordings, films, broadcasts and typographical arrangements – do not have an originality requirement. These belong to their producers, makers and publishers, regardless of their creative input. This protection would appear to apply to AI-generated material, without need for specific provision. However, it is less extensive than the protection granted to original works. For example, the owner of musical copyright can prevent any reproduction of their music but the owner of copyright in a sound recording can only prevent copying of that particular recording.

In 1987, when the government legislated to protect AI-generated works, the Earl of Stockton said he hoped this would allow future investment in AI “to be made with confidence”. But it is unclear whether it has had this effect. The UK remains one of only a handful of countries worldwide that provides this protection. Investment in AI has taken place in other countries, such as the United States, which do not provide this type of protection. Some people argue that this protection is not needed, and others that it should be provided differently.

From the perspective of an AI system, the role of copyright as an incentive would appear to have little meaning. An AI seeks neither protection of its personal expression nor financial reward for its work. Regardless of copyright, an AI system will generate content.

It also seems hard to justify protecting AI-generated works on natural rights grounds. AI systems are still far from being considered individuals with their own personalities.

There may also be a more fundamental reason to distinguish between human and AI-generated works. Some argue that copyright should promote and protect human creativity, not machine creativity. According to this view, works created by humans should be given protection but those generated by machines – and potentially competing with human-created works – should not.

Addressing these arguments could mean removing or limiting protection for computer-generated works.

On the other hand, protection for AI-generated works may be justified if it incentivises investment in AI. This was the original basis for providing this protection. If there is evidence that this is the case, it could make sense to continue to protect these works. Depending on the economic and legal impacts, this may mean maintaining the current approach, or providing a different type of protection.

The International Association for the Protection of Intellectual Property (AIPPI) – an international organisation which promotes the development of intellectual property laws – recently asked its members whether AI-generated works should be protected. The responses it received highlight the different approaches to this topic. The UK Group suggested that AI-generated works could be protected by a new right, lasting for 25 years, which recognises the investment AI developers make in this technology. But other respondents argued that copyright protection should result only from human creativity. The resulting AIPPI Resolution on this topic emphasises the need for human intervention and originality.

Regardless of whether these types of works are protected by copyright, it may be difficult in future to determine if a work was generated by a human, a machine, or both. As such, technological solutions may be needed to determine the authorship of works and ensure the right type of protection is applied.

Questions

5. Should content generated by AI be eligible for protection by copyright or related rights?

6. If so, what form should this protection take, who should benefit from it, and how long should it last?

7. Do other issues need to be considered in relation to content produced by AI systems?

Copyright protection for AI software

Copyright protects software as a literary work (s3(1)(b) CDPA). To be protected, a literary work must be original, which means it must result from the creative choices of its author.

Copyright does not protect concepts or ideas in general. Nor does it protect material whose form is dictated solely by functional considerations. As such, copyright would not protect a method of machine learning such as deep learning, but it would protect specific software which implements deep learning in an original way.

It appears to be accepted that most AI software will be protected by copyright, and it is treated as such within the industry.

We are unaware of specific problems with protection for AI software. Copyright appears to provide adequate protection for software, allowing its creators to benefit financially from their work.

We note that many AI tools and interfaces are made available under open licences. For example, businesses that develop AI, such as Microsoft and Google, make many of their AI tools available under open licences. Publicly funded UK research in this area also often uses open licences. But, as with other software, businesses are free to license AI tools under whatever conditions suit them.

We are not aware of any particular problems with the licensing of AI software, but would welcome views on this topic from those with experience in this sector.

Questions

8. Does copyright provide adequate protection for software which implements AI?

9. Does copyright or copyright licensing create any unreasonable obstacles to the use of AI software?

Respond to the call for views by emailing AIcallforviews@ipo.gov.uk