Consultation outcome

Government response to call for views on artificial intelligence and intellectual property

Updated 23 March 2021

Executive summary

1.Artificial intelligence (AI) is a transformative technology, which is already revolutionising many areas of our lives. The government wants the United Kingdom to be at the forefront of this revolution.

2.Intellectual property rights, such as patents and copyright, protect creations of the mind. They reward people for their inventiveness and creativity. And they encourage investment in technology and culture for the benefit of us all.

3.But what if the creators are not humans, but machines? Should intellectual property protect their creations? Who would own it? And what rules should apply when machines use the creations of others?

4.To seek answers to these and other questions, and to understand how we can provide the best environment to develop and use AI, the Intellectual Property Office (UK IPO) published a call for views, which ran from 7 September to 30 November 2020.

5.The call for views posed questions covering patents, copyright and related rights, designs, trade marks and trade secrets.

6.We received 92 responses and are very grateful to everyone who took the time to respond.

7.This is the government’s response to the call for views. It summarises the views we received and outlines our conclusions and next steps.

Summary of responses

8.The responses came from a range of owners and users of IP rights, including firms and individuals producing AI technology, those using AI technology in the innovative or creative process, and right holders whose rights may be infringed by AI. We also received a number of responses from academics and interested members of the public.

9.A detailed summary of responses in relation to each intellectual property right is provided below.

10.Many responses pictured a positive future where intelligent computers support human researchers, creators, and inventors in developing new technologies, and had opinions on how intellectual property could encourage and support this.

11.But there were also warnings that AI could take the humanity out of the creative process and harm the human creators that intellectual property is designed to protect and reward.

12.In many areas there was general agreement that the present framework could meet the challenges of the future. For example, most people felt that existing liability rules could be relied on to deal with infringement by AI.

13.There was also a consensus that AI itself should not own intellectual property rights. But there were different opinions on whether works or inventions created by AI should be protected.

14.In relation to patents, many respondents felt that current conditions to establish the inventor may act as a barrier to innovation as the use of AI systems increases. Some argued that inventorship criteria may impact patent availability, with less incentive to invest in AI research and less transparency in the innovation process. There was general agreement that patents have an important role to protect and support AI innovation. But there were mixed views about the extent to which patents should be available and their impact on follow-on innovation.

15.Many emphasised the need for more clarity and predictability in UK IPO patent exclusion practice in relation to AI software, along with a call for more international harmonisation in the area.

16.A number noted that there may be challenges if AI patent applications are to satisfy the requirement for disclosure of an invention. Some are seeking new facilities so they can file large amounts of information in support of their patent applications. Not all respondents agreed that this is necessary.

17.In relation to copyright, many respondents stressed the importance of putting human creators first. Some argued that works created solely by AI should not be protected by copyright at all. Others argued that they should, but as a separate category of right, with lesser duration and scope.

18.Many recognised the importance of copyright-protected material in training AI systems. Some argued that copyright restrictions made this difficult, and potentially led to bias. Others argued that licensing was adequate and available to those who needed it.

19.There were also comments on the difficulty of identifying which works had been used to train an AI, and of determining whether works had been authored by a human or machine.

20.In relation to trade marks, designs and trade secrets, respondents generally felt that the law was adequate and flexible enough to respond to the challenges of AI. However, issues were identified which could potentially lead to challenges in the future, and which should be kept under review. Links between patents and trade secrets in relation to disclosure of inventions, and between copyright and designs in relation to computer-generated works and designs, were also highlighted.

21.There was general agreement that we are not at the stage where “artificial general intelligence” exists. The government will continue to monitor the development of AI as it progresses.

22.A detailed summary and discussion of the responses received in each area is provided below. The summary of responses is not intended to be exhaustive but gives a flavour of the different views expressed. The actions the government will take forward are based on a full consideration of all responses.

Next steps - objectives

23.Our work on AI and intellectual property supports the government’s wider ambition for the UK to be a leader in AI technology.

24.The refreshed Industrial Strategy (“Strategy for Growth”) will put the UK at the forefront of technological opportunities and will boost productivity across the country, while supporting our long-term recovery and transition to net zero. It will build on the UK’s strengths, create new opportunities for growth, deliver on our commitments to levelling up the nation and ensure we remain a global science superpower.

25.The UK’s AI sector is one of those strengths. As set out in its AI Sector Deal and R&D Roadmap, the government wants the UK to be the best place in the world for research and innovation, and at the forefront of the artificial intelligence and data revolution.

26.The Digital Strategy also sets out the government’s vision and approach to harnessing digital technology and supporting the UK’s recovery from Covid-19. Supporting the digital sector, protecting digital rights, ensuring digital markets are competitive, and promoting the innovative use of data are more important than ever, and, among other issues, this strategy will consider their role in our future recovery.

27.The National Data Strategy, published in September 2020, sets out how best to unlock the power of data for the UK. The government believes that unlocking the value of data is key to driving growth both within the digital sector and across the economy.

28.Given this ambition, when taking forward actions following this Call for Views, we will aim to ensure that any measures we implement:

  • encourage innovation in AI technology and promote its use for the public good;
  • preserve the central role of intellectual property in promoting human creativity and innovation;
  • are based on the best available economic evidence

29.The IPO will continue to take a leadership role on AI and IP. In taking forward measures in this area, it will:

  • collaborate with experts from business, technology and research to build our understanding of the AI sector and develop a robust evidence base;
  • advocate common approaches with like-minded nations, and take a lead on the international stage;
  • communicate and engage with developers and users of AI and owners and users of intellectual property to promote understanding in this area

Next steps - actions

In the sections below, we summarise the responses that were received, and set out the government’s response, including actions that we propose to take. These actions explore issues raised in the call for views process, with the aim of providing a system better equipped to meet the government’s ambitions on AI.


  1. Build on the suggestions made by respondents and consult later this year on a range of possible policy options, including legislative change, for protecting AI generated inventions which would otherwise not meet inventorship criteria.

  2. Publish enhanced IPO guidelines on patent exclusion practice for AI inventions and engage AI interested sectors, including SMEs, and the patent attorney profession to enhance understanding of UK patent exclusion practice and AI inventions. The IPO will review its patent practice in preparation for the guidelines and establish any difference in outcome for AI patent applications filed at the IPO and the European Patent Office (EPO).

  3. Commission an economic study to enhance our understanding of the role the IP framework plays in incentivising investment in AI alongside other factors. This will draw together the international evidence. Additionally, engage with other government departments to gather emerging data and understanding of the drivers of the AI sector in the UK context. This will provide an evidence base on which to judge whether there is a rationale for further intervention in the area.

  4. Work with stakeholders and international partners to establish the feasibility, costs and benefits of a deposit system for data used to train AI systems disclosed within patent applications.

  5. Review the ways in which copyright owners license their works for use with AI, and consult on measures to make this easier, including improved licensing or copyright exceptions, to support innovation and research.

  6. Consult on whether to limit copyright in original works to human creations (including AI-assisted creations). In tandem with this, consult on whether or not to replace the existing protection for computer-generated works with a related right, with scope and duration reflecting investment in such works. Also consider whether action should be taken to reduce confusion between human and AI works, and the risk of false attribution.

In addition, as part of its broader strategy on intellectual property and AI, the Intellectual Property Office will:

  7. Engage with like-minded nations and multilateral organisations (including the World Intellectual Property Organisation and EPO) on issues raised in the Call for Views, to deepen understanding, foster co-operation and establish common ground. The aim of this international leadership will be to shape the global debate in order to develop policy approaches that give the opportunity for growth as part of a balanced world IP system.

  8. Work with partners including the Office for AI and AI Council to further engage with and develop our understanding of the AI sector, including technology start-ups and researchers.

  9. Hold a UK-wide programme of university-led seminars, building on the content of this government response. The first phase will start with a joint seminar with The Alan Turing Institute in the spring of 2021.

  10. Conduct research into artificial intelligence and IP enforcement, and the opportunities and challenges in this area. Research has been commissioned and will report in autumn 2021.

  11. Continue to look for opportunities to integrate AI into the operational delivery of intellectual property rights, as part of the IPO’s Transformation programme. This will support the IPO’s aim to deliver timely, reliable and quality services. It will build on the IPO’s recently launched trade marks Pre-Apply service, which uses AI to support customers to make high-quality applications. The IPO will also use AI to validate its data and make it readily available for use by businesses.

Analysis of responses

Patents
The patents section of the Call for Views considered the role of patents in promoting innovation in AI, and the use of AI in the innovation process.

The questions on patents covered four themes:

  • the aims of the patent system;
  • AI as an inventor;
  • the conditions for granting a patent;
  • patent infringement

Responses came from umbrella bodies, businesses, academics, patent attorney and law firms and individuals.

For ease of reference we have included the questions in the patents section.

Aims of the patent system

Question 1.

What role can/does the patent system play in encouraging the development and use of AI technologies?

The majority of responses to this question felt the patent system had a positive influence on innovation. Many noted this applied across all fields of technology.

A small group of private individuals argued that patents could be used by large companies to block others entering the market. They argued that this could “handicap future innovation” in the field.

A high proportion of respondents, from a range of backgrounds, said the patent system already encourages work in the AI field. A subset of this group argued that there were ways to change the system to improve incentives. Some respondents said they balanced use of IP with “open source software”, which brought its own benefits for business.

A smaller group of respondents said the IP system was not delivering the right incentives for AI technologies. They argued the system could and should encourage development and use of AI technology. This group made suggestions for improvements in patent policy and other areas of IP. A few felt that improved guidance on AI would give greater clarity for users of the patent system.

Of those who felt the patent system was good for AI development, some told us that patents play an important role in securing investment. They argued this in turn sustained research and development in the field of AI. Others identified the disclosure requirement as a positive impact of the patent system. A theme in responses in this group was also the importance of a fair and balanced patent system.

Some responses identified different modes of AI innovation. These different modes broadly included:

  • innovations in ‘Core AI’ including the data used in AI (for example an innovation in neural network structure);
  • innovations which apply AI to another technical field (for example a control system which uses AI);
  • innovations which are “AI generated” in which either an AI autonomously generates the innovation or in which the AI plays an assistive role (the resulting innovation could be in any technical field)

Some argued that the different modes of AI innovation should all be able to fit within the patent system. Others commented that some of these modes were less suited to patent protection. Some responses suggested other forms of IP protection may be more appropriate.

Two responses from patent attorney firms said they wanted to obtain patents for AI innovations but could not because software was not patentable. These respondents recommended a change to the scope of what is patentable. Others felt that the length of time taken to secure protection, and the term of protection, were not appropriate to a fast-moving sector. One respondent said the iterative nature of AI inventions made patent protection less suitable.

There were some respondents who felt that the patent system did not suit AI technology and that a new, different form of IP right may help.

There were also a few responses which discussed the use of confidentiality agreements as a means to protect AI innovations. For some, this was a result of the failure of the patent system, others took a more neutral stance.

Government response - Aims of the patent system

We asked about the role of the patent system in incentivising the development of AI in the UK. The views of those who responded to this question mainly support an effective and balanced patent system as a means to achieve this. We acknowledge that this is not the only point of view and note that some respondents felt that IP can limit the development of new technology.

Some respondents raised interesting points about the way that patents interact with other IP rights. It is also informative to hear from users of the patent system about how different types of protection can apply to different AI innovation. This highlights the need to consider the IP landscape as a whole when we consider next steps.

AI as an inventor

Questions around inventorship and AI were answered by a great many respondents. Sectoral trends in views were less evident than in other areas.

The predominant mode of AI innovation discussed by respondents was “AI generated” in which either an AI autonomously generates the innovation or in which the AI plays an assistive role.

Question 2.

Can current AI systems devise inventions? Particularly:

(a) to what extent is AI a tool for human inventors to use?
(b) could the AI developer, the user of the AI, or the person who constructs the datasets on which AI is trained, claim inventorship?
(c) are there situations when a human inventor cannot be identified?

Respondents broadly fell into two camps:

  1. Those that felt that it was not possible for an AI system to devise an invention without human involvement; and,
  2. Those that felt that it was possible (or at least might be in the near future).

Those in the first camp most often saw AI as a tool or assistive technology for use in the development of inventions. Those in the second camp often made reference to the facts surrounding the recent court case on the issue (Thaler v Comptroller-General of Patents, Designs and Trade Marks [2020] EWHC 2412 (Pat)). This said, there appeared to be near complete agreement that AI systems are not, or not yet, independent agents seeking patent rights without human intervention. Many considered that such an arrangement would require artificial general intelligence.

Question 3.

Should patent law allow AI to be identified as the sole or joint inventor?

Question 4.

If AI cannot be credited as inventor, will this discourage future inventions being protected by patents?

Would this impact on innovation developed using AI? Would there be an impact if inventions were kept confidential rather than made public through the patent system?

Some from innovative industry were of the view that an invention should not be excluded from patent protection merely because an AI assisted in developing, or even devised, the invention. Arguments were made that providing patent protection for AI generated inventions would encourage innovation, as those who build, own and use AI would be able to protect their investments in research and development. Others cited a concern that if AI systems are not recognised as inventors under the patent system, then such inventions might go unpublished. One view was that this could mean trade secret protection is used instead, and confidence to invest decreases.

Some stated that it might be appropriate to list an AI as an inventor per se; others felt this would be inappropriate. Some gave the view that the definition of “inventor” in patent law should be clarified to include “a person by whom the arrangements necessary for devising an invention are undertaken”, and that an AI per se should not qualify as an inventor. Others saw no need to change the current legal rules around inventorship and considered that ‘AI generated’ inventions were already patentable. Some felt that the concept of an inventor should be discontinued, and that ownership of patent rights should rest with the entity that sets up or hosts the AI system. Many felt these ideas were not incompatible with the notion of joint inventorship.

An alternative suggested was that AI generated innovations be protected through sui generis rights. Others counselled caution against the development of new protections.

One respondent from academia speculated that we should only grant patents to AI if doing so “would stimulate research and development that would not otherwise occur”. The respondent suggested carrying out a study to ascertain the extent of research and development not being carried out due to lack of patent incentive for AI generated inventions.

Question 5.

Is there a moral case for recognising AI as an inventor in a patent?

Many felt there was not a moral case for recognising AI as an inventor. Others felt that it would be more honest and transparent to recognise AI systems as inventors, and that this might help prevent people taking false credit for contributing to an invention merely because they own or control the AI system. Some from academia suggested additional patentability requirements to ensure that patents are only granted to AI systems with “ethical, safe and inclusive” safeguards.

One respondent felt there may be issues with an AI system owned by one organisation devising an invention for the benefit of another, but that this might be resolved via contract.

Question 6.

If AI was named as sole or joint inventor of a patented invention, who or what should be entitled to own the patent?

Several respondents thought the question of entitlement stemmed from whether an AI system is able to devise inventions autonomously. Some thought that an “AI’s owner, user, or developer are obvious possibilities for entitlement to an AI-generated invention”, and that an AI’s owner would be the “most consistent choice with general principles of property ownership”.

Government response - AI as an inventor

There are mixed views around the extent to which current AI systems can devise inventions, with some expressing scepticism. We acknowledge that several respondents therefore thought inventorship criteria should not change. But many argued that the current approach to inventorship criteria potentially has a detrimental impact on innovation, including transparency in the innovation process.

We recognise that AI systems have an increasing impact on the innovation process. We want to ensure the intellectual property systems support and incentivise AI generated innovation.

We also want to ensure transparency in the innovation process and that inventorship criteria do not present a barrier to protecting investment in AI generated innovation.

To achieve these aims, we will build on the suggestions made by respondents and consult later this year on a range of possible policy options, including legislative change, for protecting AI generated inventions which would otherwise not meet inventorship criteria.

Conditions for Grant of a Patent

Question 7.

Does current law or practice cause problems for the grant of patents for AI inventions in the UK?

Question 9.

How difficult is it to secure patent protection for AI inventions because of the list of excluded categories in UK law? Where should the line be drawn here to best stimulate AI innovation?

Question 10.

Do restrictions on the availability of patent rights cause problems for ethical oversight of AI inventions?

Patent Exclusions

After inventorship, patent exclusion was the other main concern raised in responses. This is because some respondents felt it prevented them from obtaining patents for “AI inventions”. All who commented agreed that the impact of patent exclusions depended on the type of AI innovation.

Respondents said patent exclusions generally do not cause problems where AI assists or generates an invention (“AI generated invention”). One comment highlighted the number of AI patents that are granted in the UK to support this view. Some respondents noted not all AI generated inventions are patentable. Many illustrated this with the same example: AI algorithms applied to image processing qualify for patent protection. However, AI algorithms applied to text processing are excluded from patents.

Commentators agreed that patent exclusions make it difficult to protect developments in the AI system itself (“core AI invention”) by patent. Many respondents noted that the patentability of AI inventions is typically assessed by the IPO in the same way as any other computer-implemented invention. They say it is hard for core AI inventions to meet the tests for patentability set by the UK courts and applied by the UK IPO. A number reflected that a patent limited to the specific technical application of a core AI innovation is not a satisfactory solution to the problem.

There were different views on the impact of patent exclusions.

The majority of respondents said that inability to get patents was a problem for the AI sector. This view was expressed across all categories of respondents. They argue that there is less incentive to innovate without access to patent rights. Respondents said this particularly affects the area of core AI. Some respondents said that more AI patents would encourage investment in start-up businesses. Other respondents told us it is hard to predict the outcome of UK IPO decisions on exclusion. This uncertainty has cost implications for businesses.

Many considered the more permissive patent exclusion approach of the EPO gave a better outcome for AI patent applications when compared to the UK IPO. Some commented that the UK IPO refused more AI patent application search requests than the EPO due to a different approach to patent exclusion. One respondent said that search reports are important as they help to decide whether to file for patents in other countries.

Many respondents said the UK patent system is fit for purpose for the AI sector. This view was expressed across most categories of respondents, including business. One reason given by some is that the UK patent system provides a good balance. They said the UK system incentivises AI research and development but does not stifle follow-on innovation. Their view was that more liberal grant of patents for AI innovations would put this balance at risk. A few noted it is common to protect investment in AI innovation through confidentiality agreements without significant issue.

Some respondents suggested a review would be beneficial. They proposed a review could look at the extent that patents stimulate AI innovation. Several replies noted the importance of avoiding pre-emptive action when issues are only now emerging for this sector. Other comments reflect the need for more information on how AI businesses use AI.

Some suggested that a new sui generis IP right might be better suited to protecting patent-excluded AI inventions. They also argued such a right would be better suited to low-cost AI innovation and would reflect the fast obsolescence of some innovations.

Several respondents are awaiting a decision by the EPO on a significant pending patent exclusion case. These respondents would like to know the outcome before giving their view on patent exclusion.

There was no single agreed position on whether ethical oversight of AI plays a role in the policy debate on UK patent exclusions. Some respondents proposed that less restrictive patent exclusions would assist with AI ethical oversight. They argued that more information would be in the public domain. However, the majority saw ethical oversight of AI inventions and patentability as separate issues.

Respondents gave a range of possible solutions to the problem of UK patent exclusions. A common view is that the UK IPO should change its practice on patent exclusion, rather than change UK law. Some commentators want the UK IPO to follow the patent exclusion practice of the EPO. To achieve this, respondents highlighted the importance of new UK case law or a less formal approach to existing case law by the IPO.

A few responses said that core AI inventions are more than a “computer program as such”; they form part of a technology, or should be regarded as a bespoke computer. If UK IPO practice established this principle, commentators said patent exclusion would not be a problem.

A few proposed the amendment of UK law to delete exclusions that impact AI inventions. Others point out that the UK and the EPO share the same excluded patent law categories. They said that it is important to avoid divergence between the UK and the EPO as this creates extra burden. Some businesses want more international convergence on patent office practice for computer implemented inventions as well as AI inventions.

Several commentators want the UK IPO to issue search reports for excluded AI inventions. A number call for the UK IPO to provide more clarity on patent exclusions in practice guidelines, to make patent application outcomes more certain.

Disclosure of the invention

Question 11.

Does the requirement for a patent to provide sufficient detail to allow a skilled person to perform an invention pose problems for AI inventions?

Question 12.

In the future could there be reasons for the law to provide sufficient detail of an AI invention for societal reasons that go beyond the current purposes of patent law?

All responses agree that disclosure is a fundamental part of the patent system. There are no calls to change the test for disclosure.

Respondents generally agree that AI patent applications can include enough detail for a skilled person to perform an invention. They explained it is possible even though it is the nature of AI “to adapt, change and evolve, whereas disclosure takes a snapshot of the system at patent application”. Respondents said this may present challenges for the courts and the UK IPO but the current UK patent system can deal with this. Responses also fed back that it is possible for the invention to be supported across the full breadth of the claim.

A number commented that the requirement to disclose, for example, algorithms or training data may discourage the use of the patent system by some AI innovators. Others point out that the choice between protecting information through confidentiality or by patent is a commercial decision. This is a common decision in the field of computer implemented inventions. One respondent proposed a new form of protection for data. They argued this would ease data disclosure and sharing, while preserving its intrinsic value to its creator.

Some report that it may be more difficult to know the type and extent of information to disclose in an AI patent application. Others say that this is also the situation for other complex technologies and is not specific to AI.

A number of commentators were concerned about the practicalities of filing large amounts of information as part of AI patent specifications. Data could include training sets or a specification of machine connection weightings. Some suggested that there was benefit in the creation of a deposit system for this information. As an alternative, it was proposed that the UK IPO could develop an appropriate format for the data. Others did not agree, saying that it was not necessary for information in this volume to be a common feature of AI patent specifications.

Most commentators agreed that it is important to keep any future requirements to disclose AI details for the purposes of societal oversight outside of patent law. There were concerns that such additional mandatory disclosure could discourage use of the patent system. Another respondent said that the data needed to meet patent disclosure requirements was likely to be different from the information needed by any AI regulator. A few respondents suggested there might be a need for a new form of data protection that allowed access by regulators but preserved the confidentiality of the data.

Inventive step

Question 13.

Does or will AI challenge the level of inventive step required to obtain a patent? If yes, can this challenge be accommodated by current patent law?

Question 14.

Should we extend the concept of “the person skilled in the art” to “the machine trained in the art”?

A large majority of responses said that the current inventive step framework is flexible enough to deal with AI innovations. Their view was that “the person skilled in the art” has a range of tools available to them and AI technologies will be one of those tools. Some respondents said this allows for different approaches in different technology areas. For example, the skilled person would be more likely to use an AI tool if AI is commonly used in their field of technology. A few responses highlighted that the current framework is already flexible. These responses noted the system coped with new technologies, such as computers and the internet.

Within the large majority who viewed AI as a tool available to “the person skilled in the art”, respondents answered question 13 differently based on how they interpreted the “level of inventive step required”. Most said the “level” would change as AI tools improved and more modifications came to be considered obvious. A few responses defined the “level” as a measure of the creativity of “the person skilled in the art”. They said this “level” should remain constant across all areas of technology, but that the availability of AI tools would change what was considered obvious.

A few responses called for legal change. Of these, some wanted to ensure the system enabled protection of AI inventions. Others wanted to tighten requirements to avoid large numbers of patented AI inventions.

A large majority considered that it was not necessary to extend the concept of “the person skilled in the art” to include “the machine trained in the art”. Many thought this was unnecessary because the AI was part of the tools available to the person skilled in the art. A few responses said extension may become necessary if AI-generated inventions come to dominate an area of technology. Others indicated that this would be one of many issues to be reconsidered as we move towards independent AI invention.

Some responses expressed concern that “the machine trained in the art” would not be a stable or predictable standard to apply. They said this was because AI tools will develop over time.

A few responses highlighted that “the person skilled in the art” is a legal fiction, not a real person. These responses remarked that the way “the person skilled in the art” is defined may be closer to current AI technology than to a real person.

Question 8.

Could there be patentability issues in the future as AI technology develops?

Some respondents called for the patent system to be kept under review as AI technologies develop. Many reasons were cited for this.

A few comments predicted that the rate of invention and patent application will increase rapidly with more AI involvement. They said this risks increasing patent application backlogs and putting pressure on patent office resources.

There was also concern that more patents will make it harder for new businesses to enter the AI sector. Others felt that there might be competition issues if many AI-generated patents were granted to a single entity in a business sector. One commentator noted that this issue may arise during the period before AI tools are commonplace.

Others said it will be harder to patent AI inventions as AI tools will identify more prior art. Many respondents believed that a fresh look at the entire patent system would be needed when AI acts independently to generate inventions. For example, it is not clear whether patents would require different standards in the areas of sufficiency of disclosure and inventive step, or even whether patents should be available at all. In the view of some, this review should consider excluding AI patents that are not ethical, safe or inclusive.

Government response – conditions for patent grant

Although the views received suggest that the conditions for the grant of AI patents in the UK are generally fit for purpose, comments on patent exclusions were mixed.

The government agrees that the UK patent system should ensure the right balance between protection and disclosure of inventions and freedoms to innovate. This is important if innovation in the UK AI sector is to be maximised. The government also agrees that UK patent exclusion practice should offer more clarity to AI inventors, to improve the predictability of patent application outcomes. We also appreciate concerns that a lack of international harmonisation in patent exclusion practice across patent offices places burdens on patent applicants. In view of this the government will:

  • publish enhanced IPO guidelines on patent exclusion practice for AI inventions and engage AI interested sectors, including SMEs, and the patent attorney profession to enhance understanding of UK patent exclusion practice and AI inventions. The IPO will review its patent practice in preparation for the guidelines and establish any difference in outcome for AI patent applications filed at the IPO and EPO
  • commission an economic study to enhance our understanding of the role the IP framework plays in incentivising investment in AI alongside other factors. This will draw together the international evidence. Additionally, engage with other government departments to gather emerging data and understanding of the drivers of the AI sector in the UK context. This will provide an evidence base on which to judge whether there is a rationale for further intervention in the area
  • work with stakeholders and international partners to establish the feasibility, costs and benefits of a deposit system for data used to train AI systems disclosed within patent applications


Question 15.

Who is liable when AI infringes a patent, particularly when this action could not have been predicted by a human?

The majority of responses from representatives, businesses and academics agreed that legal persons[footnote 1] should be liable when an AI infringes a patent and not the AI itself. A legal person means an individual or corporation which has legal personality.

There was a mix of answers on which specific legal persons should be liable and some stated that this would depend on the facts of each case. Many agreed that it was the legal person who performs the infringing act through the AI or makes the necessary arrangements for performing the infringing act. Another common suggestion was the owner or developer of the AI should be liable. Some respondents felt that liability should be determined in the same way as ownership and that current infringement rules should continue to apply, regardless of technology.

Question 16.

Could there be problems proving patent infringement by AI? If yes, can you estimate the size and the impacts of the problem?

The majority of responses from representatives, businesses and academics agreed that there would be problems in proving patent infringement by AI. A smaller number suggested that there would be no more problems in relation to AI than with other complicated technologies. Respondents frequently mentioned issues that arise because of the AI “black box”. Further, some thought it may be difficult to prove the territory in which infringement took place. This is an issue when AI is a neural network hosted on different servers across the globe and infringement occurs in multiple jurisdictions. Those respondents considered it would be hard to prove infringement took place in the UK.

Only a few respondents commented on the size of the problem, with one individual and one representative saying it could be large, whilst other representatives said they could not give an estimate.

Solutions to problems proving patent infringement by AI included the use of legal procedures such as disclosure, and ensuring that patent claims are drafted “with specific regard to the ability to show infringement”.

Government response - Infringement

The current practice of “legal persons” being liable for infringement appears to be in keeping with most respondents’ views. Many of the problems proving patent infringement by AI already exist when trying to prove patent infringement with other technologies.

We consider that in respect of “AI patents” the courts have appropriate flexibility to make decisions based on the facts of the case. And that claimants are able to use court processes to support their actions. Therefore, we do not currently intend to intervene in this area.

Copyright

The copyright section of the call for views considered how the law treats the use, storage, creation, and infringement of copyright works by AI systems, and whether there are opportunities to provide a better environment to develop and use AI.

The questions on copyright covered three themes:

  • the use of copyright works and data by AI systems;
  • the existence of copyright in works created by AI, and who it should belong to;
  • copyright protection for AI software

Responses came from a range of interested parties including copyright owners, the creative and technology industries, licensing bodies, legal representatives, and academics.

The use of copyright works and data by AI systems

This section considered:

  • how AI systems use copyright works and data;
  • infringement of copyright and database rights by AI;
  • whether the copyright framework should make it easier for AI to use protected content.

Infringement and liability

Overall, copyright owners felt that the current law adequately covers scenarios where copyright-protected material is used to train or develop AI software. Examples were given of licensing activity in this area. There were some requests for greater clarity about the operation of exceptions, such as the data mining exception, and how they interact with licensing. But, generally, the copyright framework was seen as adaptable and applicable to new technologies as they develop.

However, some respondents sought greater clarity about who is liable when copyright is infringed in relation to AI. Copyright owners acknowledged that AI itself is not a legal person capable of being an infringer and that AI processes are often developed and controlled by a range of different people and organisations. They felt that liability for infringement should apply to all those deriving benefit from copyright infringement.

Respondents from the technology industry stated that there was already sufficient clarity in the current legal framework to identify the entities responsible for infringement but acknowledged a need for more education and awareness on this matter to help make the process easier.

Some respondents raised an issue relating to the identification of infringed works and a need for early identification of works to be used in the machine learning process, to ensure permission is sought. One proposed a mechanism for identifying the works ingested by AI systems based on detailed datasets and reporting capabilities.

Some respondents highlighted a need to ensure the UK’s enforcement regime allows copyright owners to act against large-scale infringement, where AI developers have used a large number of copyright-protected works without authorisation.

Many copyright owners argued that the current copyright exceptions do not apply to the use of copyright works in the AI machine learning process. They said that such use requires the express permission of right holders who should be able to determine whether to license their works for such use and set appropriate terms. This could be done through collective blanket licensing of works or specific licensing depending on the works being used. Respondents from the creative industries also observed that AI is already being used to some extent by their sectors, whether as a creative tool, a research service or a way to analyse or promote business.

A number of respondents gave views on text and data mining (TDM) as a methodology which can support AI in a variety of ways. Many copyright owners understood the rationale for the present copyright exception which allows TDM for the purposes of non-commercial scientific research but expressed concerns about moving towards an exception that would allow commercial TDM. They believed that this could prejudice their legitimate interests and argued that licensing for TDM is already used across their sectors and can provide greater certainty. They argued that any new exception allowing more permissive use would shift the balance unfairly against creators.

Most copyright owners felt that a voluntary licensing model would best balance their need to be remunerated for the use of their works against the need for access to copyright works by the AI sector. They suggested that the government should take steps to better understand how licensing operates in this field.

Many users of copyright material for AI, including technology firms, entrepreneurs and researchers, argued that easy access to works is crucial for teaching AI systems. They argued that such activities do not conflict with existing markets for work and, in fact, they may create new ones. While some already have licences in place in order to carry out TDM at volume, they also highlighted potential drawbacks of a licensing-only model.

For example, the works mined can be subject to curatorial bias: only what is available under licence is mined, rather than what would be most useful for the purposes of the AI.

Furthermore, licensing costs may be affordable for established or large businesses, but may be prohibitive for start-ups, micro and small businesses. It was argued that this could risk the UK losing any competitive edge in AI and innovation, and “frustrate the government’s objective” of making the UK “a global centre for AI and data-driven innovation”.

A number referred to international approaches to TDM exceptions as they relate to AI – in particular the Japanese and EU data mining exceptions and US fair use provisions. They argued that these should be better understood in the UK, and that similar approaches may benefit innovation.

Many respondents, including all copyright owners, support licensing to manage the use of copyright works by AI. However, many also called for licensing mechanisms to be improved or for data mining exceptions to be expanded. Easier access to materials was advocated both from an innovation perspective, and in order to mitigate bias.

Government response – the use of copyright works and data by AI systems

The government agrees that a better understanding of the licensing framework in relation to AI would be helpful. It also wants to better understand the merits of TDM exceptions in this area, including the approaches taken in other countries. The government remains committed to ensuring that a fair balance is struck between the needs of copyright owners and users. In view of this we will:

  • review the ways in which copyright owners license works for use with AI, and consult on measures to make this easier, including improved licensing or copyright exceptions, to support innovation and research

Protecting works generated by AI

This section considered the following points:

  • the extent of human involvement in AI creativity;
  • how the law treats AI-generated works;
  • originality and computer-generated works;
  • whether AI-generated works should be protected by copyright;
  • other protection for computer-generated works

Many respondents from the technology industry took the view that content generated by AI should be eligible for protection by copyright. They thought the copyright should reside with the owner or user of the AI system and not with the AI system itself. Respondents from this sector generally took the view that AI-generated works are already within scope of the definition of computer-generated works under section 9(3) of the Copyright, Designs and Patents Act 1988 (CDPA), and that this protection should continue.

However, other respondents – in particular those from the creative industries – thought the way in which AI works are currently protected is not ideal. Some noted that the present law on computer-generated works is unclear, given the lack of understanding of what originality means in relation to such works, and the case law linking originality to human creativity.

Some respondents from academia argued that the present “author’s own intellectual creation” standard for originality did not support copyright protection for AI works, and that it would be better to look at new approaches based on the creative output or the creative process to judge whether such works should be protected.

Most creators and creative industry stakeholders said that copyright should prioritise human, not machine, creativity. They thought it vital that the UK copyright framework maintains human incentives for creation and dissemination. They generally felt that works created by humans using AI as a tool were protected under ordinary copyright law. But most thought that content generated exclusively by AI (without human creative expression) should either be ineligible for copyright protection at all, or that the way we protect them should be reconsidered.

The primary argument against protection was a utilitarian one - that machines do not need the same incentives or rewards to create as humans. What might take a human author weeks or months of skill and labour to create, might take a computer program a fraction of a second. But many also cited the inherent value that copyright places on human creativity. For example, the British Copyright Council said that granting copyright protection to machines would devalue the fundamental reason for copyright – “to protect human endeavour and spirit”.

Some creators argued that the implications for human creative endeavour could be devastating if human creators are unable to compete with AI content created at scale and at negligible cost. They also felt that fear of infringing rights in AI-generated content could be a deterrent to human creation.

Some from the creative and technology sectors argued that AI-generated works should be protected by a distinct right rather than copyright. This could be modelled on those available for sound recordings and broadcasts. Such protection could give an investment incentive to those creating and using AI technology. Others, particularly creators, argued that AI-generated works should not be protected at all.

A number of creative industry stakeholders and licensing bodies argued for there to be a clearer distinction between human created (including AI-assisted) works and those works created solely by a computer. Some raised the prospect of confusion between human and AI-generated works, including a risk that humans may falsely attribute AI-generated works to themselves in order to claim protection. Proposals included identification systems such as watermarking to identify AI-generated works.

The majority of respondents said that further consideration will be needed in this area given the rapid development of AI technology. It was also felt that further guidance for AI system developers and users on the use of copyright-protected content is important to ensure that a compliant AI industry develops, providing certainty for all market participants.

Government response - protecting works generated by AI

The government agrees that the current approach to computer-generated works is unclear, and that there is a case for reconsidering it. We agree that ordinary copyright appears to offer adequate protection where a creator uses AI as a tool and the work they create expresses human creativity. But where a work is created by a computer without human creative input, the threshold for originality is unclear and the rationale for its protection – in particular the incentive effect – is not the same as that for human-authored works. In view of this, there may be a case for more limited protection, or no protection at all.

We also appreciate concerns that mass-produced works generated by AI could devalue human creators and agree that we should not undermine copyright’s central role in rewarding artistic expression and talent.

In view of this we will:

  • consult on whether to limit copyright in original works to human creations (including AI-assisted creations). In tandem with this, consult on whether or not to replace the existing protection for computer-generated works with a related right, with scope and duration reflecting investment in such works. Also consider whether action should be taken to reduce confusion between human and AI works, and the risk of false attribution

Copyright protection for AI software

There were two areas for consideration in this section of the call for views: whether current law provides sufficient protection for AI software and whether the licensing of AI software is appropriate.

Summary of responses:

The majority of respondents thought that copyright does not create any obstacles to creating or using AI software and overall considered the protection of software itself sufficient. Where organisations saw a need for greater protection for software, this was not through copyright but other rights such as patents.

Respondents generally felt that the way that copyright protects software as a “literary work” is clear. As software is developed by human authors writing source code, the owners of copyright in software are its developers. Copyright therefore provides incentives and rewards to creators of AI software.

On licensing, the majority of respondents felt that licensing can provide legal certainty for AI developers and does not inhibit the growth of AI technologies. There was agreement that systems that keep copyright content from being used without authorisation should be in place to protect human creators. Furthermore, respondents acknowledged that copyright licences have a proven track record of being suitable for previously unknown forms of exploitation - for example, streaming, video gaming, online education – so licensing should be appropriate for AI applications.

Government response – copyright protection for AI software

The responses to this section confirm our understanding of the current situation. Copyright provides adequate protection for software, allowing its creators – including those creating AI software – to benefit financially from their work. In view of this, we do not plan to do further work on how copyright protects software, or its licensing.

Trade Marks

The trade marks section of the call for views considered how the increasing prevalence of AI technology in consumers’ lives and its role in the marketplace may impact the trade mark legal framework.

The questions under trade marks covered two themes:

  • potential impact on trade mark law, including the concept of the “average consumer”;
  • infringement and liability

Responses that addressed the trade mark questions came from legal representatives (groups and individuals), industry and academia.

Trade mark law and the average consumer

The call for views discussed AI in relation to the traditional concepts within trade mark law, which are founded upon human participation in purchasing and brand interaction. Focus was drawn to the legal concept of the “average consumer”, used to measure the likelihood of confusion between signs. AI systems are already making purchasing suggestions to predict what someone might want to buy. This section asked whether AI systems were likely to affect trade mark law concepts, including the “average consumer”.

Summary of responses:

Respondents acknowledged that AI could reduce consumer involvement in purchasing decisions leading to less brand interaction. There were differing opinions regarding the likelihood of this and its impact on trade mark law.

Some felt that AI technology in e-commerce, automatic restocking, and “smart” home devices could reduce human interaction in purchasing decisions. Such technology may appear to be operated by consumers but is controlled by AI in the background, meaning only a limited range of personalised product suggestions is offered. Such control of product information may reduce the interaction between brands and consumers. Still, the overwhelming view was that AI was unlikely to replace humans in purchasing decisions in the immediate future.

Others felt this was a moot point, due to the improbability of AI technology ever being the primary or sole purchaser. The legal inability of AI to enter into contractual relations was also noted. One respondent highlighted that, even if an AI were deemed the primary purchaser in a transaction, a degree of human input would remain. A human may still control or influence the settings and direction of AI purchasing and remain able to purchase without using AI. Furthermore, humans will remain the ultimate target and end users for most products. With an understanding of brand origin, they will continue to be a core part of the advertising and marketing of a brand, and thus trade mark law.

In terms of the existing principles within trade mark law, the consensus was again that AI is not yet developed enough to have an impact on these legal concepts. The “average consumer” is still human and unlikely to change for the foreseeable future. AI is not a consumer in its own right; it is a technological tool assisting human consumers to make purchase decisions.

Other legal concepts were raised, including “imperfect recollection” and “likelihood of confusion”, and the reduced relevance of these to AI, though one respondent flagged that capability could still vary across AI systems. Another respondent posed a hypothetical question regarding whether sales to a purchasing AI could generate reputation and goodwill. Others raised the potential for infringement cases to shift from confusion at the initial point of sale towards an increased emphasis on post-sale confusion or harm.

Many of the above considerations were still deemed speculative at this stage, due to the continued role of humans in purchasing. However, further points were raised linked to the general increasing use of AI. Respondents suggested that, as voice assistant technologies develop further, the phonetic and aural comparison of marks may play a more important role in the future. Also, if AI takes a bigger role in consumers’ purchase decisions, trade mark law might have to rebalance the importance given to visual and conceptual similarities when considering likelihood of confusion between conflicting trade marks.

Infringement and liability

The second trade mark theme focused on infringement and liability. The Trade Marks Act 1994 (TMA) refers to “a person” who commits trade mark infringement. The questions explored whether the actions of AI can amount to trade mark infringement and, if so, who would be liable and whether those actions could be considered “use in the course of business”.

Summary of responses:

Respondents believed that AI systems currently lack legal personality and that human influence remains predominant. Responses agreed that AI is a tool and possible medium for infringement, but not an infringer. Therefore, there should be no impact on the drafting of infringement provisions of the TMA. There was universal agreement that AI cannot be liable itself for trade mark infringement as it is not considered a legal entity.

Examples of infringing actions included a suggestion that voice assistants could present a consumer with various options of a product to purchase, when in fact the consumer asked for a specific brand of the product. It was suggested that this amounts to consumer misdirection and should be treated differently from web search engine providers. Other respondents suggested that AI systems could be programmed to create online advertising material and trade marks very similar to existing ones, which could amount to infringement.

While respondents believed that the actions of AI may infringe a trade mark and that liability lies with a legal person, there were various suggestions regarding who this legal person may be. These included the operator or user of the AI, the provider of data or brand owners. One idea was that liability for trade mark infringement by AI would be determined by the context. Various considerations were raised relating to this, including “contributory infringement”, the act of infringement versus the act of “inducing” infringement, and possible contractual provisions between different parties, for example the AI supplier and operator.

Two respondents addressed the need to ensure the use of AI technology could not become a means for a third party to escape or avoid liability.

The consensus was that legislative changes are not required at present, but that this should be monitored as AI technology evolves. One response specifically caveated that if AI development raised any doubts regarding infringement through AI-mediated actions, it may be necessary to clarify section 10 of the TMA.

One respondent raised a conceivable grey area where potential infringement is caused by an AI system operating a website offering consumers infringing products from various jurisdictions. The products may not otherwise infringe in their “home” territory but could infringe in the consumer’s territory. Liability may be hard to determine where the seller, the AI enabling the sale, and the ultimate recipient, were all in different jurisdictions, especially where the recipient effectively becomes an importer of those goods. However, this example was provided in relation to a decentralised or independent AI system operating without apparent control. This appeared to be more of a theoretical scenario at this stage.

Government response – Trade Marks

The government has considered carefully all responses received and recognises the complexities in the relationship between trade marks and AI technologies. We acknowledge the clear steer provided by respondents that AI is not yet developed enough to impact the core legal concepts within trade mark law. In light of these responses, current trade mark legislation appears fit for purpose.

The government accepts the view provided by respondents that AI systems may infringe trade marks but that AI should be regarded as a tool being deployed under human direction and that liability lies with a legal person. Consequently, we accept the views shared that there is no or limited impact on infringement provisions in trade mark law. We also recognise the role of the courts in exploring these issues on a case by case basis.

Looking to the future, we recognise stakeholders’ expectations that human interaction will remain a key feature within purchasing, brand interaction and product consumption, and the recommendation that trade mark law should still be viewed through this lens. We also note the suggestion from respondents that, as voice assistant technologies develop, phonetic comparisons of marks might play a more important role in the likelihood of confusion in the future.

The government recognises the strong recommendation that trade mark law does not require changes at present, but should be monitored as AI technology develops in the medium to long term. We will therefore keep the matter under review, and remain receptive to further views from stakeholders on trade marks and AI.


Designs

The aim of the designs section was to:

  • understand the implications of AI in the design sector;
  • seek stakeholder views on ownership and authorship, and on infringement;
  • consider whether legislation may need to change to take account of AI becoming more common in future

Responses came from a range of interested parties including umbrella organisations, legal professionals, an academic and businesses involved in the AI industry.


Authorship and ownership

This section asked for stakeholder views on:

  • the law relating to the authorship and ownership of designs;
  • how current legal provisions apply to AI systems;
  • whether the law needs changing to recognise AI as the author of a design

Summary of responses:

Most respondents agreed that current legislation does not allow AI to “own” a design, as it does not have legal personality. Furthermore, they were of the view that AI should not be able to have such rights.

One respondent argued that, as design rights seek to encourage and reward innovation, and AI does not require these incentives, it would be inappropriate for AI to own a design. Others were concerned that AI systems should not replace human designers. One response said it would be undesirable and potentially dangerous to allow AI to own designs, and it would make the complicated design law system more complex.

Another response said that if changes are made to allow AI to be an author of a design, only fully autonomous and self-sufficient systems could be recognised as such. However, it was thought too early to examine this in detail, as currently AI cannot produce designs without human involvement.

Many responses referred to the provisions in the Registered Designs Act 1949 (RDA) relating to computer-generated works. The law says that where a design is generated by a computer, the person who made the arrangements necessary for the creation of the design will be considered the author. Some respondents felt this provision was adequate to deal with AI-generated designs. Others suggested that it could result in confusion about who owned the design created – the supplier of the AI system or the person who provided the data which resulted in the design. Some stakeholders suggested that government should keep these provisions under review as AI technology progresses.

Views were mixed on the provisions of the Community Design Regulation. Most responses to this question considered that the current provisions are flexible enough to encompass designs created by AI with human input. Other responses felt that there was the potential for legal uncertainty in the absence of specific provisions, especially for designs autonomously generated by AI.


Infringement

This section asked for stakeholder views on whether the legal test for deciding whether infringement has occurred, and who should be held liable for infringing a design, can be applied to AI systems.

Summary of responses

Most respondents suggested that, whilst AI may be capable of carrying out some infringing acts (for example offering, putting on the market and making), other infringing acts require the involvement of an individual with legal personality and so could not be done by AI. Several respondents suggested that AI could be a tool for infringement rather than the infringer itself.

There was general agreement that liability should fall on the operator or those individuals or organisations most closely involved in the infringement or using AI as a tool to infringe. This would address the concern raised that, as AI does not have legal personality, it cannot be liable for infringing acts. One response considered that AI should only be liable for infringement if it also qualifies for IP protection. Another response suggested that it could be difficult to assign liability in some cases, and that new structures may be needed to provide recompense to the owners of infringed designs - for example insurance taken out by the AI owner or operator, or a reasonable royalty approach. A further response suggested that infringement be considered under the general law of tort and new provision specific to IP should not be created.

Respondents generally agreed that the “informed user” test is broad and flexible and that the presence of AI should not impact the use of this well understood and objective legal concept. One response stated that as the informed user is a “legal fiction”, no situation could be envisaged in which the informed user could be deemed to be an AI system.

In addition to comments on these general themes, the following points were also made:

  • the nature of infringement is different for unregistered designs – which requires proof of copying – and registered designs, which does not. This response also raised the issue that it is not clear how it could be shown that AI carried out secondary infringement of UK design right, which requires knowledge or reason to believe an act is infringing
  • when considering infringement, the definition of “product” as “industrial or handicraft items” in the RDA could become problematic, as AI commercial activities are more likely to involve virtual products; the definition may need amending in due course to reflect this
  • when assessing individual character, a human observer may say that the designs do not create the same overall impression, when an AI has been programmed to specifically ensure that products are similar. The response suggests research be carried out to determine whether this is an issue and whether guidance should be developed on use of AI agents to assist in assessment of similarity

Government response - Designs

The government has carefully considered the responses received and recognises that the use of AI in generating designs is a developing area of technology. We acknowledge the view of stakeholders that current legislation is generally fit for purpose but that there is a need to monitor the situation as the AI systems used in the design process develop.

On authorship and ownership, we accept the views of stakeholders that an AI system should not be considered the author or owner of a design. On infringement, we accept the view provided by respondents that AI systems may be used to carry out certain infringing acts, but AI should be regarded as a tool being deployed under human direction and that liability lies with an individual. We acknowledge the widely-held view that the “informed user” test remains applicable to designs which have been generated by AI.

Looking to the future we recognise that, as AI systems develop and move towards full autonomy, we will need to consider whether current designs legislation remains relevant. There does not appear to be a need to act now in relation to designs, but the government will keep the matter under review.

Longer-term questions include, but are not limited to: provisions relating to designs generated by a computer and who is considered the author of a design so generated; whether there is a need to further establish an individual to whom liability for infringing acts should attach; and ensuring that established legal tests continue to remain relevant to AI-generated designs as case law develops.

We note that the approach that designs law takes toward computer-generated designs is similar to that taken by copyright law for computer-generated works. We will ensure that any review of copyright provisions takes this into account.

Although we do not intend to take any specific actions in relation to designs and AI, we will keep these issues under review, and will continue to engage with interested stakeholders. 

Trade secrets

This section asked for stakeholder views on the importance of trade secrets to the AI sector, in particular their role in protecting material not protected by formal IP rights.

Summary of responses

There was consensus among respondents that trade secret protection is important for the AI sector. Trade secrets were valued for their flexibility and simplicity, and innovations protected in this way are often difficult for others to reverse-engineer.

Patent protection was considered unsuitable in certain contexts, owing to the need to disclose details such as algorithms. Secrecy is often important in an expanding and competitive AI sector.

The responses to the consultation indicate that there are no changes required for trade secret protection in relation to AI.

Respondents noted some disadvantages of using trade secrets as a form of protection for AI. One point was that individuals may be unable to identify which aspects of large volumes of information could be confidential. It was also mentioned that trade secret protection may not be suitable where disclosure of details is required, for example for data protection or for health and safety reasons.

For information to benefit from trade secret protection, additional precautions are required - some of which can be time-consuming as well as costly. Some respondents stated that they would not support the use of trade secret protection for inventions derived from AI. This is because it could cover up harmful outcomes, such as a threat to health and safety. Respondents generally said that trade secrets are unlikely to create problems for ethical oversight of AI inventions, and that trade secret law is not intended to regulate ethical issues.

Government response - Trade secrets

The replies to this section of the consultation suggest that urgent change is not needed. As such, the government does not plan any action to amend trade secrets protection. However, we note the importance of trade secret protection for technologies such as AI and will monitor future developments. In particular, we note the relationship between patents and trade secrets, and will take this into account as we take forward work in this area.

  1. Note: throughout this document, a reference to a ‘legal person’ means an individual or corporation which has legal personality.