Guidance

Guidance to civil servants on use of generative AI

Updated 29 January 2024

This page has been superseded by the Generative AI Framework for HMG

Generative AI is a broad label used to describe any type of artificial intelligence (AI) that can be used to create new text, images, video, audio, or code. Large Language Models (LLMs) are part of this category of AI and produce text outputs.

ChatGPT and Google’s Bard are publicly available, web-based generative AI tools that allow users to enter text and seek a view from the system, or to ask the system to create textual output on a given subject. They allow individuals to summarise long articles, get an answer of a specific length to a question, or have code written for a described function.

This guidance covers these and other forms of generative AI, including systems such as DALL-E, which generates images from text descriptions, and BLOOM, a large language model that can also generate computer code.

As part of the Transforming for a Digital Future roadmap, all central government departments made a commitment to systematically identify and capture opportunities arising from emerging technologies. You are encouraged to be curious about these new technologies, expand your understanding of how they can be used and how they work, and use them within the parameters set out in this guidance. For all new technologies, we must be aware of the risks but alive to the opportunities they offer us.

If you would like to find out more, the Central Digital and Data Office (CDDO), in partnership with DSIT, runs several cross-government forums and working groups that explore the use cases, risks and opportunities new technologies offer. Work to develop our understanding continues, including DfE’s work on AI in education and NCSC articles on the subject.

This guidance outlines the expectations for how civil servants should approach the use of Large Language Models. New tools are emerging all the time. If you see something that you think has the potential to improve our work in government, please contact cddo@digital.cabinet-office.gov.uk.

Summary of guidance:

Never put sensitive information or personal data into these tools.

  • With appropriate care and consideration, generative AI can be helpful and assist with your work. However, you should be cautious and circumspect in your approach, noting the guidance provided here.
  • You should never input information into any of these tools that is classified, sensitive, or that reveals the intent of government where this is not already in the public domain. You should have regard to the principles of GDPR.
  • Outputs from generative AI are susceptible to bias and misinformation; they need to be checked and cited appropriately.

This guidance will be subject to a review after six months, to address emerging practices and better understanding of the use cases for this technology.

This guidance covers general principles for civil servants, how these apply to use of LLMs, the practicalities of using LLMs, and the government’s wider approach to generative AI.

As with all digital systems, users are responsible for their own actions when using such tools and are reminded of their obligations under GDPR.

Examples of how to use and how not to use generative AI in your role are included below.

General principles for Civil Service working

Civil servants should be inquisitive about new technologies, including generative AI tools. However, we should always exercise caution when using and sharing sensitive information or information which contains personal data. This includes being cautious about the sort of information that is entered into LLMs like ChatGPT.

The following general guidance always applies to the systems we use:

  1. You should always be aware of what information you have access to, what rights and restrictions apply to that information (from either private or government sources) and the conditions under which that information can or should be shared.
  2. You should always be mindful of any systems into which you enter information. Does that information need to be entered in that system? What will be done with the information once it has left your possession? What rights are being given away in placing the information elsewhere?

How this applies to generative AI

Generative AI tools are evolving at pace. We already have experience of using predecessors of this technology in both government and the NHS. However, new products such as the latest version of ChatGPT and Google’s Bard are a leap forward in publicly available generative AI tools.

We encourage you to explore this technology and consider the implications for your organisations and the services you provide. However, there are some ground rules you should keep in mind.

You should never put sensitive information or personal data into these tools. Beyond existing data protection laws, government has no oversight over how data entered into web-based generative AI tools is then used. Therefore, you should not put information into generative AI tools that, if compromised or lost, could have damaging consequences for individuals, groups of individuals, an organisation, or for government more generally.

When using generative AI, consider the three ‘Hows’:

  • How your question will be used by the system. These systems learn based on the information you enter. Just as you would not share work documents on social media sites, do not input such material into generative AI tools.
  • How answers from generative AI can mislead. These tools can produce credible-looking output. They can also offer different responses to the same question if it is posed more than once, and they may derive their answers from sources you would not trust in other contexts. Therefore, be aware of the potential for misinformation from these systems. Always apply the same high standards of rigour you would apply to anything you produce, and reference where you have sourced output from one of these tools.
  • How generative AI operates. A generative AI tool, such as an LLM, answers your question by probabilistically choosing words from a series of options it classifies as plausible (a minimal sketch of this sampling process follows this list). These tools cannot understand context or bias. Always treat the outputs these tools produce with caution, and challenge them using your own judgement and knowledge.
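
The probabilistic behaviour described above can be illustrated with a short sketch. This is a toy example only, with made-up words and scores; it is not how any particular product is implemented, but it shows why the same prompt can produce different answers on different runs.

    import math
    import random

    def sample_next_word(candidates):
        """Pick the next word from scored options, weighted by probability.

        candidates maps each plausible next word to a raw score. Real
        LLMs score tens of thousands of tokens with a neural network at
        every step; this toy uses three hand-picked options.
        """
        words = list(candidates)
        # Softmax: turn raw scores into a probability distribution.
        exps = [math.exp(candidates[w]) for w in words]
        total = sum(exps)
        probs = [e / total for e in exps]
        # Sample rather than always taking the top-scoring word, which
        # is why repeated runs can continue the same prompt differently.
        return random.choices(words, weights=probs, k=1)[0]

    # Hypothetical scores for words that might follow
    # "The committee will publish its report ..."
    options = {"shortly": 2.1, "today": 1.4, "annually": 0.3}
    print(sample_next_word(options))  # may differ on each run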

Practicalities of using generative AI in your role

  • References: Whether you use outputs from generative AI verbatim or with minor alterations, it is important to make clear to readers that one of these tools has been used. To do this, cite the tool in a footnote, giving its URL and any sources used as inputs.
  • Accounts: When using generative AI for one of the appropriate uses described in this guidance, you can use your gov.uk email address, but remain mindful of what you enter, in line with this guidance.

Government approach to generative AI

Generative AI is one amongst many leaps forward in technology that will have significant implications for the way we work. The creation of a new Department for Science, Innovation and Technology (DSIT) is in recognition of the importance of these technological developments for the country, and of the importance of seizing the opportunities they will bring.

This technology has great potential. Future versions of these tools will be refined to give even greater quality of output. The forthcoming AI regulation white paper will set direction for the regulation of AI, including generative AI, in a manner that is proportionate and pro-innovation, while protecting people and our fundamental values.

DSIT is leading on linking up the various cross-government working groups that are looking at generative AI, and AI in general. For more information or to get involved, please visit the DSIT website.

ANNEX: Examples of Civil Service use of generative AI in government

Appropriate (general) examples of using generative AI

Example 1: Using generative AI for research

Generative AI can be used as a research tool to help gather background information on a topic relating to your policy area that you are unfamiliar with. For example, you might be interested in how a country’s greenhouse gas emissions are measured. Generative AI can quickly provide you with an overview of this to aid your understanding.

When using generative AI in this way, the three ‘Hows’ need to be considered:

  • How your question is being used by the system: Does the information you have put in your question reveal a particular government policy interest that is not public, or the intent of government? If so, you should not enter it into these tools.
  • How answers from generative AI can mislead: Before using this information (both formally and informally), you should verify all facts it reports against other reliable sources that can be referenced or cited. Conversational tools such as ChatGPT aim to replicate human communication styles, and some do so very effectively. Information from these systems is often presented in a tone that, to the reader, appears confident, trustworthy and convincing. False information can appear at any point, and all facts and assertions must be cross-checked, no matter how authoritatively they appear to be presented.
  • How generative AI operates: Consider what context the tool might have missed. Do not use generative AI outputs as your only source of information on any one topic.

Example 2: Summarising information

Generative AI can be used to summarise publicly available information, such as a relevant academic or news article on a policy area, which could then be added to an annex of a briefing. This could save time when producing briefings.

When using generative AI in this way, the three ‘Hows’ need to be considered:

  • How your question is being used by the system: Is the article being summarised from a source that is publicly available? Practically, you will also need to check that the generative AI tool you are using can handle longer inputs, depending on the size of the article you are looking to summarise.
  • How answers from generative AI can mislead: Consider whether the summary is accurate or whether it is missing key information.
  • How generative AI operates: Consider what context the tool may miss. Does the summary reflect the overall sentiment of the article?

Specific (specialist) examples of using generative AI

Specialist example 3: Developing code

A front-end or full-stack developer may wish to use a generative AI tool to create a front-end interface for a website that will be released to the public, and use the outputs to speed up the work involved in design and build. This can save time coding and provide coding functions of which the developer may not be aware.

When using generative AI in this way, the three ‘Hows’ need to be considered:

  • How your question is being used by the system: Are you providing insight into your code or system, which should not be in the public domain or which gives information on the security posture of your application?
  • How answers from generative AI can mislead: You should double-check the output. Has it created the code you wanted, and only that? (A minimal sketch of this kind of check follows this list.)
  • How generative AI operates: Does the code give you the best answer to your question and is it reflective of new features available to you in the environment?
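
As a minimal sketch of the kind of check referred to above: the function below stands in for a hypothetical snippet returned by a generative AI tool (it is not output from any real product), and the unit tests probe both the behaviour that was asked for and an edge case that was not.

    import unittest

    # Hypothetical AI-generated helper: meant to normalise UK postcodes
    # to upper case with a single space before the final three characters.
    def format_postcode(raw):
        cleaned = "".join(raw.split()).upper()
        return cleaned[:-3] + " " + cleaned[-3:]

    class TestGeneratedCode(unittest.TestCase):
        def test_expected_case(self):
            # The behaviour described in the prompt works as intended.
            self.assertEqual(format_postcode("sw1a 1aa"), "SW1A 1AA")

        def test_unasked_edge_case(self):
            # An input the tool was never asked about: the generated code
            # silently returns " " instead of rejecting empty input, the
            # sort of gap this kind of review is meant to surface.
            self.assertEqual(format_postcode(""), " ")

    if __name__ == "__main__":
        unittest.main()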

Specialist example 4: Textual data analysis

Generative AI can be used by data scientists and machine learning specialists as a data mining tool to read and analyse large quantities of text-based information, to try to find anomalies, patterns and correlations that lead to a greater understanding of a problem space. This is an appropriate use case in which experts use generative AI for its intended purpose. They would not generally use publicly available tools for this type of work.

For example: A recent exercise looked at reports of obstetrics data, collected from healthcare providers. Using the non-patient identifiable data, an analysis was undertaken using a tool hosted on a government server. The answers revealed trends and clusters of circumstances which could be used as indicators of risk.
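
For illustration only, the sketch below uses a classical clustering technique (TF-IDF features with k-means from scikit-learn) on synthetic, non-identifiable records, to show the general style of pattern-finding described; it is not the tool or data used in the exercise above.

    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    # Synthetic free-text reports standing in for the kind of
    # non-identifiable records described above.
    reports = [
        "routine delivery, no complications recorded",
        "routine delivery, mother and baby well",
        "emergency intervention required after prolonged labour",
        "emergency caesarean following prolonged labour",
    ]

    # Turn free text into numeric features, then group similar reports.
    features = TfidfVectorizer().fit_transform(reports)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

    # Clusters of similar circumstances can then be reviewed with
    # clinicians and specialists to see whether they indicate risk.
    for report, label in zip(reports, labels):
        print(label, report)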

When using generative AI in this way, the three ‘Hows’ need to be considered:

  • How your question is being used by the system: Is the information safe? For example, in the above use case, the information was entered into a government-hosted tool that was not publicly available, and no patient-identifiable information was used. In that context, this was a safe use of the information.
  • How answers from generative AI can mislead: How can the outcomes be pressure-tested? For example, in the above instance, the team worked with clinicians and specialists to examine the output and identify outliers or anomalies.
  • How generative AI operates: Does the data tell the whole story? For example, in the above instance, the team worked to build the output into a wider investigation into the subject.

Inappropriate uses:

Example 1: Do not use for authoring messages or summarising facts for others

Generative AI has the ability to produce written outputs in various styles and formats. It is technically possible for one of these tools to write a paper regarding a change to an existing policy position. This is not an appropriate use of publicly available tools such as ChatGPT or Google Bard, because the new policy position would need to be entered into the tool first, which would contravene the requirement not to enter sensitive material.

Considering the first of the three ‘Hows’ demonstrates why:

  • How your question is being used by the system: The prompts needed to create this paper would reveal government intent that is not yet publicly known. This would mean sensitive information had been disclosed. You should not use generative AI for this purpose.

Generative AI is able to perform a degree of numerical analysis. Therefore, it would be technically possible to use a publicly available tool to analyse a data set you are looking to present in a government paper. This is not an appropriate use of a publicly available generative AI tool.

Considering the first of the three ‘Hows’ demonstrates why:

  • How your question is being used by the system: Is this data publicly available (eg as open data), or is the owner of this data happy for it to be input to LLMs? In this instance, consent has not been given, so you should not proceed or use generative AI for this purpose.