Transparency data

A review of the Council for Science and Technology

Published 30 November 2021

A comparative case study examining the development and use of recommendations made by the Council for Science and Technology between 2016 and 2018.

Government Office for Science
September 2021

Executive Summary

Objectives:

The Council for Science and Technology (CST) is an Expert Committee that advises on strategic science and technology issues that cut across the responsibilities of individual government departments. The CST’s advice is ordinarily outlined in a letter that is sent to the Prime Minister. Neither the impact of the CST’s advice nor its ways of working have been subject to formal review for several years. This review sought to address this gap by drawing on relevant literature and theory to review a subset of previous CST letters to better understand the development and use of recommendations.

Methods:

A comparative case study design was employed to address the review’s aims. Three case studies were selected. To be eligible for inclusion, letters had to be published between 2016 and 2018 and make recommendations for government to act on. Thereafter, letters were prioritised that i) were published in different years, ii) made more specific recommendations, and iii) were perceived to have achieved varying levels of impact, either between recommendations in the same letter or between letters.

After the aforementioned criteria were applied, the following letters were selected for inclusion: Reforming the Governance of Technological Innovation, Harnessing Technology to Meet Increasing Care Needs, and Improving Entrepreneurship Education. Semi-structured interviews were conducted with CST and secretariat members, wider Government Office for Science employees, policy leads, and other individuals who were responsible for the development or implementation of recommendations outlined in the selected case studies. All interviews were audio-recorded and transcribed verbatim. The data was analysed using framework analysis.

Results:

Twenty-four individuals participated in the review. Eight factors emerged from the data that differentiated the enactment of recommendations within and between case studies. These were: resources, vision, horizon scanning, trust building, specificity of recommendations, actionability of recommendations, follow-up translation activities from the CST, and ownership for implementation of recommendations. Participants identified two further actions, not taken in any of the case studies, that they perceived would have further increased the impact of recommendations. These were: wider visibility of CST letters, and greater transparency about stakeholder engagement activities.

Discussion:

The factors identified from this review may be used to interpret data suggesting an association, or lack thereof, between the CST’s recommendations and subsequent government decision-making in the three case studies included in the review. The challenges associated with evaluating the impact of the CST’s activities and outputs are discussed, and practical recommendations for the CST to consider are outlined (see Annex G). These relate to how the CST may develop and follow up on advice once it has been delivered.

Introduction

Evidence-based decision-making in government

Various structures have been established within and across governments in recognition of the value of evidence-informed policy-making.[footnote 1] Scientific advisory committees and councils (‘SACs’) often make up a part of a government’s scientific advisory landscape, alongside other bodies (for example, university policy engagement groups). While SACs differ in their characteristics and the environments in which they operate,[footnote 2] they broadly function to “help government departments (and other executive public bodies) access, interpret and understand the full range of relevant scientific information, and to make judgements about its relevance, potential and application”.[footnote 3] Despite the growing number of SACs operating worldwide[footnote 4], there is a weak evidence base to determine their effectiveness.[footnote 2] This represents a considerable knowledge gap and suggests that SACs “may not be operating as effectively as they could be”.[footnote 5]

The Council for Science and Technology

In the United Kingdom (UK) there are approximately 70 SACs currently operating across government departments.[footnote 6] The CST is one such SAC. Academics have defined councils like the CST as ‘supra-SACs’ as they are “directly linked to the Prime Minister’s office or equivalent” and “take an all‐of‐government and all‐sectors approach”.[footnote 5] Similar supra-SACs include the ‘Council for Science, Technology and Innovation’ in Japan, the ‘President’s Council of Advisors on Science and Technology’ in the United States, and the ‘High Council for Science and Technology’ in France. The CST aims to advise on “strategic science and technology issues that cut across the responsibilities of individual government departments”.[footnote 7] It is supported by a secretariat within the Government Office for Science (GOS). The CST has two Co-Chairs: an independent Co-Chair and the Government Chief Scientific Adviser (GCSA), who is appointed ex-officio. The CST’s 19 members are appointed either directly by the Prime Minister or ex-officio due to the position they hold, as President of a National Academy or as Chair of ‘UK Research and Innovation’ (UKRI).

The work of the CST ordinarily culminates in a letter that is sent via email to the Prime Minister, copies of which are sent to the relevant policy team leads for that topic in the Prime Minister’s office. The CST now routinely receives a response from the Prime Minister in the form of an emailed letter. Written communication between the CST and the Prime Minister is published on the CST’s webpage, along with the minutes of quarterly meetings and any detailed evidence reviews commissioned to inform the CST’s work. Further information on the CST’s ways of working and their Terms of Reference is available elsewhere.[footnote 7] Following the publication of the CST’s letter, the Co-Chairs typically invite the relevant Minister or other government actors to discuss the recommendations in greater detail. In line with the CST’s guiding documents, the Co-Chairs may request justification from the relevant Ministers if subsequent policy decisions are not consistent with the CST’s recommendations.[footnote 8] Nevertheless, Co-Chairs and members are cognisant that scientific evidence is just “one influence on policy amongst many”.[footnote 9]

History of CST evaluations

The most recent evaluation of the CST was conducted in 2013.[footnote 10] The evaluation was conducted as part of a Triennial Review and led by a Deputy Director within the Department for Business, Innovation and Skills. The aim of the review was to assess whether the CST was still needed as a non-departmental public body and whether it was complying with principles of good corporate governance.[footnote 10] In 2016 the CST was reclassified as an ‘Expert Committee’, defined as a “committee of independent specialists, administered from within a department”.[footnote 11] As a result of this new classification, it was no longer “subject to the same levels of review or scrutiny”.[footnote 11] In more recent years the CST’s “activities and achievements” have been published in the GOS Annual Report. The most recent Annual Report, published in 2018, provided an overview of the CST, listed the names of its new members, and highlighted some of the guests who had attended meetings that year. The report signposted readers to where they could find further information on the CST’s outputs and provided data on the number of views the CST’s webpage had recorded that year.[footnote 6]

Current mechanisms to review the CST’s operations and impact of outputs

There is an on-going need to assess a SAC’s operations to ensure the efficient use of resources and scientific expertise to maximise effectiveness. Neither the CST’s practices nor outputs have been subject to review since 2013. The Co-Chairs and the Secretary of the secretariat conduct informal interviews with members and government observers on an annual basis. The interviews provide an opportunity for members to:

  • share their views on future topics that the CST may address

  • discuss the impact of CST advice and the ways in which the CST could improve its ways of working (for example, meeting style, secretariat support)

  • discuss any other comments or Council business (for example members may receive feedback on their contribution or discuss appointment extensions)

While these interviews offer the opportunity for an informal discussion between Co-Chairs and members, they produce a relatively weak evidence base to guide changes to the CST’s ways of working or evaluate its outputs. Interviews are led by individuals known to members, questions are atheoretical, the findings are not subject to any form of qualitative analysis, and interviews are limited to actors on one side of the science-policy interface (customers of CST advice are not consulted).

Academic literature on scientific advisory committees and councils

There is a large body of academic literature on science advice and government decision-making.[footnote 12][footnote 13][footnote 14] Until recently, however, there has been limited research on SACs specifically.[footnote 2] The literature that has emerged in this area in recent years may be used by groups such as the CST to guide evaluations of their practices or outputs. A brief overview of the relevant literature base is outlined below.

SACs as boundary organisations

To date, academics have largely conceived of SACs as ‘boundary organisations’ that “create objects and standardised packages, involve actors and mediators from both sides of a boundary (for example, disciplines, sectors, cultures etc.)”, and operate “at the frontier of politics and science with distinct lines of accountability.”[footnote 15][footnote 16] Within that conceptualisation, a SAC’s effectiveness, which may be defined as “the ability to influence the behaviour of intended audiences by enhancing their knowledge of the consequences of their decisions”,[footnote 17] is hypothesised to be determined by successful translation of knowledge across a given science-policy boundary. Evidence suggests that for this to occur, the information exchanged at the boundary must be perceived by the customer as credible, relevant, and legitimate (‘CRELE’).[footnote 18] In this context, credibility refers to “the [perceived] quality, validity and scientific adequacy of the knowledge exchanged at the interface”. Relevance refers to “the responsiveness of the SAC to policy and societal needs”. Legitimacy refers to “the [perceived] fairness and balance of the SAC’s processes, including inclusiveness of other stakeholders, transparency, [and] fairness in [the] handling of diverging values, beliefs and interests.”[footnote 17]

A central challenge for knowledge translation is that actors on multiple sides of a boundary often “perceive and value credibility, relevance, and legitimacy differently.”[footnote 18] Hence, for SACs to influence the behaviour of intended audiences, a SAC’s target customers must be satisfied that the information is credible, relevant, and legitimate from their own perspective. This challenge of satisfying multiple stakeholders by creating credible, relevant, and legitimate information for successful knowledge translation is an ongoing and iterative process; SACs must continuously invest resources into developing and maintaining relationships to “build trust and mutual understanding”, whilst balancing the many trade-offs in operating at the science-policy boundary.[footnote 19] This task is further complicated by operating in an ever-changing political landscape, with non-stationary actors. Although the ‘CRELE +IT’ (iterativity) model has been contested by some,[footnote 20] it remains the dominant model underpinning the study of SACs and their relative effectiveness,[footnote 21] and has received support from scholars studying science-policy interfaces in multiple fields.[footnote 22][footnote 23]

Design features associated with effective science advisory committees and councils

More recent research suggests that certain institutional design features may increase the credibility, relevance, and legitimacy of a SAC,[footnote 24][footnote 25] which, in turn, may improve a SAC’s effectiveness. For example, an overview of systematic reviews identified certain design features as being predictive of a SAC’s effectiveness. These features included membership size (6 to 12 members being optimal), disciplinary membership composition (heterogeneity over homogeneity), established consensus development methods (for example, the Delphi technique), member and secretariat onboarding and training processes in which roles and communications practices are clearly defined, and public consultation exercises to inform a SAC’s recommendations.[footnote 24] The authors of the umbrella review noted that findings were limited by the number of systematic reviews that met inclusion criteria, the quality of reviews included, and the lack of evidence available beyond the health sector.[footnote 24]

Given the variety of contexts in which SACs operate, and the number of characteristics which differentiate SACs,[footnote 2] some caution should be exercised in wholly applying the aforementioned evidence base to guide the design or evaluation of all SACs. There is a paucity of research on supra-SACs specifically,[footnote 5][footnote 26] and calls within the science-policy field to adopt a more co-ordinated and systematised approach to the study of SACs using a taxonomy[footnote 2] have yet to be answered. Hence, further research is needed to determine the generalisability of this evidence base to the various sub-categories of SACs that exist, including supra-SACs. Nevertheless, the literature and theory available may be used to focus evaluation efforts and explore the development and use of outputs.

2021 review of the CST

The publication of relevant literature in recent years presents an opportunity to conduct a more rigorous review of both the CST’s practices and the use of its outputs. The aforementioned body of research on SACs suggests that a greater understanding of the CST’s credibility, relevance, legitimacy, and iterativity from the perspective of actors on both sides of the science-policy interface may help to address an important knowledge gap for a ‘boundary organisation’ such as the CST.[footnote 18] This may in turn help to interpret data suggesting an association, or lack thereof, between the CST’s recommendations and subsequent government decision-making.

This review aimed to:

  1. assess the CST’s credibility, relevance, and legitimacy from the perspective of multiple stakeholders involved in the development and use of CST recommendations

  2. explore associations between these features and perceptions regarding enactment of recommendations

Methods

COREQ statement

The Consolidated criteria for Reporting Qualitative research (COREQ) checklist,[footnote 27] developed to promote explicit and comprehensive reporting of qualitative studies, was used to guide reporting on the review author, study context and methods, findings, analysis and interpretations (see Annex A).

Data management, ethical considerations, research governance

The UK Health Research Authority’s research decision tool identified that ethical approval was not required for the purposes of this review. Nevertheless, suitable safeguards were put in place to ensure participants’ data and anonymity were protected, and that the Government Social Research standards were upheld.[footnote 28] All data management processes and participant documents were discussed with and approved by GOS information management specialists and a Deputy Director within GOS (see Annex B). Data was stored, analysed, and managed safely and securely in line with UK GDPR guidance.

Design

A comparative case study design[footnote 29] was employed to explore the development and use of recommendations outlined in CST letters. A comparative case study design “involves the analysis and synthesis of the similarities, differences and patterns across two or more cases that share a common focus or goal.”[footnote 30] This methodology enabled exploration of the review’s questions and generated insight into causal questions in the absence of a control group.

Defining criteria for case study eligibility

Three CST letters were purposefully selected as comparative case studies. The number of case studies explored was limited to three to achieve a balance in terms of breadth and depth of exploration within and across cases. The eligibility criteria and associated rationale are outlined below.

In the first instance, letters had to be delivered between 2016 and 2018 (inclusive); more recently published letters were excluded to allow time for government to have acted upon any recommendations. Letters then had to have made recommendations for government to act on (the CST’s letter entitled ‘Science and technology in the new government’s programme’, sent 21 July 2016, was not eligible as it mostly introduced the CST as a resource to the new government).

After these criteria were applied, eight possible letters for inclusion remained:

  • International Research and Innovation Collaboration (30 October 2018)
  • Reforming the Governance of Technological Innovation (27 September 2018)
  • Harnessing Technology to Meet Increasing Care Needs (05 October 2017)
  • Advice on the Industrial Strategy Challenge Fund (08 August 2017)
  • Science and Technology for Economic Benefit (19 July 2017)
  • Improving Entrepreneurship Education (21 October 2016)
  • Robotics, Automation and Artificial Intelligence (21 October 2016)
  • Industrial Strategy: Important Questions to Address (20 October 2016)

Letters were then grouped by year of publication (see Table 1). One letter was selected from each year where possible. The aim of this was to reduce the likelihood of findings focusing on a particular actor or historical event that was responsible for an observed outcome or lack thereof, and to try and explore broader barriers to and enablers of CST ‘impact’ across contexts (though noting there are still some overlapping factors such as CST members, Co-Chairs and political actors in the final case studies selected).

Table 1. CST letters published 2016-2018, eligible for case study inclusion, grouped by year of publication

2018
  • International Research and Innovation Collaboration
  • Reforming the Governance of Technological Innovation

2017
  • Harnessing Technology to Meet Increasing Care Needs
  • Advice on the Industrial Strategy Challenge Fund
  • Science and Technology for Economic Benefit

2016
  • Improving Entrepreneurship Education
  • Robotics, Automation and Artificial Intelligence
  • Industrial Strategy: Important Questions to Address

Within each year, letters with more specific recommendations were prioritised to enable greater comparison between CST advice and outcomes. Although CST recommendations are often purposefully less specific or prescriptive, such advice is more difficult to compare outcomes against, hence this criterion.

Finally, from the more specific letters identified within each year, the final sample had to include recommendations with varying levels of perceived traction, to allow for between-letter and between-outcome comparison. Perceived traction was based on information that the secretariat had compiled in March 2021 regarding government activity related to CST recommendations.

Case Study selection

After the aforementioned criteria were applied, the following three letters were selected:

Case Study 1 (Reforming the Governance of Technological Innovation; 2018) provides an example of where all recommendations made by the CST were enacted and the CST was specifically cited as a responsible actor for the outcome (such as in the 2019 White Paper ‘Regulation for the Fourth Industrial Revolution’).

Case Study 2 (Harnessing Technology to Meet Increasing Care Needs; 2017), in comparison, provides an example of where one recommendation made by the CST was not enacted (recommendation four) although the remaining three were.

Case Study 3 (Improving Entrepreneurship Education; 2016) is representative of a letter in which recommendations received varying degrees of traction (for example, recommendation two was enacted, recommendation four was partially enacted, and recommendation six was not).

The three selected letters were sent to the Prime Minister over a two-year period, each approximately 12 months apart (27 September 2018; 05 October 2017; 21 October 2016). Individually, the letters met the above criteria and, combined, they provided sufficient scope, in terms of their varying degrees of ‘success’, to allow for comparison for the purposes of the review’s aims.

Data collection

Participant recruitment and selection

Participation was sought from individuals who, at the time of the letter in question, were: CST members leading on the letter; members of the secretariat; GOS advisors; wider subject-knowledge stakeholders who advised on the evidence informing the content of letters; ‘target’ government customers charged with championing CST advice; or individuals responsible for implementing recommendations (for example, policy leads). A combination of purposive and snowball sampling was employed. Suitable participants were initially identified by searching relevant GOS files on Microsoft SharePoint. During interviews, participants were also invited to identify other individuals who might be useful to speak to in relation to the development or impact of that same letter or of a recommendation within it. These names were cross-checked against a list of potential interview candidates the author had developed from searching through relevant GOS files. In some instances, participants also provided email introductions to other potential interview candidates.

All participants were approached via email. They were informed about the nature and scope of the review and provided with an information sheet and privacy notice for further information. No incentives were offered for participation. Non-respondents were sent a follow-up email two weeks later. Participants indicating interest in the review were invited to outline times convenient to them to be interviewed and sent a consent form to be completed prior to the interview.

Interview setup

All interviews were conducted online using Microsoft Teams, except one, for which Zoom was used. Evidence suggests that there are only modest differences between the quality of in-person and online interviews.[footnote 31] No one other than the author and the participant was present for any of the interviews. Prior to each interview, the author introduced themselves and their background, and reminded participants of the aims and scope of the review. Participants were invited to ask the author any questions before starting the interview. If the consent form had not been returned to the author prior to the interview, an oral consent process was used; participants were asked to indicate oral consent to each of the items outlined in the consent form. All interviews were audio recorded. The author also took notes throughout interviews as a reminder of follow-up questions or clarifications; these notes also served as a backup in case the audio recording failed.

A semi-structured approach was used to guide interview questions. Participants were asked about the development of recommendations, and their perceptions of if and why recommendations informed government decision-making. Participants were asked about the letter as a whole and individual recommendations therein. The order and framing of the questions varied slightly depending on the participant’s role in the letter in question. The questions were based on relevant literature and piloted with GOS employees. See Annex C for a copy of the semi-structured interview guide.

Audio recordings were transcribed verbatim, facilitated by the Microsoft Word ‘Dictate’ function. Automated transcripts were checked and corrected against the audio files to ensure accuracy. All participants were given a unique identifier number and transcription texts were pseudonymised in the process. Only the review author had access to the pseudonymisation codes. Participants were asked to indicate if they would like to be contacted to approve anonymised quotes from their interview to be included in this report and were subsequently contacted accordingly.

Qualitative data analysis

A ‘best fit’ framework method[footnote 32] was employed to analyse the data. This approach offered a highly structured, largely data-driven way to organise and analyse the data. The method is not aligned with any particular epistemological, philosophical, or theoretical approach. It facilitates constant comparison of data within and between case study interviews across the resulting matrix. It also does not demand the use of detailed conventions of dialogue transcription, which can be difficult to read, as the content, rather than its interpretation, is of primary interest to the researcher.[footnote 33] Within the review’s limited time frame, this method enabled a rapid, data-driven exploration of the review’s questions, resulting in a more transparent and rigorous analysis of the data. The approach involves five steps: familiarisation, identification of an appropriate thematic framework, indexing, charting and mapping, and interpretation. These are outlined below.

Familiarisation:

Familiarisation with the interviews was achieved by re-listening to all and then parts of the audio recording, transcribing the data, and then re-reading the transcripts before coding.

Identification of an appropriate thematic framework:

A brief literature review was conducted, using a combination of key search terms, to identify a suitable framework. Sarkki and colleagues’ framework[footnote 19] for analysing science–policy interfaces was identified from the literature as a relevant, recent and rigorous framework against which to code the data. The authors posit that 14 features, outlined in Table 2, influence a scientific advisory committee’s credibility, relevance, legitimacy, and iterativity. Further information about the framework, and the evidence used to develop it, is outlined elsewhere.[footnote 19]

Indexing:

Indexing involves the systematic application of codes from the agreed analytical framework to the whole dataset. The author re-read the transcripts line by line and coded them deductively against the codes outlined in the framework. While each of the codes was defined by Sarkki and colleagues, which facilitated coding and helped to stay as close to the data as possible, some codes required revision or clarification. Keywords were developed for each code to improve coding reliability. Any data that was not captured by the framework codes was initially coded as ‘other’; these ‘other’ codes were then labelled based on the data. The transcripts were re-read and re-coded several times until no new codes emerged. A second coder checked 10 percent of the coded transcripts, selected at random. This involved the second coder familiarising themselves with the codes, definitions, and keywords, and then critically examining and verifying or challenging the first reviewer’s coding of the transcripts.

Charting and mapping:

A Microsoft Excel spreadsheet was used to sort the coded transcript data into a framework matrix. The framework matrix provided an overview of all the data that had been coded, organised by codes (original and new), case studies, and participants.

Interpretation:

Finally, similarities and differences within and between the case studies were explored by examining codes across the matrix. This facilitated exploration of the review’s aims and supported interpretation of between-outcome differences.

Table 2. Sarkki & colleagues’ framework to assess the influence of science-policy interfaces, used to analyse interview data[footnote 19]

Structures
  • Independence: Freedom from external control, neutrality or transparency about possible bias, range of membership
  • Participation: Range of relevant expertise and interests included; competence of participants; openness to new participants
  • Resources: Financial resources, human resources (e.g. leadership, champions, ambassadors, translators), networks, time

Objectives
  • Vision: Clarity, scope and transparency of the vision; transparency of the objectives of the science-policy interface
  • Balancing supply and demand: Demand-pull from policy (mandates); supply-driven promotion of research; emerging issues
  • Horizon scanning: Procedures to anticipate science and policy developments

Processes
  • Continuity: Continuity of science-policy interface work on the same issues; continuity of personnel; iterative processes among science-policy interface participants
  • Conflict management: Strategies such as third-party facilitation; allowing sufficient time for compromises
  • Trust building: Possibilities to participate in discussions; clear procedures; opportunities for informal discussions; transparency about processes and outputs
  • Capacity building: Helping policy makers to understand science and scientists to understand policy making; building capacities for further science-policy interface work
  • Adaptability: Responsiveness to changing contexts; flexibility to change

Outputs
  • Knowledge transfer (referring to the output itself): Timely in respect to policy needs, accessible, comprehensive; addressing users’ information needs
  • Quality assessment: Processes to ensure quality, comprehensiveness, transparency, robustness, and management of uncertainty
  • Translation (referring to dissemination of the output): Continuous efforts to convey messages across different domains and actors and to make the message relevant for various audiences via different formats

‘Structures’ refers to the institutional arrangements that have been set up and developed to achieve the objectives or functions of a science-policy interface.

‘Objectives’ refers to the stated aims of the science-policy interface and, in some cases, also to ‘realised’ functions that depart from the stated objectives. Objectives provide the basis and scope for science-policy interfaces to influence selected target audiences.

‘Processes’ refers to the actions and interactions through which a science-policy interface produces outputs and endeavours to influence behaviour.

‘Outputs’ refers to the specific products developed through the processes, including reports, recommendations, meetings, scenarios, indicators, databases, websites, press releases, and so on.

Results

Participants

Thirty-two individuals were invited to participate in the review. Twenty-seven responded to the email invitation (an 84 percent response rate), 24 of whom (89 percent) accepted the offer, provided consent, and participated in the review. All three respondents who declined to participate felt that they would not be able to provide valuable information, either because the time that had elapsed meant they could not recall useful details about the letter in question or because they felt they had limited sight of the letter’s broader development or use. Non-respondents and non-participants were more likely to be female, to have been involved in older case studies, and to have held a policy position rather than a position within GOS or the CST.

Eight individuals were interviewed per case study. Participants for each case study comprised a combination of CST and secretariat members, other GOS employees, and policy leads. Some participants played a role in more than one case study; in those instances, the focus of the interview remained the case study for which they were contacted, and they were invited to comment on other CST letters at the end of the interview. See Table 3 for an overview of participants; further specifics were removed to protect the anonymity of participants. All interviews were conducted between May and August 2021 inclusive. Interviews ranged in length from 18 to 44 minutes. No repeat interviews were carried out.

Case study recommendations and subsequent government activity

Recommendations in case studies 1, 2, and 3 and subsequent government activity are outlined in Annexes D, E, and F respectively. The outcomes listed in the summary tables are based on information compiled by the secretariat in March 2021 and supplemented by activity known to individuals participating in this review.

Factors differentiating the enactment of recommendations

Six codes emerged from the data that differentiated the enactment of recommendations within and between case studies. These were: resources, vision, horizon scanning, trust building, knowledge transfer, and translation. Two distinct sub-themes within knowledge transfer were identified: specificity, and actionability of recommendations. One additional theme emerged from open coding that was not captured by the original framework: ownership for implementation of recommendations.

Each of the themes is discussed below with reference to, and in order of, the larger categories of: Structures (resources), Objectives (vision and horizon scanning), Processes (trust building), Outputs (knowledge transfer and translation), and Other (ownership for implementation of recommendations).

Table 3. Summary of participants

Interview 1 (14/05/2021): CST secretariat
Interview 2 (20/05/2021): Other GOS employee
Interview 3 (20/05/2021): Other
Interview 4 (20/05/2021): Other
Interview 5 (27/05/2021): Other
Interview 6 (16/06/2021): Policy lead
Interview 7 (21/05/2021): CST secretariat
Interview 8 (24/06/2021): CST member
Interview 9 (09/07/2021): Other GOS employee
Interview 10 (12/07/2021): Other GOS employee
Interview 11 (22/07/2021): CST member
Interview 12 (04/08/2021): CST member
Interview 13 (20/07/2021): Policy lead
Interview 14 (28/07/2021): Other
Interview 15 (10/08/2021): Policy lead
Interview 16 (05/08/2021): Other
Interview 17 (17/06/2021): CST secretariat
Interview 18 (15/06/2021): Other GOS employee
Interview 19 (15/06/2021): Other
Interview 20 (17/06/2021): CST secretariat
Interview 21 (07/07/2021): Policy lead
Interview 22 (09/07/2021): CST member
Interview 23 (14/07/2021): CST member
Interview 24 (22/07/2021): Policy lead

“Role” denotes the official role of the participant at the time of the case study, though some participants moved roles during the development of letters or implementation of recommendations.

“Other GOS employee” includes GOS Deputy Directors, and employees from teams other than the secretariat within GOS who supported the development and/or implementation of the letter.

“Other” includes individuals collecting and providing evidence to inform the letter content, individuals who held positions in resulting structures that the CST advised the establishment of, and Chief Scientific Advisors affiliated with the letter in question.

Structures

Resources: Adequate secretariat support was identified as important across all case studies.

I think it’s absolutely essential that you have a good secretariat to work with you. Without any doubt, one reason why this letter was a success was bringing on board key secretariat assistance from the wider Go-Science team

Differences emerged between case studies in terms of comments regarding the level of secretariat support available at the time that letters were being developed. In the case of some letters, members felt that they had adequate support from the secretariat and that the secretariat worked effectively.

I think the people in GO-Science were really excellent. They were so much better than civil servants I met in some other areas

In other case studies, it was felt that the secretariat was insufficiently staffed, and this was identified as being problematic.

I think the secretariat was rather underpowered at the time. As we were developing this, there was a change, and the secretariat wasn’t able to support us, and I think that was potentially a bit of a problem.

Where there was not sufficient support available within the secretariat for a given letter, employees were pulled in from other teams across GOS for some of the case studies.

I think, at the time, CST’s secretariat was somewhat underpowered. It didn’t really have capability to think about the issues they were writing letters on. They did the paperwork and logistics for CST quite well. But actually, as soon as you tried to get into substance of what CST had to say, there wasn’t really any capability to do that. It often came to someone else in GO-Science.

I was brought in because [the secretariat] were short of resource at that particular point. So, I don’t know how long they might have known that they needed the information but certainly for me, it was all done to a very short time scale.

Participants who were recruited on this basis felt that the secretariat work wasn’t a priority for them, and that the short time frame may have compromised the quality of their work produced.

I think the short time scale was probably an issue. I don’t know whether the quality of what I put forward was compromised by the short time scale, but I can imagine that it could be

I can imagine it may have just been one thing I was picking up when I was… I was generally busy with other things, which, you know, that would be one reflection, whether that is the best way of doing things, is people just being parachuted in to do a short piece of work. I know some people are in favour of that, and it does allow the person doing it to have a taste of something different, outside of their day job, but whether that’s going to get you the best information, and the most informed response…I’m not sure about that.

Participants felt that greater forward planning and keeping the subject matter focus of CST letters as close to expertise available within the membership as possible could have helped with easing some of this resourcing pressure.

I think we had enough, but it could have been more sort of systematic in a way, we made provision to have that resource as and when we needed it, rather than cobbling end together from other teams.

So CST is essentially a set membership and they will all have an area of specialism and… the… frankly the danger is that once they’re off their area of specialism, they’re all incredibly intelligent and well-informed people but they’re no longer specialists… so depending on the issue you could have very little real specialism on CST on quite an important issue, so it puts even more weight onto the GO-Science team to make sure its sensible.

Objectives

Vision:

Early clarification and communication of the objectives of the letter was highlighted as being important.

I think the things that lead to success are not …I guess I’d say they’re not that surprising. I think early clarity on what’s the problem you’re trying to solve. What exactly is it that you are trying to achieve? That is really important.

I think the publication of a letter is fine, but I think in terms of what CST might be trying to do, there is the question that says: once the letter is published, what then? You know, is there a kind of programme? Are you clear what change you want to see?

One of the really important things for CST is to kind of… know what it wants the government to achieve

Participants differentiated case study 1 from other CST letters with respect to the focus of the letter:

it [was] probably unusual in that sense (*a CST member providing a detailed proposal) because usually there’s just kind of a “I think it would be interesting to look at something vague

Participants commenting on the earlier stages of case study 2 noted that the letter lacked a clear focus. They felt that this was problematic and contributed to the letter taking longer to develop:

I think the letter potentially could have lacked a lot of focus.

Some of the earlier iterations were more academic; ‘there’s something interesting going on there in ageing and demography’ piece. You can see from an academic perspective, understanding that dynamic is really important. And I think government does need to be able to think strategically about that. But CST struggled to articulate in a letter ‘Dear Prime Minister, this is what you need to do’. It was only actually until we had this narrower frame and involvement from policy officials that we got that. The value came from experts and policy officials both contributing.

Horizon scanning:

Horizon scanning efforts were perceived as being important in achieving good outcomes for recommendations.

Part of getting a good outcome is recognising the context in which the letter is landing.

One of the old Cabinet Secretary’s used to say that sort of being a civil servant, generally, if you had to find a perfect analogy for it, would be something like being a sailor, cause it’s kind of like… you kind of just have to find the exact moment the wind is going in the right direction, and then you kind of jump in and do that

Some participants felt that GOS, as an organisation, was particularly well-placed to anticipate policy developments.

If you want politicians to pay attention to what you’re saying, you’ve got to hit their… hit their attention scanning system (*laughs) at the right point, and you’ve got to know what they’re thinking, and what they are preoccupied about in order to do that, and it… it’s really part of your strength as an organisation.

But many participants commented on members needing more support from GOS to be informed of relevant developments:

I think it would be helpful for CST to think about that in future, ‘cause I think they’re kind of not politically… they don’t have a great political background, you know, that’s not where they’re from…. and that’s probably where they need most help

Comments on horizon scanning efforts distinguished case study interviews, with participants commenting on the amount of resources invested into such activities. In case study 1, participants from GOS noted that a concerted effort was made to invest resources into horizon scanning:

At the time, there was a lot of focus on… using other government departments and liaising with other government departments to see what government was doing.

Conversely, a lack of engagement with policy officials in case study 2 contributed to an inability to frame the letter earlier in the process.

I think the policy engagement hadn’t got to the point, in that early version of the letter, where we knew where the letter could land, and how to frame in a way that government would find it helpful. So maybe that’s what was missing early on

Participants in case study 3 commented that Higher Education was moving out of the Department for Business, Innovation and Skills during that period, and the details or consequences of that move for the CST’s recommendations were not outlined to the group.

I mean it was just clear that it was all a bit of a mess… I think would be the way I would describe it. So nobody was quite sure what was going on.

Processes

Trust building:

Stakeholder engagement activities emerged as critical elements of the process, and participants across case studies commented on the longer-term benefits of ongoing engagement with government stakeholders to maximise the prospects of implementation.

I think if you’ve got someone who has really bought into the process, you’re much more likely for something to come of it.

In discussing the processes that led to the enactment of recommendations, participants highlighted the importance of investing time into mapping who the correct stakeholders were:

So obviously at the outset a certain amount of time was spent identifying the right people to engage because that’s really important.

In speaking about recommendations that were not enacted, participants felt that more time could have been spent on stakeholder mapping activities:

I think if I was doing this today, having spent longer… I would want to really try to understand who were some of the potential decision-makers… or owners of some of the levers of change, and have the secretariat help, you know, really think that through a little harder, you know, and probably set up meetings and other such things with some of those people…

The amount of time then spent interacting with relevant stakeholders also differentiated the development of recommendations that were and were not enacted.

For example, in case study 1 stakeholders were brought into conversations with the CST as they were developing their recommendations, and kept up to date with the process. They reflected positively on the experience:

From that point on, it was kind of the start of a beautiful relationship in many respects, in terms of a good conversation with the secretariat about how the findings were going to be developed, how they might land, what we thought would be achievable, what we thought might be difficult, pretty open and frank sharing of our evidence base with the secretariat, to help inform their conclusions

In the development of other recommendations, stakeholder engagement activities were far more limited:

It was probably towards.. the lighter end […], talking to a few key people…

Stakeholders who did not enact CST recommendations characterised their interactions with the CST and the secretariat as brief, and perceived the CST as being unengaged in details regarding implementation:

I suppose I would characterise the overall process as, you know, fellow travellers … on a journey, briefly interacting for a short while on that journey, broadly leaving, you know, happy and aligned, but not really having… that much of an interaction or contact

The main lesson I took from it was: there’s an important government body that is interested in this kind of data and they have some ideas about what sort of data might be needed. They’re not particularly well informed about exactly how we’re going to be able to do that, and they’re not really engaging with us directly to help make that possible

Outputs

Knowledge transfer:

Two sub-themes within ‘knowledge transfer’ emerged: specificity of recommendations and actionability of recommendations. They are each outlined below.

Specificity of recommendations

The importance of making specific recommendations that policy teams could action was highlighted as something that the CST should consider when consolidating evidence into recommendations:

In particular, how to move from, sort of, generic “wouldn’t it be a nice idea if people had [this]” which is, you know, a fine and useful thing for people to say, and it’s probably true, but coming up with some more specific and concrete actions…

Participants perceived differences in terms of specificity of recommendations within case studies:

We had three really tight recommendations that that policy team had really bought into, and wanted to deliver, and in fact had helped us come up with, if not written for themselves. Whereas on that fourth one, we didn’t have a clear recommendation.

Policy leads commented on the impact of less specific recommendations and identified this as being a barrier to implementation.

After all industry itself is a very fluid term. Who is industry? You’re going to tell Google and Apple to sit down, and have a sort of…? I mean I wouldn’t know really where to start there.

They felt that implementing non-specific recommendations demanded more effort and resources on their part, which were already limited:

But, you know, for an organisation that at the time didn’t have a policy function at all, you know, having to sort of draw some lines. You need bright lines really, from the policy intent, to definitional materials, and the letter doesn’t really give that, so I imagine if it had said something like “for those working for a company, what company it is, according to the definitions here”, we might have said “oh right, let’s have a look at the questions, and see if we can get that in”, but small, medium, large, if I ask you: the company you just started, is it small medium or large? You know, “I’ve got five employees, I literally don’t know.

Like having those kind[s] of directives without definitions. Definitions are what make our world go round, and we see a lot of people saying, you know, “This should happen. Jolly, jolly good, this should happen” you know, lots of good… good intentions, but very rarely does someone say “here is a definition of a thing that we believe exists, and we’d like to capture data on”, and that didn’t really come across in the letter, and that would have been the kind of intervention that would have been most helpful for us, I think, …. that and to kind of give us… give us language that we could use, to collect better data, so we were kind of on own a little bit on that

‘Actionability’ of recommendations

Participants felt that some of the recommendations were not ‘actionable’, and that this may also help to explain the lack of success of some recommendations.

Policy leads drew comparisons between recommendations within letters and felt that part of the reason some of the recommendations were enacted was that they were actions that the government could take:

They sound like they’re more like starting with government kind of action, actions that the government can take

I think it was a really good recommendation, because it’s… it was something that was tight enough to be able to be actioned, from a sort of policy team perspective, if that makes sense. It’s something within if you like [a] civil servants’ gift to do it

Conversely, in speaking about recommendations that were not enacted, policy leads felt that these recommendations were beyond the control of the government:

I think recommendation four required…. it was more out of government control.

As I say, any…. any lack of momentum in the implementation was more just because they weren’t all within government’s control

Translation

CST follow-up

Participants across all case studies and positions commented that letters did not come up in any conversations in their circles following publication.

The need for CST members to communicate the content of the letter to relevant audiences following publication was highlighted as important for progress:

I think if you can have an expert able [and] willing to champion it from beginning to end, then that’s an important process.

Have you looked to see if it has happened, that change you want to see? If the change hasn’t happened that you wanted to see… might you… how would you know? And if you did know, would you care? And [if] you did care, what might you do? And so I think there is a question that says: the letter has been published, is that the end of the story, or is that the start of the journey?

Follow-up activities from CST members distinguished case study interviews. In speaking about recommendations that were enacted, participants spoke about CST recommendations being championed by members of the CST or GOS employees and translated for different audiences.

He brought [the GCSA], who spoke to the letter, and gave, kind of everybody around the table the challenge of why this needed to be taken forward

I was in and out of Cabinet Office and No10 quite a lot though those few months. Helping them understand what the letter said, and what we actually meant behind it, how that could be translated into policy. And then helping the delivery teams in government get their arguments right, for example for more funding to do it.

In other instances, policy leads felt that there was no one with whom to discuss the content of the letter, and this resulted in a lack of progress:

When it came out, I didn’t have anyone to talk to about it, so… so essentially I saw it. I thought ‘that’s nice, that’ll do’, you know, and then [I] was on to the next thing really, so yeah, it was a ….. (*long pause) not a thing, I guess.

Some policy leads suggested that if the CST followed-up on recommendations, then this could act as a stimulus for progress in terms of implementation.

There could be a bit more, I guess, bite, if there was a follow up letter to the PM, you know that said, “We thought this was really important. Well done on doing one, two and three. You never did four.” and “Hasn’t momentum, kind of, waned on this a bit? What are you doing?” Because, you know, as you say those policy windows, sometimes you need to create the policy windows, and I think if it’s something that they felt very strongly about, that sort of reminder would be really helpful.

Other

Ownership for implementation of recommendations

Finally, ownership for recommendation implementation emerged as a key theme from open coding across case studies. In the case of some letters, recommendations could be implemented by a single team.

The substance of the recommendations was implemented entirely by BRE

Participants felt that where responsibility for implementation lay with a single stakeholder or group, this enabled uptake of recommendations.

Partly it was successful because actually all the previous work that GO-Science had done through Foresight to create the conditions for this letter to work. That meant there was a policy team who wanted the letter and had worked with us for a few years. We were credible to them, and we didn’t have to go cross government and try and pull everybody together to think about ageing and CST’s recommendations. We had one team whose job it was to deliver all of ageing policy, partly because we’d argued for that.

Responsibility for implementation was more diffuse for other recommendations:

I think this is a topic that sits sort of awkwardly and so it’s a pretty tricky letter, in that it doesn’t have a natural home. It’s a place where there’s a very diffuse set of stakeholders

Recommendations that required multiple government departments or teams to come together for implementation were perceived by many participants as problematic:

Because it was so broad and so many people needed to act together to make them happen, it meant that there was perhaps a slight lack of… ownership on anyone’s behalf and that was probably the most problematic thing of the whole, you know, the whole process really

This is problem of ‘who was supposed to do it?’, I think. So, there’s no obvious owner for it, I think. And to be honest, the kind of recommendations that have had most success in, are the things that are very easy for government to do

Some policy leads felt that CST mistakenly assumed that other stakeholders would initiate coming together themselves, and that this may have been an oversight:

CST saw it, I think, as the government having to be responsible and take sort of.. take … take ownership to an extent and facilitate and bring people together

However, they felt that where ownership for implementation of a recommendation was diffuse, the CST would need either to identify a clear owner or to take a co-ordinating role itself for progress to be observed:

In terms of this particular issue (*implementation), more of a recognition of the incredibly cross cutting nature of the recommendations and either being much more directive about who was responsible for each one, and, you know, therefore if that… yeah if that body welcomed that recommendation, then they were then responsible for updating and, you know, making sure there was progress, so [that] there was a clearer… clearer sense of responsibility and accountability, or alternatively CST taking a facilitative role in bringing together the different groups on a regular basis to… to sort of drive that progress themselves

Further considerations

The previous section highlighted differences between case studies that distinguished outcomes. Two further themes emerged regarding actions that the CST could have taken which participants, across positions and case studies, perceived would have further strengthened the impact of recommendations. These actions related to dissemination of CST letters and transparency about stakeholder engagement activities. They are outlined below, with reference to illustrative quotations from interviews.

Wider visibility of the CST’s outputs

Participants across case studies felt that further efforts could have been made to disseminate and publicise the CST’s recommendations.

They commented that the CST’s outputs have a low profile:

At the moment it is rather discreet advice and so far as I’m aware, most of it doesn’t see the light of day

Unless you physically go on the gov.uk website, it’s not quite clear what channel they have for pushing out messages once CST delivers something

Stakeholders felt that the low profile of letters meant that they forgot about using or promoting the letters, and that this in turn reduced their impact:

If I had understood, or remembered, that the CST letter to the Prime Minister was public information, we might have stuck it on the website, a year ago, when we first created this space

Therefore, I didn’t use that letter…. the existence of that letter in any policy work as supporting evidence publicly

Members further noted that the low profile of the letters likely reduced the chance of policy leads referring back to older CST letters to address ongoing issues.

My guess is that they don’t always know [the letters] exist. I’m not sure that’s people’s go-to behaviour – “oh let me see what CST had to say about this”, you know, I don’t think it’s a natural kind of activity.

Special Advisers (SpADs) felt that greater awareness of CST letters amongst politicians would also increase the impact of recommendations for policy outcomes:

Probably there’s going to be really big decisions that are made last minute with a close coterie of advisers, or an argument between two Cabinet-level politicians, and so the CST might have a formal role, where it sits then as a proper council with the Prime Minister, but actually the key influence will be becoming known to all the people around it, so that everyone kind of agrees about the information.

Transparency of stakeholder engagement activities

The other common action that participants across case studies identified as having the potential to further improve the impact of the CST’s recommendations was greater transparency regarding which stakeholders the CST had engaged with to inform its recommendations.

The evidence gathering and review phases were not transparent to any of the stakeholders across case studies.

If I’m honest that side of things was… was reasonably opaque. I knew when the council was meeting, and I knew the ideas that were being put for them. I didn’t know… who was driving the content of the discussions, and I didn’t know what, if any, external engagement they were doing

Right so I don’t know what processes they used

Yeah, I’m not sure if it’s a kind of formal methodology, I knew what it was

Participants felt that greater transparency regarding how recommendations were developed, in terms of who the CST had engaged with, could have improved the credibility and subsequent uptake of recommendations.

I mean thinking back to what it’s like being a SpAD, when you’re trying to win an argument, you’ve got an asset like that, you want to know how they are… going around, pushing it round the place, cause you never know who is gonna be significant in a meeting, and you become aware of this more, even as you move further away from government, but there’s going to be a decisive moment at some point, and you have no idea how that decisive moment will occur, it can be trivial or it can be vague, worked out, and so you want to know that somebody is doing that work (*stakeholder engagement) effectively

Discussion

Review of aims

This review examined a subset of CST letters published between 2016 and 2018 to explore the development and subsequent use of recommendations, using a comparative case study approach.

Summary of findings

Eight factors emerged from interview data that differentiated the enactment of CST recommendations within and between case studies. These were: resources, vision, horizon scanning, trust building, specificity of recommendations, actionability of recommendations, follow-up activities from the CST or GOS, and ownership for implementation of recommendations. One theme emerged from the data that was not present in the original framework: ownership for implementation of recommendations. This may be explained by the cross-cutting nature of the CST’s remit, and it represents a potential additional factor that supra-SACs, such as the CST, may wish to consider when making recommendations to maximise the prospects of uptake.

Two further actions emerged that participants felt could have further increased the impact of the letters across case studies: wider visibility of CST outputs and greater transparency of stakeholder engagement activities. A number of recommendations that may help to address the issues highlighted by participants in this review are outlined in Annex G for CST members and the sponsor department (GOS) to consider.

Addressing these factors demands resources, and they should be considered in tandem with the other factors outlined in Sarkki’s framework[footnote 19] (for example, participation, conflict management, and so on) to develop credible, relevant, and legitimate recommendations.

Strengths and limitations

This review makes several novel contributions to the CST’s understanding of its operations and outputs. First, known evaluations of the CST’s practices or outputs to date have been atheoretical.[footnote 10] This review drew on relevant theory and literature to conduct a more robust evaluation of the CST. Second, perspectives were obtained from actors on both sides of the science-policy interface regarding the development and use of CST recommendations, enabling a more comprehensive assessment of the CST’s processes and outputs. Finally, this review addressed the limitations identified in existing reviews of supra-SACs (limited to those published or translated into English)[footnote 26] by collecting primary data, thereby overcoming the biases associated with document analysis (for example, incomplete or insufficient data available to address the review’s aims).[footnote 34]

This review is subject to limitations which must be acknowledged when interpreting results. First, there were resource limitations. A single researcher was responsible for designing, conducting, and writing up this report within a six-month period at 50 percent FTE. This had practical implications for the breadth and depth of data collection and analysis that was feasible. A rapid qualitative approach had to be employed to analyse transcripts in a timely manner. Evidence suggests that rapid qualitative analysis methodologies are comparable with more in-depth analyses,[footnote 35][footnote 36] and a second coder was used to ensure that a consistent and reliable approach to coding was employed. However, the amount of time available to identify the best available framework did not allow for a systematic review of the literature. Overall, the author sought to minimise the impact of resource limitations where possible and strived to be as transparent as possible in the reporting of this review to enable an informed interpretation of the findings.

Second, this project relied upon participation, between April and August 2021, from individuals who had been involved in the case studies included in the review. Data generated from interviews was biased towards what participants could recall and were willing to divulge. Therefore, participation was sought from a wide range of actors to triangulate findings and minimise the biases associated with retrospective memory and self-report. Nevertheless, no incentives were offered to candidates, and non-respondents and non-participants were more likely to be candidates who held a policy position, rather than a position within GOS or as a member of the CST. Thus, the final sample may underrepresent the views of the customers of CST advice. The review also focused on the impact of CST recommendations within the UK; any potential international impact of CST letters was beyond the scope of this review and was not addressed.

Finally, while this review aimed to assess the use of CST outputs by collecting and triangulating data from multiple sources and actors, it was not possible to assign causality between CST outputs and observed outcomes. Many actors shape the policy-making process, and the majority of the policy documents relating to the case studies were poorly referenced. In instances where the CST was cited in a document, the relative impact of the CST letter on the content of the document, or on the existence of the document itself, was contested among participants. These factors impeded the ability to isolate or assign causality of observed outcomes to CST activity or outputs.

Future directions for the evaluation of SACs

There are many well-documented challenges associated with evaluating the impact of boundary-spanning activities; impacts can be wide-ranging, over- or under-reported, and cannot be understood in a vacuum.[footnote 37] At present there are no standardised metrics that boundary organisations may employ to evaluate their activities. To the author’s knowledge, there are also no established tools available to assess whether or not a government customer has legitimately considered a piece of evidence. Despite these challenges, changes to the landscape since the 2016 to 2018 period may enable more rigorous evaluations of SACs going forward.

In 2019, the UK government made a commitment to being more transparent about the use of evidence to inform public policies.[footnote 38] The improvement of policy documents in this respect would greatly facilitate SACs in tracking the use of their outputs to inform policy. It is worth noting, however, that the longer-term impact of those policies for the end-users of science advice will still depend on whether and how the policy itself is implemented.[footnote 39][footnote 40] There is also now greater public and academic interest in science advice in government. This interest may increase the quantity and quality of literature that SACs specifically may draw on when reviewing their operations. SACs can also capitalise on this interest, and on the wealth of existing expertise, by facilitating independent reviews of their activities. This would reduce the organisational burden and biases associated with SACs evaluating their own practices, and enable ongoing knowledge exchange opportunities between public policy scholars, SAC members, and wider government actors.

Given the breadth of possible impacts that may arise from boundary-spanning activities, the limited term of CST Co-Chairs and members, and the lengthy periods often required for government to act on recommendations, SACs may benefit from pre-defining specific short-, medium- and long-term outcomes of interest at the time of developing their advice. This would help to focus the scope of any future impact assessments, enable follow-up on outcomes envisaged at the time that recommendations were delivered, and avoid the biases associated with retrospectively selecting outcomes of interest.

About

This review was conducted as part of a UKRI internship within GOS. Mairead Ryan joined GOS on 1 March 2021 for a six-month placement. She spent 50 percent of her time updating the Code of Practice for Scientific Advisory Committees and Councils and 50 percent of her time conducting this review.

Mairead is an interdisciplinary PhD student at the MRC Epidemiology Unit and Faculty of Education, University of Cambridge. Her PhD aims to identify features of effective school-based physical activity interventions.

Prior to her PhD, Mairead worked in the Department of Behavioural Science and Health at University College London (UCL), evaluating approaches to increase informed uptake of national cancer screening programmes. Mairead holds an MSc in Health Psychology (UCL) and a BA in Psychology (Trinity College Dublin).

She is funded by an ESRC Doctoral Training Partnership award (ES/P000738/1) and the Medical Research Council (MC_UU_00006/5).

Mairead has no conflicts of interest to declare.

Contact: CST secretariat: cstsecretariat@go-science.gov.uk Mairead Ryan: mairead.ryan@mrc-epid.cam.ac.uk

Acknowledgements

This review would not have been possible without the help and support of many individuals, whom I would like to acknowledge below.

Supervision: First and foremost, I am very grateful to Beth Hogben for this opportunity and for the endless support and encouragement she provided throughout the development of this review. She was a pleasure to work with and learn from.

Participants: A huge thank you to all the participants for their time and for the insights they shared on the development and use of the CST’s recommendations.

Subject matter experts: I would also like to thank several subject matter experts for their guidance on the literature, suitable methods and/or for signposting me to other researchers within the field, all of which helped to shape this review:

  • Dr Justin Parkhurst, London School of Economics

  • Dr Simo Sarkki, University of Oulu

  • Dr Hannah Baker, University of Cambridge

  • Dr Unni Gopinathan, Norwegian Institute of Public Health

  • Professor Steven Hoffman, York University

  • Professor John-Arne Røttingen, Norwegian Institute of Foreign Affairs

  • Dr Catrin Penn-Jones, University of Cambridge

  • Siobhan Dickens, University of Cambridge

Drafts: I would also like to acknowledge the individuals below who provided feedback on the draft of this report:

  • Deirdre Ryan

  • Professor Annette Boaz, the London School of Hygiene and Tropical Medicine and the Government Office for Science

The secretariat: Finally, a huge thank you to the secretariat for a very enjoyable and interesting six months.

  • Tenaz Bacha

  • Dan Barkass-Williamson

  • Iain Hughes

  • Jasmine Payne

  • Andrea Smith

  • Matilda Taylor

Annexes

Annex A: Consolidated criteria for reporting qualitative research (COREQ) checklist

A checklist of items that should be included in reports of qualitative research. For each item, the page number of this report on which it is addressed is given; items that are not applicable are marked N/A.

Topic Item No. Guide Questions/Description Reported on Page No.
Domain 1: Research team and reflexivity      
Personal characteristics      
Interviewer/facilitator 1 Which author/s conducted the interview or focus group? 39
Credentials 2 What were the researcher’s credentials? E.g. PhD, MD 39
Occupation 3 What was their occupation at the time of the study? 39
Gender 4 Was the researcher male or female? 39
Experience and training 5 What experience or training did the researcher have? 39
Relationship with participants      
Relationship established 6 Was a relationship established prior to study commencement? 16
Participant knowledge of the interviewer 7 What did the participants know about the researcher? e.g. personal goals, reasons for doing the research 16
Interviewer characteristics 8 What characteristics were reported about the interviewer/facilitator? e.g. Bias, assumptions, reasons and interests in the research topic 16
Domain 2: Study design      
Theoretical framework      
Methodological orientation and Theory 9 What methodological orientation was stated to underpin the study? e.g. grounded theory, discourse analysis, ethnography, phenomenology, content analysis 17
Participant selection      
Sampling 10 How were participants selected? e.g. purposive, convenience, consecutive, snowball 15-16
Method of approach 11 How were participants approached? e.g. face-to-face, telephone, mail, email 16
Sample size 12 How many participants were in the study? 21
Non-participation 13 How many people refused to participate or dropped out? Reasons? 21
Setting      
Setting of data collection 14 Where was the data collected? e.g. home, clinic, workplace 16
Presence of nonparticipants 15 Was anyone else present besides the participants and researchers? 16
Description of sample 16 What are the important characteristics of the sample? e.g. demographic data, date 22
Data collection      
Interview guide 17 Were questions, prompts, guides provided by the authors? Was it pilot tested? 16
Repeat interviews 18 Were repeat interviews carried out? If yes, how many? 21
Audio/visual recording 19 Did the research use audio or visual recording to collect the data? 16
Field notes 20 Were field notes made during and/or after the interview or focus group? 16
Duration 21 What was the duration of the interviews or focus group? 21
Data saturation 22 Was data saturation discussed? N/A
Transcripts returned 23 Were transcripts returned to participants for comment and/or correction? 17
Topic Item No. Guide Questions/Description Reported on Page No.
Domain 3: analysis and findings      
Data analysis      
Number of data coders 24 How many data coders coded the data? 18
Description of the coding tree 25 Did authors provide a description of the coding tree? 18
Derivation of themes 26 Were themes identified in advance or derived from the data? 18
Software 27 What software, if applicable, was used to manage the data? 18
Participant checking 28 Did participants provide feedback on the findings? 17
Reporting      
Quotations presented 29 Were participant quotations presented to illustrate the themes/findings? Was each quotation identified? e.g. participant number 22-33
Data and findings consistent 30 Was there consistency between the data presented and the findings? 22-33
Clarity of major themes 31 Were major themes clearly presented in the findings? 22-33
Clarity of minor themes 32 Is there a description of diverse cases or discussion of minor themes? 35

Developed from: Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. International Journal for Quality in Health Care. 2007. Volume 19, Number 6: pp. 349 – 357

Annex B: Interview information materials

Information sheet

Title: A mixed-methods review of a UK scientific advisory committee

Investigator name: Mairead Ryan Email: mairead.ryan@go-science.gov.uk Phone: +44 7901 244049

This review is being conducted as part of a UKRI Internship within the Government Office for Science.

The aim of this review is to better understand if and why recommendations made by scientific advisory committees are used to inform government policy decisions.

Participation in this review is voluntary. You may choose not to answer any question or to withdraw from the interview at any point up until data analysis. All data obtained from interviews will be pseudonymised.

Findings will be disseminated to relevant stakeholders for discussion and may be used to inform operations or guide future monitoring and evaluation efforts.

Participants in this review are not randomised to different groups, the review does not demand changing practice from accepted standards, and findings are not intended to be generalisable. As a result, this review is not defined as ‘research’ as per the NHS Health Research Authority guidance; hence no ethical approval was sought.

A privacy notice has been supplied to you separately to inform you about the types of personal information we will collect and how we are going to use it.

If you are satisfied with the above, you will be asked to indicate your consent to each of the below prior to the interview.

I agree to be interviewed for this review

Y ☐ N ☐

I agree for this interview to be recorded

Y ☐ N ☐

I agree to be quoted anonymously

Y ☐ N ☐

I would like to be contacted to approve specific quotes

Y ☐ N ☐

I am happy to be contacted for any follow-up questions or clarifications

Y ☐ N ☐

My questions have been answered by Mairead Ryan

Y ☐ N ☐

Participant name: _________

Participant’s signature: ______

Date: ______

Privacy notice

Privacy Notice: A mixed-methods review of a UK scientific advisory committee

Date of Privacy Notice: 26 August 2021

This privacy notice sets out how we will use your personal data, and your rights. It is made under Articles 13 or 14 of the UK General Data Protection Regulation (UK GDPR).

This notice can be updated at any time, and we will inform you if this occurs.

It is important that you read this notice, so that you are aware of how and why we are processing your information.

The Government Office for Science is the Data Controller for the use of personal data in this privacy notice.

Why we are collecting your information

We are collecting your information to document the perspectives of actors on both sides of the science-policy interface about the processes and outputs of scientific advisory committees.

Your information is collected by:

Name: Mairead Ryan Contact e-mail: mairead.ryan@go-science.gov.uk Phone number: +44 7901 244049

The type of personal information we collect

We collect and process the following information:

  • Name

  • Email

  • Phone numbers

  • Recording of conversation

How we get the personal information and what we use it for

We get your information primarily from you during the interview.

We use the information that you have given us for the purposes outlined in the information sheet.

Lawful basis for processing

Under the UK General Data Protection Regulation (UK GDPR), the lawful basis we rely on for processing your information is:

(a) Consent: “the individual has given clear consent to process their personal data for a specific purpose”

How we store, share and securely destroy your personal information

Your information is securely stored on BEIS IT (Information Technology) infrastructure, in accordance with government security policies and frameworks. It will be shared with our data processors, Microsoft and Amazon Web Services.

Your information will be shared with:

Your personal data will be collected and processed by Mairead Ryan, a UKRI Intern within the Government Office for Science.

All interviews will be digitally recorded, transcribed verbatim and pseudonymised in the transcription process. Your personal information will be retained for no longer than is necessary and will be securely destroyed after the transcription process. Only Mairead Ryan will have access to the pseudonymisation code. Your name and any other identifying information will not be revealed in any publication or handed to third parties and will be kept confidential. You may be quoted anonymously in the report, if and only if consent for this has been specifically provided by you.

Your information will not be shared or transferred to third parties unless we are required to do so by law, for example by court order or to prevent fraud or other crime.

Your data protection rights

Under data protection law, you have rights including:

Your right of access - You have the right to ask us for copies of your personal information.

Your right to rectification - You have the right to ask us to rectify personal information you think is inaccurate. You also have the right to ask us to complete information you think is incomplete. You have the right to ask us to delete any information that is not necessary for our outlined purpose.

Your right to erasure - You have the right to ask us to erase your personal information in certain circumstances.

Your right to restriction of processing - You have the right to ask us to restrict the processing of your personal information in certain circumstances.

Your right to object to processing - You have the right to object to the processing of your personal information in certain circumstances.

Your right to data portability - You have the right to ask that we transfer the personal information you gave us to another organisation, or to you, in certain circumstances.

You have the right to withdraw consent to the processing of your personal data at any time. You have the right to request a copy of any personal data you have provided, and for this to be provided in a structured, commonly used and machine-readable format.

You are not required to pay any charge for exercising your rights.

Contact details - How to get in touch

If you have any concerns about our use of your personal information, you can contact us at:

Data Protection Team
Government Office for Science
8th Floor
10 Victoria Street
London
SW1H 0NN

Email: contact@go-science.gov.uk

Complaints

You can also complain to the ICO (Information Commissioner) if you are unhappy with how we have used your data.

The ICO’s address:

Information Commissioner’s Office
Wycliffe House
Water Lane
Wilmslow
Cheshire
SK9 5AF

Helpline number: 0303 123 1113

ICO website: https://www.ico.org.uk

Consent form

Title: A mixed-methods review of a UK scientific advisory committee

Investigator name: Mairead Ryan Email: mairead.ryan@go-science.gov.uk Phone: +44 7901 244049

This review is being conducted as part of a UKRI Internship within the Government Office for Science.

The aim of this review is to better understand if and why recommendations made by scientific advisory committees are used to inform government policy decisions.

Participation in this review is voluntary. You may choose not to answer any question or to withdraw from the interview at any point up until data analysis. All data obtained from interviews will be pseudonymised.

Findings will be disseminated to relevant stakeholders for discussion and may be used to inform operations and/or guide future monitoring and evaluation efforts.

Participants in this review are not randomised to different groups, the review does not demand changing practice from accepted standards, and findings are not intended to be generalisable. As a result, this review is not defined as ‘research’ as per the NHS Health Research Authority guidance; hence no ethical approval was sought.

A privacy notice has been supplied to you separately to inform you about the types of personal information we will collect and how we are going to use it.

If you are satisfied with the above, please indicate your consent below.

I agree to be interviewed for this review

Y ☐ N ☐

I agree for this interview to be recorded

Y ☐ N ☐

I agree to be quoted anonymously

Y ☐ N ☐

I would like to be contacted to approve specific quotes

Y ☐ N ☐

I am happy to be contacted for any follow-up questions or clarifications

Y ☐ N ☐

My questions have been answered by Mairead Ryan

Y ☐ N ☐

Participant name: _______

Participant’s signature: _______

Date: ______

Annex C: Semi-structured interview guide

Interview guide for participants who held positions as CST members, members of the secretariat, or within GOS:

Introduction: Thank you for your time. In (2016/2017/2018) the CST published a letter about (subject matter of CST letter in question). Could you tell me a little bit about your role and responsibilities in relation to this letter?

Background: I wanted to start out by asking you a little bit about the earlier stages of this letter. Could you talk me through, from your perspective, how the CST decided on this as a topic? Were there any particular events, groups or individuals, or other factors that you think played a particular role in the CST deciding on this as a topic?

Legitimacy: Could you tell me a bit about the evidence gathering and review phases? Overall, did you think they were appropriate, and seen to be appropriate? Is there anything else that you think should have influenced how the CST developed these recommendations…that may have further strengthened the outcome or impact? This might involve inclusion of certain evidence reviews, or involvements of specific actors, or ways of working that could have been more transparent. Did you think the CST had adequate secretariat support on this project?

Credibility: Next, I wanted to ask you a little bit about your broader perceptions about the CST’s credibility at the time. Did you perceive the CST to be a credible scientific advisory council with adequate expertise to advise on (*subject matter of CST letter in question), acting sufficiently independent from politics and acting sufficiently independent from private interests?

Relevance: Could you tell me a little bit about the stakeholder engagements you were involved in? Thinking about the government customers of this letter, from your perspective, were sufficient efforts made to assess if the recommendations were:

  • timely

  • relevant/applicable to the policy questions at hand at that time

  • accessible (in terms of language)

  • comprehensive in addressing the policy options

  • constructed in ways that were useful to policy makers

What were the key stakeholder engagements that led to this letter being received in the way that it was?

Impact: The letter was sent to the PM on (DD/MM/YYYY). The CST received a response on (DD/MM/YYYY). After the letter was sent, were you involved in any follow-up activities? Could you say a little bit about that? Did you perceive any outcomes as a result of the letter? Other than (*stated outcome), I wondered if you wanted to comment on any other/wider impacts of the letter, that may have gone undocumented? Does this letter ever come up in conversations? As having had some sort of an impact?

The CST secretariat routinely tracks government activity related to previous advice. They last did this in March of this year. Based on this review of government activity, certain recommendations received more traction than others. Do you have any thoughts on why certain recommendations received more traction than others?

Close: Is there anything that we haven’t discussed that you think is important in relation to the development and impact of this letter? Do you have any broader suggestions about how the CST may operate more effectively as a scientific advisory committee? Are there any other individuals that you think I should speak to in relation to this letter? Thank you for your time.

Interview guide for participants who held positions as policy leads/government customers or other stakeholders outside of GOS:

Introduction: Thank you for your time. In (2016/2017/2018) the CST published a letter about (subject matter of CST letter in question). I understand that you were one of the people in (department) who the CST was in contact with about the letter. Could you tell me a little bit about your interactions with CST in relation to this letter? Could you say a little bit about the (*department’s) interest in engaging with the CST?

Impact: The letter was sent to the PM on (DD/MM/YYYY). The CST received a response on (DD/MM/YYYY). Were you involved in helping to inform the PM’s response or implement the CST’s recommendations? Could you tell me a little bit about that? Did you track the impact of the letter over the years?

  • What outcomes did you observe as a result of this CST letter?

  • Are there any particular groups or individuals that you think the CST’s letter had an impact on?   

  • Any intangible outcome?

  • Has the letter or its effects been mentioned to you since (your interactions with CST)?

The CST secretariat routinely follows up on previous advice to see whether recommendations have received traction. They last did this in March of this year. Based on CST or GOS perspectives, certain recommendations received more traction than others. Do you have any thoughts on why certain recommendations received more traction than others?

Credibility: Did you perceive the CST to be a credible science advisory council with adequate expertise to advise on (*subject matter of CST letter in question)? Any concerns about their independence (from political/private interests)? Or the ways in which they worked to develop advice? 

Legitimacy: Were you aware of the CST’s ways of working at that time and how they arrived at their recommendations? Did you think their processes were appropriate, and seen to be appropriate by your colleagues? Did this have an impact on how you and your colleagues perceived the CST’s recommendations?

Relevance: Did you/your team perceive the CST’s advice as being:

  • relevant and useful

  • timely  

  • comprehensive in addressing the policy options

  • constructed in ways that were useful to policy makers

  • relevant and applicable to the policy questions at hand at that time

  • accessible (in terms of language)

Other: Are there any other factors that you think enabled/prevented the CST’s recommendations from being used to inform (*outcome)?

Close: Are there any individuals that you think I should speak to in particular about the impact of this letter? Do you have any suggestions about how the CST could have operated more effectively as a scientific advisory committee? Finally, is there anything we haven’t discussed that you think is important related to the development and impact of this letter? Thank you for your time.

Annex D: Reforming the Governance of Technological Innovation

Advice sent September 2018

Available here.

Summary

Recommendation Activity
Recommendation 1: Government should establish a technology horizon-scanning function for regulation in the Better Regulation Executive (BRE) to bring ‘foresight thinking’ into the strategic planning activities of regulators and their sponsors in Government. This should build on horizon-scanning done by others, including the Government Office for Science. This function should alert and advise government and regulators on advances in science and technology and their broad regulatory implications, including identifying ethical and other issues that may require expert examination or merit public engagement. We suggest that priority areas for consideration include: the use of data and AI in medicine and advanced biotechnologies such as synthetic biology and genome editing. The Regulatory Horizons Council (RHC) was established in 2020 to advise the government on regulatory reform needed to support the rapid and safe introduction of technological innovation. The RHC published a number of reports in 2021, including a report on the regulation of fusion energy, a report on the regulation of medical devices, a research paper on the future socio-economic context within which technological innovations will be delivered and a report on genetic technology in agriculture.
Recommendation 2: The work that Government is undertaking to promote innovation-friendly regulation should consider as a matter of course the role of guidance, codes and standards alongside formal regulation. In developing new regulation, particularly in fast-moving areas, Government and regulators should consider in advance the potential need for future adaptation. For example, principles-based approaches (such as approaches based on fidelity to well-defined and well-regarded principles) that are not overly rigid may be appropriate in some circumstances. Government and regulators should also plan to review relevant governance frameworks as technologies and their applications develop. The UK has well-regarded standards-setting bodies, such as the British Standards Institution (BSI) and the National Physical Laboratory (NPL), and they should be brought into the strategic discussion as appropriate. Measures in the 2019 White Paper for Regulation for the Fourth Industrial Revolution included a pilot of an innovation test so that the impact of legislation on innovation is considered as it is introduced, implemented and reviewed.
Recommendation 3: To improve the access to information and guidance offered to innovators and investors by regulators and by Government, Government should ensure that innovators and investors are provided with a ‘one-stop-shop’ for regulatory enquiries. This service should be coordinated so that innovators are provided a good service across regulatory boundaries. This support should enable innovators to consider the implications of a technology or innovation they are developing and then to navigate the regulatory and legal landscape effectively. A White Paper on Regulation for the Fourth Industrial Revolution was published in June 2019. This included measures to consult on a Digital Regulation Navigator (DRN): a new digital interface to help businesses to find their way through the complex regulatory landscape and engage with the right regulators at the right time on their proposals. The DRN project passed an alpha assessment in April 2021. Due to a dependence on Open Regulation Platform-developed data, BRE have now paused the Beta development stage of the DRN. BRE anticipate planning for Beta in early 2022 or 2023. Measures to enhance co-ordination between regulators to ensure that innovations are guided smoothly through the system were also consulted on in the White Paper.
Recommendation 4: Government should establish a coordinated programme to improve the evaluation of traditional and emerging innovative approaches to the governance of applications of new technologies, such as regulatory sandboxes used in fintech, in order to ensure that the design of future regulation is beneficially informed by such learning. Alongside this, an innovation network for regulators should be set up to promote faster adoption of best practice across the regulatory landscape. While the Regulators’ Pioneer Fund has been launched recently, Government should consider broadening its scope in future, for example to address the issues identified in this letter, and to ensure funding commensurate with the number of high-quality bids received. White Paper on Regulation for the Fourth Industrial Revolution included a review of the Regulators’ Pioneer Fund, which backs projects that are testing new technology in partnership with the regulators in a safe but innovative environment.

Please note that outcomes listed in Annexes D, E and F are based on information compiled by the secretariat in March 2021 and supplemented by activity known to individuals participating in this review April – August 2021. The outcomes are not necessarily as a result of CST recommendations.

Annex E: Harnessing Technology to Meet Increasing Care Needs

Advice sent October 2017

Available here: https://www.gov.uk/government/publications/harnessing-technology-to-meet-increasing-case-needs

Summary:

Recommendation Activity  
Recommendation 1: We recommend that UKRI develop a Healthy Ageing challenge within the Industrial Strategy Challenge Fund. This should invite bids to demonstrate new, place-based applications of technology to support independence or delivering care, with a focus on ensuring scalability. Challenge Fund on healthy ageing: https://www.ukri.org/innovation/industrial-strategy-challenge-fund/healthy-ageing/. Harper and colleagues at the University of Oxford developed the ISCFHA for UKRI: Developing the Industrial Strategy Challenge Fund Healthy Ageing (ISCFHA): a technologically enabled ecosystem for healthy ageing.  
Recommendation 2: We recommend the establishment of a National Centre of Excellence in Ageing and Design, bringing together academia and industry to embed inclusive, age friendly design in the development of mainstream technology. The application of social and behavioural sciences to understanding people’s interaction with technology will be an important element of this. Establishment of the Design Age Institute – UKRI RED funded a collaboration between Royal College of Art, Oxford Institute of Population Ageing and NICA in May 2020. Design Age Institute, Royal College of Art (rca.ac.uk). The Design Age Institute is the UK’s national strategic unit for design and the healthy ageing economy: https://www.ageing.ox.ac.uk/research/programmes/the-design-age-institute/. Newcastle Research centre has a theme on inclusive design: https://www.ncl.ac.uk/nica/about-us/. Design Council has a programme on transforming ageing: https://www.designcouncil.org.uk/what-we-do/transform-ageing. The government will be working to develop a number of regional Digital Innovation Hubs. These hubs will support the use of data for research purposes within the strict parameters set by the National Data Guardian.
Recommendation 3: We recommend that Government review the support provided to citizens and care providers who are looking for assisted living products. This should include how to better curate evidence on what works and ensure those who do not meet the means test are able to access consistent and good quality advice. The ambition should be to ensure that everyone is able to purchase assistive products with confidence. What works centre for Ageing better: https://www.ageing-better.org.uk/. The Adult Social Care Green Paper should cover these recommendations but has been repeatedly delayed. The House of Commons library has produced a briefing paper on the issues likely to be addressed in the Green Paper and discusses reasons for the delay.  
Recommendation 4: We recommend that the Government encourage industry to develop data standards and APIs that allow care providers to use and share the data generated by smart home and assisted living devices. This process should also involve NHS providers so that care data can be better fed into clinical decision making, and vice versa. HACT have been looking at future technologies for assisted living and data standards: https://www.hact.org.uk/DataStandard. BRE and RIBA were working together on smart housing standards for assisted living. The Whole System Demonstrator (WSD) programme was set up by the Department of Health to show what telehealth and telecare is capable of. Innovate UK and NIHR have funded work with DHACA (industry organisation dedicated to improving tech interoperability and reducing duplication in health and care systems) https://dhaca.org.uk/ on delivering assisted living lifestyles at scale: https://dhaca.org.uk/dallas-information/about-dallas/. The Industrial Strategy’s ‘Data to early diagnostics and precision medicine’ programme, explores the application of data for better, more innovative health and care.  
Need for systems approach: Provision of care is a complex interlocking “system of systems”. No single action, in isolation, will ensure the UK can make the most of technology to deliver care. The full potential of technology in care can be realised only if we act on this system as a whole. The Government’s Industrial Strategy and forthcoming consultation on reforming care and support are the vehicle for a systems approach. Mission under the Grand Challenge on Ageing established as part of the Industrial Strategy: https://www.gov.uk/government/publications/industrial-strategy-the-grand-challenges/missions#healthy-lives. Minister of State for Care responsibilities include adult social care, health and care integration, workforce, dementia, disabilities and long-term conditions. Integrated care systems (ICSs) have been established by some NHS commissioners, providers and local councils working collaboratively, taking collective responsibility for resources and population health, but these have no basis in law and rely on strong local leadership.

Please note that outcomes listed in Annexes D, E & F are based on information compiled by the secretariat in March 2021 and supplemented by activity known to individuals participating in this review April – August 2021. The outcomes are not necessarily as a result of CST recommendations.

Annex F: Improving Entrepreneurship Education

Advice sent October 2016

Available here: https://www.gov.uk/government/publications/improving-entrepreneurship-education

Recommendation Activity
Recommendation 1: Universities should consider how to incorporate entrepreneurship education in their core curriculum, particularly for undergraduates of STEM subjects with the lowest participation rates. Quality Assurance Agency published guidance in 2018 on the UK Quality Code for Higher Education, enabling HE providers to understand what is expected of them and what to expect from each other. Enterprise educators UK have also developed guidance and metrics. Advance HE guidance and best practice includes Enterprise and Entrepreneurship Education Framework.
Recommendation 2: The National Academies should lead work to provide coordinated guidance to universities on entrepreneurship education. This should bring together best practice in educational materials. It should specifically include guidance for STEM undergraduates with the lowest participation rates. A joint follow-up workshop between RAEng, the Royal Society, the British Academy and the Academy of Medical Sciences occurred in June 2017 to bring together Fellows of each Academy with key stakeholders in entrepreneurship education in order to formulate a response to this recommendation.
Recommendation 3: Innovate UK, the Catapults and their business networks build on existing initiatives, including their links with Local Enterprise Partnerships, to provide opportunities for: students to gain direct experience of entrepreneurship through internships at innovative businesses and Catapults; entrepreneurs to participate in teaching entrepreneurship at universities, alongside academics; university researchers with commercially-promising ideas to access schemes that help build entrepreneurial skills and validate their ideas in the marketplace. BEIS and the Department for Communities and Local Government should identify how other parts of their innovation infrastructure can also encourage the development of entrepreneurial skills (both through teaching and direct experience). This might include encouraging entrepreneurs who have benefited from publicly funded initiatives to volunteer their expertise at universities (for instance, those who benefit via apprenticeships, Innovate UK support and University Enterprise Zones).
Recommendation 4: Higher Education Statistics Agency (HESA) destinations data should capture additional information, including: for those who have started a business, what kind of business is it (innovation-driven or otherwise); for those working for a company, what kind of company is it (large, medium, small, start-up). This was partially addressed by HESA: https://www.hesa.ac.uk/data-and-analysis/graduates/activities/work
Recommendation 5: Universities, working with HESA and the Government, should evaluate the impact of their entrepreneurship education to better understand how to tailor their offer. This should assess whether graduates who have participated in formal or informal entrepreneurship education go on to: form new businesses; take jobs in early growth-stage companies; select jobs in large companies (or a combination of the above over the life course of their careers). Enterprise Educators UK have reviewed provision across the sector.
Recommendation 6: The process for assessing higher education teaching should include a metric that clearly signals the value of entrepreneurship to students and universities, by recognising and including its particular career benefits. The Teaching Excellence Framework is currently paused. It does not yet include an entrepreneurship metric.

See also Assessment of the sector in WonkHE: https://wonkhe.com/blogs/building-teaching-and-learning-back-better-means-scaling-up-enterprise-education/

BEIS commissioned research on the impact of entrepreneurship training: https://www.gov.uk/research-for-development-outputs/the-impact-of-entrepreneurship-training-programmes/

Annex G: Summary of recommendations for the CST to consider

Category Important factors Actions to consider
Structure Secretariat support Sponsor department to ensure adequate knowledge or understanding of the topic is available in the secretariat. The aim should be to ensure there is an ‘intelligent actor’ for translating policy issues into questions for the committee to consider, to commission evidence and support members in drafting advice. Note: needs sufficient lead time for projects to allow for recruitment/secondment/partnership arrangements.
Objectives Vision Project leads (could be either secretariat or members) to work with the Chair to develop a vision statement for recommendations. These can be used to focus the letters and develop corresponding problem statements to guide sub-group or wider discussions. Vision statements may also outline expected outcomes of the letter (what does the CST expect to happen as a result of the advice)
  Horizon scanning In addition to consulting CST members and the national academies, GCSA should consider an annual session with CSAs and other relevant academics for horizon scanning of emerging issues and to gather ideas for future topics. Note: this could also be used as an opportunity to highlight and disseminate recent CST advice where relevant.
  Horizon scanning The Chairs may wish to request regular updates from government observers and sponsor department colleagues on emerging policy challenges relevant to the CST’s terms of reference. This could be used to create a timetable of key government decision points on potential topic areas in early stage of project scoping.
Processes Trust building The secretariat to support members in identifying relevant stakeholders during the scoping phase of new projects (e.g. using established tools such as an ‘importance vs. interest’ grid). Note: the secretariat can also work with relevant colleagues from the sponsor department to review the list of stakeholders identified and address any perceived gaps.
  Trust building Where time allows, the secretariat can commission evidence reviews to inform recommendations. The CST should publish its approach to how evidence reviews are commissioned and where stakeholders can find open calls. Any protocols and/or analysis plans should be registered on relevant platforms (for example Open Science Framework).
  Trust building Following publication of advice, the names of any organisations and stakeholder groups who were consulted during the development of advice should be published on the CST’s webpage alongside a complete list of journal articles, evidence reviews, grey literature etc. used to inform recommendations. Note: for transparency purposes, the list of organisations and references should also be shared with stakeholders, where possible, as recommendations are being developed.
Outputs Knowledge transfer (Actionable, specific recommendations) Project leads to allocate time to discuss with relevant policy leads the specificity and ‘actionability’ of recommendations before they are finalised. Note: the purpose should be to inform members’ thinking, not to restrict the independence of CST’s advice.
  Translation (CST short-term follow-up) Project leads/Co-Chairs to allocate time after advice has been sent to allow for discussion and translation of recommendations with relevant Ministers or policy leads responsible for implementation. Note: the purpose should be to ensure the advice and underlying rationale for recommendations is understood, and offer an opportunity to discuss potential approaches to implementation.
  Translation (CST mid-to long-term follow-up) Co-Chairs and members to determine criteria for when to follow-up on previous advice to maximise the prospects of implementation and avoid wasted resource.
  Translation (Wider dissemination) The secretariat to develop a targeted and proportionate dissemination strategy, with support from the GOS communications team, to ensure website, Twitter, and other channels are used effectively to raise awareness of CST’s advice. Note: CST is an independent expert committee. Any proposed changes should be agreed with the sponsor department to ensure that CST is not seen as an advocacy group.
‘Other’ Ownership for implementation (single vs diffuse) Project leads or Co-Chairs to allocate additional time to facilitate and convene stakeholders for follow-up discussions where ownership for implementation is diffuse (rather than concentrated in a single department or team).

Please note that outcomes listed in Annexes D, E & F are based on information compiled by the secretariat in March 2021 and supplemented by activity known to individuals participating in this review April – August 2021. The outcomes are not necessarily as a result of CST recommendations.

References

  1. Maasen, S., & Weingart, P. (2006). Democratization of expertise?: exploring novel forms of scientific advice in political decision-making (Vol. 24): Springer Science & Business Media. 

  2. Groux, G.M., Hoffman, S.J., & Ottersen, T. (2018). A typology of scientific advisory committees. Global Challenges, 2(9), 1800004. 

  3. ‘Code of Practice for Scientific Advisory Committees’, Government Office for Science, 2011, Available from: https://www.gov.uk/government/publications/scientific-advisory-committees-code-of-practice 

  4. Glynn, S.M., Cunningham, P.N., & Flanagan, K. (2003). Typifying scientific advisory structures and scientific advice production methodologies (TSAS). 

  5. Røttingen, J.A., & Ottersen, T. (2018). Supra‐SAC: Need and Role for an All‐of‐Government Scientific Advisory Committee. Global Challenges, 2(9). 

  6. ‘Government Office for Science Annual Report 2017/2018’, Government Office for Science, 2018, Available from: https://www.gov.uk/government/publications/government-office-for-science-annual-report-2017-to-2018 

  7. ‘Council for Science and Technology: ways of working’, Government Office for Science, 2020, Available from: https://www.gov.uk/government/publications/cst-ways-of-working 

  8. ‘Principles of scientific advice to government’, Government Office for Science, 2010, Available from: https://www.gov.uk/government/publications/scientific-advice-to-government-principles/principles-of-scientific-advice-to-government 

  9. Parkhurst, J., Ettelt, S., & Hawkins, B. (2018). Evidence use in health policy making: an international public policy perspective: Springer. 

  10. ‘Triennial Review of the Council for Science and Technology’, Department for Business, Energy and Industrial Strategy, 2014, Available from: https://www.gov.uk/government/publications/council-for-science-and-technology-triennial-review-2013-to-2014 

  11. ‘Classification of Public Bodies: Guidance for Departments’, Cabinet Office and Efficiency and Reform Group, 2016, Available from: https://www.gov.uk/government/publications/classification-of-public-bodies-information-and-guidance 

  12. Cairney, P., & Oliver, K. (2017). Evidence-based policymaking is not like evidence-based medicine, so how far should you go to bridge the divide between evidence and policy? Health research policy and systems, 15(1), 1-11. 

  13. Gluckman, P., & Wilsdon, J. (2016). From paradox to principles: where next for scientific advice to governments? Palgrave Communications, 2(1), 1-4. 

  14. Parkhurst, J. (2017). The politics of evidence: from evidence-based policy to the good governance of evidence: Taylor & Francis. 

  15. Guston, D.H. (1999). Stabilizing the boundary between US politics and science: The role of the Office of Technology Transfer as a boundary organization. Social studies of science, 29(1), 87-111. 

  16. Gustafsson, K.M., & Lidskog, R. (2018). Boundary organizations and environmental governance: Performance, institutional design, and conceptual development. Climate Risk Management, 19, 1-11. 

  17. Sarkki, S., Niemelä, J., Tinch, R., Van Den Hove, S., Watt, A., & Young, J. (2014). Balancing credibility, relevance and legitimacy: a critical assessment of trade-offs in science–policy interfaces. Science and Public Policy, 41(2), 194-206. 

  18. Cash, D.W., Clark, W.C., Alcock, F., Dickson, N.M., Eckley, N., Guston, D.H., . . . Mitchell, R.B. (2003). Knowledge systems for sustainable development. Proceedings of the national academy of sciences, 100(14), 8086-8091. 

  19. Sarkki, S., Tinch, R., Niemelä, J., Heink, U., Waylen, K., Timaeus, J., . . . van den Hove, S. (2015). Adding ‘iterativity’ to the credibility, relevance, legitimacy: A novel scheme to highlight dynamic aspects of science–policy interfaces. Environmental Science & Policy, 54, 505-512. 

  20. Dunn, G., & Laing, M. (2017). Policy-makers perspectives on credibility, relevance and legitimacy (CRELE). Environmental Science & Policy, 76, 146-152. 

  21. Hoffman, S.J., Ottersen, T., Baral, P., & Fafard, P. (2018). Designing scientific advisory committees for a complex world. Global Challenges, 2(9). 

  22. D’Souza, B.J., & Parkhurst, J.O. (2018). When “good evidence” is not enough: a case of global malaria policy development. Global Challenges, 2(9), 1700077. 

  23. Tangney, P. (2017). What use is CRELE? A response to Dunn and Laing. Environmental Science & Policy, 77, 147-150. 

  24. Behdinan, A., Gunn, E., Baral, P., Sritharan, L., Fafard, P., & Hoffman, S.J. (2018). An overview of systematic reviews to inform the institutional design of scientific advisory committees. Global Challenges, 2(9), 1800019. 

  25. Gopinathan, U., Hoffman, S.J., & Ottersen, T. (2018). Scientific Advisory Committees at the World Health Organization: A Qualitative Study of How Their Design Affects Quality, Relevance, and Legitimacy. Global Challenges, 2(9), 1700074. 

  26. Evans, K.M., & Matthews, K.R. (2018). Science Advice to the President and the Role of the President’s Council of Advisors on Science and Technology. Rice University’s Baker Institute for Public Policy. 

  27. Tong, A., Sainsbury, P., & Craig, J. (2007). Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. International journal for quality in health care, 19(6), 349-357. 

  28. ‘GSR Ethical Assurance for Social and Behavioural Research’, Government Social Research Profession, 2021, Available from: https://www.gov.uk/government/publications/ethical-assurance-guidance-for-social-research-in-government 

  29. Yin, R.K. (2009). Case study research: Design and methods (Vol. 5): Sage. 

  30. Goodrick, D. (2014). Comparative case studies: Methodological briefs - Impact evaluation No. 9. 

  31. Krouwel, M., Jolly, K., & Greenfield, S. (2019). Comparing Skype (video calling) and in-person qualitative interview modes in a study of people with irritable bowel syndrome–an exploratory comparative analysis. BMC medical research methodology, 19(1), 1-9. 

  32. Carroll, C., Booth, A., Leaviss, J., & Rick, J. (2013). “Best fit” framework synthesis: refining the method. BMC medical research methodology, 13(1), 1-16. 

  33. Gale, N.K., Heath, G., Cameron, E., Rashid, S., & Redwood, S. (2013). Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC medical research methodology, 13(1), 1-8. 

  34. Bowen, G.A. (2009). Document analysis as a qualitative research method. Qualitative Research Journal. 

  35. Taylor, B., Henshall, C., Kenyon, S., Litchfield, I., & Greenfield, S. (2018). Can rapid approaches to qualitative analysis deliver timely, valid findings to clinical leaders? A mixed methods study comparing rapid and thematic analysis. BMJ open, 8(10), e019993. 

  36. Vindrola-Padros, C., & Johnson, G.A. (2020). Rapid techniques in qualitative research: A critical review of the literature. Qualitative Health Research, 30(10), 1596-1604. 

  37. Posner, S.M., & Cvitanovic, C. (2019). Evaluating the impacts of boundary-spanning activities at the interface of environmental science and policy: A review of progress and future research needs. Environmental science & policy, 92, 141-151. 

  38. ‘UK National Action Plan for Open Government 2019-2021’, Department for Digital, Culture, Media and Sport, 2019, Available from: https://www.gov.uk/government/publications/uk-national-action-plan-for-open-government-2019-2021/uk-national-action-plan-for-open-government-2019-2021#com3 

  39. Hudson, B., Hunter, D., & Peckham, S. (2019). Policy failure and the policy-implementation gap: can policy support programs help? Policy design and practice, 2(1), 1-14. 

  40. Sabatier, P., & Mazmanian, D. (1979). The conditions of effective implementation: A guide to accomplishing policy objectives. Policy analysis, 481-504.