Linear Infrastructure Planning Panel: a collaborative approach to AI governance

The Linear Infrastructure Planning Panel was established in March 2023 to develop good practice in the use of new technologies like AI in the planning of the major infrastructure critical for the delivery of national goals such as net zero, resilience and nature recovery.

Background & Description

The Linear Infrastructure Planning Panel was established in March 2023 with a clear purpose: to develop good practice in the use of new technologies like AI in the planning of the major infrastructure (e.g. energy grids) that is critical for the delivery of national goals such as net zero, resilience and nature recovery. The Panel is an example of a multi-stakeholder deliberative engagement process that has sought to build consensus in a challenging area. It is a collaboration between social and environmental NGOs, data and planning experts, and government and regulatory bodies.

The Panel was kicked off with seed corn funding from AI start-up Continuum Industries. However, it is independent: the Chair and Panel members' sole duty is to further the purpose of the Panel. In the last 12 months the Panel has carried out deep dives into: trustworthy new tools and approaches; the social, environmental, and economic metrics used in new tech; and how new techniques can support engagement. It has published all its findings and shared its work widely with key decision makers, project developers and tech companies over the last year.

How this technique applies to the AI White Paper Regulatory Principles

Safety, Security & Robustness

It is clearly vital that AI used in critical infrastructure planning is safe, secure, and robust, given the nature of this infrastructure, its costs, the timescales and asset lives involved (typically 50+ years), and the potential risks when things go wrong. The Panel's collaborative approach was important for addressing concerns in this area on two fronts:

  1. Context: There is currently a lack of co-ordination and coherence between existing regulations in this space. Economic regulators (e.g. Ofgem and Ofwat) have an interest in technical systems planning, while planning bodies (the Planning Inspectorate, Environment Agency, local authorities, statutory consultees etc.) have an interest in spatial and environmental planning. Without taking this context into account, any new AI regulation could amplify existing regulatory fragmentation or lead to regulatory arbitrage, which could reduce the safety and robustness of the AI tools in question. The Panel's deliberative engagement approach has helped to bring these diverse groups of stakeholders together, in a way that has not happened before, to develop good practice. By shining a light on the fragmented roles and responsibilities of key decision makers and procurement processes, the Panel's deep dive on trust helped build a framework that gives key actors a clearer view of roles and responsibilities in this area and the confidence to do things differently.
  2. Types of harm: Much of the existing discussion, and many of the emergent assurance mechanisms, around the safety, security and robustness of AI focus on individual harms. When it comes to AI Infratech, it is vital to also consider harms to groups of people, communities, and the environment (i.e. not just the risk of individual harms and not just short-term impacts), as well as the specific cyber security risks of critical infrastructure. The range of social and environmental expertise on the Panel has helped provide this wider view, which is not available in many of the existing assurance products and processes.

Appropriate Transparency & Explainability

The Panel's process-focused approach was important for addressing concerns in this area on two fronts:

  1. Appropriateness: For AI spatial planning tools to be seen as trustworthy, it is vital that they use relevant social, environmental, and economic data. There are significant data gaps in these areas, as well as issues around data standardisation and sharing and the prioritisation of different metrics. The Panel's deep dive into the appropriate social, environmental and economic metrics that should be used in linear infrastructure planning Infratech has helped decision makers understand stakeholder views on what goes into AI models in this area, the transparency needed around how AI models rank and weight different factors, and the need to be clear about what can and can't be standardised in this space (e.g. the visual amenity impacts of infrastructure around World Heritage sites).
  2. Transparency and explainability: Infrastructure planning can be highly contentious, and it is vital that decision making is aligned with the democratic development consent process and is seen as legitimate. New technologies have the potential to offer new collaborative opportunities, visualisations, and public engagement platforms to support decision making in this area. However, if governance frameworks in this area aren’t developed in partnership with key stakeholder groups (e.g. community planning networks, social and environmental NGOs etc) and aligned with societal views, not only will their insights and expertise be lost, but they may well view new technologies with suspicion. A concern that technology is seen as ‘the’ answer to planning challenges, and that legitimate reservations are brushed to one side in the race to decarbonise, exacerbates this risk. All this could lead to legal challenge around the use of new tools which could delay the delivery of national strategic goals such as net zero. The Panel’s deep dive into what trustworthy AI looks like in this space, and how new tech like AI can support engagement processes, considered these issues in detail.

Fairness

The Panel's membership deliberately covered a wide range of stakeholder groups with an interest in the use of AI and other technologies in national infrastructure planning. It included: social and environmental perspectives (which can sometimes be seen as opposed in infrastructure planning cases); perspectives from across England, Scotland, and Wales (vital for spatial planning issues); future generations' perspectives (one Panel member was deliberately selected as they were earlier in their career); and technical and data ethics expertise.

This diversity helped ensure that issues of fairness were implicitly considered in Panel recommendations. For example, in its engagement deep dive, the Panel considered the impact of AI-enabled engagement approaches on people who are not digitally engaged. In its metrics deep dive, the Panel considered the distributional impacts of new infrastructure, which can extend across communities, regions and nations, and between generations. In its trust deep dive, the Panel considered open data and how data sharing can be important to create a level playing field and give all actors greater confidence in the process.

A more mechanistic AI assurance mechanism would be unlikely to be able to identify and incorporate such rich insights. Incorporating these factors in AI governance frameworks in this area is essential if stakeholders are to have sufficient confidence in the use of new technologies for infrastructure planning and consenting / permitting.

Accountability & Governance

The Panel's collaborative multi-party approach has helped to address some of the challenges around the fragmented nature of existing infrastructure planning processes. Roles and responsibilities in this area are already highly complex, uncertain, and undergoing reform, and legitimacy in decision making is paramount to build public support for new infrastructure. Introducing new AI assurance and regulation on top of this could be counterproductive unless it takes account of these context-specific issues, first addresses existing regulatory coherence and co-ordination issues, and goes with the grain of the wider direction of travel.

In its deep dive on trustworthy new tools and approaches, the Panel considered how AI assurance mechanisms in this area need to be aligned with existing democratic public consenting processes. It also considered the spectrum of governance needed in this area, ranging from full legal control at one end to informing stakeholders at the other, with a range of co-ordination, data sharing and collaboration in between. The Panel's deep dive on this topic also considered the importance of getting the right risk management culture around the use of AI and other digital technologies. This is a particular issue in regulated utilities which, for good reason, are often highly risk averse.

Contestability & Redress

Contestability is a significant issue in infrastructure planning, where community and environmental impacts can be substantial and the Development Consent Order regime can lead to a highly adversarial process. Ensuring that AI doesn't add to these problems, and that any associated risks are identified and appropriate mitigations put in place, was seen as a key issue in the Panel's work.

These issues were examined in the Panel's deep dives on engagement and on trustworthiness. In both deep dives, the key point that Panel members made was that new tech like AI is only a decision-support tool: it should not make decisions itself. Decisions should still be made by those legally responsible for new infrastructure build, through the established democratic consenting process.

Why we took this approach

The Panel’s deliberative and collaborative engagement process has helped to bring diverse groups of stakeholders together in a way that has not happened before to consider how they best address a specific outcome: how to develop good practice in the use of AI and other advanced technologies in an already highly complex and contested area. By creating a consensus on how AI can transform infrastructure planning, the Panel has provided a forum to accelerate the journey towards national goals such as net zero and resilience.

The issues the Panel has considered have spanned government departmental and regulatory vires, the responsibilities of the different UK nations (much of planning is a devolved issue) and the interests of project developers, promoters, and consultancies. This has led to a collective action problem in this area where different organisations have tended to work in silos. Tackling the associated issues in the round has been vital to start to unlock the potential for AI and other advanced tech to address some of the key barriers to infrastructure planning and delivery.

The Panel has worked in an iterative way to build support and consensus. Panel members agreed a theory of change and the topics for deep dives at the Panel's first meeting. For each session, an outline of the issues to cover was circulated to members and observers for comment in advance. Continuum Industries, which had kick-started the Panel's work and provided seed corn funding to get it going, gave insights into some of the practical challenges it had faced on that topic.

A draft briefing paper was produced on the back of this to guide Panel conversations. This was amended following the deep dive to take Panel comments on board. Final copies of briefing papers were then placed on the Panel website, and the Chair used these as the basis for bilateral discussions and presentations to project developers, tech companies, government departments, regulators, and professional networks and associations, in the UK and globally (e.g. through the Global Infrastructure Hub). This wider outreach was seen as essential to help change the frameworks and cultures that will shape the use and uptake of AI in this area, and to get wider insights into upcoming challenges and opportunities.

Benefits to the organisation using the technique

As a result of their involvement with the Panel, government actors will be able to unlock the potential of AI to address planning skills and capacity challenges and help deliver faster, more popular, and greener infrastructure. Regulators will be able to help unleash and embed innovation by ensuring new tools are trustworthy and aligned with stakeholder views. Consumers and communities will be able to see how new tools can enable democratic engagement in planning before major technical decisions are locked in.

Project developers will have greater confidence that the digital tools they use are trustworthy, so they can radically shrink the time taken for technical planning work, increasing certainty, reducing costs, and freeing up time to focus on the key sticking points where human judgement is needed. Tech companies will be able to seize the opportunity to help create and grow a trustworthy market in this area. And investors will be able to place reliance on new tools and approaches to help them de-risk their investments and demonstrate delivery of their ESG goals through more meaningful social and environmental impact reporting.

Limitations of the approach

The Panel was established as a ginger group. To date, its relatively informal structure has meant it has been able to act quickly and nimbly. However, being outside a recognised public body (e.g. a government department, regulator, innovation catapult or professional association) has meant that it has sometimes struggled to get heard by decision makers and has had to continually assert its independence.

Having proved the concept of collaborative work in this area, the Panel is now exploring how to ‘dock into’ / be hosted by a larger recognised public body that can give it credibility and enable it to raise the modest funds needed to cover its costs. This is proving challenging given the collective action and silo decision making problems identified earlier. Although the Panel has an identified forward work programme, there is a risk that unless a new host and funding can be found its work could come to an end.

Published 9 April 2024