AI, open code and vulnerability risk in the public sector
Guidance for safely publishing source code in the open, and reducing the risk of AI-accelerated vulnerability discovery.
Technology leaders are asking whether AI-accelerated vulnerability discovery means that public sector departments should stop publishing source code ‘in the open’ by default.
User research suggests that the primary driver of exploitation risk is the presence of weaknesses in systems - including unpatched vulnerabilities, insecure implementation, and unsafe configuration or deployment - and the inability to remediate them quickly. Publishing source code does not create those weaknesses, but it can modestly reduce attacker uncertainty and speed up analysis (an effect that may increase with AI assistance), especially where maintenance is weak and fixes are slow. This guidance reinforces the minimum operational capability already assumed for safely operating publicly-accessible services.
Recommendations
- Meet the minimum standard for publicly-accessible systems. Ensure clear ownership, secure-by-design practice, automated hygiene, and credible remediation capability (privacy should not be used as a substitute control).
- Keep open by default. Making everything private creates additional delivery and policy costs, and can reduce reuse and scrutiny. Openness should remain the default posture, with closure used sparingly and deliberately.
- Make exceptions explicit and reviewable. Where code should be closed, require a short threat model that states the attacker, what publication adds, and the realistic path to harm. Keep exceptions narrow, time-bound, and re-approved periodically.
- Strengthen remediation capability. Assume shorter discovery-to-exploit windows by setting patch SLAs, automating dependency and vulnerability management, and ensuring teams can respond quickly to inbound reports. This is essential whether code is open or closed.
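As an illustration of the patch SLA expectation above, a team might track whether open vulnerabilities are still within their remediation window. The severity labels and SLA durations below are assumed example values, not mandated ones:

```python
from datetime import date, timedelta

# Illustrative SLA windows by severity (assumed example values, not mandated ones).
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90}

def breaches_sla(severity: str, published: date, today: date) -> bool:
    """Return True if an open vulnerability has exceeded its remediation window."""
    window = SLA_DAYS.get(severity.lower())
    if window is None:
        return False  # no SLA defined for this severity
    return today > published + timedelta(days=window)

# A critical issue published 10 days ago breaches a 7-day SLA;
# a high issue of the same age is still within its 30-day window.
print(breaches_sla("critical", date(2024, 1, 1), date(2024, 1, 11)))  # True
print(breaches_sla("high", date(2024, 1, 1), date(2024, 1, 11)))      # False
```

A check like this, run against the output of dependency and vulnerability scanning, gives leaders the evidence needed to demonstrate that agreed timelines are being met.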
Current published advice on open source
Across government, guidance is consistent that code produced with public money should be open and reusable by default, with limited, justified exceptions. This is reflected in the Service Standard, the Technology Code of Practice, the Secure by Design policy, and supporting standards such as the GDS Way and departmental guidance from the Home Office and HMCTS.
In plain terms, this guidance is about making reuse easier, keeping delivery transparent, avoiding duplicate builds and supplier lock-in, and benefiting from external scrutiny, while embedding security from the outset and throughout the service lifecycle.
Government guidance assumes teams do not treat source code visibility as a primary security control. Instead, teams follow the standard practice that secrets are never committed to any repository (public or private), including credentials, API keys or tokens, and private keys. It also assumes public repositories do not include security-sensitive implementation details that would materially increase exploitability if published, such as internal hostnames or IP ranges, admin endpoints, and security controls or thresholds. Even with these controls, where publication would create a specific, credible route to harm, teams can keep code closed as a justified exception, not a new baseline.
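The "no secrets committed" rule is typically enforced with dedicated scanners (such as gitleaks or platform-provided secret scanning) rather than hand-rolled checks, but a minimal sketch of the pattern-matching approach such tools use might look like this. The patterns are deliberately simplified illustrations, not a production rule set:

```python
import re

# Simplified, illustrative patterns for common secret formats.
# Real scanners ship far more comprehensive and better-tuned rule sets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID format
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"(?i)(?:api[_-]?key|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"),
]

def find_secrets(text: str) -> list[str]:
    """Return any substrings that look like committed secrets."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits

# Both lines in this sample would be flagged before the commit lands.
sample = 'aws_key = "AKIAABCDEFGHIJKLMNOP"\napi_key = "abcd1234efgh5678ijkl"'
print(find_secrets(sample))
```

Running a check like this as a pre-commit hook or CI step makes the control enforced rather than advisory, which matters equally for public and private repositories.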
Advances in AI-enabled vulnerability detection
AI-assisted software analysis is improving quickly. The UK AI Security Institute reports that recent frontier models show materially stronger cyber capability in controlled evaluations. Examples include the AI Security Institute frontier AI trends report (December 2025) and the AISI Work Blog post ‘Our evaluation of Claude Mythos Preview’s cyber capabilities’ (13 April 2026). The implication for departments is a shorter window between discovery and exploitation.
AI changes the speed and scale of analysis. This tends to compress the time between a weakness existing and being exploited, making remediation capacity more important than ever. Access to source code can give attackers an advantage by reducing uncertainty and enabling faster, more targeted review, an advantage that may grow with AI assistance.
In practice, that advantage is usually incremental relative to the underlying presence of weaknesses and the speed of patching and mitigation. The leadership judgement is therefore not whether source access matters at all, but whether the additional advantage is significant enough, in practice, to justify moving away from open by default. Attackers can already find weaknesses without source code, such as by probing running services, fuzzing, or analysing binaries and dependencies, and defenders can use the same tools to review and triage faster.
Recent public reporting about organisations restricting access to public repositories due to AI-enabled code analysis illustrates how quickly leaders may reach for blanket closure in response to uncertainty. This guidance presents an alternative, to stay open by default, but to make publication a deliberate decision backed by a minimum maintenance and remediation standard.
In practice, production risk is driven more by secure-by-design architecture and implementation, and by deployment, configuration, dependency hygiene and access control, than by the visibility of application logic through published source code. Many serious vulnerabilities are logic flaws in code, but attackers can often discover and exploit them through other means.
The point is that source visibility usually changes time-to-discovery and attacker uncertainty, rather than being the dominant determinant of whether a weakness exists or how quickly it is mitigated. Teams should focus on a defence-in-depth approach that prioritises secure-by-design delivery, separation of secrets from code, strong environment controls, monitoring, and rapid remediation. These are the controls that are effective whether code is public or private.
Additional considerations
The recommendations above set the default posture. The considerations below provide additional context for leadership decisions, highlighting common pitfalls of ‘closing by default’, and practical factors that affect real-world risk whether code is public or private.
1. Private repositories can create a false sense of security.
Making a repository private can encourage security-by-obscurity thinking, and can reduce the urgency to fix underlying weaknesses.
2. Closing code after publication may not remove exposure.
Where code has been developed in the open, making a repository private later may not remove access for a capable adversary: popular repositories are often mirrored or forked, and even low-profile repositories may already have been indexed or cloned by researchers or attackers.
3. Closure can become a one-way door.
Private repositories reduce reuse and external scrutiny, and over time teams diverge. That makes it harder to make the code public again, because the work required to publish safely and confidently increases.
4. The same tools can be applied to defence.
As discovery accelerates, defence must rely on continuous review, testing and remediation. Openness reinforces this discipline, while avoiding scrutiny does not remove defects and can allow weaknesses to persist.
5. Openness can surface issues earlier.
Public code allows issues to be identified by a wider set of reviewers, including across government and the supplier ecosystem. Closing code concentrates discovery within delivery teams and operational monitoring.
6. Precedent matters.
Broad ‘AI’ justifications for closure are easily copied and, once normalised, they undermine cross-government coherence on reuse and standards.
Minimum standard for publicly-accessible systems
Leaders should treat the following as a minimum bar for making a repository and associated system public. This guidance does not introduce new security expectations; instead, it makes explicit the minimum operational capability already assumed by existing open-by-default and secure-by-design guidance.
As a minimum, you must have:
- a named owner and maintenance plan, including a current service or team owner who is accountable for the repo, its dependencies, and how long it will be supported, visible through CODEOWNERS files or similar
- a security contact and intake route, with clear instructions for reporting vulnerabilities (for example, a SECURITY file and monitored mailbox) and a defined triage process
- no secrets or sensitive operational detail, and enforced controls to prevent committed secrets (tokens or keys), as well as removal of environment-specific hostnames, IP ranges, admin endpoints, or security thresholds where these would materially increase exploitability
- a secure-by-design baseline, with evidence that the system follows these principles (for example, in threat modelling, safe defaults, least privilege, and hardening of publicly accessible endpoints), as operational hygiene cannot compensate for unsafe architecture or configuration
- automated hygiene, with dependency update tooling, vulnerability and secret scanning, and branch protections (such as preventing force-pushes and requiring CI checks) so that fixes can be applied quickly without introducing new risk
- patching expectations, with agreed timelines for addressing critical and high vulnerabilities, including the ability to demonstrate that you meet them
- a safe posture for unmaintained code where, if a repo is not actively maintained, it is clearly marked and archived, and any deployed service is either decommissioned or has an explicit owner and patching route
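Several of the checklist items above can be verified mechanically against a repository checkout. As a hypothetical illustration (the file locations checked are the conventional ones named above; this function is not part of any published standard), an audit sketch might look like:

```python
from pathlib import Path

# Conventional locations for the artefacts named in the minimum standard.
# These paths are common conventions, not requirements from this guidance.
REQUIRED_FILES = {
    "ownership": ["CODEOWNERS", ".github/CODEOWNERS", "docs/CODEOWNERS"],
    "security contact": ["SECURITY.md", ".github/SECURITY.md"],
}

def audit_repo(repo_root: str) -> dict[str, bool]:
    """Report whether each expected artefact exists in the checkout."""
    root = Path(repo_root)
    return {
        check: any((root / candidate).is_file() for candidate in candidates)
        for check, candidates in REQUIRED_FILES.items()
    }

# Usage: results = audit_repo("/path/to/checkout")
# Any False entries flag gaps to close before (or while) publishing.
```

A check like this cannot assess the quality of ownership or triage, only their presence, so it complements rather than replaces leadership review of the minimum standard.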
Making code private is not an appropriate mitigation for lack of ownership, patching capability, or operational assurance, so systems that cannot be safely maintained should be remediated or retired.
If a team cannot meet the minimum standard, leaders should address the underlying operational gap. In practice this means either staffing the capability (often via shared services) so the system is safely operated, or retiring or decommissioning systems (and archiving the associated repositories) that are no longer required, while ensuring that no live service remains without an explicit owner and patching route.
Only once a system meets the minimum operational standard may leaders use the existing exception rule, where publication of otherwise well-maintained code would create a specific, credible route to harm.
Avoid ‘private by default’
The intent is to avoid a ‘private by default’ drift that masks under-resourced maintenance. Moving code from public to private as a substitute for investment in secure-by-design delivery, ownership and remediation is a warning sign: it reduces sharing and scrutiny, can slow coordinated improvement across government and suppliers, and does not remove the underlying weaknesses in a running service.
Departments should treat privacy as an exception control for specific, credible routes to harm, and not as a compensating control for inadequate capability.
Additional resources
For further information, you should refer to:
- NCSC advice on protecting code repositories, to ensure these are sufficiently secure
- DSIT guidance on open configuration and security-enforcing code, to determine which data and code should remain open or closed
- DSIT security guidance for coding in the open, to implement mitigations against hostile activity targeting source control systems
- DSIT Software Security Code of Practice and accompanying NCSC implementation guidance, to ensure a structured approach to integrating security throughout the software development life cycle, including secure coding practices, data management, and configuration management
- NCSC guidance to protect your code repository, to improve and evaluate the security of development practices
- NCSC secure development and deployment guidance, to support secure-by-design delivery, ongoing review, and effective remediation as systems evolve