Event summary: First BISA Policy Lab on governance and artificial intelligence (AI)
In April 2026 we held our first BISA Policy Lab, an event to facilitate conversations on the issue of AI governance between policymakers and academics. In attendance were analysts from the FCDO and Cabinet Office, along with academics from diverse fields including cyber-deterrence, energy security and law, creating lively debate. In this article, MA student Sophie Stringfellow gives a summary of the discussions that took place.
As a current student of International Security, it was a privilege to be involved in an event at the cutting edge of policy and research, powerfully highlighting to me the importance of close collaboration between the worlds of academia and governance.
The timing of the Policy Lab was particularly apt, coinciding with the launch of the UK sovereign AI fund by the Technology Secretary, and followed a day later by news coverage of the advanced AI model Mythos.
Discussion throughout the day made palpable the concerns surrounding AI and its relationship with democracy. From the power of antidemocratic ideology in Silicon Valley and the individuals who control many major AI models, to the advantages held by authoritarian states in mobilising money and infrastructure to develop AI, the potential for artificial intelligence to erode democratic governance was emphasised throughout the three discussion sessions.
I was particularly struck by the conversations on AI and ontological security. While there is often a perception that traditional security and policy approaches dominate policymaking, the use of ontological security approaches to explain state behaviour highlighted the importance of both critical and classical approaches in effective policy creation.
Session 1: The erosion of domestic-international boundaries in technology governance
Session one focused on conceptualising the boundaries of AI, their erosion and creation. Ideas of boundary erosion between domestic and international spheres are far from new, with the push for globalisation and technology diffusion breaking down international and public/private divides in trade and governance. In the technology sector, there have historically been moves to delegate governance to the private sector. However, growing concerns with weaponised interdependence and protectionism mean that such actions are increasingly unlikely.
The race for AI dominance is regularly framed as a bipolar competition between the US and China. Speakers highlighted the possibility of evolution in the US-China race, with potential for a division of powers where the US produces frontier models while China focuses on technology diffusion. However, bipolar framings erase many actors within the issue of AI governance, and there is a need to consider states that balance their alignment between the US and China.
Some concerns surrounding AI may be ontological in nature. The US has been the world’s leading technological power for the last century, and a weakening of that position through the rise of China represents a challenge to its identity and self-image, leading to reactionary fear politics.
While AI proliferation and dependency on a small number of powerful corporations have created concerns around boundary erosion, speakers concluded that a high degree of boundary creation is also apparent. Concerns about the erosion of boundaries imply that solid, identifiable boundaries existed beforehand, an assumption that must be questioned. As such, a transformation of boundaries may be occurring in a dialectical cycle. We need more work to identify all actors involved, as well as the forces impacting the diffusion of AI across borders.
AI is composed of physical and intangible aspects, such as data centres and algorithms. The materiality of artificial intelligence directly impacts not only the development of models but also their boundaries. Boundaries may be horizontal between states, but also vertical, with data hyperscalers eroding vertical boundaries through expansion into infrastructure provision, such as Google’s undersea cable programme. These diverse boundaries create issues of jurisdiction, control and oversight.
Emphasis on technological and data sovereignty, and provision of ‘sovereignty as a service’ by tech corporations, further indicates a movement for control and boundary creation. However, the involvement of US companies in the provision of technology may impact the degree to which ‘sovereignty’ is possible. In the Global South, sovereign technology has long been out of reach, but reinforcement training for LLMs is often carried out in countries such as India, creating issues surrounding the control exerted by Silicon Valley over different groups of people. China is exporting its AI products at low prices to the Global South in return for support in the adoption of specific technical standards, making DeepSeek the world’s most commonly used LLM.
Governance of AI has two main approaches: a securitised approach concerned with managing risks, and a liberal democratic approach allowing for debate. Regions and states are trying to become more attractive to AI developers through their governance approaches, such as proposed legislation in Illinois to absolve developers of liability for damages related to frontier AI systems.
Speakers emphasised during the discussion the risks to states of being a regulatory outlier, and the need for greater coordination. The lack of European firms in the Anthropic-led Glasswing consortium may be an unintended consequence of the EU’s AI Act, highlighting that firms that do not wish to comply with regulation may choose not to work within the EU, or limit the release of their models. In governing AI, speakers noted a need to learn from past attempts at technology governance, highlighting the slow progress and dissatisfaction associated with the regulation of Lethal Autonomous Weapons.
AI has a strong ideological component, affecting its governance and control. Speakers noted the ideological components of LLM training models, which affect the information received by users and allow the import of authoritarian tendencies. Silicon Valley has historically been strongly libertarian, and the tension between AI’s capacity to concentrate power within one firm or government and prevailing market incentives may explain the contradictory behaviour of some actors. Rising antidemocratic sentiment amongst some US technology actors may further complicate the relationship between AI, authoritarianism and democracy.
Session 2: Strategic dependencies in AI adoption
Session two emphasised important facets of AI technology, noting its ability to provoke a tolerance paradox between disinformation and freedom of expression, as well as to generate sentiments of fear and anxiety.
AI development relies on a triad of data, compute and algorithms, with a strategic dependency on US companies through their ownership of compute power and infrastructure. While complete strategic autonomy is not achievable, reliance on external provision entails potential security vulnerabilities and may impact domestic innovation. AI hype has also created economic dependency through the expansion of a speculative bubble.
This session featured perspectives from a variety of states: Greece, Romania, Canada and the UK. Speakers noted that the effects and strategic considerations surrounding AI are highly contextual. For poorer EU states such as Greece there are concerns with the domestic impact of EU artificial intelligence legislation and the cost of applying EU standards, leading to growing euroscepticism and deadlock. There is a need for bloc powers such as the EU to reevaluate the place of weaker states in agenda-setting, as current approaches may be targeted more at external powers than internal.
A bot attack on Romanian elections highlights the impact of AI on democratic freedoms, and the difficulty for democratic constitutional powers of distinguishing between AI-powered disinformation and freedom of speech. There is a need for greater consideration of the security implications of AI, as there is a lack of clear pathways to tackle propaganda and the contest for civilian opinion.
Canada is shifting its approaches to AI and digital sovereignty, particularly in response to US rhetoric and greater awareness of strategic dependencies. While investment has increased, it was noted that past approaches were more liberal. The UK has taken a somewhat different approach to AI, with some advocates suggesting a need to more closely resemble the US in its approach to the technology. A weak capital market and established pathways of UK startups being purchased by US venture capital firms form an embedded structural weakness, despite the UK’s strong abilities in research and development.
Speakers suggested that the power of China in the AI sphere may be due to push factors, with the decline in US soft power making it unattractive to align with. China has the greatest ability to be autonomous, through its vertical supply chain integration and high internal independence of the CCP. It also has strong abilities to exert economic and regulatory power through its manufacturing capabilities and technology export. Not all states have the same capacity to innovate in the model and infrastructural aspects of AI, meaning that there should be a pragmatic focus on capabilities, such as in dataset creation for underserved languages, or in policy innovation through understanding of risk.
Speakers highlighted the potential for hybrid AI models, where states may exert influence to change existing models to fit their needs. The model development race may require a re-examination of IP law and TRIPS in relation to the reverse engineering of technology. In the race between the US and China, actors may need to adopt more ‘middle power’ approaches, inspired by the states in China’s periphery, as refusing engagement on political grounds may impact states’ ability to access technological innovations.
Session 3: Digital sovereignty and alternative governance models
Session 3 examined the use of the language of sovereignty as a method of signifying control, legitimacy and authority to other actors. Ideas of digital sovereignty have historically been used by Russia and China, but the concept has been gaining traction in the EU since around 2015.
While floating signifiers such as ‘sovereignty’ or ‘responsible use’ allow for interactions between normative, descriptive and strategic approaches to policy, they may also have negative impacts on communication and cohesiveness in domestic and interstate policy. Speakers suggested that the EU may be using the language of sovereignty to position itself as a significant player in the sphere of AI despite its lack of infrastructural strength. In light of the EU’s reliance on US firms, sovereignty language may also serve for anxiety management and ontological security, asserting independence despite the contradictions in sovereignty agendas. Other approaches highlighted the concrete erosions of sovereignty through new forms of coercion, and the potential for ideas of sovereignty to be problematic or nationalistic.
Currently states are highly dependent on AI and cloud compute service provision from private firms such as Microsoft, raising concerns that firms may be susceptible to pressure from foreign governments. Speakers noted during the discussion that it is not financially possible for all states to create their own sovereign LLMs, suggesting ideas of pooled sovereignty and greater institutional collaboration. The collapse of multilateral institutionalism since the 2008 financial crisis has led to a breakdown in state-business-society relations, limiting the ability of international organisations to moderate issues and coordinate effective collective action.
Strategic considerations from the discussion included the potential for domestic internet creation to reduce external influence on sovereign AI, and the balance of efficiency versus security considerations surrounding the creation of hyperscale datacentres. Investment into soft power and strategic research may also affect the position of states within trade networks.
Concerns with the push for sovereignty included its impact on democratic deliberation. While risk-taking, speed and medium-term planning are necessary for governance to even attempt to keep pace with technological development, technocratic governance reduces space for liberal democratic approaches. To effectively govern AI there must be accountability and responsibility among corporations, and the knowledge and information for citizens and MPs to scrutinise AI and provider firms. An actively democratic approach is essential, not only for human rights but because it affects what forms of governance are possible.
Some final thoughts
The Policy Lab was a hugely important event for fostering conversations between academics and policymakers. Many speakers at the event noted how knowledge and expertise are often siloed within disciplinary bounds, with few pathways for communicating the needs of policymakers or the experience of those working in research or with the technology in question. I was particularly struck that, despite these issues, participants across the three sessions seemed to share similar perspectives on the issue of AI, hopefully serving to galvanise further action and research community-building on the subject of artificial intelligence and governance.
Following the event all participants were sent a read out and executive summary (the executive summary can be downloaded below). We also asked participants for their thoughts on how best to follow up and take forward these discussions and strengthen engagement between the academic and policy-making communities. We have collectively agreed to hold a second meeting in the autumn to build on many of the suggestions and policy recommendations put forward.