30 April 2026

Inside our AI Summit

AI Summit 2026

On April 23rd, we brought together senior leaders from the public and private sectors at BAFTA 195 Piccadilly to explore how to lead effectively as AI becomes embedded in decision-making. With AI already shaping outcomes, the focus was on leadership: managing risk, defining accountability and building confidence in AI-driven environments through practical, cross-sector discussion.

Thank you to all of our exceptional guest speakers, and to everyone who navigated the travel disruption across London to join us. If you weren’t able to attend, we’ve captured some of the key insights below, and you can also register your interest to join future sessions.

 

Register Interest

In a world of AI hype, complexity, and rapid change, how do organisations lead with clarity?

Over the course of the day, the conversation moved quickly beyond technology. AI is no longer a future concept or an innovation side project. It is already shaping how organisations make decisions, deliver value, and manage risk, often in ways that are not always visible but increasingly influential. This is not simply a technology shift.

It is a leadership challenge.

Bringing Together Senior Leaders

The summit featured a senior and diverse line-up of speakers, including Dr Chris Brauer, Sir William Sargent, Lord John Browne, Simon Walker, Mike Potter, Gemma Ungoed-Thomas, Dr Jenn Barth, Lesley Pink, Christophe Prince, Sat Dayal and Kirstine Dale, bringing together perspectives from industry, government and academia. Sessions spanned AI leadership and strategy, implementation, scaling, cyber security, trust, and responsible AI. This breadth of experience ensured the discussion was both strategic and grounded, offering practical insight into how organisations are navigating the opportunities and challenges of AI today.

The summit opened with Dr Chris Brauer framing AI as an immediate leadership challenge in what he described as the “Age of Intelligence,” highlighting its rapid evolution and the tension between capability and control. Sir William Sargent explored how AI can be directed to create value, reframing content as scalable data and emphasising that impact lies in application, not just output. Lord John Browne took a system-level view, outlining how AI is transforming infrastructure, energy and scientific discovery, not just improving systems but redefining them. From a cyber and national security perspective, Gemma Ungoed-Thomas highlighted how AI is reshaping the threat landscape, lowering barriers for attackers and introducing synthetic environments where trust must be actively verified.

These themes were reinforced in a panel discussion on leadership, trust, and risk, which emphasised the increasing complexity organisations face as AI accelerates both opportunity and exposure. Simon Walker and Mike Potter focused on the challenge of scaling AI, noting that while many organisations have piloted solutions, far fewer have achieved enterprise value. The summit closed with a Responsible AI panel led by Dr Jenn Barth, alongside Lesley Pink, Kirstine Dale, Christophe Prince and Sat Dayal, which emphasised that responsibility is not a one-off framework but a continuous leadership discipline shaped through decisions, behaviours and culture.

Across the day, several consistent themes emerged.

 

“If AI is a teenager, we are the guardians and architects of its environment” – Dr Chris Brauer

The shift from experimentation to execution

Organisations are moving beyond early pilots, but the real challenge lies in scaling AI in a way that is repeatable, governed and delivers meaningful enterprise value.

AI as a leadership challenge

As AI becomes embedded in decision-making, accountability remains with leaders, requiring greater clarity around ownership, judgement and how outcomes are informed.

Trust, risk and the evolving landscape

AI is reshaping cyber risk and introducing synthetic environments where distinguishing what is real is increasingly difficult, meaning trust must be actively designed, tested, and verified.

Value creation

While productivity gains are important, the greatest opportunity lies in using AI to unlock new products, services, and operating models that drive long-term value.

Prevention to resilience

As the threat landscape evolves, organisations must move beyond trying to prevent every risk and instead build resilience, ensuring they can detect, respond to and recover from disruption.

People and culture as critical enablers

Successful AI adoption depends on how people engage with it, making culture, capability, and behaviours just as important as the technology itself.

Organisations are no longer asking what AI can do. They are asking how to apply it at scale, responsibly and with confidence.

While access is no longer the barrier, the real challenge lies in embedding AI effectively within complex organisations.

Scaling AI
The real barrier

One of the most consistent themes was the challenge of scaling AI. Many organisations have successfully piloted AI. Far fewer have embedded it at an enterprise level. The gap is rarely technical. It is organisational, rooted in how teams are structured, how decisions are made and how initiatives are prioritised and governed.

Without the right foundations in place, even the most promising AI initiatives struggle to move beyond pilot into production. Momentum is lost, value is not fully realised and efforts remain fragmented across the organisation.

Organisations that are successfully scaling AI tend to share a number of common characteristics:

Clear prioritisation of use cases

Alignment around measurable outcomes

Operating models designed for repeatability

Governance and security embedded from the outset

Trust, Risk and a
Changing Threat Landscape

A key theme across the summit was the evolving nature of risk.

AI is lowering the barrier to entry for cyber attacks, enabling more sophisticated threats to emerge at scale. At the same time, synthetic environments, including AI-generated content and deepfakes, are making it increasingly difficult to distinguish between what is real and what is not.

This introduces a fundamental shift: trust can no longer be assumed; it must be verified.

In response, organisations are moving beyond prevention toward resilience, focusing on their ability to detect, respond and recover.

Lower barrier to threat

AI is lowering the barrier to entry, enabling a wider range of actors to carry out more sophisticated cyber attacks at scale.

Rise of synthetic interactions

AI-generated content and deepfakes are creating highly convincing scenarios designed to influence behaviour and trigger real-world actions.

Trust is becoming a vulnerability

In an environment where interactions can be convincingly fabricated, trust can no longer be assumed; it must be actively verified.

Secure by design, resilient by default

Organisations must assume disruption will occur and focus on their ability to detect, respond, maintain operations and recover effectively.

Responsible AI in Practice

Responsible AI was a central thread throughout the day. Rather than being treated as a one-time framework, it is increasingly understood as an ongoing discipline, shaped through decisions, behaviours, and organisational culture.

AI systems reflect the data and assumptions behind them. As a result, responsibility is not fixed; it evolves over time.

Organisations that succeed will be those that embed transparency, accountability and trust into how AI is designed and applied.

 

Nearly half of AI-using workers say they don’t understand how AI works.

Only 26% of employees say that their company has a clear and formal AI strategy.

49% of the leaders in our research say they know how to evaluate the business benefits of AI investments.

64% of leaders cite governance uncertainty as a barrier to scaling AI adoption.

57% of organisations are still in the experimentation phase, using AI for pilots and discrete functions only.

 

Stats provided by Symmetry Research.

 

 

A Leadership Imperative

Across every session, one message stood out:

AI is not just a technology challenge; it is a leadership one.

Leaders are now operating in an environment where decisions are increasingly influenced by intelligent systems, risk is evolving faster than traditional models can keep pace, and trust is becoming harder to establish. Success will depend on the ability to move from experimentation to execution, balance innovation with responsibility and lead with clarity in an increasingly complex landscape.

We’ll be sharing further insights and outputs in the coming weeks, alongside a series of smaller, roundtable discussions focused on specific challenges. If you’re interested, please register below.

AI is already transforming organisations. The question is how deliberately and how responsibly we choose to lead that transformation.

 

 

Register Interest for Future Events
Supporting our Community

Alexander Devine Children’s Hospice

Fund a Nurse Appeal

We’re proud to be supporting Alexander Devine Children’s Hospice Service, a charity providing specialist care to children with life-limiting conditions and their families. We’re raising funds for their Fund a Nurse appeal, and we raised a total of £12,000 at our Summit. Please watch the video message from CEO Fiona Devine about the appeal and why they need more support. If you would like to donate, please see the link below.

On behalf of everyone at Alexander Devine and FSP, thank you.

 

 

DONATION LINK

Our AI Summit
Leadership Guide

Coming soon

Please email aievent@fsp.co if you would like to receive a copy