Paul Dongha is a pioneer in AI, Big Data and technology whose vision has driven the implementation of AI in large financial organisations. Currently Head of AI Strategy and Responsible AI at NatWest Group, he has advised boardrooms and governments on the importance of developing a global standard for AI that is both ethical and responsible – the why and the how, if you like – of this rapidly evolving technology. Paul is a seasoned and inspiring speaker who has addressed audiences around the world on how to navigate the intersection of AI and human rights.
Dr Paul Dongha is a deep technologist who has pioneered the development and implementation of responsible AI strategies in large financial organisations. His approach brings together the troika of technological advancement, corporate integrity and the responsible use of AI. Paul is currently Head of AI Strategy and Responsible AI at NatWest Group and has extensive hands-on experience of building service-based, high-performance systems that use Big Data. Paul Dongha is the author of Governing the Machine: How to Navigate the Risks of AI and Unlock Its True Potential, which was named as an FT Book of the Month in 2025. He has given keynotes on both sides of the Atlantic, including speaking at the prestigious NY AI Summit.
For over 30 years, Paul Dongha has been an expert on Agentic AI models: autonomous systems that use advanced reasoning to set goals, plan and solve problems with minimal human intervention. Paul believes AI should be used as a tool to augment human work rather than simply replace it. His experience and expertise have guided business leaders looking to balance innovation with accountability and trust as they implement this fast-changing technology. He believes that the adoption and deployment of AI at scale is a leadership imperative that must be driven from the boardroom, not the IT department.
As a visionary leader, boardroom advisor and advocate for human rights, Paul believes that for AI to fully benefit humankind, business must mitigate its risks and pitfalls with the sensible implementation of guardrails. He is an active participant in international forums, policy discussions and projects aimed at setting global standards for responsible AI. As well as working with the UK Government, Paul works alongside the UK Competition and Markets Authority and the UK banking regulators, the PRA and the FCA.
Paul Dongha has built the framework and driven the mindset of AI policy in many large firms, including Lloyds Banking Group, where he was Group Head of Data and AI Ethics. He has also held posts at Fujitsu Research, HSBC and Credit Suisse. Paul studied for a BSc, Master's and PhD at Manchester University, publishing his thesis on Agentic AI in 1996 – many years before the field entered mainstream awareness.
Paul has also volunteered for the St Giles charity, where he provided mentorship and career advice to adult caseworkers supporting repeat drug-related offenders. Additionally, he has mentored sixth-form and university students, and has delivered lectures on Generative AI on Harvard Business School's MBA Program.
Adopting and deploying AI at scale is not just an IT activity; it is a leadership imperative that demands involvement from multi-disciplinary teams. It needs to be driven from the boardroom with clear accountability and a strong mandate.
Designing Foundational Governance Frameworks: Master the blueprint for building an AI governance structure...
Bridging the Strategic Gap: Master the art of aligning AI roadmaps with core business objectives, ensuring that technology serves as a value-multiplier rather than a siloed experiment.
Architecting the AI-Ready Culture: Insights into mobilizing cross-functional leadership and creating organizational “data-literacy” required to move from initial pilot programs to enterprise-wide adoption.
...
A step-by-step roadmap to help you mitigate some of the biggest categories of risk, even as the technology rapidly shifts:
accuracy and reliability – how can you measure accuracy in AI and what is ‘good enough’?
fairness and bias – how to identify and resolve issues that can alienate...
Cultivating an AI-First Mindset: Strategies for shifting organizational culture from technology-aversion to “augmented intelligence,” ensuring employees view AI as a partner in productivity rather than a threat to job security.
Strategic Upskilling and Reskilling Roadmaps: A blueprint for identifying future-critical skills and implementing continuous learning frameworks that evolve as rapidly as...
Architecting Carbon-Efficient AI Systems: Strategies for optimizing AI use (e.g. utilizing smaller, “lean” architectures and grid-aware computing) to reduce energy consumption without sacrificing accuracy.
Navigating the Ethical-Environmental Trade-off: Expert frameworks for balancing the “hidden costs” of AI—including water scarcity for data center cooling and the extraction of rare minerals—against the strategic business...
Demystifying Autonomous Agency: Distinguishing between scripted automation and true “agentic” behaviour, with an easy-to-grasp blueprint to help leaders identify which systems can actually reason, plan and execute multi-step tasks, and which simply cannot.
The Reality of the “Agentic Loop”: A deep dive into the current limitations of LLM-based agents—from hallucination risks...
Navigating the Path to AGI: Decoupling the hype from technical reality by defining “Artificial General Intelligence”—from today’s emerging reasoning models to systems that match human cognitive flexibility across every domain.
The Superintelligence Divide: Strategic analysis of the transition from AGI to ASI (Artificial Superintelligence), exploring the “intelligence explosion” hypothesis and the...