As one of the world’s most in‑demand Artificial Intelligence Speakers, Daniel has earned recognition as CEO and founder of Satalia (acquired by WPP in 2021), and now serves as Chief AI Officer at WPP plc.
He holds a PhD in Computational Complexity from UCL and has advised influential bodies, including the UK Home Office, the UAE National AI Strategy, and City AI. A four‑time TEDx speaker, he has delivered compelling keynotes on AI, ethics, innovation, decentralisation and organisational design for major organisations such as Google and PwC.
In this exclusive interview with The Champions Speakers Agency, we delve into Daniel’s unique blend of academic rigour, entrepreneurial drive, and public engagement. He reflects on his journey from Morecambe to global stages, discusses the transformative potential of AI, and shares insights into how purpose‑driven organisations can harness emerging technologies for true societal impact.
Q: As the CEO of Satalia, a global AI company, how have some of the AI-integrated products and services you have developed transformed business operations?
Daniel Hulme: “Well, I have over 25 years of experience in AI, and 15 years ago I started a company that's been building AI solutions for some of the biggest companies in the world. One of our biggest clients is Tesco, who deliver to 200,000 people every single day. We built all of the algorithms that power their last-mile delivery solution.
“Not only are we able to get significantly more deliveries out of their infrastructure, we're able to reduce the number of miles driven by about 20 million miles, which is to the Moon and back 50 times. So these AI solutions, if applied in the right way, can have a massive impact on an organisation's carbon footprint.”
Q: Looking ahead, what do you see as the most groundbreaking applications of AI, and how might they redefine the way we produce and access essential goods and services globally?
Daniel Hulme: “I've spent over a decade educating industry leaders and politicians about the impact of these technologies on society and actually, over the past few years, developed a framework to help people understand how these technologies can be applied to transform businesses.
“But of course, any technology that has a material impact on people's lives—whether it be aerospace or automotive or pharmaceuticals—should be regulated. So actually, more recently, I've been engaging with governments to try to understand how to put the right guardrails in place to protect ourselves against the risks of AI, but also enable organisations to innovate.
“I guess what I'm excited about is being able to use AI to remove friction from how we create and disseminate goods—food, healthcare, education, nutrition, energy—to bring the cost of those goods down so cheaply that they become abundant. So imagine being born into a world where you don't have to worry about working to pay for food. It's all there, it's all free.
“But of course, these technologies can be weaponised. We could potentially create a post-truth world, we could have mass technological unemployment, we could create surveillance capitalism or even a superintelligence. So we have to make sure that we're using these technologies to make the world better and not walking into some of these risks over the next few decades.”
Q: Public perceptions of AI are often shaped by fear and media-driven narratives. In your view, is this scepticism justified, and how should we be framing the real risks and potential of these technologies?
Daniel Hulme: “I think people tend to be fearful about the unknown, and unfortunately, the media tend to propagate and leverage that fear. So that's why I spend a huge amount of my time educating business leaders and politicians about what these technologies are and aren't. I really do this as a passion. I did a TEDx talk a few years ago which was really focused on the risks associated with AI.
“You might have heard the term ‘singularity’. It gets used loosely by self-styled AI philosophers, but the term was adopted by the AI community to refer to the point in time when we build a superintelligence: a brain smarter than us in every single possible way. But I actually argued there are six singularities.
“I used a PESTLE framework (if you've ever done a business degree or written a business plan, you'll have come across the PEST or PESTLE framework) to talk through some of the risks these technologies carry: risks associated with a world where we don't know whether the content we're engaging with is true, or a world where we cure death.
“What would that world look like, where we might have overpopulation? So I've got a very nice framework to help people understand what the potential risks are associated with AI, but also how we can mitigate and steer society towards using these technologies for good.”
Q: While AI is often linked to profit and performance, how can businesses also harness it—and other emerging technologies—to deliver genuine social impact and purpose-led innovation?
Daniel Hulme: “My company was acquired by WPP. WPP is one of the biggest media and marketing agencies in the world. They essentially enable organisations to grow, to identify new audiences, to get the attention of those audiences, to help connect goods and services with those audiences.
“But one of the things I'm really passionate about is not just enabling organisations to grow. My hope is that organisations have a strong purpose. If you don't have a strong purpose, you're not going to attract talent, you're not going to attract customers.
“So I hope that that growth that we unlock actually enables organisations to achieve their purpose. And actually, I believe it's the collective purpose of enterprise that will make the world better.
“So we, as consumers and contributors, get to vote with our feet. We choose who to buy our goods from, we choose who to work for, and I encourage people to engage with organisations that have an incredibly strong purpose. It's the purpose of enterprise that will make the world better.”
Q: There’s growing debate around AI ethics and safety. What guidance would you give to organisations seeking to deploy AI responsibly while ensuring transparency, accountability and long-term trust?
Daniel Hulme: “I have a controversial view around AI ethics. First of all, I don't think there is such a thing as AI ethics, even though there are lots of people rebranding themselves as AI ethicists. Ethics is the study of right and wrong. What happens is that human beings create an intent. Their intent is to utilise a technology to maximise, I don't know, employee engagement, or to route their vehicles efficiently, or to spend money across their marketing channels to maximise attention.
“You have an intent, and you then build technologies or apply a solution that tries to achieve that intent. Where that solution gets it wrong, whether it's biased or it overachieves its intent and causes harm elsewhere in the system, I would argue that's a safety problem. What you've done is design a system whose behaviour you don't know how to predict.
“I think there's a lot of confusion between AI ethics and AI safety. Ethics is the study of right and wrong, and there are already well-established ethical frameworks, standards, and procedures in place to scrutinise the intent.
“And there are more and more methodologies and technologies available to help us understand how to build AI-safe systems. That's what I've been doing for the past 15 years—ensuring the technologies that we're building are being applied in a way that is explainable and transparent and governable.”
This exclusive interview with Daniel Hulme was conducted by Mark Matthews of The Motivational Speakers Agency.