Tabish Ali
6 days ago
Business

Milton Keynes local, leading in AI: Richard Foster-Fletcher says ethical tech is no longer optional

Based in Milton Keynes, Richard Foster-Fletcher is a globally recognised authority on the ethical advancement of artificial intelligence. As a leading voice in digital ethics, he brings vital insights to a region quickly becoming part of the UK’s wider tech transformation.

Richard Foster-Fletcher - Champions Speakers Agency

Named one of the Top 100 Most Influential People in AI Ethics, Richard is the Executive Chair of MKAI – the Milton Keynes Artificial Intelligence group – and a renowned adviser to global organisations and governments. He has hosted over 300 expert interviews on AI governance, inclusion, and sustainability.

One of the UK’s most sought-after Artificial Intelligence speakers, Richard regularly presents at international forums, including the United Nations AI for Good Summit. He is also a member of the EU’s Horizon Europe initiative, helping to shape responsible AI policy across sectors.

In this exclusive interview with Champions Speakers Agency, he shares his perspective on AI, trust, and global inclusion.

Richard Foster-Fletcher - Champions Speakers Agency / AI Speakers Agency Credit: https://ai-speakers-agency.com/speaker/richard-foster-fletcher

Q: As artificial intelligence becomes integral to business operations, how do you foresee the evolution of digital ethics and inclusivity shaping leadership priorities, particularly for everyday businesses?

Richard Foster-Fletcher: “I see three distinct categories of businesses when it comes to AI and ethics.

“The first is the tech companies. They're moving at the speed of light and their challenge is to harness the latest and greatest hardware and people. So I think it's interesting for them to try and incorporate ethics into that too, but to some extent they're working at the absolute cutting edge of what's happening in the sector. So I think they face a great challenge.

“The second group are the regulated industries—think about finance and health and so on—and for them I think the main focus is staying legal. It's understanding the regulation that's coming through and changing, and how do they run their models and manage their data in terms of privacy and security and ethics around that.

“And the third is this bucket that's everybody else. I want to talk about that specifically because that's most businesses. And they're not at the bleeding edge, they're not in regulated industries, so why do they care?

“Well, they care because we're moving into an age now where leaders need to understand digital ethics and to lead in the age of AI. And when it comes to the ethical use of AI, you simply cannot ignore this anymore—it's not a 'nice to have'.

“So these leaders need to be able to look at the decisions and the outcomes in a business and be able to have the kind of processes that can reverse-engineer that and say, "Wow, we didn't get what we were thinking we would get there," or, "We got something that was harmful or damaging to people or our brand."

“So how do we go back and change that? They've got to understand what's happened in terms of the data and the people and the processes and the models to an extent that they can say, "We need to modify the way we did that to get the output that we want."

“Finally, I think the evolution of inclusivity and data ethics in business needs to account for the fact that the public's perception of trust has changed.

“If we go back a decade or so, everybody put everything on social media. It's almost like we went into that with our eyes closed. But people are not going into AI with their eyes closed. They're very concerned about the data that's being shared into platforms like ChatGPT.

“So we've got a very different narrative now. We used to have people—and I've heard very senior people say this to me in the past—"I am absolutely fine with sharing my data with large technology companies as long as it benefits me." But they're not saying that anymore.

“They're now asking questions like, "Can I trust these autonomous systems not to exploit me?" So the rules have changed. People are much more wary about what you're doing with their data because we've seen what happened in social media.

“We've seen the harms, we've seen the damage, and we don't want to live through that again—or have an extrapolation of that where it's potentially even worse with AI. So leaders have got a lot on their plate. They need to think very carefully about that.”

Q: You’ve travelled extensively, advising governments and institutions. How do you see AI transforming emerging global markets—and what are the risks for regions whose data and culture aren't represented in the models?

Richard Foster-Fletcher: “I've been travelling quite a lot recently, presenting and working with governments in places like Tunisia and Turkey.

“In Tunisia, it was quite interesting to see not only have they established an AI university from one of the management schools, but they're actually launching it in English rather than their usual French—which is an indication of how they want to connect more with the global market and the work that they're doing.

“The worst thing they can do is get left behind on these sorts of technologies. If we look at the US, apparently 70% of businesses are now using ChatGPT. But let's pause that thought for a second, because a lot of the talk that we hear in places like Tunisia and Turkey and others is about the cutting edge. They're excited about the sorts of breakthroughs that they can be a part of in areas like health, agriculture and climate change, and industry and manufacturing.

“But my message to those leaders is, let's not forget that when we talk about the majority of AI implementations, the overwhelming majority is going to be everyday companies—small companies—using platforms like ChatGPT, along with Gemini, along with other options like Claude and Perplexity, just to mention those as well.

“And so what are the issues around that? There's a tremendous potential uplift in productivity from those organisations jumping in and using those low-cost and no-cost tools.

“But let's look at some of the data behind that: 55% of websites are in English. 50% of all internet traffic goes to US companies.

“So it's not just asking how do we deliver cutting-edge research and AI? It's not just asking how do we get companies empowered to be using these tremendously useful platforms like ChatGPT?

“But asking, hold up—if it's been built on websites and on traffic that's got nothing to do with Tunisia, Turkey, other places—how relevant is it? How useful is it? And what are the risks?

“How could it impact our culture, our sovereignty, our morality, our customers in this country if we're using platforms that were built on data that is simply not aligned to the way that we think and the way that we work?

“So can they leapfrog? Absolutely. Can they be a big part of the AI story? Absolutely. But I think to some extent, it needs to be on their terms, and we've got to work out how to do that.”

Q: When speaking with business leaders, what are the most common misconceptions about artificial intelligence that you encounter—and how can they avoid critical mistakes?

Richard Foster-Fletcher: “I've seen three common misconceptions about AI that I think are quite troubling.

“The first is that suddenly, somehow, CEOs of companies have decided that AI is possible even though they've never fixed their data. But the benefit of this now is that they're going home, they're seeing their partner or their children, whoever, using these generative AI platforms like ChatGPT, and they're thinking, "Wow, we've got to do that in the workplace."

“So they come in, they speak to their CTO, their CDO, and the CTO/CDO can finally come back and say, "Great, if you want to do that, we've actually got to take a proactive approach to sorting out our first-party data or our third-party data."

“So it's the opportunity that people in organisations have been looking for to get the fundamentals sorted, so that the CEO's vision around large language models and generative AI becomes achievable. And when people ask me, "How should I learn about AI?" I say you should learn about data. You've got to understand the fundamentals of data before you can get anywhere with artificial intelligence.

“The second thing that I see, the common misconception, is that they can stop their employees using ChatGPT. You cannot. Everybody's got second, third, fourth devices. People are working from home.

“Look, let's be very, very clear: ChatGPT is the most incredible product ever because it does your work for you. Imagine someone saying to you, "I'll give you the greatest app in the world." What's that? It's an app that literally does your work. Okay, how can they not use that?

“So your strategy must incorporate the fact they're going to be sharing data into these models, they're going to be using them at home or out of sight of you or your IT team. So have a strategy that works with that rather than against that.

“The third thing is that CEOs think they can replace their staff with AI. To some extent, of course, this may be possible and we see a lot of headlines. But my feeling is they should come at this with a completely different approach—something that's called the SAMR model.

“SAMR stands for Substitution, Augmentation, Modification, and then Replacement. But you've got three aspects before the replacement: Substitution, Augmentation, and Modification. That's the area that I think CEOs should focus on when they're looking at the power of AI for productivity and for augmenting, substituting, and modifying their workforce, rather than just replacing.”

This exclusive interview with Richard Foster-Fletcher was conducted by Mark Matthews.

For More Information: Champions AI Speakers