An AI Engineer at AI Summit London 2025

Hanna Paasivirta · AI Engineer · Jun 16, 2025

Agentic Systems, Implementation Challenges, and Strategic Choices

“We’ve spent 70 years teaching humans to speak to machines. Now it’s time to flip this round.” — Sudarshan Deshmukh, AI Architect at Laya Healthcare

There’s real excitement in that statement, but the AI Summit London 2025 made it clear we’re still figuring out what it really means. The summit brings together business leaders, technologists and policymakers to explore commercial AI applications. I headed to the conference to see what’s working, what isn’t and what’s coming next. Here’s the conference through my eyes.

No. 1 hot topic: Agents

“From AI That Knows to AI That Does” — Daniël Rood, Director of AI GTM, UK/I & SSA at Google Cloud

Agentic systems were undoubtedly the most talked-about topic this year. Practically every presenter mentioned their rise and gestured toward the implications – though often it felt like they were pointing at thin air.

If you had entered the conference without knowing what agents are, you could easily have walked out just as ignorant. There weren’t many detailed use cases to help figure out what anyone is actually doing with them – but that’s inevitable in an exploding field where the ideas are racing ahead of practical applications.

One exception was a brief fireside chat at the “Next Generation” stage with Rodrigo Liang, CEO of SambaNova Systems, who offered a cross-cutting view of agentic systems, covering not just their effects on hardware and acceleration (his company’s focus) but also what to expect next.

Liang argued that agents will reach production faster than Large Language Models (LLMs) because of their more transparent nature. When an agent generates a report or completes a task, you can run more predictable tests to validate the output, making it easier to trust and deploy quickly. LLMs, he argued, serve as excellent orchestrators for these agentic systems, and the results are significantly better than what we’ve seen before.
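
Liang’s testability point is easy to picture in code. Here is a minimal sketch, assuming a hypothetical run_agent function that returns a structured report – the field names and checks are illustrative, not any real API. Because the output has a known shape, you can gate it on deterministic checks instead of eyeballing free-form text:

```python
# A minimal sketch of the kind of predictable test Liang describes:
# deterministic checks on an agent's structured output.
# run_agent() is a hypothetical stand-in, not a real API.

def run_agent(task: str) -> dict:
    # Stub for a real agent call; imagine structured output from an agent.
    return {"title": task, "summary": "Q2 sales grew 8%.", "sources": ["crm"]}

def validate_report(report: dict) -> list[str]:
    """Collect every validation failure rather than stopping at the first."""
    errors = []
    for field in ("title", "summary", "sources"):
        if field not in report:
            errors.append(f"missing field: {field}")
    if not report.get("sources"):
        errors.append("report cites no sources")
    if len(report.get("summary", "")) > 2000:
        errors.append("summary exceeds length limit")
    return errors

problems = validate_report(run_agent("Summarise Q2 sales"))
if problems:
    raise ValueError(f"agent output failed validation: {problems}")
print("report passed all checks")
```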

But there’s a huge catch. Cost explodes with the introduction of agents, which Liang estimates are 100 times more expensive. And speed is equally critical: if a system triggers a sequence of 20 agents before responding, and each takes one second, that’s unacceptable. He sees computational efficiency as key to solving both problems by maximising performance per watt and enabling rapid retrieval of data and models. I do wonder if there will be other innovations too, in the ways that we build and apply agentic systems, to help avoid the problem of 20 x 1-second agents in the first place.
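
To make the latency arithmetic concrete, here is a small sketch using Python’s asyncio, with a one-second sleep standing in for a real agent call (agent_step is illustrative, not any framework’s API). Twenty sequential steps cost roughly twenty seconds, while steps that don’t depend on each other can run concurrently in roughly one:

```python
# Sketch of the 20 x 1-second problem: a sequential chain pays the sum of all
# latencies, while independent steps can run concurrently.
# asyncio.sleep(1) stands in for a real one-second agent call.
import asyncio
import time

async def agent_step(i: int) -> str:
    await asyncio.sleep(1)  # simulate a 1-second agent call
    return f"result {i}"

async def sequential(n: int) -> list[str]:
    # Each step waits for the previous one: total latency ~ n seconds.
    return [await agent_step(i) for i in range(n)]

async def concurrent(n: int) -> list[str]:
    # Only valid when steps don't need each other's outputs: ~1 second total.
    return list(await asyncio.gather(*(agent_step(i) for i in range(n))))

start = time.perf_counter()
asyncio.run(concurrent(20))
print(f"20 concurrent 1-second steps took {time.perf_counter() - start:.1f}s")
```

Of course, this only helps when the steps are independent; a genuinely sequential chain still needs faster inference, which is where Liang’s performance-per-watt argument comes in.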

The in-house/outsourcing dilemma & open source

“Doing everything yourself will slow down your rollout of GenAI.” — Christian Lau, CPO at Dynamo AI

Another theme that emerged was strategy around what to do in-house versus what to outsource or buy. Every company has been spending effort building in-house AI copilots, email classifiers and summarisers, and anyone rolling out GenAI products is dealing with the same safety and regulation complications. Most of the companies selected for the small startup space focused their branding, at least, on providing security, reliability and privacy for corporate AI systems, and several offered help with guardrails and regulation.

The trade-offs of implementing LLM-based systems ourselves are something we’re constantly thinking about at OpenFn as well. In a few talks, the need to outsource work and get additional support was cited as an obstacle to open source AI – closed source means convenience, and in practice that outweighs concerns around privacy, adaptability and ownership. For a year in AI that, to my mind, has been rocked by open source, open source didn’t actually feature much at the conference – perhaps because of these practical difficulties. But some point to a change: Liang highlights open source as a lasting and expanding foundation of the field, and Keegan McBride from the Tony Blair Institute disputes the premise of the session he has been invited to, “The Case for Open Source AI: Can It Thrive in a Closing Industry?” It is an opening industry, he says, and there has been a significant volte-face – in government, for example – with open source models now seen as key to innovation rather than as security risks to constrain.

The reality of implementation challenges – the pressure to prioritise speed and ease – has been true for us too in building with AI. But I think OpenFn represents one answer to the dilemma discussed throughout the conference: you can have it both ways, with complete transparency and ownership through open source alongside the full support that makes implementation practical.

A toolkit or workforce?

“EverWorkers have a deep understanding of your business processes, applications, data, and culture, just like your best employees.” — ChatGPT or something, at EverWorker

Another underlying theme was agents versus humans. It surfaced as a tension between some of the exhibiting companies and the strategy talks. Among the exhibitors there was talk of managing “a hybrid workforce”, with agentic AI companies promising you new 24/7 employees. This narrative doesn’t fill me with inspiration, and I wonder if the branding will feel out of touch in a few years if we’re all dealing with layoffs in one way or another.

In the talks, however, more than a few speakers emphasised that AI is a tool for humans – that it will let employees focus on the hard problems and the defining breakthroughs. From a more practical viewpoint, Dara Sosulski argued that HSBC doesn’t need a new foundational LLM in its toolkit every year; what will have impact is letting employees get familiar with the tools they already have and actually use them. Eric Bowman, CTO at King, edged perhaps a touch further, saying that AI will let you double your achievements without doubling your workforce – though the focus of his talk was on amplifying the “human touch”.

You could say that this contrast just highlights how PR-conscious CEOs are sidestepping the kind of backlash the CEO of Duolingo recently got for stating he would replace his contractors with AI. But I think this is just basic leadership: looking at the most valuable assets you have and how to grow them, combined with an understanding of what AI actually is.

How to AI

“90% of AI projects fail.” — unclear source, but repeated by several people

The more high-level the discussion, the more there were questions along the lines of “how do I use AI to help my business?”, always met with the answer “it depends on your problem”. GenAI as we now picture it is some years old, but finding use cases and applying it well are still real challenges. There was still a lot of noise about “needing to cut through all the noise”. In my brief ventures into the creative industries talks, there seemed to be even more disappointment and resentment – discussion centred on failed projects and tools that aren’t serving people. Across different contexts, speakers cited statistics on how many business AI projects fail.

These failures, to me, are not disappointing or any kind of proof against the value of AI. ML is an empirical field based on trial and error. Experimentation is inherent to it and an essential part of implementing it successfully.

What’s worth our time?

“Build it like it matters. Ask yourself, will this make sense in 10 or 20 years’ time?” — Kerry Sheehan, Advisor at the British Standards Institute

Experiments take time and resources. As I pick between the stages at the conference, deciding where to invest my time, I start picturing the venue as a career metaphor. I choose the sustainability stage for some talks and hear about the risks and failures of AI, but I keep wondering whether a hardware talk in the next room is busy solving the environmental crisis.

Jessica Chapplow from Heartificial Intelligence makes an effort to highlight constructive positive examples, like Samsung’s use of wastewater for data centre cooling. She also details a use case for deepfake technology in sensitive documentaries, emphasising the importance of developing AI applications with positive intent. She demonstrates how the same technology that generates legitimate concerns can also create meaningful benefits when thoughtfully applied.

However, the discussion immediately shifts back to familiar territory when another speaker counters with predictable cautionary examples. One speaker concludes by rehearsing a bullet list of problems, and we hear about issues ranging from endangered languages to mental health and international relations.

I’m not sure what to do with this list. Maybe one of the cautionary tales will help me prevent harm in something I’m working on one day, and I do find myself increasingly thinking about the unemployment and mental health crisis predicted by Dr Maha Hosain Aziz from NYU.

But I feel a lot more productive back at the other end of the venue, at the “Next Generation” stage. This is where the same challenges are being tackled bit by bit, rather than just raised; where I can find something I can grasp and build with – something that might actually make sense in 20 years’ time.
