
The leap from LLMs to AI Agents


Imagine a dictionary so powerful it can not only finish your sentences but also predict the next ones. That’s essentially what a large language model (LLM) does. It generates text, answers questions, and helps you write faster. 

Now imagine something smarter. Instead of just suggesting words, it books your flight, manages your inbox, or runs a compliance check. That’s an AI agent. Not just language, but action. That’s where things get especially interesting in the world of finance, because actions carry consequences. 

Finance runs first on trust, then on tech

Banks, insurers, and payment providers all run on vast digital systems. With AI agents entering the scene, the possibilities are enormous. Think of customer service bots that resolve issues in seconds, assistants that monitor trades around the clock, or tools that can scan thousands of contracts and highlight risks almost instantly. These agents can transform efficiency and create new value. 

But the very same power comes with new vulnerabilities. What happens if an AI agent makes the wrong call? If it sends data where it shouldn’t, or simply goes offline in the middle of a critical process? That’s no longer a minor IT hiccup; it’s a potential threat to financial stability. That’s exactly the challenge that Europe’s new regulation, the Digital Operational Resilience Act (DORA), aims to address.

Enter DORA

DORA is the EU’s way of saying “innovation is welcome, but resilience is non-negotiable.” Since January 2025, every financial entity in Europe has been required to meet the same digital resilience standards.

At its heart, DORA is about preparedness. Can your company keep operating if digital systems fail? Do you have the ability to detect problems quickly, respond effectively, and report major incidents to regulators? Are the technology partners you depend on, whether they’re cloud platforms, data providers, or AI vendors, also capable of meeting those same standards? By setting one clear framework across all member states, DORA ensures that resilience isn’t a patchwork of national rules, but a shared foundation for Europe’s financial system.

Where AI agents fit in

AI agents are powerful precisely because they can take action. But that also makes them part of a company’s operational risk. Under DORA, they must be treated just like any other critical ICT system. That means firms need to set guardrails so agents can’t act outside of safe or compliant boundaries. It means that if an AI-driven process exposes sensitive data or triggers an error, it must be recorded and reported as a digital incident. And it means that every external AI model or API a company relies on becomes part of the third-party ecosystem that must be carefully monitored and tested. 
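To make the idea of guardrails concrete, here is a minimal sketch of the pattern: wrap every agent action in a policy check, and record anything blocked so it can later be reviewed and, if serious, reported. All names and rules here are hypothetical illustrations, not part of any specific framework or of DORA itself:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical policy: the only actions this agent may take.
ALLOWED_ACTIONS = {"send_report", "flag_transaction"}

@dataclass
class IncidentLog:
    """Records blocked or erroneous agent actions for later review."""
    entries: list = field(default_factory=list)

    def record(self, action: str, reason: str) -> None:
        self.entries.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "reason": reason,
        })

def guarded_execute(action: str, payload: dict, log: IncidentLog):
    """Run an agent action only if it passes the guardrails."""
    if action not in ALLOWED_ACTIONS:
        log.record(action, "action outside allowlist")
        return None
    if action == "flag_transaction" and payload.get("amount_eur", 0) < 0:
        log.record(action, "invalid amount")
        return None
    # In a real system this would dispatch to the actual backend.
    return f"executed {action}"
```

In this sketch, an attempt to call something like `initiate_payment` is simply refused and logged, because moving money was never put on the allowlist. The same pattern scales to limits on amounts, data destinations, or working hours.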

In other words, AI agents don’t sit outside the scope of resilience planning, because they’re right in the middle of it. 

AI as a part of the solution

There is, however, another side to this story. The same agents that create new risks can also help financial firms meet their resilience obligations. Imagine an assistant that never sleeps, monitoring systems for weak points and flagging problems before they escalate. Or one that auto-drafts incident reports for regulators, saving valuable time during a crisis. Or even one that simulates cyberattacks and tests your digital defences. In this sense, AI agents are not only a challenge under DORA, but also one of the best tools companies will have to comply with it. 

The bottom line

LLMs predict. AI agents act. And in finance, actions carry weight. DORA ensures that as the industry adopts AI and embraces new forms of digital automation, resilience and security aren’t left behind. The future of financial services won’t just be digital; it will be resilient by design.