
 

The Maritime Edge

Issue 1 - May 2026

 

 

By Steve Hemmings / 14 May 2026

Why Maritime AI Pilots Keep Stalling — and What It Will Take to Scale Them

Across the maritime sector, AI experimentation is no longer the exception. Predictive maintenance models, route optimisation tools, crew planning systems and compliance automation are all being trialled at pace. Boardrooms are engaged. Budgets are being approved. Proofs of concept are delivering encouraging results.

And yet, 81% of maritime organisations running AI pilots never move them into operational deployment*. This isn't a failure of ambition, or even of technology. It's a failure of readiness.

Insight research shows that while many maritime organisations are now experimenting with AI, 55% remain at only moderate levels of deployment, and fewer than four in ten have embedded AI into core operational workflows*. The industry is expending time, money and goodwill on experiments that never quite cross the line into production.

The sector has already moved beyond the question of whether AI can deliver value. That debate is largely settled. When designed and deployed properly, AI can materially improve fuel efficiency, maintenance planning, regulatory compliance and operational decision making. DNV's recent launch of RuleAgent, an AI tool that navigates 30,000 pages of maritime regulations in natural language, shows just how practical these applications have become.

While the value is real, the challenge is operationalising it reliably across fleets and workflows. The harder question is why so many initiatives stall. The answer, in most cases, isn't the model. It's everything around it.

The confidence gap

In maritime, trust matters more than novelty. Decisions influenced by AI affect safety, compliance and commercial outcomes. That places a higher bar on confidence than in many other industries.

While leadership teams increasingly recognise AI's potential, only 21% feel very comfortable with AI-driven decisions, and just 11% are extremely confident using AI outputs in core operational processes*. In safety-critical environments, that hesitation is enough to prevent pilots from scaling, regardless of how strong the technical results appear on paper.

Trust doesn't come from accuracy alone. It comes from explainability, accountability and integration. People need to understand why a recommendation has been made, where it fits in the workflow, who owns the decision, and how it is recorded and governed. Without those foundations, AI remains something that sits alongside operations rather than being part of them.

Every stalled pilot reinforces scepticism among the people whose buy-in is essential. That scepticism is expensive - it delays value, increases friction, and makes each subsequent initiative harder than the last.

An architecture problem, not a technology problem

Most AI failures in maritime aren't really AI failures at all. They're architecture failures.

We're often trying to deploy intelligent systems on top of fragmented data estates, inconsistent connectivity and infrastructure that was never designed to support machine learning at scale. Operational data remains siloed across vessels, ports and shore-based systems. Formats vary. Quality is inconsistent. Access is constrained by legacy platforms and governance models that haven't evolved.

This aligns with broader findings of a cybersecurity strategy crisis in maritime, where a 42% gap in senior-level strategic skills and a 39% gap in senior design skills mean security and architecture are often an afterthought rather than a foundation**.

In that context, it's unrealistic to expect AI to perform reliably, let alone earn trust.

The challenge is amplified at sea. Connectivity is variable. Sovereignty requirements are strict. It's no accident that 65% of maritime leaders now rank hybrid AI architecture as a top strategic priority, with 69% identifying edge AI as critical to future operations*. Reliable AI in maritime has to operate across ship, port and cloud, and continue to function when connections degrade or disappear.

Three barriers that keep pilots from scaling

Across the programmes we support, three structural barriers come up again and again.

First, data quality and accessibility. AI models are only as effective as the data they consume. In maritime, too much critical data is still locked away, poorly structured, or unavailable at the cadence modern AI requires.

Second, explainability and trust. Black-box models struggle in environments where masters, engineers and shore-based teams retain legal and safety accountability. If a system can't explain its reasoning in operational terms, it won't be relied on.

Third, integration complexity. Even when a model performs well in isolation, embedding it into existing ship-to-shore workflows, decision rights and operating rhythms is difficult. That complexity isn't just technical. It's organisational and procedural, often spanning multiple vendors and legacy platforms.

Solving these problems requires more than better algorithms. It requires orchestration.

Starting with workflows, not models

At Insight, we don't start AI programmes with the model. We start with the architecture and the workflows that make AI usable.

That means building consistent data foundations that connect vessel telemetry, maintenance records, crew systems and operational logs into a governed, accessible environment. It means designing connectivity that supports inference where it's needed without introducing unnecessary risk or latency. And it means embedding AI outputs directly into the tools teams already use, so recommendations translate into actions, approvals and evidence rather than separate dashboards.

Just as importantly, it means treating AI deployment as a change and design challenge. In practice, that comes down to workflow redesign - agreeing where recommendations surface, defining who owns the decision, being clear about what can be automated versus what must remain advisory, and ensuring actions are logged for handover, audit and regulatory assurance.

The most successful implementations we see are those co-designed with the people who will rely on them. Bridge officers need to understand why a route recommendation has been made. Engineers need to validate predictive alerts against operational reality. Compliance teams need confidence that automated systems are operating within approved parameters.

AI that ignores human expertise doesn't scale. AI that augments it does.

Sequencing for confidence, not just speed

One practical way to reduce risk is to be intentional about sequencing. Low-risk, productivity-focused use cases - documentation automation, planning support, knowledge search, streamlined reporting - allow organisations to harden data pipelines, prove integration patterns and build trust before moving into safety- and mission-critical domains like real-time vessel monitoring and autonomous operational decisioning.

For CIOs planning fleet-wide AI roadmaps, sequencing is only half the story. The other half is ensuring early wins can be operationalised consistently. Before commissioning another pilot, it's worth asking: do we have the architectural and workflow foundations in place to support production AI?

That includes data governance frameworks, defined decision rights, integration into core systems, and commercial models that reflect the realities of maritime operations rather than forcing rigid, shore-centric licensing structures onto a voyage-driven business.

From pilots to capability

The maritime sector is at a decision point. We can continue to run pilots that demonstrate potential but deliver limited operational impact. Or we can do the less glamorous work of building the foundations that make AI stick. The technology is ready. The value is proven. What's required now is the discipline to build properly, integrate deeply, and govern decisions with the rigour the sector demands.

That isn't a vendor conversation. It's a transformation conversation. And it's one the industry is ready to have.

 

About the author

Steve Hemmings is Client CTO at Insight, where he leads complex technology transformations for enterprise and public sector clients across the UK and EMEA. A tenured enterprise architect with deep experience spanning artificial intelligence, hybrid infrastructure, cloud platforms and operational technology, Steve works at the intersection of strategy and delivery — helping organisations move beyond proof of concept into scalable, governed solutions. His current focus includes AI architecture, edge computing, and the operational challenges facing digitally maturing industries such as maritime.

 

Continue the conversation

Our latest research paper explores the state of AI adoption across the maritime sector — where the industry stands today, what's holding it back, and what the path to operational maturity really looks like.

Download the report


Recent news

The News | The Quick Take | Theme | Action
Kaiko Systems | CEO Fabian Fussek declares the "pilot era is over," calling for disciplined implementation and daily workflow integration. | 81% of pilots never scale | Read More
Lloyd's Register / Thetius Report | Maritime AI hits a $4.13bn market value, but growth "hinges on investment in people and governance." | Trust & Explainability | View Report
Singapore Port Trials | Successful ship-to-shore interface trials using the NYK Elder Leader highlight the need for hybrid infrastructure. | Workflow Integration | Case Study
Posidonia 2026 Survey | Industry leaders describe "structured experimentation" and AI that "augments rather than replaces" human expertise. | Trust & Human Oversight | Survey Data
Georgia Tech Research | Findings reveal a "boilerplate" approach to cybersecurity, highlighting critical gaps in IT/OT training. | Skills & Training Gaps | Research
DNV RuleAgent | Formal launch of the AI tool navigating 30,000 pages of regulations — a benchmark for practical AI deployment. | Practical AI Applications | Launch Tool