It wasn’t a dark and stormy night – just a foggy one. The watchkeeper aboard the anchored Stena Immaculate wouldn’t have seen the MV Solong until it was half a mile from piercing its jet-fuel-laden tanks at 16 knots. But AI did. Footage from newly installed artificial intelligence (AI) technology on a nearby vessel captured the allision – footage now seen by investigators and the public, fuelling debate over how and why it happened, and what led to the tragedy, including the loss of a seafarer’s life.
The thermal imaging clearly shows the approach, the impact, and the fireball that followed. What’s less clear is the role this technology will play in the future.
High-definition cameras capable of seeing better than the human eye have existed for decades. So why aren’t they standard on ships? Cost is one factor. Many shipping companies operate on tight budgets, with safety and technology investments competing with other financial pressures. AI-powered vision is still in its early days at sea; however, similar technology is already driving cars on our roads. Now that the public has seen its potential to enhance awareness, aid decision-making, and even take control, that cost barrier may soon crumble.
Beyond cost, there are also regulatory and operational concerns. Should AI-powered surveillance be mandated, and if so, by whom? The International Maritime Organization (IMO) and classification societies have yet to establish clear frameworks for its use, leaving companies to adopt it at their own discretion. Without standardised protocols, we risk inconsistent implementation and uncertainty about accountability.
Its role in investigations is also undeniable. The Solong/Stena Immaculate footage is now evidence, despite coming from a vessel uninvolved in the incident. This raises important questions: Who owns such data? Should AI be an observer, an advisor, or a decision-maker? Could it become a tool for insurance companies, authorities, legal teams, and the public to assign blame, rather than improve safety?
Then there’s the human factor. AI’s presence on ships introduces concerns about job security, responsibility, and trust. Will AI eventually replace mariners, reducing the role of human expertise? Will it be used to critique, blame, or override human judgment? And if AI makes a mistake – if it provides misleading information or fails to detect a critical hazard – who bears responsibility? A ship’s master? The AI developers? The shipping company?
AI’s greatest potential lies in enhancing situational awareness, not replacing the people who hold it. Bridge watchkeepers already use radar, AIS, and ECDIS, but AI could provide an extra set of “eyes”: fusing data from those systems with thermal imaging (FLIR) and LIDAR to increase vigilance, analyse complex scenarios in real time, and offer early warnings of developing dangers. If implemented correctly, AI could reduce human error, support decision-making, and even assist in distress situations.
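To make that idea concrete, here is a minimal sketch of the kind of early-warning logic sensor fusion enables, assuming a toy scenario: an anchored own ship and one inbound contact seen by two sensors. Everything in it – the naive averaging used as “fusion”, the track fields, the alert thresholds – is an illustrative assumption, not any real bridge system or vendor product.

```python
from dataclasses import dataclass
import math

# Hypothetical fused contact: positions in nautical miles on a local grid
# centred on our anchored own ship, velocities in knots. A real system would
# fuse radar, AIS, and thermal detections with proper filtering; this sketch
# simply averages the per-sensor estimates and checks the closest point of
# approach (CPA).

@dataclass
class Track:
    x: float   # nm east of own ship
    y: float   # nm north of own ship
    vx: float  # knots, eastward
    vy: float  # knots, northward

def fuse(estimates: list[Track]) -> Track:
    """Naive fusion: average the state estimates from each sensor."""
    n = len(estimates)
    return Track(
        sum(t.x for t in estimates) / n,
        sum(t.y for t in estimates) / n,
        sum(t.vx for t in estimates) / n,
        sum(t.vy for t in estimates) / n,
    )

def cpa(target: Track) -> tuple[float, float]:
    """Return (CPA distance in nm, time to CPA in hours) for a stationary own ship."""
    speed_sq = target.vx**2 + target.vy**2
    if speed_sq == 0:
        return math.hypot(target.x, target.y), 0.0
    # Range is minimised at t = -(r . v) / |v|^2; clamp to now if already past.
    t = max(-(target.x * target.vx + target.y * target.vy) / speed_sq, 0.0)
    return math.hypot(target.x + target.vx * t, target.y + target.vy * t), t

# Example: radar and a thermal camera each estimate the same inbound contact,
# roughly five miles out and closing at 16 knots.
radar = Track(x=0.1, y=5.0, vx=0.0, vy=-16.0)
thermal = Track(x=0.0, y=4.9, vx=0.1, vy=-15.8)
fused = fuse([radar, thermal])

dist, hours = cpa(fused)
if dist < 0.5 and hours * 60 < 20:  # thresholds are illustrative only
    print(f"WARNING: CPA {dist:.2f} nm in {hours * 60:.0f} min")
```

A real implementation would weight each sensor by its measurement uncertainty (typically with a Kalman filter) rather than averaging, but the alerting principle is the same: fuse the pictures, project ahead, and warn the watchkeeper early – minutes before a half-mile visual sighting in fog ever could.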
One thing is certain: AI at sea is here to stay. But technology alone is not the solution – successful implementation requires careful consideration of its role, impact, and ethical implications. If we’re serious about making AI work for mariners, let’s start by consulting those on the front lines – the seafarers. What challenges do they face? What tools would truly help them? And once we have those answers, maybe then we can ask AI what it thinks it can do to help.
Captain Matt Shirley is a marine pilot and the CEO of Safe Harbours Australia