SonarScout

AI explained

How SonarScout uses AI — without replacing the operator

SonarScout uses object detection to highlight candidates in side-scan sonar video. The trained operator remains responsible for interpretation, confirmation, and documentation.

BBox · Class · Confidence · Offline-capable

Key idea

AI reduces cognitive load by narrowing the search space. It does not make the final call.

AI

Highlights candidates with confidence.

Operator

Validates findings and owns the report.

Object detection in plain language

Every detection is a simple, transparent unit: a box, a class, and a confidence score.

Example

A detection overlay with bounding box, class, and confidence.

Bounding box

A rectangle drawn on the sonar frame showing where the model believes the target is.

Class

The type of target: currently Body and Vehicle.

Confidence

A score from 0 to 1 (often shown as a percentage) indicating how strongly the model believes the detection is correct.

SonarScout is designed so operators can see the AI output clearly and decide how to act on it.
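To make the box/class/confidence unit concrete, here is a minimal sketch of how such a detection could be represented and filtered for operator review. The `Detection` type, field layout, and `candidates_for_review` helper are illustrative assumptions, not SonarScout's actual API.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One detection unit as described above: a box, a class, and a confidence."""
    box: tuple[float, float, float, float]  # (x, y, width, height) in frame pixels -- assumed layout
    label: str                              # class of target, e.g. "Body" or "Vehicle"
    confidence: float                       # model's score from 0.0 to 1.0

def candidates_for_review(detections: list[Detection], threshold: float = 0.5) -> list[Detection]:
    """Keep detections at or above a confidence threshold, highest first.

    The operator still validates every kept candidate; this only narrows
    the search space, as the section above describes.
    """
    kept = [d for d in detections if d.confidence >= threshold]
    return sorted(kept, key=lambda d: d.confidence, reverse=True)
```

For example, with one strong candidate (`0.82`) and one weak one (`0.31`), the default threshold of `0.5` surfaces only the strong detection for the operator to confirm or reject.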

Improvement over time

Why more training data makes the model better

Side-scan sonar varies across water conditions, bottom types, deployment speed, and manufacturers. More diverse training data helps generalize across these differences.

Better generalization

More conditions → fewer misses and fewer false positives in new environments.

Faster adaptation

New manufacturer workflows become easier to support when the training set includes varied sonar signatures.

Model iteration

We can iterate on models without changing your workflow: the review UI stays the same while detection quality improves.
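"Fewer misses and fewer false positives" are usually tracked as recall and precision. The sketch below shows how those two numbers are computed; the function name and counts are illustrative, not figures from SonarScout.

```python
def precision_recall(true_positives: int, false_positives: int,
                     false_negatives: int) -> tuple[float, float]:
    """Compute the two metrics behind "fewer misses and fewer false positives".

    precision: fraction of highlighted candidates that were real targets
               (higher precision = fewer false positives)
    recall:    fraction of real targets the model actually found
               (higher recall = fewer misses; a miss is a false negative)
    """
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall
```

For instance, if a model highlights 10 candidates of which 8 are real targets, and 2 real targets go unflagged, precision and recall are both 0.8. More diverse training data aims to push both numbers up in unfamiliar environments.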

Multi-manufacturer adaptation, expanding continuously

We support major workflows including Garmin, Humminbird, Lowrance, Raymarine, and Simrad, and we keep extending compatibility. If you use another sonar device or manufacturer, tell us: our AI domain-translation pipeline helps us adapt models across manufacturer domains faster.

Privacy and data sharing

SonarScout is built for sensitive operations: offline processing by default, and no automatic uploads.

Default mode: on-device

AI inference and report generation run locally. Your mission data stays on your workstation.

Partner data contribution (optional)

Validation partners may share anonymized sonar video under agreement for model improvement. No faces, no personal data, and no sensitive mission identifiers.

Want to discuss privacy requirements or data handling? Contact us.