1st Workshop on Maritime Computer Vision (MaCVi) 2023: Challenge Results

Nov 09, 2023

Abstract

The 1st Workshop on Maritime Computer Vision (MaCVi) 2023 focused on maritime computer vision for Unmanned Aerial Vehicles (UAV) and Unmanned Surface Vehicles (USV), and organized several subchallenges in this domain: (i) UAV-based Maritime Object Detection, (ii) UAV-based Maritime Object Tracking, (iii) USV-based Maritime Obstacle Segmentation and (iv) USV-based Maritime Obstacle Detection. The subchallenges were based on the SeaDronesSee and MODS benchmarks. This report summarizes the main findings of the individual subchallenges and introduces a new benchmark, called SeaDronesSee Object Detection v2, which extends the previous benchmark by including more classes and footage. We provide statistical and qualitative analyses, and assess trends in the best-performing methodologies of over 130 submissions. The methods are summarized in the appendix. The datasets, evaluation code and the leaderboard are publicly available (https://seadronessee.cs.uni-tuebingen.de/macvi).
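For readers who want to score a detection submission locally, the sketch below shows how such an evaluation is commonly run against COCO-style annotations with pycocotools. This is not the official MaCVi evaluation code; the assumption that the annotations are provided in COCO format, as well as the file names, are illustrative only.

```python
# Minimal sketch of COCO-style detection evaluation, as commonly used for
# object-detection benchmarks. NOT the official MaCVi evaluation code;
# file names below are placeholders.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

gt = COCO("val_annotations.json")       # ground truth (COCO format assumed)
dt = gt.loadRes("my_detections.json")   # [{"image_id", "category_id", "bbox", "score"}, ...]

evaluator = COCOeval(gt, dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()                   # prints AP, AP50, AP75, etc.
```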

WACV Workshop 2023

Contributed by

Kiefer et al.

Related Research

Synthetix: Pipeline for Synthetic Geospatial Data Generation

Remote sensing is crucial in various domains, such as agriculture, urban planning, environmental monitoring, and disaster management. However, acquiring real-world remote sensing data can be challenging due to cost, logistical constraints, and privacy concerns. To overcome these limitations, synthetic data has emerged as a promising approach. We present an overview of the use of synthetic data for remote sensing applications. In this regard, we address three conditions that can drastically affect the optimization of computer vision algorithms: lighting conditions, fidelity of the 3D model, and resolution of the synthetic imagery data. We propose a highly configurable pipeline called Synthetix as part of our GeoEngine platform for synthetic data generation. Synthetix allows us to quickly create large amounts of aerial and satellite imagery under varying conditions, given a few samples of 3D objects in real-world scenes. We demonstrate our pipeline’s effectiveness by generating 3D scenes from 35 real-world locations and utilizing these scenes to generate different versions of datasets that address the three conditions above. We conduct an in-depth ablation study and show that considering different environments and weather conditions increases the reliability and robustness of deep learning networks.
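The Synthetix/GeoEngine API itself is not described in the abstract, so the following is only a hypothetical sketch of the kind of configuration sweep such a pipeline implies: the three conditions named above (lighting, 3D-model fidelity, imagery resolution) varied across the 35 reconstructed locations. All class, field, and value names are illustrative assumptions.

```python
# Hypothetical sketch of a render-configuration sweep for synthetic aerial imagery.
# Names and values are assumptions, not the actual Synthetix/GeoEngine API.
from dataclasses import dataclass
from itertools import product

@dataclass
class RenderConfig:
    location_id: int              # one of the 35 reconstructed real-world scenes
    sun_elevation_deg: float      # lighting condition
    mesh_fidelity: str            # fidelity of the 3D model: "low" | "medium" | "high"
    ground_sample_dist_cm: float  # resolution of the synthetic imagery

# Sweep the three conditions discussed in the abstract to build dataset variants.
configs = [
    RenderConfig(loc, sun, fidelity, gsd)
    for loc, sun, fidelity, gsd in product(
        range(35), (15.0, 45.0, 75.0), ("low", "medium", "high"), (5.0, 15.0, 30.0)
    )
]
print(len(configs), "render configurations")  # 35 * 3 * 3 * 3 = 945
```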

02 January 2024

Post Wildfire Burnt-up Detection using Siamese UNet

In this article, we present an approach for detecting burnt areas caused by wildfires in Sentinel-2 images by leveraging the power of Siamese neural networks. By employing a Siamese network, we are able to efficiently encode the feature extraction process for pairs of images. This is achieved by using two branches within the Siamese network, which capture and combine information at different resolutions to make predictions; the weights are shared between the two branches. This design allows us to effectively analyze the changes between two remote sensing images, enabling precise identification of areas impacted by forest wildfires in the state of California as part of the ChaBuD challenge, thereby assisting local authorities in monitoring the impacted regions and facilitating the restoration process. We experimented with various model architectures trained on the ChaBuD dataset and carefully evaluated their performance. Through rigorous testing and analysis, we achieved promising results, ultimately obtaining a final private score (IoU) of 0.7495 on the hidden test dataset. The code is available at https://github.com/kavyagupta/chabud. We also deploy the final model as a point solution for anyone to use at https://firemap.io.
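The following is a minimal, simplified sketch in PyTorch of the shared-weight Siamese design described above. It is not the authors' exact architecture: the multi-resolution UNet branches are reduced to a single-scale encoder, and combining the branch features by absolute difference is an assumption; see the linked repository for the actual implementation.

```python
# Simplified sketch of a Siamese change-detection network with a shared encoder.
# Illustrative only; not the exact architecture used in the paper.
import torch
import torch.nn as nn

class SiameseChangeNet(nn.Module):
    def __init__(self, in_ch=12, base=32):  # 12 input bands assumed for Sentinel-2 tiles
        super().__init__()
        # Shared encoder: the same weights process both the pre- and post-fire image.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Head predicts a per-pixel burnt/not-burnt logit from the combined features.
        self.head = nn.Sequential(
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, 1, 1),
        )

    def forward(self, pre, post):
        f_pre = self.encoder(pre)    # branch 1
        f_post = self.encoder(post)  # branch 2 (same weights)
        return self.head(torch.abs(f_post - f_pre))  # change cue: feature difference

# Example: a pair of 512x512 tiles with 12 bands each.
model = SiameseChangeNet()
pre, post = torch.randn(1, 12, 512, 512), torch.randn(1, 12, 512, 512)
mask_logits = model(pre, post)       # shape: (1, 1, 512, 512)
```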

09 November 2023