QFabric: Multi-Task Change Detection Dataset
Aug 18, 2023 10 min
Motivation
Detecting change through multi-image, multi-date remote sensing is essential for developing an understanding of global conditions. Despite recent advances in remote sensing driven by deep learning, accurate multi-image change detection remains an open problem. Several promising methods have recently been proposed to address it, but a lack of publicly available data limits how these methods can be assessed. In particular, little work exists on categorizing the nature and status of change across an observation period.
Related Works
Change Detection Methods
- Mou et al. [1] proposed a deep patch-based architecture wherein bi-temporal patches are processed in parallel by a series of dilated convolutional layers, generating features which are then fed to a recurrent sub-network to learn sequential information. Finally, fully connected layers produce the change prediction map. Although accurate, such patch-based methods are very time-consuming since they must process every pixel of the image individually.
- A number of methods overcome the scarcity of training data through transfer learning, using pre-trained networks to generate pixel descriptors [2, 3].
- Daudt et al. [4] proposed three different fully convolutional Siamese networks based on the U-Net architecture, aiming to accurately detect changed regions. This approach, however, does not model the temporal pattern of the data.
- Papadomanolaki et al. [5] proposed a convolutional LSTM-based approach that takes in five dates to detect changes between the first and last dates' images. This method is resilient to seasonal changes, cloud cover, and shadows.
- Daudt et al. proposed a weakly supervised change detection method using guided anisotropic diffusion and iterative learning.
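The Siamese idea above — a shared encoder applied to both dates, with change predicted from the feature difference — can be sketched minimally in PyTorch. This is an illustrative toy, not any of the cited architectures:

```python
import torch
import torch.nn as nn

class SiameseDiff(nn.Module):
    """Minimal fully convolutional Siamese change detector (illustrative only).

    Both dates pass through the same encoder (shared weights); the absolute
    feature difference is decoded into a single-channel change logit map.
    """
    def __init__(self, in_ch=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, 1, 1)  # 1x1 conv -> per-pixel change logit

    def forward(self, x1, x2):
        f1, f2 = self.encoder(x1), self.encoder(x2)
        return self.head(torch.abs(f1 - f2))

model = SiameseDiff()
a = torch.randn(1, 3, 64, 64)  # date 1
b = torch.randn(1, 3, 64, 64)  # date 2
logits = model(a, b)           # shape (1, 1, 64, 64)
```

In practice the cited works use deeper U-Net-style encoders with skip connections, but the weight sharing and feature-difference fusion are the same.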
Change Detection Datasets
- OSCD: The OSCD dataset [8] is a bi-date change detection dataset comprising 24 Sentinel-2 image pairs (two dates each) from 24 different cities, mostly European. Seasonal changes, cloud cover, and shadows are present in some of the images. In total, there are 1,769 change polygons.
- OSCD-Extension: Papadomanolaki et al. [5] extended the OSCD dataset to five dates by introducing three intermediate dates, showing that with more dates their proposed method is resilient to false positives caused by seasonal changes, cloud cover, and shadows.
- LEVIR-CD: LEVIR-CD [6] is a large-scale remote sensing building change detection dataset. It consists of 637 very-high-resolution Google Earth image patch pairs of size 1024 × 1024. These bi-temporal images, spanning 5 to 14 years, exhibit significant land-use changes, especially building construction.
- HRSCD: HRSCD [7] contains 291 coregistered pairs of RGB aerial images from IGN's BD ORTHO database. Pixel-level change and land-cover annotations are provided, generated by rasterizing Urban Atlas 2006, Urban Atlas 2012, and Urban Atlas 2006–2012 maps at 50 cm resolution. Change/no-change labels are available along with six land-cover classes for pixel-level LULC.
Dataset Creation
QFabric Dataset
Selection: We select 504 different locations from 100 different cities, maximizing the dataset's coverage of geographic and urban types. For each of these 504 locations, we sample satellite images for five different dates such that the durations between consecutive dates are approximately equal. Where possible, different months were chosen so that different seasons are represented across the five dates of each location. Image selection was further restricted to images with no more than 5% cloud cover.
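The even-spacing criterion can be sketched as a simple nearest-available-date heuristic. This is a hypothetical helper, not the actual selection tooling used for the dataset:

```python
from datetime import date, timedelta

def pick_evenly_spaced(dates, k=5):
    """Pick k dates from the available acquisitions so that the gaps between
    consecutive picks are as close to equal as possible.

    Hypothetical helper: compute k evenly spaced target ordinals over the
    full span, then snap each target to the nearest available date.
    """
    dates = sorted(dates)
    first, last = dates[0].toordinal(), dates[-1].toordinal()
    targets = [first + round(i * (last - first) / (k - 1)) for i in range(k)]
    return [min(dates, key=lambda d: abs(d.toordinal() - t)) for t in targets]

# Example: roughly monthly acquisitions over four years.
available = [date(2016, 1, 1) + timedelta(days=30 * i) for i in range(49)]
picked = pick_evenly_spaced(available)  # five dates, near-equal gaps
```

A real pipeline would additionally filter candidates by cloud cover and prefer dates in different months before snapping to the nearest acquisition.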
Processing: For our baseline, we used 3-channel RGB images, but also examined the contributions of the near-infrared (NIR) channel. These images were enhanced with a separate, higher-resolution panchromatic (grayscale) channel to double the original resolution of the multispectral imagery (i.e., "pan-sharpened"). Each location is a tile of size 8192 × 8192 with a resolution of 0.45 m × 0.45 m ground sample distance. The 16-bit pan-sharpened RGB-NIR pixel intensities were truncated at 3000 and then rescaled to an 8-bit range.
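The truncate-and-rescale step maps 16-bit intensities into the 8-bit range. A minimal NumPy sketch, assuming a simple linear rescale after clipping at 3000:

```python
import numpy as np

def to_8bit(img16, cutoff=3000):
    """Truncate 16-bit pan-sharpened intensities at `cutoff`, then rescale
    linearly into the 8-bit range [0, 255]."""
    clipped = np.clip(img16.astype(np.float32), 0, cutoff)
    return (clipped / cutoff * 255.0).round().astype(np.uint8)

tile = np.array([[0, 1500, 3000, 60000]], dtype=np.uint16)
out = to_8bit(tile)  # [[  0 128 255 255]] -- values above the cutoff saturate
```

The cutoff value is from the text; whether the rescale was exactly linear is an assumption of this sketch.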
Annotation: A team of 20 annotators produced the ground truth in two stages. In the first stage, each location is assigned to an annotator, who annotates all change polygons present between that location's first- and last-date images. In the second stage, each annotated polygon is used to crop patches from all five date images. A buffer was added around each region of interest so that annotators could see the surroundings, enabling them to annotate change type, change status, geographic type, and urban type.
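Cropping a buffered window around an annotated polygon can be sketched as follows. This is a hypothetical helper with an illustrative buffer size, not the actual annotation tooling:

```python
import numpy as np

def crop_with_buffer(tile, bbox, buffer_px=64):
    """Crop a window around a change polygon's bounding box, padded by
    `buffer_px` pixels of surrounding context and clamped to tile bounds.

    bbox = (row_min, col_min, row_max, col_max) in pixel coordinates.
    """
    h, w = tile.shape[:2]
    r0 = max(bbox[0] - buffer_px, 0)
    c0 = max(bbox[1] - buffer_px, 0)
    r1 = min(bbox[2] + buffer_px, h)
    c1 = min(bbox[3] + buffer_px, w)
    return tile[r0:r1, c0:c1]

tile = np.zeros((8192, 8192), dtype=np.uint8)
crop = crop_with_buffer(tile, (100, 100, 300, 300))  # 328 x 328 window
```

The same crop window would be applied to all five date images so the annotator sees the polygon's evolution in context.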
Dataset Statistics
The details of the dataset can be found here: https://engine.granular.ai/organizations/granular/projects/631e0974b59aa3b615b0d29a/overview. Our paper on the dataset is available here: https://openaccess.thecvf.com/content/CVPR2021W/EarthVision/papers/Verma_QFabric_Multi-Task_Change_Detection_Dataset_CVPRW_2021_paper.pdf
Experiments:
We split the RGB dataset into training, validation, and test sets in a 70:20:10 ratio by randomly selecting cities. Patches of size 512 × 512 are extracted with a sliding window of stride 512, and six mask files are generated for each city grid: one change-type mask and five change-status masks. We randomly applied augmentations to training images: 90-degree rotations, X and Y flips, zooming of up to 25%, and linear brightness adjustments of up to 50%. We ran experiments for each of the problem types: Change Detection, Change Type Classification, Change Status Tracking, and Neighbourhood Classification.
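The stride-512 sliding-window patching can be sketched in NumPy. A minimal version, shown on a smaller grid than the 8192 × 8192 tiles for brevity:

```python
import numpy as np

def patchify(grid, patch=512, stride=512):
    """Cut a city grid (H, W, C) into patches with a sliding window.

    With stride == patch size (as in the text) the patches tile the grid
    without overlap; an 8192x8192 tile yields 16 x 16 = 256 patches.
    """
    h, w = grid.shape[:2]
    patches = []
    for r in range(0, h - patch + 1, stride):
        for c in range(0, w - patch + 1, stride):
            patches.append(grid[r:r + patch, c:c + patch])
    return np.stack(patches)

grid = np.zeros((2048, 2048, 3), dtype=np.uint8)
patches = patchify(grid)  # (16, 512, 512, 3)
```

The same windowing would be applied to the six mask files so that image patches and their masks stay aligned.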
PyTorch is used to implement all networks. To manage our experiments, we use Polyaxon on a Kubernetes cluster with three computing nodes of eight V100 GPUs each.
Results
Change Detection
Binary change detection results
Change Type Classification
Change type classification results for 6 classes
Change Status Tracking
Change status classification into 9 classes
Neighborhood Classification
Neighbourhood classification result
References
[1]: Lichao Mou, Lorenzo Bruzzone, and Xiao Xiang Zhu. Learning spectral-spatial-temporal features via a recurrent convolutional neural network for change detection in multispectral imagery. CoRR, abs/1803.02642, 2018.
[2]: Arabi Mohammed El Amin, Qingjie Liu, and Yunhong Wang. Convolutional neural network features-based change detection in satellite images. In Xudong Jiang, Guojian Chen, Genci Capi, and Chiharu Ishll, editors, First International Workshop on Pattern Recognition, volume 10011, pages 181–186. International Society for Optics and Photonics, SPIE, 2016.
[3]: Ken Sakurada and Takayuki Okatani. Change detection from a street image pair using cnn features and superpixel segmentation. BMVC, pages 61.1–61.12, 2015.
[4]: Rodrigo Caye Daudt, Bertrand Le Saux, and Alexandre Boulch. Fully convolutional siamese networks for change detection. In ICIP, 2018.
[5]: M. Papadomanolaki, S. Verma, M. Vakalopoulou, S. Gupta, and K. Karantzalos. Detecting urban changes with recurrent neural networks from multitemporal sentinel-2 data. In IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium, pages 214–217, 2019.
[6]: Hao Chen and Zhenwei Shi. A spatial-temporal attention-based method and a new dataset for remote sensing image change detection. Remote Sensing, 12(10), 2020.
[7]: Rodrigo Caye Daudt, Bertrand Le Saux, Alexandre Boulch, and Yann Gousseau. Multitask learning for large-scale semantic change detection. Computer Vision and Image Understanding, 187:102783, 2019.
[8]: Rodrigo Caye Daudt, Bertrand Le Saux, Alexandre Boulch, and Yann Gousseau. Urban change detection for multispectral earth observation using convolutional neural networks. In IEEE International Geoscience and Remote Sensing Symposium, IGARSS. IEEE, 2018.
Change Detection · Geospatial · Dataset curation