Seminars – 2022-2023
- Date: MONDAY, July, 10th – 11:30
- Speaker: Taïs Grippa (Postdoctoral researcher in Remote Sensing and Geomatics, ULB, Belgium)
- Room: D009 – bat. ENSIBS
- Title: Land cover semantic classification by leveraging expert domain heuristics to weakly supervised deep-learning
- Abstract: For nearly 20 years, object-based image analysis (OBIA) has been the state of the art for detailed land cover mapping from very high resolution remote sensing data. At first, the gold standard was rule-based classification, where domain specialists performed feature engineering and designed case-specific rules. More recently, supervised machine learning classifiers such as Random Forest (RF) became the gold standard. Even though they require a labelled training set, the relatively small amount of training data needed by such supervised algorithms still allowed important gains in terms of human labour, map accuracy and transferability to close domains. Now, deep learning has clearly proven its ability to outperform traditional ML approaches, but it requires large labelled training datasets, which are most of the time lacking when dealing with applied projects.
During this seminar, I will present my research project, which aims at leveraging the domain knowledge that remote sensing specialists have developed over the years to build large (weak) training datasets based on heuristics, and at using them to train a DL model that may be able to learn beyond the expert heuristics.
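To make the idea concrete, here is a minimal, purely illustrative sketch of heuristic weak labelling: hand-written expert rules assign noisy labels to unlabelled pixels, producing a large weak training set for a deep model. All thresholds, band values and class names below are hypothetical, not taken from the talk.

```python
# Illustrative weak labelling from expert heuristics (hypothetical rules).

UNLABELLED, VEGETATION, WATER, BUILT_UP = -1, 0, 1, 2

def ndvi(nir, red):
    """Normalised difference vegetation index of a pixel."""
    return (nir - red) / (nir + red + 1e-9)

def weak_label(nir, red, blue):
    """Apply heuristic rules; abstain (UNLABELLED) when no rule fires."""
    v = ndvi(nir, red)
    if v > 0.5:                      # strong vegetation signal
        return VEGETATION
    if v < 0.0 and blue > 0.3:       # low NDVI and bright blue: water
        return WATER
    if 0.0 <= v <= 0.2:              # weak NDVI: built-up surfaces
        return BUILT_UP
    return UNLABELLED

# Reflectance triplets (nir, red, blue) for three toy pixels
pixels = [(0.8, 0.2, 0.1), (0.1, 0.3, 0.5), (0.35, 0.3, 0.2)]
weak_labels = [weak_label(*p) for p in pixels]
```

Frameworks such as Snorkel formalise this pattern as "labelling functions" whose noisy votes are aggregated before training.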
- Room: Maison Glaz, in Gavres, near Lorient
- Title: Séminaire au vert! (Annual off-site team seminar)
- Date: May, 30th – 13h30
- Speaker: Manal Hamzaoui (Univ. Bretagne-Sud, PhD. defense)
- Room: Amphithéâtre 101 – bâtiment DSEG
- Title: From Euclidean to Hyperbolic Space: Rethinking Hierarchical Classification of Remote Sensing Scene Images.
- Abstract: Remote sensing images are complex and typically exhibit a hierarchical structure which is often overlooked, particularly in scene classification methods. These methods tend to treat all non-target classes with equal importance, which can lead to severe mistakes when confusion arises between semantically unrelated classes. By introducing hierarchical information into the learning process, these approaches can provide more coherent predictions. This hierarchical information is often available explicitly via the class hierarchy or implicitly within the data. This thesis therefore focuses on scene classification with hierarchical information. Firstly, we introduce the class hierarchy when training a classifier via a hierarchical loss function. We evaluate its impact in a few-shot setting with hierarchical prototypes defined at each level of the class hierarchy. Experimental results reveal that the class hierarchy is a promising source of information to improve scene classifier performance. Subsequently, we consider the hyperbolic space as an embedding space, as it is better suited to handle data with an underlying hierarchy. We evaluate the approach within two settings: unsupervised and few-shot. The experimental results highlight the potential of the hyperbolic space for scene classification, making it a promising approach for the remote sensing community.
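For background, the hyperbolic embedding space mentioned above is typically realised as the Poincaré ball, whose geodesic distance has a simple closed form. The sketch below shows the standard formula (a minimal illustration, not the thesis implementation).

```python
import math

def poincare_distance(u, v):
    """Geodesic distance in the Poincare ball model of hyperbolic space.

    u, v are sequences with Euclidean norm < 1. Distances blow up near
    the boundary of the ball, mirroring the exponential growth of tree
    hierarchies, which is why hyperbolic embeddings suit hierarchical data.
    """
    nu = sum(x * x for x in u)                    # squared norm of u
    nv = sum(x * x for x in v)                    # squared norm of v
    duv = sum((a - b) ** 2 for a, b in zip(u, v)) # squared Euclidean gap
    return math.acosh(1.0 + 2.0 * duv / ((1.0 - nu) * (1.0 - nv)))
```

Note how the same Euclidean gap costs far more near the boundary than near the origin, which is where deep levels of a hierarchy get embedded.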
- Date: April, 13th – 9h
- Speaker: Iris De Gelis (Univ. Bretagne-Sud, IRISA & LETG & CNES, PhD. defense)
- Room: Amphithéâtre 102 – bâtiment DSEG
- Title: Apprentissage profond pour la détection de changements dans des nuages de points 3D (Deep learning for change detection in 3D point clouds)
- Abstract: Whether caused by geomorphic processes or by human activities, contemporary times are accompanied by ever more rapid and frequent changes in our landscapes. Monitoring these changes requires regular modeling of our environment. Rather than limiting ourselves to a two-dimensional representation, it seems appropriate to use 3D data to embody our world, using point clouds for example. However, the complexity of this data format makes it necessary to create specific methodologies for its analysis, and deep learning appears to be the appropriate solution to process 3D observations of the Earth. This thesis focuses on change detection in 3D point clouds with deep learning. First, a 3D point cloud simulator for urban environments was developed; it allows datasets to be generated randomly, with a realistic evolution of the urban environment. After an experimental comparison of state-of-the-art methods, the thesis proposes Siamese architectures based on kernel point convolutions (KPConv) for supervised change detection, both in urban environments and in geosciences. Finally, in order to reduce tedious data annotation, the thesis explores weakly supervised methods with transfer learning, self-supervision and deep clustering. These methods are promising in this context; nevertheless, particular care must be given to the design of the deep architecture.
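As background on the Siamese idea, the sketch below shows the general pattern in toy form: one shared encoder embeds both acquisition epochs, and change is scored from the difference of the two embeddings. Everything here is illustrative; the real architectures use KPConv point convolutions, not this toy encoder.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 3))   # encoder weights, shared by both branches
w_out = rng.standard_normal(8)    # change-scoring head

def encode(points):
    """Shared branch: embed an (N, 3) point set as one feature vector."""
    return np.tanh(W @ points.mean(axis=0))

def change_score(points_t0, points_t1):
    """Score change in [0, 1] from the difference of the two embeddings."""
    d = encode(points_t1) - encode(points_t0)
    return float(1.0 / (1.0 + np.exp(-w_out @ d)))   # sigmoid

scene_t0 = rng.standard_normal((100, 3))
scene_t1 = scene_t0 + np.array([0.0, 0.0, 5.0])      # e.g. a raised surface
```

Because the weights are shared, identical inputs yield a zero embedding difference, so the score is exactly 0.5 (no evidence of change) before any training.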
- Date: April, 13th – 14h
- Speaker: Christian Heipke (Leibniz Universität Hannover (LUH), Institut für Photogrammetrie und GeoInformation (IPI))
- Room: Amphithéâtre 102 – bâtiment DSEG
- Title: Geometry and Semantics for 3D Reconstruction
- Abstract: The availability of accurate geospatial information is a prerequisite for many applications, including the fields of mobility and transport as well as environmental and resource protection; it typically forms the basis for a comprehensive understanding of an environment of interest. In order to obtain such an understanding, it is generally crucial to consider both the geometry and the semantic meaning of the contained entities. One possibility to capture information on both of these aspects simultaneously is the use of image-based methods which jointly perform geometric 3D reconstruction and semantic segmentation. While first approaches that make use of semantic information to improve dense stereo matching have recently been presented, the information flow is often only unidirectional, meaning that the geometric information is not used to improve the semantic labels. Moreover, the results are commonly limited to a 2.5D representation of depth maps, are rasterised, and are not able to reason about parts of a scene that are occluded in the images. Addressing these limitations, a method based on a learned implicit function is presented in this work, which allows a continuous three-dimensional implicit field, encoding the geometry and semantics of a scene, to be estimated from binocular stereo images. The basic idea is to supplement partial observations of the geometry obtained via image matching with learned semantic priors for the shape of objects, making it possible to reason about the geometry and semantics also for parts of the scene that are partially occluded in the images. For this purpose, we propose a novel implicit function that combines the regression of the distance to the closest surface with a semantic classification. Including free space as a separate class allows us to restrict the regression of distances to the surroundings of actual surfaces.
Thus, similar to unsigned distance fields and in contrast to other implicit representations commonly employed in the literature, such as occupancy fields or signed distance fields, the proposed method does not require a binary segmentation of the 3D object space into the inside and outside of objects. The main advantage of avoiding such a differentiation is that the reference geometry used during training does not need to be watertight, something that is commonly hard to achieve for real-world data, especially for large-scale outdoor scenes. Moreover, a fully convolutional neural network is employed to represent this function, making it possible to reconstruct scenes of arbitrary size and complexity without missing fine-grained details, by applying a sliding-window-based approach at test time. To evaluate the performance of the proposed method and to investigate its strengths and limitations, extensive experiments are carried out on a synthetic dataset and on the real outdoor Hessigheim 3D benchmark dataset. First results demonstrate that the proposed method is comparable to the state of the art on the synthetic dataset, whereas the joint estimation of geometry and semantics is particularly beneficial on the clearly more complex scenes of the Hessigheim 3D benchmark.
- Date: April, 13th – 14h45
- Speaker: Dino Ienco (UMR TETIS – Territoires, Environnement, Télédétection et Information Spatiale)
- Room: Amphithéâtre 102 – bâtiment DSEG
- Title: SENECA: Change detection in optical imagery using Siamese networks with Active-Transfer Learning
- Abstract: Change Detection (CD) aims to distinguish surface changes based on bi-temporal remote sensing images. In recent years, deep neural models have enabled breakthroughs in CD. However, training a deep neural model requires a large volume of labelled training samples that are time-consuming and labour-intensive to acquire. With the aim of learning an accurate CD model with limited labelled data, we propose SENECA: a method based on a CD Siamese network which takes advantage of both Transfer Learning (TL) and Active Learning (AL) to handle the constraint of limited supervision. More precisely, we jointly use AL and TL to adapt a CD model trained on a labelled source domain to a (related) target domain characterised by restricted access to labelled data. We report results from an experimental evaluation involving five pairs of images acquired by Sentinel-2 satellites between 2015 and 2018 at various locations across Asia and the USA. The results show the beneficial effects of the proposed AL and TL strategies on the accuracy of the decisions made by the CD Siamese network and demonstrate the merit of the proposed approach over competing CD baselines.
- Date: April, 13th – 15h30
- Speaker: Chloé Thénoz (Magellium)
- Room: Amphithéâtre 102 – bâtiment DSEG
- Title: Towards 3D Reconstruction from Very High Resolution satellite imagery
- Abstract: Satellite imagery provides information over large areas with a high revisit frequency, at a lower cost than aerial acquisition campaigns. As the development and launch of satellites get cheaper, more and more Earth observation satellites are being launched, providing abundant data across time and space. Building a "digital twin" of the Earth is becoming an objective that could benefit many applications, such as urban growth monitoring or climate change studies. However, 3D reconstruction from very high resolution satellite imagery comes with challenges. In this talk, we will present two projects that Magellium has conducted for CNES (the French space agency) to investigate this subject. The first aims at improving digital surface models (DSMs) produced from multi-view stereo images with deep learning techniques, to make them "LiDAR-like". The second presents a first step towards a 3D reconstruction pipeline from multi-view stereo point clouds.
- Date: April, 12th – 11h
- Speaker: Paul Viallard (Post-doc, INRIA Paris)
- Room: VISIO
- Title: Complexity Measures in Generalization Bounds: New Results and Future Directions
- Abstract: In supervised learning, practitioners may experience overfitting, which occurs when the model performs well on the training set but poorly on the learning task (represented by the test set). Hence, practitioners use regularization techniques and model selection procedures to avoid this phenomenon. Another way to control overfitting is through the generalization gap, which can be interpreted as the difference in performance between the training set and the learning task. However, since this gap is not computable, it is upper-bounded by a PAC (Probably Approximately Correct) generalization bound. Such a bound is mainly composed of two terms: the number of examples in the training set and a measure of the model's complexity. Unfortunately, the bound imposes the complexity measure, such as the VC dimension or the Rademacher complexity in the case of a uniform-convergence bound. In this talk, I will introduce a contribution in which we derive generalization bounds that explicitly include complexity measures chosen by the practitioner. More precisely, I will present how we can leverage the (disintegrated) PAC-Bayesian theory and Gibbs distributions to obtain such a result. Lastly, I will discuss future research directions.
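For context, a textbook uniform-convergence bound of the kind referred to above reads as follows (a standard result, not the speaker's contribution): with probability at least 1 − δ over an i.i.d. training set S of m examples, for every hypothesis h in the class,

```latex
R(h) \;\le\; \widehat{R}_S(h) \;+\; 2\,\mathfrak{R}_m(\mathcal{H}) \;+\; \sqrt{\frac{\ln(1/\delta)}{2m}}
```

where $R(h)$ is the true risk, $\widehat{R}_S(h)$ the empirical risk on $S$, and $\mathfrak{R}_m(\mathcal{H})$ the Rademacher complexity of the hypothesis class: exactly the "imposed" complexity term that the talk's contribution replaces with practitioner-chosen measures.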
- Date: March, 23rd
- Speaker: Thomas Lampert, Computer Science researcher, Chair of Data Science and Artificial Intelligence, Univ. Strasbourg
- Room: D003 (ENSIBS)
- Title: Semi-supervised learning in multi-modal images and time-series
- Abstract: This talk will present two approaches to semi-supervised learning applied to two distinct data types (remote sensing images and time series). The first is focussed on domain adaptation, in which classical deep learning approaches come from the computer vision community. These approaches tend to focus on RGB images, with the possible inclusion of depth, but what happens when the dimensionality/modality of the data differs between domains? This talk will present our work in this direction. Instead of performing domain adaptation, we focus on domain invariance and explore the problems associated with decoupling the feature encoders of two domains so that they can have different characteristics (resolution, number of bands, imaging modality, etc.). A limitation of such 'supervised' approaches is that the classes must be defined in advance, even if only present in one domain. In the second part of the talk, I will therefore present approaches to semi-supervised clustering, in which unlabelled data can be grouped into classes that fulfil a user's expectations without requiring class definitions at all. This is achieved using pairwise constraints, and I will present our advances on integrating them into a new time-series representation. This new approach generalises constrained clustering to the inductive setting, enabling the re-use of constraints on new data, thus overcoming the limitations of existing approaches.
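To illustrate the pairwise-constraint supervision mentioned above: must-link pairs should end up in the same cluster and cannot-link pairs in different ones, and a constrained-clustering objective penalises violations. The helper below is purely illustrative, not the method presented in the talk.

```python
def constraint_violations(labels, must_link, cannot_link):
    """Count violated pairwise constraints on a clustering.

    labels:      cluster assignment per sample (list of ints)
    must_link:   index pairs that should share a cluster
    cannot_link: index pairs that should be separated
    """
    bad = sum(1 for i, j in must_link if labels[i] != labels[j])
    bad += sum(1 for i, j in cannot_link if labels[i] == labels[j])
    return bad
```

A constrained-clustering algorithm minimises its usual objective plus a penalty proportional to this count; the inductive extension discussed in the talk lets the learned representation enforce the same constraints on unseen data.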
- Date: March, 16th
- Speaker: Ana di Toro (PhD. candidate, Unicamp, remote sensing scientist Regrow, São Paulo, Brasil)
- Room: D070 (bat Coppens)
- Title: SAR and Optical data applied to Early season Mapping Integrated Crop-Livestock systems
- Abstract: Regenerative agricultural practices are a suitable path to feeding the global population, since those practices tend to reverse climate change and to increase crop production by restoring soil biodiversity and increasing soil organic matter. Integrated crop–livestock systems (ICLSs) are a key approach, since the same area provides both animal and crop production resources. In Brazil, the expectation is to increase the area of ICLS fields by 5 million hectares in the next five years. In this context, there is a lack of knowledge about ICLS fields and how to identify and monitor them. In this seminar, after giving an overview of ICLS systems, I will show the results achieved using three machine and deep learning algorithms (random forest, long short-term memory, and transformer) to perform early-season mapping of ICLS fields over three time windows. Also, considering the high incidence of cloud cover in Brazil, we tested and compared SAR and optical time series in two different study sites. Finally, I will explain the next steps and remaining challenges.
- Date: February, 2nd
- Speaker: Lynn Miller (PhD. candidate, Monash Univ., Australia)
- Room: D009 (bat ENSIBS)
- Title: Deep learning from SITS for predicting live fuel moisture content
- Abstract: Live fuel moisture content (LFMC) is a key environmental indicator used to monitor conditions of high wildfire risk. Many statistical models have been proposed to predict LFMC from remotely sensed data, with recent studies exploring the use of both deep learning models and satellite image time series (SITS) data. However, almost all these models estimate current LFMC (i.e., they are nowcasting models). Models able to make accurate predictions of LFMC in advance (projection models) would provide fire management authorities with more timely information for assessing and preparing for wildfire risk. In this seminar I will discuss our work designing and evaluating a deep learning model to predict LFMC across the continental United States 3 months in advance. This is the first model that can make wide-scale long-range predictions while achieving an accuracy close to that of nowcasting models. The model consists of a small ensemble of temporal convolutional neural networks built from readily available inputs. I will also talk about some of the challenges of using machine learning to predict LFMC and potential ways of addressing them.
- Date: January, 26th
- Speaker: Matteo Ciotola (PhD. candidate, Università degli Studi di Napoli)
- Room: D001 (bat ENSIBS)
- Title: Pansharpening by Convolutional Neural Networks
- Abstract: Pansharpening is a fusion process which combines a lower-resolution multispectral image with a higher-resolution panchromatic band to provide a high-resolution multispectral image.
There has been a growing interest in deep learning-based pansharpening in recent years. Thus far, research has mainly focused on architectures; nonetheless, model training is an equally important issue. One of the problems is the absence of ground truths, which are necessary for supervised pansharpening models. This is often addressed by training networks in a reduced-resolution domain and using the original data as ground truth, relying on an implicit scale-invariance assumption. However, on full-resolution images, results are often below expectations, suggesting that this assumption is violated. In this presentation, a new training scheme for pansharpening networks operating on full-resolution, real data will be explored. The framework is fully general and can be used with any deep learning-based pansharpening model. Training takes place in the high-resolution domain, relying only on the original data, thus preventing possible mismatches between the simulated, lower-resolution training datasets and the real, full-resolution test datasets. To prove the effectiveness of the proposed framework, different networks and datasets have been used for experimental validation, achieving consistent and high-quality results.
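For background on the fusion itself, the classical Brovey component-substitution method below illustrates what pansharpening computes. It is shown only for context; the talk is about training deep networks, not about this classical method.

```python
import numpy as np

def brovey_pansharpen(ms_up, pan, eps=1e-9):
    """Classical Brovey component-substitution pansharpening.

    ms_up: multispectral image upsampled to the PAN grid, shape (H, W, B).
    pan:   panchromatic band, shape (H, W).
    Each band is rescaled so the per-pixel band average matches the PAN
    value, injecting the PAN's spatial detail into every band.
    """
    intensity = ms_up.mean(axis=2)        # crude intensity component
    ratio = pan / (intensity + eps)       # per-pixel gain
    return ms_up * ratio[..., None]       # apply the gain to all bands

# Tiny synthetic example: a flat 3-band image and a brighter PAN
ms = np.full((2, 2, 3), 2.0)
pan = np.full((2, 2), 4.0)
sharp = brovey_pansharpen(ms, pan)
```

The reduced-resolution training trick criticised in the abstract would train a network to invert a downsampled version of exactly this kind of fusion, which is where the scale-invariance assumption enters.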
- Date: January, 12th
- Speaker: Johan Faouzi (ass. prof. in Computer Sciences, ENSAI, Rennes)
- Room: D001 (bat ENSIBS)
- Title: Time series classification: A review of algorithms and implementations
- Abstract: Many algorithms for time series classification have been published in the literature. From dynamic time warping to shapelets to image-based and convolutional approaches, a wide variety of methods have been investigated. However, as more and more algorithms are published, using and comparing them becomes increasingly cumbersome for users. In this presentation, I will highlight the main approaches that have been investigated to tackle time series classification. Finally, I will briefly present open-source software tools that allow these algorithms to be used in a user-friendly way, including a Python package that I created and still maintain.
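As a concrete example of the oldest family mentioned above, here is the textbook dynamic time warping distance, a minimal O(nm) dynamic programme rather than one of the optimised implementations found in the tools the talk presents.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D series.

    Fills the classic (n+1) x (m+1) cost table: each cell adds the local
    cost to the cheapest of the three allowed predecessor alignments.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

Unlike the Euclidean distance, DTW aligns series of different lengths, so a repeated sample costs nothing: `dtw_distance([1, 2, 3], [1, 1, 2, 3])` is 0.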
- Date: December, 8th
- Speaker: Mathieu Le Lain (INFO dpt, IUT Vannes, univ. Bretagne Sud)
- Room: D001 (Bat. ENSIBS)
- Title: Classification of Halpha lines for Be stars by neural networks
- Abstract: A database of Be star spectra was set up 15 years ago by the Observatoire de Paris-Meudon in order to collect astronomical spectra produced by amateur and professional astronomers. In order to analyze the different states of these stars and potentially predict their next outbursts, this work focuses on the classification of Halpha line shapes, encoded as Gramian angular difference field (GADF) images, with residual neural networks. After discussing the context and the objectives, we will detail the steps of this work, its implementation as an application, and the future steps.
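For background, a Gramian angular difference field turns a 1-D signal into an image that a convolutional network can classify. Below is a minimal sketch of the standard encoding, not the exact pipeline used in the talk.

```python
import math

def gadf(series):
    """Gramian angular difference field of a 1-D series.

    The series is rescaled to [-1, 1], each value is mapped to an angle
    phi = arccos(x), and entry (i, j) of the image is sin(phi_i - phi_j).
    Requires a non-constant series (so the rescaling is well defined).
    """
    lo, hi = min(series), max(series)
    xs = [2.0 * (v - lo) / (hi - lo) - 1.0 for v in series]  # to [-1, 1]
    phi = [math.acos(x) for x in xs]                          # polar angles
    return [[math.sin(pi_ - pj_) for pj_ in phi] for pi_ in phi]
```

The resulting matrix has a zero diagonal and is antisymmetric; it preserves the temporal ordering of the series in its rows and columns, which is what lets 2-D CNNs such as residual networks exploit it.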
- Date: November, 3rd
- Speaker: Martina Pastorino (PhD student, INRIA Nice)
- Room: Amphi Y. Coppens
- Title: Stochastic models and deep-learning methods for remote sensing image analysis
- Abstract: Recent advances in deep learning (DL), especially deep convolutional neural networks, have made it possible to obtain very significant results in the field of remote sensing image analysis. However, as for other methods, map accuracy depends on the quantity and quality of the ground truth (GT) used for training. Having densely annotated data (i.e., a detailed, pixel-level GT) allows effective models to be obtained, but requires a high annotation effort. GTs related to real applications, such as remote sensing, are almost never exhaustive: they are spatially sparse and typically do not represent the spatial boundaries between the classes. Models trained with sparse maps usually produce results with poor geometric fidelity. This significantly affects classification accuracy, and it is a major challenge in the development of deep neural networks for remote sensing. At the same time, probabilistic graphical models (PGMs) have sparked even more interest in the past few years, because of the ever-growing availability of very high resolution (VHR) data and the correspondingly increasing need for structured predictions. The objective of this research work is to combine ideas from these two approaches (deep learning and stochastic models) to develop novel methods for remote sensing image classification. The logic is to take advantage of the spatial modeling capabilities of hierarchical PGMs to mitigate the impact of incomplete GTs and obtain accurate classification results. In particular, the study focuses on the possibility of exploiting the intrinsically multiscale nature of fully convolutional networks (FCNs) in order to integrate them with hierarchical Markov models.