The overall objective of the team is the processing of complex images for environmental purposes. In this context, the available data form a massive collection of multidimensional (multi- or hyperspectral) noisy observations with high spatio-temporal variability. While understanding these data remains very challenging, environmental systems usually come with additional knowledge or models that are worth exploiting for environment observation. Finally, whatever the task involved (e.g., analysis, filtering, classification, clustering, mining, modelling), specific attention has to be paid to the way results are presented to the end-user, helping them benefit from their added value.
- Processing complex data.
Environment observation requires performing various data processing tasks: analysis to describe the data with relevant features, filtering and mining to highlight significant data, clustering and classification to map data to predefined or unknown classes of interest, and modelling to understand the underlying phenomena. In this context, processing complex data brings various challenges, which the team will address from both theoretical and computational points of view. High-dimensional images, massive datasets, noisy observations, and fine temporal and spatial scales together motivate the design of new dedicated methods that can handle this complexity. The underlying techniques rely on scale-space models (e.g., hierarchical tree-based image representations) and manifold learning on the theoretical side, and on massive computing using GPU clusters and data-intensive systems (based on Hadoop, for instance) at the operational level.
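As an illustration of a hierarchical tree-based image representation, the following is a minimal sketch of a quadtree decomposition, a simpler relative of the scale-space tree models mentioned above. The variance threshold and the toy image are assumptions chosen for illustration only.

```python
import numpy as np

def quadtree(img, thresh, x=0, y=0, size=None):
    """Recursively split a square image into a tree: a block whose
    pixel variance exceeds `thresh` is divided into four children."""
    if size is None:
        size = img.shape[0]
    block = img[y:y+size, x:x+size]
    if size == 1 or block.var() <= thresh:
        # Leaf: a homogeneous region summarized by its mean value.
        return {"x": x, "y": y, "size": size, "mean": float(block.mean())}
    half = size // 2
    return {"x": x, "y": y, "size": size,
            "children": [quadtree(img, thresh, x+dx, y+dy, half)
                         for dy in (0, half) for dx in (0, half)]}

def leaves(node):
    """Collect the leaf regions, i.e. the finest homogeneous scale."""
    if "children" not in node:
        return [node]
    return [l for c in node["children"] for l in leaves(c)]

# Toy 4x4 image: uniform left half, noisy right half (illustrative).
img = np.array([[0, 0, 5, 9],
                [0, 0, 7, 1],
                [0, 0, 3, 8],
                [0, 0, 6, 2]], dtype=float)
tree = quadtree(img, thresh=0.5)
print(len(leaves(tree)))
```

The homogeneous left half is summarized by two coarse leaves, while the noisy right half is refined down to single pixels, which is the scale-adaptive behaviour that makes such representations attractive for high-resolution images.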
- Incorporating prior knowledge and models.
To face the intrinsic complexity of images, environment observation can most often benefit from supplementary information, and incorporating such information when processing environmental data is thus highly desirable. Among the available information, physical models developed by domain researchers describe the observed phenomena. However, these models are rarely compatible with existing data analysis tools (e.g., for time series of remote sensing images, the physical models are often non-linear and thus do not fit the classic assumptions in computer vision, such as stable structures with linear evolution over time). It is therefore of prime importance to design alternative tools (e.g., assimilation methods) able to accurately combine recent physical models (e.g., surface models describing interactions between biophysical variables and meteorological parameters) with variables derived from images. Besides such models, additional information such as local probes, field investigations, or expert knowledge is often available in specific study areas. This leads to information of various natures (symbolic, descriptive, expert rules) associated with heterogeneous degrees of confidence. We will therefore investigate the design of specific fusion rules as well as reasoning techniques to address this issue.
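To give a flavour of how assimilation blends a physical model with image-derived variables, here is a minimal scalar Kalman-filter sketch. The logistic growth model standing in for a surface model, and all noise levels, are illustrative assumptions, not the team's actual models.

```python
import numpy as np

def assimilate(x_prev, P_prev, obs, model, Q=0.01, R=0.04):
    """One assimilation cycle of a scalar Kalman filter:
    forecast with the physical model, then correct with the observation."""
    # Forecast step: propagate the state and inflate its uncertainty.
    x_f = model(x_prev)
    P_f = P_prev + Q
    # Analysis step: weight forecast vs. observation by their variances.
    K = P_f / (P_f + R)            # Kalman gain in [0, 1]
    x_a = x_f + K * (obs - x_f)
    P_a = (1 - K) * P_f
    return x_a, P_a

# Hypothetical logistic growth of a vegetation index (assumption).
growth = lambda v: v + 0.2 * v * (1.0 - v)

x, P = 0.1, 0.5
truth = 0.1
rng = np.random.default_rng(0)
for _ in range(10):
    truth = growth(truth)
    obs = truth + rng.normal(0, 0.2)   # noisy image-derived measurement
    x, P = assimilate(x, P, obs, growth)
print(x, P)
```

The gain K automatically arbitrates between the model forecast and the noisy observation according to their respective uncertainties, which is the core mechanism that full assimilation schemes generalize to high-dimensional states.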
- Providing significant results to the end-user.
Since most of the team's methodological developments will be aimed at non-specialists in computer science (computer vision and image processing, machine learning and data mining), a particular focus will be placed on their understanding by the end-user. The team first expects to specialize some methodologies to achieve this goal (e.g., explaining unobserved data as a combination of known data, as can be done with matrix factorization techniques), before considering visualization methods. This last point belongs to the field of visual analytics and can be considered a crucial step in helping decision makers rapidly exploit scientific advances.
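The matrix factorization idea mentioned above can be sketched with classic non-negative matrix factorization (multiplicative updates of Lee and Seung): observations are explained as non-negative combinations of a few basis patterns. The toy source patterns and mixing weights below are assumptions for illustration.

```python
import numpy as np

def nmf(V, k, iters=500, eps=1e-9, seed=0):
    """Factor V (non-negative) into W @ H with k components, using the
    classic Lee-Seung multiplicative update rules."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k))
    H = rng.random((k, m))
    for _ in range(iters):
        # Multiplicative updates keep W and H non-negative throughout.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Hypothetical observations: mixtures of two non-negative patterns.
parts = np.array([[1.0, 0.0, 2.0, 0.0],
                  [0.0, 3.0, 0.0, 1.0]])
mix = np.array([[1.0, 0.5],
                [0.2, 1.0],
                [0.7, 0.7]])
V = mix @ parts

W, H = nmf(V, k=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(err)
```

Each row of V is recovered as a non-negative weighting (row of W) of the learned patterns (rows of H), so the decomposition itself is directly interpretable by a non-specialist end-user, which is precisely the appeal of such techniques here.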
A presentation of the team's activities (2014) is available as a low-resolution / high-resolution poster (in French).
Summary slide (French and English versions):