Projects funded in 2022-2023

  • High-speed FRET imaging of living cells: dealing with extremely low signal-to-noise ratio, low frame-rate, and raster-scan deformations (PIs – Prof. Foi and Dr. Ihalainen)
    Förster (or Fluorescence) Resonance Energy Transfer (FRET) is an imaging technique that is unique in generating fluorescence signals sensitive to molecular conformation, association, and separation in the 1–10 nm range. The project aims to substantially improve the FRET imaging of mobile, living cells, where there can be deformation during image acquisition. Specifically, the project will develop a computational imaging pipeline for FRET that jointly compensates for the low signal-to-noise ratio (SNR) inherent to FRET, corrects for the image deformations due to scanning, and reconstructs a spatio-temporally accurate output with a higher frame-rate.
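    The distance sensitivity that makes FRET useful in the 1–10 nm range follows from the Förster relation E = 1/(1 + (r/R0)^6). A minimal sketch of this relation (the Förster radius R0 = 5 nm is an illustrative value, not a project parameter):

```python
import numpy as np

def fret_efficiency(r_nm, r0_nm=5.0):
    """FRET efficiency versus donor-acceptor distance r:
    E = 1 / (1 + (r / R0)^6), where R0 is the Förster radius
    (the distance at which transfer efficiency is 50%).
    R0 = 5 nm is a typical illustrative value, not from the project."""
    r = np.asarray(r_nm, dtype=float)
    return 1.0 / (1.0 + (r / r0_nm) ** 6)

# Efficiency falls off steeply around R0, which is why FRET reports
# on conformational changes in the 1-10 nm range.
print(fret_efficiency([2.0, 5.0, 8.0]))
```

    At r = R0 the efficiency is exactly 50%, and the sixth-power dependence makes E drop from near 1 to near 0 over only a few nanometres.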
  • Bayesian Machine Learning for Model Parameter Selection in Inverse Problems: Application in Personalized Brain Activity Imaging (PI – Dr. Koulouri)
    The estimation and visualization of the electrically active parts of the brain from electroencephalography (EEG) recordings is referred to as EEG source imaging. The imaging of focal sources from EEG recordings is very sensitive to the electrical modelling of the patient's head tissues and to the modelled prior information. The conductivities of the head tissues vary significantly from patient to patient. Currently available EEG devices do not take this into account; instead, it is common to use fixed literature-based conductivity parameters and simplified prior information. The aim of this project is to employ state-of-the-art Bayesian machine learning methods to (a) compensate for source imaging errors due to the unknown tissue conductivities and, simultaneously, compute at least a low-order estimate of the patient-specific skull conductivity value, and (b) design appropriate hierarchical sparsity priors in order to recover focal sources both on the brain surface and deep in the brain.
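    The recovery of focal sources from a linear EEG forward model can be pictured with an iteratively reweighted MAP estimate, a crude stand-in for the hierarchical sparsity priors the project targets (all dimensions and values below are illustrative, not from the project):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward model y = L x + noise; L stands in for an EEG
# lead-field matrix (hypothetical sizes, not from the project).
n_sensors, n_sources = 32, 200
L = rng.standard_normal((n_sensors, n_sources))
x_true = np.zeros(n_sources)
x_true[[40, 120]] = [1.0, -0.8]                  # two focal sources
y = L @ x_true + 0.01 * rng.standard_normal(n_sensors)

def map_estimate(L, y, w, lam=1e-2):
    """MAP estimate under a zero-mean Gaussian prior with per-source
    variances w: x = W L^T (L W L^T + lam I)^{-1} y."""
    G = (L * w) @ L.T + lam * np.eye(L.shape[0])
    return w * (L.T @ np.linalg.solve(G, y))

# Re-estimating the prior variances from the current solution drives
# most sources to zero, mimicking a sparsity-promoting hierarchy.
w = np.ones(n_sources)
for _ in range(10):
    x = map_estimate(L, y, w)
    w = x**2 + 1e-8
```

    Each pass shrinks sources with small estimated amplitude, so the solution concentrates on a few focal locations rather than a diffuse pattern.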
  • Smart material-based diffractive optical elements for hardware in the loop hybrid diffractive optics design for achromatic extended-depth-of-field imaging (PIs – Prof. Egiazarian and Prof. Priimägi)
    The project will focus on advanced hardware for creating flat optics with smart materials. Flat lenses are an alternative to conventional refractive optics, overcoming their limitations (e.g. chromatic aberration, low spectral resolution, and shallow depth-of-field) and enabling novel functionalities made possible by the nearly arbitrary modulation of light these lenses offer. Flat lenses are compact, light, inexpensive to fabricate, and have the potential to reduce power consumption. The hardware will include a digital micromirror device and an azopolymer film, which offer unique smart features. For high-quality imaging, the hardware will be inserted into a programming loop, where in each iteration the flat optics will be adjusted for better performance. The adjustment will be driven by image datasets and neural network training.
  • HIDDEN – Hyperspectral Imaging with metaoptics and Deep Neural network (PIs – Prof. Gotchev and Prof. Caglayan)
    Hyperspectral imaging critically serves fields such as remote sensing, biomedicine, and agriculture. Typically, each hyperspectral imaging application requires a specific spectral range and number of spectral bands, and corresponding spectral (color) filters. Current systems are either limited in spectral resolution or bulky, and a cost-effective manufacturing approach remains elusive. The project HIDDEN will develop a novel snapshot hyperspectral imaging system using optimized diffractive optical elements and color filters along with new computational imaging methods. In particular, the researchers will seek jointly designed and optimized novel optical elements and neural network architectures, which together will advance the reconstruction quality of hyperspectral images.
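    The underlying sensing problem can be pictured with a toy filter-based model: each color filter integrates the scene spectrum into one measurement, and reconstruction inverts that mapping. A hedged sketch using a plain ridge-regression reconstructor in place of the jointly optimized optics and networks the project will design (all sizes and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model of filter-based spectral sensing: each of m filters
# integrates the scene spectrum s (n bands) into one value, y = F s.
# Sizes are illustrative, not from HIDDEN.
n_bands, n_filters = 31, 8
F = rng.random((n_filters, n_bands))            # filter transmission curves
s = np.exp(-0.5 * ((np.arange(n_bands) - 12) / 4.0) ** 2)  # smooth spectrum
y = F @ s                                       # snapshot measurement

# Regularized least-squares reconstruction of the full spectrum; the
# project would instead learn F (the optics) and a neural-network
# reconstructor jointly, end to end.
lam = 1e-2
s_hat = np.linalg.solve(F.T @ F + lam * np.eye(n_bands), F.T @ y)
print(float(np.linalg.norm(s - s_hat) / np.linalg.norm(s)))
```

    With far fewer filters than bands the inversion is underdetermined, which is exactly where learned priors in the reconstructor pay off.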

Projects funded in 2021-2022

  • Image-based Robot Grasping, RoboGrasp (PI – R. Pieters).
    Visual and robotic tasks, such as pose estimation for grasping, are trained from image data that need to be representative of the actual objects in order to achieve accurate results. This implies either generalized object models or large training datasets that capture all object and environment variability. However, data collection is often a bottleneck in the fast development of learning-based models. This project will develop a data generation pipeline that takes as input a CAD model of an object and automatically generates the required training data for object pose estimation and object grasp detection. The group will study (i) what visual representation and imaging phenomena are required for a synthetic dataset such that it is similar to real-world sensor observations, and (ii) what object representation is required for training a grasping model in simulation such that it is robust in the real world.
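    The idea of generating labelled training data automatically can be sketched as a sampling loop over randomized object poses and nuisance parameters (domain randomization); the rendering step and all parameter ranges below are hypothetical placeholders for the pipeline the project will build:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_training_example(rng):
    """One synthetic training record for pose estimation: a random
    object pose plus randomized nuisance parameters (lighting, texture).
    A hypothetical stand-in for a CAD-model rendering pipeline."""
    pose = {
        "translation": rng.uniform(-0.3, 0.3, size=3),        # metres
        "rotation_euler": rng.uniform(-np.pi, np.pi, size=3), # radians
    }
    nuisance = {
        "light_intensity": float(rng.uniform(0.2, 1.5)),
        "texture_id": int(rng.integers(0, 10)),
    }
    # A real pipeline would render an image of the CAD model under
    # this pose and nuisance setting and return (image, pose) pairs.
    return pose, nuisance

dataset = [sample_training_example(rng) for _ in range(100)]
print(len(dataset))
```

    Randomizing the nuisance factors widely is what lets a model trained purely in simulation tolerate the appearance gap to real sensor data.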
  • Seamless Interaction in Virtual Reality for Remote Visual Monitoring with a Network of UAVs (PI – A. Gusrialdi)
    An effective monitoring strategy can aid us in solving present social and environmental issues by providing an accurate assessment of the risk of damage, comprehensive information for proper action, and a measure of the effectiveness of ongoing policies. An emerging solution for effective monitoring is to deploy a team of autonomous robots that communicate within a network and cooperatively perform monitoring tasks. This project will develop control algorithms that allow a human operator to seamlessly control and interact with multiple unmanned aerial vehicles (UAVs) through Virtual Reality (VR) for remote monitoring applications.
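    Cooperation over a communication network is commonly built on consensus-type updates, in which each UAV nudges its state towards its neighbours'. A minimal sketch for agreeing on a rendezvous coordinate (the graph, step size, and positions are illustrative, not the project's algorithms):

```python
import numpy as np

# Toy consensus update for four networked UAVs agreeing on a 1-D
# rendezvous coordinate; graph and values are illustrative.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # ring of 4 agents
x = np.array([0.0, 4.0, 8.0, 2.0])          # initial positions
deg = A.sum(axis=1)                         # node degrees
eps = 0.2                                   # step size (< 1/max degree)

# Each agent moves towards the sum of neighbour offsets:
# x <- x - eps * (D - A) x, the standard Laplacian consensus step.
for _ in range(200):
    x = x + eps * (A @ x - deg * x)
print(x)  # all entries approach the average, 3.5
```

    For an undirected graph this update preserves the mean of the states, so the team converges to the average of the initial positions.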
  • DCNN Computational Lens for Phase Imaging (PI – K. Eguiazarian)
    Phase imaging exploits differences in the refractive index of different materials to differentiate between the structures under analysis. Currently, the design is led by optics and concentrates on registering images of the highest possible quality, which results in complex and bulky imaging systems. The goal is to substitute the bulky lens system with DCNNs. The project will investigate different DCNN structures, develop learning procedures, and obtain real-life experimental results. Since no datasets of phase images exist, the project will build a 'dataset creation optical system'.
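    The quantity being imaged is the optical phase delay accumulated through the sample, φ = 2π(n − n₀)d/λ. A small sketch of this relation with illustrative refractive indices:

```python
import numpy as np

def phase_delay_rad(n_sample, n_medium, thickness_um, wavelength_um=0.55):
    """Optical phase delay through a sample relative to its medium:
    phi = 2*pi*(n_sample - n_medium)*d / lambda.
    All default values are illustrative, not from the project."""
    return 2.0 * np.pi * (n_sample - n_medium) * thickness_um / wavelength_um

# e.g. a cell (n ~ 1.38) in water (n ~ 1.33), 5 um thick, 550 nm light
print(phase_delay_rad(1.38, 1.33, 5.0))
```

    Even a 5% refractive-index difference over a few micrometres yields a phase shift of several radians, which is why transparent, low-contrast samples become visible in phase.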
  • Selective Plane Illumination Fluorescence Lifetime Microscopy (SPIMFLIM): A cell-friendly 3D microscope for (chemical) imaging (PI – H. Välimäki)
    Biological in-vitro disease modelling and preclinical drug testing have started to move from 2D towards more complex 3D models, comprising various cell types in a physiological micro-environment. This trend necessitates a transformation of various measurement and imaging technologies to 3D as well. For 3D chemical imaging, luminescence technologies provide an unparalleled possibility: to spread analyte-sensitive luminescent indicators into the cell culture and detect their changing luminescence properties, such as luminescence lifetime, remotely with advanced fluorescence microscopy techniques. The combination of laser scanning confocal microscopy (LSCM) and a fast detector provides an option for 3D fluorescence lifetime microscopy (FLIM), but the point-by-point scanning is time-consuming and causes high phototoxicity. To make 3D FLIM faster and significantly more cell-friendly, the project proposes to combine two sophisticated microscopy modalities: selective plane illumination microscopy (SPIM) and FLIM.
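    The per-pixel quantity FLIM estimates is the fluorescence lifetime τ of a decay I(t) = A·exp(−t/τ). A minimal sketch recovering τ from a simulated photon-count decay with a log-linear fit (all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated mono-exponential fluorescence decay I(t) = A exp(-t / tau);
# tau is the lifetime FLIM maps per pixel. Values are illustrative.
tau_true, amplitude = 2.5, 1000.0            # ns, peak counts
t = np.linspace(0.0, 12.0, 60)               # ns
counts = amplitude * np.exp(-t / tau_true)
counts = rng.poisson(counts).astype(float)   # photon-counting noise

# Log-linear least-squares fit: ln I = ln A - t / tau.
mask = counts > 0
slope, intercept = np.polyfit(t[mask], np.log(counts[mask]), 1)
tau_hat = -1.0 / slope
print(round(tau_hat, 2))
```

    Because the lifetime is read from the decay shape rather than the absolute intensity, it is robust to indicator concentration, which is what makes lifetime-based chemical sensing attractive.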
  • Light Field Microscope (PIs – A. Gotchev, H. Caglayan, T. Ihalainen): 3D imaging of light-sensitive specimens (living cells and model organisms) is challenging and has so far been conducted with suboptimal spatial and temporal resolutions and with the risk of causing imaging-related changes in specimen physiology due to prolonged scanning-based sensing. Light Field Microscopy brings the promise of snapshot-like 3D volumetric imaging, greatly reducing the required acquisition time and light dose. The project will develop the methodology for designing the microscope in terms of optics, hardware, and related computational methods in relation to the targeted use cases. It will also construct the first prototype, to be used for real-life validation experiments and for developing the corresponding LF microscopy measurement protocols.