Projects funded in 2024-2025
- TAU Light Field Microscope (LFM2) (PIs – Prof. Gotchev, Prof. Caglayan)
This is the second stage of the TAU LFM project funded by the Imaging Platform in 2021-2022. The main components of the modular LFM have been set up, and the system is ready for experimenting with novel LFM design components (metaoptics and neural processing). A new extended depth of field (EDoF) microscopy imaging method has been designed, and its efficacy has been demonstrated through simulations. Once the ongoing fabrication of the metaoptic is completed, the EDoF microscopy approach will be integrated into the modular LFM setup and tested against reference methods. Targeting a fully functional LF microscope, the LFM2 project will address the following directions: 1) the light transport model developed for EDoF needs to be further developed to address the challenges in optical modelling of LF imaging, and 2) once the efficacy of the new LFM method is validated through simulations, experimental verification will further require addressing possible mismatches between the simulated and real light transport models, where the latter is tailored to the desired use cases, i.e., imaging of large biological specimens, cells inside transparent hydrogels, and zebrafish.
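To make the light transport modelling concrete, below is a minimal sketch of a depth-dependent point spread function (PSF) forward model of the kind such EDoF simulations build on. The Gaussian PSF shape, slice count, and sigma values are illustrative assumptions, not the project's actual model.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size, sigma):
    """Isotropic Gaussian PSF on a size x size grid, normalized to unit sum."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def forward_image(volume, sigmas):
    """Incoherent forward model: each depth slice is blurred by its own PSF
    and the contributions are summed on the sensor."""
    image = np.zeros(volume.shape[1:])
    for slice_, sigma in zip(volume, sigmas):
        image += fftconvolve(slice_, gaussian_psf(15, sigma), mode="same")
    return image

# Toy 3-slice volume: defocus (sigma) grows away from the focal plane.
volume = np.random.rand(3, 64, 64)
image = forward_image(volume, sigmas=[0.8, 2.0, 4.0])
```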
- DIGIWHEEL – Onboard monitoring of wheel-rail conditions using computer vision techniques (PI – Prof. Xinxin Yu)
Conventionally, the condition monitoring of railway equipment is performed while the train is stopped, which easily leads to train delays and only reduces the likelihood of accidents. A more precise approach is to monitor the current (ideally real-time) condition on a moving train, enabling highly detailed forecasts. DIGIWHEEL aims 1) to develop an automatic inspection system for the condition monitoring of railway wheels and rails and their interaction location using computer vision, and 2) to validate it under laboratory conditions. The resulting DIGIWHEEL computer simulation outputs will provide essential parameters for the maintenance of tracks and vehicles (wheel-to-rail profile geometry and track geometry), and the estimation of the wheel-to-rail interaction locations will contribute to decision-making algorithms and safety prediction for autonomous trains.
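As a toy illustration of vision-based profile measurement, the sketch below extracts a crude profile curve from a single side-view image with OpenCV. The file name, thresholds, and topmost-edge heuristic are assumptions for illustration only, not the project's inspection method.

```python
import cv2
import numpy as np

# Minimal sketch: extract a wheel/rail profile curve from a side-view image.
img = cv2.imread("wheel_side_view.png", cv2.IMREAD_GRAYSCALE)
assert img is not None, "expects a side-view image on disk (illustrative)"
blur = cv2.GaussianBlur(img, (5, 5), 0)
edges = cv2.Canny(blur, 50, 150)

# Take, per image column, the topmost edge pixel as a crude profile estimate.
ys, xs = np.nonzero(edges)
profile = {x: ys[xs == x].min() for x in np.unique(xs)}
```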
- Imaging with mm-wave high-resolution radar (PI – Dr. Strokina, co-PI Prof. Ghabcheloo)
Cost-effective, compact, and lightweight, modern MIMO (Multiple-Input Multiple-Output) radars provide spatial and angular resolution comparable to that of optical sensors. Together with velocity-measuring capabilities and a long detection range, this makes radars an attractive alternative to LiDARs and cameras in various perception tasks. However, the major challenge of MIMO radar data is the significant amount of noise and artifacts. In addition, the intensity of the measurement points depends on the properties of the materials, the location of the object, and the general conditions of the environment. In this project, the aim is to develop an occupancy grid reconstruction method for MIMO radar, addressing the problem from the following directions: semantic conditioning, taking into account the complex nature of the signal, and collecting an extensive dataset indoors and outdoors utilizing the CIVIT and AMM research facilities.
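A minimal sketch of the core data structure is given below: a log-odds occupancy grid updated from radar detections, with intensity used to down-weight weak, noisy returns. Grid size, resolution, and the intensity-to-confidence mapping are illustrative assumptions.

```python
import numpy as np

RES, SIZE = 0.2, 200               # 0.2 m cells, 40 m x 40 m grid
log_odds = np.zeros((SIZE, SIZE))

def update(points, l_occ=0.85):
    """Raise the log-odds of every cell that receives a detection (x, y,
    intensity), weighting by intensity to down-weight weak, noisy returns."""
    for x, y, intensity in points:
        i = int(x / RES) + SIZE // 2
        j = int(y / RES) + SIZE // 2
        if 0 <= i < SIZE and 0 <= j < SIZE:
            log_odds[i, j] += l_occ * min(intensity, 1.0)

update([(3.1, -0.4, 0.9), (3.2, -0.3, 0.2)])
prob = 1.0 / (1.0 + np.exp(-log_odds))   # occupancy probability per cell
```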
Projects funded in 2023-2024
- Neuromorphic Imaging for Robotics (PI – Prof. Pieters, co-PI Prof. Kämäräinen)
In this project, the neurocomputing potential of a dynamic vision sensor (DVS) for imaging, neurocomputing and robotics is explored. DVSs are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency in the order of microseconds. However, because the output is composed of a sequence of asynchronous events rather than actual intensity images, traditional vision algorithms cannot be applied, so new algorithms that exploit the high temporal resolution and the asynchronous nature of the sensor are required. The goals of this project are 1) to develop an algorithm that processes the spike events coming from a DVS directly on a neuromorphic chip, 2) to connect a frame-based camera to a neuromorphic computer in order to extract relevant features (color, depth, etc.) using dynamic neural fields, and 3) to develop a framework that sustains the autonomous learning of new skills in robotics, enabled by the low-level, straightforward, and temporal representation.
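The sketch below illustrates one common way of handling such asynchronous output, accumulating DVS events into an exponentially decaying time surface. The sensor resolution and decay constant are assumptions, and the project itself targets processing directly on neuromorphic hardware rather than this CPU-side representation.

```python
import numpy as np

H, W, TAU = 260, 346, 0.03          # assumed sensor size, decay constant (s)

def time_surface(events, t_now):
    """Each pixel stores the polarity of its latest event (x, y, t, polarity),
    weighted by how recently the event occurred; old events fade exponentially."""
    surface = np.zeros((H, W))
    for x, y, t, pol in events:
        surface[y, x] = (1 if pol else -1) * np.exp(-(t_now - t) / TAU)
    return surface

events = [(10, 20, 0.001, 1), (11, 20, 0.004, 0)]
surf = time_surface(events, t_now=0.005)
```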
- 3D microrheology for micromechanical and microstructural analysis of hydrogel cell culture systems (PI – Dr. Koivisto)
Microrheology is a material characterization method based on tracking the Brownian motion of particles inside complex fluid systems. In multiple particle tracking (MPT) microrheology, the movement of particles in a microscope video can be analyzed to give information on the visco-elasticity of the surrounding fluid. In this project, the aim is to develop MPT microrheology further to enable the 3D analysis of complex fluid systems such as 3D hydrogel cell cultures. This is achieved by combining individual MPT data sets of the same sample and reconstructing a 3D image of the microstructure based on the particle motion.
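The sketch below illustrates the core MPT computation: the mean squared displacement (MSD) of a tracked particle and, for the simplest purely viscous case, a viscosity estimate via the Stokes-Einstein relation. The trajectory, frame interval, and particle radius are toy assumptions; viscoelastic hydrogels require the generalized Stokes-Einstein analysis instead.

```python
import numpy as np

def msd(traj):
    """Mean squared displacement of one 2D trajectory (N x 2 array),
    averaged over all time origins, for lags 1..N-1."""
    n = len(traj)
    return np.array([np.mean(np.sum((traj[lag:] - traj[:-lag])**2, axis=1))
                     for lag in range(1, n)])

# For a purely viscous fluid, MSD(t) = 4*D*t in 2D, and the Stokes-Einstein
# relation D = kB*T / (6*pi*eta*a) then yields the viscosity eta.
kB, T, a = 1.380649e-23, 298.0, 0.5e-6                       # J/K, K, radius (m)
traj = np.cumsum(np.random.randn(1000, 2) * 1e-8, axis=0)    # toy random walk
D = np.polyfit(np.arange(1, 1000) * 0.01, msd(traj), 1)[0] / 4.0
eta = kB * T / (6 * np.pi * D * a)                           # viscosity (Pa*s)
```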
Projects funded in 2022-2023
- High-speed FRET imaging of living cells: dealing with extremely low signal-to-noise ratio, low frame-rate, and raster-scan deformations (PIs – Prof. Foi and Dr. Ihalainen)
Förster (or Fluorescence) Resonance Energy Transfer (FRET) is an imaging technique that is unique in generating fluorescence signals sensitive to molecular conformation, association, and separation in the 1–10 nm range. The project aims to substantially improve the FRET imaging of mobile, living cells, where there can be deformation during image acquisition. Specifically, the project will develop a computational imaging pipeline for FRET that jointly compensates for the low signal-to-noise ratio (SNR) inherent to FRET, corrects for the image deformations due to scanning, and reconstructs a spatio-temporally accurate output at a higher frame rate.
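For orientation, the sketch below shows two basic quantities behind FRET imaging: a pixelwise acceptor-to-total emission ratio from co-registered channel images, and the distance dependence of FRET efficiency. The plain ratio (without spectral bleed-through corrections) and the epsilon floor are simplifying assumptions.

```python
import numpy as np

def fret_ratio(donor, acceptor, eps=1e-6):
    """Acceptor emission relative to total emission; at very low SNR this
    estimate becomes unstable, which is the regime the project targets."""
    donor = donor.astype(float)
    acceptor = acceptor.astype(float)
    return acceptor / (donor + acceptor + eps)

def fret_efficiency(r_nm, r0_nm=5.0):
    """The distance sensitivity behind FRET: E = 1 / (1 + (r/R0)^6)."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)
```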
- Bayesian Machine Learning for Model Parameter Selection in Inverse Problems: Application in Personalized Brain Activity Imaging (PI – Dr. Koulouri)
The estimation and visualization of the electrically active parts of the brain from electroencephalography (EEG) recordings is referred to as EEG source imaging. The imaging of focal sources from EEG recordings is very sensitive to the electrical modelling of the patient's head tissues and to the modelled prior information. The conductivities of the head tissues vary significantly from patient to patient. Currently available EEG devices do not take this into account; instead, it is common to use fixed literature-based conductivity parameters and simplified prior information. The aim of this project is to employ state-of-the-art Bayesian machine learning methods to (a) compensate for source imaging errors due to the unknown tissue conductivities and, simultaneously, compute at least a low-order estimate of the patient-specific skull conductivity value, and (b) design appropriate hierarchical sparsity priors in order to recover focal sources both on the brain surface and deep in the brain.
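The sketch below shows the baseline linear inverse problem such work builds on: a minimum-norm (Gaussian-prior MAP) source estimate from a lead field matrix. The toy lead field and dimensions are assumptions; the project goes beyond this with hierarchical sparsity priors and learned conductivity corrections.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 32, 500                           # electrodes, candidate source locations
L = rng.standard_normal((m, n))          # toy lead field (head model)
x_true = np.zeros(n); x_true[40] = 1.0   # one focal source
y = L @ x_true + 0.05 * rng.standard_normal(m)   # noisy EEG measurement

# MAP estimate under a Gaussian source prior (minimum-norm / Tikhonov solution);
# lam is the noise-to-prior variance ratio.
lam = 0.1
x_map = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(m), y)
```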
- Smart material-based diffractive optical elements for hardware in the loop hybrid diffractive optics design for achromatic extended-depth-of-field imaging (PIs – Prof. Egiazarian and Prof. Priimägi)
The project will focus on advanced hardware for creating flat optics with smart materials. Flat lenses are an alternative to conventional refractive optics that overcomes its limitations (e.g., chromatic aberration, low spectral resolution, and shallow depth of field) and enables novel functionalities, thanks to the potential of these lenses for nearly arbitrary modulation of light. Flat lenses are compact, light, inexpensive to fabricate, and have the potential to reduce power consumption. The hardware will include a digital micromirror device and an azopolymer film, which offer unique smart features. For high-quality imaging, the hardware will be inserted into the programming loop, where in each iteration the flat optics will be adjusted for better performance. The adjustment will be managed by involving image datasets and neural network training.
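A minimal sketch of the hardware-in-the-loop idea follows: propose a pattern, display it, capture the optical response, score it, and keep the best candidate. The display_on_dmd and capture_image functions are hypothetical stand-ins (simulated here), and the random search is a placeholder for the dataset-driven neural network training the project will use.

```python
import numpy as np

_pattern = np.zeros((256, 256))

def display_on_dmd(pattern):
    """Hypothetical DMD driver stand-in; here it only stores the pattern."""
    global _pattern
    _pattern = pattern

def capture_image():
    """Hypothetical camera stand-in; simulates a noisy optical response."""
    return _pattern + 0.1 * np.random.rand(*_pattern.shape)

def sharpness(img):
    """Simple gradient-energy focus metric used as the optimization score."""
    gy, gx = np.gradient(img)
    return float(np.mean(gx**2 + gy**2))

best_score, best_pattern = -np.inf, None
for _ in range(100):
    candidate = np.random.rand(256, 256)     # candidate optical pattern
    display_on_dmd(candidate)
    score = sharpness(capture_image())
    if score > best_score:
        best_score, best_pattern = score, candidate
```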
- HIDDEN – Hyperspectral Imaging with metaoptics and Deep Neural network (PIs – Prof. Gotchev and Prof. Caglayan)
Hyperspectral imaging critically serves various fields such as remote sensing, biomedicine, and agriculture. Typically, each multispectral imaging application requires a specific spectral range, a specific number of spectral bands, and corresponding spectral (color) filters. Current settings are either limited in spectral resolution or bulky, and a cost-effective manufacturing approach remains elusive. The HIDDEN project will develop a novel snapshot hyperspectral imaging system using optimized diffractive optical elements and color filters along with new computational imaging methods. In particular, the researchers will seek jointly designed and optimized novel optical elements and neural network architectures, which together will advance the reconstruction quality of hyperspectral images.
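The sketch below captures the linear measurement model behind filter-based snapshot hyperspectral imaging: each pixel integrates the scene spectrum through one of a few filter responses, and reconstruction inverts this ill-posed map. Filter shapes and band counts are toy assumptions; HIDDEN replaces the plain least-squares inverse with jointly optimized optics and neural networks.

```python
import numpy as np

rng = np.random.default_rng(1)
bands, filters = 31, 8
F = rng.random((filters, bands))                 # filter spectral responses
spectrum = rng.random(bands)                     # true spectrum at one pixel
y = F @ spectrum                                 # snapshot measurements

# With far fewer filters than bands the inversion is underdetermined, which
# is why learned priors (neural networks) are needed for high-quality recovery.
recon, *_ = np.linalg.lstsq(F, y, rcond=None)    # minimum-norm least squares
```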
Projects funded in 2021-2022
- Image-based Robot Grasping, RoboGrasp (PI – R. Pieters)
Visual and robotic tasks, such as pose estimation for grasping, are trained from image data that need to be representative of the actual objects in order to achieve accurate results. This implies either generalized object models or large training datasets that capture all object and environment variability. However, data collection is often a bottleneck in the fast development of learning-based models. This project will develop a data generation pipeline that takes as input a CAD model of an object and automatically generates the required training data for object pose estimation and object grasp detection. The group will study (i) what visual representation and imaging phenomena are required for a synthetic dataset such that it is similar to real-world sensor observations, and (ii) what object representation is required for training a grasping model in simulation such that it is robust in the real world.
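As a minimal illustration of the data generation idea, the sketch below samples random 6-DoF object poses that a renderer would turn into labelled training images. The pose ranges are assumptions and the renderer itself is omitted; the project additionally studies which visual and object representations make such synthetic data transfer to the real world.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_pose():
    """Random translation (m) and rotation as Euler angles (rad) for one
    synthetic training sample; ranges are illustrative assumptions."""
    t = rng.uniform([-0.2, -0.2, 0.4], [0.2, 0.2, 0.8])
    rpy = rng.uniform(-np.pi, np.pi, size=3)
    return t, rpy

# Poses to feed a renderer, which would produce (image, pose, grasp) labels.
dataset = [sample_pose() for _ in range(1000)]
```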
- Seamless Interaction in Virtual Reality for Remote Visual Monitoring with a Network of UAVs (PI – A. Gusrialdi)
An effective monitoring strategy can aid us in solving present social and environmental issues by providing an accurate assessment of the risk of damage, comprehensive information for proper action, and a measure of the effectiveness of ongoing policies. An emerging solution for effective monitoring is to deploy a team of autonomous robots that communicate within a network and cooperatively perform monitoring tasks. This project will develop control algorithms that allow a human operator to seamlessly control and interact with multiple unmanned aerial vehicles (UAVs) through Virtual Reality (VR) for remote monitoring applications.
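For intuition about the underlying control layer, the sketch below runs leader-follower consensus steps in which each UAV moves toward its neighbors while the leader tracks an operator waypoint. The single-integrator model, communication graph, and gains are illustrative assumptions, not the project's algorithms.

```python
import numpy as np

positions = np.array([[0.0, 0.0], [2.0, 1.0], [1.0, 3.0]])  # 3 UAVs (x, y)
neighbors = {0: [1, 2], 1: [0], 2: [0]}                     # communication graph
leader, target = 0, np.array([5.0, 5.0])                    # operator waypoint

for _ in range(200):                                        # simulation steps
    step = np.zeros_like(positions)
    for i, nbrs in neighbors.items():
        # Consensus term: move toward the average of the neighbors.
        step[i] = sum(positions[j] - positions[i] for j in nbrs)
    step[leader] += target - positions[leader]              # leader tracks command
    positions += 0.05 * step                                # single-integrator update
```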
- DCNN Computational Lens for Phase Imaging (PI – K. Eguiazarian)
Phase imaging exploits differences in the refractive index of different materials to differentiate between the structures under analysis. Currently, the design is led by optics and concentrates on registering images of the highest possible quality, which results in complex and bulky imaging systems. The goal is to substitute the bulky lens system with deep convolutional neural networks (DCNNs). The project will investigate different DCNN structures, develop learning procedures, and obtain real-life experimental results. Since no datasets of phase images exist, the project will build a 'dataset creation optical system'.
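The sketch below shows the "computational lens" idea in its simplest form: a small convolutional network mapping a raw intensity measurement to a phase map. The architecture is an illustrative assumption, not the project's design.

```python
import torch
import torch.nn as nn

class PhaseNet(nn.Module):
    """Toy convolutional network standing in for the lens system: it maps a
    lensless intensity measurement to a predicted phase map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = PhaseNet()
measurement = torch.rand(1, 1, 64, 64)   # raw sensor intensity
phase = model(measurement)               # predicted phase map
```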
- Selective Plane Illumination Fluorescence Lifetime Microscopy (SPIMFLIM): A cell-friendly 3D microscope for (chemical) imaging (PI – H. Välimäki)
Biological in-vitro disease modelling and preclinical drug testing have started to move from 2D towards more complex 3D models, comprising various cell types in a physiological micro-environment. This trend necessitates a transformation of various measurement and imaging technologies to 3D as well. For 3D chemical imaging, luminescence technologies provide an unparalleled possibility: to spread analyte-sensitive luminescent indicators into the cell culture and detect their changing luminescence properties, such as luminescence lifetime, remotely with advanced fluorescence microscopy techniques. The combination of laser scanning confocal microscopy (LSCM) and a fast detector provides an option for 3D fluorescence lifetime microscopy (FLIM), but the point-by-point scanning is time-consuming and generates high phototoxicity. To make 3D FLIM faster and significantly more cell-friendly, the project proposes to combine two sophisticated microscopy modalities: selective plane illumination microscopy (SPIM) and FLIM.
- Light Field Microscope (PIs – A. Gotchev, H. Caglayan, T. Ihalainen)
3D imaging of light-sensitive specimens (living cells and model organisms) is challenging and has so far been conducted with suboptimal spatial and temporal resolutions and with the risk of causing imaging-related changes in specimen physiology due to prolonged scanning-based sensing. Light field microscopy brings the promise of snapshot-like 3D volumetric imaging, greatly reducing the required acquisition time and light dose. The project will develop the methodology for designing the microscope in terms of optics, hardware, and related computational methods in relation to the targeted use cases. It will also construct the first prototype to be used for real-life validation experiments and for developing the corresponding LF microscopy measurement protocols.
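The sketch below illustrates the snapshot principle behind light field imaging: shift-and-sum refocusing over a grid of sub-aperture views produces a focal stack from a single capture. View count, image size, and the integer-shift approximation are toy assumptions.

```python
import numpy as np

def refocus(views, alpha):
    """views[u, v] is the image seen from sub-aperture (u, v); shifting each
    view proportionally to its offset and averaging focuses at a new depth."""
    U, V, H, W = views.shape[:2] + views.shape[2:]
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(views[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

views = np.random.rand(5, 5, 64, 64)                     # toy 5x5 view grid
stack = [refocus(views, a) for a in (-1.0, 0.0, 1.0)]    # focal stack
```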