In vivo fluorescence imaging is a powerful tool for understanding and characterizing biological systems; for example, it enables recording the neural activity (neuronal firing) of freely moving mice. Although in vivo fluorescence imaging is widely used and has benefited the biomedical image processing field, several fundamental physical limitations degrade image quality and complicate data analysis. The three fundamental limits in fluorescence microscopy addressed here are poor signal-to-noise ratio (SNR), poor spatial resolution, and the need for densely sampled axial images. In this work, we employ machine learning (ML) models based on deep convolutional neural networks (CNNs) to overcome these fundamental limits for 3D in vivo fluorescence imaging.
First, poor SNR arises from the fast imaging speeds required to capture dynamic processes in 3D in vivo. Conventionally, to improve image SNR at a given acquisition rate, one can apply computational denoising techniques based on statistical methods to suppress the noise. However, these methods are either computationally expensive or assume simple Poisson or Gaussian noise statistics, which are inappropriate for fluorescence microscopes, whose images usually contain a mixture of Poisson and Gaussian (MPG) noise. To overcome this limitation, we developed two CNN models, based on the Noise2Noise and DnCNN architectures, trained on a large fluorescence microscopy denoising dataset containing MPG noise. The noisy training images were collected with wide-field, confocal, and two-photon microscopes, with samples including fixed cells, zebrafish, and mouse brains. An open-source ImageJ plugin built on the trained CNN model performs image denoising within tens of milliseconds and outperforms conventional denoising methods (~8.1 dB PSNR improvement, equivalent to an 8x faster acquisition). We then extensively validated the pre-trained models on out-of-distribution noise levels, contrast, microscope modalities, biological samples (2D and 3D images), and other open-source fluorescence microscopy datasets. We also demonstrate ML-based fluorescence lifetime and phasor denoising techniques and their applications: the ML-based approaches yield high-SNR lifetime information and improve lifetime segmentation with denoised phasors.
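To make the MPG noise model concrete, the following is a minimal sketch (not the dissertation's actual data pipeline) of how a clean image can be corrupted with signal-dependent Poisson shot noise plus additive Gaussian read noise; the function name and parameter values are illustrative assumptions.

```python
import numpy as np

def add_mpg_noise(clean, photon_scale=30.0, read_sigma=2.0, seed=None):
    """Corrupt a clean image (values in [0, 1]) with mixed Poisson-Gaussian noise.

    photon_scale: expected photon count at unit intensity (controls shot noise).
    read_sigma:   read-noise standard deviation in 8-bit gray levels.
    """
    rng = np.random.default_rng(seed)
    # Signal-dependent shot noise: Poisson draw at the photon level, rescaled back.
    shot = rng.poisson(clean * photon_scale) / photon_scale
    # Signal-independent Gaussian read noise, rescaled to [0, 1] intensity units.
    read = rng.normal(0.0, read_sigma / 255.0, clean.shape)
    return shot + read
```

Training pairs for a Noise2Noise-style model can then be generated by drawing two independent noisy realizations of the same underlying scene.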
Second, having achieved faster acquisition of in vivo images, our next goal is to improve the spatial resolution of fluorescence images beyond the fundamental diffraction limit. Developing such an ML model normally requires another massive training dataset mapping diffraction-limited images to super-resolution (SR) images, which is cumbersome to acquire. We have developed a prototype ML model that requires only an ultra-small training dataset (15 target SR images with 50 diffraction-limited images per target) to generate super-resolution images. This model is built on a "Dense Encoder-Decoder" (DenseED) block, which adapts the dense layers of existing popular super-resolution models. We experimentally verified the model on fluorescently labeled fixed bovine pulmonary artery endothelial (BPAE) cells: the ML-generated super-resolution images improve SNR by ~3.49 dB and spatial resolution by 2x compared to the diffraction-limited image. This approach is beneficial for in vivo, X-ray, and MRI imaging, where collecting large datasets is challenging. We further validated the proposed methodology on experimentally captured diffraction-limited and super-resolution image datasets; the DenseED block clearly provides better super-resolution images than simple CNN-based ML models when trained on an ultra-small dataset (a minimal number of FOVs). For FLIM super-resolution, we propose a traditional deconvolution approach to recover the true object/sample from experimentally captured diffraction-limited images: we provide a theoretical model for convolution in FLIM and a Richardson-Lucy (RL) deconvolution with total variation (TV) regularization to correct convolution-induced distortions in FLIM measurements.
In addition, the proposed method is validated on experimental multi-photon microscopy (MPM) FLIM images of fluorescently labeled fixed BPAE cells.
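The core RL iteration can be sketched as follows. This is a plain Richardson-Lucy update (the dissertation's version adds a TV regularization term, omitted here for brevity), and the function and variable names are illustrative assumptions, not the actual implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, num_iter=30, eps=1e-12):
    """Plain Richardson-Lucy deconvolution of a 2D image.

    Iterates x <- x * (psf* ⊛ (y / (psf ⊛ x))), the multiplicative update
    that maximizes the Poisson likelihood of the observed image y.
    """
    observed = np.maximum(observed, 0.0)          # RL assumes nonnegative counts
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1, ::-1]                  # adjoint of the blur operator
    for _ in range(num_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, eps)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

A TV-regularized variant divides the update by (1 - λ·div(∇x/|∇x|)) each iteration, suppressing noise amplification while preserving edges.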
Finally, we further enhance the capabilities of low-dosage, long-term in vivo imaging with ML models. We develop compressive-sensing-based (unsupervised ML) 3D volume reconstruction and image denoising for low-power image acquisition, enabling accurate reconstruction and improved SNR for long-term imaging. Experimental results are presented in the respective sections.
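As a minimal illustration of the compressive-sensing principle behind low-dosage reconstruction, the sketch below recovers a sparse signal from fewer measurements than unknowns using ISTA (iterative soft-thresholding); this is a generic textbook solver, not the dissertation's 3D reconstruction pipeline, and all names are illustrative.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.02, num_iter=500):
    """ISTA for the lasso problem: min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(num_iter):
        grad = A.T @ (A @ x - y)        # gradient of the quadratic term
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

In practice the sensing matrix A models the low-dosage acquisition and the sparsity is enforced in a transform domain (e.g., wavelets) rather than directly on the voxels.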
Overall, this dissertation shows that the above methods, namely image denoising, image super-resolution, and low-dosage 3D volume imaging, address the fundamental limitations of fluorescence microscopy/FLIM with machine learning models, achieving significant gains in accuracy and computational cost (instant inference, faster ML model training, and reduced photobleaching of the sample) compared with conventional methods.