Biomedical image analysis plays an essential role in providing both qualitative and quantitative insights crucial for advancing biomedical research and applications. In recent years, deep learning has surpassed traditional image processing methods that rely on handcrafted features, achieving superior performance in biomedical image analysis. However, a significant challenge persists: the annotation effort required to train deep learning models is often costly and time-consuming. The recent accumulation of large volumes of biomedical image data has heightened the demand for more efficient processing methods. Given the complexities surrounding image annotation and the need for practical solutions, there is a pressing demand for annotation-efficient deep learning approaches. In this dissertation, we propose novel deep learning methodologies aimed at reducing annotation effort and enhancing model applicability and practicality for various biomedical image processing tasks.
First, we present a novel cell nucleus generation approach that expands the training set for cell segmentation tasks. This method improves segmentation performance for both 2D and 3D cells, even when annotations are limited. Second, we introduce a weak-annotation method for 3D instance segmentation. Our approach requires only bounding-box annotations for all instances and voxel-level annotations for a small subset, yet achieves performance comparable to that of state-of-the-art approaches despite the reduced annotation effort. Lastly, we propose a synthetic image-driven approach to improving the quality of fluorescence images. This method does not rely on human annotations and is capable of transforming noisy fluorescence images into clean, interpretable ones, facilitating better observation and quantification in biological studies and the life sciences.