Time-lapse fluorescence live cell imaging has been widely used to study various dynamic processes in cell biology. However, fluorescence live cell images often suffer from low contrast, noise, and uneven illumination, which prevent accurate cell segmentation. Convolutional neural networks have been successfully applied to natural image classification and segmentation by extracting hierarchical features, and these features can be transferred to other domains for image segmentation. Moreover, the temporal coherence of time-lapse images allows sufficient features to be extracted from a limited number of frames to segment entire time-lapse movies. In this paper, we propose a novel framework called vU-net, which integrates a pretrained VGG-16 [1] model as the encoder with a simplified convolutional structure derived from U-net [2] as the decoder to reconstruct cell edges with higher accuracy from limited training images. We evaluated our framework on high-resolution images of paxillin, a canonical adhesion marker, in migrating PtK1 cells acquired by Total Internal Reflection Fluorescence (TIRF) microscopy, and achieved higher cell segmentation accuracy than the conventional U-net. We also validated our framework on noisy confocal fluorescence live cell images of GFP-mDia1 in PtK1 cells. We demonstrate that vU-net can be practically applied to challenging live cell movies, since it requires limited training sets and achieves highly accurate segmentation.
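Comparing segmentation accuracy between frameworks requires a pixel-wise overlap measure between predicted and ground-truth cell masks. As a minimal illustration, the sketch below computes the Dice coefficient, a common overlap metric for binary segmentation masks; the abstract does not state the paper's exact evaluation metric, so this is only one plausible choice.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy example: a predicted cell mask shifted by one pixel from the truth.
truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True           # 16 ground-truth pixels
pred = np.zeros((8, 8), dtype=bool)
pred[3:7, 2:6] = True            # 16 predicted pixels, shifted down one row
print(dice_coefficient(pred, truth))  # overlap = 12 pixels → 2*12/32 = 0.75
```

A score of 1.0 indicates perfect agreement with the ground-truth mask; values closer to 0 indicate poor overlap, so a higher Dice score corresponds to the "higher accuracy of cell segmentation" claimed above.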