Advances in microscopy have sharply increased the volume of output data, and image analysis
methods powerful enough to keep pace are required. In particular, finding and characterizing nuclei from
microscopy images, a core cytometry task, remains difficult to automate. While deep
learning models have yielded encouraging results on this problem, the most powerful
approaches have not yet been applied to it. Here, we review and evaluate
state-of-the-art very deep convolutional neural network architectures and training
strategies for segmenting nuclei from brightfield cell images. We tested U-Net as
a baseline model; evaluated U-Net++, Tiramisu, and DeepLabv3+ as the latest instances
of advanced families of segmentation models; and propose PPU-Net, a novel lightweight
alternative. On the challenging brightfield images, the deeper architectures outperformed
the standard U-Net and improved on results from previous studies, reaching balanced pixel-wise accuracies
of up to 86%. PPU-Net achieved this performance with 20-fold fewer parameters than
the comparably accurate methods. All models performed better on larger nuclei and in
sparser images. We further confirmed that in the absence of plentiful training data,
augmentation and pretraining on other data improve performance. In particular, for all
models, training on only 16 images with data augmentation is enough to achieve a pixel-wise
F1 score within 5% of that obtained with the full data set. The remaining
segmentation errors are mainly due to missed nuclei in dense regions, overlapping
cells, and imaging artifacts, indicating the major outstanding challenges.
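
For reference, the sketch below shows one way the reported pixel-wise metrics (balanced accuracy and F1) could be computed from binary nucleus masks. It is not taken from the paper; the function name, NumPy-based implementation, and variable names are illustrative assumptions.

    import numpy as np

    # Hypothetical helper (not from the paper): pixel-wise metrics from binary masks.
    def pixel_metrics(pred, true):
        pred, true = pred.astype(bool), true.astype(bool)
        tp = np.sum(pred & true)          # nucleus pixels correctly predicted
        tn = np.sum(~pred & ~true)        # background pixels correctly predicted
        fp = np.sum(pred & ~true)         # background pixels predicted as nucleus
        fn = np.sum(~pred & true)         # nucleus pixels missed
        sensitivity = tp / (tp + fn)      # recall on nucleus pixels
        specificity = tn / (tn + fp)      # recall on background pixels
        balanced_accuracy = (sensitivity + specificity) / 2
        precision = tp / (tp + fp)
        f1 = 2 * precision * sensitivity / (precision + sensitivity)
        return balanced_accuracy, f1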