We present a new technique to fully automate the segmentation of an organ from 3D ultrasound (3D-US) volumes, using the placenta as the target organ. Image analysis tools to estimate organ volume do exist but are too time-consuming and operator dependent. Fully automating the segmentation process would potentially allow placental volume to be used to screen for increased risk of pregnancy complications. The placenta was segmented from 2,393 first-trimester 3D-US volumes using a semiautomated technique, quality controlled by three operators to produce the “ground-truth” data set. A fully convolutional neural network (OxNNet) was trained on this ground-truth data set to segment the placenta automatically. OxNNet delivered state-of-the-art automatic segmentation, and the effect of training-set size on its performance demonstrated the need for large data sets. The clinical utility of placental volume was tested by predicting small-for-gestational-age (SGA) babies at term. The receiver-operating characteristic curves were almost identical for OxNNet and the ground truth. Our results demonstrate good similarity to the ground truth and almost identical clinical results for the prediction of SGA.
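Segmentation similarity of the kind reported above is commonly quantified with a voxel-overlap score such as the Dice coefficient. The sketch below is purely illustrative (not the authors' code; the mask values are made up) and shows the metric on flattened binary masks:

```python
# Illustrative sketch: Dice similarity coefficient between two binary
# segmentation masks, flattened to 1D voxel lists. Hypothetical data,
# not from the OxNNet study.
def dice(a, b):
    """Dice coefficient: 2*|A intersect B| / (|A| + |B|) for binary masks."""
    inter = sum(x and y for x, y in zip(a, b))
    return 2.0 * inter / (sum(a) + sum(b))

auto_mask  = [1, 1, 0, 1, 0, 0]  # hypothetical automated segmentation
truth_mask = [1, 0, 0, 1, 1, 0]  # hypothetical ground truth
print(dice(auto_mask, truth_mask))  # 2*2 / (3+3) = 0.666...
```

A score of 1.0 indicates perfect voxel-level agreement with the ground truth; 0.0 indicates no overlap.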
Pádraig Looney, Gordon N. Stevenson, Kypros H. Nicolaides, Walter Plasencia, Malid Molloholli, Stavros Natsis, Sally L. Collins