IEEE Trans Med Imaging. 2022 Nov 1;PP. doi: 10.1109/TMI.2022.3218662. Online ahead of print.
Non-rigid registration between 3D surfaces is an important but notoriously difficult problem in medical imaging, because finding correspondences between non-isometric instances is mathematically non-trivial. We propose a novel self-supervised method that learns shape correspondences directly from a group of bone surfaces segmented from CT scans, without any supervision from time-consuming and error-prone manual annotations. Relying on a Siamese architecture, a DiffusionNet feature extractor is jointly trained on pairs of randomly rotated and scaled copies of the same shape. The learned embeddings are aligned in the spectral domain using eigenfunctions of the Laplace-Beltrami operator. Additional normalization and regularization losses guide the learned embeddings towards a similar, uniform representation over the spectrum, which encourages the embeddings to encode multiscale features and promotes sparsity and diagonality of the inferred functional maps. Our method achieves state-of-the-art results among unsupervised methods on several benchmarks, and shows greater robustness and efficacy in registering moderately deformed shapes. A hybrid refinement strategy is proposed to retrieve smooth and close-to-conformal point-to-point correspondences from the inferred functional map. Our method is orientation- and discretization-invariant: given a pair of near-isometric surfaces, it automatically computes a registration with high accuracy and outputs anatomically meaningful correspondences. In this study, we show that neural networks can learn general embeddings from 3D shapes in a self-supervised way. The learned features are multiscale, informative, and discriminative, and may benefit a wide range of morphology-related downstream tasks, such as diagnostics, data screening, and statistical shape analysis in the future.
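The spectral pipeline summarized above (learned embeddings projected onto Laplace-Beltrami eigenfunctions, a functional map solved in the spectral domain, then point-to-point recovery by nearest-neighbour matching) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's implementation: the eigenbases here are random orthonormal stand-ins for real LBO eigenfunctions, and the features are random stand-ins for DiffusionNet embeddings; all names and sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_y, k, d = 200, 220, 30, 64  # vertices on X and Y, basis size, feature dim

# Orthonormal "eigenbases" Phi_X (n_x x k) and Phi_Y (n_y x k); in practice these
# would be the leading eigenfunctions of each shape's Laplace-Beltrami operator.
phi_x, _ = np.linalg.qr(rng.standard_normal((n_x, k)))
phi_y, _ = np.linalg.qr(rng.standard_normal((n_y, k)))

# Per-vertex embeddings (stand-ins for the learned DiffusionNet features).
feat_x = rng.standard_normal((n_x, d))
feat_y = rng.standard_normal((n_y, d))

# Project features into each spectral basis: A = Phi_X^T F_X, B = Phi_Y^T F_Y.
a = phi_x.T @ feat_x  # (k x d)
b = phi_y.T @ feat_y  # (k x d)

# Functional map C minimizing ||C A - B||_F, solved as a least-squares problem
# (C A = B  <=>  A^T C^T = B^T).
c = np.linalg.lstsq(a.T, b.T, rcond=None)[0].T  # (k x k)

# Point-to-point recovery: map Y's basis through C and match each row of
# Phi_Y C to its nearest row of Phi_X.
emb_x = phi_x            # (n_x x k)
emb_y = phi_y @ c        # (n_y x k)
dists = ((emb_y[:, None, :] - emb_x[None, :, :]) ** 2).sum(-1)
p2p = dists.argmin(axis=1)  # for each vertex on Y, a corresponding vertex on X

print(c.shape, p2p.shape)
```

The paper's hybrid refinement would then post-process this raw nearest-neighbour map towards a smooth, close-to-conformal correspondence; the sketch stops at the unrefined map.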