Projective Transformation Rectification for Camera-captured Chest X-ray Photograph Interpretation with Synthetic Data
Automatic interpretation of smartphone-captured chest X-ray (CXR) photographs is challenging due to the geometric distortion (projective transformation) caused by a non-ideal camera position. In this paper, we propose a deep learning-based Projective Transformation Rectification Network (PTRN) that automatically rectifies such distortions by predicting the projective transformation matrix. PTRN is trained on synthetic data to avoid the expensive collection of natural data. To this end, we propose a synthetic data framework that accounts for the visual attributes of natural photographs, including screen, background, illumination, and visual artifacts, and generates synthetic CXR photographs together with projective transformation matrices as ground-truth labels for training PTRN. Finally, smartphone-captured CXR photographs are automatically rectified by the trained PTRN and interpreted by a classifier trained on high-quality digital CXRs to produce the final interpretation results. In the CheXphoto CXR photograph interpretation competition released by the Stanford University Machine Learning Group, our approach won first place by a large margin (AUC 0.850 vs. 0.762 for the second-best entry). Further analysis shows that, with PTRN, interpretation performance on CXR photographs reaches the same level as on digital CXRs, indicating that PTRN eliminates the negative impact of projective transformation on interpretation performance. Moreover, since many real-world scenarios require classifying distorted photographs, the general design of PTRN allows it to be applied to similar problems.
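As a minimal sketch of the two geometric steps described above, the snippet below simulates the synthetic-data side (applying a random projective transformation to a digital CXR, with the matrix kept as the ground-truth label) and the rectification side (undoing the distortion with the inverse homography). The corner-jitter sampling, image size, and random placeholder image are assumptions for illustration; in the actual pipeline PTRN would predict the matrix from the photograph alone, whereas here the ground-truth matrix is reused to show the warping mechanics.

```python
import cv2
import numpy as np

def random_homography(w: int, h: int, jitter: float = 0.15, rng=None) -> np.ndarray:
    """Sample a 3x3 projective transformation by jittering the four image
    corners, mimicking a non-ideal camera position (synthetic-data step)."""
    rng = np.random.default_rng(rng)
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    offsets = rng.uniform(-jitter, jitter, size=(4, 2)) * [w, h]
    dst = (src + offsets).astype(np.float32)
    return cv2.getPerspectiveTransform(src, dst)

# Stand-in for a digital CXR; in practice this would be a real image.
cxr = (np.random.rand(512, 512, 3) * 255).astype(np.uint8)
h, w = cxr.shape[:2]

H = random_homography(w, h)                   # ground-truth label for training
photo = cv2.warpPerspective(cxr, H, (w, h))   # synthetic "camera photograph"

# Rectification: at inference time PTRN would predict H from the photo;
# here the known matrix is inverted to recover the fronto-parallel view.
rectified = cv2.warpPerspective(photo, np.linalg.inv(H), (w, h))
```

The rectified output can then be passed directly to a classifier trained on digital CXRs, which is what lets the interpretation model remain untouched by the distortion.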