Fast autofocusing using tiny transformer networks for digital holographic microscopy

Optics Express (2022)

Abstract
The numerical wavefront backpropagation principle of digital holography confers unique extended-focus capabilities without mechanical displacement along the z-axis. However, determining the correct focusing distance is a non-trivial and time-consuming issue. A deep learning (DL) solution is proposed that casts autofocusing as a regression problem and is tested on both experimental and simulated holograms. Single-wavelength digital holograms were recorded by a digital holographic microscope (DHM) with a 10x microscope objective from a patterned target moving in 3D over an axial range of 92 μm. Tiny DL models are proposed and compared: a tiny Vision Transformer (TViT), a tiny VGG16 (TVGG), and a tiny Swin Transformer (TSwinT). The proposed tiny networks are compared with their original versions (ViT/B16, VGG16, and Swin-Transformer Tiny) and with the main neural networks used in digital holography, such as LeNet and AlexNet. The experiments show that the predicted focusing distance Z_R^Pred is inferred with an average accuracy of 1.2 μm, compared with the DHM depth of field of 15 μm. Numerical simulations show that all tiny models give Z_R^Pred with an error below 0.3 μm. Such a prospect would significantly improve the current capabilities of computer-vision position sensing in applications such as 3D microscopy for life sciences or micro-robotics. Moreover, all models reach an inference time on CPU below 25 ms per inference. Under occlusions, TViT, owing to its Transformer architecture, is the most robust. © 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement.
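As a minimal sketch of the refocusing principle described above, the following Python code implements angular spectrum backpropagation of a complex hologram field over a distance z; combined with a regression network that predicts the focusing distance, a single propagation then replaces an exhaustive focus scan. Function names, parameter names, and the example wavelength and pixel pitch in the usage comment are illustrative assumptions, not values taken from the paper.

import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_pitch, z):
    """Backpropagate a complex hologram field by a distance z (all lengths
    in metres) using the angular spectrum method."""
    ny, nx = field.shape
    # Spatial frequency grids matching the hologram sampling.
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Keep propagating frequencies only; evanescent components are masked out.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * z * kz) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Hypothetical usage: a tiny regression network predicts z, then a single
# propagation refocuses the hologram (the wavelength and pixel pitch here
# are placeholder values, not the paper's setup):
# z_pred = model(hologram_tensor).item()
# refocused = angular_spectrum_propagate(hologram, 532e-9, 3.45e-6, z_pred)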
Keywords
tiny transformer networks, fast autofocusing