Domain‐invariant attention network for transfer learning between cross‐scene hyperspectral images

Minchao Ye, Chenglong Wang, Zhihao Meng, Fengchao Xiong, Yuntao Qian

IET Computer Vision (2023)

Abstract
The small‐sample‐size problem is a persistent challenge for hyperspectral image (HSI) classification. Considering the co‐occurrence of land‐cover classes between similar scenes, transfer learning can be performed, and cross‐scene classification has been deemed a feasible approach in recent years. In cross‐scene classification, the source scene, which possesses sufficient labelled samples, is used to assist the classification of the target scene, which has only a few labelled samples. In most situations, different HSI scenes are imaged by different sensors, resulting in different input feature dimensions (i.e. numbers of bands); hence heterogeneous transfer learning is desired. An end‐to‐end heterogeneous transfer learning algorithm named the domain‐invariant attention network (DIAN) is proposed to solve the cross‐scene classification problem. DIAN mainly contains two modules. (1) A feature‐alignment CNN (FACNN) is applied to extract features from the source and target scenes, respectively, aiming to project the heterogeneous features from the two scenes into a shared low‐dimensional subspace. (2) A domain‐invariant attention block is developed to gain cross‐domain consistency with a specially designed class‐specific domain‐invariance loss, thus further eliminating the domain shift. Experiments on two cross‐scene HSI datasets show that the proposed DIAN achieves satisfying classification results.
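
The two modules described in the abstract can be illustrated with a brief sketch. The following is a minimal, hypothetical PyTorch rendering of the idea, assuming per‐scene 1‐D spectral encoders that map different band counts into a shared subspace, a shared attention block applied to both scenes, and a per‐class mean‐matching surrogate for the class‐specific domain‐invariance loss. All module names, layer sizes, and the exact loss form are illustrative assumptions, not the authors' published implementation.

# Hedged sketch of the DIAN idea: scene-specific encoders, a shared
# attention block, and a class-wise invariance penalty (all assumed).
import torch
import torch.nn as nn


class FeatureAlignmentCNN(nn.Module):
    """Per-scene 1-D CNN that projects spectra with scene-specific band
    counts into a shared low-dimensional feature space (assumed design)."""

    def __init__(self, num_bands: int, shared_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(16),   # removes dependence on num_bands
            nn.Flatten(),
            nn.Linear(32 * 16, shared_dim),
        )

    def forward(self, x):               # x: (batch, num_bands)
        return self.encoder(x.unsqueeze(1))


class DomainInvariantAttention(nn.Module):
    """Channel attention on the shared features; the same weights are
    shared by both scenes, so the attention is domain-invariant."""

    def __init__(self, shared_dim: int = 64):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Linear(shared_dim, shared_dim // 4),
            nn.ReLU(),
            nn.Linear(shared_dim // 4, shared_dim),
            nn.Sigmoid(),
        )

    def forward(self, f):
        return f * self.attn(f)


def class_specific_invariance_loss(fs, ys, ft, yt, num_classes):
    """Hypothetical surrogate for the class-specific domain-invariance
    loss: penalise the distance between per-class mean features of the
    source (fs, ys) and target (ft, yt) scenes."""
    loss = fs.new_zeros(())
    for c in range(num_classes):
        ms, mt = ys == c, yt == c
        if ms.any() and mt.any():
            loss = loss + (fs[ms].mean(0) - ft[mt].mean(0)).pow(2).sum()
    return loss

In this reading, each scene gets its own FeatureAlignmentCNN (with its own band count), while the attention block and the classifier operate in the shared subspace; the invariance penalty is added to the usual classification losses during end‐to‐end training.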
Keywords
hyperspectral imaging, pattern classification