Joint Linguistic Steganography With BERT Masked Language Model and Graph Attention Network

IEEE Transactions on Cognitive and Developmental Systems (2023)

Abstract
Generation-based linguistic steganography embeds secret information by generating high-quality text sequences conditioned on that information. However, this approach can produce low statistical feature consistency between normal text and text carrying embedded information, making the steganographic texts easier for steganalysis models to detect. Conditional generation-based text steganography, which embeds secret information at suitable positions in the text sequence to improve statistical feature consistency, in turn suffers from low embedding capacity. To address these problems, this paper proposes a joint linguistic steganography method that combines conditional generation-based steganography with substitution-based steganography built on the BERT pre-trained model. A graph attention network is also used to extract and analyze spatial features of the text sequence, which serve as auxiliary information in the temporal-feature-based text generation process. Comparative experiments against other models show that the proposed steganography model improves the embedding capacity of conditional generation-based text steganography while preserving the consistency of text features, and outperforms most state-of-the-art models in imperceptibility and anti-steganalysis ability.
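To make the substitution side of this scheme concrete, below is a minimal sketch (not the authors' implementation) of substitution-based embedding with a BERT masked language model: a carrier token is masked, BERT's top-2^k predictions for that slot form a candidate pool, and the next k secret bits select the replacement. The model checkpoint, function name, and parameters are illustrative assumptions.

```python
# Hypothetical sketch of BERT-MLM substitution-based embedding; the choice of
# carrier position and candidate-pool size are assumptions, not the paper's method.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

def embed_bits(text: str, position: int, bits: str) -> str:
    """Hide `bits` by substituting the wordpiece at `position`
    (an index into the tokenized sequence, [CLS] excluded)."""
    ids = tokenizer(text, return_tensors="pt")["input_ids"]
    slot = position + 1                                # skip the [CLS] token
    ids[0, slot] = tokenizer.mask_token_id             # mask the carrier slot
    with torch.no_grad():
        logits = model(input_ids=ids).logits[0, slot]  # MLM scores for the slot
    pool = torch.topk(logits, 2 ** len(bits)).indices  # 2^|bits| candidate tokens
    ids[0, slot] = pool[int(bits, 2)]                  # secret bits index the pool
    return tokenizer.decode(ids[0, 1:-1])              # drop [CLS] and [SEP]

# Example: embed the two bits "10" at the 4th wordpiece of a cover sentence.
print(embed_bits("the weather is nice today", 3, "10"))
```

Extraction is symmetric: the receiver masks the same position, recomputes the candidate pool, and reads off the rank of the observed token as the hidden bits.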
Keywords
Neural networks, linguistic steganography, graph attention network, feature consistency