基于改进Transformer的网络攻击智能分析模型 (Intelligent Analysis Model for Network Attacks Based on an Improved Transformer)
Received: 2025-11-25  Revised: 2025-12-30
Chinese abstract: To address the practical problems that frequent local bursts and fragmented patterns in network traffic cause over-smoothing and high false-alarm rates in conventional Transformers, an improved Transformer attack-recognition model coupling sparse, local, and global attention is proposed. First, a byte embedding layer losslessly maps the raw byte stream to fixed-dimensional vectors, and segmented positional encoding is introduced to preserve local order. On this basis, parallel sparse attention, locality-sensitive attention, and global prototype attention are designed, with adaptive gating fusing information from different receptive fields to capture both long-range dependencies and local bursts. Then, low-rank parameterization over a shared basis matrix and sparse semantic compression condense the variable-length sequence into a 256-dimensional attack-semantics vector, enabling end-to-end threat determination. Experimental results show that, compared with the second-best baseline CL-BERT, the improved Transformer raises recall by 1.6%, lowers the false-alarm rate by 0.09%, and improves throughput by a factor of 1.44. This work offers a new approach to real-time attack recognition in high-speed networks without protocol parsing and has practical application value.
Chinese keywords: network security; attack traffic identification; Transformer; end-to-end model
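The byte-embedding step described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the model dimension, segment length, and the sinusoidal form of the intra-segment encoding are all assumptions, since the paper's code is not published.

```python
import numpy as np

def byte_embed(payload: bytes, table: np.ndarray) -> np.ndarray:
    # Lossless mapping: each of the 256 possible byte values indexes a learned
    # vector, so no byte of the raw stream is parsed away or discarded.
    return table[np.frombuffer(payload, dtype=np.uint8)]

def segment_position_encoding(n: int, d: int, seg_len: int = 16) -> np.ndarray:
    # Sinusoidal encoding of the position *within* each segment: local byte
    # order is preserved, while the same offset in different segments shares a
    # code, which suits fragmented, locally bursty traffic patterns.
    pos = (np.arange(n) % seg_len)[:, None]    # intra-segment offset
    i = np.arange(d)[None, :]
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

# Toy usage on a raw payload (d_model=32 is an assumed, illustrative size).
d_model = 32
table = np.random.default_rng(0).normal(size=(256, d_model))
x = byte_embed(b"GET /index.html HTTP/1.1", table)
x = x + segment_position_encoding(x.shape[0], d_model)
```

Note the design choice this illustrates: because positions are encoded relative to segment boundaries, two segments carrying the same local pattern receive identical positional signals, which counteracts the over-smoothing of a single global position index.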
| |
Intelligent Analysis Model for Network Attacks Based on an Improved Transformer
Abstract: To address the practical problems that frequent local bursts and fragmented patterns in network traffic cause over-smoothing and high false-alarm rates in conventional Transformers, a sparse-local-global coupled improved Transformer attack-recognition model is proposed. First, a byte embedding layer losslessly maps the raw byte stream to fixed-dimensional vectors, and segmented positional encoding is introduced to preserve local order. On this basis, parallel sparse attention, locality-sensitive attention, and global prototype attention are designed, and adaptive gating fuses information from the different receptive fields, capturing both long-range dependencies and local bursts. Then, low-rank parameterization over a shared basis matrix and sparse semantic compression condense the variable-length sequence into a 256-dimensional attack-semantics vector, achieving end-to-end threat determination. Experimental results show that, compared with the second-best baseline CL-BERT, the improved Transformer raises recall by 1.6%, lowers the false-alarm rate by 0.09%, and improves throughput by a factor of 1.44. This study provides a new approach to real-time attack recognition in high-speed networks without protocol parsing and has practical application value.
Keywords: Network security; Attack traffic identification; Transformer; End-to-end model
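The three-branch attention with adaptive gated fusion described in the abstract can be sketched as below. This is a hypothetical single-head NumPy sketch: the paper publishes no code, so the top-k sparsity rule, window size, prototype count, and the per-position softmax gate are all assumptions standing in for the unpublished details.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v, mask=None):
    # Standard scaled dot-product attention with an optional boolean mask.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    if mask is not None:
        scores = np.where(mask, scores, -1e9)
    return softmax(scores) @ v

def sparse_branch(x, top_k=4):
    # Sparse attention: each position attends only to its top-k scoring keys,
    # skipping the dense n*n computation conceptually targeted by the paper.
    scores = x @ x.T / np.sqrt(x.shape[-1])
    idx = np.argsort(scores, axis=-1)[:, -top_k:]
    mask = np.zeros_like(scores, dtype=bool)
    np.put_along_axis(mask, idx, True, axis=-1)
    return attention(x, x, x, mask)

def local_branch(x, window=3):
    # Locality-sensitive attention: a sliding window preserves local byte
    # order and reacts to local bursts.
    pos = np.arange(x.shape[0])
    mask = np.abs(pos[:, None] - pos[None, :]) <= window
    return attention(x, x, x, mask)

def global_branch(x, prototypes):
    # Global prototype attention: every position attends to a small set of
    # learned prototype vectors, giving a cheap long-range summary.
    return attention(x, prototypes, prototypes)

def gated_fusion(x, prototypes, w_gate):
    # Adaptive gating: a per-position softmax over the three branches mixes
    # the different receptive fields.
    branches = np.stack([sparse_branch(x), local_branch(x),
                         global_branch(x, prototypes)])   # (3, n, d)
    gates = softmax(x @ w_gate, axis=-1)                  # (n, 3)
    return np.einsum('bnd,nb->nd', branches, gates)
```

A usage sketch: with a sequence `x` of shape `(n, d)`, `p` learned prototypes of shape `(p, d)`, and a gate projection `w_gate` of shape `(d, 3)`, `gated_fusion(x, prototypes, w_gate)` returns a fused `(n, d)` representation in which each position's blend of sparse, local, and global context is chosen from its own content.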