ByteDefend

Research highlights, media coverage, and updates from the ByteDefend Cyber Lab

ByteDefend Lab – LinkedIn Highlights

ByteDefend Cyber Lab
Mar 2026
🚨 Nearly 50% of popular Hugging Face models still use the Pickle format, which allows arbitrary code execution. Worse, current industry scanners miss up to 93% of these threats. Our new SafePickle scanner achieves a 90% F1-score (vs. 7–63% for SOTA tools), detects 9/9 evasive adversarial attacks where others catch 0/9, and requires no library-specific setup. We're also releasing a labelled dataset of 727 files to help the community build better defences.
#AISecurity #HuggingFace #MLModels #SupplyChain
View on LinkedIn
ByteDefend Cyber Lab
Feb 2026
🤖
Our new preprint tackles a silent threat in the AI ecosystem: malicious payloads hidden inside serialised ML model files (Pickle format). SafePickle provides a robust, model-agnostic ML detector that catches these attacks without unpickling the file.
#AISecurity #MLModels #SupplyChain
View on LinkedIn
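SafePickle's own detector is not reproduced here, but both the risk and the "inspect without unpickling" idea can be sketched with Python's standard pickletools module, which disassembles a pickle stream into opcodes without ever executing it. The denylist and the `Evil` class below are illustrative assumptions, not part of SafePickle; a real scanner learns far richer signals than a fixed module list.

```python
import pickle
import pickletools

# Illustrative denylist: modules a legitimate ML checkpoint should not import.
SUSPICIOUS_MODULES = {"os", "posix", "nt", "subprocess", "builtins", "sys"}

def flag_suspicious(payload: bytes) -> list[str]:
    """Disassemble a pickle stream WITHOUT executing it and report any
    GLOBAL/STACK_GLOBAL imports that resolve to suspicious modules."""
    findings = []
    strings = []  # recently pushed strings, used to resolve STACK_GLOBAL
    for op, arg, _pos in pickletools.genops(payload):
        if op.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)
        elif op.name in ("GLOBAL", "STACK_GLOBAL"):
            if op.name == "GLOBAL":          # arg is "module name"
                module = arg.split(" ")[0]
            else:                            # module/name come from the stack
                module = strings[-2] if len(strings) >= 2 else "?"
            if module.split(".")[0] in SUSPICIOUS_MODULES:
                findings.append(module)
    return findings

# A benign, data-only pickle triggers nothing...
assert flag_suspicious(pickle.dumps({"weights": [0.1, 0.2]})) == []

# ...while a classic __reduce__ payload (never loaded here!) is flagged,
# because its callable must be imported when the pickle is executed.
class Evil:
    def __reduce__(self):
        return (print, ("pwned",))  # stand-in for something like os.system

assert "builtins" in flag_suspicious(pickle.dumps(Evil()))
```

The key point the post makes is visible in the sketch: the payload is detectable from the opcode stream alone, so the file never has to be unpickled (and the payload never runs) during scanning.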
ByteDefend Cyber Lab
Jan 2026
📡
Proud to present three papers at CCNC 2026: ultra-fast network throughput estimation via intelligent sampling, GNN-based early detection of cloud-service anomalies, and QoE prediction for first-person shooter online gaming traffic.
#NetworkIntelligence #GNN #QoE
View on LinkedIn
ByteDefend Cyber Lab
Oct 2025
🔬
Attackers can embed malicious payloads directly into a model's weight matrices. NeuPerm leverages the natural permutation symmetry of neural networks to disrupt such hidden payloads: a fundamentally new defence primitive for AI model security.
#AISecurity #NeuralNetworks #Malware
View on LinkedIn
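NeuPerm's method is not reproduced here, but the symmetry it builds on is easy to demonstrate: in a two-layer MLP, permuting the hidden units (rows of the first weight matrix and bias, and the matching columns of the second) leaves the network's function unchanged while rewriting the byte layout of the weights, which is what disrupts a payload hidden at fixed byte offsets. A NumPy sketch with toy dimensions and my own variable names:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer MLP: x -> relu(W1 @ x + b1) -> W2 @ h + b2
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)

def mlp(x, W1, b1, W2, b2):
    h = np.maximum(W1 @ x + b1, 0.0)  # ReLU hidden layer
    return W2 @ h + b2

# A guaranteed non-identity permutation of the 8 hidden units.
perm = np.roll(np.arange(8), 1)
W1p, b1p = W1[perm], b1[perm]   # reorder hidden rows
W2p = W2[:, perm]               # reorder the matching input columns

x = rng.normal(size=4)
out, out_p = mlp(x, W1, b1, W2, b2), mlp(x, W1p, b1p, W2p, b2)

# The function is preserved...
assert np.allclose(out, out_p)
# ...but the raw bytes of the weights (where a payload would hide) changed.
assert W1.tobytes() != W1p.tobytes()
```

Because the permuted model computes exactly the same outputs, this kind of transformation can be applied to any downloaded model as a sanitisation step without retraining.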
ByteDefend Cyber Lab
Sep 2025
🛡️
Accepted at FedCSIS 2025. AI-MTD introduces a moving-target defence layer for deployed AI models: the model's internal structure is continuously mutated at runtime, thwarting adversarial probing and model-theft attacks.
#ZeroTrust #MovingTargetDefense #AISecurity
View on LinkedIn
ByteDefend Cyber Lab
Jun 2025
🔐
With post-quantum cryptographic algorithms (CRYSTALS-Kyber, Dilithium) being deployed across the internet, traffic classifiers must evolve. PQClass, presented at IEEE ICC 2025, is the first ML framework targeting encrypted traffic carrying PQC applications.
#PostQuantum #EncryptedTraffic #Classification
View on LinkedIn
ByteDefend Cyber Lab
Jan 2025
🔑
Published in Computers & Security: a classification-by-retrieval framework that detects API injection attacks with only a handful of labelled examples per attack class. Works in real-world settings where labelled attack data is scarce.
#APISecurity #FewShotLearning #AnomalyDetection
View on LinkedIn
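The paper's actual pipeline is not shown here, but classification-by-retrieval in a few-shot setting can be sketched as nearest-neighbour search over embedded examples: each incoming request is embedded and given the class of its closest stored example, so adding a new attack class only requires indexing a handful of samples. Everything below (the toy byte-histogram "embedding", the example requests and class names) is illustrative, standing in for a learned encoder:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy embedding: L2-normalised byte-frequency histogram.
    A real system would use a learned encoder; this stands in for it."""
    v = np.zeros(256)
    for b in text.encode():
        v[b] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

# A handful of labelled examples per class: the few-shot retrieval index.
index = [
    ("benign", embed("GET /api/users?id=42")),
    ("benign", embed("POST /api/login user=alice")),
    ("sqli",   embed("GET /api/users?id=1 OR 1=1--")),
    ("sqli",   embed("GET /api/items?q=' UNION SELECT password FROM users--")),
]

def classify(request: str) -> str:
    """Label a request with the class of its nearest stored example
    (cosine similarity, since embeddings are normalised)."""
    q = embed(request)
    return max(index, key=lambda item: float(item[1] @ q))[0]

assert classify("GET /api/users?id=7") == "benign"
assert classify("GET /api/users?id=5 OR 1=1--") == "sqli"
```

The design point matches the post: because classification is retrieval, scarce labelled attack data is enough, and the model never needs retraining when new examples arrive.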
ByteDefend Cyber Lab
Sep 2024
🔍
Attackers are increasingly hiding malware inside the weight tensors of open-source AI models. Model X-Ray uses few-shot learning to scan model weights for hidden payloads: a scalable approach for securing AI model registries and distribution pipelines.
#AISecurity #FewShotLearning #ModelWeights
View on LinkedIn
ByteDefend Cyber Lab
Nov 2023
🤖
Published in IEEE Access: the first Content Disarm & Reconstruction (CDR) methodology applied directly to neural network model files. Sanitises malicious content embedded in model architectures while preserving model functionality.
#CDR #AISecurity #IEEEAccess
View on LinkedIn
ByteDefend Cyber Lab
Feb 2023
📄
IEEE Transactions on Information Forensics and Security: a zero-trust methodology for Content Disarm & Reconstruction of RTF files. Automatically strips active content, macros, and embedded objects without losing document fidelity.
#CDR #RTF #ZeroTrust #IEEETIFS
View on LinkedIn