Legit Security Bolsters AI Supply Chain Security with Risky Model Detection

Legit Security, the leading platform for enabling companies to manage their application security posture across the complete developer environment, today announced new capabilities that allow customers to discover unsafe AI models in use throughout their software factories. These new capabilities provide actionable remediation steps to reduce AI supply chain security risk across the software development lifecycle (SDLC).

Organizations increasingly use third-party AI models to gain a competitive advantage by accelerating development. But outsourcing comes with risks, from security vulnerabilities and opaque training data to how third-party AI models are stored and managed. For example, in late 2023 Legit’s research team reported on the potential damage of AI supply chain attacks, such as “AI-Jacking.”

Legit’s expanded capabilities go beyond visibility into which applications use AI, LLMs, and MLOps, quickly detecting risks in the AI models used throughout the SDLC. For security and development teams, this offers an essential defense against AI supply chain security risks by empowering organizations to flag models with unsafe files, insecure model storage, or a low reputation. Initially, Legit covers the market-leading HuggingFace AI models hub.
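To illustrate the kind of check involved, consider that many model files on hubs like HuggingFace are serialized with Python pickle, a format that can execute arbitrary code when loaded, whereas formats like safetensors cannot. The article does not describe Legit's detection logic; the sketch below is a minimal, hypothetical example of flagging pickle-based model files by extension, with illustrative file names:

```python
# Hypothetical sketch: flag model files in a pickle-based serialization
# format, which can execute arbitrary code on load. Suffix list and
# file names are illustrative assumptions, not Legit's actual logic.

UNSAFE_SUFFIXES = (".pkl", ".pickle", ".bin", ".pt", ".pth")

def flag_unsafe_files(filenames):
    """Return the subset of files using a pickle-based format."""
    return [f for f in filenames if f.endswith(UNSAFE_SUFFIXES)]

if __name__ == "__main__":
    # Example repo file listing (illustrative)
    repo_files = ["config.json", "pytorch_model.bin", "model.safetensors"]
    print(flag_unsafe_files(repo_files))  # only the pickle-based file
```

A production scanner would also inspect file contents and repository reputation rather than relying on extensions alone.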

Legit’s latest innovation in enabling the safe use of AI in software development complements other features announced in February 2024, including the ability to discover AI-generated code, enforce policies such as requiring human review of AI-generated code, and enact guardrails that prevent vulnerable code from going to production.

“The release of LLMs and new AI models triggered the generative AI boom and started ‘AI-native’ application development. We are seeing explosive growth in the integration and usage of third-party AI models in software development. Nearly every organization uses models within the software they release,” said Liav Caspi, Chief Technology Officer at Legit. “This trend comes with risks, and to guard against them companies must have in place a responsible AI framework with continuous monitoring and adherence to best practices to protect their development practices end to end.”

Key benefits of Legit’s expanded AI discovery capabilities include:

  • Reduction of risk from third-party components in the developer environment
  • Alerts and prompts when a developer is using an unsafe model
  • Protection of the AI supply chain
