Snyk Launches AI Trust Platform to Help Developers Secure AI-Powered Code

Snyk has rolled out its new AI Trust Platform to help organizations manage the accelerating risks of AI-generated code, reports SiliconANGLE. As AI tools become central to modern software development, they also introduce new risks, from flawed code generation to AI-powered attacks. Snyk’s platform is designed to address these risks directly, offering a suite of tools that embeds security and governance into every step of AI-assisted development.

The platform includes Snyk Assist, a chat-based interface that gives developers real-time recommendations and security insights. Alongside it is Snyk Agent, which automates key security tasks, such as applying fixes and running checks, across the software development lifecycle. Together, these tools aim to reduce the burden on developers while maintaining robust security in increasingly AI-driven workflows.

Snyk Guard and the AI Readiness Framework complement the core tools by introducing governance and maturity models tailored to AI adoption. Snyk Guard provides policy enforcement for AI systems, ensuring compliance and control in dynamic environments. The AI Readiness Framework, meanwhile, gives organizations a structured way to evaluate and improve their secure AI development practices over time.

To support broader collaboration and innovation, Snyk is also launching Snyk Labs and Snyk Studio. Labs will focus on research and experimentation in AI security, including the development of an AI model risk registry and tooling such as an AI Bill of Materials. Studio, for its part, enables technology partners to co-develop secure AI-native applications with Snyk experts, bringing contextual security into third-party AI tools through standards like the new Model Context Protocol.
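The "AI Bill of Materials" idea extends the familiar software bill of materials to cover models, their training data, and their known risks. As a rough illustration of what such a record might track, the sketch below defines a minimal AIBOM entry in TypeScript; every field name and value here is an assumption for illustration, not a published Snyk schema (CycloneDX's ML-BOM profile is one real-world reference point for this kind of format).

```typescript
// Illustrative sketch of an AI Bill of Materials (AIBOM) record.
// Field names and values are assumptions for illustration only,
// not a published schema.

interface AIBomEntry {
  modelName: string;             // identifier of the model in use
  modelVersion: string;
  provider: string;              // vendor or internal team supplying the model
  trainingDataSources: string[]; // provenance of the data the model saw
  licenses: string[];
  knownRisks: string[];          // findings, e.g. from a model risk registry
}

// A hypothetical entry an organization might keep for one deployed model.
const example: AIBomEntry = {
  modelName: "code-assist-base",
  modelVersion: "2.3.1",
  provider: "internal-ml-platform",
  trainingDataSources: ["licensed-code-corpus", "public-docs-snapshot-2024-10"],
  licenses: ["Apache-2.0"],
  knownRisks: ["may emit deprecated crypto APIs in generated code"],
};

console.log(JSON.stringify(example, null, 2));
```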

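To make the Model Context Protocol mention concrete: MCP is an open standard that lets AI tools call external servers for extra context. The sketch below is purely illustrative and is not Snyk's integration; it uses the public open-source MCP TypeScript SDK (@modelcontextprotocol/sdk) to expose a hypothetical check_dependency tool that an AI coding assistant could query for security advice before suggesting a package. The SDK imports follow the published API; the tool name, advisory data, and lookup logic are assumptions.

```typescript
// Illustrative sketch only: a minimal MCP server exposing a security-advice
// tool to AI coding assistants. The imports are the public
// @modelcontextprotocol/sdk; the tool name, advisory data, and lookup
// logic are hypothetical, not Snyk's actual implementation.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical in-memory advisory list standing in for a real vulnerability feed.
const advisories: Record<string, string> = {
  "lodash@4.17.20": "Prototype pollution; upgrade to 4.17.21 or later.",
};

const server = new McpServer({ name: "security-advisor", version: "0.1.0" });

// An AI assistant connected to this server can call check_dependency
// before suggesting a package, pulling security context into its answer.
server.tool(
  "check_dependency",
  { name: z.string(), version: z.string() },
  async ({ name, version }) => {
    const finding = advisories[`${name}@${version}`];
    return {
      content: [
        {
          type: "text",
          text: finding ?? `No known advisories for ${name}@${version}.`,
        },
      ],
    };
  }
);

// A stdio transport lets a local AI tool spawn this server as a subprocess.
await server.connect(new StdioServerTransport());
```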