Similar Concerns About MSPs and the Use of Agentic AI
When it comes to the recent emergence of agentic AI in enterprise tech, Breen said he has concerns as well. Agentic AI is the use of AI to create autonomous, automated, rules-based software agents built to perform routine tasks without human intervention.

The danger of over-reliance on agentic AI is that it can prevent new generations of software developers from learning fundamental coding and networking skills by taking these routine tasks out of their hands, said Breen. In addition, agentic AI can have the same negative impacts on businesses and MSPs as vibe coding, he said.

“As far as whether agentic AI is good, it really depends on how it is being used and what level of oversight an organization has on it,” said Breen. “It can be effective in getting tasks done quicker and lowering the barrier to entry, allowing people to be more efficient, but it also introduces critical problems: it can hallucinate malicious software packages, embed hard-to-find business logic flaws, and flood development teams with ‘AI slop’ bug reports that waste time. This is why it is critical to have a human in the loop. Technology like this cannot be fully trusted to operate on its own. Organizations need to have someone there to properly control and review logs.”

MSPs and the smaller companies they typically serve will often hear large technology vendors extolling the benefits of agentic AI as the way to avoid falling behind competitors, said Breen.

“But not all businesses need AI agents. It can be effective, but smaller businesses that may not have the proper strategy and infrastructure to back it up would be opening themselves up to more risk than reward. Before any business, small or large, implements any type of AI, it is important to make sure the business’s infrastructure is truly ready for it.”

Ultimately, businesses “must ask themselves if their teams have the appropriate skills to understand and look at the code, even if AI is writing it,” he said. “If they cannot understand the code, they cannot debug it, and they would be better off without agent-based AI.”

But it’s not just the loss of skills in companies that use agentic AI, he said. There are also real security shortcomings, especially for MSPs that work closely with critical infrastructure or military organizations in highly sensitive areas. “There are tasks for which they cannot risk implementing this technology. That is going to be the same for an enterprise; there will be certain tasks for which, due to the risk of it going wrong, reputational damage, or ethical concerns, they cannot use the technology.”
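Breen’s warning about hallucinated packages is concrete enough to illustrate. The sketch below is not from Breen; it shows one way a review step might sanity-check an AI-suggested Python dependency against PyPI before anyone installs it, and the release-count threshold is an arbitrary illustrative heuristic, not a vetted supply-chain control.

```python
"""Sketch: sanity-check an AI-suggested dependency against PyPI before
installing it. Illustrative only -- the release-count threshold is an
arbitrary assumption, not a vetted supply-chain control."""
import json
import urllib.error
import urllib.request

PYPI_URL = "https://pypi.org/pypi/{name}/json"  # PyPI's public JSON API

def check_package(name: str) -> bool:
    """Return True only if the package exists and has a release history."""
    try:
        with urllib.request.urlopen(PYPI_URL.format(name=name)) as resp:
            meta = json.load(resp)
    except urllib.error.HTTPError:
        # A 404 here is the classic sign of a hallucinated package name.
        print(f"REJECT: '{name}' not found on PyPI -- possible hallucination")
        return False
    releases = meta.get("releases", {})
    if len(releases) < 3:  # arbitrary threshold; tune to your risk appetite
        print(f"REVIEW: '{name}' exists but has only {len(releases)} release(s)")
        return False
    print(f"OK: '{name}' has {len(releases)} releases on PyPI")
    return True

if __name__ == "__main__":
    for pkg in ("requests", "surely-not-a-real-package-name"):
        check_package(pkg)
```

Note that a check like this only catches names that fail to resolve at all; a hallucinated name an attacker has since registered would pass the existence test, which is why Breen’s human-in-the-loop point still applies.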
Caution: The Bad Guys Have Access to Agentic AI, Too
While Breen is passionate about warning MSPs and businesses about using agentic AI internally without extensive analysis and consideration of its risks, he is also clear in pointing out that cybercriminals have access to it as well, and can use it as a weapon to quickly launch AI-powered attacks.

“No matter how much is spent on cyber defenses, people remain a critical line of defense,” said Breen. “There is a risk of blindly trusting AI-generated code, unlike code from traditional sources that developers would typically verify. Over-reliance on agentic AI can prevent technical staff from learning core coding and debugging skills, leading to a future where they may be unable to understand or secure AI-written code. Therefore, human expertise is still needed to focus on complex issues like business logic flaws that AI may not detect.”

Agentic AI and Vibe Coding Are Not Going Away: Analyst
Despite these security concerns, which are being widely discussed in the tech world, the use of generative and agentic AI by developers is profoundly reshaping software development, Katie Norton, an analyst and research manager for DevSecOps and software supply chain security at IDC, told ChannelE2E. “These technologies are here to stay, and they will play an increasingly important role in how software is built and secured.”

Some 91 percent of developers surveyed in a recent IDC study reported they are already using AI coding assistants, which has produced an average 35 percent increase in productivity, said Norton.

“By automating repetitive work like boilerplate code and tests, AI helps teams move faster and control costs. It also opens up software creation to non-developers through natural language prompts,” she said. “For developers, AI frees up time to focus on higher-value, creative work, leading to a more satisfying and efficient experience. As these tools mature, we can expect even stronger secure-by-design capabilities through integrated security features and ongoing improvements.”

For MSPs, this is also an opportunity to help customers use AI as an accelerator while continuing to invest in the critical expertise needed to guard against potential vulnerabilities, she said.

“MSPs and their clients should be learning about these technologies now,” said Norton. “AI-driven development is a shift as significant as the rise of the internet or smartphones, and adoption is accelerating quickly. Ignoring it will only make it harder to catch up later.”

To build expertise in this area, MSPs must look for tools that emphasize security, transparency, and control, rather than those that rely on unchecked automation, she advised. MSPs must also help clients understand what AI does well, where it falls short, and how it changes the security landscape, she added.

“Putting the right safeguards in place is the key to minimizing their inherent security concerns,” said Norton. “This means strong human oversight, where developers validate AI-generated code and approve changes. It also requires rigorous testing, including unit tests and AI-focused penetration testing. Organizations need clear AI governance policies that manage data sharing and dependencies. Real-time security feedback from integrated tools is essential. Good prompt engineering and the use of shared rules can guide AI toward safer outputs. Tooling vendors are working hard on these guardrails, and when done right, this approach helps security teams scale without slowing development.”

Ultimately, Norton said she is “stressing caution, but it is a thoughtful and proactive caution. The goal is to help organizations see AI not as a replacement for human expertise, but as a powerful tool that needs supervision, testing, and strategic integration into existing security practices.”
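Norton’s call for rigorous unit testing of AI-generated code can be made concrete with a small example. The sketch below is illustrative, not from IDC: sanitize_filename stands in for a hypothetical AI-written helper, and the tests pin down the security-relevant behavior a human reviewer would otherwise have to eyeball.

```python
"""Sketch: unit tests as a guardrail for a hypothetical AI-written helper.
The helper and tests are illustrative assumptions, not IDC's example."""
import re

def sanitize_filename(name: str) -> str:
    # Hypothetical AI-generated helper under test: replace anything outside
    # a conservative allow-list, then strip leading dots.
    cleaned = re.sub(r"[^A-Za-z0-9._-]", "_", name)
    return cleaned.lstrip(".") or "unnamed"

def test_blocks_path_traversal():
    result = sanitize_filename("../../etc/passwd")
    assert "/" not in result          # no path separators survive
    assert not result.startswith(".")  # no hidden-file tricks

def test_preserves_ordinary_names():
    assert sanitize_filename("report-2025.pdf") == "report-2025.pdf"

def test_empty_input_gets_fallback():
    assert sanitize_filename("") == "unnamed"
```

Run under pytest, tests like these become a regression gate: if a later AI-suggested “improvement” reintroduces path traversal, the suite fails before the change merges.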
A Shrinking Talent Pipeline Is Also a Risk

Janet Worthington, senior analyst for security and risk at Forrester Research Inc., concurs.

“For MSPs and enterprises considering the adoption of AI coding assistants, it is essential to keep software developers actively involved in the process,” said Worthington. “Additionally, integrate automated guardrails to ensure security, quality, and maintainability throughout development. Consider all code inherently untrusted and subject it to thorough vetting and validation for security risks before downloading, purchasing, using, or deploying it to production.”

Worthington said she is particularly concerned that companies “relying heavily on LLMs for designing, coding, testing, and debugging software may hinder junior developers from acquiring the essential skills that traditionally come from years of writing and maintaining applications. Even more troubling is the decline in entry-level software development roles, which are critical for nurturing the next generation of skilled professionals.”

She is also seeing enterprises increasingly focus on hiring senior-level developers and equipping them with AI assistants to enhance productivity, while relying on human oversight to review and validate AI-generated code. The problem, she said, is that this approach may not deliver the desired results.

“This unsettling trend could lead to significant challenges in the next five years, as the pipeline of experienced senior developers may dwindle, creating a talent gap in the industry,” said Worthington.

Excited by the possibilities of AI coding assistants, CEOs and other executives may be failing to grasp the risks of shortsighted decisions, such as bringing in AI tools while letting go of human talent, she said.

“While such moves may deliver short-term cost savings, they can lead to long-term problems, including frequent application failures, poor maintainability, increased vulnerability to cyberattacks, subpar user experiences, and vast tech debt,” said Worthington. “Leaders must not underestimate the critical importance of human expertise in building enterprise-grade software.”
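Worthington’s advice to treat all code as inherently untrusted maps naturally onto an automated merge gate. The following sketch is a hypothetical illustration, not a Forrester recommendation: it refuses an AI-generated change unless the test suite and a static security scan both pass and a named human reviewer has signed off. The tool choices (pytest, bandit) are assumptions.

```python
"""Sketch: a merge gate that treats AI-generated code as untrusted.
Hypothetical illustration; pytest/bandit are assumed to be installed."""
import subprocess
import sys

def gate(reviewer: str | None) -> int:
    checks = [
        ["pytest", "-q"],             # unit tests must pass
        ["bandit", "-q", "-r", "."],  # static security scan must pass
    ]
    for cmd in checks:
        if subprocess.run(cmd).returncode != 0:
            print(f"BLOCKED: {' '.join(cmd)} failed")
            return 1
    if not reviewer:
        # Human-in-the-loop: automation alone never approves the change.
        print("BLOCKED: no human reviewer recorded")
        return 1
    print(f"PASSED: change approved by {reviewer}")
    return 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else None))
```

The design point is the final check: even when every automated scan passes, the gate still fails closed without a human sign-off, which is the combination of guardrails and developer involvement both analysts describe.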