
5 ChatGPT Limitations to Consider Before Going All-In with AI

Laura DuBois, group VP, product strategy, N-able

Many partners I’ve spoken with are experimenting with ChatGPT to automate script generation, write proposals, and more. You name it, it’s being tried.

But even in a budget-conscious economy, it’s still not worth sacrificing the human element, especially if that move results in script errors, unreliable information, new cybersecurity risks, or compromised intellectual property. You still need a human involved in the coding process today.

We operate at lightning speed in the IT world, so an AI that can automate operations and tasks can be valuable. But take it for what it is, and don’t replace your highly skilled, trained content marketing and coding experts.

Miscalculations, overstated claims, and instructional misinterpretations

The fact is, as convincing as ChatGPT may be in its delivery, it pulls from a world where “everything on the internet is true.” Your generated scripts and articles might look great at first, but a human reviewer will likely find miscalculations, overstated claims, and instructional misinterpretations.
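To make that concrete, here is a hypothetical illustration in Python (the scenario, function name, and bug are mine, not drawn from any real generated output): a plausible-looking AI-generated disk-space check whose unit conversion is off by a factor of 1,000, exactly the kind of miscalculation a human reviewer catches.

    import shutil

    def free_space_gb(path="/"):
        """Return free space on 'path' in gigabytes."""
        free_bytes = shutil.disk_usage(path).free
        # A generated script might plausibly divide by 1e6 here (megabytes,
        # not gigabytes), so the 10 GB alert below would fire on nearly
        # every machine. Human review catches the unit error; the corrected
        # conversion is:
        return free_bytes / 1e9  # bytes -> gigabytes

    if free_space_gb() < 10:  # alert when less than 10 GB remain
        print("WARNING: low disk space")

The script runs either way; only a person who knows what the numbers should look like notices the wrong answer.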

In addition, and this one needs more public attention, using ChatGPT can pose a serious cyberthreat: cybercriminals are already using it to create and share malicious code and to develop phishing schemes. Newly published research from BlackBerry revealed that 51% of IT decision-makers surveyed believe a successful cyberattack will be credited to ChatGPT within the year. Among the security concerns around ChatGPT, privacy draws heightened focus as well.

So, before you consider getting rid of your thinking, feeling, and trained human team in favor of what might currently seem faster, easier, and cheaper, here are five limitations of ChatGPT to consider:

  1. Accuracy and Reliability: ChatGPT is an advanced AI tool, but it will not always provide accurate or reliable information.
  2. Context and Subject Matter: ChatGPT typically won’t understand the context of a conversation or the nuances of the given subject matter. And it certainly won’t read the room and react accordingly.
  3. Security and Privacy: Depending on the application, ChatGPT may end up handling sensitive information.
  4. Integration and Scalability: Depending on the scale of the project, ChatGPT may need to be integrated with other systems or scaled up to handle a large volume of requests. In most cases, you won’t want to set it and forget it.
  5. Ethical and Legal Considerations: There could be bias in the training data, along with the risk of plagiarism. In addition, ChatGPT tends to answer requests based on its text sources without citing or referencing them.

These issues of factual dissonance, security, and content incongruity don’t mean ChatGPT isn’t a helpful tool. Rather, they mean we still need to think seriously about how we use it and consider the above factors before going all-in.

ChatGPT is an AI language model, not a human expert. Applied through the lens of “trust but verify,” its responses should be treated as informative rather than definitive. You can trust some of the information ChatGPT returns, but it cannot replace the value gained from human oversight, active listening, confirmation, and genuine interaction.
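As a minimal sketch of what “trust but verify” can look like in practice (the workflow and names below are illustrative assumptions, not a prescribed tool), generated code can be gated behind an automated sanity check plus an explicit human sign-off before anything runs:

    import ast

    def review_gate(generated_source):
        """Require a machine check and a human sign-off before running generated code."""
        try:
            # Verify it is at least syntactically valid Python.
            ast.parse(generated_source)
        except SyntaxError as err:
            print(f"Rejected: syntax error - {err}")
            return False
        print("--- generated script ---")
        print(generated_source)
        answer = input("A human has reviewed this. Run it? [y/N] ")
        return answer.strip().lower() == "y"

    script = 'print("hello from a generated script")'
    if review_gate(script):
        exec(script)  # executes only after both checks pass

The gate is the point: the model’s output is the input to a human process, not the final word.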


Laura DuBois is Group Vice President, Product Strategy at N-able. Read more N-able guest blogs here. Regularly contributed guest blogs are part of ChannelE2E’s sponsorship program.