Top 4 Backup as a Service (BaaS) Pitfalls MSPs Should Avoid

Author: David Gugick, VP of product management at CloudBerry Lab

As an MSP, failing to recover your customer’s data is one of the easiest ways to lose that customer and possibly find your managed services business in a legal crisis. When you add BaaS/DRaaS (backup and disaster recovery as a service) to your offerings, you need to make sure you can fulfill any requests to restore lost or damaged data or reimage a damaged server. Here are four pitfalls MSPs should avoid.

1. Running Manual Backups

A verified manual backup is better than no backup at all, but without proper scheduling and monitoring, backup continuity and backup retention will be difficult to manage.

Whether you back up client data manually or delegate the task to an employee, mistakes can and likely will occur. Backups may be skipped, backup options may be set inconsistently, and critical data may not be available for restore.

Some MSPs opt for a do-it-yourself approach. The problem is that backups need to run on a recurring schedule with consistent options, and doing this manually will inevitably fail. Instead, take advantage of the scheduling and monitoring features of your backup software so that backups run when they need to and, if they don't, you are notified as to why.
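As a minimal sketch of the monitoring side, a scheduled job (run from cron, Task Scheduler, or your RMM tool) can wrap the backup command and report the reason for any failure. The `run_backup` command name and the notification hook here are hypothetical stand-ins, not any specific product's API:

```python
import subprocess
from datetime import datetime

def run_scheduled_backup(command, notify):
    """Run a backup command; call notify() with the reason if it fails."""
    started = datetime.now().isoformat(timespec="seconds")
    result = subprocess.run(command, capture_output=True, text=True)
    if result.returncode != 0:
        # Surface *why* the backup failed, not just that it failed.
        notify(f"Backup started {started} failed (exit {result.returncode}): "
               f"{result.stderr.strip()}")
        return False
    return True

# Example: a failing command triggers the notification hook.
alerts = []
run_scheduled_backup(["false"], alerts.append)  # "false" always exits nonzero
```

In practice `notify` would send email or page your on-call staff; the point is that an unattended schedule plus an automatic failure alert replaces remembering to run and check backups by hand.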

2. Using a Backup Product That Does Not Perform All the Backup Types Needed

The backup product you use should be able to back up almost everything you need. If it doesn't, you may be looking at using two or more products, which adds cost and implementation complexity, or you may end up creating a supplemental, home-grown solution to back up the servers or applications your backup software cannot handle. Home-grown solutions are notoriously error prone and require ongoing maintenance and custom scripting.

Understanding what types of recovery your clients need is critical, and using a solution that covers those requirements will alleviate a lot of pain for the service provider. Do your customers need to recover entire servers? If so, your backup software needs imaging capabilities. Do you need to be able to restore those images to a new physical server, a virtual machine, or the cloud? If so, your solution needs to support those restore targets. Need to back up your Microsoft SQL Server databases? Then make sure your backup software works with SQL Server. Need to back up Windows and Linux? Make sure your backup solution supports the platforms you need.

But the opposite is also true. Don’t waste your money on solutions that have features you don’t need now and don’t expect to need in the future. You may end up adding software, implementation, and management complexity you don’t want.

In other words, select a backup solution based on what your customers need.

3. Storing Backups in a Single Location

If you’re backing up your on-premises servers and only send your backups to one location, such as the local network or cloud storage, then you could end up with longer restore times or higher restore costs. Worst case, you may not be able to restore at all.

The popular 3-2-1 backup strategy says you should keep 3 copies of your data (the live copy plus 2 backups), store the backups in 2 locations, and keep 1 of those locations offsite. This gives the best protection and recoverability.

With local backups, latency is low and performance is high, meaning restores can be performed quickly and without the added cost of pulling data from cloud storage. With offsite cloud backup, you and your customers are protected in case of disaster (disk corruption, natural disaster, etc.).

If you’re only backing up locally, consider also sending backups to the cloud; cloud storage costs seem to get more affordable every year. And if you’re only sending your backups to the cloud, consider also keeping local copies if the fastest restore times are important to you and your customers.
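The 3-2-1 idea can be sketched in a few lines. This is an illustration only, assuming a hypothetical `upload_offsite` callable that stands in for your cloud storage client (for example, an S3-compatible upload):

```python
import hashlib
import shutil
from pathlib import Path

def backup_3_2_1(source: Path, local_dir: Path, upload_offsite):
    """Keep the live file plus 2 backup copies: one local, one offsite.

    upload_offsite is a hypothetical stand-in for a cloud storage
    client; here it simply receives the file name and its bytes.
    """
    data = source.read_bytes()
    checksum = hashlib.sha256(data).hexdigest()

    # Backup copy 1: local, for fast restores (low latency, no egress cost).
    local_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy2(source, local_dir / source.name)

    # Backup copy 2: offsite, for disaster recovery.
    upload_offsite(source.name, data)
    return checksum
```

Returning the checksum lets you later confirm that either copy still matches what was backed up, which ties into testing your restores below.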

4. Not Testing Your Restores

This is the problem we see the most. Everyone should be testing restores in order to meet the service-level agreements (SLAs) they have in place. If you tell your customers you can recover a server within 2 hours, then you should know whether you can. If you tell customers you’re keeping a year’s worth of backups and up to 10 versions of each file, then you need to be able to restore a file from 10 months ago.

The best way to do this is to periodically audit your backup jobs to ensure retention and versioning are set up properly. You should also perform test restores to a sandbox environment, covering any scenario your customers may need: restoring to a new physical server, restoring to a virtual machine, or restoring to the cloud. Once you test these restore types, you’ll have a good understanding of which SLAs you can meet.
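One simple way to verify a sandbox restore is to compare every restored file against the original byte-for-byte. A minimal sketch (the directory layout and helper name are assumptions for illustration):

```python
import hashlib
from pathlib import Path

def verify_restore(original_dir: Path, restored_dir: Path):
    """Compare every file in the original tree against the restored copy.

    Returns a list of relative paths that are missing or differ;
    an empty list means the test restore matched byte-for-byte.
    """
    failures = []
    for src in original_dir.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(original_dir)
        dst = restored_dir / rel
        if not dst.is_file():
            failures.append(f"{rel} (missing)")
        elif (hashlib.sha256(src.read_bytes()).digest()
              != hashlib.sha256(dst.read_bytes()).digest()):
            failures.append(f"{rel} (contents differ)")
    return failures
```

Run against a sandbox restore, a non-empty result tells you exactly which files did not survive the round trip, and timing the whole exercise tells you whether your recovery-time SLA is realistic.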
