April 12, 2022

A Q&A with our HPE storage experts

Over the past few years, Hewlett Packard Enterprise (HPE) has invested heavily in research and development across its storage portfolio. Among the advances, it has incorporated AI, rebuilt storage for the cloud, and enabled storage to be run ‘as a Service’. These innovations have undoubtedly helped businesses derive more value from their storage solutions and better meet the demands of today’s ever-increasing data proliferation.

With increasing data volumes and infrastructure management complexity, it’s important that we reshape the storage conversation with our customers – because it is no longer just about implementing a storage platform. In a business climate that’s all about speed and driven by data, it has to be an overall storage strategy discussion. This means stepping back and focusing on the outcomes CIOs and CTOs are looking to achieve with their business data, and how we can make that journey easier for you.

To understand the shifts, we brought together Data#3 storage experts – Scott McGarry and Brenton Reeves – to discuss what this all means for Data#3 customers.

What’s causing the increasing complexity of storage infrastructure?

Scott:   The shift to cloud-based storage, and the task of managing enormous volumes of data across on-premises and cloud environments, is probably the biggest factor. The best way to understand it is to look at where we’ve come from. At one point, even managing a single array had its difficulties because every unit ran almost entirely independently of the others. This meant a storage expert would need to physically visit your data centre and arrange all of your disks to suit your various workloads. Technologies soon evolved to better unify storage units, removing the need to provision each unit precisely for every workload.

Fast forward to today, and storage is provisioned all over the place – on-premises, in the cloud, in multiple clouds, or across multiple sites. Which brings us to today’s big challenge: how do we manage data across all those separate entities and different types of storage – primary, secondary and backup? To do this, we need to continue to unify storage and ensure the entire storage infrastructure can be viewed and managed holistically.

How does this increased complexity impact customers’ actual storage solutions and access to data?

Brenton:   These complications play out in a number of areas. The most obvious is security. We no longer just have an array that’s accessible on-premises that needs to be secured; storage is now also in the cloud, which means the attack surface has grown. That brings us to the issue of data recoverability. We don’t just have one set of data that needs to be backed up to one spot; we’ve got data spread across many environments, all of which needs to be backed up and handled separately. For example, because of the different ways storage is being provisioned, data you’ve backed up on-premises can be difficult to recover from a cloud environment. You also need to consider data sovereignty as you back up across environments – if you have data that must be stored locally, then you need to ensure that happens. Again, by bringing some of your data together and managing it in a uniform manner, you can effectively tackle some of that complexity.

What have been the traditional approaches to managing these challenges and how successful have they been?

Scott:   Having storage spread across different sites and cloud vendors isn’t a particularly new issue; having solutions to tackle it is. Traditional approaches required you to log into each individual cloud, or storage array, in order to manage and control it. Looking to the future, you need to be able to manage your storage from one console with a single unified view. This level of visibility into your entire primary and secondary data stores allows IT to make far more informed decisions around storage, replication and backups. This really is the ideal future state.

HPE has been talking a lot lately about Unified DataOps and the idea of bringing a cloud experience to your storage wherever it lives. Can you explain what this means in reality and talk through an example of it in practice?

Brenton:   What HPE is doing is taking the experience of provisioning in the cloud and bringing it to the on-premises environment. It’s happening in reverse too: aspects of provisioning in the on-premises environment are being brought to the cloud. As an example, say you have a development platform that you’re rolling out into production. Your business model states that development works best in the cloud. However, when it goes into production, you need to have sovereignty over that data and various security principles in place, and that has to be done in a controlled data centre environment.

The solution requires a common management platform where you can provision your development platform and create the templates that dictate how the application is going to be deployed. You then lock down the development environment. The on-premises environment can use exactly the same template but deploy it on different infrastructure.

In this example, you are not reinventing the wheel; you are taking the same experience you had in the cloud and bringing it on-premises. Conversely, depending on the needs of the application, you could develop on-premises and then deploy to the cloud where it’s more scalable or accessible. The ultimate goal is to lower costs while improving productivity and reliability.
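To picture how a “define once, deploy anywhere” template might look, here is a minimal, purely illustrative Python sketch. The template structure and the deploy() helper are hypothetical stand-ins, not HPE’s actual tooling or API; the point is simply that the same definition can drive both a cloud deployment and an on-premises one.

```python
# Illustrative only: the template format and deploy() helper are hypothetical,
# not a real HPE interface. One template, two deployment targets.

APP_TEMPLATE = {
    "name": "orders-api",
    "cpu_cores": 4,
    "memory_gb": 16,
    "volumes": [{"name": "orders-data", "size_gb": 500, "tier": "flash"}],
    "replication": "sync",
}

def deploy(template: dict, target: str) -> None:
    """Pretend to deploy the same application template to a given target.

    A unified management platform would translate the template into cloud
    resources or on-premises array volumes; here we just print what would happen.
    """
    location = {
        "cloud": "public cloud region",
        "on_premises": "controlled data centre",
    }[target]
    print(f"Deploying '{template['name']}' to the {location}:")
    for vol in template["volumes"]:
        print(f"  - provisioning {vol['size_gb']} GB '{vol['name']}' on {vol['tier']} storage")

# Development happens in the cloud...
deploy(APP_TEMPLATE, "cloud")
# ...production uses exactly the same template, on-premises.
deploy(APP_TEMPLATE, "on_premises")
```

The design choice the example highlights is that the application definition lives in one place, while the target infrastructure becomes a parameter rather than a separate build.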

Are there different security considerations that come into effect with this type of model?

Brenton:   When you start bringing all of your management together, there are a number of things to consider. With a typical infrastructure you might have 20 different storage platforms across the cloud, on-premises and branch offices. That creates a massive attack surface, with a greater chance of individual data breaches and of something going wrong somewhere. Bringing it all together isn’t all rosy either, because it means there’s a single place where all of your credentials are secured. Your security has to be absolutely rock solid and top of mind.

What are the most time-consuming activities that IT teams are currently performing that can be improved by a Unified DataOps approach?

Scott:   If we look at a standard business, you have your primary data, secondary data and backup data. Just as an example, your primary data might be on-site, your secondary data in a data centre, and your backup data in the cloud. Or maybe your primary data is on-site, your secondary data is in the cloud and your backup data is in another cloud. Both of those scenarios are fairly common. You will then have a console login for each storage area, along with updates for each, such as firmware and patching. This clearly adds more workload for IT.

With the unified approach, you can view everything from a single console, update most of your infrastructure, and manage it on a day-to-day basis from that one place. So, you’re no longer having to log into three different places just to keep the lights on; everything becomes much easier and quicker to manage.

How can organisations bring more automation into their storage infrastructure environments?

Brenton:   One of the things we’re seeing is customers being interested in moving to common platforms across the cloud and on-premises, usually because there are certain things they need to keep on-premises and others that need to be in the cloud – and they want to be able to move workloads between the two easily. One of the problems we often encounter in this situation is cost: there can be a large cost difference between running that sort of system in the cloud versus essentially running your own cloud.

So, part of the whole unified storage environment with HPE is that it allows a common platform to be deployed both in the cloud and on-premises. This strengthens the automation story: not only can you do similar things on each platform, but when you’ve effectively got the same data systems underneath your on-premises and cloud environments, you can automate at a much higher level.
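To make “automate at a much higher level” concrete, here is a small, purely illustrative Python sketch. The inventory, snapshot policy and apply_snapshot_policy() helper are hypothetical placeholders rather than a real HPE interface; the idea is that once cloud and on-premises systems sit behind one management layer, a single routine can apply the same policy everywhere.

```python
# Illustrative only: the inventory and apply_snapshot_policy() helper are
# hypothetical. A single inventory spanning cloud and on-premises systems
# lets one automation routine cover both environments.

INVENTORY = [
    {"name": "array-syd-01", "location": "on_premises"},
    {"name": "block-store-ap-southeast-2", "location": "cloud"},
]

SNAPSHOT_POLICY = {"frequency_hours": 4, "retention_days": 30}

def apply_snapshot_policy(system: dict, policy: dict) -> None:
    """Stand-in for a call to a unified management API."""
    print(
        f"{system['name']} ({system['location']}): snapshots every "
        f"{policy['frequency_hours']}h, kept for {policy['retention_days']} days"
    )

# One loop, one policy, every environment.
for system in INVENTORY:
    apply_snapshot_policy(system, SNAPSHOT_POLICY)
```

In a real environment that loop would call the unified platform’s API rather than printing, but the shape of the automation stays the same regardless of where each system lives.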

How should organisations consider a shift to Unified DataOps – what’s the first step and how can Data#3 help?

Scott:   We find customers come to us when they are considering a new feature or storage array. One of the first things we do is reset that thought process to focus on what they’re trying to achieve – rather than the shiny new storage feature they have in mind. This means looking at the characteristics of their data, what they’re trying to do with that data, and where they are heading in the future.

As a starting point, one of the tools we use is a piece of software called CloudPhysics, which fingerprints your environment to gather accurate data. So, rather than looking at specific features, we bring the conversation back to the business outcome and what they are trying to achieve before the discussion of hardware crops up.

Brenton:   To add to Scott’s comments, when we’re designing a solution for your data landscape, we don’t just consider where the data should be stored – we also look at security around each of the platforms, including the recoverability of data and disaster recovery – and importantly, cost, because making the wrong decision on where to store data can be a multi-million-dollar mistake if it goes in the wrong place.

Discover your ideal storage solution

With over two decades of partnership experience, Data#3 is one of HPE’s largest Platinum Partners in Asia Pacific. We have worked together on a broad range of projects for leading Australian organisations across education, government and the corporate sector.

To help you make more informed decisions around your storage lifecycle, Data#3 is offering customers a free CloudPhysics Assessment, where we will analyse your entire VMware® infrastructure against current pricing and configuration options from leading cloud providers, including Amazon AWS and Microsoft Azure. In just 15 minutes, this will identify the ideal configuration per workload for moving to the cloud, and highlight optimisation opportunities.