It’s a good day in the data centre. Everything is ticking over nicely, and for a change there are no urgent fires to fight. There you are, working away quietly, when someone walks in with an idea that just might be the best thing to hit your business in years. All they’ll need is 30 VMs spun up pronto, access to SQL databases, oh, and a whole heap of storage. Is that all?
You’re managing traditional IT, so after a quick calculation, you tell them if you start ordering extra infrastructure right away, and delay a couple of other projects, you can get it done in a couple of months. Well, maybe three, if you allow for contingencies. Their face falls as you explain that you’ve got to build servers and prepare the right environment. In three months, the competition will have already hit the market.
At this point, your visitor may walk out, and tell everyone who’ll listen that IT wouldn’t come to the party. She may turn around and say that if you give her a smartphone and a large espresso, she can have that done in half an hour using cloud services. Either way, you can see her point, and it really is a great idea. So, what type of modern data centre structure could turn the answer into an unqualified yes?
The Modern Data Centre
The speed of business today is faster than ever before, and IT is under pressure to match this. Traditional IT can’t compete with the speed and scalability of cloud; it is a complex beast, slow to move and cumbersome to manage. That certainly doesn’t mean cloud is everything, though, and new generation technologies can transform the situation.
Digital transformation is a necessary element of survival in modern business, and a modern data centre is the foundation. It must offer cloud-like speed and scalability, and use automation to reduce complexity so humans are free for high-value tasks. The three key features of a modern data centre are fast, affordable storage; hyper-converged infrastructure; and a private cloud portal.

Fast, Affordable Storage
When I talk with customers, the top three requirements for data storage are inevitably that it is fast, cheap and easy to manage. A few years ago, I’d tell them that it was only possible to have two of those three things – if it was fast and easy to manage, it wouldn’t come cheap. As flash storage has become more affordable, that has changed.
There are a number of ways we can keep costs down. Recently, we worked with a large rail operator that had three or four racks of storage in a co-location. When it came time to refresh, we put in an all-flash solution that took up a fraction of the space, saving them significant ongoing costs and giving them a handy performance boost. When you factor in the maintenance costs of spinning disks as the equipment ages, the rail operator really comes out winning on all sides.
One caution here is to check exactly what you are guaranteed to get in terms of de-duplication and compression, and factor that into the purchasing decision. There are many promises about compression rates, but the key word here is guarantee. We’ll still see tier 2 storage, such as backup and video surveillance, sitting on disk.
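To see why the guaranteed (rather than marketed) data-reduction ratio matters to the purchasing decision, it helps to work the numbers. The sketch below is illustrative only: the price, capacity and ratios are hypothetical, not figures from any vendor.

```python
# Illustrative only: compare effective cost per usable TB of an all-flash
# array under a vendor's *guaranteed* data-reduction ratio versus the
# marketed "up to" ratio. All figures here are hypothetical.

def effective_cost_per_tb(price: float, raw_tb: float, reduction_ratio: float) -> float:
    """Cost per TB of usable capacity after de-duplication and compression."""
    usable_tb = raw_tb * reduction_ratio
    return price / usable_tb

price = 200_000.0   # hypothetical purchase price
raw_tb = 50.0       # raw flash capacity

marketed = effective_cost_per_tb(price, raw_tb, 5.0)    # "up to 5:1" claim
guaranteed = effective_cost_per_tb(price, raw_tb, 3.0)  # contractual 3:1 guarantee

print(f"Marketed 5:1   -> ${marketed:,.0f} per usable TB")    # $800/TB
print(f"Guaranteed 3:1 -> ${guaranteed:,.0f} per usable TB")  # $1,333/TB
```

The gap between the two figures is the risk you carry if you buy on the marketing number, which is why the guarantee belongs in the business case.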
Hyper-Converged Infrastructure (HCI)
HCI is the next generation of infrastructure in the data centre. Its combination of compute, storage and networking is software-defined, and it is key to achieving the modern data centre. Initially, uptake of HCI was slow, as customers would compare the purchase price of an HCI stack with a converged stack (e.g. FlexPod, FlashStack), and HCI is more expensive due to its advanced software and automation. That price buys real value: the first automation steps are completed for you, as common tasks such as server deployment, network configuration and storage changes have already been orchestrated within the HCI portal.
I suggest including a separate backup device; unless the HCI stack is replicated to stacks in multiple locations, a separate backup location is also needed. The 3-2-1 rule for backup still applies: we need three copies of the data, two of which can be local, with the last copy at a different location. In this model, the primary data and any snapshots sit on the HCI stack, with a backup to the separate device.
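The 3-2-1 rule described above can be expressed as a simple check over the copies in a backup plan. This is a minimal sketch; the class and device names are illustrative, not any product's API.

```python
# A minimal sketch of checking a backup plan against the 3-2-1 rule:
# at least 3 copies of the data, on at least 2 different devices,
# with at least 1 copy at a different location.

from dataclasses import dataclass

@dataclass(frozen=True)
class Copy:
    device: str     # e.g. "hci-stack", "backup-appliance", "cloud"
    location: str   # site where this copy lives

def satisfies_3_2_1(copies: list[Copy]) -> bool:
    enough_copies = len(copies) >= 3
    enough_devices = len({c.device for c in copies}) >= 2
    offsite_copy = len({c.location for c in copies}) >= 2
    return enough_copies and enough_devices and offsite_copy

# The model from the text: primary data (plus snapshots) on the HCI stack,
# a backup to a separate local device, and an off-site copy.
plan = [
    Copy("hci-stack", "primary-dc"),
    Copy("backup-appliance", "primary-dc"),
    Copy("cloud", "cloud-region"),
]
print(satisfies_3_2_1(plan))  # True
```

Dropping the off-site copy, or keeping all three copies on the same device, makes the check fail, which is exactly the gap the rule exists to catch.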
Private Cloud Portal
The next challenge for many organisations is to extend the HCI management portal to a full private cloud portal. This is essentially the public cloud experience within your own data centre, offering features such as self-service, chargeback and multi-tenancy, all of which rely on the automation and orchestration of tasks.
To complete the 3-2-1 backup rule discussed earlier, we need a second location for backups, and this is now commonly provided by a cloud-based solution. It satisfies the rule while greatly reducing costs compared with operating a second data centre. The next logical step is to use this cloud location not just for backup but as the DR site, with the ability to launch the replicated backup data as production VMs should a significant outage occur.
Many companies are dipping their toes into cloud for backup and DR purposes, made possible by the deployment of public cloud instances within Australia. What is now occurring is the migration of production workloads into public cloud. Data#3 recently completed a project for a community care organisation in Victoria, deploying Citrix XenApp on the Azure platform to provide a consistent experience for the 2,000+ users spread across the state.
There is also steadily increasing adoption of Software as a Service (SaaS), providing cloud-hosted apps directly to organisations.
There are still many challenges with moving to cloud, so it’s important to do your due diligence. Data#3 recently completed a Cloud Readiness Assessment with a regional QLD organisation. While the applications and data were suitable for migration to cloud, the local telecommunications infrastructure was limited to a single exchange and a single line out of the area. This was considered too big a risk for production workloads.
Software-Defined Networking (SDN)
I mentioned earlier that HCI includes software-defined storage and networking. This is key to automating tasks, but it also adds features essential for migrating to a public or hybrid cloud model. SDN includes two of these: the distributed firewall (DFW) and virtual networks.

The DFW allows a security profile to be assigned to a VM or a user, ensuring the security policies are applied to that object regardless of where it is currently deployed. This is essential as workloads become fluid and migrate seamlessly between hosted cloud environments and the local data centre. It also greatly enhances the security of an organisation by eliminating the “eggshell” security model, where the perimeter is hard and difficult to crack but, once inside, everything is soft and gooey. The DFW prevents this because the firewall is applied to the VM, not to a subnet or IP address; should a VM be compromised, it is still greatly limited in which other VMs it can access.

Virtual networks provide the ability to stretch layer 2 networks over great distances. Previously this was limited to locations with high bandwidth and low latency, but virtual networks enable the capability with up to 150ms of latency. This allows organisations to seamlessly extend their data centre to locations or cloud providers in other states or at similar distances.
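The key DFW idea, that policy is keyed to the workload rather than to an address, can be sketched in a few lines. This is a conceptual illustration only; the names and the tag-based rule format are hypothetical, not any vendor's SDN API.

```python
# Conceptual sketch of the distributed-firewall idea: rules are expressed
# between VM tags, not subnets or IP addresses, so the policy follows the
# workload wherever it runs. Names are illustrative, not a vendor API.

from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    tags: frozenset          # e.g. frozenset({"web"}) or frozenset({"db"})
    ip: str                  # changes as the VM migrates; rules ignore it

# Allowed flows, expressed tag-to-tag: web servers may reach databases.
ALLOWED = {("web", "db")}

def flow_permitted(src: VM, dst: VM) -> bool:
    return any((s, d) in ALLOWED for s in src.tags for d in dst.tags)

web = VM("web01", frozenset({"web"}), "10.0.1.5")
db = VM("db01", frozenset({"db"}), "10.0.2.9")

print(flow_permitted(web, db))   # True: web may initiate to db

# Migrating web01 to a cloud host changes only its IP address...
web.ip = "172.16.4.20"
print(flow_permitted(web, db))   # ...still True: the policy followed the VM

# A compromised db VM cannot initiate connections back to the web tier.
print(flow_permitted(db, web))   # False
```

Because no rule mentions an address, there is no “inside” subnet with implicit trust: a compromised VM can only reach what its tags explicitly allow, which is the eggshell problem solved.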
So, how do we implement the modern data centre?
There are two approaches. The first involves deploying all of the components I have discussed here. It is achievable, but it would be a lengthy and expensive process, especially in the areas of automation and orchestration: a significant effort both to implement initially and then to continually update and improve. While achievable, few companies are willing to invest the time and money to get there.
So who has this sort of budget?
Last financial year, both Microsoft and AWS spent over $10 billion on R&D, which includes the constant updating and automation of their public cloud platforms. That is great, but how do we get this level of automation within our own data centres? The public cloud is now available for the local data centre via Microsoft Azure Stack or AWS Outposts. These offerings provide a locally hosted platform with the same cloud portal experience for your internal users. While they have been available for a while, uptake has been limited because, again, they were compared on price with converged or hyper-converged offerings; being more expensive for the same CPU, memory and storage, they were overlooked. The value is in the automation and cloud portal access that is provided and constantly updated, and as organisations explore the journey into automation, these offerings will become more popular. They are not yet at the level where they will be the only solution within the local data centre, but they will co-exist with local HCI stacks and services, providing the ability to selectively place workloads in the various environments.
Call to Action
Now is the time to be discussing with your organisation its plans for digital transformation. The data centre needs to be prepared for new workloads and dynamic requests, and this can only be achieved through automation and hybrid cloud. Data#3 can assist in this journey with various assessments, with the goal of developing a gap analysis of where the current data centre is now and what tasks, software and hardware are required to achieve the modern data centre.
Tags: Public Cloud, Cloud, Private Cloud, JuiceIT, Microsoft Azure, Data Centre, Storage, Digital Transformation, Backup, Software-defined Networking (SDN), Hyper-converged Infrastructure (HCI), JuiceIT 2019, Citrix XenApp