As an IT manager, building and provisioning service infrastructure for your internal customers can be challenging. Factoring in each customer's specific requirements often adds complexity, and decisions must be made that will ultimately shape the final design.
At a minimum, you should be asking yourself and your customer a set of scoping questions. The answers will feed into the decision-making process and, ultimately, dictate the deployment methodology and outcome.
Today, we have access to what I call “the modern service infrastructure approach”, which dramatically simplifies how those variables are handled. If you leverage this approach, the time spent planning shifts towards functionality rather than complexity.
To articulate this, I am going to present two approaches to the decision-making process using fictitious scenarios. Both will provide the required infrastructure; however, as you will see, one has a clear advantage over the other.
In scenario 1, we will be using the “traditional approach” – service infrastructure hosted on virtual machines (VMs).
In scenario 2, we will adopt the “modern approach” – utilising the advanced features of Azure to create scalable, integrated, functional and secure service infrastructure.
Let’s assume that I am an infrastructure specialist and one of our developers has created a new e-commerce web application. They need a hosting platform for it, as the application is currently running on a proof-of-concept workstation.
When I recently migrated from physical to hypervisor-hosted VMs, I invested a significant amount of time and effort into creating the hosting platform strategy. This strategy was highly successful, so of course, I applied this methodology to the new Azure platform.
To kick off the design process we start whiteboarding the service infrastructure:
1. We build out a VM to host the application, with an Azure virtual network and two storage accounts – one for application diagnostics and one for the virtual machine disk.
2. I get my first curveball. As this is a web-facing application, my client requires “high availability”, so we add a second virtual machine, set up a replication method for the application service files, and add both VMs to an availability set to ensure that at least one VM will always be available.
3. Now we enter the “data platform” phase. The application requires a database for product and customer information, so we add a SQL Server into the design and include premium storage to handle the IO workload. We also add a secondary virtual network to control access between the front end and back end services.
4. Remembering that we must factor in high availability, we go ahead and add a second SQL host and step up our licensing to Enterprise so that we can use an Always On active/active availability group. We also add in load balancers for the front end and back end tiers.
5. Our next decision to make is regarding connectivity and security, as the application needs to be internally and externally facing. We place an application gateway with web application firewall in front of the front end load balancer to protect the application from malicious attacks. We also introduce network security groups to further lock down inter-service traffic.
6. Our conversation and requirements-gathering session broadens when the developer advises that the APIs written into the front end application need to talk to other domain-hosted services, such as CRM. To maintain security, we ensure that all service accounts are AD managed and configured for Kerberos constrained delegation. For low-latency authentication, we deploy two domain controllers.
7. From my perspective, I am done and can now feed these design diagrams into my deployment process. However, our developer throws us a tiny curveball.
“Our application needs a full development lifecycle from development and testing through to production, and each environment needs to be identical so we can ensure consistency between each deployment.”
In response, we duplicate the service components and to save on as much cost as possible we re-use some of the core components (domain controllers and storage accounts).
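The requirement for identical development, testing and production environments is exactly what infrastructure-as-code templating addresses: a single parameterised definition is stamped out once per stage, so the environments can only differ in the parameters you choose. A minimal sketch of that idea (the resource names and VM sizes below are hypothetical illustrations, not a real ARM or Bicep template):

```python
# Sketch: one parameterised environment definition, stamped out per stage.
# All names and sizes are hypothetical illustrations.

def build_environment(stage: str, vm_size: str, instance_count: int) -> dict:
    """Return the resource plan for one stage from a single shared template."""
    return {
        "stage": stage,
        "web_vms": [
            {"name": f"web-{stage}-{i}", "size": vm_size} for i in range(instance_count)
        ],
        "sql_vms": [
            {"name": f"sql-{stage}-{i}", "size": vm_size} for i in range(2)  # HA pair
        ],
        "load_balancers": ["frontend-lb", "backend-lb"],
    }

dev = build_environment("dev", vm_size="Standard_B2s", instance_count=1)
prod = build_environment("prod", vm_size="Standard_D4s_v3", instance_count=2)

# Same structure in every stage; only the parameters differ.
assert set(dev) == set(prod)
```

Because every environment comes from the same definition, consistency between deployments is guaranteed by construction rather than by careful manual duplication.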
Once we’re agreed on the design, I am going to be really busy. I now need to go ahead and deploy all of this infrastructure and, more importantly, protect it after it has been built. On top of this, I need to ensure the infrastructure is added to our existing protection methodologies.
Now for scenario 2 – the exciting world of the modern approach. Taking the same application requirements, and after careful review of the underlying code base, I offer up an alternative to the traditional approach: deploying one of the platform as a service (PaaS) offerings available in Microsoft Azure.
The App Service Environment
The below diagram achieves much the same as the previous one, while providing significant capability and reducing complexity. I can still control external access through an application gateway with web application firewall (WAF), alongside the built-in firewall and access-control mechanisms of the Azure App Service Environment (ASE). SQL as a service provides extra functionality.
For deployment, we have the option of creating deployment slots, which are isolated versions of the application that the developer can switch between.
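Conceptually, a slot swap is a pointer exchange rather than a redeployment: the new build is warmed up in a staging slot, then the slots are swapped so production traffic moves to it instantly – and can be swapped back just as quickly for rollback. A minimal sketch of the concept (slot names and build versions are illustrative; the real operation is performed by the App Service platform):

```python
# Sketch of the slot-swap concept: traffic routing is exchanged between
# slots; nothing is redeployed. Slot names and versions are illustrative.

class AppServiceSketch:
    def __init__(self) -> None:
        # Each slot holds whatever build is currently deployed to it.
        self.slots = {"production": "v1.0", "staging": None}

    def deploy(self, slot: str, build: str) -> None:
        self.slots[slot] = build

    def swap(self, source: str, target: str) -> None:
        """Exchange the two slots' contents - the 'swap' in a slot swap."""
        self.slots[source], self.slots[target] = self.slots[target], self.slots[source]

app = AppServiceSketch()
app.deploy("staging", "v2.0")      # new build warms up in staging
app.swap("staging", "production")  # production traffic moves instantly
assert app.slots["production"] == "v2.0"
assert app.slots["staging"] == "v1.0"  # old build kept for instant rollback
```

The design benefit is that the previous build is never destroyed by a release – it simply sits in the other slot, ready to be swapped back.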
We also only need to add the ASE and applications to a small subset of our protection methodologies. A bonus is that I have not had to think about high availability, since the ASE is fault tolerant and highly available by default.
The great news here is that, since I now have a lot of free time on my hands, I can shift the conversation with the developer to functionality. I introduce the developer to some of the other advanced service offerings in Azure.
Full of inspiration, we start getting creative and extending the functionality of the application. We add in authentication options for Azure Business to Business (B2B) and Azure Business to Customer (B2C) to provide our suppliers with a customised portal that only they can log into to update our product database, and also provide our customer the ability to authenticate using third party credentials (Gmail and Facebook, for example).
We then add Git integration for code storage and continuous deployment, and a CDN for content delivery.
For development-lifecycle streamlining and extended functionality, we migrate the application APIs into Azure Functions. Our developer no longer needs extensive regression testing of the API functionality when core code is updated, and the APIs allow us to securely hook into our CRM and product databases.
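Each API becomes a small, independently deployed handler, which is why updating the core application no longer forces a full regression pass over every API. A real Azure Function in Python would use the azure.functions HTTP bindings; the pure-Python sketch below (with a hypothetical product store) just shows the shape of such a handler while remaining runnable anywhere:

```python
import json

# Hypothetical in-memory product store standing in for the real database.
PRODUCTS = {"42": {"name": "Widget", "price": 9.99}}

def get_product(req: dict) -> dict:
    """HTTP-triggered-function sketch: look up a product by id.

    A real Azure Function would receive an azure.functions.HttpRequest;
    here a plain dict stands in so the sketch runs without any SDK.
    """
    product_id = req.get("params", {}).get("id")
    product = PRODUCTS.get(product_id)
    if product is None:
        return {"status": 404, "body": json.dumps({"error": "not found"})}
    return {"status": 200, "body": json.dumps(product)}

resp = get_product({"params": {"id": "42"}})
assert resp["status"] == 200
```

Because the handler is stateless and self-contained, it can be versioned, tested and redeployed on its own, independently of the front end code.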
To take us to the next level, we explore customer interaction opportunities using the Bot Framework with natural language understanding, sentiment analysis and customer insights. Our customers can now interact with our site and talk to a trained, AI-powered chatbot. If we detect a negative experience, we can feed that information back into our CRM and trigger a call to action for our sales representatives.
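The escalation step in that pipeline is simple to express: score the conversation's sentiment, and below a threshold, raise a CRM follow-up. A sketch with a hypothetical score range, threshold and record shape (in production the score would come from a sentiment-analysis service such as Azure Cognitive Services):

```python
from typing import Optional

# The 0.0-1.0 sentiment score and the 0.4 threshold are illustrative;
# a real pipeline would take the score from a sentiment-analysis service.
NEGATIVE_THRESHOLD = 0.4

def escalate_if_negative(customer_id: str, sentiment: float) -> Optional[dict]:
    """Return a CRM call-to-action record for negative experiences, else None."""
    if sentiment < NEGATIVE_THRESHOLD:
        return {
            "customer_id": customer_id,
            "action": "sales_follow_up",
            "reason": f"negative chat sentiment ({sentiment:.2f})",
        }
    return None

assert escalate_if_negative("cust-1", 0.15) is not None  # escalated
assert escalate_if_negative("cust-2", 0.90) is None      # happy customer
```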
The Microsoft Azure cloud offers numerous services for almost every scenario you might need – you just need a helpful guide to show you the what and the how! Shifting from the traditional to the modern approach dramatically simplifies your infrastructure and unlocks three key qualities – agility, extensibility and functionality. Hopefully, scenario 2 will give you the confidence to check out some of the capabilities of Azure PaaS.
To learn more about Azure, feel free to reach out to me with any questions you might have.