
5 Steps to Optimizing Cloud Value

More Value from the Cloud

In our ongoing series of articles on cloud financial optimization, we have discussed the common experiences of customers as they migrate to the cloud, including challenges with raw infrastructure, legacy application constraints, and inefficient engineering practices. We asserted that organizations need to stop treating infrastructure as a commodity and instead manage it as a key enabler of the organization’s business capabilities.

If you buy into the premise, the next question is, “Yes, but how?”

There are five basic steps any organization can take to improve its return on investment in cloud computing:

  • Implement policies and access control to eliminate wasteful spend. 
  • Move on-premises components to the cloud to enable the decommissioning of infrastructure and the repurposing of associated operations staff.
  • Enable continuous delivery of both applications and infrastructure.
  • Strategically refactor applications to improve resource utilization.
  • Replace expensive third-party components with cloud provider services.  

Policies & Access Control  

The first step to getting control over cloud costs is to establish policies and procedures that effectively govern cloud spend. First and foremost among these practices is configuring your cloud vendor’s spend management tool with budgets, alerts, and ongoing monitoring of spend. A close second is establishing and enforcing a tagging policy for every cloud asset. Tagging not only aligns costs with the right owners and capabilities, but also facilitates the identification of “orphaned” resources. Finally, establish monthly spend reviews with consumers of cloud resources to ensure that the organization maximizes discounts and moves stable workloads to reserved instances or savings plans.
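To make these controls concrete, here is a minimal sketch of how two of them might be automated, assuming AWS as the cloud vendor and the boto3 SDK; the account ID, budget amount, required tag keys, and alert address are illustrative placeholders, not recommendations.

```python
# Sketch: automate a monthly budget alert and an "orphaned resource" scan.
# Assumes AWS + boto3; all identifiers below are placeholders.
import boto3

ACCOUNT_ID = "123456789012"                              # placeholder account
REQUIRED_TAGS = ["Owner", "CostCenter", "Application"]   # example tagging policy

# 1. A monthly cost budget that alerts when forecasted spend passes 80%.
budgets = boto3.client("budgets")
budgets.create_budget(
    AccountId=ACCOUNT_ID,
    Budget={
        "BudgetName": "monthly-cloud-spend",
        "BudgetLimit": {"Amount": "50000", "Unit": "USD"},  # example limit
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "FORECASTED",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80.0,
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{"SubscriptionType": "EMAIL",
                         "Address": "finops@example.com"}],
    }],
)

# 2. Flag resources missing any required tag -- candidates for the
#    "orphaned" list reviewed in the monthly spend meetings.
tagging = boto3.client("resourcegroupstaggingapi")
for page in tagging.get_paginator("get_resources").paginate():
    for resource in page["ResourceTagMappingList"]:
        tags = {t["Key"] for t in resource.get("Tags", [])}
        missing = [k for k in REQUIRED_TAGS if k not in tags]
        if missing:
            print(f"{resource['ResourceARN']} missing tags: {missing}")
```

A scheduled job running the tag scan gives the monthly spend reviews a concrete list of orphan candidates rather than an abstract policy reminder.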

Decommission Now!  

As organizations move applications to the cloud, their on-premises data centers can become wastelands of underutilized assets. A key step to extracting value from a cloud implementation is maximizing what can be shut down. That is, organizations must consolidate remaining on-premises systems onto fewer servers, virtualize physical servers, merge similar applications, and decommission outdated or underutilized applications without guilt or blame. As the physical footprint of the on-premises data center shrinks, pursue opportunities to release, sublease, or repurpose unused data center space. Finally, with less infrastructure to manage, organizations can redeploy operations staff to higher value-adding activities.

Make Build / Test / Deploy “Free, perfect, and now”  

This next step, fully automating the build/test/deploy process for both applications and infrastructure, might seem counterintuitive because it is an investment in automating “build” work rather than the runtime aspects of the infrastructure. However, many senior executives are surprised to discover that every production release of an application requires as much as 100–300 hours of operations support to move a build first to a development environment, then to one or more testing environments, then to a staging environment, and finally into production.

When manual testing labor is added to this operational labor, companies spend thousands of hours moving a release from idea to production. Automating these processes not only improves quality but also shortens the cycle time of each product release. These benefits can be quantified and converted into cash, increasing the ROI on cloud spend.
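To show how these benefits convert into cash, here is a back-of-envelope sketch using the 100–300 hour figure cited above; the release cadence, labor rate, and percentage of effort removed by automation are assumptions chosen purely for illustration.

```python
# Back-of-envelope estimate of annual savings from build/test/deploy automation.
# Only the 100-300 hour range comes from the article; the rest are assumptions.
hours_per_release_manual = 200   # midpoint of the cited 100-300 hour range
releases_per_year = 12           # assumed monthly release cadence
blended_hourly_rate = 85         # assumed fully loaded labor rate, USD/hour

annual_manual_cost = hours_per_release_manual * releases_per_year * blended_hourly_rate

automation_reduction = 0.80      # assume automation removes 80% of manual effort
annual_savings = annual_manual_cost * automation_reduction

print(f"Manual release labor per year: ${annual_manual_cost:,.0f}")   # $204,000
print(f"Estimated annual savings:      ${annual_savings:,.0f}")       # $163,200
```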

Strategically Refactor to Improve Resource Utilization 

Refactoring is the process of making changes to the internal structure of software to make it easier to understand and cheaper to modify without changing its observable behavior (Fowler, 2019). When a monolithic application is migrated to the cloud, it brings its underlying inefficiencies along for the ride. Strategic refactoring means deciding what to refactor and where to establish boundaries and seams between application components, given how the code supports the organization’s business capabilities. It also includes the discipline of estimating the economic value created by breaking up a monolithic application, so that only the components worth changing are changed.

Organizations can reduce the overall amount of infrastructure required to support an application by identifying the components with the heaviest or most variable usage, surrounding them with tests, and making their behavior accessible through an API. These components can then be hosted on separate servers and autoscaled independently of the rest of the monolith, reducing the total cost of infrastructure for the application.
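The sketch below illustrates this extraction pattern under stated assumptions: the hot component is a hypothetical pricing calculation, and Flask is one arbitrary choice of service framework; the article prescribes neither.

```python
# Minimal sketch: a heavily used component lifted out of a monolith and
# exposed through its own small HTTP service, so it can be deployed and
# autoscaled independently. The pricing logic and endpoint are hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)

def calculate_price(sku: str, quantity: int) -> float:
    """Stand-in for the hot-path logic extracted from the monolith."""
    base_prices = {"WIDGET": 9.99, "GADGET": 24.50}   # illustrative data
    return base_prices.get(sku, 0.0) * quantity

@app.route("/price", methods=["GET"])
def price():
    sku = request.args.get("sku", "")
    quantity = int(request.args.get("quantity", "1"))
    return jsonify({"sku": sku, "quantity": quantity,
                    "total": calculate_price(sku, quantity)})

if __name__ == "__main__":
    app.run(port=8080)   # in production, run behind an autoscaling group
```

Once the monolith calls this endpoint instead of the in-process function, the component scales with its own demand curve, and the tests written before the extraction confirm that observable behavior is unchanged.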

Replace Expensive Third-Party Components   

The final step organizations can take to optimize the value of their cloud investments is to replace expensive third-party components with cloud provider services or open-source solutions. High-value opportunities for cost optimization include commercial database management systems, data management tools, in-memory caching, application containers, and security components.
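As one example of such a swap, here is a hedged sketch that replaces a licensed in-memory caching product with a Redis-compatible service (managed or self-hosted open source) using the redis-py client; the endpoint, key scheme, and database accessor are illustrative stand-ins.

```python
# Sketch: cache-aside lookups against a Redis-compatible service, standing in
# for a commercial caching product. Endpoint and data shapes are placeholders.
import json
import redis

# A managed or open-source Redis endpoint drops in where the licensed
# product's client used to be configured.
cache = redis.Redis(host="my-cache.example.internal", port=6379, db=0)

def load_customer_from_db(customer_id: str) -> dict:
    """Stand-in for the real database read."""
    return {"id": customer_id, "name": "Example Customer"}

def get_customer(customer_id: str) -> dict:
    cached = cache.get(f"customer:{customer_id}")
    if cached is not None:
        return json.loads(cached)                 # cache hit: skip the database
    record = load_customer_from_db(customer_id)
    cache.set(f"customer:{customer_id}", json.dumps(record), ex=300)  # 5-min TTL
    return record
```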

Security components warrant a special focus because cloud providers invest heavily in certifying the security of their solutions, and the more a client organization leverages the cloud provider’s security mechanisms and tools, the cheaper it will be for the client to meet or exceed security requirements for its customers.

References 

Fowler, Martin. Refactoring: Improving the Design of Existing Code, 2nd ed. Addison-Wesley, Boston, MA, 2019.
