Here are five ways to ensure your cloud is properly cost-optimized
There are many reasons to migrate your IT to the AWS cloud: anytime, anywhere availability on any device; the elimination of hardware and software ownership and its attendant hassles; high availability; business resilience; security; and, above all, cost savings. The ability to exchange the high cost of purchasing and upgrading hardware and software (CapEx) for the controlled variability of operating costs (OpEx) is a significant inducement. Nevertheless, many organizations find that despite all the much-touted cost savings, their AWS bill remains stubbornly high, and in some instances even higher than their original on-premises costs. If you're one of these organizations, this article from our AWS MSP team is for you.
Pick the Correct Instance
You wouldn’t use a cannon to shoot a fly; by the same logic, using the wrong instance means you’re simply perpetuating the conditions of your legacy environment. Instances provide computing muscle and are the basic building block of your architecture in the cloud. AWS provides a wide range of instance families, offering different combinations of CPU, memory, storage, and networking capacity to choose from. Each instance type comes in multiple sizes to fit different sizes of workloads. Whatever your application requirements, for example high-traffic web applications, SAP applications, or even genome analysis platforms, you need to conduct rigorous testing to select the best-fit instance type for your particular application.
These instances are regularly upgraded by AWS to improve performance, so shifting to newer generations, or downsizing your instance (depending on your workloads), can cut your costs drastically.
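If you want to compare candidate instance types side by side, a minimal sketch with Python and boto3 follows. It assumes AWS credentials and a default region are configured; the instance types queried are purely illustrative, not a recommendation.

```python
import boto3

# Sketch: compare vCPU and memory across candidate instance types.
# Assumes AWS credentials and a default region are configured.
ec2 = boto3.client("ec2")

# Illustrative candidates only; substitute the types you are evaluating.
candidates = ["t3.large", "m5.large", "c5.large"]

response = ec2.describe_instance_types(InstanceTypes=candidates)
for itype in response["InstanceTypes"]:
    print(
        itype["InstanceType"],
        itype["VCpuInfo"]["DefaultVCpus"], "vCPUs,",
        itype["MemoryInfo"]["SizeInMiB"] // 1024, "GiB memory",
    )
```

Specifications alone won’t settle the choice; they simply narrow the shortlist you then load-test against your actual workload.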
Pick the Right Storage Tier for Your Needs
Amazon offers the Simple Storage Service, or S3. The beauty of S3 is that you can store anything in it, from receipts and Excel files to images, videos, and more. There is no limit on the data you can store in S3; it scales as needed. Your data files are stored as objects in something called a bucket, as opposed to a file directory. Each object comprises data, metadata, and a key, and a single object can be anywhere up to five terabytes in size. But, as we said, the number of objects and volume of data is unlimited, as S3 scales on demand.
There are three basic tiers of storage classes in S3 (Standard, Infrequent Access (S3-IA), and S3 Glacier), and your use cases should determine which one is best for you. There are also lifecycle policies, which you can create to move data between tiers, with costs changing accordingly. For instance, you might need to store an object in S3 Standard for 60 days, then move it to S3-IA for the next 60 days, before finally moving it to S3 Glacier. You can create policies for this to happen automatically (a sample policy is sketched after the checklist below). In addition to these three basic storage tiers, AWS also offers classes like S3 Intelligent-Tiering, S3 One Zone-Infrequent Access, and S3 Glacier Deep Archive, as well as S3 on Outposts, which delivers object storage to your on-premises AWS Outposts environment. Since you only pay for the S3 storage that you use, it is important to choose the storage class that fits your business and budget.
To do this, consider the following two factors:
1. How frequently will you be retrieving your data?
2. How available should your data be?
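As promised above, here is a minimal sketch of the 60 + 60 day lifecycle schedule, applied with boto3. The bucket name is hypothetical, and the rule is deliberately simple (it matches every object in the bucket); a real policy would likely scope transitions by prefix or tag.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket name; replace with your own.
bucket = "example-invoices-bucket"

# Transition objects to S3-IA after 60 days and to Glacier after
# 120 days, mirroring the 60 + 60 day schedule described above.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-down-after-60-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object
                "Transitions": [
                    {"Days": 60, "StorageClass": "STANDARD_IA"},
                    {"Days": 120, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```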
Track and Kill All Zombies
Cloud platform providers like AWS make it easy to create infrastructure; easier, in fact, than to destroy it. And that can become a problem. Another issue is the difficulty of maintaining complete visibility of all the resources you’re using in your cloud. Possibly the only time you know what’s running is when you scrutinize your bill. So it’s not surprising that many cloud customers end up using, and paying for, more resources than they need. Some of these untracked resources may still be performing tasks, but the fact that you’re not monitoring or even aware of them is a concern in itself. This spread of orphaned resources is known as cloud sprawl and is a major cost factor. It is also a security weak spot, as the lack of tracking makes these resources more vulnerable to attack. Eliminating these zombies will improve your security profile and save significant money. To do this:
1. Maintain full visibility of your cloud
2. Identify and terminate these zombie resources
3. Use resource tags and set tagging conventions
4. If you’re operating at scale, use Infrastructure as Code tools, like CloudFormation or Terraform, along with an automated CI/CD pipeline. This will make it simpler to track resources created outside the pipeline
5. Conduct audits and prepare compliance reports routinely
Zombies could include old snapshots, unattached EBS volumes, idle Elastic Load Balancers, and components of instances left over from failed launches.
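As one concrete illustration, here is a short boto3 sketch (assuming configured credentials) that lists unattached EBS volumes, one of the most common zombies. Printing each volume’s tags alongside its size also shows why the tagging conventions recommended above pay off: untagged volumes are the hardest to trace back to an owner.

```python
import boto3

ec2 = boto3.client("ec2")

# Unattached ("available") EBS volumes keep accruing charges even
# though no instance is using them.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)
for vol in volumes["Volumes"]:
    tags = {t["Key"]: t["Value"] for t in vol.get("Tags", [])}
    print(vol["VolumeId"], vol["Size"], "GiB, created",
          vol["CreateTime"], tags or "UNTAGGED")
```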
You can use a tool such as CloudHealth to find such resources. CloudHealth gives you visibility into your cloud environment and can surface hard-to-locate assets that may not be easy to identify using AWS Systems Manager or the console.
Right-Size EC2 Instances According to Workloads
Moving up an EC2 instance size roughly doubles its capacity, and its cost. If peak utilization does not justify that increase, you will simply be paying for unused resources. So you first need active monitoring and management of usage data to identify whether your EC2 instances are right-sized. By regularly moving workloads to right-sized instances, you can control your cloud costs without compromising cloud efficiency.
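One way to gather that usage data is to pull CPU utilization from CloudWatch, as in the sketch below. The instance ID is a placeholder, and the 14-day window and daily buckets are just reasonable starting points; memory and network metrics deserve the same scrutiny before you resize anything.

```python
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch")

# Hypothetical instance ID; replace with one of your own.
instance_id = "i-0123456789abcdef0"

# Average and peak CPU over the past 14 days, in daily buckets.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
    StartTime=datetime.utcnow() - timedelta(days=14),
    EndTime=datetime.utcnow(),
    Period=86400,
    Statistics=["Average", "Maximum"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"].date(),
          f"avg {point['Average']:.1f}%",
          f"max {point['Maximum']:.1f}%")
```

If averages sit in the single digits and even the maximums stay low, the instance is a strong candidate for downsizing.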
Leverage Reserved Instances
The term Reserved Instances refers to a billing discount that AWS offers on On-Demand Instance usage. While they are arguably the easiest way to optimize cloud costs, they need to be managed, or they can end up costing more.
You can buy Standard or Convertible Reserved Instances for 1-year or 3-year terms, or a Scheduled Reserved Instance for one year. Remember, though, that when the term of a Reserved Instance ends, you will be charged on-demand rates if you continue to use the EC2 instance. So you should terminate the instance and purchase a new Reserved Instance that matches the attributes of the previous one. Amazon also offers EC2 Savings Plans, which are ideal for workloads with consistent compute demands. These plans are available for a one-year or three-year period and can lower your compute costs by up to 72% compared to on-demand costs.
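To avoid silently falling back to on-demand rates, it helps to watch for reservations nearing expiry. Here is a minimal sketch, assuming configured credentials; the 30-day warning window is an arbitrary choice you can tune.

```python
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2")

# Flag active Reserved Instances expiring within the next 30 days,
# so you can repurchase before paying on-demand rates.
soon = datetime.now(timezone.utc) + timedelta(days=30)
response = ec2.describe_reserved_instances(
    Filters=[{"Name": "state", "Values": ["active"]}]
)
for ri in response["ReservedInstances"]:
    if ri["End"] <= soon:
        print(ri["ReservedInstancesId"], ri["InstanceType"],
              "expires", ri["End"])
```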
If you have workloads with flexible start and end times, and which can withstand interruptions, consider Spot Instances. They don’t need contracts or a commitment to consistent compute use. Spot Instances can reduce compute costs, compared to on-demand costs, by up to 90%.
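Requesting Spot capacity can be as simple as a market option on a normal launch call, as in the sketch below. The AMI ID is a placeholder, and the one-time, terminate-on-interruption settings suit a batch job; a service that should ride out interruptions would choose differently.

```python
import boto3

ec2 = boto3.client("ec2")

# Launch a Spot Instance for an interruption-tolerant batch job.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI; use your own
    InstanceType="c5.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)
print("Launched", response["Instances"][0]["InstanceId"])
```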
Takeaway
In truth, there is no better approach to optimizing your cloud cost than monitoring and measuring your cloud. You can use tools that provide visibility into usage patterns, and tools that can predict costs. Doing this will help you identify and manage your resources more effectively and right-size the services you use.
If you think your cloud optimization approach could use a hand, we can help. As an Advanced AWS Partner, and a partner of Google Cloud and Microsoft Azure, TeleGlobal is proficient with all the tools and methodologies needed to help you get the most from your cloud.
Talk to us today.
Need help with your cloud?
"No worries! Our experts are here to help you. Just fill the form and we'll get back to you shortly!"