
How to Lower Your Storage Costs by Tiering Hot & Cold Data

Hong Hock Tan

According to IDC’s Worldwide Global DataSphere Forecast, 2019–2023: Consumer Dependence on the Enterprise Widening, the amount of data stored in the core data center is forecast to double from 2014 to 2023.



As organizations plan capacity builds to accommodate this growth, let’s look at how to determine the right balance between core capacity and cloud capacity, and what measures to take when budgets are reallocated or reprioritized.



IT Leaders need to continue modernizing their on-premises data centers to handle the growth of their on-prem data. In addition, that data needs to be quickly accessible to the internal teams who use it to deliver value to the organization. However, in a recent IDC survey of 500 decision makers, 30% of respondents identified budget constraints on capital requirements as the main inhibitor of IT modernization.



Herein lies the proverbial conundrum of doing more with less. With data capacity requirements growing at the core, IT Leaders will need to make smarter decisions to ensure that their infrastructure is able to handle the growth in a tight budget environment.

Storage Efficiencies – your first move

Like any other resource, storage can become inefficient without ongoing housekeeping. Through this process, organizations often realize that the amount of space they actually need is far less than what they have purchased.



As a first step, Storage Admins can switch to NetApp clusters running ONTAP® data management software, which can reduce the storage capacity required by a factor of more than 30. This is possible by leveraging the built-in storage efficiency technologies in ONTAP®, such as inline data deduplication, compression, and compaction, as well as space-saving snapshots.
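
To see how such technologies compound, here is a minimal Python sketch of how independent savings from deduplication, compression, and compaction multiply into an overall efficiency ratio. The percentages are hypothetical placeholders chosen for illustration, not measured ONTAP figures.

```python
# Illustrative only: how independent efficiency savings multiply into an
# overall ratio. The percentages below are hypothetical, not ONTAP figures.

def combined_efficiency(savings: dict[str, float]) -> float:
    """Overall logical:physical ratio, assuming each technology acts on
    whatever capacity the previous one left behind."""
    remaining = 1.0
    for fraction in savings.values():
        remaining *= 1.0 - fraction
    return 1.0 / remaining

hypothetical = {
    "deduplication": 0.30,  # 30% of blocks turn out to be duplicates
    "compression":   0.40,  # remaining data shrinks 40% when compressed
    "compaction":    0.10,  # another 10% saved by packing partial blocks
}

ratio = combined_efficiency(hypothetical)
print(f"Effective efficiency: {ratio:.1f}:1")              # ~2.6:1 here
print(f"100 TiB logical fits in ~{100 / ratio:.0f} TiB physical")
```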

Data Tiering makes data efficiencies more efficient

In addition to storage efficiencies, freeing up storage capacity can also be realized through data tiering. Data tiering moves inactive, or cold, data to lower-cost secondary object stores available in both private and public clouds. As a result, high-performance capacity on the all-flash array is freed up for additional low-latency use cases and applications accessing hot data.
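
Conceptually, tiering comes down to tracking how recently each block was touched and relocating anything that has gone cold. The Python sketch below illustrates that idea only; the 31-day cooling threshold, the data structures, and the function names are example values and stand-ins, not FabricPool internals.

```python
# Conceptual sketch of tiering by data temperature. The in-memory dicts stand
# in for a real flash tier and a real cloud bucket; this is not how FabricPool
# is implemented, only the idea behind it.
import time

COOLING_PERIOD_DAYS = 31          # example "cold after N days" policy
SECONDS_PER_DAY = 86_400

flash_tier: dict[str, bytes] = {}     # hot data stays on the all-flash tier
object_store: dict[str, bytes] = {}   # cold data lands in low-cost object storage
last_access: dict[str, float] = {}    # block id -> last read/write timestamp

def touch(block_id: str) -> None:
    """Record an access so the block is treated as hot."""
    last_access[block_id] = time.time()

def tier_cold_blocks(now: float | None = None) -> int:
    """Move blocks not accessed within the cooling period to the object store."""
    now = now or time.time()
    moved = 0
    for block_id in list(flash_tier):
        idle_days = (now - last_access.get(block_id, now)) / SECONDS_PER_DAY
        if idle_days >= COOLING_PERIOD_DAYS:
            object_store[block_id] = flash_tier.pop(block_id)
            moved += 1
    return moved
```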



The results of tiering data are significant. Published analyses, such as Evaluator Group’s simulation, show that tiering 80% of inactive data to a public cloud with FabricPool, the data tiering feature in ONTAP®, can deliver roughly 30% TCO savings over a five-year period.
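
A back-of-the-envelope calculation shows why moving a large share of data to cheaper object storage lowers the blended cost. The per-TB prices below are assumptions for illustration, not Evaluator Group’s inputs or NetApp pricing, and the math covers capacity cost only; full TCO models also include hardware, power, and administration, so actual savings will differ.

```python
# Back-of-the-envelope blended-cost math; all prices are assumed values.

total_tb          = 500            # dataset size
cold_fraction     = 0.80           # share of data identified as inactive
flash_cost_per_tb = 30.0           # assumed $/TB/month on the all-flash tier
cloud_cost_per_tb = 8.0            # assumed $/TB/month in object storage

all_flash = total_tb * flash_cost_per_tb
tiered = (total_tb * (1 - cold_fraction) * flash_cost_per_tb +
          total_tb * cold_fraction * cloud_cost_per_tb)

savings = 1 - tiered / all_flash
print(f"All-flash only : ${all_flash:,.0f}/month")
print(f"With tiering   : ${tiered:,.0f}/month  ({savings:.0%} lower capacity cost)")
```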



FabricPool now provides even more options for hybrid cloud deployments and further simplifies deploying and managing data tiering.

Data tiering in a hybrid cloud

With FabricPool, Storage Admins have more choices across more cloud providers to find the best cost. Leveraging partnerships with all the major cloud service providers, ONTAP 9.6 adds Google Cloud Storage and Alibaba Cloud Object Storage Service as tiering targets, alongside the previously available NetApp® StorageGRID®, Microsoft Azure Blob Storage, and Amazon S3. Flexible options include a pay-as-you-go model through NetApp’s Cloud Tiering Service, which extends an OPEX cost model to data tiering.

Simplicity in the modern data center

FabricPool identifies inactive data and provides an automated report that gives an immediate view of the potential space savings across Tier 1 aggregates. Once policies are set for that inactive data, tiering to the cloud is fully automated – freeing up precious time and resources.
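
As a rough illustration of what such a report boils down to, the Python sketch below sums up cold capacity per aggregate to preview what tiering could free. The volume names, sizes, and threshold are made up; FabricPool produces this kind of view for you automatically.

```python
# A toy "inactive data report": total cold capacity per aggregate.
# All inputs are invented for illustration.
from collections import defaultdict

volumes = [
    # (aggregate, volume, used_tb, days_since_last_access)
    ("aggr1", "vol_db",      12.0,   2),
    ("aggr1", "vol_archive", 40.0, 120),
    ("aggr2", "vol_home",    25.0,  45),
]

COOLING_PERIOD_DAYS = 31
report: dict[str, float] = defaultdict(float)
for aggregate, _name, used_tb, idle_days in volumes:
    if idle_days >= COOLING_PERIOD_DAYS:
        report[aggregate] += used_tb

for aggregate, cold_tb in report.items():
    print(f"{aggregate}: ~{cold_tb:.0f} TB could move to the cloud tier")
```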



In the ONTAP design, metadata associated with datasets is kept on Tier 1 storage so that data on secondary storage can be accessed quickly when an application requests it. This also allows Storage Admins to scale tiering capacities to the cloud effectively, leveraging up to 50 times the primary tier’s capacity as secondary storage.
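
The design idea – metadata stays on the performance tier while block data may live remotely – can be sketched in a few lines of Python. The class and method names are hypothetical and the object store is simulated with a dictionary; this is not ONTAP code, just the access pattern it enables.

```python
# Minimal sketch of "metadata stays local, data may live remotely": every
# lookup hits the local metadata map first, and only cold reads reach out to
# the (here, simulated) object store. Names are hypothetical.

class TieredVolume:
    def __init__(self) -> None:
        self.metadata: dict[str, dict] = {}      # always on the performance tier
        self.local_blocks: dict[str, bytes] = {}
        self.object_store: dict[str, bytes] = {}

    def write(self, block_id: str, data: bytes) -> None:
        self.local_blocks[block_id] = data
        self.metadata[block_id] = {"size": len(data), "location": "local"}

    def tier_out(self, block_id: str) -> None:
        """Move the block's data to the object store; metadata stays put."""
        self.object_store[block_id] = self.local_blocks.pop(block_id)
        self.metadata[block_id]["location"] = "cloud"

    def read(self, block_id: str) -> bytes:
        meta = self.metadata[block_id]            # local lookup, always fast
        if meta["location"] == "local":
            return self.local_blocks[block_id]
        return self.object_store[block_id]        # fetched on demand
```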



Your tiered data also remains safe and protected. No one wants to experience a disaster or outage, and getting operations back online need not be a complex exercise. By mirroring ONTAP® volumes, destination volumes can be restored quickly, keeping the recovery process simple.

FabricPool and the Data Fabric

The amount of data stored on-premises is growing and will continue to grow, so it’s very important to modernize and future-proof your data center with integration into a hybrid cloud world. NetApp’s Data Fabric provides consistent data services to support the unique demands of your business across a choice of endpoints spanning on-premises and multiple cloud environments.



First introduced in 2017 as part of the ONTAP 9.2 release, FabricPool, together with the Data Fabric, has driven wide adoption of ONTAP as the leading storage resource management platform, as reported by IDC. Hybrid cloud is here to stay, and tiering data to the cloud can help free up high-performance all-flash storage in a cost-effective manner and lower your storage TCO in the process.



To find out more about how FabricPool can help you modernize your data center with data tiering, visit our website or the NetApp Document Center.

Hong Hock Tan

Hong Hock is the APAC Technology Evangelist for NetApp Asia-Pacific. Based in Singapore, he is a keen technology advocate for data center renewal and emerging technologies that make the world better for his children and future generations.

