Before adding solid-state drives, right-size your infrastructure using workload profiling
- 16 October, 2015 22:06
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
If you’re looking to add solid-state drives (SSDs) to your storage environment, you want to avoid under-provisioning to ensure performance and scalability, but you also need to avoid over-provisioning to meet cost goals and prevent unnecessary spending. Workload profiling can help you achieve this critical balance.
A recent survey of 115 Global 500 companies, conducted by GatePoint Research and sponsored by Load DynamiX, showed that 65% of storage architects do some sort of pre-deployment testing before making their investment decision. Alarmingly, only 36% understand their application workload I/O profiles and performance requirements. The rest don’t know what workload profiling is or how it can be used to accurately evaluate vendors against the actual applications that will run on their particular storage infrastructure.
Here’s how workload profiling can help reduce over-provisioning and under-provisioning of SSD storage investments.
What is workload profiling?
Workload profiles, often called I/O profiles, are data and statistics that directly relate to storage activity and loading in real (observed) production storage arrays. Workload profiles characterize the realistic, sometimes massive, application workloads that stress networked storage at the infrastructure level. Profiles typically comprise a mix of virtualized applications, including databases, that can have significant random I/O content. I/O profiles contain information on reads versus writes, the mix of random versus sequential data access, data and metadata commands, file and directory structures, and IOPS, to name a few of the key metrics.
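To make these metrics concrete, here is a minimal Python sketch of what a workload profile record might capture. The field names and values are illustrative assumptions, not the schema of any particular profiling tool:

```python
from dataclasses import dataclass

@dataclass
class IOProfile:
    """Illustrative container for key workload-profile metrics.

    Field names are assumptions for illustration only; real
    profiling tools use their own schemas.
    """
    read_pct: float          # percentage of operations that are reads
    random_pct: float        # percentage of accesses that are random (vs. sequential)
    avg_block_size_kb: int   # typical I/O request size
    iops: int                # observed I/O operations per second
    metadata_cmd_pct: float  # share of metadata commands (opens, stats, etc.)

    def write_pct(self) -> float:
        # Writes are simply the complement of reads
        return 100.0 - self.read_pct

# Example: a database-like profile with heavy random reads
oltp = IOProfile(read_pct=70.0, random_pct=90.0,
                 avg_block_size_kb=8, iops=50_000,
                 metadata_cmd_pct=5.0)
print(oltp.write_pct())  # 30.0
```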
There are two key steps to creating a workload profile:
* Gather production data: To capture the data needed to create the workload model, the first step is to access the storage array logs and other statistics available from each production storage array’s proprietary tools. Every vendor has its own way of reporting storage I/O and utilization data, and most storage admins know how to run these tools and utilities over a predetermined period of time. This data provides the foundation of the workload-to-performance relationship and can be input into a storage workload modeling application.
* Workload modeling: In the second step, we create the models based on the array data. There are commercially available tools that can take storage array log data and import it directly into the workload model. These models can then be used to vary the assumptions and run myriad what-if and worst-case scenarios. Such applications can maintain libraries of repeatable scenarios that can be used to stress a storage system under a realistic simulation of the workload(s) it will support in production. Workload modeling enables comprehensive performance testing of any flash or hybrid storage system under the actual conditions in which it is expected to operate. With highly accurate simulations, storage performance becomes fully predictable.
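The two steps above can be sketched in miniature: take observed baseline metrics from the array logs and scale them for a what-if scenario. All metric names and numbers here are hypothetical:

```python
def scale_workload(baseline: dict, growth_factor: float) -> dict:
    """Scale a baseline workload model for a what-if scenario.

    `baseline` maps metric names (e.g. 'iops', 'throughput_mbps')
    to observed production values; the names are illustrative.
    """
    return {metric: value * growth_factor for metric, value in baseline.items()}

# Observed production baseline (hypothetical numbers)
baseline = {"iops": 40_000, "throughput_mbps": 800}

# What-if: the workload doubles over the planning horizon
worst_case = scale_workload(baseline, 2.0)
print(worst_case["iops"])  # 80000.0
```

A real modeling tool would vary many more dimensions (read/write mix, block sizes, burstiness), but the principle — replay observed behavior under changed assumptions — is the same.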
Applying workload profiles
After creating our workload profiles, we can typically use them to validate storage solutions with the following types of testing:
- Limits finding: determining the workload conditions that drive performance below minimal thresholds, and documenting storage behavior at the failure point
- Functional testing: the investigation under simulated load of various functions of the storage system (e.g., backup, etc.)
- Error injection: the investigation under simulated load of specific failure scenarios (e.g., fail-over when a drive or controller fails)
- Soak testing: the observation of the storage system under load sustained over extended periods of time (e.g., 3 days, 1 week, or more)
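As one example, limits finding can be sketched as a load ramp that stops when latency crosses a threshold. The latency function below is a toy stand-in for driving real load against an array and measuring response time:

```python
def find_limit(measure_latency_ms, start_iops=10_000, step=10_000,
               max_latency_ms=5.0):
    """Ramp offered load until latency exceeds the threshold,
    returning the last sustainable load level (in IOPS).

    `measure_latency_ms` stands in for a real load generator;
    all numbers here are hypothetical.
    """
    load = start_iops
    last_good = 0
    while measure_latency_ms(load) <= max_latency_ms:
        last_good = load   # this load level was sustainable
        load += step       # increase offered load and retry
    return last_good

# Toy latency model: latency grows sharply past ~80k IOPS
toy = lambda iops: 1.0 + (iops / 20_000) ** 3 * 0.05
print(find_limit(toy))  # 80000
```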
Performance and load testing with workload profiles can be used to tune and validate flash and hybrid storage infrastructure in critical areas, including:
- Properly pre-conditioning flash arrays to create a realistic application state prior to applying load.
- Stressing and profiling specific SSD behaviors, such as data compression and deduplication, as well as other common enterprise features such as clones and snapshots.
- Testing with realistic emulations of application workloads typically deployed with flash storage.
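Pre-conditioning, for instance, amounts to writing the array until measured performance stabilizes, because fresh ("out of box") flash performs unrealistically well until it reaches steady state. A toy sketch, with stand-in functions replacing real test-tool operations:

```python
def precondition(write_pass, measure_throughput, tolerance=0.02, max_passes=20):
    """Run full write passes until throughput stabilizes.

    Returns the number of passes needed to reach steady state
    (throughput change below `tolerance` between passes).
    `write_pass` and `measure_throughput` stand in for real
    test-tool operations; the numbers below are hypothetical.
    """
    prev = measure_throughput()
    for passes in range(1, max_passes + 1):
        write_pass()
        cur = measure_throughput()
        if abs(cur - prev) / prev < tolerance:
            return passes  # steady state reached
        prev = cur
    return max_passes

# Toy model: throughput decays toward a steady-state floor of 400 MB/s
state = {"tp": 1000.0}
def fake_pass(): state["tp"] = 400.0 + (state["tp"] - 400.0) * 0.5
print(precondition(fake_pass, lambda: state["tp"]))  # 7
```

Only after such pre-conditioning do load-test results reflect what the array will actually deliver in production.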
Why is workload profiling important?
Workload profiling can offer vital insights into the existing or planned SSD infrastructure of an organization, empowering storage professionals to optimize cost while assuring performance and reliability goals are met.
With a robust storage performance validation process in place, engineers and architects can optimally select and configure networked storage systems for their workloads by aligning performance requirements to purchase and deployment decisions. With such insight, application performance can be predictably assured and storage costs can be significantly reduced for production storage systems. Workload profiling simply enables storage architects and engineers to make better, more informed decisions by eliminating the guesswork related to storage performance.
Rosenthal is responsible for worldwide marketing at Load DynamiX. Prior to Load DynamiX he held executive positions at Virtual Instruments, Panasas and QLogic and held senior marketing management roles at Inktomi, SGI, and HP.