Stacked overlays of time series data
System performance testing relies (or should rely) on aggregating large amounts of data from a variety of sources within the supporting infrastructure. It’s often difficult to assess correlations between the performance characteristics of different subsystems by manually trawling through large volumes of text files. Pulling critical metrics from these logs, correlating them by timestamp, and creating a visual representation of the entire experiment is a necessity for a holistic view of system performance.
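The core idea here, correlating metrics from separate subsystem logs by timestamp, can be sketched in a few lines. This is an illustrative Python sketch, not the tool's VBA code, and the log contents and field names are made up for the example:

```python
# Sketch: align metrics from two hypothetical subsystem logs by timestamp
# so they can be plotted on a common time axis. Standard library only.
from datetime import datetime

app_log = [
    ("2013-06-01 10:00:00", 120.0),   # (timestamp, app response time in ms)
    ("2013-06-01 10:00:30", 135.0),
]
db_log = [
    ("2013-06-01 10:00:00", 42.0),    # (timestamp, DB server CPU %)
    ("2013-06-01 10:00:30", 57.0),
]

def parse(rows):
    """Turn (timestamp string, value) pairs into a datetime-keyed dict."""
    return {datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"): v for ts, v in rows}

app, db = parse(app_log), parse(db_log)

# Keep only timestamps present in both logs, sorted chronologically.
merged = [(ts, app[ts], db[ts]) for ts in sorted(app.keys() & db.keys())]
for ts, resp_ms, cpu_pct in merged:
    print(ts, resp_ms, cpu_pct)
```

In practice the logs would be parsed from files and the merged rows handed to a plotting layer, but the join-on-timestamp step is the piece that makes the overlay possible.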
Here’s an example of an Excel-based tool I developed for our performance engineering staff to import data sets from our test environment for a rapid assessment of the results of a test run under a specific set of test conditions:
This overlay graphic is generated by a series of custom VBA macros that create dynamic micro plots for targeted server subsystems across the service layer, application, application server, and database server. Since this is a test environment, we target both the throughput profile and steady state values to gauge the impact of changing specific variables. Here, data from the entire run is represented in light gray lines, and the specified steady state interval is called out in red. This steady state interval can be dynamically adjusted via the slider at the top of the graphic in Excel (not here though – this is an image of the output). As you slide the steady state range back and forth, the corresponding steady state statistics calculated by Excel are automatically adjusted.
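The slider mechanic amounts to recomputing summary statistics over an adjustable window of the full run. A minimal Python sketch of that idea, with made-up sample values standing in for a real throughput series:

```python
# Sketch: recompute statistics over a user-selected steady-state window,
# the way the Excel slider re-scopes the calculated values. The sample
# data below is invented for illustration (ramp-up, plateau, ramp-down).
from statistics import mean

samples = [5.0, 9.0, 14.0, 15.0, 15.5, 14.8, 15.2, 15.1, 9.0, 4.0]  # whole run

def steady_state_stats(data, start, end):
    """Statistics restricted to the selected [start, end) interval."""
    window = data[start:end]
    return {"mean": mean(window), "min": min(window), "max": max(window)}

# e.g. the slider set to cover the plateau, samples 3 through 7
stats = steady_state_stats(samples, 3, 8)
print(stats)  # {'mean': 15.12, 'min': 14.8, 'max': 15.5}
```

Moving the window endpoints and re-running the calculation is all the slider does conceptually; the Excel version just wires that recalculation to a form control.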
Why Excel? I get asked that all the time. I am currently investigating a shift to R as an alternative for dealing with very large data sets, but I have found it easier to ship Excel workbooks to field personnel or customers with custom macros embedded in them, along with a series of buttons that trigger log scrubbing, import, data parsing, and plotting. It’s simply a vehicle for shipping a customized solution to a customer for rapid viewing of large log files.