Dan Sahar

Comparing Apples to Apples (the meaning of 5x)

“Ask any racer, any real racer. It doesn’t matter if you win by an inch or a mile; winning’s winning.” – Dominic Toretto, The Fast and the Furious

In car racing, measuring who takes first place is easy – the first car across the finish line wins. With infrastructure gear it is not that simple: when comparing different solutions, operators are typically bombarded with a long list of parameters, and things only get more complex when the solutions being compared are built from several products.

After extensive research and interviews with over 70 network operators worldwide, we discovered a common theme: quantifying and comparing the different caching solutions on the market is rarely an apples-to-apples exercise. What we realized is that for operators to be able to quantify and compare different solutions, there needs to be common ground – some normalized ruler by which to measure.

In boxing they call this **“pound for pound”** – in networking, we concluded that the easiest measure to normalize on is rack space, i.e. Rack Units (RU).
By normalizing per RU, everyone plays by the same rules, and the normalization also typically tracks the size, cost and power consumption of the solution. This means that when you compare solutions from different vendors, the key metrics to look for are:

  • Classification and Analysis Throughput per RU

    ATpU = {Analysis throughput of solution} / {storage rack units + networking rack units + cache rack units}

  • Video Delivery Throughput per RU

    VTpU = {Video delivery throughput of solution} / {storage rack units + networking rack units + cache rack units}
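To make the definitions concrete, here is a minimal Python sketch of the normalization. The function name and parameters are illustrative, not from any vendor tool; the only assumption is that you can tally the rack units of every storage, networking and cache element in the solution.

```python
def throughput_per_ru(throughput_gbps, storage_ru, networking_ru, cache_ru):
    """Normalize a throughput figure by the solution's total rack-unit footprint."""
    total_ru = storage_ru + networking_ru + cache_ru
    return throughput_gbps / total_ru

# ATpU uses the classification/analysis throughput as the numerator,
# VTpU uses the video delivery throughput; the denominator is the same.
```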

Let’s see how this works with an example: a scenario requiring 20 Gbps of classification and analysis throughput and 5 Gbps of video delivery throughput.

Now suppose a certain solution needs several cache engines, storage devices, switches and management servers, totaling 20 Rack Units, to meet these requirements. The performance figures for that solution are then:

  • ATpU = 20 Gbps / 20 RU = 1 Gbps/RU
  • VTpU = 5 Gbps / 20 RU = 0.25 Gbps/RU
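As a quick sanity check, here is the same arithmetic in a few lines of Python. Only the 20 RU total matters; how the rack units split across element types does not change the result.

```python
total_ru = 20.0          # cache engines + storage + switches + management servers
analysis_gbps = 20.0     # classification and analysis throughput
delivery_gbps = 5.0      # video delivery throughput

print(f"ATpU = {analysis_gbps / total_ru} Gbps/RU")   # ATpU = 1.0 Gbps/RU
print(f"VTpU = {delivery_gbps / total_ru} Gbps/RU")   # VTpU = 0.25 Gbps/RU
```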

Normalizing means that you cannot simply compare a single element of a solution, e.g. a cache engine, as there are typically other components to account for as well, such as storage and switching devices. For a transparent caching solution to work, it must perform the following functions:

  • Identification – finding the content it needs to cache
  • Monitoring – determining which content is popular, i.e. requested repeatedly, and therefore worth caching
  • Storage – locally storing the popular content
  • Delivery – sending the content back to the subscribers
  • Management – controlling and managing the entire solution

What this means is that you must account for all the elements in the solution that cover ALL of these functions – in many legacy solutions this means cache engines, management servers, storage, switches, load balancing gear and more.
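One way to keep the accounting honest is to list every element of the solution alongside the functions it covers and only then sum the footprint. The inventory below is a hypothetical sketch – the element names and RU figures are made up for illustration – but it shows that the denominator includes every box in the rack, not just the cache engines.

```python
# Hypothetical inventory of a multi-box caching solution (illustrative figures only)
solution = [
    {"element": "cache engines (x4)",      "covers": "storage, delivery",      "ru": 8},
    {"element": "storage array",           "covers": "storage",                "ru": 4},
    {"element": "switches (x2)",           "covers": "identification",         "ru": 2},
    {"element": "load balancer",           "covers": "delivery",               "ru": 2},
    {"element": "management servers (x2)", "covers": "monitoring, management", "ru": 4},
]

total_ru = sum(item["ru"] for item in solution)   # the normalization denominator
print(f"Total footprint: {total_ru} RU")          # 20 RU in this made-up example
```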

You must also take into account the operational costs of the solution, i.e. how easy it is to deploy, manage and maintain (obviously, the more point-product elements, the more complex the solution is to manage) – but that is a topic for a separate discussion.

After summing up all the elements, you then need to measure the simultaneous (or concurrent) performance of the entire solution – meaning you can’t simply measure cache-out figures; you also need to measure how many devices are required to process the traffic that eventually leads to cache-out.

At Qwilt, we are confident that we have built the best “pound for pound” transparent caching solution on the market.

But don’t take our word for it – we encourage you to put our devices to the test using the metrics above, and we are confident you will see it for yourself.