Data Expedition, Inc.
Articles, events, announcements, and blogs
One of the most common questions we are asked is, "How long will it take to send my XX gigabyte file if I use your software?" The answer comes from a very simple equation:
Quantity divided by Rate equals Time.
Given the quantity (the size of the file), we still need to know the rate (the speed of the path) to answer the question (the transfer time). Our performance calculator is just that simple. But there is a very good reason why many people fail to grasp that simplicity.
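That arithmetic can be sketched in a few lines. The function name and the decimal-gigabyte convention here are illustrative assumptions, but the calculation also highlights the one trap in an otherwise trivial equation: file sizes are usually quoted in bytes while link speeds are quoted in bits.

```python
def transfer_time_seconds(size_gigabytes, rate_megabits_per_s):
    """Quantity / Rate = Time.

    Unit trap: file sizes are usually measured in bytes, link speeds
    in bits per second, and there are 8 bits per byte.
    """
    size_megabits = size_gigabytes * 1000 * 8  # GB -> megabits
    return size_megabits / rate_megabits_per_s

# A 10 GB file over a 100 Mbit/s path:
# 10 * 1000 * 8 / 100 = 800 seconds, a bit over 13 minutes.
print(transfer_time_seconds(10, 100))  # 800.0
```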
Here are the questions asked by one of our competitors on their performance calculator:
Somehow the simple two-input equation has morphed into four or maybe eight inputs. Can the number of variables be a variable? It's no wonder that people come to us confused.
The reason some of our competitors ask these and other questions is that they want to show a comparison. They ask about distance, when they really mean latency, because that is something the traditional TCP transport protocol is sensitive to. They ask about packet loss because it is a number they can plug into an equation to make TCP look bad compared to claims about their own technology. Interestingly, they do not ask which TCP you are using; they just assume it's something like a decades-old protocol stack that last shipped with Windows 98.
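The equation such comparisons typically lean on is the classic Mathis et al. model, which bounds steady-state throughput for a loss-based TCP by segment size, round-trip time, and loss rate. A minimal sketch (the function name and example figures are mine, not anything from a competitor's calculator):

```python
import math

def mathis_tcp_bound(mss_bytes, rtt_seconds, loss_rate):
    """Upper bound on steady-state TCP throughput in bytes/second,
    per the Mathis et al. model:

        rate <= (MSS / RTT) * (C / sqrt(p))

    where C ~= sqrt(3/2) for periodic loss. Note that RTT sits in the
    denominator: this is why classic loss-based TCP is latency-sensitive.
    """
    C = math.sqrt(3.0 / 2.0)  # ~1.22
    return (mss_bytes / rtt_seconds) * (C / math.sqrt(loss_rate))

# 1460-byte segments, 100 ms RTT, 0.1% loss:
bits_per_second = mathis_tcp_bound(1460, 0.100, 0.001) * 8
print(round(bits_per_second / 1e6, 1))  # roughly 4.5 Mbit/s
```

Plugging a ping-measured loss rate into this bound is exactly the move criticized below: the formula describes one family of congestion-control algorithms under steady random loss, not what a modern, well-tuned stack will actually do on your path.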
None of those things matter. If you are looking to improve your data transfer performance, the one thing you are probably sure of is how fast (or slow) it is already going. You don't need a calculator to tell you that. The real point of asking all those questions is to claim that those extra variables don't (much) affect acceleration. While I might quibble over the wisdom of completely ignoring critical network congestion signals, there is a kernel of truth there: base latency should have no direct effect on throughput and simple ping loss measurements tell you little or nothing about the actual congestion levels in the network.
However, figuring out exactly how fast data will travel on a real network path is difficult, very difficult. In fact, it can't be done with mere statistics. It takes real-time, millisecond-precision observations of network behavior to understand a path's capability. That is what our software does. File size divided by path speed gives you a goal: this is how fast your path should go. You already know the speed of your existing transfer methods, and you are probably here because that goal isn't being reached. So your real question is: how close to that ideal speed will our software get?
As the only acceleration vendor to provide fully functional, immediately downloadable, free trials of our software, we are not afraid to answer that question.