Tech Note 0003

Analyzing Network Performance

Best practices for testing network performance

The data path between any two computers involves dozens, sometimes thousands, of hardware and software devices.  Any one of these may have a substantial impact on performance.

At any given time there is likely to be one factor which most directly limits the maximum speed of a data flow.  Identifying the limiting factor for each data flow is vital to improving performance.

The limiting factor may change with time or in response to changes in other factors.  Changing network conditions may change the relative performance of MTP and other technologies.  Test under a variety of conditions and learn all you can about your network environment.

Testing Tips

Tests should be conducted in the same real-world environments where you plan to deploy the software.  If that is not practical, great care must be taken to ensure that the test environment matches the production environment in equipment, network devices, and traffic patterns.  Even if the test conditions appear to be very similar to the intended use conditions, there will be many hidden factors which may affect performance.  See Tech Note 0023 for hardware recommendations.

Enable Diagnostic Logging
MTP/IP continuously monitors network conditions.  The resulting statistics can reveal problems in the network infrastructure.  Follow the steps in Tech Note 0033 to enable diagnostic logging (Debug level 1) at the receiving side of your data transfers.
Avoid Emulators
Network emulators use statistical models that are designed around TCP and TCP-like network traffic.  While emulators can be useful, they must be carefully configured using statistics appropriate to the traffic being tested.  DEI strongly recommends testing MTP in real-world environments whenever possible.  If you must use an emulator, carefully read Tech Note 0022 for information on programming it with the best possible data.
Keep Records
Record complete details of your testing so that you know exactly what you did and what effect it had.  Variations which seem inconsequential at the time, may provide our engineers with vital clues to understanding your network.
Configure Firewalls, NAT, and VPN Devices
Firewalls, NAT gateways, and other such devices are designed to interfere with network traffic.  They may selectively block or degrade MTP/IP applications.  They are often configured to interfere with some traffic more than others.  For testing purposes, you must disable, or at least be aware of, any "bandwidth management", "packet shaping", "traffic shaping", "stateful security", "dynamic security", or other features which selectively slow down traffic.  Once a baseline is established, you can then re-enable these features one at a time to observe their effects.  See Tech Note 0002, "Configuring Firewalls" for instructions.
Disable TCP acceleration devices
You may already be using a hardware or software device to "accelerate" your network.  Such systems may be configurable to work with MTP, but for test purposes disable third-party acceleration devices.  You can re-enable them later once you have established a baseline of performance.
Measure Time Independently
Many data transfer products only report data transfer speeds during bursts of network activity, ignoring hangs, error recovery, or cache flushing.  MTP/IP applications report their final speed based on the full time it takes to read data from the source and commit it to storage at the destination.  For the most accurate results, use a stopwatch or a command line utility such as "time" to measure the actual time it takes to complete a transfer.  Be sure to note whether write cache flushing is included (ExpeDat and SyncDat always include write cache flushing).
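As a sketch of independent timing on a Unix-like system, the commands below time a local copy and compute an average rate.  The "cp" is a stand-in for your actual transfer command, and the file names and 50 MB size are illustrative; the "sync" ensures write cache flushing is counted in the elapsed time.

```shell
# Independent timing sketch: substitute your real transfer command for "cp".
dd if=/dev/zero of=/tmp/perf_src bs=1M count=50 2>/dev/null
start=$(date +%s)
cp /tmp/perf_src /tmp/perf_dst
sync                                # include write cache flushing in the timing
end=$(date +%s)
elapsed=$(( end - start ))
[ "$elapsed" -lt 1 ] && elapsed=1   # guard against divide-by-zero on fast disks
echo "50 MB in ${elapsed}s = $(( 50 / elapsed )) MB/s"
rm -f /tmp/perf_src /tmp/perf_dst
```

Using wall-clock time around the whole command, rather than a burst rate reported mid-transfer, is what makes the comparison fair between tools.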
Test Different Times and Different Paths
Network conditions can vary tremendously based on the underlying equipment and the activity of other users.  Whenever possible, conduct tests between multiple locations at varying times of day and days of the week.  A wide variety of test conditions can be extremely helpful in isolating the cause of any problems which arise.

Factors Affecting MTP/IP Performance

The following factors affect MTP/IP directly, and will therefore affect all MTP/IP applications.

Maximum Path Speed
MTP cannot move data faster than the underlying network hardware.  When network hardware is the limiting factor, no software can provide a throughput improvement.  MTP will still provide reliability and transaction speed improvements.  Note that the maximum path speed is the net maximum throughput of the slowest link between two computers.  Very often, network links such as WiFi, cellular, or consumer broadband are rated at speeds much higher than they actually deliver.  Determine your actual path speed before evaluating any performance options.
Network Devices
As described above, firewalls, emulators, routers, NATs, acceleration appliances, and other devices may selectively limit performance of some traffic flows and not others.  Be aware of any such devices and check their configurations carefully.  See Tech Note 0002 for configuration tips.
TCP Relative Speed
TCP's performance may vary substantially due to many factors, including the time of day.  If TCP is operating close to the Maximum Path Speed, then MTP may not offer faster throughput than TCP, because no transport protocol can move data faster than the network itself.
Operating Systems
Different operating systems handle networking in different ways.  Windows is very different from unix systems.  The choice of system also affects how other factors such as CPU Load, Memory, Disk Speed, and Local Link Speed will affect each protocol and application.  See Tech Note 0024 for tunable parameters which affect some unix systems.
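As one illustration of the kind of unix tunable covered in Tech Note 0024, the commands below read the kernel's maximum socket buffer sizes on Linux.  These particular parameters are examples chosen for illustration, not a recommendation from this note, and the paths are Linux-specific.

```shell
# Linux-specific example: read the kernel's maximum socket buffer sizes.
# Values vary by distribution; other unix systems expose tunables differently.
cat /proc/sys/net/core/rmem_max   # largest receive buffer an app may request
cat /proc/sys/net/core/wmem_max   # largest send buffer an app may request
```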
CPU Load
A high CPU load will affect network performance.  Depending on the Operating System, type of software running, and many other factors, one protocol may be affected more than the other.  Using a faster processor and running fewer other applications may improve network performance for both protocols.
Third Party Traffic
Your network traffic must compete with other users of the network.  As more people use a network path, performance for all users will decrease.  This is especially true if the Maximum Path Speed is the limiting factor.  TCP becomes very inefficient when third party traffic is high, causing TCP to consume large amounts of resources while providing poor performance.  MTP will show the largest relative performance improvement at times when third-party traffic is high.  Tech Note 0005 describes how MTP's impact on third-party traffic can be increased or decreased.
Idle Latency (Round Trip Time)
The time it takes for a very small amount of data to travel between computers and back again is called the Round Trip Time (RTT).  Rising RTT is indicative of heavy network use.  If RTT is high (over 500ms), TCP performance will often be poor and MTP performance relatively good.  If RTT is very low (under 10ms), TCP performance may be limited only by the Maximum Path Speed.  MTP can be configured to limit its effect on RTT.  See each application's documentation or contact DEI for details.

Factors Affecting ExpeDat and SyncDat Performance

ExpeDat and SyncDat are normally used to transfer files from the storage system of one computer to the storage system of another computer.  Storage hardware, user interactions, and various features may have additional impacts on performance.

Storage Speed
ExpeDat cannot move data faster than the storage system can read or write the data.  Hard disk drives become drastically less efficient when there is more than one data flow at a time: two parallel high-speed transfers may add up to less total throughput than one!  Solid State Drives (SSD) provide the most consistent performance.  Network disks and SANs may be many times slower than directly attached storage.  RAIDs must be carefully configured to provide maximum performance; see Tech Note 0018.  See the ExpeDat documentation for more information about storage performance.
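A quick way to sanity-check sequential write speed on a Unix-like system is dd with a forced flush.  The size and path below are illustrative, and "conv=fdatasync" requires GNU dd (it forces the data to storage before dd reports its rate, so read the result the same way you would a transfer that includes cache flushing).

```shell
# Rough sequential write check with GNU dd; the final line reports the rate.
dd if=/dev/zero of=/tmp/disk_test bs=1M count=100 conv=fdatasync 2>&1 | tail -1
rm -f /tmp/disk_test
```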
Memory Cache
Most operating systems will use a portion of main memory (RAM) to store data that has been recently read from or written to disk.  Sending the same test data multiple times may provide unrealistically fast results due to read caching.  Write caching can make small files appear to transfer quickly for applications which do not flush their caches, while making large files take much, much longer.  Linux in particular may suffer severe performance problems when writing large files on systems with many gigabytes of RAM.  See Tech Note 0035 for tuning Linux write caches for better performance.
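On Linux, the read cache can be flushed between test runs so that repeated transfers of the same file are not served from RAM.  This is a Linux-only mechanism and writing to drop_caches requires root; the guard below simply reports when run unprivileged.

```shell
# Linux-only: flush the page cache between test runs (requires root).
sync
if [ -w /proc/sys/vm/drop_caches ]; then
    echo 3 > /proc/sys/vm/drop_caches   # 1=page cache, 2=dentries/inodes, 3=both
else
    echo "need root to drop caches"
fi
```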
Compression
Enabling ExpeDat's inline compression feature will cause both the client and server to consume many times more CPU resources.  If the CPU load is the limiting factor, then enabling compression may cause the transfer to go slower, even if the data being transferred is compressible.  Inline compression also limits MTP's ability to cope with packet-loss on high-latency networks, which may reduce performance on those networks even when CPU load is not a problem.  See Tech Note 0014 for more about the effects of compression.
Encryption
Encrypting and decrypting data requires extra CPU processing on both the client and server.  ExpeDat and SyncDat use highly efficient, multi-core encryption which generally does not affect network performance.  But if CPU resources are limited, turning off encryption may provide some improvement.  Note that usernames and passwords are always encrypted, even when content encryption is turned off.
Object Handlers and Piping
Server-side object handlers, such as the CloudDat S3 Gateway, are limited by the speed of the back-end systems they access.  Likewise, if the client-side pipes data to or from another application, performance will be limited by that application's performance.  Object handlers and piping will increase CPU and memory requirements, which may also affect performance.

Factors Affecting MTP Tunnel Performance

The MTP Tunnel application uses MTP/IP to move data between a TCP/IP based server and a TCP/IP based client.  To do this, MTP Tunnel must maintain TCP/IP connections to the server and client at both ends, and copy the data between TCP/IP and MTP/IP.  This introduces several additional factors which may affect performance.

Maximum Server Throughput
MTP Tunnel cannot move data faster than the TCP server sends it.  If the server is limited in the rate at which it sends data, this will limit the rate at which MTP Tunnel can move the data.  Be sure that the server is not limiting the rate at which it delivers data.
Client Throughput
MTP Tunnel cannot move data faster than the client consumes it.  If the client is unable to process incoming data quickly, MTP Tunnel may be forced to pause and wait for the client to be ready.
Copy Overhead
MTP Tunnel must copy data from TCP to MTP at the server side, and then copy the data back from MTP to TCP at the client side.  This causes a delay which may affect performance.  If TCP and MTP performance are close due to MTP/IP specific factors, this extra copying time may make MTP Tunnel slower than ExpeDat or plain TCP.
Buffers
Data moving between TCP and MTP is staged in buffers at each end of the tunnel.  If the network is very fast, or if the client application (such as a web browser) is very slow, larger buffers may be needed.  If you are using MTP Tunnel on a network faster than 1000 megabits per second, you may need to increase its buffer size.  See the MTP Tunnel documentation for details.
Dropped Connections
If either the TCP server or TCP client experiences a problem, it may drop its connection to MTP Tunnel.  If this happens, MTP Tunnel will be forced to stop the transaction.  If you are experiencing dropped connections, investigate the client and server for error messages.

Performance Checklist

If MTP/IP appears to be performing poorly in a particular environment, perform the following checks to determine the cause.  You should read through the entire list before beginning, because some checks may be easier or faster than others to perform.  Most of these checks can be performed using the software included in your MTP/IP product.

Test
Follow the steps in Tech Note 0033 to enable diagnostic logging (Debug level 1) at the receiving side of your data transfers.  This is typically done by setting "Debug 1" or "-d 1".  Perform a data transfer lasting at least 30 seconds and watch for errors and warnings.  Submit the resulting logs to Tech Support for analysis.
Verify
Check your billing or service records to verify the intended capacity of the data lines at both the client and the server.  Remember that some links, especially consumer broadband, have different upload and download speeds.  Even enterprise networks may have different performance characteristics in different directions.  Write down the speeds, link type, and IP addresses.  Compare this to the test results.  Beware of units errors: don't try to compare bits to bytes.  Use independent timing, such as a stopwatch, to measure performance.  Note what percentage of the link capacity is being used by each protocol.  Tech Note 0030 has calculators for estimating data transfer times and converting units.
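The bits-versus-bytes arithmetic is simple enough to script.  The 100 megabit link and 500 megabyte file below are made-up figures for illustration; the estimate is an ideal lower bound with no protocol overhead.

```shell
# Hypothetical figures: a 100 megabit/s link and a 500 megabyte file.
link_mbps=100
file_mb=500
link_MBps=$(( link_mbps / 8 ))        # megabits -> megabytes per second
est_secs=$(( file_mb / link_MBps ))   # ideal transfer time, no overhead
echo "${link_mbps} Mbps = ${link_MBps} MB/s; ${file_mb} MB takes ~${est_secs}s"
```

If a measured transfer takes far longer than this ideal figure, something other than link capacity is the limiting factor.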
Trace
Use the tracert or traceroute commands to measure latency and the number of hops between the client and server.  If you cannot get through in BOTH directions between the client and server, but other traffic does get through, then you KNOW there is a firewall blocking traffic in between.  You can use the mtping utility to detect firewalls which are specifically blocking MTP traffic.
Firewalls, Emulators, NATs, Routers, etc.
Make SURE that the tests are not being affected by firewalls, NAT, or VPN devices.  If possible, disable any software firewalls, NAT programs, VPNs, or other packet filters which may be running on the client or server.  CHECK the configuration of all routers, hubs, DHCP devices, NAT devices, VPNs, and modems to make sure their firewall features are disabled or configured to EXPLICITLY allow BOTH incoming and outgoing traffic on the ports used by your MTP/IP application.  See Tech Note 0002 for a list of default port numbers and guidance on configuring network devices.
VPNs and Tunneling
If a VPN is being used, check its documentation and configuration to verify that it DOES NOT tunnel UDP/IP over TCP/IP.  Devices which tunnel network traffic over TCP/IP, including SSL VPNs, severely impair performance and are not compatible with MTP/IP.  Use an IPsec VPN instead.
Compression
Make SURE that none of the network devices is performing compression.  To test for network compression, build two test files each large enough to take at least 30 seconds to transfer.  One should consist entirely of zeros, the other should consist entirely of random data.  If the file consisting of zeros transfers much faster than the one consisting of random data, then something is compressing data in the network.  If you don't have an easy way to create zero and random data, using a plain text file and a compressed binary file will also work.
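The two test files described above can be built with dd on any Unix-like system.  The file names and the 50 MB size here are arbitrary; pick a count large enough that each transfer takes at least 30 seconds on your link.

```shell
# Build one compressible (all zeros) and one incompressible (random) file.
dd if=/dev/zero    of=zeros.bin  bs=1M count=50 2>/dev/null
dd if=/dev/urandom of=random.bin bs=1M count=50 2>/dev/null
ls -l zeros.bin random.bin
```

Transfer each file over the same path: a large speed difference in favor of zeros.bin means something in the network is compressing your data.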
Network Service Provider
Check with both your client AND server network providers to see if they are (a) blocking any ports, (b) using any kind of network compression technology, (c) using any kind of packet shaping, rate throttling, speed boosting, or bandwidth management technology.  If possible, have them disable these technologies, at least for the purposes of testing.
Hardware
Check that you are using modern networking hardware from reputable vendors.  Older or low-quality equipment may not conform to Internet Protocol standards.  If possible, try testing with different equipment such as CPUs, routers, and modems to see if a particular piece of hardware is causing a problem.

Contact DEI

Feel free to request technical support if you are unable to resolve any performance issues.  Be sure to let DEI's engineers know what steps you have already taken to isolate the problem.

Tech Note History

Jul 30, 2019   Diagnostic Logging; General updates and modernization
Feb 28, 2013   Updated Test Instructions
Dec 02, 2011   MTP Tunnel
Oct 26, 2010   Tech Note 0024
Aug 26, 2010   Emulators; Worksheet Link; General Cleanup
Feb 17, 2009   ExpeDat Compression
Feb 12, 2007   Adapted from PDF