Data Expedition, Inc. ®

Tech Notes: Network Performance

Page Index: Slow Networks, Operating System, Gigabit+ Networks, Virtual & Cloud

Tech Note History:
Jul 23, 2012: Slow Devices
Mar 25, 2011: Virtual & Cloud, Disk Images
Oct 13, 2010: First Post

Recommended Hardware

MTP/IP applications will transfer data up to the maximum speed of the underlying data path.  This will be limited by the slowest component of that path, including computing hardware such as hard drives, CPUs, and operating systems.  This article provides recommendations for end-point hardware, particularly for ExpeDat and SyncDat servers.

Many factors affect the speed at which data can be transferred.  This article focuses on the server-side computing device.  See the ExpeDat Performance chapter, or the related Tech Notes, for a more general discussion of performance issues.

Slow Networks

For total network bandwidth of 50 megabits per second or less, almost any modern desktop computer should provide sufficient hardware throughput.  The guidelines below will still be helpful, but at slower speeds it is more likely that network issues will be the limiting factors in performance.


Storage

The slowest component of a computer is usually the data storage device.  Hard Disk Drives (HDDs) are often rated according to the speed of their interface, but their real-world, sustained transfer speed is far slower than that rating.

Hard disk drives are particularly vulnerable when multiple files are being accessed at the same time or in rapid succession.  The read-write head must move back and forth between files, which typically takes 10 milliseconds per round-trip.  At one gigabit per second, 10 milliseconds represents a delay of 1.25 megabytes of data.  Because of this seek-time overhead, hard disk drive performance drops off steeply as the number of concurrent files increases.

For example, an HDD which supports 500 megabits per second of throughput on a single file might support only 300 megabits per second for two files (150 each), and only 100 megabits per second for three files (33 each).  The exact values will vary greatly depending on the drive and filesystem.
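
The seek-overhead arithmetic above can be sketched numerically.  The model below is illustrative only: the 10 millisecond seek and 500 megabit per second single-file rate come from the text, while the 4 megabyte shared readahead window is an assumption, and real drives will vary widely.

```python
# Worked numbers behind the seek-time discussion.  Constants are
# illustrative assumptions, not measurements of any specific drive.

SEEK_S = 0.010            # 10 ms head round-trip between files
SINGLE_FILE_MBPS = 500.0  # sustained throughput on one sequential file

# At 1 gigabit per second, a 10 ms seek is worth 1.25 megabytes of data:
lost_mb = (1_000 * SEEK_S) / 8   # 1000 Mbps * 0.010 s / 8 bits per byte
print(f"Data 'lost' per seek at 1 Gbps: {lost_mb} MB")

def concurrent_throughput(n_files, readahead_mb=4.0):
    """Round-robin model: each visit reads a share of the readahead
    window, then pays one seek to move to the next file."""
    if n_files == 1:
        return SINGLE_FILE_MBPS
    chunk_s = (readahead_mb / n_files) * 8 / SINGLE_FILE_MBPS
    return SINGLE_FILE_MBPS * chunk_s / (chunk_s + SEEK_S)

for n in (1, 2, 3, 4):
    total = concurrent_throughput(n)
    print(f"{n} file(s): {total:3.0f} Mbps total ({total / n:3.0f} each)")
```

Even this simple model shows total throughput falling as streams are added, because an ever larger fraction of each second is spent moving the head rather than transferring data.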

To maximize your storage throughput, follow these guidelines:

  • Use directly attached storage:
    Network attached storage of any kind will be limited by the speed of the network protocol.  Windows file sharing (CIFS) is particularly inefficient.  NFS can introduce file integrity problems when configured for high performance.  See Tech Note 0029 if you must use NAS.
  • Avoid disk images:
    Disk images are virtual disks stored as files in a "real" filesystem.  If you must use a disk image, create it with storage pre-allocated.  Disk images which dynamically expand have much higher CPU and I/O overhead for both reading and writing.  As expanding images grow, performance will become severely impaired.
  • Solid State Drives (SSDs) are ideal:
    SSDs have effectively zero seek time.  Their total throughput does not diminish as the number of concurrent transfers increases.  If SSDs are not practical due to cost or size limitations, read on.
  • Choose HDDs with the lowest seek time:
    As noted above, the time it takes for a disk drive to move its read-write head between files will dominate performance whenever multiple transfers are occurring at the same time or in rapid succession.  If SSDs are not an option, look for disk drives with the lowest average seek time, preferably 2ms or less.
  • Avoid slow or high latency storage:
    Targeting files on very slow devices such as optical disks, tape drives, or network attached storage across a WAN, or on any other filesystem with extreme latency, may result in severely reduced performance or loss of connectivity.  Targeting a slow filesystem on the server may affect the performance and connectivity of all clients accessing that server, even those targeting other filesystems.
  • Do not partition HDDs:
    Partitioning a hard disk drive will greatly increase the amount of head movement, particularly if the files being accessed are on separate partitions from each other or the operating system.
  • Use multiple HDDs:
    Ideally, you would have each transfer targeting a different physical HDD plus a separate HDD for the operating system.  A somewhat more practical strategy may be to have separate HDDs for each high-speed user, or group of users.  The more you can split the traffic amongst multiple drives, the better the concurrent performance will be.
  • Hardware RAIDs might help:
    It is possible to use a hardware RAID to achieve high throughput, but it must be correctly configured for high speed, high concurrency performance.  An incorrectly configured RAID will be slower than a consumer HDD!  See Tech Note 0018 for details about RAID configuration.
  • Tune your operating system:
    Many operating systems and filesystems are not well optimized for disk or network performance out-of-the-box.  Configure filesystems for asynchronous performance.  Maximize write-behind and read-ahead caching.  Update to the latest stable drivers for your disk controller.  Make sure kernel UDP buffers are configured to allow at least 1 megabyte (see Tech Note 0024 for details about UDP buffer limits).
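
As a quick check of the last point, you can ask the kernel for a 1 megabyte UDP receive buffer and see what it actually grants.  This is a minimal sketch using standard sockets; on stock Linux the result is capped by net.core.rmem_max, so a smaller answer than requested suggests the limit needs raising (see Tech Note 0024).  Note that Linux reports roughly double the granted value because it includes bookkeeping overhead.

```python
# Request a 1 MB UDP receive buffer and report what the kernel granted.
import socket

REQUEST = 1 << 20  # 1 megabyte

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, REQUEST)
granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
s.close()

print(f"requested {REQUEST} bytes, kernel granted {granted} bytes")
```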

These storage guidelines apply to any system, but are particularly important for servers supporting multiple concurrent transfers at speeds of 500 megabits per second or more.


CPU

For speeds up to about 500 megabits per second, any modern processor of 2 gigahertz or more should be sufficient.  For faster speeds, get the fastest processor you can.  Because the network interface is a serial device, having multiple CPUs or cores will not speed up network traffic, but they may help prevent other CPU-intensive tasks from slowing it down.

There is no particular advantage to 64-bit over 32-bit.  At this time, most MTP/IP applications are compiled for 32-bit.


RAM

MTP/IP network transport uses very little memory by itself.  The system should have at least enough RAM to prevent virtual memory swapping by other applications.  The ExpeDat server, for example, uses only a few megabytes on launch and only about 100 kilobytes for each uncompressed transaction.  Use of compression will add 16 megabytes per compressed transaction by default.
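
These figures make a back-of-the-envelope memory estimate easy.  The sketch below uses the per-transaction numbers from the text; the 5 megabyte launch footprint stands in for "a few megabytes" and is an assumption.

```python
# Rough ExpeDat server memory estimate from the figures above.
BASE_MB = 5          # assumed launch footprint ("a few megabytes")
UNCOMPRESSED_KB = 100  # per uncompressed transaction
COMPRESSED_MB = 16     # per compressed transaction (default)

def server_ram_mb(uncompressed, compressed):
    """Estimated resident megabytes for the given concurrent transactions."""
    return BASE_MB + uncompressed * UNCOMPRESSED_KB / 1024 + compressed * COMPRESSED_MB

print(f"{server_ram_mb(50, 0):.0f} MB for 50 uncompressed transactions")
print(f"{server_ram_mb(10, 10):.0f} MB for 10 uncompressed + 10 compressed")
```

The compressed-transaction term dominates quickly, which is why compression is the main thing to budget RAM for.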

Having extra RAM will improve disk caching and I/O scheduling and may improve performance at higher network speeds.  Linux in particular will take advantage of extra RAM to improve write speeds.

For ExpeDat and SyncDat clients, RAM only becomes a factor when handling hundreds of thousands of files or more at a time.  A system with at least 4 gigabytes of RAM can handle more than a million files at a time.

Operating System

In most cases, the hardware factors above have a greater influence on performance than the choice of operating system.  However, not all operating systems are tuned for maximum performance "out-of-the-box".  For example, FreeBSD requires that you set async mode on the filesystem while some Linux distributions require that you raise the UDP buffer limits.

If you are using Linux, FreeBSD, NetBSD, Solaris 8 or 9, or Mac OS X 10.4, see Tech Note 0024 for important information about UDP buffer limits.
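
On Linux, the kernel-wide ceilings that Tech Note 0024 discusses can be inspected directly.  This sketch assumes a Linux system with the usual /proc interface; on other platforms the files simply will not exist.

```python
# Linux-only: read the kernel's socket buffer ceilings from /proc.
# A UDP socket cannot be granted more than these values, no matter
# what an application requests.
from pathlib import Path

for name in ("rmem_max", "wmem_max"):
    path = Path("/proc/sys/net/core") / name
    if path.exists():
        print(f"net.core.{name} = {int(path.read_text())} bytes")
    else:
        print(f"net.core.{name}: not available on this platform")
```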

As of this writing, Linux is believed to provide the best performance for high-speed file transfer with ExpeDat and SyncDat.  However, this is highly dependent on exactly which version of an operating system is installed, which filesystem is being used, and how they are tuned.

In general, using the most recent stable build of any operating system will provide better performance.  Windows and Mac OS X have particularly improved performance in recent versions, though they still lag behind most Unix systems.

If possible, consult your operating system vendor for instructions on tuning for maximum UDP network and filesystem performance.

For minimum operating system version requirements, see Tech Note 0004.

Multi-Gigabit Networking

For network speeds over 1 gigabit per second the I/O interaction between the system motherboard, CPU, filesystem, and operating system can become very complex.  Careful attention must be paid to all of the factors above as well as the particular network and user scenario.

When you have many incoming transfers whose total throughput may approach ten gigabits per second, it may be necessary, or more cost-effective, to divide the traffic amongst multiple physical servers.

Virtual & Cloud Machines

For virtual machines, the recommendations above apply to whatever portion of the host hardware's resources is available to the guest operating system.  For the best performance, the virtual machine should run on dedicated hardware (no other virtual machines) with direct access to a directly attached drive (not a virtual or network drive).  Latency and CPU load may be slightly elevated under these conditions, but otherwise performance should be close to that of the host hardware.

If both the host and guest operating systems have firewalls and you are running MTP/IP server software, then you may need to configure both the host and guest firewalls.  See Tech Note 0002 for general firewall instructions.

If the virtual machine is sharing hardware with other virtual machines, if it is using a virtual disk image, or if it is using network attached storage, then disk I/O may be severely impaired for the reasons given above.

Cloud based machines generally behave like virtual machines, except that you may have more options to scale up the available resources.  See Tech Note 0025 for information about Amazon Web Services EC2 instances.