Recommended Hardware

MTP/IP applications will transfer data up to the maximum speed of the underlying data path.  This will be limited by the slowest component of that path, including network links, hard drives, CPUs, and operating systems.  This article provides recommendations for end-point hardware, particularly for ExpeDat and SyncDat servers.

Many factors affect the speed at which data can be transferred.  This article focuses on the server-side computing device.  See the ExpeDat Performance chapter, or the Tech Notes linked to the left, for a more general discussion of performance issues.

Slow Networks

For total network bandwidth of 100 megabits per second or less, any modern desktop computer should provide sufficient hardware throughput.  The guidelines below will still be helpful, but at slower network speeds it is more likely that network issues will be the limiting factors in performance.

Storage

The slowest component of a computer is usually the data storage device.  Hard Disk Drives (HDDs) are often rated according to the speed of their interface, but their real-world transfer speed is much, much slower.  Network Attached Storage (NAS) is limited by the speed of its protocol, which is usually far slower than the physical interface.

Hard disk drives are particularly vulnerable when multiple files are being accessed at the same time or in rapid succession.  The read-write head must move back and forth between files, which typically takes 10 milliseconds per round-trip.  At one gigabit per second, 10 milliseconds represents a delay of 1.25 megabytes of data.  Because of this seek-time overhead, hard disk drive performance drops off sharply as the number of concurrent files increases.

For example, an HDD which supports 500 megabits per second of throughput on a single file might support only 300 megabits per second for two files (150 each), and only 100 megabits per second for three files (33 each).  The exact values will vary greatly depending on the drive and filesystem.
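
As a quick check of the seek-time arithmetic above, the following Python sketch computes how much link capacity a single head movement consumes.  The numbers are the illustrative figures from this section, not measurements.

    # Rough illustration of the seek-time overhead described above.
    # The link speed and 10 ms seek figure come from the text; real
    # values vary by drive and filesystem.

    link_bits_per_second = 1_000_000_000   # one gigabit per second
    seek_seconds = 0.010                   # ~10 ms per head round-trip

    # Link capacity that passes by while the head is still seeking:
    bytes_per_seek = link_bits_per_second / 8 * seek_seconds
    print(f"{bytes_per_seek / 1_000_000:.2f} megabytes per seek")   # 1.25 megabytes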

To maximize your storage throughput, follow these guidelines:

  • Use directly attached storage, not Network Attached Storage:
    Network attached storage of any kind will be limited by the speed of the network protocol.  Windows file sharing (CIFS) is particularly inefficient, although SMB3 is much better than earlier versions.  NFS can introduce file integrity problems when configured for high-performance.  See Tech Note 0029 if you must use NAS.  Storage Area Network (SAN) protocols, such as iSCSI or Fibre Channel, perform much better than NAS protocols but are still not as fast as directly attached storage.
  • Avoid disk images:
    Disk images are virtual disks stored as files in a "real" filesystem.  If you must use a disk image, create it with storage pre-allocated.  Disk images which dynamically expand have much higher CPU and I/O overhead for both reading and writing.  As expanding images grow, performance will become severely impaired.
  • Solid State Drives (SSDs) are ideal:
    SSDs have an effective seek time of 0.  Their total throughput does not diminish as the number of concurrent transfers increases.  If SSDs are not practical due to cost or size limitations, read on.
  • Choose HDDs with the lowest seek time:
    As noted above, the time it takes for a disk drive to move its read-write head between files will dominate performance whenever multiple transfers are occurring at the same time or in rapid succession.  If SSDs are not an option, look for disk drives with the lowest average seek time, preferably 2ms or less.
  • Avoid slow or high latency storage:
    Targeting files on very slow filesystems or devices such as optical disks, tape drives, network attached storage across a WAN, or any other filesystem with extreme latency may result in severely reduced performance or loss of connectivity.  Targeting slow filesystems on the server may affect the performance and connectivity of all clients accessing that server, even those targeting other filesystems.
  • Do not partition HDDs:
    Partitioning a hard disk drive will greatly increase the amount of head movement, particularly if the files being accessed are on separate partitions from each other or the operating system.
  • Use multiple HDDs:
    Ideally, you would have each transfer targeting a different physical HDD plus a separate HDD for the operating system.  A somewhat more practical strategy may be to have separate HDDs for each high-speed user, or group of users.  The more you can split the traffic amongst multiple drives, the better the concurrent performance will be.
  • Hardware RAIDs might help:
    It is possible to use a hardware RAID to achieve high throughput, but it must be correctly configured for high speed, high concurrency performance.  An incorrectly configured RAID will be slower than a consumer HDD!  See Tech Note 0018 for details about RAID configuration.
  • Tune your operating system:
    Many operating systems and filesystems are not well optimized for disk or network performance out-of-the-box.  Configure filesystems for asynchronous performance.  Maximize write-behind and read-ahead caching.  Update to the latest stable drivers for your disk controller.  Make sure kernel UDP buffers are configured to allow at least 1 megabyte (see Tech Note 0024 for details about UDP buffer limits).  A quick way to check the buffer size your kernel will actually grant is sketched after this list.
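
For the UDP buffer guideline in particular, the short Python sketch below asks the operating system for a 1 megabyte receive buffer and prints what was actually granted.  It is a generic check of kernel limits, not something MTP/IP requires you to run.

    # Minimal probe of the kernel's UDP receive buffer limit.  The
    # 1 megabyte target is the guideline given above.
    import socket

    TARGET = 1 * 1024 * 1024   # 1 megabyte

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, TARGET)
    except OSError:
        pass   # some systems reject requests above the kernel limit outright
    granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
    s.close()

    # Linux reports roughly double the usable size for bookkeeping
    # overhead.  A value far below the target means the kernel limit
    # (for example net.core.rmem_max on Linux) needs to be raised;
    # see Tech Note 0024.
    print(f"requested {TARGET} bytes, kernel granted {granted} bytes")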

These storage guidelines apply to any system, but are particularly important for servers supporting multiple concurrent transfers at speeds of 500 megabits per second or more.

CPU

For speeds up to about 500 megabits per second, any modern processor of 2 gigahertz or more should be sufficient.  Multiple CPUs or cores will be helpful when using encryption or at speeds above one gigabit per second.  For speeds of ten gigabits per second or more, at least four cores are recommended (eight with encryption).

There is no substantial advantage to 64-bit over 32-bit.

MTU

For speeds above one gigabit per second, larger datagram sizes will greatly reduce operating system overhead and improve efficiency.  True ten gigabit networks should support Jumbo frames (9000 MTU) and MTP will automatically switch to Jumbo packets when a transaction exceeds about one gigabit per second.

Always configure your network devices and computers to allow the largest MTU common to all of them.  MTP can take advantage of Super Jumbo frames up to 60 kilobytes.  See your software documentation for instructions on taking advantage of larger MTU support.
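
To see why larger datagrams matter at these speeds, the sketch below estimates how many packets per second the operating system must process at ten gigabits per second for standard versus Jumbo MTUs.  The 28-byte header figure is a simplifying assumption (IPv4 plus UDP only, ignoring link-layer framing).

    # Approximate per-second packet rates at 10 gigabits per second
    # for standard (1500) and Jumbo (9000) MTUs.

    def packets_per_second(rate_bits_per_second, mtu, header_bytes=28):
        payload = mtu - header_bytes
        return rate_bits_per_second / 8 / payload

    for mtu in (1500, 9000):
        pps = packets_per_second(10_000_000_000, mtu)
        print(f"MTU {mtu}: about {pps:,.0f} packets per second")

    # Roughly 850,000 packets per second at MTU 1500 versus about
    # 140,000 at MTU 9000 -- about six times less per-packet work
    # for the operating system.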

Operating System

The most recent stable build of any operating system will generally provide better performance.  For high speed networking with Windows, you should avoid XP or Vista and instead use Windows 7, Server 2012, or later.  For the best performance, particularly at multigigabit speeds, avoid Windows entirely.

Even amongst Unix systems, some distributions are not tuned for maximum performance "out-of-the-box".  For example, FreeBSD requires that you set async mode on the filesystem, while some Linux distributions require that you raise the UDP buffer limits.

If you are using Linux, FreeBSD, Solaris 8, or Solaris 9, see Tech Note 0024 for important information about UDP buffer limits.

For minimum operating system version requirements, see Tech Note 0004.

RAM

MTP/IP network transport uses only a few megabytes of memory.  The system should have at least enough RAM to prevent any virtual memory swapping from other applications.  Use of compression or object handlers with ExpeDat will add 16 megabytes for each such transaction by default.

Having extra RAM will improve disk caching and I/O scheduling and may improve performance at higher network speeds.  Linux in particular will take advantage of extra RAM to improve write speeds.

For ExpeDat and SyncDat clients, RAM only becomes a significant factor when handling hundreds of thousands of files or more at a time.  If your system has at least 2 gigabytes of RAM, the clients will be able to handle more than a million files at a time.
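
As a rough sizing sketch using the figures above, the following example estimates server RAM for a workload of concurrent compressed transfers.  The operating-system headroom and the transaction count are illustrative assumptions, not measurements.

    # Back-of-the-envelope RAM estimate: a few megabytes for MTP/IP
    # transport plus 16 megabytes per concurrent transaction using
    # compression or an object handler (default figures from above).

    os_and_other_apps_mb = 1024          # assumed headroom for the OS and other software
    transport_mb = 16                    # "a few megabytes", rounded up
    per_compressed_transaction_mb = 16
    concurrent_compressed = 20           # assumed workload

    estimate_mb = (os_and_other_apps_mb + transport_mb
                   + per_compressed_transaction_mb * concurrent_compressed)
    print(f"about {estimate_mb} megabytes suggested minimum")   # 1360 MB in this example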

Multigigabit Networking

For speeds above one gigabit per second, you must verify that all network and storage components are actually capable of the desired speed.  For example, a 10 gigabit per second network with only 1 gigabit per second interface cards will not provide more than 1 gigabit per second of performance.

Bonding multiple one gigabit lines is NOT the same as a true multigigabit network.

See Tech Note 0032 for important guidance on setting up multigigabit network paths.

Virtual & Cloud Machines

For virtual machines, the recommendations above apply to whatever portion of the host hardware resources are available to the guest operating system.  For the best performance, the virtual machine should be running on dedicated hardware (no other virtual machines) with direct access to a directly attached hard drive (not a virtual or network drive).  Latency and CPU load may be slightly elevated under these conditions, but otherwise performance should be close to that of the host hardware.

If both the host and guest operating systems have firewalls and you are running MTP/IP server software, then you may need to configure both the host and guest firewalls.  See Tech Note 0002 for general firewall instructions.

If the virtual machine is sharing hardware with other virtual machines, if it is using a virtual disk image, or if it is using network attached storage, then disk I/O may be severely impaired for the reasons given above.

Cloud based machines generally behave like virtual machines, except that you may have more options to scale up the available resources.  See Tech Note 0025 for information about Amazon Web Services EC2 instances.