Tech Note 0023
Recommended hardware for the best performance
MTP/IP applications will transfer data up to the maximum speed of the underlying data path. This will be limited by the slowest component of that path, including network links, hard drives, CPUs, and operating systems. This article provides recommendations for end-point hardware, particularly for ExpeDat and SyncDat servers.
Many factors affect the speed at which data can be transferred. This article focuses on the server-side computing device. See the ExpeDat Performance chapter, or the Tech Notes linked to the left, for a more general discussion of performance issues.
For total network bandwidth of 100 megabits per second or less, any modern desktop computer should provide sufficient hardware throughput. The guidelines below will still be helpful, but at slower network speeds it is more likely that network issues will be the limiting factors in performance.
The slowest component of a computer is usually the data storage device. Hard Disk Drives (HDDs) are often rated according to the speed of their interface, but their real-world transfer speed is much, much slower. Network Attached Storage (NAS) is limited by the speed of its protocol, which is usually much slower still than the physical interface.
Hard disk drives are particularly vulnerable when multiple files are accessed at the same time or in rapid succession. The read-write head must move back and forth between files, which typically costs about 10 milliseconds per round-trip. At one gigabit per second, a 10 millisecond seek represents 1.25 megabytes of forgone data. Because of this seek-time overhead, hard disk drive performance drops off sharply as the number of concurrent files increases.
For example, an HDD that supports 500 megabits per second of throughput on a single file might support only 300 megabits per second for two files (150 each), and only 100 megabits per second for three files (33 each). The exact values vary greatly depending on the drive and filesystem.
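The arithmetic above can be checked with a short sketch. The rates, chunk size, and seek time are illustrative figures taken from this note, not measurements of any particular drive:

```python
# Seek-overhead arithmetic from the figures in this note.
# A 10 ms head round-trip at 1 gigabit per second "costs" the data
# that could have been transferred during the seek.

link_bits_per_sec = 1_000_000_000   # 1 gigabit per second
seek_sec = 0.010                    # typical HDD head round-trip

lost_bytes_per_seek = link_bits_per_sec * seek_sec / 8
print(lost_bytes_per_seek / 1_000_000)  # megabytes lost per seek -> 1.25

# Rough model of aggregate HDD throughput when the head alternates
# between files, reading one chunk of each file between seeks.
# chunk_mb is an assumed scheduling granularity, not a real parameter.
def effective_mbps(single_file_mbps=500, chunk_mb=1.0, concurrent=True):
    if not concurrent:
        return single_file_mbps
    read_sec = (chunk_mb * 8) / single_file_mbps  # time to stream one chunk
    return (chunk_mb * 8) / (read_sec + seek_sec)

print(round(effective_mbps()))  # well below the 500 Mbps single-file rate
```

Even this simplified model shows aggregate throughput collapsing once seeks are interleaved with reads; real drives and filesystems will vary, as the text notes.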
To maximize your storage throughput, minimize the number of files accessed concurrently, prefer solid-state or direct-attached storage over NAS, and verify that the storage device itself is rated for the throughput you need. These storage guidelines apply to any system, but are particularly important for servers supporting multiple concurrent transfers at speeds of 500 megabits per second or more.
For speeds up to about 500 megabits per second, any modern processor of 2 gigahertz or more should be sufficient. Multiple CPUs or cores will be helpful when using encryption or at speeds above one gigabit per second. For speeds of ten gigabits per second or more, at least four cores are recommended (eight with encryption).
There is no substantial advantage to 64-bit over 32-bit.
For speeds above one gigabit per second, larger datagram sizes will greatly reduce operating system overhead and improve efficiency. True ten gigabit networks should support Jumbo frames (9000 MTU) and MTP will automatically switch to Jumbo packets when a transaction exceeds about one gigabit per second.
Always configure your network devices and computers to allow the largest MTU common to all of them. MTP can take advantage of Super Jumbo frames up to 60 kilobytes. See your software documentation for instructions on taking advantage of larger MTU support.
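To see why larger datagrams reduce operating system overhead, consider how many packets the system must process per second at a given line rate. This is a simplified sketch that ignores protocol headers; the sizes are the standard, Jumbo, and Super Jumbo values mentioned above:

```python
# Packets per second the operating system must handle at a given
# line rate, for several datagram sizes (header overhead ignored).

def packets_per_second(bits_per_sec, datagram_bytes):
    return bits_per_sec / (datagram_bytes * 8)

rate = 10_000_000_000  # 10 gigabits per second
for size in (1500, 9000, 60000):  # standard, Jumbo, Super Jumbo
    print(size, round(packets_per_second(rate, size)))
```

At ten gigabits per second, moving from a 1500-byte to a 9000-byte datagram cuts the per-packet workload by a factor of six, which is why Jumbo frames matter at these speeds.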
The most recent stable build of any operating system will generally provide better performance than older versions. For high speed networking with Windows, use Windows Server 2012 or later. For the best performance, particularly at multigigabit speeds, avoid Windows.
Even among Unix systems, some distributions are not tuned for maximum performance "out-of-the-box". For example, FreeBSD requires that you set async mode on the filesystem, while many Linux distributions require that you raise the UDP buffer limits.
If you are using Linux, FreeBSD, Solaris 8, or Solaris 9, see Tech Note 0024 for important information about UDP buffer limits.
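As an illustration of the kind of tuning involved, the commands below sketch the two adjustments mentioned above. The buffer values and mount point are placeholders, not recommendations; see Tech Note 0024 for the authoritative settings:

```shell
# Linux: raise the maximum UDP socket buffer sizes
# (16 MB shown here is only an example value).
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216

# FreeBSD: remount a filesystem with async mode enabled
# (replace /data with your actual mount point).
mount -u -o async /data
```

Settings made with sysctl -w do not survive a reboot; persistent values belong in /etc/sysctl.conf (Linux) or the fstab options field (FreeBSD).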
For minimum operating system version requirements, see Tech Note 0004.
MTP/IP network transport uses only a few megabytes of memory. The system should have at least enough RAM to prevent any virtual memory swapping from other applications. Use of compression or object handlers with ExpeDat will add 16 megabytes for each such transaction by default.
Having extra RAM will improve disk caching and I/O scheduling and may improve performance at higher network speeds. Linux in particular will take advantage of extra RAM to improve write speeds.
For ExpeDat and SyncDat clients, RAM only becomes a significant factor when handling hundreds of thousands of files or more at a time. If your system has at least 2 gigabytes of RAM, the clients will be able to handle more than a million files at a time.
For speeds above one gigabit per second, you must verify that all network and storage components are actually capable of the desired speed. For example, a 10 gigabit per second network with only 1 gigabit per second interface cards will not provide more than 1 gigabit per second of performance.
Bonding multiple one gigabit lines is NOT the same as a true multigigabit network.
See Tech Note 0032 for important guidance on setting up multigigabit network paths.
For virtual machines, the recommendations above apply to whatever portion of the host hardware resources is available to the guest operating system. For the best performance, the virtual machine should run on dedicated hardware (no other virtual machines) with direct access to a locally attached drive (not a virtual or network drive). Latency and CPU load may be slightly elevated under virtualization, but otherwise performance should be close to that of the host hardware.
If both the host and guest operating systems have firewalls and you are running MTP/IP server software, then you may need to configure both the host and guest firewalls. See Tech Note 0002 for general firewall instructions.
If the virtual machine is sharing hardware with other virtual machines, if it is using a virtual disk image, or if it is using network attached storage, then disk I/O may be severely impaired for the reasons given above.
Cloud based machines generally behave like virtual machines, except that you may have more options to scale up the available resources. See Tech Note 0025 for information about Amazon Web Services EC2 instances.
Tech Note History
Mar 25, 2011: Virtual & Cloud