Tech Note 0005
Features for controlling performance in MTP/IP applications
MTP/IP Bandwidth Management Features
MTP/IP is designed to provide optimal network performance under a wide variety of network conditions. In shared network environments, "optimal" may involve choosing priorities amongst different users and applications. By default, MTP/IP tries to strike a balance between its own speed and fairness to other data flows. It has a number of features which allow you to fine-tune this balance and set specific performance policies.
This technical note explains the more common bandwidth management features which allow you to control how MTP/IP performs relative to other network traffic. For details on how to access these features in a specific MTP/IP application, refer to the documentation of each product.
Quality of Service (QoS) Devices
In addition to the settings described below, MTP/IP always obeys throttling signals from firewalls, routers, and other QoS devices. If a rate-limiting device is configured to restrict MTP/IP performance, MTP/IP will back off to the requested level.
When setting up a QoS device to control MTP/IP data flow, start with default values for all of the settings below and do not make unnecessary changes. Remember that MTP/IP uses UDP/IP packets and each server listens on exactly one port. See Tech Note 0002 for more information about configuring port rules.
Aggression

MTP/IP's flow-control algorithms seek to balance a maximum sustainable rate of data flow with minimal disruption to the network. You can control how much disruption MTP/IP will permit by choosing an Aggression level.
For most MTP/IP applications, Aggression is specified as an integer between -3 and 5. A higher Aggression may improve performance at the expense of third-party traffic. A lower Aggression may reduce MTP performance by giving greater preference to third-party traffic.
Aggression affects relative performance compared to other data flows sharing the same network path. If there are no other data flows, then changing Aggression may have little or no effect on transfer speed. Be sure to test Aggression settings under a variety of load conditions to see their full effect.
High Aggression can overstress some devices, leading to worse performance than you would get with a low or normal Aggression. Level 5, in particular, should not be used unless you know that all devices in the path are configured to handle "Jumbo" datagram sizes. Also see MinDatagram below.
Of all the bandwidth management features, Aggression provides the most flexibility and scalability, but is also the least specific.
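The valid range described above can be captured in a small helper. This is an illustrative sketch, not part of any MTP/IP product API; the function name is an assumption, and the range and the Jumbo caveat for level 5 come from this note:

```python
def validate_aggression(level: int) -> int:
    """Check that an Aggression level is within the documented range.

    MTP/IP applications typically accept integers from -3 (most polite
    to other traffic) to 5 (most aggressive). Level 5 should only be
    used when every device in the path handles Jumbo datagram sizes.
    """
    if not -3 <= level <= 5:
        raise ValueError(f"Aggression must be between -3 and 5, got {level}")
    return level
```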
MaxRate

You can set specific limits on MTP/IP's data transfer rate. This allows you to ensure that no one transaction uses more than the specified bandwidth. Some MTP/IP servers also allow you to set a maximum total data rate for all incoming and/or outgoing traffic. This allows you to ensure that no one application uses more than the specified bandwidth.
Setting MaxRate to less than the known available bandwidth ensures that the remainder is available to other transactions or applications. It can also provide a workaround if you have network hardware with limited capacity.
Of all the bandwidth management features, MaxRate is least flexible and scalable because it requires specific knowledge of the network capacities.
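A hard rate cap like MaxRate is commonly implemented as a token bucket. The sketch below is conceptual only (it is not MTP/IP's actual algorithm, and the class name is an assumption): the sender may transmit only when enough byte "tokens" have accumulated, so the long-run throughput never exceeds the configured rate.

```python
import time

class RatePacer:
    """Token-bucket pacer illustrating how a byte-rate cap bounds throughput."""

    def __init__(self, max_rate: float, burst: float):
        self.max_rate = max_rate   # sustained limit, bytes per second
        self.capacity = burst      # maximum token backlog, in bytes
        self.tokens = burst        # start with a full bucket
        self.last = time.monotonic()

    def try_send(self, nbytes: int) -> bool:
        """Return True if nbytes may be sent now; False means wait."""
        now = time.monotonic()
        # Accrue tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.max_rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False
```

A sender would call try_send() before each datagram and sleep briefly when it returns False, yielding the leftover bandwidth to other traffic.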
MaxRTT

When set, MTP/IP will throttle back its data flow whenever the round-trip delay of the network path exceeds the specified number of milliseconds. This allows you to ensure that MTP will not cause network latency to become excessive.
MaxRTT is recommended if you are running applications which are sensitive to high latency. For example, if you are running voice-over-IP or video streaming across the same path as your MTP/IP transactions, setting MaxRTT will minimize MTP/IP's impact on those applications.
MaxRTT can also be used to prioritize other traffic above MTP/IP in a manner more scalable than MaxRate.
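The general shape of latency-based throttling can be sketched as follows. This is not MTP/IP's actual algorithm, and the function name and multipliers are assumptions; it only illustrates the feedback loop a MaxRTT ceiling creates: back off sharply when latency breaches the ceiling, probe gently otherwise.

```python
def adjust_rate(current_rate: float, measured_rtt_ms: float,
                max_rtt_ms: float, floor: float = 1024.0) -> float:
    """Illustrative latency-based throttle.

    When the measured round-trip time exceeds the MaxRTT ceiling, cut
    the send rate multiplicatively so queues drain quickly; otherwise
    probe upward gently to reclaim bandwidth when latency is healthy.
    """
    if measured_rtt_ms > max_rtt_ms:
        return max(floor, current_rate * 0.5)   # back off quickly
    return current_rate * 1.05                  # probe for more bandwidth
```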
MaxDatagram

Larger network datagrams are more efficient for end-point CPUs, but can cause problems for some network devices. MTP/IP attempts to discover the largest datagram that each network path will support. It does this very quickly and very conservatively, so it will make the right choice in most network environments.
It is very rare for MaxDatagram to require adjustment.
Some network hardware may permit larger datagrams, but then behave erratically or suffer degraded performance. For example, some high-security encrypted VPNs are known to suffer poor performance or even crash when exposed to large datagrams. This can also happen whenever one VPN is tunneled through another, such as IPsec running inside another VPN.
Unless Aggression is set to 5, the default maximum is 1408 bytes. Headers for MTP, UDP, and IP add another 56 bytes. This leaves 36 bytes between the IP packet size and the 1500 MTU frame size used by most link layers. Most IPsec and other packet-level VPNs typically add 20 bytes, so there is enough room left for one such layer of virtualization.
If MTP is reporting checksum errors, or if you know or suspect that there are multiple virtualization layers or that you are using network hardware which is sensitive to "packet fragmentation", you may need to lower MaxDatagram. Try values such as 1280, 768, or 520. For maximum optimization, use packet sniffing software to look for fragmented UDP packets and adjust MaxDatagram to the highest value which avoids fragmentation.
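The arithmetic above can be captured in a small check. This is an illustrative helper, not part of any MTP/IP product: the function name is an assumption, the 56-byte header overhead is the MTP+UDP+IP total cited in this note, and vpn_overhead models extra bytes added by tunnels such as IPsec.

```python
def fits_in_mtu(max_datagram: int, mtu: int = 1500,
                header_overhead: int = 56, vpn_overhead: int = 0) -> bool:
    """Check whether an MTP payload of max_datagram bytes avoids IP
    fragmentation on a path with the given MTU and tunnel overhead."""
    return max_datagram + header_overhead + vpn_overhead <= mtu
```

In practice, a packet sniffer remains the authoritative check: lower MaxDatagram until fragmented UDP packets no longer appear on the wire.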
MinRTT

MTP/IP is normally able to detect the latency of a path with little to no overhead or data duplication, even in very high-latency environments. But if the base latency of a path is known to be far over 500 milliseconds, you may gain some performance advantage by notifying MTP of this condition.
It is very rare for MinRTT to require adjustment.
This value should be set to the smallest observable round trip time of the path, as measured when the path is otherwise idle.
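When deriving that value from probe measurements (for example, a series of ping times taken while the path is idle), use the minimum rather than the mean. This hypothetical helper illustrates why: a single queued probe inflates the average but not the minimum, and MinRTT should reflect only the base latency of the path.

```python
def min_rtt_ms(samples: list[float]) -> float:
    """Return the smallest round-trip time, in milliseconds, from a
    series of probes taken while the path is otherwise idle."""
    if not samples:
        raise ValueError("need at least one RTT sample")
    return min(samples)
```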
MinDatagram

If your network path is very fast (gigabit or more) and every device along that path supports Jumbo frames (MTU 9000) or Super Jumbo frames (MTU up to 65536), then you may be able to reduce I/O overhead and improve speed by advising MTP/IP to use larger datagrams. This can be accomplished by setting MinDatagram to 8192 for Jumbo frame networks, or up to 61440 for Super Jumbo. MTP will automatically attempt to use 8192 byte payloads when an individual transfer exceeds 1 gigabit per second, unless MaxDatagram is set lower. If any component of the path does not fully support large datagrams, their use may cause severe performance degradation or loss of connectivity.
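The MTU tiers described above can be summarized in a small helper. This is a hypothetical sketch, not a product API; the payload values come from this note, and 0 here is an assumed sentinel meaning "leave MinDatagram at its default":

```python
def suggested_min_datagram(path_mtu: int) -> int:
    """Map a verified end-to-end path MTU to a MinDatagram suggestion.

    Only use a non-default value when EVERY device on the path is
    confirmed to support the larger frame size; otherwise large
    datagrams can severely degrade or break connectivity.
    """
    if path_mtu >= 65536:
        return 61440   # Super Jumbo frames
    if path_mtu >= 9000:
        return 8192    # Jumbo frames
    return 0           # standard MTU: do not override the default
```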
Site Specific Settings
Many MTP/IP applications allow you to set features based on the DNS name or IP address of the client or server being used. This allows you to tune the values above based on the different characteristics of different systems and network paths. Refer to the documentation of each product for details. In most cases, you can find this listed as the "SiteOptions" variable of the configuration file.
You can also use Quality of Service devices to regulate performance based on network rules, as described above.
Additional, more obscure, performance settings are available in MTP/IP. Most of these are accessible only through the Software Development Kits. If there is a specific bandwidth management or performance tuning feature you need, please let us know.
Tech Note History
Sep 12 2014: QoS Devices, Updated MinDatagram & MaxDatagram
Mar 26 2012: 3.15.2 Default MaxDatagram
Oct 03 2008: Note Aggr 5