Emulators are devices which use statistical models to simulate certain network characteristics for testing purposes. As with any simulation, the results are only as accurate as the model and its inputs.
DEI strongly recommends testing software in real-world environments. See Tech Note 0003 for guidelines.
Emulator settings which have worked well for TCP/IP applications may not be appropriate for MTP/IP.
If testing in a real-world environment is not possible and an emulator must be used, it is critical that the emulator be correctly configured and that the results be interpreted within the limitations of the device.
Obtaining reliable observations of real-world network speed, latency, and packet-loss can be very tricky. See Tech Note 0021 for advice.
The most common mistakes when configuring an emulator are:
- Using a software, switch, or firewall based emulator instead of a dedicated hardware based emulator. Dedicated hardware is needed to accurately emulate a network, especially at high speed. Software running on a general purpose machine or a device that was not purpose-built for emulation introduces noise, artifacts, and limitations which would not be present in a real-world network.
- Assuming that loss and latency will not vary or will vary randomly. This assumption can be approximately true for isolated TCP/IP applications operating on wide-area networks amongst many other traffic flows. This assumption is absolutely untrue for MTP/IP. See Loss and Latency below.
- Setting the buffer or queue size for only one hop instead of the entire path. Since the emulator is modeling the entire data path, its buffer size must match the total of all the devices in the path. If that is not known, calculate the bandwidth delay product (maximum speed multiplied by maximum round-trip-time) and set the buffer to at least twice that value. See Buffer Size below.
As discussed in Tech Note 0021, loss and latency are almost never random: they vary directly in response to patterns of traffic flow, router buffer availability, and router queuing policies. MTP/IP observes these variations to assess network conditions and adjusts its own behavior accordingly. Therefore if an emulator does not accurately recreate the relationship between traffic flow and loss/latency statistics, it will not accurately reflect how MTP/IP will behave in a real-world environment.
If your goal is to maximize utilization of a network path, then your emulation must account for how loss and latency will change as utilization changes, not just static observations.
The steps below will guide you through setting up an emulation environment which will provide relatively predictive results. These steps assume you are emulating a terrestrial wide-area-network. See the Satellite section at the end for exceptions which may apply for satellite and some other radio communications.
If you have any questions about this Tech Note, or to arrange a free, no-obligation consultation with our technical team, please contact us.
The following settings will provide reasonable results in most emulated environments. The most important thing to remember is that real networks are not random, so your emulation should not be random either.
- Make sure the buffer is large enough
- The data capacity of a real-world path is equal to the sum of all the buffers of all the devices in that path. Your emulator buffer must be at least twice the target speed multiplied by the round-trip latency (the bandwidth delay product). See Buffer Size below for examples.
- Disable artificial packet loss
- Packet loss should occur naturally as a result of congestion. Setting a fixed or random packet-loss emulates a network which is already over-saturated. Disabling packet loss in the emulator does not mean loss will not occur, it just means that the loss will be related to real congestion events.
- Use a fixed latency, with no jitter
- Latency should vary naturally as a result of congestion. Randomly fluctuating latency simply does not happen in real networks. Set the fixed latency to the minimum observed on a real path, and let congestion determine rises from there.
- Disable all other random events
- Events like packet duplication, reordering, and packet corruption are extremely rare in real world networks. They only occur at statistically significant frequencies when a network is severely misconfigured or malfunctioning.
- Use sufficient hardware
- Use a hardware emulator with a rated speed at least twice that of the speed you are testing. If you must use a software emulator, run it on dedicated hardware.
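The checklist above can be summarized numerically. The sketch below is a hypothetical helper, not any real emulator's API; the field names are illustrative, and the inputs are a target speed in bits per second and the minimum observed round-trip time in seconds:

```python
def baseline_settings(speed_bps, min_rtt_s):
    """Baseline emulator configuration per the checklist above.

    Hypothetical helper: field names are illustrative, not a real emulator API.
    """
    bdp_bits = speed_bps * min_rtt_s            # bandwidth-delay product
    return {
        "buffer_bits": 2 * bdp_bits,            # at least twice the BDP
        "packet_loss_rate": 0.0,                # loss must come from congestion
        "fixed_latency_s": min_rtt_s,           # minimum observed RTT, no jitter
        "jitter_s": 0.0,
        "duplication_rate": 0.0,                # disable all random events
        "reordering_rate": 0.0,
        "corruption_rate": 0.0,
    }

# Example: 100 megabit per second path with an 80 ms minimum round-trip time
print(baseline_settings(100_000_000, 0.08)["buffer_bits"])  # 16000000 bits (2 MB)
```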
The remainder of this article discusses why the steps above are necessary and how to achieve more varied results in a realistic manner.
An emulator is essentially a network router with the ability to vary certain properties. Emulator vendors tout a variety of features, but the most important are the throughput capacity, a large buffer capacity, and a variety of available queuing algorithms including Random Early Drop (RED).
Do not use software emulators running on the test source or destination. Software emulators must share critical resources with the software you are testing, which severely distorts the results.
The simplest environment is a private, point-to-point data path with no other users except for the traffic you are testing. This is not an entirely accurate description of any network, because even private links are shared at the telecom provisioning level, but it will suffice for most purposes.
[Diagram: Test system — Emulator — Test system]
If the real-world network you are emulating is a private point-to-point data link that is shared with locally generated traffic, then you will need to use extra CPUs to generate that background traffic.
[Diagram: Test systems and background-traffic generators connected on each side of the Emulator]
However, if you are emulating a network path which crosses a public or semi-public wide-area-network, you may need at least two emulators plus background traffic generators to accurately simulate network conditions.
The second emulator is necessary to differentiate the queuing behavior at the uplink, which is dominated by the uplink speed, from the queuing occurring in the WAN, which is dominated by background traffic. A third emulator may be necessary if you wish to test traffic flowing in both directions.
The hardware and software at the test source and test destination should be identical to that being used in the real-world. This includes the machine, its hard-disk storage, network interface, operating system, background activity, and many other factors. Tech Note 0009 describes such factors in detail.
As a general guide, if you are emulating speeds much less than 100 megabits per second, then the end-point CPUs only need to approximate those being used in the real world. But at speeds near or above 100 megabits per second, bottlenecks within the CPU may be more significant than anything occurring in the network and even slight deviations in setup could make a substantial difference.
If you are dealing with a high-speed network, then evaluate CPU performance on a simple LAN before investing resources in setting up an emulation environment. If you are unable to achieve the performance you desire in a LAN, then you will need to address the limitations in the CPUs first.
Emulating Network Speed
Most emulators have 100 or 1000 megabit inputs. Make sure that the WAN emulator's speed capability is greater than that of the network you are emulating, preferably by a factor of two or more. Otherwise the emulator's own processing limitations will determine the results.
Emulators simulate network speed by briefly delaying packets as they pass through. This is technically a switching delay, rather than a transmission delay, but should be sufficiently accurate for terrestrial networks.
Set the emulator's speed to the slowest link in the path. This is usually the uplink. If you have both uplink and WAN emulators, then set the uplink emulator to the speed of your uplink and the WAN emulator to its maximum speed, or at least ten times the uplink speed.
Remember that some data paths are asymmetric. ADSL, cable modems, and cellular, for example, have much slower uplink speeds than downlink speeds. If you have an asymmetric data path, make sure to use the correct setting for the direction you are moving data.
Some network paths may not have a fixed speed. Some hosting providers and consumer ISPs provide "burst" data rates which allow traffic to exceed the rated network speed for brief periods of time. It is extremely difficult to emulate burst speed because most emulators do not provide mechanisms for it and most providers do not reveal their algorithms. It is therefore best to ignore burst speeds and emulate only the rated speed.
A real-world wide-area-network consists of dozens of devices and links, each of which may be carrying or buffering data at all times. An emulator has only one buffer, which must reflect the total carrying capacity of the path.
Data cannot flow faster than the buffer size divided by the round-trip-time. For example, if you use a 1 megabyte buffer with a 120ms round-trip time, then data cannot flow faster than 70 megabits per second regardless of other settings!
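This ceiling is easy to check by hand. A minimal sketch of the arithmetic, assuming a 1 megabyte (2^20 byte) buffer:

```python
def max_throughput_bps(buffer_bytes, rtt_s):
    # Data in flight can never exceed the total buffering of the path,
    # so throughput is capped at buffer size divided by round-trip time.
    return buffer_bytes * 8 / rtt_s

# 1 megabyte buffer with a 120 ms round-trip time
print(max_throughput_bps(1_048_576, 0.120) / 1e6)  # ≈ 69.9 megabits per second
```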
Ideally, you would set the buffer size to the known total of all the buffers of all the devices in the path. Since this is rarely a known quantity, the next best thing is to calculate the maximum bandwidth-delay product of the path and set the buffer to at least twice that value.
Buffer Size (bits) ≥ 2 × Speed (bits per second) × Round-Trip Time (seconds)
For example, if a real-world path has a maximum speed of 1 gigabit per second (1,000,000,000) and is known to occasionally have round-trip-times of up to 200ms (0.2), then the aggregate buffering of that path must be at least 200,000,000 bits (25,000,000 bytes) and the WAN emulator buffer should be set to at least 50 megabytes.
Many emulators do not check whether their settings make sense! Always check the buffer sizes and calculate the bandwidth delay product by hand.
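A hand check of this kind might look like the following sketch, which flags any buffer smaller than twice the bandwidth-delay product:

```python
def buffer_is_sufficient(buffer_bytes, speed_bps, max_rtt_s):
    """True if the buffer holds at least twice the bandwidth-delay product."""
    bdp_bits = speed_bps * max_rtt_s
    return buffer_bytes * 8 >= 2 * bdp_bits

# 1 gigabit per second path with round-trip times up to 200 ms:
print(buffer_is_sufficient(50_000_000, 1_000_000_000, 0.2))  # True: 50 MB suffices
print(buffer_is_sufficient(25_000_000, 1_000_000_000, 0.2))  # False: only one BDP
```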
If the buffer sizes are too small, then the speed will decrease as latency increases in accordance with the bandwidth-delay product. If the queue sizes are too large, then latency may grow to be much higher than expected.
If you are using a separate Uplink emulator, its settings should simply match those of your uplink router.
Emulators typically allow you to choose amongst a number of pre-set queuing algorithms. A real-world network path will have many devices with many different algorithms. It is impossible for one or two emulators to fully simulate the interactions of all those devices, however approximations can be made by focusing on tail-drop and RED.
If none of the routers in the path are using Random Early Drop (RED), or if some are but you do not know how they are configured, then check that the buffer is correctly sized and set the queuing algorithm to tail-drop. When in doubt, this will provide the most consistent results.
If you know that some routers in the path are using RED and you know their configuration, then check that the buffer is correctly sized and set the WAN emulator to use RED as follows:
- Minimum Queue Threshold
- The smallest size amongst all the RED Minimum thresholds. If there is a router using tail-drop with a buffer size that is smaller still, use that value for the WAN emulator's Minimum Queue Threshold.
- Maximum Queue Threshold
- The largest size amongst all the RED Maximum thresholds. If there is a router using tail-drop with a buffer size that is larger still, use that value for the WAN emulator's Maximum Queue Threshold.
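The threshold selection described above can be sketched as follows. The helper name is illustrative, and all sizes are assumed to be in bytes:

```python
def wan_red_thresholds(red_minimums, red_maximums, taildrop_buffers=()):
    """Pick WAN-emulator RED thresholds from known router settings (bytes).

    The Minimum Queue Threshold is the smallest RED minimum, unless some
    tail-drop router has a buffer smaller still; the Maximum Queue Threshold
    is the largest RED maximum, unless some tail-drop buffer is larger still.
    """
    minimum = min(list(red_minimums) + list(taildrop_buffers))
    maximum = max(list(red_maximums) + list(taildrop_buffers))
    return minimum, maximum

# Two RED routers plus one small tail-drop router
print(wan_red_thresholds([50_000, 80_000], [200_000, 300_000], [40_000]))
# (40000, 300000)
```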
Precise configuration of the queuing algorithm is not important, so long as the buffer size is appropriate to the speed and latency, and any queuing parameters are scaled to that size.
If you are using a separate Uplink emulator, its settings should simply match those of your uplink router.
Emulating Packet Loss
All emulator packet loss and bit-error-rate settings should be set to 0. With very few exceptions, loss in real-world networks only occurs as a result of network congestion. It is never random. See Tech Note 0021 for details. Setting a fixed loss or error rate works for TCP/IP because its flow-control algorithm tends to create chronic packet-loss which appears semi-random when observed over a long time-period. However, MTP/IP observes packet loss on a very short time-scale and expects it to follow real-world congestion-related patterns.
Because MTP/IP pays attention to packet loss patterns, using a random packet loss rate amounts to feeding MTP/IP a garbage signal.
If the network being emulated is known to exhibit significant packet loss, this should be emulated by running a variety of TCP/IP traffic flows across the emulator in sufficient volume to produce the desired effects. Care should be taken that this background traffic is created on separate hardware from the systems running the tests. Otherwise CPU I/O contention may skew the results.
For WANs with other users beyond the local uplink, the background traffic should be generated between the uplink emulator and the WAN emulator. This more accurately simulates the background traffic occurring in the middle of the network than would traffic sharing the same uplink and it is vastly more accurate than setting random loss or latency values.
Ideally, the systems generating the background traffic should represent the variety of system types being used on the real-world network. Different TCP/IP implementations will produce different traffic patterns. For example, generating test traffic on Linux machines may not accurately reflect the real-world behavior of traffic generated by Windows machines. The precise mix of system types is not as important as achieving the proper volume.
Likewise, the type of background traffic should be accurately reproduced. If the dominant traffic on the real-world network involves large, continuous streams of data, then the same type of traffic should be used across the emulator. If the real-world traffic consists of many smaller transfers, such as is often the case with HTTP or CIFS, then the background traffic should be similarly bursty. The precise mix of traffic types is not as important as achieving the proper volume.
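As a rough illustration of the two traffic shapes, the loopback sketch below generates one continuous bulk stream and a series of small bursty transfers toward a discard sink. In a real setup, the sink and the generators would run on separate hardware on opposite sides of the WAN emulator:

```python
import socket
import threading
import time

def sink(server_sock):
    """Accept connections and discard their data (a simple traffic sink)."""
    while True:
        try:
            conn, _ = server_sock.accept()
        except OSError:
            return  # server socket closed
        with conn:
            while conn.recv(65536):
                pass

def bulk_stream(addr, total_bytes, chunk=65536):
    """One large continuous transfer, like a long file transfer."""
    payload = b"\x00" * chunk
    sent = 0
    with socket.create_connection(addr) as s:
        while sent < total_bytes:
            s.sendall(payload)
            sent += chunk
    return sent

def bursty_transfers(addr, count, size, pause=0.01):
    """Many small transfers with idle gaps, like HTTP or CIFS traffic."""
    for _ in range(count):
        with socket.create_connection(addr) as s:
            s.sendall(b"\x00" * size)
        time.sleep(pause)

# Loopback demonstration only.
server = socket.create_server(("127.0.0.1", 0))
addr = server.getsockname()
threading.Thread(target=sink, args=(server,), daemon=True).start()
bulk_stream(addr, 1_000_000)       # one continuous ~1 MB stream
bursty_transfers(addr, 20, 2_000)  # twenty small bursts
server.close()
```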
If you correctly configured the queuing buffer sizes and algorithms above, then you should observe loss rates similar to the real-world once you scale up the quantity of background traffic.
The time it takes for a network datagram to travel across a real-world network is determined by router queueing, switching delay, transmission time, and propagation time. If you properly sized the emulation buffer, queuing algorithm, and traffic volume as discussed above, then the emulated network should already be exhibiting most of the latency present in the real-world network.
Switching delay is the time it takes each device's processor to move each packet from one network link to the next. It is usually no more than a few milliseconds per device, but on a path with many hops this could add up to a significant amount.
Transmission delay is only significant if there are slow links at one or both end-points. For example, on a 1.5 megabit per second T1 speed network, it takes about 8ms to transmit each 1500 byte network frame and less than 0.2ms for each TCP acknowledgement. Once the network datagrams reach the telecom backbone, they will be traveling closer to gigabit speeds, where the transmission time will be microseconds.
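The T1 figures above follow directly from the link speed. A quick sketch of the arithmetic, using the 1.544 megabit per second T1 line rate:

```python
def transmission_delay_ms(frame_bytes, link_bps):
    # Time to clock one frame onto the wire: frame bits divided by link speed.
    return frame_bytes * 8 / link_bps * 1000

print(transmission_delay_ms(1500, 1_544_000))      # ≈ 7.8 ms per frame on a T1
print(transmission_delay_ms(1500, 1_000_000_000))  # 0.012 ms on a gigabit backbone
```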
Propagation delay is the time it takes the physical signal to travel across the media. It can be significant for overseas paths and substantial for satellites.
An emulator's built-in latency simulator should only be used to establish the switching, transmission, and propagation delays in the WAN emulator. Randomly varying latency (jitter) does not occur in real-world networks, and must be disabled. Any variation in the latency should be created by background traffic as described above.
Once you have established an emulation environment which approximates the real-world environment through the use of uplink and WAN emulators, background traffic, properly sized queues, and appropriate queuing algorithms, you are ready to begin testing.
Make sure to test applications on different machines than the ones being used to generate background traffic. If you are uncertain about any of the emulation setups, such as the type of RED algorithm being used, make sure to test using a variety of settings.
A common goal of emulation testing is to observe not just how a given application will fare in a particular real-world environment, but also to simulate changes in that environment.
Changing the speed is simplest, as this can be done by reconfiguring the uplink emulator's speed settings. Varying the path length can be simulated by changing the buffer size and switching-delay in the WAN emulator. Differences in loss rates and overall latency require increasing or decreasing the volume of background traffic until the desired effect is achieved.
Be sure to keep complete and accurate records of everything you do. It can be very tempting to skip this step. However, it is absolutely vital to record every aspect of the test environment every time. There may be significant factors in the setup which are not obvious at the time and which you will need to know later. In particular, be sure to record all emulator settings including ones you have not changed. It may turn out later that some of those are important.
Network paths utilizing satellite links or other long-range radio links create several exceptions to the guidance above.
Satellite links have substantial propagation delays which may be larger than the queuing delays. This will require setting the emulator's base latency much higher, since signal propagation, rather than the queuing of background traffic, may dominate overall latency.
Satellite and other long-range radio links may also experience genuinely random bit-errors. As discussed in Tech Note 0021, this is the result of radio interference causing corruption in the signal which results in the loss of the surrounding datagram. However, most such links use forward-error-correction to compensate for nearly all such corruption, preventing packet loss.
To emulate satellite or radio interference, you must determine the post-correction error rate that remains after forward-error-correction has been applied. This will likely vary with weather conditions. You may need to contact your satellite provider for estimates of this. These satellite post-correction error rates will likely be expressed as bit-error-rates and should be programmed as such into the emulator.
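Once a post-correction bit-error-rate is known, the corresponding packet-loss probability follows from the packet size, assuming independent bit errors. A minimal sketch:

```python
def packet_loss_probability(bit_error_rate, packet_bits=12_000):
    """Probability that at least one bit of a packet is corrupted,
    assuming independent bit errors (a 1500-byte packet is 12,000 bits)."""
    return 1 - (1 - bit_error_rate) ** packet_bits

# A residual post-FEC BER of 1e-8 corrupts roughly 0.012% of 1500-byte packets.
print(packet_loss_probability(1e-8))
```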
A typical wide-area-network path involves dozens, if not hundreds, of devices and hundreds, if not hundreds of thousands, of simultaneous data flows. Accurately simulating the interaction of all of these factors with just one or two hardware appliances requires a detailed understanding of the network environment and careful setup of the test environment.
Because of the complexity involved in setting up accurate emulation testing, and because emulation appliances are often built using fixed assumptions which are highly specific to TCP and TCP-like data flows, DEI strongly recommends testing in real-world conditions.