
Experimental Performance Evaluation of a Hybrid Packet Reception Scheme for Linux Networking Subsystem

K. Salah and A. Qahtan

Department of Information and Computer Science, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia. Email: {salah,kahtani}@kfupm.edu.sa

Abstract

The heavy traffic introduced by Gigabit networks can result in a significant performance degradation of network hosts. This degradation happens as a result of the interrupt overhead associated with the high rate of packet arrivals. NAPI, a packet reception mechanism integrated into the latest versions of the Linux networking subsystem, was designed to improve Linux performance to suit today's Gigabit traffic. NAPI is definitely a major step up from earlier reception mechanisms; however, NAPI has shortcomings and its performance can be further enhanced. A hybrid packet reception scheme that switches between interrupt disabling-enabling (DE) and polling (NAPI) can better improve the performance of Gigabit network hosts. In this paper we prove experimentally that the hybrid scheme can boost the performance of general-purpose network desktops or servers running network I/O-bound applications under different traffic load conditions. The implementation of the hybrid scheme is done in the latest version of the Linux kernel, 2.6.15.

1. Introduction

With today's general-purpose Gigabit network adapters (also termed NICs), an incoming packet gets transferred (or DMA'd) through the PCI bus from the NIC to a circular buffer in kernel space known as the DMA Rx Ring. After the packet has been successfully DMA'd, the NIC generates an interrupt to notify the kernel to start protocol processing of the incoming packet. During protocol processing, other packets may arrive and get queued into the DMA Rx Ring. Protocol processing typically involves TCP/IP processing of the incoming packet and delivering it to user applications. The packet does not need to be delivered to user applications if the receiving host is configured for IP forwarding, routing, filtering, or NATing.
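
To make the reception path above concrete, the following sketch models a generic DMA Rx ring in plain, self-contained C: the NIC fills descriptors it owns, and the host drains completed descriptors in order and hands the slots back to the NIC. The structure and names (rx_desc, rx_ring, rx_ring_drain) are illustrative assumptions only and do not correspond to any particular NIC or driver.

    /* Simplified model of a DMA Rx ring (hypothetical names, not a real driver). */
    #include <stdint.h>

    #define RX_RING_SIZE 256               /* number of descriptors (power of two) */

    struct rx_desc {
        void    *buf;                      /* host buffer the NIC DMAs the packet into */
        uint16_t len;                      /* length written by the NIC                */
        uint8_t  owned_by_nic;             /* 1: NIC may write, 0: host may read       */
    };

    struct rx_ring {
        struct rx_desc desc[RX_RING_SIZE];
        unsigned int next_to_clean;        /* next descriptor the host will process    */
    };

    /* Consume every descriptor the NIC has completed, in ring order. */
    unsigned int rx_ring_drain(struct rx_ring *ring,
                               void (*deliver)(void *buf, uint16_t len))
    {
        unsigned int handled = 0;
        struct rx_desc *d = &ring->desc[ring->next_to_clean];

        while (!d->owned_by_nic) {         /* descriptor filled by the NIC             */
            deliver(d->buf, d->len);       /* hand the packet to protocol processing   */
            d->owned_by_nic = 1;           /* give the slot back to the NIC            */
            ring->next_to_clean = (ring->next_to_clean + 1) & (RX_RING_SIZE - 1);
            d = &ring->desc[ring->next_to_clean];
            handled++;
        }
        return handled;
    }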

Under heavy traffic load such as that of Gigabit networks, the performance of interrupt-driven systems can be significantly degraded, resulting in poor host performance. This is because every incoming packet triggers a hardware interrupt, which involves a context switch to save and restore the processor's state and can also cause cache/TLB pollution. More importantly, interrupt-level handling, by definition, has absolute priority over all other tasks. If the interrupt rate is high enough, the system will spend all of its time responding to interrupts, and system throughput will drop to zero. This situation is called receive livelock [1]. In this situation, the system is not deadlocked, but it makes no progress on any of its tasks, causing any task scheduled at a lower priority to starve or have no chance to run.

A number of schemes to mitigate interrupt overhead and resolve receive livelock exist in the literature. Among the most popular are normal interruption, interrupt disabling and enabling, interrupt coalescing, and polling. In normal interruption, every incoming packet causes an interrupt to trigger protocol processing by the kernel. Typically, protocol processing is performed by a deferrable and reentrant high-priority kernel function (e.g., a tasklet in Linux). The idea of the interrupt disable-enable (a.k.a. DE) scheme [2,3] is to keep the receive interrupts of incoming packets turned off (or disabled) as long as there are packets to be processed by the kernel's protocol stack, i.e., as long as the protocol buffer is not empty. When the buffer is empty, the interrupts are turned on again (or re-enabled). This means that protocol processing of packets by the kernel is performed immediately and at interrupt priority level. Any incoming packets (while the interrupts are disabled) are DMA'd quietly to the protocol buffer without incurring any interrupt overhead. With the interrupt coalescing (IC) scheme [4], the NIC generates a single interrupt for a group of incoming packets, as opposed to normal interruption mode in which the NIC generates an interrupt for every incoming packet. Finally, the basic idea of polling is to disable interrupts of incoming packets altogether, thus eliminating interrupt overhead completely. In polling [1,5-8], the OS periodically polls its host system memory (i.e., the protocol processing buffer or DMA Rx Ring) to find packets to process. In general, exhaustive polling is rarely implemented. Rather, polling with a quota or budget is usually used, whereby only a maximum number of packets is processed in each poll in order to leave some CPU power for application processing.
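
For concreteness, the self-contained C sketch below captures the essence of polling with a budget as described above: at most budget packets are processed per poll, and the caller learns whether the buffer was exhausted (so interrupts may be re-enabled) or another poll must be scheduled. The helper names (ring_dequeue, protocol_process) are hypothetical; this is a generic illustration, not kernel code.

    /* Budget-limited polling: process at most 'budget' packets per poll so that
     * some CPU capacity is left for user applications. Generic illustration only. */
    #include <stdbool.h>
    #include <stddef.h>

    struct pkt;                                   /* opaque packet descriptor              */
    extern struct pkt *ring_dequeue(void);        /* returns NULL when the buffer is empty */
    extern void protocol_process(struct pkt *p);  /* TCP/IP processing and delivery        */

    /* Returns true if the buffer was emptied (polling may stop and interrupts may
     * be re-enabled), false if the budget ran out and another poll is needed. */
    bool poll_once(int budget)
    {
        int done = 0;
        struct pkt *p;

        while (done < budget && (p = ring_dequeue()) != NULL) {
            protocol_process(p);
            done++;
        }
        return done < budget;                     /* stopped early => buffer is empty */
    }

NAPI applies this idea per device, bounded by a quota and a global budget; DE, by contrast, processes packets exhaustively, which corresponds to an unbounded budget.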

In [3], we utilized both mathematical analysis and discrete-event simulation to study the performance of the most popular interrupt-handling schemes, which included normal interruption, polling, interrupt disabling and enabling, and interrupt coalescing. For polling, we studied both pure (or FreeBSD-style) polling and Linux NAPI polling. The performance was studied in terms of key performance indicators, which included throughput, system latency, and CPU availability (i.e., the residual fraction of CPU bandwidth left for user applications). Based on the study carried out in [3], it was concluded that no particular interrupt-handling scheme gives the best performance under all load conditions. Under light and heavy traffic loads, it was shown that the scheme of disabling and enabling interrupts (DE) outperforms, in general, all other schemes in terms of throughput and latency.

However, when it comes to CPU availability, polling is the most appropriate scheme to use, particularly at heavy traffic load. Based on these key observations, and in order to compensate for the DE scheme's poor CPU availability, we proposed in [3] a hybrid scheme that combines both DE and polling. Such a hybrid scheme is able to attain peak performance under both light and heavy traffic loads by employing DE at light load and polling at heavy load. In this paper we implement the Hybrid scheme in the latest Linux version, 2.6.15. In order to minimize the changes made to the original code of both the NIC driver and the Linux kernel, we utilize NAPI polling, which is implemented in the latest versions of Linux. We show how to implement and experimentally measure the performance of Hybrid, DE, and NAPI. In particular, the paper addresses important implementation issues related to the kernel code and the NIC device driver. The performance of Hybrid is experimentally evaluated in terms of throughput, latency, CPU availability, packet loss, and interrupt rate.

The rest of the paper is organized as follows. Section 2 gives a brief background and related work on the hybrid scheme, and discusses how our hybrid scheme differs from those proposed in the literature. Section 3 presents the major changes required in the network device driver to implement DE and Hybrid in the latest version of Linux, 2.6.15. Section 4 describes the experimental setup, and Section 5 presents the performance measurements. Finally, Section 6 concludes the study and identifies future work.

2. Hybrid Scheme

A hybrid packet reception scheme of normal interruption and polling was first proposed in [2]. Later, it was implemented and utilized in [5,8,9,10]. In this hybrid scheme, normal interruption was used under both light and normal network traffic load, whereas polling was used under heavy network traffic load. In sharp contrast, our hybrid scheme (which we initially proposed in [3]) differs from the previously proposed schemes in three significant ways: (1) under light and normal loads, our hybrid scheme utilizes DE as opposed to the normal interruption used in [2,5,8-10]; (2) our hybrid scheme switches between DE and NAPI based on the estimated incoming traffic rate; and (3) the switching point is identified experimentally, rather than arbitrarily.

It was demonstrated in [3] that normal interruption performs relatively poorly under light and normal traffic loads in terms of system throughput, CPU availability, and latency. This is due to the fact that normal interruption incurs interrupt overhead for each packet arrival, thereby leaving limited CPU power for IP processing and user applications. On the other hand, DE gave acceptable performance in terms of system throughput, CPU availability, and latency under low and normal traffic loads.

To identify the severity of traffic load conditions, and as opposed to the other hybrid schemes proposed in [2,5,8-10], our hybrid scheme switches between DE and NAPI based on the estimated incoming packet arrival rate. In particular, our hybrid scheme estimates the traffic rate periodically. It uses two thresholds for switching in order to minimize repeated switchings (or oscillation) around the saturation point in the presence of a traffic load that fluctuates strongly at the point of saturation. Moreover, the saturation point was not identified arbitrarily but through prior experimentation. Different ways of identifying the severity of traffic load conditions as well as the switching point were used in [2,5,8-10].

Compared with the other proposed schemes, our hybrid scheme is more practical: it is computationally inexpensive and requires no hardware support. Our hybrid scheme uses a simple estimator of the packet arrival rate [11]. The estimator is highly cost-effective, as its computation is not carried out upon the arrival of each incoming packet and does not require measurement of the interarrival times of incoming packets. The estimator makes use of the number of packets received within a predefined time window; almost all of today's network adapters provide a total count of received packets. In [12], the quality and performance of the estimator was evaluated experimentally and shown to be highly effective in terms of computational cost, accuracy, agility, and stability.
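
As a rough illustration of how such an estimator and its two-threshold switch can be realized, the self-contained C sketch below samples the NIC's cumulative received-packet counter from a periodic timer, smooths the per-window count with a shift-based moving average, and applies hysteresis when changing the operating mode. The window length and threshold values are taken from the numbers quoted later in Sections 3 and 4; all names and the particular weighting are illustrative assumptions, not the actual code of [11,12].

    /* Windowed packet-rate estimate with two-threshold (hysteresis) switching.
     * Illustrative sketch only; names, weights, and thresholds are assumptions. */
    #include <stdint.h>

    enum rx_scheme { SCHEME_DE, SCHEME_NAPI };
    enum rx_scheme scheme = SCHEME_DE;          /* plays the role of RxScheme (Section 3)   */

    #define WINDOW_MS              8            /* 2 jiffies at HZ = 250 (Section 4)        */
    #define LOW_THRESH_PER_WIN   680            /* ~85 Kpps over an 8 ms window             */
    #define HIGH_THRESH_PER_WIN  920            /* ~115 Kpps over an 8 ms window            */

    extern uint64_t nic_rx_packet_count(void);  /* NIC's cumulative received-packet counter */

    static uint64_t last_count;
    static uint32_t est;                        /* smoothed packets-per-window estimate     */

    /* Called from a timer every WINDOW_MS milliseconds. */
    void rate_estimator_tick(void)
    {
        uint64_t count  = nic_rx_packet_count();
        uint32_t sample = (uint32_t)(count - last_count);   /* packets in this window */
        last_count = count;

        /* Exponential moving average using shifts only (no multiply or divide):
         * est = est - est/4 + sample/4 */
        est = est - (est >> 2) + (sample >> 2);

        if (scheme == SCHEME_DE && est > HIGH_THRESH_PER_WIN)
            scheme = SCHEME_NAPI;               /* heavy load: budget-limited polling */
        else if (scheme == SCHEME_NAPI && est < LOW_THRESH_PER_WIN)
            scheme = SCHEME_DE;                 /* load has subsided: switch back to DE */
    }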

3. Implementation

This section briefly describes the major modifications that must be introduced to network device drivers to integrate and implement the DE and Hybrid schemes. All modifications were made to the driver code, which was compiled as a kernel loadable module. An adequate level of detail is provided so that these changes can be easily integrated into different or newer versions of drivers. In particular, we present and discuss the major changes made to the Broadcom BCM5752 network driver to support DE and Hybrid.

In our implementation, we used the BCM5700 network driver [13]. In order to understand the major changes needed to support DE and Hybrid, we first present the sequence of function calls that are already in place to support the default NAPI, for both interrupt and softirq handling. Interrupt handling is part of the driver code, while softirq handling is part of the kernel code.

At the driver level, all NIC interrupts are handled by the bcm5700_interrupt() function, which calls LM_ServiceInterrupts() to read the NIC's status registers and find the cause of the interrupt. If the interrupt was caused by the arrival of a new packet into the DMA Rx Ring, LM_ServiceRxInterrupt() is called to decide whether to process the packet using non-NAPI or NAPI. In the case of NAPI, MM_ScheduleRxPoll() is called, which then calls __netif_rx_schedule(), which in turn raises a softirq of NET_RX_SOFTIRQ type to schedule polling. When control returns to LM_ServiceRxInterrupt(), RxInt is disabled.
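
The simplified, self-contained C below models this default NAPI receive path. The my_/nic_ names are hypothetical stand-ins for the driver and kernel functions named above (bcm5700_interrupt, MM_ScheduleRxPoll, __netif_rx_schedule, net_rx_action, bcm5700_poll); only the control flow of interest is kept.

    /* Simplified model of the default NAPI receive path (hypothetical helpers). */
    #include <stdbool.h>

    #define RX_BUDGET 300                     /* default budget mentioned in the text      */

    extern bool nic_rx_pending(void);         /* status registers report a DMA'd packet    */
    extern void nic_disable_rx_int(void);
    extern void nic_enable_rx_int(void);
    extern void raise_net_rx_softirq(void);   /* schedule NET_RX_SOFTIRQ processing        */
    extern bool poll_once(int budget);        /* earlier sketch: true when ring is emptied */

    /* Top half: runs at interrupt priority and does as little work as possible. */
    void my_nic_interrupt(void)
    {
        if (nic_rx_pending()) {
            raise_net_rx_softirq();           /* polling runs later, at softirq priority   */
            nic_disable_rx_int();             /* RxInt stays masked while polling is active */
        }
    }

    /* Bottom half: invoked from the softirq; RxInt is re-enabled only once the
     * ring has been emptied, otherwise another poll is scheduled. */
    void my_net_rx_softirq(void)
    {
        if (poll_once(RX_BUDGET))
            nic_enable_rx_int();              /* ring emptied: back to interrupt mode      */
        else
            raise_net_rx_softirq();           /* budget exhausted: poll again next pass    */
    }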

At the kernel level, the NET_RX_SOFTIRQ softirq is handled by net_rx_action(), which gets called by __do_softirq() to process pending softirqs. net_rx_action() extracts a network device from poll_list and calls its respective poll virtual function. For the BCM5700 driver, bcm5700_poll() is called, which in turn calls LM_ServiceRxPoll(); these two latter functions are part of the BCM5700 driver. In LM_ServiceRxPoll(), packet descriptors are dequeued from the DMA Rx Ring and inserted into a device queue, RxPacketReceivedQ, for further protocol processing. LM_ServiceRxPoll() terminates when the minimum of the quota Q and the budget B packets has been processed, or when the Ring becomes empty. MM_IndicateRxPackets() then gets called to dequeue each packet (using pointer manipulation) from RxPacketReceivedQ into an sk_buff buffer and to call the kernel function netif_receive_skb() for further processing. netif_receive_skb() routes each packet to its respective packet-handler function for protocol processing; for IP processing, the kernel function ip_rcv() is called. Refilling of the DMA Rx Ring is carried out by LM_QueueRxPackets().

DE Implementation. To implement the DE scheme, interrupt handling has to be changed. The driver is modified so that protocol processing of all queued packets in the DMA Rx Ring is done during LM_ServiceRxInterrupt(). Any packets arriving while the interrupts are disabled are DMA'd quietly to the Rx Ring without incurring any interrupt overhead. This means that processing of received packets by the kernel starts immediately and executes at interrupt priority level, so deferring protocol processing of received packets to a softirq is avoided. This change applies only to RxInt; all other interrupts, including errors and TxInt, are left untouched. It is also to be noted that with this change packet reception is given higher priority than packet transmission, as packet transmission remains deferred to a softirq. The sequence of function calls changes as follows. LM_ServiceRxInterrupt() directly calls LM_ServiceRxPoll(). LM_ServiceRxPoll() terminates after all packets in the Rx Ring have been dequeued and inserted into RxPacketReceivedQ for further protocol processing. LM_ServiceRxInterrupt() then calls MM_IndicateRxPackets() to dequeue the packets into sk_buff structures and call netif_receive_skb() for further protocol processing. Lastly, before returning from interrupt handling, bcm5700_interrupt() checks the NIC status register, and if more packets were DMA'd during packet processing, LM_ServiceRxInterrupt() handling is activated again.

Overhead of DE vs. NAPI. With this implementation, it is critical to note that DE incurs far less overhead than NAPI. First, as opposed to DE, in which processing of packets is executed immediately and at interrupt level, processing of packets in NAPI is deferred and executed at a lower priority using a softirq. Second, all softirqs are reentrant, i.e., they run with interrupts enabled and can therefore be preempted at any time to handle a new incoming interrupt; handling of a softirq may thus stretch due to other interrupt-handling activity in the system. Third, a softirq may also stretch considerably due to processing of other softirqs, as __do_softirq() does not handle only softirqs of NET_RX_SOFTIRQ type for received packets, but also five other softirq types, which include soft timers, high- and low-priority tasklets, transmission of packets, and SCSI handling. Fourth, the net_rx_action algorithm of NAPI is more computationally expensive than DE. NAPI incurs non-negligible I/O write latencies to disable and enable RxInt [14], and also requires enforcing upper bounds on budget, quota, and handling time, besides the management of poll_list to provide fairness and avoid starvation. In contrast, DE simply performs exhaustive processing of received packets.

Hybrid Implementation. The Hybrid scheme operates at any given time either as DE or as NAPI. This requires maintaining the current state of operation, DE or NAPI. For this purpose, a global variable RxScheme is defined and assigned an enumerated value of DE or NAPI. The driver functions (specifically bcm5700_interrupt, LM_ServiceRxInterrupt, and LM_ServiceRxPoll) are instrumented to check RxScheme in order to operate properly in either DE or NAPI. It is to be noted that when Hybrid operates as NAPI, its budget has to be configured to a value of 2. Experimentally, we determined that a budget of 2 or 6 outperforms the default configuration of 300 in terms of throughput and latency. Changing the operation mode between DE and NAPI is based on the estimated traffic rate. An experimental evaluation and detailed implementation of the packet rate estimation for the BCM5700 driver is given in [12]. It is worth noting that the switching or cliff point can best be determined experimentally by simply identifying host saturation in terms of throughput or CPU utilization as we subject the host to an arrival rate that increases gradually. For example, in our experiment described later in Section 5, the CPU utilization consumed by the system, as shown by the Linux "top" and "mpstat" utilities, approaches 100% when the traffic rate is around 100 Kpps. Therefore, we use 100 Kpps as the cliff point in our implementation of the Hybrid scheme. We use two thresholds (85 Kpps and 115 Kpps) to minimize repeated switchings (or oscillation) around the saturation point in the presence of a traffic load that fluctuates strongly at the point of saturation.
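
The following condensed sketch illustrates how the dispatch on RxScheme might look at interrupt time; it reuses the hypothetical helpers from the earlier sketches and is an illustration of the logic only, not the modified BCM5700 code.

    /* Hybrid dispatch at interrupt time: behave either as DE (exhaust the ring at
     * interrupt priority) or as NAPI (defer to a budget-limited softirq poll).
     * 'scheme' is the mode selected by the rate-estimator sketch in Section 2. */
    #include <limits.h>
    #include <stdbool.h>

    enum rx_scheme { SCHEME_DE, SCHEME_NAPI };
    extern enum rx_scheme scheme;             /* plays the role of the RxScheme variable */

    extern bool nic_rx_pending(void);
    extern void nic_disable_rx_int(void);
    extern void raise_net_rx_softirq(void);
    extern bool poll_once(int budget);        /* true when the DMA Rx Ring is empty      */

    void hybrid_rx_interrupt(void)
    {
        if (!nic_rx_pending())
            return;

        if (scheme == SCHEME_DE) {
            /* DE: drain the ring exhaustively, right now, at interrupt priority.
             * Packets that arrive meanwhile are DMA'd silently and are picked up
             * by re-checking the status register before the handler returns. */
            do {
                (void)poll_once(INT_MAX);     /* no budget limit in DE mode */
            } while (nic_rx_pending());
        } else {
            /* NAPI: mask RxInt and defer processing to a budget-limited softirq
             * poll (a budget of 2 in the Hybrid configuration described above). */
            nic_disable_rx_int();
            raise_net_rx_softirq();
        }
    }

In the full implementation, LM_ServiceRxInterrupt and LM_ServiceRxPoll carry the same check, so that both the interrupt and softirq paths honor the current mode.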

4. Experimental Setup

In order to test our implementation and evaluate the performance of the three schemes (viz. DE, NAPI, and Hybrid), we set up an experiment comprising two Linux machines, a sender and a receiver, connected with a 1 Gbps Ethernet crossover cable (as shown in Figure 1). The sender has two Intel Xeon processors running at 3.6 GHz with 4 GB of RAM and an embedded Intel 82541GI Gigabit Ethernet NIC running with the e1000 driver. The receiver has an Intel Pentium 4 processor running at 3.2 GHz with 512 MB of RAM and a 3Com Broadcom NetXtreme Gigabit Ethernet card with a BCM5752 controller. This NIC runs with a loadable kernel module of the modified BCM5700 driver version 8.2.18 that implements the DE and Hybrid schemes. Both sender and receiver use Fedora Core 5 with Linux 2.6.15. To minimize the impact of other system activities on performance and measurement, we booted both machines at run level 3 and made sure that no services were running in the background. We also disabled Ethernet link flow control. For both machines, the timer interrupt frequency (HZ) was set to 250. The timer for packet rate estimation was set to fire every two jiffies (i.e., every 8 ms). To generate traffic from the sender machine, we used the open-source D-ITG 2.4.4 generator [15]. For all of our generated traffic, we used UDP packets with a constant 64-byte packet size and constant interarrival times. The reason we used UDP is to ensure that at the receiver we have a 1:1 mapping between incoming packets and generated interrupts, which makes it easier to analyze the results and the sources of delay and overhead. With this setup, ITGSend was able to produce up to 235 Kpps for one flow.

5. Performance Measurements

For evaluating performance, several measurements of various metrics were taken in relation to the generated traffic load. These metrics include the average throughput, packet loss, latency, CPU availability, and interrupt frequency. For all of the experimental results reported in this section, we performed three experimental trials, and the final results are the average of these three trials. For each trial, we recorded the results after generating a flow with a specific rate for a sufficient duration of 30 seconds. The average throughput and packet loss were recorded by decoding the logs produced by ITGRecv. For measuring the average CPU availability and interrupt rate, we used the "sar" Linux utility at the receiver.

Figure 1. Experimental setup

We study and compare the experimental performance results of the Hybrid, DE, and NAPI schemes. Performance is reported in Figure 2 for a number of important metrics, which include throughput, packet loss, latency, interrupt rate, and CPU availability. In terms of throughput and packet loss (exhibited in Figure 2(a) and (b)), Hybrid outperforms both NAPI and DE. At heavy load, Hybrid gives more acceptable and sustainable throughput, whereas the throughput of NAPI and DE starts to degrade as the traffic load increases. This is primarily due to the fact that Hybrid, after reaching the upper cliff point (which is set to 115 Kpps), switches to NAPI with a budget of 2, thereby giving ITGRecv adequate CPU time to process packets. The figures also show that DE gives higher throughput and less packet loss than NAPI. As noted in Section 3, the main reason for this is that DE runs at interrupt level and incurs far less overhead than NAPI. In DE and NAPI, the residual CPU time for ITGRecv diminishes gradually with higher traffic rates.

Figure 2(c) exhibits the average round-trip latency recorded by ITGSend with respect to the generated traffic rate. The figure shows that Hybrid and DE outperform NAPI because NAPI incurs more overhead. Hybrid and DE give comparable results up to a rate of around 115 Kpps (which is the cliff point); beyond 115 Kpps, Hybrid latency approaches that of NAPI as Hybrid switches to operate as NAPI. Figure 2(d), which is a zoom-in of Figure 2(c) at low traffic rates, shows that at a severely low rate of less than 200 pps, the latency of all schemes is relatively large, with NAPI exhibiting the largest. The reason for this is that an overhead (from interrupt or softirq scheduling) is incurred separately for almost every incoming packet. As the arrival rate increases, the incurred overhead is aggregated over multiple packets. At very low rates, NAPI exhibits the most overhead, as discussed in Section 3. Lastly, it is observed in both figures that the overhead involved in estimating the traffic rate in Hybrid did not introduce a noticeable additional delay when Hybrid operates as DE, up to a rate of 115 Kpps. This was expected, as the rate estimation is performed every 8 ms and its code uses shift operations to avoid the CPU-expensive operations of multiplication and division.

Figure 2. Performance measurements in relation to incoming traffic rate

Figure 2(e) compares the three schemes in terms of mitigating the interrupt rate. It is observed that below a rate of 50 Kpps, the interrupt rate increases linearly with the arrival rate for all three schemes. Shortly after that, the interrupt rate drops significantly, and then it starts increasing again. This is in line with the expected behavior. Below 50 Kpps, the interrupt rate is low and the host is able to finish interrupt handling and processing of packets before the next interrupt; the period of disabling and enabling RxInt (i.e., the interrupt masking period) for all three schemes finishes before the occurrence of the next interrupt. As the incoming rate increases beyond 50 Kpps, multiple packet arrivals or interrupts occur within this masking period, which explains the significant drop of the interrupt rate shortly after 50 Kpps. After 60 Kpps, the interrupt rate starts increasing again, but very slowly with respect to the arrival rate; interrupt masking still occurs, however the masking period does not stretch considerably at higher rates, and the CPU still has the power to handle and process packets relatively quickly. For Hybrid, at around 130 Kpps, the interrupt rate gradually drops to zero. This is due to the fact that the masking period for Hybrid stretches considerably at such a high rate, which forces Hybrid to operate as NAPI with the limited budget of 2. Under such a high rate, NAPI with a budget of 2 is not able to exhaust all packets in one polling period and thus never re-enables RxInt.

The CPU availability is highly affected by the interrupt rate. This is clearly demonstrated in Figure 2(f), which compares the three schemes in terms of the percentage of residual CPU power or bandwidth left after processing packets by both the kernel and ITGRecv. As shown, NAPI results in the least residual CPU bandwidth. This is expected, since, as noted in Section 3, NAPI incurs the most overhead. It is also shown that the curves of DE and Hybrid are comparable, with no noticeable impact due to rate estimation. It is observed that at incoming rates of around 50-70 Kpps, the CPU availability does not decrease monotonically (and in fact is higher for NAPI at 60 Kpps than at 50 Kpps). The reason for this is that the corresponding interrupt rate at 50 Kpps (as shown in Figure 2(e)) falls considerably, and therefore results in less overhead. The figure shows that the CPU availability approaches zero at 100 Kpps for all schemes; at this point the CPU is fully consumed by both kernel processing and ITGRecv.

6. Conclusion

We presented and discussed the major changes required to implement a hybrid packet reception scheme in the latest version of the Linux kernel, 2.6.15. We proved experimentally that the Hybrid scheme can improve the performance of network I/O applications under both low and high traffic rates. We measured and compared the performance of the DE, NAPI, and Hybrid schemes in terms of throughput, packet loss, latency, and CPU availability. The Hybrid scheme shows a noticeable performance gain for general-purpose network desktops or servers running network I/O-bound applications such as ITGRecv. Such a gain can also be achieved for today's general-purpose servers running critical and popular network I/O applications such as web servers, IRC, database transactions, networked data acquisition, network intrusion detection and prevention, deep-packet analysis, packet logging and monitoring, etc. The Hybrid scheme also has the potential to improve the performance of general-purpose servers configured as NAT/firewalls or routers. To explore this, we plan to experimentally evaluate the performance of Hybrid for hosts configured for IP forwarding. We also plan to extend the implementation of Hybrid and evaluate its performance for Linux hosts with a quad-core processor and multiple network interfaces.

Acknowledgments

We acknowledge the support of King Fahd University of Petroleum and Minerals in completing this work. This work has been funded under Project #INT-383. We are also very thankful to Robert Olsson and Jamal Hadi Salim for their valuable replies and comments on some of the issues faced throughout this work.

References

[1] K. Ramakrishnan, "Performance considerations in designing network interfaces," IEEE Journal on Selected Areas in Communications, vol. 11, no. 2, February 1993, pp. 203-219.
[2] J. Mogul and K. Ramakrishnan, "Eliminating receive livelock in an interrupt-driven kernel," ACM Transactions on Computer Systems, vol. 15, no. 3, August 1997, pp. 217-252.
[3] K. Salah, K. El-Badawi, and F. Haidari, "Performance analysis and comparison of interrupt-handling schemes in Gigabit networks," International Journal of Computer Communications, Elsevier Science, vol. 30, no. 17, 2007, pp. 3425-3441.
[4] K. Salah, "To coalesce or not to coalesce," International Journal of Electronics and Communications (AEU), vol. 61, no. 4, 2007, pp. 215-225.
[5] C. Dovrolis, B. Thayer, and P. Ramanathan, "HIP: Hybrid Interrupt-Polling for the network interface," ACM Operating Systems Review, vol. 35, October 2001, pp. 50-60.
[6] J. H. Salim, "Beyond softnet," Proceedings of the 5th Annual Linux Showcase and Conference, November 2001, pp. 165-172.
[7] L. Deri, "Improving passive packet capture: beyond device polling," Proceedings of the 4th International System Administration and Network Engineering Conference, Amsterdam, September 2004.
[8] O. Maquelin, G. R. Gao, H. J. Hum, K. G. Theobald, and X. Tian, "Polling Watchdog: combining polling and interrupts for efficient message handling," Proceedings of the 23rd Annual International Symposium on Computer Architecture, Philadelphia, PA, 1996, pp. 178-188.
[9] X. Chang, J. Muppala, P. Zou, and X. Li, "A robust device hybrid scheme to improve system performance in Gigabit Ethernet networks," Proceedings of the 32nd IEEE Conference on Local Computer Networks, Dublin, Ireland, October 15-18, 2007, pp. 444-451.
[10] X. Chang, J. Muppala, W. Kong, P. Zou, X. Li, and Z. Zheng, "A queue-based adaptive polling scheme to improve system performance in Gigabit Ethernet networks," Proceedings of the 26th IEEE International Performance, Computing, and Communications Conference (IPCCC 2007), New Orleans, Louisiana, April 11-13, 2007, pp. 117-124.
[11] K. Salah and F. Haidari, "Evaluation and comparison of a novel packet rate estimator," Proceedings of the 6th IEEE AICCSA Conference, March 2008.
[12] K. Salah, F. Haidari, A. Bahjat, and A. Mana, "Implementation and experimental evaluation of a simple packet rate estimator," International Journal of Electronics and Communications (AEU), in press.
[13] "3Com 10/100/1000 PCI-X Server Network Interface Card," BCM5700 Linux driver package, version 8.2.18 (linux-8.2.18.zip).
[14] J. Salim, "When NAPI comes to town," Proceedings of the Linux 2005 Conference, Swansea, U.K., August 2005.
[15] D. Emma, A. Pescape, and G. Ventre, "D-ITG, Distributed Internet Traffic Generator," available from http://www.grid.unina.it/software/ITG.
