Latency Hiding in Multi-Threading and Multi-Processing of Network Applications

Xiaofeng Guo1,  Jinquan Dai2,  Long Li2,  Zhiyuan Lv2,  Prashant R. Chandra2
1Google, 2Intel Corp.


Abstract

Network processors employ a multithreaded, chip-multiprocessing architecture to effectively hide memory latency and deliver high performance for packet processing applications. In such a parallel paradigm, when multiple threads modify a shared variable in external memory, the threads must be properly synchronized so that accesses to the shared variable are protected by critical sections. Therefore, in order to efficiently harness the performance potential of network processors, it is critical to hide both the memory latency and the synchronization latency in multi-threading and multi-processing. In this paper, we present a novel program transformation, used in the Intel® Auto-partitioning C Compiler for IXP, that performs optimal placement of memory access instructions and synchronization instructions for effective latency hiding. Experimental results show that the transformation provides impressive speedup (up to 8.5x) and scalability (up to 72 threads) for a real-world network application (a 10Gbps Ethernet Core/Metro Router).
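
To make the synchronization requirement described above concrete, the following is a minimal illustrative sketch of several threads updating a shared variable inside a critical section. It uses POSIX threads and a pthread mutex purely for illustration; on the IXP, the shared variable resides in external memory and the critical section is realized with hardware synchronization primitives rather than pthread mutexes, and the names (shared_counter, lock, worker) are hypothetical.

/*
 * Illustrative sketch only (not IXP microcode): multiple threads perform a
 * read-modify-write on a shared variable, so the update must be enclosed
 * in a critical section.
 */
#include <pthread.h>
#include <stdio.h>

static long shared_counter = 0;                 /* shared variable        */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);              /* enter critical section */
        shared_counter++;                       /* protected update       */
        pthread_mutex_unlock(&lock);            /* leave critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[4];

    for (int i = 0; i < 4; i++)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(threads[i], NULL);

    printf("counter = %ld\n", shared_counter);  /* expected: 400000 */
    return 0;
}

Without the critical section, concurrent increments could be lost; the cost of entering and leaving the critical section, together with the memory accesses it protects, is precisely the latency that the transformation presented in this paper seeks to hide.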