Abstract: The exponential growth in Internet traffic has motivated the development of a new breed of microprocessors called network processors, which are designed to address the performance problems resulting from this traffic explosion. Design efforts for these processors have concentrated almost exclusively on streamlining their data paths to speed up packet processing, which mainly consists of routing-table lookup and data movement. Rather than blindly pushing packet processing into hardware, an alternative approach is to avoid repeated computation by applying the time-tested architectural idea of caching to packet processing. Because the streams presented to network processors and those presented to general-purpose CPUs exhibit different characteristics, the detailed cache design tradeoffs for the two also differ considerably. This research focuses on cache memory design specifically for network processors. Using a trace-driven simulation methodology, we evaluate a series of three progressively more aggressive routing-table cache designs. Our results demonstrate that the incorporation of hardware caches into network processors, when combined with efficient caching algorithms, can significantly improve overall packet forwarding performance, owing to the sufficiently high degree of temporal locality in packet streams. Moreover, the three designs result in a factor-of-5 difference in average routing-table lookup time, and thus in the achievable packet forwarding rate.
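
To make the evaluation methodology concrete, the following is a minimal sketch of a trace-driven routing-table cache simulation: replaying a sequence of destination addresses through a single LRU cache and reporting the hit rate and average lookup latency. The trace format, cache size, and hit/miss latencies are illustrative assumptions, not parameters or results from this work.

```python
from collections import OrderedDict

def simulate(trace, cache_entries=4096, hit_cycles=1, miss_cycles=50):
    """Replay a trace of destination IP addresses through an LRU cache and
    return (hit rate, average lookup latency in cycles). All parameters are
    hypothetical values chosen only for illustration."""
    cache = OrderedDict()              # destination address -> cached next hop
    hits = total_cycles = 0
    for dst in trace:
        if dst in cache:
            cache.move_to_end(dst)     # refresh LRU position on a hit
            hits += 1
            total_cycles += hit_cycles
        else:
            total_cycles += miss_cycles    # full routing-table lookup on a miss
            cache[dst] = "next-hop"        # install the lookup result
            if len(cache) > cache_entries:
                cache.popitem(last=False)  # evict the least recently used entry
    n = len(trace)
    return hits / n, total_cycles / n

# Example: a synthetic trace with strong temporal locality (repeated destinations).
trace = ["10.0.0.1", "10.0.0.2", "10.0.0.1", "10.0.0.1", "10.0.0.3"] * 1000
hit_rate, avg_cycles = simulate(trace)
print(f"hit rate = {hit_rate:.2%}, average lookup = {avg_cycles:.1f} cycles")
```

With a sufficiently high hit rate, the average lookup latency approaches the hit latency rather than the full table-lookup latency, which is the effect that makes caching attractive when packet streams exhibit temporal locality.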