Recently, there has been significant interest in developing space- and time-efficient solutions for answering continuous summarization queries over data streams. While these techniques are evaluated in a standard CPU setting, many of their applications, such as click-fraud detection and network-traffic summarization, typically execute on specialized networking architectures called Network Processing Units (NPUs). These NPUs interface with a special kind of associative memory known as Ternary Content Addressable Memory (TCAM). In this paper, we describe how the integrated architecture of NPUs and TCAMs can be exploited to develop high-speed stream summarization solutions. We analyze popular solutions for the frequent elements problem in data streams, discuss their bottleneck issues, and motivate how TCAMs can help alleviate these bottlenecks. A preliminary evaluation on an NPU platform reveals the performance gains of the TCAM-conscious techniques over software implementations.
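As background for the frequent elements problem referenced above: given a stream of items, the task is to report items whose frequency exceeds a threshold using memory sublinear in the stream length. A minimal sketch of one classic counter-based summary, the Misra-Gries algorithm, is shown below; this is purely illustrative background, not the TCAM-conscious technique developed in the paper.

```python
def misra_gries(stream, k):
    """Misra-Gries summary with at most k-1 counters.

    Any item occurring more than len(stream)/k times is
    guaranteed to remain in the returned counter set.
    """
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # Decrement every counter; drop those that reach zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

stream = ["a", "b", "a", "c", "a", "b", "a", "d", "a"]
print(misra_gries(stream, 3))  # → {'a': 3}
```

The decrement step is the software bottleneck such designs face at line rate: it touches every counter, which is exactly the kind of parallel match-and-update operation an associative memory like a TCAM can perform in a single lookup cycle.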