COMPACT: Common-token Optimized Model Pruning Across Channels and Tokens
arXiv:2509.06836v1 (Announce Type: cross)
Abstract: Making LLMs more efficient in memory, latency, and serving cost is crucial for edge deployment, interactive applications, and sustainable inference at...