Neural Video Compression with Spatio-Temporal Cross-Covariance Transformers

This work aims to effectively and jointly leverage robust temporal and spatial information by proposing a new 3D-based transformer module: the Spatio-Temporal Cross-Covariance Transformer (ST-XCT). The ST-XCT module combines two individually extracted features into a joint spatio-temporal feature, followed by 3D convolutional operations and a novel spatio-temporal-aware cross-covariance attention mechanism.

October 28, 2023
ACM International Conference on Multimedia (ACM Multimedia) 2023

Authors

Zhenghao Chen (The University of Sydney)
Lucas Relic (ETH Zurich)
Roberto Azevedo (DisneyResearch|Studios)
Yang Zhang (DisneyResearch|Studios)
Markus Gross (ETH Zurich)
Dong Xu (The University of Hong Kong)
Luping Zhou (The University of Sydney)
Christopher Schroers (DisneyResearch|Studios)

Abstract

Although existing neural video compression (NVC) methods have achieved significant success, most of them focus on improving either temporal or spatial information separately. They generally rely on simple operations, such as concatenation or subtraction, to combine this information, and such operations exploit spatio-temporal redundancies only partially. This work aims to effectively and jointly leverage robust temporal and spatial information by proposing a new 3D-based transformer module: the Spatio-Temporal Cross-Covariance Transformer (ST-XCT). The ST-XCT module combines two individually extracted features into a joint spatio-temporal feature, followed by 3D convolutional operations and a novel spatio-temporal-aware cross-covariance attention mechanism. Unlike conventional transformers, the cross-covariance attention mechanism is applied across the feature channels without breaking the spatio-temporal features down into local tokens. This design allows modeling global cross-channel correlations of the spatio-temporal context while lowering the computational cost. Based on ST-XCT, we introduce a novel transformer-based, end-to-end optimized NVC framework. ST-XCT-based modules are integrated into several key coding components of NVC, such as feature extraction, frame reconstruction, and entropy modeling, demonstrating the module's generalizability. Extensive experiments show that our ST-XCT-based NVC framework achieves state-of-the-art compression performance on standard video benchmark datasets.
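To make the attention mechanism concrete, below is a minimal PyTorch sketch of a cross-covariance attention block operating on a joint spatio-temporal feature, in the spirit of the ST-XCT module described above. It is an illustrative reconstruction under stated assumptions, not the authors' implementation: the module names (SpatioTemporalXCA, STXCTBlock), the head count, the residual connection, and fusing the two input features by stacking them along a temporal axis are all assumptions; only the overall recipe (fuse two features, apply 3D convolution, then attend across channels rather than over local tokens) comes from the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatioTemporalXCA(nn.Module):
    """Channel-wise (cross-covariance) attention over a joint spatio-temporal
    feature of shape (B, C, T, H, W). Hypothetical sketch in the XCiT style:
    queries/keys are L2-normalized along the token axis and attention is a
    (per-head) channel-by-channel map instead of a token-by-token map."""

    def __init__(self, dim, num_heads=4):  # num_heads is an assumed choice
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        # Learnable temperature scaling the channel-channel similarities.
        self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1))
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        B, C, T, H, W = x.shape
        tokens = x.flatten(2).transpose(1, 2)                  # (B, N, C), N = T*H*W
        qkv = self.qkv(tokens).reshape(B, -1, 3, self.num_heads, C // self.num_heads)
        q, k, v = qkv.permute(2, 0, 3, 4, 1).unbind(0)         # each (B, heads, d, N)
        q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)  # normalize along tokens
        attn = (q @ k.transpose(-2, -1)) * self.temperature    # (B, heads, d, d)
        attn = attn.softmax(dim=-1)                            # cross-channel weights
        out = (attn @ v).reshape(B, C, -1).transpose(1, 2)     # back to (B, N, C)
        out = self.proj(out)
        return out.transpose(1, 2).reshape(B, C, T, H, W)


class STXCTBlock(nn.Module):
    """Fuses two per-frame features into a joint spatio-temporal feature,
    applies a 3D convolution, then channel-wise attention (illustrative)."""

    def __init__(self, dim):
        super().__init__()
        self.conv3d = nn.Conv3d(dim, dim, kernel_size=3, padding=1)
        self.attn = SpatioTemporalXCA(dim)

    def forward(self, feat_a, feat_b):             # each (B, C, H, W)
        x = torch.stack([feat_a, feat_b], dim=2)   # (B, C, T=2, H, W)
        x = self.conv3d(x)                         # mix along space and time
        return x + self.attn(x)                    # assumed residual connection


# Toy usage: fuse e.g. a reference-frame feature and a current-frame feature.
block = STXCTBlock(dim=64)
fused = block(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
print(fused.shape)  # torch.Size([1, 64, 2, 32, 32])
```

Because the attention map here is C x C per head rather than N x N over the T*H*W positions, the cost of the attention itself grows linearly with the number of spatio-temporal positions, which reflects the abstract's claim of modeling global cross-channel correlations at reduced computational cost.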

Copyright Notice