SelfDoc: Self-Supervised Document Representation Learning

Abstract

We propose SelfDoc, a task-agnostic pre-training framework for document image analysis. Because documents are multimodal and are intended for sequential reading, our framework exploits the positional, textual, and visual information of every semantically meaningful component in a document and models the contextualization between blocks of content. Unlike existing document pre-training models, our model is coarse-grained: rather than treating individual words as input, it operates on document components, thereby avoiding an overly fine-grained input representation and excessive contextualization. Beyond that, we introduce cross-modal learning in the pre-training stage to fully leverage the multimodal information in unlabeled documents. For downstream usage, we propose modality-adaptive attention for multimodal feature fusion, which adaptively emphasizes language and vision signals. Our framework benefits from self-supervised pre-training on documents, without requiring annotations, through a feature-masking training strategy. Compared to previous works, our model achieves superior performance on multiple downstream tasks with significantly fewer document samples used in the pre-training stage.
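To make the fusion idea concrete, below is a minimal sketch of how modality-adaptive attention could weight language and vision features per document block, as described above. This is not the authors' released implementation; the module name, dimensions, and gating design are illustrative assumptions.

```python
# Hypothetical sketch of modality-adaptive fusion: a learned gate adaptively
# emphasizes language or vision features for each document block before fusion.
import torch
import torch.nn as nn

class ModalityAdaptiveFusion(nn.Module):
    def __init__(self, hidden_dim: int):
        super().__init__()
        # Gate scores are computed from both modalities jointly (assumed design).
        self.gate = nn.Linear(2 * hidden_dim, 2)

    def forward(self, lang_feats: torch.Tensor, vis_feats: torch.Tensor) -> torch.Tensor:
        # lang_feats, vis_feats: (batch, num_blocks, hidden_dim)
        weights = torch.softmax(
            self.gate(torch.cat([lang_feats, vis_feats], dim=-1)), dim=-1
        )
        # Per-block weighted sum adaptively emphasizes one modality over the other.
        return weights[..., 0:1] * lang_feats + weights[..., 1:2] * vis_feats

# Example usage with random features for 4 document blocks.
lang = torch.randn(1, 4, 768)
vis = torch.randn(1, 4, 768)
fused = ModalityAdaptiveFusion(768)(lang, vis)  # -> shape (1, 4, 768)
```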

Publication
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).