Document images are a ubiquitous source of data in which text is organized in a complex hierarchical structure, ranging from fine granularity (e.g., words) and medium granularity (e.g., regions such as paragraphs or figures) to coarse granularity (e.g., the whole page). The spatial hierarchical relationships between content at different levels of granularity are crucial for document image understanding tasks. However, existing methods learn features at either the word level or the region level but fail to consider both at the same time. Word-level models are restricted by their origin in pure-text language models, which encode only word-level context, while region-level models encode regions corresponding to paragraphs or text blocks into a single embedding but perform worse when additional word-level features are introduced. To address these issues, we propose MGDoc, a new multi-modal, multi-granular pre-training framework that encodes page-level, region-level, and word-level information simultaneously. The framework goes beyond single-granularity architectures and uses a unified text-visual encoder to obtain multi-modal features across granularities, which makes it possible to project the multi-granular features into the same hyperspace. To compensate for the region-word correlation that region-level models lack, we design a cross-granular attention mechanism and specific pre-training tasks that reinforce the model's learning of the hierarchy between regions and words. Experiments demonstrate that our model learns features that perform well across granularities and lead to improvements in downstream tasks.
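To make the idea of cross-granular attention over a shared feature space more concrete, the following is a minimal, illustrative sketch rather than the authors' implementation: it assumes word, region, and page features are first projected into one hidden space and that word tokens then attend over the concatenated multi-granular context with standard multi-head attention. All module names, dimensions, and the choice of off-the-shelf attention are assumptions made for illustration.

```python
import torch
import torch.nn as nn


class CrossGranularAttention(nn.Module):
    """Sketch: words attend over page-, region-, and word-level features
    after all three granularities are projected into the same space."""

    def __init__(self, word_dim=768, region_dim=768, page_dim=768, hidden=768, heads=8):
        super().__init__()
        # Project each granularity into a shared hidden space.
        self.word_proj = nn.Linear(word_dim, hidden)
        self.region_proj = nn.Linear(region_dim, hidden)
        self.page_proj = nn.Linear(page_dim, hidden)
        # Words act as queries; the multi-granular memory supplies keys/values.
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)

    def forward(self, words, regions, page):
        # words:   (B, num_words, word_dim)
        # regions: (B, num_regions, region_dim)
        # page:    (B, 1, page_dim)
        w = self.word_proj(words)
        r = self.region_proj(regions)
        p = self.page_proj(page)
        context = torch.cat([p, r, w], dim=1)  # page + region + word memory
        out, weights = self.attn(query=w, key=context, value=context)
        return out, weights  # contextualized word features, attention weights


if __name__ == "__main__":
    B = 2
    words = torch.randn(B, 50, 768)   # e.g., OCR word embeddings
    regions = torch.randn(B, 6, 768)  # e.g., paragraph/figure region embeddings
    page = torch.randn(B, 1, 768)     # e.g., whole-page embedding
    out, attn = CrossGranularAttention()(words, regions, page)
    print(out.shape)                  # torch.Size([2, 50, 768])
```

In this sketch, the attention weights between word queries and region keys are the quantity one would supervise with hierarchy-aware pre-training objectives of the kind the abstract describes; the specific losses used by MGDoc are not reproduced here.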