PDF-WuKong: A Large Multimodal Model for Efficient Long PDF Reading with End-to-End Sparse Sampling

1 Huazhong University of Science and Technology   2 Huawei Inc.
* Equal Contribution

Abstract


Document understanding is a challenging task that requires processing and comprehending large amounts of textual and visual information. Recent advances in Large Language Models (LLMs) have significantly improved the performance on this task. However, existing methods typically focus on either plain text or a limited number of document images, and struggle to handle long PDF documents with interleaved text and images, especially academic papers. In this paper, we introduce PDF-WuKong, a multimodal large language model (MLLM) designed to enhance multimodal question answering (QA) over long PDF documents. PDF-WuKong incorporates a sparse sampler that operates on both text and image representations, significantly improving the efficiency and capability of the MLLM. The sparse sampler is integrated with the MLLM's image encoder and selects the paragraphs or diagrams most pertinent to the user query for processing by the language model. To effectively train and evaluate our model, we construct PaperPDF, a dataset built from a broad collection of academic papers sourced from arXiv; multiple strategies are proposed to automatically generate 1M QA pairs along with their corresponding evidence sources. Experimental results demonstrate the superiority and high efficiency of our approach over other models on the task of long multimodal PDF understanding, surpassing proprietary products by an average of 8.6% in F1. Our code and dataset will be released at https://github.com/yh-hust/PDF-Wukong.

Method

Overall Framework of PDF-WuKong. Our pipeline consists of three parts: a document parser, a sparse sampler, and a large language model. The document parsing stage first converts the input PDF document into machine-readable content of interleaved text and images. Then, the sparse sampler encodes the text blocks and images separately and caches their embeddings. When a user inputs a query, the most relevant content is sampled using a simple similarity measure. Finally, the query and the sampled tokens are fed into the LLM to generate the answer.
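
The query-conditioned sampling step can be pictured as embedding-similarity retrieval over cached chunk embeddings. The following is a minimal sketch under stated assumptions: the chunk format, the text/image encoder callables, and the toy encoder are illustrative placeholders, not the released PDF-WuKong implementation.

import numpy as np

def cache_chunk_embeddings(chunks, text_encoder, image_encoder):
    # Encode every parsed chunk once and cache the embeddings (done offline per PDF).
    embeddings = []
    for chunk in chunks:
        if chunk["type"] == "text":
            embeddings.append(text_encoder(chunk["content"]))
        else:  # image chunk
            embeddings.append(image_encoder(chunk["content"]))
    return np.stack(embeddings)

def sparse_sample(query, chunks, cached_embeddings, text_encoder, top_k=5):
    # Return the top-k chunks most similar to the query under cosine similarity.
    q = text_encoder(query)
    sims = cached_embeddings @ q / (
        np.linalg.norm(cached_embeddings, axis=1) * np.linalg.norm(q) + 1e-8
    )
    top_idx = np.argsort(-sims)[:top_k]
    return [chunks[i] for i in top_idx]

# Toy usage: dummy 8-dim random encoder just so the sketch runs end to end.
rng = np.random.default_rng(0)
toy_encoder = lambda _content: rng.standard_normal(8)
chunks = [{"type": "text", "content": "Section 3 describes the sampler."},
          {"type": "image", "content": "figure_2.png"}]
cache = cache_chunk_embeddings(chunks, toy_encoder, toy_encoder)
selected = sparse_sample("How does the sparse sampler work?", chunks, cache, toy_encoder, top_k=1)

The selected chunks, together with the query, are what the LLM sees, which keeps the input length small even for long documents.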

PaperPDF Construction

The construction process of PaperPDF can be divided into four steps: structured parsing, rule-based extraction, prompt construction, and filtering. We obtained 89k PDF documents from the arXiv repository as our document set \( \mathbb{D} \). For each document \( D \) in \( \mathbb{D} \), we first employ Grobid to parse the document and extract its text chunks \( \{T_{1}, T_{2}, \ldots, T_{m}\} \) and image chunks \( \{I_{1}, I_{2}, \ldots, I_{n}\} \). Subsequently, predefined rules are used to randomly select chunks from the document, which may consist of text chunks \( T_{i} \), image chunks \( I_{j} \), or a combination of both. The selected chunks are fed into commercial MLLM products according to different prompt templates, which then generate a question \( Q \) related to the input chunks along with the corresponding answer \( A \). Notably, for the training set we used Gemini for generation due to its free and rapid accessibility, while the higher-performance GPT-4V was employed to construct the test set, ensuring the validity and robustness of the evaluation. Finally, we devise a set of rules to automatically filter the generated training and test sets; samples with too-short questions, too-long answers, non-English text, etc. are removed. Further manual checking is conducted on the test set to ensure the reliability of the evaluation. PaperPDF consists of two types of QA pairs: single-evidence and multi-evidence.
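
The final filtering step can be illustrated with a small rule-based sketch. The thresholds and the ASCII-ratio language check below are assumptions for illustration only; the exact rules and cut-offs used to build PaperPDF are not reproduced here.

MIN_QUESTION_WORDS = 5    # assumed threshold: drop too-short questions
MAX_ANSWER_WORDS = 300    # assumed threshold: drop too-long answers

def is_mostly_english(text, threshold=0.9):
    # Heuristic language check: fraction of ASCII characters in the string.
    if not text:
        return False
    ascii_chars = sum(1 for c in text if ord(c) < 128)
    return ascii_chars / len(text) >= threshold

def keep_qa_pair(question, answer):
    # Return True if the generated QA pair passes all rule-based filters.
    if len(question.split()) < MIN_QUESTION_WORDS:
        return False
    if len(answer.split()) > MAX_ANSWER_WORDS:
        return False
    if not (is_mostly_english(question) and is_mostly_english(answer)):
        return False
    return True

# Toy usage on a couple of generated pairs.
qa_pairs = [
    ("What dataset does the paper use for evaluation?", "It evaluates on PaperPDF."),
    ("Why?", "Too short a question, so this pair is dropped."),
]
filtered = [(q, a) for q, a in qa_pairs if keep_qa_pair(q, a)]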

Example Comparisons


BibTeX

@article{xie2024pdfwukong,
  title={PDF-WuKong: A Large Multimodal Model for Efficient Long PDF Reading with End-to-End Sparse Sampling},
  author={Xie, Xudong and Yin, Liang and Yan, Hao and Liu, Yang and Ding, Jing and Liao, Minghui and Liu, Yuliang and Chen, Wei and Bai, Xiang},
  year={2024},
  journal={arXiv preprint arXiv:2410.05970},
  url={https://arxiv.org/abs/2410.05970},
}