Language Models Enable Simple Systems for Generating Structured Views of Heterogeneous Data Lakes

by Simran Arora, et al.

A long-standing goal of the data management community is to develop general, automated systems that ingest semi-structured documents and output queryable tables without human effort or domain-specific customization. Given the sheer variety of potential documents, state-of-the-art systems make simplifying assumptions and use domain-specific training. In this work, we ask whether we can maintain generality by using large language models (LLMs). LLMs, which are pretrained on broad data, can perform diverse downstream tasks simply by conditioning on natural language task descriptions. We propose and evaluate EVAPORATE, a simple, prototype system powered by LLMs. We identify two fundamentally different strategies for implementing this system: prompt the LLM to directly extract values from documents, or prompt the LLM to synthesize code that performs the extraction. Our evaluations show a cost-quality tradeoff between these two approaches. Code synthesis is cheap, but far less accurate than directly processing each document with the LLM. To improve quality while maintaining low cost, we propose an extended code synthesis implementation, EVAPORATE-CODE+, which achieves better quality than direct extraction. Our key insight is to generate many candidate functions and ensemble their extractions using weak supervision. EVAPORATE-CODE+ not only outperforms the state-of-the-art systems, but does so using a sublinear pass over the documents with the LLM. This equates to a 110x reduction in the number of tokens the LLM needs to process, averaged across 16 real-world evaluation settings of 10k documents each.
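To make the key insight concrete, here is a minimal illustrative sketch of ensembling many candidate extraction functions. This is not the paper's implementation: the candidate extractors below are hypothetical examples standing in for LLM-synthesized code, and a plain majority vote stands in for the paper's weak-supervision aggregation.

```python
# Sketch: run several (imperfect) candidate extractors over a document and
# aggregate their outputs by majority vote. EVAPORATE-CODE+ replaces this
# vote with a weak-supervision model; the extractors here are hypothetical.
from collections import Counter
import re

def extract_a(doc):
    # Candidate 1: look for an explicit "Date:" field.
    m = re.search(r"Date:\s*(\S+)", doc)
    return m.group(1) if m else None

def extract_b(doc):
    # Candidate 2: grab the first ISO-formatted date anywhere in the text.
    m = re.search(r"\b(\d{4}-\d{2}-\d{2})\b", doc)
    return m.group(1) if m else None

def extract_c(doc):
    # Candidate 3: a brittle heuristic -- take the last token of line 1.
    tokens = doc.splitlines()[0].split()
    return tokens[-1] if tokens else None

def ensemble_extract(doc, candidates):
    """Apply every candidate function and return the majority value."""
    values = [f(doc) for f in candidates]
    values = [v for v in values if v is not None]
    if not values:
        return None
    return Counter(values).most_common(1)[0][0]

doc = "Date: 2023-04-01\nSome report body mentioning 1999-12-31 once."
print(ensemble_extract(doc, [extract_a, extract_b, extract_c]))
# prints 2023-04-01
```

Because the functions (not the LLM) process the corpus, the LLM only needs to see a small sample of documents to synthesize candidates, which is what makes the sublinear pass possible.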



