Semi-automatic Data Enhancement for Document-Level Relation Extraction with Distant Supervision from Large Language Models

Abstract
Document-level Relation Extraction (DocRE), which aims to extract relations from a long context, is a critical challenge in achieving fine-grained structural comprehension and generating interpretable document representations. Inspired by recent advances in the in-context learning capabilities emergent from large language models (LLMs), such as ChatGPT, we aim to design an automated annotation method for DocRE with minimal human effort. Unfortunately, vanilla in-context learning is infeasible for document-level Relation Extraction (RE) due to the large number of predefined fine-grained relation types and the uncontrolled generation of LLMs. To tackle this issue, we propose a method that integrates a Large Language Model (LLM) and a natural language inference (NLI) module to generate relation triples, thereby augmenting document-level relation datasets. We demonstrate the effectiveness of our approach by introducing an enhanced dataset known as DocGNRE, which excels in re-annotating numerous long-tail relation types. We are confident that our method holds the potential for broader applications in domain-specific relation type definitions and offers tangible benefits in advancing generalized language semantic comprehension.
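The abstract describes a two-stage pipeline: an LLM proposes candidate relation triples from a document, and an NLI module checks whether each candidate is actually entailed by the text before it is added to the augmented dataset. The Python sketch below illustrates that flow under stated assumptions only: the prompt/proposal step is a stub, the triple-to-hypothesis verbalizer and the entailment threshold are illustrative choices, and roberta-large-mnli is used as a stand-in NLI module; none of these are claimed to be the authors' exact implementation.

from transformers import pipeline

# Assumed NLI module: judges whether the document (premise) entails a
# verbalized candidate triple (hypothesis). roberta-large-mnli is an
# illustrative choice, not necessarily the model used in the paper.
nli = pipeline("text-classification", model="roberta-large-mnli")

def propose_triples(document: str) -> list[tuple[str, str, str]]:
    """Stand-in for the LLM step: in practice an LLM (e.g. ChatGPT) would be
    prompted to list (head, relation, tail) triples inferable from the document.
    The returned triple here is a hypothetical example for illustration only."""
    return [("Barack Obama", "place of birth", "Honolulu")]

def verbalize(head: str, relation: str, tail: str) -> str:
    """Turn a candidate triple into a natural-language hypothesis for the NLI check
    (a simple template; the actual verbalization scheme is an assumption)."""
    return f"{head}'s {relation} is {tail}."

def filter_with_nli(document: str, triples, threshold: float = 0.9):
    """Keep only triples whose verbalization the NLI model labels as entailed
    by the document with sufficient confidence."""
    kept = []
    for head, rel, tail in triples:
        out = nli({"text": document, "text_pair": verbalize(head, rel, tail)})
        result = out[0] if isinstance(out, list) else out
        if result["label"] == "ENTAILMENT" and result["score"] >= threshold:
            kept.append((head, rel, tail))
    return kept

if __name__ == "__main__":
    doc = "Barack Obama was born in Honolulu, Hawaii."
    candidates = propose_triples(doc)
    print(filter_with_nli(doc, candidates))

In this sketch, the NLI filter acts as the guardrail against the "uncontrolled generation" problem mentioned above: triples the LLM hallucinates but the document does not support are discarded rather than added to the dataset.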
Authors
Junpeng Li, Zixia Jia, Zilong Zheng
Publication Year
2023
https://arxiv.org/pdf/2311.07314.pdf
Publication Venue
EMNLP