Unifying Large Language Models and Knowledge Graphs for Question Answering: Recent Advances and Opportunities

Abstract

Large language models (LLMs) have demonstrated remarkable performance on many question-answering (QA) tasks owing to their superior capabilities in natural language understanding and generation. However, due to their limited reasoning capacity, outdated or missing domain knowledge, expensive retraining costs, and bounded context lengths, LLM-based QA methods struggle with complex QA tasks such as multi-hop QA and long-context QA. Knowledge graphs (KGs) store structured knowledge in graph form and are effective for reasoning and interpretability, since they accumulate and convey explicit, relationship-based factual and domain-specific knowledge from the real world. To address these challenges and limitations of LLM-based QA, several research works that unify LLMs+KGs for QA have been proposed recently. This tutorial furnishes an overview of the state-of-the-art advances in unifying LLMs with KGs for QA, categorizing them into three groups according to the roles KGs play when unified with LLMs. The metrics and benchmark datasets for evaluating LLMs+KGs methods for QA are presented, and domain-specific industry applications and demonstrations are showcased. Finally, open challenges are summarized and opportunities for data management are highlighted.

Publication
In the 28th International Conference on Extending Database Technology (EDBT)
Chuangtao Ma
Postdoctoral Researcher