Automating pharmacovigilance evidence generation: Using large language models to produce context-aware SQL
Data files (Feb 03, 2025 version; 125.34 KB total):
- business_context_document.zip (80 KB)
- essential_tables.txt (1.01 KB)
- NLQ_Queries.xlsx (42.53 KB)
- README.md (1.81 KB)
Abstract
Objective: To enhance the accuracy of information retrieval from pharmacovigilance (PV) databases by employing Large Language Models (LLMs) to convert natural language queries (NLQs) into Structured Query Language (SQL) queries, leveraging a business context document.
Materials and Methods: We used OpenAI's GPT-4 model within a retrieval-augmented generation (RAG) framework, enriched with a business context document, to transform NLQs into executable SQL queries. Each NLQ was presented to the LLM randomly and independently to prevent memorization. The study was conducted in three phases that varied query complexity and assessed the LLM's performance both with and without the business context document.
Results: Our approach significantly improved NLQ-to-SQL accuracy, increasing from 8.3% with the database schema alone to 78.3% with the business context document. This enhancement was consistent across low, medium, and high complexity queries, indicating the critical role of contextual knowledge in query generation.
Discussion: The integration of a business context document markedly improved the LLM's ability to generate accurate SQL queries (i.e., queries that are both executable and return semantically appropriate results). Accuracy reached a maximum of 85% when high-complexity queries were excluded, suggesting promise for routine deployment.
Conclusion: This study presents a novel approach to employing LLMs for safety data retrieval and analysis, demonstrating significant advancements in query generation accuracy. The methodology offers a framework applicable to various data-intensive domains, enhancing the accessibility of information retrieval for non-technical users.
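The prompting step described in the abstract, combining the database schema, the business context document, and the user's NLQ into a single request to the model, can be sketched as below. This is a minimal illustration under assumed names (`build_prompt` and the toy schema/context strings are hypothetical), not the authors' actual pipeline code; the call to the OpenAI SDK is shown only as a comment since it requires an API key.

```python
# Sketch of assembling an NLQ-to-SQL prompt with schema plus business context.
# All names and example strings here are illustrative assumptions.

def build_prompt(schema: str, business_context: str, nlq: str) -> str:
    """Assemble one prompt asking the LLM to translate an NLQ into SQL."""
    return (
        "You write SQL for a pharmacovigilance database.\n\n"
        f"Database schema:\n{schema}\n\n"
        f"Business context:\n{business_context}\n\n"
        f"Question: {nlq}\n"
        "Return only an executable SQL query."
    )

prompt = build_prompt(
    schema="CREATE TABLE case_report (case_id INT, drug TEXT, event TEXT);",
    business_context="A 'case' is a single adverse event report.",
    nlq="How many cases mention drug X?",
)

# The prompt would then be sent to the model, e.g. with the OpenAI Python SDK:
# response = client.chat.completions.create(
#     model="gpt-4", messages=[{"role": "user", "content": prompt}]
# )
```

In the study's Phase 1, the business-context portion was omitted (schema only), which is the condition that yielded 8.3% accuracy.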
README: Automating Pharmacovigilance Evidence Generation: Using Large Language Models to Produce Context-Aware SQL
https://doi.org/10.5061/dryad.2280gb63n
Description of the data and file structure
NLQ_Queries.xlsx contains the set of test NLQs along with the results of the LLM response in each phase of the experiment. Each NLQ is also accompanied by its computed complexity score.
The business context document is supplied as a PDF, together with the Python and R code used to generate our results. The essential tables used in Phases 2 and 3 of the experiment are included in the text file.
Files and variables
File: NLQ_Queries.xlsx
Description: Contains all NLQ queries with the LLM output and the pass/fail status of each phase.
Column Definitions:
Below are the column names in order with a detailed description.
- User NLQ: Plain text database query
- Phase_1: Pass or Fail status indicator "Pass, Partial, or Fail"
- phase_1_response: LLM Response text
- Phase_2: Pass or Fail status indicator "Pass, Partial, or Fail"
- phase_2_response: LLM Response text
- Phase_3: Pass or Fail status indicator "Pass, Partial, or Fail"
- phase_3_response: LLM Response text
- COMPLEXITY_SCORE: Computed numerical complexity score of the gold-standard query
- COMPLEXITY: Level of complexity assessed by SME (LOW, MEDIUM or HIGH)
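Given the column layout above, per-phase pass rates can be computed with a few lines of pandas. The toy rows below are fabricated to mimic the spreadsheet's structure; in practice you would load the real file with `pd.read_excel("NLQ_Queries.xlsx")`. Note the published statistical analysis was done in R, so this is only an illustrative sketch.

```python
import pandas as pd

# Toy rows mimicking the column layout of NLQ_Queries.xlsx described above;
# values are invented for illustration, not taken from the dataset.
df = pd.DataFrame({
    "User NLQ": ["q1", "q2", "q3", "q4"],
    "Phase_1": ["Fail", "Fail", "Pass", "Fail"],
    "Phase_3": ["Pass", "Pass", "Pass", "Fail"],
    "COMPLEXITY": ["LOW", "MEDIUM", "HIGH", "HIGH"],
})

# Pass rate per phase (here "Partial" would count as not Pass).
pass_rate_p1 = (df["Phase_1"] == "Pass").mean()
pass_rate_p3 = (df["Phase_3"] == "Pass").mean()

# Pass rate broken down by SME-assessed complexity level.
by_complexity = (
    df.assign(passed=df["Phase_3"] == "Pass")
      .groupby("COMPLEXITY")["passed"]
      .mean()
)
```

The same `groupby` pattern extends to comparing Phase 1 (schema only) against Phases 2 and 3 (with the business context document) across complexity levels.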
File: business_context_document.zip
Description: Contains PDF of the full business context document utilized in the study.
File: essential_tables.txt
Description: Contains a list of all essential tables used in Phases 2 and 3 of the experimental study.
Code/software
Python and R code used to create the LLM pipeline and to perform the supporting statistical analysis.
Methods
Test set of NLQs used in the paper Automating Pharmacovigilance Evidence Generation: Using Large Language Models to Produce Context-Aware SQL. Also included are the Python scripts for the LLM processing, the R code for statistical analysis of the results, and copies of the business context document and essential tables.