Reasoning over ontologies using Large Language Models

This manuscript was automatically generated on May 24, 2023.

Authors

✉ — Correspondence possible via GitHub Issues or email to Chris Mungall <cjmungall@lbl.gov>.

Abstract

Reasoning is a core component of human intelligence and a key goal of AI research. Reasoning has traditionally been the domain of symbolic AI, but recent advances in deep learning, and in particular Large Language Models (LLMs) such as GPT-3, suggest that LLMs have some latent reasoning ability.

To investigate this, we created a GPT-based reasoning agent intended to perform ontological reasoning via a few-shot learning approach, combining instruction prompting with in-context examples. We also created a series of benchmarks to test the ontological reasoning ability of LLMs and other systems.

Our results indicate that GPT is a poor reasoner, able to perform ontological reasoning only on some of the simplest tasks. Even on these simple tasks, results are highly variable, with performance degrading as the size of the ontology and the complexity of the explanation increase. In the cases where it does perform a task successfully, it appears to rely on an advanced, pattern-based form of lookup.

Our results indicate that a maximalist approach to using LLMs may be limiting, and that successful AI systems should employ hybrid strategies.

Introduction

Citation by DOI [1].

Methods

..
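
The abstract describes a reasoning agent that prompts GPT with an instruction plus in-context examples. The following is a minimal sketch of how such a few-shot prompt might be assembled; it is not the implementation used in this work. The OpenAI Python client, the model name, the instruction wording, the entailment task format, and the example axioms are all assumptions for illustration, since the actual prompts and benchmarks are not specified in this draft.

```python
# Minimal sketch (assumed, not the authors' implementation) of a few-shot,
# instruction-prompted entailment query over simple subClassOf axioms,
# using the OpenAI Python client (openai >= 1.0).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical instruction prompt.
INSTRUCTION = (
    "You are an ontology reasoner. Given a set of ontology axioms, decide "
    "whether the query axiom is entailed. Reply 'entailed' or 'not entailed'."
)

# Hypothetical in-context examples: (axioms, query, expected answer).
EXAMPLES = [
    ("finger SubClassOf hand_part. hand_part SubClassOf limb_part.",
     "finger SubClassOf limb_part?", "entailed"),
    ("nucleus SubClassOf organelle. mitochondrion SubClassOf organelle.",
     "nucleus SubClassOf mitochondrion?", "not entailed"),
]

def build_messages(axioms: str, query: str) -> list[dict]:
    """Assemble an instruction plus few-shot chat prompt for one query."""
    messages = [{"role": "system", "content": INSTRUCTION}]
    for ex_axioms, ex_query, ex_answer in EXAMPLES:
        messages.append(
            {"role": "user", "content": f"Axioms: {ex_axioms}\nQuery: {ex_query}"}
        )
        messages.append({"role": "assistant", "content": ex_answer})
    messages.append({"role": "user", "content": f"Axioms: {axioms}\nQuery: {query}"})
    return messages

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; the manuscript refers generically to GPT
    messages=build_messages(
        "neuron SubClassOf cell. cell SubClassOf material_entity.",
        "neuron SubClassOf material_entity?",
    ),
)
print(response.choices[0].message.content)
```

In this sketch, evaluating a benchmark would amount to looping such queries over axiom sets of increasing size and comparing the model's answers against entailments computed by a conventional reasoner; the specific benchmark construction is left to the full Methods text.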

Results

blah

Discussion

blah

Conclusions

blah

References

1.
Sci-Hub provides access to nearly all scholarly literature
Daniel S Himmelstein, Ariel Rodriguez Romero, Jacob G Levernier, Thomas Anthony Munro, Stephen Reid McLaughlin, Bastian Greshake Tzovaras, Casey S Greene
eLife (2018-03-01) https://doi.org/ckcj
DOI: 10.7554/elife.32822 · PMID: 29424689 · PMCID: PMC5832410