
Revisiting Prompt Engineering via Declarative Crowdsourcing

Authors:
Aditya G Parameswaran, Shreya Shankar, Parth Asawa, Naman Jain, Yujie Wang
Abstract

Large language models (LLMs) are incredibly powerful at comprehending and generating data in the form of text, but are brittle and error-prone. There has been an advent of toolkits and recipes centered around so-called prompt engineering—the process of asking an LLM to do something via a series of prompts. However, for LLM-powered data processing workflows in particular, optimizing for quality while keeping cost bounded is a tedious, manual process. We put forth a research agenda around declarative prompt engineering. We view LLMs like crowd workers and explore leveraging ideas from the declarative crowdsourcing literature—including multiple prompting strategies, ensuring internal consistency, and exploring hybrid LLM/non-LLM approaches—to make prompt engineering a more principled process. Preliminary case studies on sorting, entity resolution, and missing value imputation demonstrate the promise of our approach.
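As a concrete illustration of the LLMs-as-crowd-workers view, the sketch below (not taken from the paper; `ask_llm` is a hypothetical stand-in for any chat-completion call) frames sorting as a series of small pairwise-comparison tasks, repeats each prompt several times, and resolves disagreements by majority vote, one simple way to enforce internal consistency in the spirit of crowdsourcing quality control.

```python
from collections import Counter
import functools


def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder: wire this to your chat-completion client."""
    raise NotImplementedError("plug in an LLM call here")


def compare(a: str, b: str, criterion: str, votes: int = 3) -> int:
    """Issue the same pairwise 'task' several times and take a majority vote."""
    prompt = (
        f"Which item is greater with respect to {criterion}?\n"
        f"A: {a}\nB: {b}\n"
        "Answer with exactly one letter: A or B."
    )
    answers = Counter(ask_llm(prompt).strip().upper()[:1] for _ in range(votes))
    # Positive: a ranks above b; negative: b ranks above a (ties default to A).
    return 1 if answers["A"] >= answers["B"] else -1


def llm_sort(items: list[str], criterion: str) -> list[str]:
    """Sort items in ascending order using LLM pairwise comparisons."""
    key = functools.cmp_to_key(lambda a, b: compare(a, b, criterion))
    return sorted(items, key=key)


# Usage (requires a real ask_llm implementation):
# llm_sort(["ant", "dog", "elephant"], criterion="typical body size")
```

Entity resolution and missing value imputation could be cast the same way: each candidate match or missing cell becomes a small task whose repeated answers are aggregated, and which may be combined with cheaper non-LLM heuristics to keep cost bounded.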