Labeling with Confidence


October 16, 2023

by Dhruva Bansal, Nihit Desai

Summary

When leveraging LLMs for data labeling, it is important to be able to accurately estimate the model's level of confidence in its own knowledge and reasoning. Doing so enables us to automatically reject low-confidence labels and ensemble LLMs optimally. We examine different techniques for estimating the confidence of LLM-generated labels, in the context of data labeling for NLP tasks. Key takeaways:

  • Token-level generation probabilities (commonly referred to as "logprobs") are by far the most accurate technique for estimating LLM confidence.
  • Explicitly prompting the LLM to output a confidence score, while popular, is highly unreliable. This technique had the lowest accuracy and the highest standard deviation across datasets. More often than not, the LLM just hallucinates some number.

We used Autolabel, our recently open-sourced library, to run all the experiments that are a part of this report.

Background & Motivation

Next generation data labeling tools like Autolabel leverage LLMs to create large, diverse labeled datasets. In an application domain like labeling where correctness is critical, it is imperative to be able to accurately estimate the model's level of confidence in its own knowledge and reasoning. Doing so enables us to automatically reject low-confidence labels, ensemble LLMs optimally, and learn more about the strengths and weaknesses of any given LLM.

As a motivating example, consider the Toxic comments classification dataset from our labeling benchmark. If we are able to estimate the LLM's confidence level alongside the labels it generates, we can calibrate the model's label quality (% agreement with ground truth labels) at any given confidence score. We can then decide an operating point (confidence threshold) for the LLM and reject all labels below this threshold.
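To make the operating-point idea concrete, here is a minimal sketch (not Autolabel's implementation) of computing completion rate and label quality at every candidate confidence threshold, assuming we already have per-label confidence scores and correctness flags:

```python
# Minimal sketch of picking an operating point: for each candidate threshold,
# compute the fraction of labels kept (completion rate) and the % agreement
# with ground truth among the kept labels (label quality).
import numpy as np

def quality_vs_completion(confidences, correct):
    """confidences: per-label confidence scores; correct: whether label == ground truth."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    rows = []
    for threshold in np.unique(confidences):
        kept = confidences >= threshold
        completion_rate = kept.mean()           # fraction of labels we keep
        label_quality = correct[kept].mean()    # % agreement among kept labels
        rows.append((float(threshold), completion_rate, label_quality))
    return rows

# Example: choose the lowest threshold that reaches the target label quality.
for t, comp, qual in quality_vs_completion([0.91, 0.42, 0.77, 0.98, 0.35],
                                           [True, False, True, True, False]):
    print(f"threshold={t:.2f} completion={comp:.0%} quality={qual:.0%}")
```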

Label quality vs Completion rate for gpt-3.5-turbo and gpt-4 on the same dataset


This blog post summarizes our learnings around estimating the confidence of LLM-generated labels. We evaluate the efficacy of four different confidence estimation techniques across a wide range of text labeling tasks and datasets.

Methodology

[1] Setup

Experimentation setup for confidence estimation
  1. For an input x, we generate a label y by prompting the Labeling LLM (GPT-4). The setup and prompts for doing this follow exactly the methodology described in the LLM data labeling benchmark.
  2. Next, we estimate the confidence c, given x and y, using a Verifier LLM (FLAN-T5-XXL or GPT-4). This score quantifies the Verifier LLM's confidence in the given label y being correct. We describe the four confidence calculation methods that we benchmark in the section below.
  3. Then, we compare y with the ground truth label gt to decide whether the prediction is a "true positive" (y == gt) or a "false positive" (y != gt).
  4. Finally, we compute AUROC (Area Under the Receiver Operating Characteristic curve) to understand how "good" a specific confidence estimation method is. To build the ROC curve, we first compute true positive and false positive rates at various score thresholds, using all distinct values of c as thresholds.

For generating the labels themselves, we use GPT-4 as the labeling LLM in this evaluation. Ideally we'd like to use the same LLM for verification as well, but token-level generation probabilities (logprobs) are currently not available for OpenAI chat engine models (including GPT-4). Hence we use FLAN-T5-XXL as the verifier LLM. For techniques that don't rely on logprobs, we also report the AUROC numbers with GPT-4 as the verifier LLM.

[2] Metric

Credit: https://vitalflux.com/roc-curve-auc-python-false-positive-true-positive-rate/


The AUROC is a scalar value that summarizes the overall performance of a classifier by measuring the area under the ROC curve. It provides a measure of the classifier's ability to distinguish between the positive and negative classes across all possible classification thresholds. The confidence score that we are calculating can be viewed as the score of a binary classifier whose target is whether the model got a specific label correct (true positive) or incorrect (false positive). Using the plot above as a reference, we can see that a random classifier would have an AUROC of 0.5.
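As a concrete illustration, here is a small sketch of the metric computation: correctness of each LLM label is the binary target, the confidence score is the classifier score, and scikit-learn's roc_auc_score summarizes the ranking quality. The toy labels below are purely illustrative.

```python
# Minimal sketch of the evaluation metric: treat the confidence score as a
# classifier score for "the LLM label is correct" and compute the AUROC.
from sklearn.metrics import roc_auc_score

llm_labels   = ["toxic", "not toxic", "toxic", "toxic"]
ground_truth = ["toxic", "toxic", "not toxic", "toxic"]
confidences  = [0.94, 0.31, 0.45, 0.88]

is_correct = [int(y == gt) for y, gt in zip(llm_labels, ground_truth)]
auroc = roc_auc_score(is_correct, confidences)   # 0.5 ~ random ranking, 1.0 ~ perfect
print(f"AUROC: {auroc:.3f}")
```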

[3] Datasets

We included the following datasets for this evaluation. These datasets are available for download here.

List of datasets used for labeling in this report

[4] Techniques

Is the first option still a good method to try?


We benchmark the following four methods for confidence estimation:

P(True):

The labeling LLM (GPT-4) generates a label (llm_label) given an input prompt. Using this, we prompt the verifier LLM to complete the following sentence. The token generation probability of "Yes" is used as the confidence score.
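A minimal sketch of P(True) with a FLAN-T5 verifier is below; the exact verification prompt used in the benchmark is not reproduced here, so the wording is illustrative.

```python
# Sketch of P(True): ask the verifier a Yes/No question about the proposed label
# and use the generation probability of the "Yes" token as the confidence score.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xxl")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-xxl")

def p_true(prompt: str, llm_label: str) -> float:
    # Illustrative verification prompt, not the exact wording from the benchmark.
    verifier_prompt = (
        f"{prompt}\nProposed label: {llm_label}\n"
        "Is the proposed label correct? Answer Yes or No:"
    )
    inputs = tokenizer(verifier_prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=1,
                         output_scores=True, return_dict_in_generate=True)
    probs = torch.softmax(out.scores[0][0], dim=-1)     # distribution over the vocabulary
    yes_id = tokenizer("Yes", add_special_tokens=False).input_ids[0]
    return probs[yes_id].item()
```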

Prompting for Confidence Score:

The labeling LLM (GPT-4) generates a label (llm_label) given an input (prompt). Using this, we prompt the verifier LLM to complete the following sentence. The value output by the verifier LLM is parsed as a float and used as the confidence score. If parsing is unsuccessful, we give the sample a confidence score of 0.0 by default.
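A small sketch of this parsing step, assuming the verifier's raw completion is already in hand (the helper name is illustrative):

```python
# Sketch of parsing an explicitly prompted confidence score; anything that does not
# parse as a float falls back to the default score of 0.0.
def parse_confidence(verifier_output: str) -> float:
    try:
        return float(verifier_output.strip())
    except ValueError:
        return 0.0

print(parse_confidence(" 0.85 "))            # 0.85
print(parse_confidence("Fairly confident"))  # 0.0 (unparseable)
```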

Token probabilities:

For classification-like and QA tasks, this is simply the probability of the first token in the generated label. For NER, probabilities are first generated for all tokens, and the probabilities of the tokens within each entity are averaged to compute a confidence score per entity.

We use the Verifier LLM to estimate token probabilities for a prediction: we first generate logits over the full vocabulary at each position of the output sequence, then apply a softmax at each index to obtain a probability distribution and read off the probability of the token that actually appears in the prediction.
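Here is a rough sketch of that computation with a FLAN-T5 verifier via Hugging Face transformers; the teacher-forcing approach shown is one way to obtain per-position logits and may differ from Autolabel's internals.

```python
# Sketch of token-level probabilities: score the label with the verifier, softmax the
# logits at each output position, and read off the probability of each label token.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xxl")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-xxl")

def token_probabilities(prompt: str, label: str) -> list[float]:
    inputs = tokenizer(prompt, return_tensors="pt")
    label_ids = tokenizer(label, return_tensors="pt").input_ids
    # Teacher-force the label through the decoder to get logits at every position.
    logits = model(**inputs, labels=label_ids).logits      # shape: (1, label_len, vocab)
    probs = torch.softmax(logits, dim=-1)
    return [probs[0, i, tok].item() for i, tok in enumerate(label_ids[0])]

# Classification/QA: use the first token's probability as the confidence score.
# NER: average the token probabilities within each extracted entity.
```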

Entropy:
  • Exhaustive Entropy: This method is used to calculate entropy for classification-like tasks. First, we calculate the probability of each possible label, then compute the Shannon entropy (base 2) of the resulting distribution over labels (see the sketch below).
  • Semantic Entropy: This method is used to calculate entropy for generation-like tasks. We prompt the LLM to produce N predictions at a temperature of 0.5 and group them using pairwise ROUGE scores. We then calculate the average probability of each group of predictions and compute the entropy of that distribution.
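As a concrete illustration of the exhaustive variant, here is a minimal sketch that turns per-label probabilities (for example, the token probabilities computed above) into a base-2 Shannon entropy; the input dictionaries are illustrative.

```python
# Sketch of exhaustive entropy: normalize the per-label probabilities into a
# distribution and compute its Shannon entropy in bits. Lower entropy means the
# verifier concentrates probability mass on a single label.
import math

def exhaustive_entropy(label_probs: dict[str, float]) -> float:
    total = sum(label_probs.values())
    return -sum((p / total) * math.log2(p / total)
                for p in label_probs.values() if p > 0)

print(exhaustive_entropy({"toxic": 0.90, "not toxic": 0.05, "neutral": 0.05}))  # ~0.57
print(exhaustive_entropy({"toxic": 0.34, "not toxic": 0.33, "neutral": 0.33}))  # ~1.58
```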

Results

As explained in the previous section, we evaluated the performance of the four confidence estimation techniques described above across different NLP tasks and datasets.

How well do various confidence estimation techniques predict the correctness of LLM labels?

Takeaways:

  1. Token probability is still by far the most accurate and reliable technique for estimating LLM confidence. Across all four datasets, this technique achieves the highest AUROC (0.832) and the lowest standard deviation (+/- 0.07).
  2. Explicitly prompting the LLM to output a confidence score is the least accurate. This technique had the lowest AUROC (0.58) and the highest standard deviation (+/- 0.13) across datasets. More often than not, the LLM just hallucinates some number.
  3. Even with token probabilities, we do see some variability in how well the technique works for a given labeling task. From early internal exploration with public and proprietary datasets, we have found that fine-tuning the verifier LLM on the target labeling task improves calibration significantly. We hope to share more details about this exploration in a future blog post.

Here's a breakdown of the performance by dataset:


Qualitative Evaluation

To get a more intuitive understanding of which kinds of inputs LLMs are more or less confident about, here's what we did:

  1. Compute embeddings for all inputs in the dataset using sentence transformers
  2. Project all inputs into a 2D embedding map using UMAP
  3. Overlay this projection map with useful metadata, such as the confidence score and whether the LLM label is correct (i.e., matches the ground truth label), for each point (see the sketch below).
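A rough sketch of this pipeline, assuming sentence-transformers, umap-learn, and matplotlib are installed; the toy inputs, scores, and embedding model name are illustrative.

```python
# Sketch of the embedding map: embed the inputs, project to 2D with UMAP, and
# overlay confidence scores and correctness for each point.
import matplotlib.pyplot as plt
import umap
from sentence_transformers import SentenceTransformer

texts = [
    "I want to close my savings account",
    "My card payment was declined at the store",
    "How do I reset my online banking password?",
    "Why was I charged an overdraft fee twice?",
    "The ATM did not dispense cash but debited me",
    "I need to update my mailing address",
]
confidences = [0.95, 0.12, 0.88, 0.40, 0.15, 0.91]    # per-example confidence scores
is_correct  = [True, False, True, True, False, True]  # LLM label == ground truth label

embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(texts)
coords = umap.UMAP(n_neighbors=2, n_components=2, random_state=42).fit_transform(embeddings)

plt.scatter(coords[:, 0], coords[:, 1], c=confidences, cmap="viridis")
for (x, y), ok in zip(coords, is_correct):
    if not ok:
        plt.annotate("incorrect", (x, y))   # mark points where the LLM label was wrong
plt.colorbar(label="confidence score")
plt.show()
```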

We illustrate this with the Banking complaints classification dataset below. As you can see, clusters of low-confidence inputs correspond quite well to incorrect LLM labels:

Overlaying confidence score and whether the LLM label (y) == Ground truth label (gt) for the Banking Dataset


Further, let's examine a few rows with low-confidence LLM labels (<= 0.2). Many of these inputs are ambiguous, and it is hard even for a human annotator to tell which of the labels (the one provided as ground truth in the dataset, or the LLM-generated one) is "correct".

Input examples with low confidence


Estimating Confidence with Autolabel

The Autolabel library relies on token-level generation probabilities to estimate LLM label confidence. Generating confidence scores alongside labels is a simple config change: setting the key compute_confidence = True initiates confidence score computation.

Enabling confidence estimation in the library is a one line config change

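For reference, here is a sketch of what such a config might look like; the surrounding keys are illustrative and the authoritative schema is in the Autolabel documentation, but the relevant change is the compute_confidence flag.

```python
# Illustrative Autolabel-style config. Only compute_confidence is the change the
# post describes; the remaining keys are a plausible example, not the full schema.
config = {
    "task_name": "ToxicCommentClassification",
    "task_type": "classification",
    "model": {
        "provider": "openai",
        "name": "gpt-4",
        "compute_confidence": True,   # emit a confidence score with every label
    },
    "prompt": {
        "task_guidelines": "Classify the comment as 'toxic' or 'not toxic'.",
        "labels": ["toxic", "not toxic"],
    },
}
```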

However, very few LLM providers today support extraction of token level generation probabilities alongside the completion. For all other models, Refuel provides access to a hosted Verifier LLM (currently a FLAN T5-XXL model) via an API to estimate logprobs as a post-processing step after the label is generated, regardless of the LLM that was originally used to generate the label.

You can request an API key for accessing Refuel's Verifier LLM by filling out the form here. More information about confidence computation in Autolabel can be found on the documentation page.


Acknowledgements

We would like to thank Arvind Neelakantan, Ayush Kanodia, Joseph Turian, Neil Dhruva and Samar Khanna for their feedback on earlier drafts of this report.

If you discover any issues or potential improvements to the shared results, we would love your contribution! You can open an issue on Github or join our Discord.