AutoMix:
Automatically Mixing Language Models

Carnegie Mellon University · Google · Google DeepMind · IIT Delhi · University of Southern California

TLDR: AutoMix optimizes cost and accuracy by routing queries between small and large language models,
using few-shot self-verification and meta-verification.

Abstract

Large language models (LLMs) are now available in various sizes and configurations from cloud API providers. While this diversity offers a broad spectrum of choices, effectively leveraging these options to optimize computational cost and performance remains challenging. In this work, we present AutoMix, an approach that strategically routes queries to larger LMs based on the approximate correctness of outputs from a smaller LM. Central to AutoMix is a few-shot self-verification mechanism, which lets the smaller LM estimate the reliability of its own outputs without requiring training. Given that verifications can be noisy, AutoMix employs a meta-verifier to refine the accuracy of these assessments. Our experiments with LLAMA2-13B/70B on five context-grounded reasoning datasets demonstrate that AutoMix surpasses established baselines, improving the incremental benefit per cost by up to 89%.
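
To illustrate the idea, the core loop can be sketched in a few lines of Python. The helper names below (call_llm, self_verification_confidence, automix_route) and the fixed confidence threshold are hypothetical simplifications rather than the automix-llm API; in the full method, the hard threshold is replaced by the trained meta-verifier.

# Minimal sketch of AutoMix-style routing. All names here are hypothetical;
# call_llm(model, prompt) is assumed to wrap any black-box completion API.

def self_verification_confidence(call_llm, small_model, context, question, answer, n_samples=8):
    """Estimate the probability that the answer is correct by sampling a
    self-verification prompt and counting 'yes' votes (the paper's version
    additionally includes few-shot exemplars in the prompt)."""
    prompt = (
        f"Context: {context}\n"
        f"Question: {question}\n"
        f"Proposed answer: {answer}\n"
        "Is the proposed answer correct given the context? Reply 'yes' or 'no'."
    )
    votes = sum(
        call_llm(small_model, prompt).strip().lower().startswith("yes")
        for _ in range(n_samples)
    )
    return votes / n_samples

def automix_route(call_llm, small_model, large_model, context, question, threshold=0.7):
    """Answer with the small model; escalate to the large model only when
    self-verification confidence falls below the threshold."""
    qa_prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"
    answer = call_llm(small_model, qa_prompt)
    confidence = self_verification_confidence(call_llm, small_model, context, question, answer)
    if confidence >= threshold:
        return answer  # keep the cheap answer
    return call_llm(large_model, qa_prompt)  # route to the larger model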

Key-Highlights

  • 🔁 Optimized Model Mixing: AutoMix optimally combines small and large LLM APIs for cost-effective performance.
  • 💡 Innovative Meta-verifier: A secondary verifier layer improves noisy few-shot self-verification (a simplified sketch follows this list).
  • 📈 Enhanced Efficiency: AutoMix improves the incremental benefit per cost by up to 89% over baselines.
  • 🔌 Ready for Deployment: Applicable out-of-the-box to black-box APIs without access to weights.
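
To make the meta-verifier concrete, the sketch below shows a simplified cost-aware decision rule that could replace the fixed threshold in the routing sketch above. It is only an illustration: the calibration probabilities are assumed to be estimated on validation data, and the released package exposes a POMDP-based meta-verifier (see the code snippet below) rather than this closed-form rule.

# Hypothetical, simplified meta-verification rule (not the package's POMDP meta-verifier).

def meta_verify(verifier_confidence, p_correct_if_verified, p_correct_if_unverified,
                p_large_correct, extra_cost, gain_per_unit_cost=0.05):
    """Decide whether to keep the small model's answer or route to the large model,
    trading the expected accuracy gain against the extra cost of the larger model."""
    # Expected accuracy of keeping the small model's answer, combining the noisy
    # verifier signal with calibration estimates (assumed to come from validation data).
    p_small_correct = (verifier_confidence * p_correct_if_verified
                       + (1.0 - verifier_confidence) * p_correct_if_unverified)
    expected_gain = p_large_correct - p_small_correct
    # Route only if the expected accuracy gain justifies the added cost.
    return "route_to_large" if expected_gain > gain_per_unit_cost * extra_cost else "keep_small"

# Example (made-up numbers): a confident verifier keeps the cheap answer,
# while a doubtful one routes to the larger model.
#   meta_verify(0.9, 0.85, 0.40, 0.90, extra_cost=5.0)  -> "keep_small"
#   meta_verify(0.2, 0.85, 0.40, 0.90, extra_cost=5.0)  -> "route_to_large"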

Results Summary

AutoMix outperforms baselines by wide margins, even though the baselines use domain-specific training and a low-cost verifier.


AutoMix provides a controllable and interpretable mechanism for trading off cost and quality.


AutoMix's meta-verifier boosts performance even when the few-shot verifier performs poorly or is not well calibrated.


Using AutoMix in your code

Using AutoMix in your code takes only 3-4 lines of changes.

1. Installing

pip install automix-llm

2. Training and Inference

from automix import Automix, POMDP

# Create an AutoMix instance with a POMDP meta-verifier.
mixer = Automix(POMDP(*args, **kwargs))

# Fit the meta-verifier on your training data, then evaluate on held-out data.
mixer.train(train_data)
results = mixer.evaluate(test_data)
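
Here, POMDP is the trainable meta-verifier from the paper: train fits it on your data, and evaluate measures the resulting cost-performance tradeoff on held-out data. See the repository for the expected format of train_data and test_data.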

3. High Customizability

Check out our GitHub repo for more details!

BibTeX

@misc{madaan2023automix,
      title={AutoMix: Automatically Mixing Language Models}, 
      author={Aman Madaan and Pranjal Aggarwal and Ankit Anand and Srividya Pranavi Potharaju and Swaroop Mishra and Pei Zhou and Aditya Gupta and Dheeraj Rajagopal and Karthik Kappaganthu and Yiming Yang and Shyam Upadhyay and Mausam and Manaal Faruqui},
      year={2023},
      eprint={2310.12963},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}