Help & Support
Everything you need to know about FindLLM — how it works, where the data comes from, and how to get the most out of it.
FindLLM is a free, independent platform for comparing Large Language Models (LLMs) by quality, speed, and price. It aggregates data from Artificial Analysis (benchmarks), OpenRouter (pricing), and HuggingFace (model metadata) to help developers and teams choose the right AI model.
FindLLM aggregates data from three sources: Artificial Analysis (benchmarks, quality scores, speed metrics), OpenRouter (pricing, context lengths, provider availability), and HuggingFace (downloads, licenses, open source metadata). Data is refreshed hourly to daily depending on the source.
The Quality Index is a composite score developed by Artificial Analysis that combines results from multiple benchmarks — including MMLU-Pro, GPQA, LiveCodeBench, and MATH-500 — into a single comparable metric.
Blended price is the cost per 1 million tokens assuming a 3:1 output-to-input token ratio. This weighting reflects typical conversational usage, where models generate more tokens than they receive, and makes per-model costs directly comparable.
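As a minimal sketch, the 3:1 ratio above can be read as a weighted average: out of every 4 tokens, 1 is input and 3 are output. The function name and the example prices below are illustrative, not taken from the site.

```python
def blended_price(input_price_per_m: float, output_price_per_m: float) -> float:
    """Blended $/1M tokens assuming 3 output tokens for every input token.

    With a 3:1 output-to-input ratio, 1M blended tokens consist of
    250k input tokens and 750k output tokens, i.e. weights 1/4 and 3/4.
    (Weights are derived from the ratio stated above; prices are examples.)
    """
    return (1 * input_price_per_m + 3 * output_price_per_m) / 4


# Hypothetical model priced at $3/1M input and $15/1M output:
print(blended_price(3.0, 15.0))  # -> 12.0
```

Under this reading, a model's blended price leans heavily toward its output price, which is why output-heavy pricing changes move a model noticeably in cost comparisons.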
Consider your primary use case (coding, math, general knowledge), budget, and latency requirements. Use the Explore page to compare models across dimensions, or try the LLM Selector tool for personalized recommendations.
Yes, FindLLM is completely free to use. All comparison tools, leaderboards, and analysis features are available without registration or payment.
HuggingFace data syncs hourly, OpenRouter pricing every 4 hours, and Artificial Analysis benchmarks daily. This ensures comparisons reflect the latest changes across all sources.
FindLLM tracks MMLU-Pro, GPQA Diamond, LiveCodeBench, AIME 2025, MATH-500, SciCode, IFBench, Long Context Recall, TerminalBench Hard, and HLE (Humanity's Last Exam).
Check out the Methodology page for a deep dive into how benchmarks and metrics are calculated, or visit the About page to learn more about FindLLM.