# Large Language Models
docAnalyzer.ai keeps pace with releases from Anthropic, DeepSeek, Google, Meta, OpenAI, and xAI, typically adding new models within days of release, so you can pick the right fit for every workflow instead of being locked into a single provider. We currently surface 19 models from 6 creators, each benchmarked for quality, speed, and latency so you can choose confidently. Explore the latest line-up below.
| Model | Creator | Quality | Speed | Latency* |
|---|---|---|---|---|
| GPT-5.2 | OpenAI | 51 | 109 tokens/s | 37.97 s |
| Claude Opus 4.5 | Anthropic | 49 | 65 tokens/s | 1.84 s |
| Gemini 3 Pro Preview | Google | 48 | 122 tokens/s | 32.61 s |
| GPT-5.1 | OpenAI | 47 | 107 tokens/s | 35.13 s |
| Claude Sonnet 4.5 | Anthropic | 42 | 70 tokens/s | 2.03 s |
| GPT-5 mini | OpenAI | 41 | 72 tokens/s | 117.56 s |
| Grok 4 | xAI | 41 | 41 tokens/s | 6.74 s |
| DeepSeek V3.2 (Thinking) | DeepSeek | 41 | 30 tokens/s | 1.38 s |
| Claude Haiku 4.5 | Anthropic | 37 | 84 tokens/s | 0.52 s |
| Gemini 2.5 Pro | Google | 34 | 153 tokens/s | 35.89 s |
| GPT OSS 120b | OpenAI | 33 | 341 tokens/s | 0.43 s |
| DeepSeek V3.2 (Non-thinking) | DeepSeek | 32 | 28 tokens/s | 1.30 s |
| GPT-5 nano | OpenAI | 27 | 121 tokens/s | 115.38 s |
| GPT OSS 20b | OpenAI | 25 | 309 tokens/s | 0.51 s |
| Grok 4.1 Fast | xAI | 23 | 147 tokens/s | 0.75 s |
| Grok 4 Fast | xAI | 23 | 125 tokens/s | 0.61 s |
| Gemini 2.5 Flash-Lite | Google | 22 | 577 tokens/s | 7.28 s |
| Llama 4 Maverick | Meta | 19 | 132 tokens/s | 0.40 s |
| Llama 4 Scout | Meta | 14 | 118 tokens/s | 0.44 s |
* Latency progress bars use a logarithmic scale; lower latency values indicate better performance.
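As a minimal sketch of what a logarithmic latency scale means here (the function name and the min/max bounds are illustrative assumptions, not docAnalyzer.ai's actual implementation), a latency can be mapped to a 0–1 bar fraction by normalizing its logarithm between the fastest and slowest models in the table:

```python
import math

def latency_bar_fraction(latency_s, min_s=0.40, max_s=117.56):
    """Hypothetical mapping of a latency (seconds) to a 0..1 bar fraction
    on a log scale; lower latency yields a shorter (better) bar.
    min_s/max_s default to the fastest and slowest latencies in the table."""
    lo, hi = math.log(min_s), math.log(max_s)
    return (math.log(latency_s) - lo) / (hi - lo)

# A few models from the table above:
for name, latency in [("Llama 4 Maverick", 0.40),
                      ("Claude Opus 4.5", 1.84),
                      ("GPT-5 mini", 117.56)]:
    print(f"{name}: {latency_bar_fraction(latency):.2f}")
```

On a log scale the sub-second models stay visually distinguishable even though the slowest model is nearly 300× slower, which a linear scale would flatten out.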