A critical pre-authentication SQL injection vulnerability, CVE-2026-42208 (tracked as GHSA-r75f-5x8p-qvmc), has been actively exploited in the wild. It targets LiteLLM, the open-source LLM gateway with over 22,000 GitHub stars that organizations use as a front end for OpenAI, Anthropic, AWS Bedrock, and dozens of other AI model providers.
The Sysdig Threat Research Team (TRT) detected the first exploitation attempt just 36 hours and seven minutes after the advisory was indexed in the global GitHub Advisory Database, with attackers demonstrating surgical schema knowledge rather than generic scanning behavior.
CVE-2026-42208 is a pre-authentication SQL injection flaw in LiteLLM versions >= 1.81.16 and < 1.83.7, carrying a CVSS Base Score of 9.9 (Critical) according to CERT-Bund advisory WID-SEC-2026-1288.
CVE-2026-42208: LiteLLM SQL Injection Exploited
The vulnerability lives inside the proxy’s API key verification step, where the Authorization: Bearer header value is concatenated directly into a SELECT query against the LiteLLM_VerificationToken table without parameter binding.
Because the injection occurs before the authentication decision is made, any unauthenticated HTTP client that can reach the proxy on port 4000 with no credentials can execute arbitrary SQL against the PostgreSQL backend.
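The flaw class is easiest to see side by side. The sketch below is illustrative only, not LiteLLM's actual code: SQLite stands in for PostgreSQL, and the single-column table layout is an assumption. The same bearer value is harmless under parameter binding and catastrophic under string interpolation.

```python
import sqlite3

# Minimal sketch of the flaw class, NOT LiteLLM's real verification code.
# SQLite stands in for PostgreSQL; the table layout is assumed.
conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE "LiteLLM_VerificationToken" (token TEXT)')
conn.execute('INSERT INTO "LiteLLM_VerificationToken" VALUES (?)', ("sk-litellm-abc123",))

def verify_vulnerable(bearer: str):
    # Attacker-controlled header value lands inside the SQL text itself.
    query = f"SELECT token FROM \"LiteLLM_VerificationToken\" WHERE token = '{bearer}'"
    return conn.execute(query).fetchall()

def verify_fixed(bearer: str):
    # Parameter binding: the driver treats the value as data, never as SQL.
    return conn.execute(
        'SELECT token FROM "LiteLLM_VerificationToken" WHERE token = ?', (bearer,)
    ).fetchall()

payload = 'sk-litellm\' UNION SELECT token FROM "LiteLLM_VerificationToken" --'
print(verify_vulnerable(payload))  # leaks every stored token despite a bogus key
print(verify_fixed(payload))       # []  (no row matches the literal string)
```

The UNION-based payload mirrors the shape reported in this campaign: the quote terminates the intended string literal, the UNION appends attacker-chosen rows, and the trailing comment discards the rest of the original query.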
The maintainer’s security advisory was first published to the LiteLLM repository on April 20, 2026, at 21:14 UTC, then indexed into the global GitHub Advisory Database on April 24, 2026, at 16:17 UTC, where it became visible to Dependabot, OSV, and GHSA mirror feeds.
As of writing, the vulnerability has not been added to CISA's Known Exploited Vulnerabilities (KEV) catalog, a critical gap that left defenders who rely on CVE-keyed or CISA KEV alerting without timely warning.
LiteLLM’s core architectural purpose, centralizing paid AI model-provider credentials, makes this flaw far more damaging than a typical web application SQL injection. Three specific PostgreSQL tables are directly exposed through a single UNION query:
- `LiteLLM_VerificationToken` – stores virtual API keys, including the master key
- `litellm_credentials` – holds actual upstream provider credentials (OpenAI, Anthropic, AWS Bedrock)
- `litellm_config` – stores proxy environment variables, including the PostgreSQL DSN, master key, webhook URLs, and cache backends
Stolen LLM API keys translate directly into free, high-value compute for attackers, access to conversation histories, the ability to inject responses into customer-facing AI products, and a significant financial drain on victim organizations, often representing thousands of dollars per month in stolen compute costs.
Additionally, once a virtual key or master key is exfiltrated, it can be replayed against /chat/completions from any IP address, as LiteLLM does not bind keys to a source by default.
| Time (UTC) | Event |
|---|---|
| Apr 20, 21:14 | Advisory published to the LiteLLM repository |
| Apr 24, 16:17 | Advisory indexed in the global GitHub Advisory Database |
| Apr 26, 04:24 | First SQL injection attempt from 65.111.27.132 |
| Apr 26, 04:24–04:45 | Schema enumeration: 17 UNION payloads across three target tables |
| Apr 26, 05:06 | Second IP 65.111.25.67 replays refined payload set; probes /key/generate and /key/info unauthenticated |
Phase 1 (Schema Enumeration): The attacker opened with `POST /chat/completions` requests carrying `Authorization: Bearer sk-litellm'<UNION SELECT ...>--`, using `Python/3.12 aiohttp/3.9.1` as the user-agent across every request.
Two operator-level details immediately distinguished this from generic scanner activity. First, when the lowercase PostgreSQL table name returned no rows, the operator retried with Prisma ORM's PascalCase `"LiteLLM_VerificationToken"`, indicating prior study of the LiteLLM Prisma schema or LLM-assisted reconnaissance.
Second, the attacker targeted only the three highest-value tables on the very first probe, skipping benign tables like litellm_users or litellm_team entirely.
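Based on the payload shape described above, a detection sketch can flag header values that probe the targeted tables. The regex and helper below are our own illustration, not a vendor signature; the case-insensitive flag covers both the lowercase and Prisma PascalCase spellings the operator tried.

```python
import re

# Hedged detection sketch (illustrative, not a vendor signature): flag
# Authorization values combining the sk-litellm' injection prefix, a
# UNION SELECT, and any of the three tables targeted in this campaign.
# IGNORECASE covers both lowercase and Prisma PascalCase table names.
PROBE = re.compile(
    r"sk-litellm'.*UNION\s+SELECT.*"
    r"(litellm_verificationtoken|litellm_credentials|litellm_config)",
    re.IGNORECASE | re.DOTALL,
)

def is_enumeration_probe(auth_value: str) -> bool:
    """Return True when a header value matches the enumeration pattern."""
    return bool(PROBE.search(auth_value))

print(is_enumeration_probe(
    'Bearer sk-litellm\' UNION SELECT token FROM "LiteLLM_VerificationToken" --'
))  # True
print(is_enumeration_probe("Bearer sk-litellm-valid-key"))  # False
```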
Phase 2 (Egress Rotation): After a 21-minute pause, a second source IP from the same autonomous system replayed a refined payload set, concluding with a terminal `OR 1=1--` tautology, the signature of an automated harness exhausting its payload list. The IP rotation pattern is consistent with a tool designed to evade per-IP rate limiting between targets, not with separate actors.
Critically, no confirmed follow-through was observed. The Sysdig TRT detected no authenticated calls using exfiltrated keys, no virtual-key minting via `/key/generate`, and no chained reuse of provider credentials. But the precision of the targeting signals that AI gateway databases are now an explicit, high-priority extraction objective.
Patch and Remediation
LiteLLM v1.83.7 resolves the vulnerability by replacing string interpolation with parameterized queries. Defenders should take these steps immediately:
- Update to v1.83.7 or later from the stable release channel
- Rotate all virtual API keys, master keys, and provider credentials stored in any internet-reachable LiteLLM instance on a vulnerable version, and treat the database as compromised regardless of confirmed extraction
- Audit upstream provider billing for unexpected `/chat/completions` traffic from unfamiliar IPs in the exploitation window
- Restrict LiteLLM proxies to internal networks or mutually authenticated reverse proxies; as an interim WAF control, block any `Authorization` header containing single quotes, SQL keywords (`UNION`, `SELECT`, `FROM`), or `--` sequences
- Monitor web server logs for `Bearer sk-litellm'` prefix requests; even a single occurrence before patching is a high-confidence exploitation indicator
- Inventory all AI proxy deployments enterprise-wide; application teams frequently deploy LiteLLM outside standard security review, and such instances may hold production-tier provider keys without monitoring
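The interim WAF control described above can be sketched as a simple header filter. The pattern below is an assumption to tune per environment, but the underlying observation holds: a legitimate LiteLLM virtual key (`sk-...`) contains no quotes, SQL keywords, or comment terminators.

```python
import re

# Interim WAF sketch for the control described above; tune per environment.
# A legitimate LiteLLM virtual key (sk-...) contains none of these tokens.
BLOCK = re.compile(r"'|--|\b(UNION|SELECT|FROM)\b", re.IGNORECASE)

def should_block(authorization: str) -> bool:
    """Reject Authorization header values carrying SQL metacharacters."""
    return bool(BLOCK.search(authorization))

print(should_block("Bearer sk-litellm' UNION SELECT * FROM x--"))  # True
print(should_block("Bearer sk-litellm-9f2c1ab4"))                  # False
```

This is a stopgap, not a fix: it buys time for patching and key rotation but does not address the underlying lack of parameter binding.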
This incident follows a March 2026 LiteLLM supply chain compromise in which attackers published backdoored PyPI packages that harvested API keys, cloud credentials, SSH keys, and Kubernetes tokens from runtime environments.
The acceleration from advisory publication to active exploitation in just 36 hours aligns with the broader collapse of patch windows documented across the AI infrastructure category.
Unlike traditional web application SQL injections, a successful extraction from a LiteLLM database is functionally equivalent to a multi-cloud account compromise, given that a single litellm_credentials row can hold organization-level keys to OpenAI, Anthropic, and AWS Bedrock simultaneously.
The GHSA advisory surface, currently outside most KEV-centric monitoring pipelines, must now be treated with the same urgency as CISA KEV entries for any AI infrastructure component.
Indicators of Compromise (IOCs)
| Indicator | Type | Detail |
|---|---|---|
| `65.111.27.132` | Source IP | AS200373, 3xK Tech GmbH, DE; UNION enumeration phase |
| `65.111.25.67` | Source IP | AS200373, 3xK Tech GmbH, DE; refined payload + `/key/generate` probe |
| `Python/3.12 aiohttp/3.9.1` | User-Agent | Present on all SQL injection requests |
| `Bearer sk-litellm'` | Auth Header Pattern | Single-quote terminator indicating injection attempt |
| `UNION SELECT ... FROM "LiteLLM_VerificationToken"` | Payload Fragment | High-confidence exploitation indicator |
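For quick triage, the IOCs above can be swept across access logs with a short script. The sample line format is an assumption; adapt the matching to your server's actual log layout.

```python
# Hedged triage sketch: sweep access-log lines for the campaign IOCs
# listed in the table above. The sample log format is an assumption.
IOC_IPS = {"65.111.27.132", "65.111.25.67"}
IOC_UA = "Python/3.12 aiohttp/3.9.1"
IOC_AUTH_PREFIX = "Bearer sk-litellm'"

def match_iocs(line: str) -> list[str]:
    """Return the IOC categories a single log line matches."""
    hits = []
    if any(ip in line for ip in IOC_IPS):
        hits.append("source-ip")
    if IOC_UA in line:
        hits.append("user-agent")
    if IOC_AUTH_PREFIX in line:
        hits.append("auth-header")
    return hits

sample = '65.111.27.132 - - "POST /chat/completions" 200 "Python/3.12 aiohttp/3.9.1"'
print(match_iocs(sample))  # ['source-ip', 'user-agent']
```

Any hit on the auth-header pattern before patching should be treated as confirmed exploitation, per the remediation guidance above.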
FAQ
Q1: What versions of LiteLLM are affected by CVE-2026-42208?
All versions from 1.81.16 up to (but not including) 1.83.7 are vulnerable; update to v1.83.7 or later immediately.
Q2: Can CVE-2026-42208 be exploited without any credentials?
Yes, the injection occurs in the authentication check itself, making it fully pre-auth and exploitable by any HTTP client reaching port 4000.
Q3: What data can an attacker steal by exploiting this vulnerability?
Attackers can extract virtual API keys, upstream provider credentials (OpenAI, Anthropic, Bedrock), and the proxy’s environment variables, including the master key and database DSN.
Q4: Is CVE-2026-42208 listed in CISA’s Known Exploited Vulnerabilities catalog?
As of publication, CISA has not added this actively exploited vulnerability to the KEV catalog, underscoring the need for direct monitoring of the GHSA feed.
Site: https://thecybrdef.com