Kimi K2.5 API
$0.40/M input, $2.00/M output. OpenAI-compatible, no contracts, no minimums.
Point your OpenAI SDK at api.getlilac.com/v1 and request kimi-k2-5.
Kimi K2.5 pricing
Pay per token. No commitments.
Competitive speed, with an output-token price at the low end of the range among OpenRouter-listed providers for this model.
Input
$0.40
per million tokens
Output
$2.00
per million tokens
25% off all tokens above 1B/month for 3 months. That is $0.30/M input and $1.50/M output above the threshold.
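The tiered pricing above can be sketched as a small calculator. This is illustrative only: the rates come from this page, but the function name is made up, and it assumes the 1B/month threshold applies to input and output tokens separately, which the page does not specify.

```python
BASE_INPUT = 0.40   # $ per million input tokens
BASE_OUTPUT = 2.00  # $ per million output tokens
DISCOUNT = 0.25     # 25% off tokens above the threshold
THRESHOLD = 1_000_000_000  # 1B tokens per month

def monthly_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate monthly cost in dollars, applying the 25% discount
    to tokens above the threshold (assumed per token type)."""
    def tiered(tokens: int, rate: float) -> float:
        base = min(tokens, THRESHOLD)
        extra = max(tokens - THRESHOLD, 0)
        return (base * rate + extra * rate * (1 - DISCOUNT)) / 1_000_000
    return tiered(input_tokens, BASE_INPUT) + tiered(output_tokens, BASE_OUTPUT)

# 2B input + 0.5B output in a month:
# input  = 1B * $0.40/M + 1B * $0.30/M = $700
# output = 0.5B * $2.00/M             = $1000
# total  = $1700
```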
Integration
One base URL change.
Keep the OpenAI SDK and point it at Lilac. Your existing code just works.
from openai import OpenAI
client = OpenAI(
base_url="https://api.getlilac.com/v1",
api_key="sk_...",
)
response = client.chat.completions.create(
model="kimi-k2-5",
messages=[{"role": "user", "content": "Hello!"}],
)
# Same code. Same SDK. Fraction of the price.
Standard OpenAI client — just change the base URL.
Pricing visible up front. No aggregator markup.
More models being added over time.
Frequently asked questions
How do I call the API?
Set base_url to https://api.getlilac.com/v1 in the OpenAI SDK and pass model="kimi-k2-5".
How much does it cost?
$0.40/M input, $2.00/M output on the shared endpoint.
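At those rates, per-request cost is simple arithmetic. A minimal sketch (the helper below is illustrative, not part of any SDK):

```python
def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at $0.40/M input, $2.00/M output."""
    return input_tokens / 1e6 * 0.40 + output_tokens / 1e6 * 2.00

# A request with 10,000 input tokens and 2,000 output tokens:
# 10_000/1e6 * 0.40 + 2_000/1e6 * 2.00 = $0.008
```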
Is Lilac only for Kimi K2.5?
No. Kimi K2.5 is the first model. More are coming.
Start running inference in minutes.
No contracts, no commitments. Swap your base URL and pay less for the same output quality.