Direct endpoints

    Skip the aggregator markup.

    Aggregators like OpenRouter add ~5% on top of provider pricing. If you know which model you want, a direct endpoint is cheaper and simpler.

    Lilac models are also available on OpenRouter — but going direct saves you the markup.

    Kimi K2.5 pricing

    Pay per token. No commitments.

    Same models, same speed. Direct endpoints cut the aggregator fee.

    Input: $0.40 per million tokens

    Output: $2.00 per million tokens

    OpenAI-compatible · Shared warm endpoints · No contracts · No minimums

    25% off all tokens above 1B/month for the first 3 months. That works out to $0.30/M input and $1.50/M output above the threshold.
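    As a quick sketch of how that tiering adds up (the function below is illustrative, and assumes the 1B/month threshold applies to input and output tokens separately):

```python
def monthly_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate a month's bill under the volume discount.

    Base rates: $0.40/M input, $2.00/M output. Tokens above the
    1B/month threshold get 25% off: $0.30/M and $1.50/M.
    """
    THRESHOLD = 1_000_000_000

    def tiered(tokens: int, base: float, discounted: float) -> float:
        below = min(tokens, THRESHOLD)
        above = max(tokens - THRESHOLD, 0)
        return below / 1e6 * base + above / 1e6 * discounted

    return tiered(input_tokens, 0.40, 0.30) + tiered(output_tokens, 2.00, 1.50)

# Example: 3B input + 500M output tokens in a month
print(f"${monthly_cost(3_000_000_000, 500_000_000):,.2f}")  # → $2,000.00
```

    At 3B input tokens, the first 1B bills at $0.40/M ($400) and the remaining 2B at $0.30/M ($600); the 500M output tokens stay below the threshold at $2.00/M ($1,000).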

    Integration

    One base URL change.

    Keep the OpenAI SDK and point it at Lilac. Your existing code just works.

    inference.py

    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.getlilac.com/v1",  # your Lilac base URL, not api.openai.com
        api_key="sk_...",
    )

    response = client.chat.completions.create(
        model="kimi-k2-5",
        messages=[{"role": "user", "content": "Hello!"}],
    )

    # Same code. Same SDK. Fraction of the price.

    1. No aggregator markup — pay the base token price.

    2. OpenAI-compatible. Same SDK, one URL change.

    3. Also available on OpenRouter if you prefer it.

    Frequently asked questions

    Why go direct instead of using OpenRouter?

    OpenRouter adds ~5% to provider pricing. If you already know the model, going direct saves that fee.
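    To put that ~5% in dollar terms, here is a rough sketch (it assumes a flat 5% markup on Lilac's base rates; OpenRouter's exact fee structure may differ):

```python
# Rough savings estimate: a flat ~5% aggregator markup on top of
# base rates of $0.40/M input and $2.00/M output.
MARKUP = 0.05

def direct_savings(input_tokens: int, output_tokens: int) -> float:
    base = input_tokens / 1e6 * 0.40 + output_tokens / 1e6 * 2.00
    return base * MARKUP

# Example: 1B input + 200M output tokens per month
print(f"${direct_savings(1_000_000_000, 200_000_000):,.2f} saved")  # → $40.00 saved
```

    Small per-token, but it compounds: the fee scales linearly with usage, so heavy workloads see the largest absolute savings.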

    Is Lilac anti-OpenRouter?

    No. We serve on OpenRouter too. But if cost matters, direct endpoints are cheaper.

    How hard is it to switch?

    One base URL change in the OpenAI SDK.

    Start running inference in minutes.

    No contracts, no commitments. Swap your base URL and pay less for the same output quality.

    contact@getlilac.com

    No commitment required.