Narev provides the infrastructure, middleware, and control plane to accurately measure and monetize AI usage across your entire stack. Whether you are a developer calculating token costs on the fly, a DevOps engineer standardizing cloud infrastructure, or a founder syncing dynamic AI prices to your billing provider, you are in the right place.

Choose your path

Select the product tier you are integrating to get started.

Narev Cloud

The control plane. Sync AI model prices dynamically, benchmark costs, and integrate natively with Stripe, Lago, and Polar.sh.

Narev SDK

The app layer. Drop-in Vercel AI SDK middleware to intercept LLM calls and calculate precise token usage on the fly.

Narev Self-Hosted

The infrastructure layer. A Docker agent that standardizes your raw compute and GPU bills into the FOCUS format.

How the ecosystem fits together

You can use any Narev product independently, but they’re designed to work together as a seamless, end-to-end FinOps pipeline.
1. Standardize Infra Costs (Self-Hosted)

Deploy the Narev Self-Hosted Docker agent to standardize your underlying cloud costs into the FinOps Open Cost and Usage Specification (FOCUS). This gives you a clear baseline for your Cost of Goods Sold (COGS).
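To make the standardization step concrete, here is a minimal sketch of mapping a raw provider line item onto a few FOCUS 1.0 columns. The raw record shape and the `toFocus` helper are illustrative assumptions, not the agent's actual implementation; the column names (`BilledCost`, `ChargePeriodStart`, `ServiceName`, etc.) come from the FOCUS specification.

```typescript
// Hypothetical raw line item from a cloud or GPU provider's billing export.
type RawLineItem = {
  provider: string;
  service: string;
  startTime: string; // ISO 8601
  endTime: string;   // ISO 8601
  amountUsd: number;
};

// A subset of FOCUS 1.0 columns.
type FocusRecord = {
  ProviderName: string;
  ServiceName: string;
  ChargePeriodStart: string;
  ChargePeriodEnd: string;
  BilledCost: number;
  BillingCurrency: string;
};

// Normalize one provider line item into the FOCUS shape.
function toFocus(item: RawLineItem): FocusRecord {
  return {
    ProviderName: item.provider,
    ServiceName: item.service,
    ChargePeriodStart: item.startTime,
    ChargePeriodEnd: item.endTime,
    BilledCost: item.amountUsd,
    BillingCurrency: "USD",
  };
}
```

Once every provider's bills share this schema, costs from different clouds can be summed and compared directly, which is what makes the COGS baseline possible.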
2. Track App Usage (SDK)

Integrate the Narev SDK into your app. As your users generate AI completions, the middleware tracks the exact token counts and calculates costs instantly.
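The metering step can be sketched as a wrap-style middleware hook: run the underlying model call, read the token counts it reports, and price them. The model name, the per-token prices, and the `wrapGenerate` signature below are illustrative assumptions, not the Narev SDK's actual API.

```typescript
type Usage = { promptTokens: number; completionTokens: number };

// Per-million-token prices. Illustrative numbers, not real list prices.
const PRICES: Record<string, { input: number; output: number }> = {
  "example-model": { input: 3.0, output: 15.0 },
};

// Price a completion from its reported token usage.
function costOf(model: string, usage: Usage): number {
  const p = PRICES[model];
  return (
    (usage.promptTokens / 1_000_000) * p.input +
    (usage.completionTokens / 1_000_000) * p.output
  );
}

// A wrapGenerate-style hook: execute the inner LLM call,
// then record the computed cost before returning the result.
async function wrapGenerate<T extends { usage: Usage }>(
  model: string,
  doGenerate: () => Promise<T>,
  record: (cost: number) => void,
): Promise<T> {
  const result = await doGenerate();
  record(costOf(model, result.usage));
  return result;
}
```

Because the hook sees the real usage numbers returned by the provider, the cost is exact per call rather than estimated after the fact.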
3. Bill and Benchmark (Cloud)

Feed your standardized infra data and app usage into Narev Cloud. Use the platform to pull the latest LLM pricing via API, attribute costs to specific user tags, and automate itemized billing through your preferred payment gateway.
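The attribution step amounts to grouping metered usage events by tag and summing their costs, which then becomes the itemized line for each customer. The event shape and `attributeCosts` helper below are a minimal sketch under assumed names, not Narev Cloud's schema or API.

```typescript
// Hypothetical metered usage event, as produced by the SDK layer.
type UsageEvent = { userTag: string; cost: number };

// Sum costs per user tag, producing one billable total per tag.
function attributeCosts(events: UsageEvent[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const e of events) {
    totals.set(e.userTag, (totals.get(e.userTag) ?? 0) + e.cost);
  }
  return totals;
}
```

Each entry in the resulting map would then be forwarded to the configured payment gateway (Stripe, Lago, or Polar.sh) as an itemized charge.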