The @ai-billing/deepseek package
This package provides an AI SDK middleware that automatically intercepts your DeepSeek usage, calculates the cost using the correct pricing (including cache hits and reasoning tokens), and sends billing events to your preferred destination, such as Polar, Stripe, Lago, or your own custom backend.
Installation
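The exact install command depends on your package manager and stack; assuming npm and the AI SDK's DeepSeek provider, it might look like this (package names other than @ai-billing/deepseek are assumptions about your setup):

```shell
# Install the billing middleware alongside the AI SDK core and its DeepSeek provider.
npm install @ai-billing/deepseek ai @ai-sdk/deepseek
```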
Make sure you have the necessary packages installed before continuing.

Using generateText
Rather than one large example, let's break down how to set up the billing middleware and use it with the generateText function step by step.
1. The Setup & Configuration
Before making any AI calls, you need to set up the DeepSeek client and tell the billing system how much things cost.
- The Pricing Map: AI models charge differently for reading your prompt vs. generating the answer. This map tells the system exactly how much deepseek-v4-pro costs per token.
- The Middleware: This is the engine. It is configured to use your pricing map to calculate the cost, and right now it is set to simply log the final bill to your server console.
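As a sketch, the pricing map and the underlying cost math might look like the following. The per-token rates, field names, and the calculateCost helper here are illustrative assumptions, not the package's actual API or DeepSeek's real prices; check the provider's pricing page for current rates.

```typescript
// Illustrative pricing map: cost per token, split by how the token is billed.
type ModelPricing = {
  inputPerToken: number;        // prompt tokens on a cache miss
  cachedInputPerToken: number;  // prompt tokens served from the prompt cache
  outputPerToken: number;       // completion tokens (including reasoning tokens)
};

const pricing: Record<string, ModelPricing> = {
  "deepseek-v4-pro": {
    inputPerToken: 0.28 / 1_000_000,       // assumed rate, per 1M tokens
    cachedInputPerToken: 0.028 / 1_000_000, // assumed cache-hit discount
    outputPerToken: 0.42 / 1_000_000,       // assumed rate, per 1M tokens
  },
};

type Usage = {
  inputTokens: number;       // total prompt tokens
  cachedInputTokens: number; // subset of prompt tokens that hit the cache
  outputTokens: number;      // completion + reasoning tokens
};

// Hypothetical helper showing the math the middleware performs internally.
function calculateCost(model: string, usage: Usage): number {
  const p = pricing[model];
  if (!p) throw new Error(`No pricing configured for model: ${model}`);
  return (
    (usage.inputTokens - usage.cachedInputTokens) * p.inputPerToken +
    usage.cachedInputTokens * p.cachedInputPerToken +
    usage.outputTokens * p.outputPerToken
  );
}
```

In the middleware itself you would pass a map like this into its configuration; treat the exact shape above as an assumption about that API.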
2. Preparing the Request
Inside your route handler (e.g., a POST function), prepare the actual question you want to ask the AI.
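A sketch of this step in a Next.js-style route handler; the question field and the buildMessages helper are hypothetical names used for illustration:

```typescript
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Hypothetical helper: turn the incoming question into a messages array.
function buildMessages(question: string): ChatMessage[] {
  return [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: question },
  ];
}

// Inside the POST handler, parse the body and prepare the request.
async function POST(req: Request): Promise<Response> {
  const { question } = (await req.json()) as { question: string };
  const messages = buildMessages(question);
  // ...wrap the model and call the AI with `messages` (steps 3 and 4)...
  return Response.json({ messages });
}
```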
3. Wrapping the Model
This is the most important part. Instead of calling DeepSeek directly, wrap the standard DeepSeek model inside the billingMiddleware.
Whenever the wrappedModel is used, the middleware quietly watches the request, counts the tokens that come back, does the math based on your pricing map, and logs the cost.
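Assuming the package exports a billingMiddleware factory (an assumption about its API) and using the AI SDK's wrapLanguageModel, the wrapping might look like this; the pricing and onBill options are illustrative guesses at the configuration shape:

```typescript
import { wrapLanguageModel } from "ai";
import { createDeepSeek } from "@ai-sdk/deepseek";
// Hypothetical import: the exact export name from @ai-billing/deepseek may differ.
import { billingMiddleware } from "@ai-billing/deepseek";

const deepseek = createDeepSeek({ apiKey: process.env.DEEPSEEK_API_KEY });

// Wrap the raw model so every call through it is intercepted for billing.
export const wrappedModel = wrapLanguageModel({
  model: deepseek("deepseek-v4-pro"),
  middleware: billingMiddleware({
    pricing: {
      /* your pricing map from step 1 */
    },
    // Assumed callback shape: log each billing event to the server console.
    onBill: (event: unknown) => console.log("billing event", event),
  }),
});
```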
4. Executing the Request
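Executing the request is then a standard generateText call against the wrapped model; wrappedModel here is the model produced in step 3, and the import path is an assumption about where you keep it:

```typescript
import { generateText } from "ai";
import { wrappedModel } from "./model"; // assumed location of step 3's wrapped model

const { text, usage } = await generateText({
  model: wrappedModel,
  messages: [{ role: "user", content: "Explain cache hits in one sentence." }],
});

// By this point the middleware has already computed and logged the cost.
console.log(text, usage);
```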
Finally, ask the AI the question using the wrapped model; once the response returns, the middleware computes and logs the cost.

Using streamText
The middleware works seamlessly with streaming responses as well. The setup (steps 1 through 3) is exactly the same as above.
The only difference is in Step 4: Executing the Request. Instead of generateText, you use streamText. The usage and cost will be calculated and dispatched automatically once the stream completes.
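A streaming sketch under the same assumptions (wrappedModel from step 3, assumed import path):

```typescript
import { streamText } from "ai";
import { wrappedModel } from "./model"; // assumed location of step 3's wrapped model

const result = streamText({
  model: wrappedModel,
  messages: [{ role: "user", content: "Write a haiku about tokens." }],
});

// Forward the chunks to the client as they arrive.
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
// Once the stream completes, the middleware computes the cost and dispatches the billing event.
```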