Baseten
Baseten is a provider of all the infrastructure you need to deploy and serve ML models performantly, scalably, and cost-efficiently.
As a model inference platform, Baseten is a Provider in the LangChain ecosystem. The Baseten integration currently implements a single Component, LLMs, but more are planned!
Baseten lets you run both open-source models like Llama 2 or Mistral and proprietary or fine-tuned models on dedicated GPUs. If you're used to a provider like OpenAI, using Baseten has a few differences:
- Rather than paying per token, you pay per minute of GPU used.
- Every model on Baseten uses Truss, our open-source model packaging framework, for maximum customizability.
- While we have some OpenAI ChatCompletions-compatible models, you can define your own I/O spec with Truss.
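To make the custom I/O point concrete, here is a minimal sketch of calling a deployed Baseten model's REST endpoint directly with the standard library. The URL shape, the `{"prompt": ...}` payload, and the `Api-Key` header scheme are assumptions for illustration; check your model's dashboard and its Truss I/O spec for the exact endpoint and body.

```python
import json
import urllib.request

def predict_url(model_id: str, deployment: str = "production") -> str:
    """Build an inference URL for a Baseten model deployment (assumed format)."""
    return f"https://model-{model_id}.api.baseten.co/{deployment}/predict"

def build_request(model_id: str, payload: dict, api_key: str) -> urllib.request.Request:
    """Construct an authenticated POST request with a JSON body."""
    return urllib.request.Request(
        predict_url(model_id),
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Api-Key {api_key}"},
        method="POST",
    )

# Hypothetical model ID and I/O spec: a Truss-packaged LLM that accepts a prompt string.
req = build_request("abc12345", {"prompt": "What is Mistral?"}, "YOUR_API_KEY")
# urllib.request.urlopen(req)  # uncomment to actually send the request
```

Because the I/O spec is defined by your Truss, the payload can be anything your model's `predict` function accepts, not just a chat-completions body.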
Learn more about model IDs and deployments.
Learn more about Baseten in the Baseten docs.