Fine-tune, compare, and serve open-source, proprietary, and custom models.
We offer a variety of open-source models for fine-tuning and serverless inference. Additionally, you can use your own OpenAI API key to fine-tune and serve OpenAI models directly through our platform. For custom fine-tuned models, FinetuneDB handles deployment and makes them accessible via the inference API. Below is a preview of the models we offer.
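As a sketch of how a custom fine-tuned model might be called through an OpenAI-style inference API: the base URL, API key placeholder, and model ID below are illustrative assumptions, not FinetuneDB's documented endpoints — consult the platform's API reference for the actual values.

```python
import json

# Assumed values for illustration only; replace with your real
# endpoint, key, and fine-tuned model ID.
BASE_URL = "https://api.example.com/v1"  # hypothetical OpenAI-compatible base URL
API_KEY = "YOUR_API_KEY"

def build_chat_request(model_id, user_message):
    """Compose the URL, headers, and JSON body for an OpenAI-style
    chat-completions call to a fine-tuned model."""
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model_id,
        "messages": [{"role": "user", "content": user_message}],
    })
    return url, headers, body

url, headers, body = build_chat_request("my-custom-finetune", "Hello!")
# To actually send the request:
#   import urllib.request
#   urllib.request.urlopen(urllib.request.Request(url, body.encode(), headers))
```

Because the request shape follows the OpenAI chat-completions convention, existing OpenAI client code can typically be pointed at such an endpoint by swapping the base URL and key.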
For teams managing their own infrastructure, you can integrate self-hosted models with FinetuneDB using vLLM. This setup lets you leverage our platform's capabilities while maintaining full control over your hosting environment. Please contact us for more information regarding custom model deployment and integration.

Note: Open-source inference API and OpenAI pricing is subject to change and may not always be up to date.
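For the self-hosted case, vLLM ships an OpenAI-compatible HTTP server, so the same chat-completions request shape applies with a local base URL. The port and model name below are illustrative; how the server is registered with FinetuneDB is part of the custom integration discussed above.

```python
import json
import urllib.request

# Assumption: a vLLM OpenAI-compatible server is running locally, started
# with something like `vllm serve <model-id> --port 8000`, exposing
# /v1/chat/completions. Port and model name here are examples.
VLLM_BASE_URL = "http://localhost:8000/v1"

def chat_request_for(base_url, model, prompt):
    """Compose an OpenAI-style chat request object; the same shape works
    for a self-hosted vLLM endpoint or a hosted inference API."""
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }).encode(),
        headers={"Content-Type": "application/json"},
    )

req = chat_request_for(VLLM_BASE_URL, "meta-llama/Llama-3.1-8B-Instruct", "Hi")
# To call the running server: urllib.request.urlopen(req)
```

Keeping the request-building logic independent of the base URL makes it easy to switch between a self-hosted vLLM server and a hosted endpoint without changing client code.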