Fine-tuning

Fine-tuning is the process of adapting a pre-trained AI model so that it better understands and responds to user queries in your domain. ToothFairyAI allows organisations to build, own and host their own AI models. This document provides an overview of the fine-tuning process.

Fine-tuning can be accessed from the following menu:

Fine-tuning > Start tuning

The fine-tuning section consists of the following tabs:

  • Conversational AI: This tab allows the user to fine-tune AI models specifically for the agents

Fine-tuning process

The fine-tuning process consists of the following steps:

  • Give a name to the fine-tuning process: The user can give a name to the fine-tuning process. The name of the fine-tuning job will later be used to identify the fine-tuned model in the Hosting & Models section of the agents configuration.
  • Provide a description: The user can provide a description for the fine-tuning process (optional)
  • Filter topics: Similarly to the Knowledge Hub and Agents configuration, ToothFairyAI allows the user to filter the training data based on the topics selected. By default, all topics are selected which means all training data available in the workspace will be used for the fine-tuning process.
  • Select the model: Selection of the model to be fine-tuned. The selection influences both the dataset and the model output. The user can select from the following model families: Llama 3.3 70B, Llama 3.2 1B, Llama 3.2 3B, Llama 3.1 70B, Llama 3.1 8B, Qwen 2.5 14B, and Qwen 2.5 72B.
  • Dataset only: Starter and Pro workspaces can only generate downloadable datasets for both training and evaluation (see the sketch after this list). Enterprise subscriptions can fine-tune the model and associate the fine-tuned model with the agents of their choice.
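
For illustration, the snippet below sketches what a single record in a downloadable conversational training dataset might look like. The field names ("messages", "role", "content") and the JSON Lines layout are assumptions made for this example; check the files downloaded from your workspace for the actual schema.

```python
import json

# Hypothetical example of one training record in a chat-style dataset.
# Field names and layout are illustrative, not the guaranteed ToothFairyAI schema.
example = {
    "messages": [
        {"role": "system", "content": "You are a support agent for Acme Dental."},
        {"role": "user", "content": "How do I reschedule my appointment?"},
        {"role": "assistant", "content": "You can reschedule from the Appointments page or by replying to your confirmation email."},
    ]
}

# Training and evaluation sets are commonly stored as JSON Lines: one record per line.
with open("train.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(example) + "\n")
```
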
Function calling

Both the Llama 3.3 and Llama 3.1 models support function calling and planning capabilities. Therefore, you can fine-tune not only text generation but also tooling such as API calls, DB queries and planning tasks!

In other words, you can deeply customise even Planner agents to suit your use cases!
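
As a rough illustration, a function-calling training example pairs a user request with the structured tool call the agent should emit instead of plain text. The tool name, arguments and record layout below are hypothetical and only meant to show the idea.

```python
import json

# Hypothetical function-calling training record: the assistant's turn is a tool call.
# "book_appointment" and its arguments are invented for illustration only.
tool_call_example = {
    "messages": [
        {"role": "user", "content": "Book a cleaning for next Tuesday at 10am."},
        {
            "role": "assistant",
            "tool_calls": [
                {
                    "name": "book_appointment",  # hypothetical tool exposed to the agent
                    "arguments": {"service": "cleaning", "date": "next Tuesday", "time": "10:00"},
                }
            ],
        },
    ]
}

print(json.dumps(tool_call_example, indent=2))
```
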

Fine-tuning completion

Once the fine-tuning process is completed, the user can download the generated dataset and the fine-tuned model weights, depending on the subscription plan. Regardless of the subscription type, the resulting datasets are formatted in a universal format that can be used for fine-tuning any model later on.
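
As a minimal sketch of that portability, assuming the dataset is delivered as chat-style JSON Lines, it can be loaded and rendered into prompts for any open model with standard open-source tooling. The file name, base model and libraries below are assumptions for the example, not part of ToothFairyAI itself.

```python
# Minimal sketch: reuse a downloaded dataset with open-source tooling.
# Assumes a chat-style "messages" column in train.jsonl; adjust to the real schema.
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("json", data_files={"train": "train.jsonl"})["train"]

# Any chat model with a chat template could be substituted here.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

def to_prompt(record):
    # Render the conversation into the model's own prompt format.
    return {"text": tokenizer.apply_chat_template(record["messages"], tokenize=False)}

dataset = dataset.map(to_prompt)
print(dataset[0]["text"][:200])
```
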

For Enterprise customers, ToothFairyAI allows the selection of even more base models (e.g. Mistral/Mixtral/DeepSeek) upon request. For Enterprise workspaces we can also enable, upon request, fine-tuning of the models available to the agents in the platform - see here

Fine-tuning limitations

Based on our experience, fine-tuning a model with a dataset of fewer than 100 well-curated examples will not yield the desired results. We recommend using a dataset of at least 1000 examples. The process can take quite some time, depending on the size of the dataset, the number of fine-tuning epochs and the size of the model. The user will be notified via email when a fine-tuning job is completed.
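
Before submitting a job, it can help to confirm the dataset clears those thresholds. A simple count like the sketch below, assuming one example per line in a JSON Lines file, is usually enough.

```python
# Quick sanity check on dataset size before starting a fine-tuning job.
# Assumes one training example per non-empty line in a JSON Lines file.
def count_examples(path: str) -> int:
    with open(path, encoding="utf-8") as f:
        return sum(1 for line in f if line.strip())

n = count_examples("train.jsonl")
if n < 100:
    print(f"Only {n} examples: unlikely to yield the desired results.")
elif n < 1000:
    print(f"{n} examples: usable, but at least 1000 is recommended.")
else:
    print(f"{n} examples: meets the recommended minimum.")
```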