Fine-tuning
Fine-tuning is the process of adapting a pre-trained AI model to your own data so that it understands and responds to user queries in your domain. ToothFairyAI allows organisations to build, own and host their own AI models. This document provides an overview of the fine-tuning process.
Menu location
Fine-tuning can be accessed from the following menu:
Fine-tuning > Start tuning
The fine-tuning section consists of the following tabs:
- Conversational AI: This tab allows the user to fine-tune conversational AI models for use by the agents
Fine-tuning process
The fine-tuning process consists of the following steps:
- Give a name to the fine-tuning process: The user can give the fine-tuning process a name so it can be identified later
- Provide a description: The user can provide an optional description for the fine-tuning process
- Filter topics: Similarly to the Knowledge Hub and Agents configuration, ToothFairyAI allows the user to filter the training data based on the selected topics. By default, all topics are selected
- Select the model: Selection of the model to be fine-tuned. The selection influences both the dataset and the model output (an illustrative dataset record follows this list). The user can select from the following model families:
  - Conversational AI: Mistral/Mixtral, Llama 3, Qwen 2, Llama 2
- Dataset only: Starter and Pro workspaces can only generate a downloadable dataset. Enterprise subscriptions can fine-tune the model and associate the fine-tuned model with the agents of their choice.
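As a hypothetical illustration of what a single conversational training record could look like in a JSONL export, the sketch below builds a chat-style example and appends it to a file. The file name, field names, and contents are assumptions for illustration, not the documented ToothFairyAI export schema, which varies with the model family selected.

```python
import json

# Hypothetical chat-style record with system/user/assistant turns.
# The real export schema depends on the model family selected above.
record = {
    "messages": [
        {"role": "system", "content": "You are a support agent for Acme Corp."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Open Settings > Security and choose Reset password."},
    ]
}

# JSONL: one JSON object per line.
with open("toothfairy_dataset.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```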
Fine-tuning completion
Once the fine-tuning process is completed, the user can download the generated dataset and the fine-tuned model, depending on the subscription plan.
The resulting dataset is formatted according to the selected model and enables the user to train their own AI model on their own infrastructure.
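As a minimal sketch of training on your own infrastructure, assuming a Hugging Face workflow and the hypothetical chat-style JSONL export shown earlier (the file name, record schema, and base model below are assumptions, not ToothFairyAI specifics):

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Hypothetical export file name; use the file you downloaded.
dataset = load_dataset("json", data_files="toothfairy_dataset.jsonl", split="train")

# Any causal LM from the family chosen during fine-tuning setup.
base_model = "meta-llama/Meta-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

def tokenize(example):
    # Flatten chat turns into one training string. Production code would
    # use the model-specific chat template (tokenizer.apply_chat_template).
    text = "\n".join(f"{m['role']}: {m['content']}" for m in example["messages"])
    return tokenizer(text, truncation=True, max_length=2048)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-model",
        num_train_epochs=3,
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
    ),
    train_dataset=tokenized,
    # mlm=False selects the standard next-token (causal LM) objective.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("finetuned-model")
```

For larger base models, parameter-efficient approaches such as LoRA (via the peft library) are a common way to keep hardware requirements manageable.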
If the user has an Enterprise subscription, the fine-tuned model can be associated with the agents of their choice.
For enterprises, ToothFairyAI allows selection of the actual base model to use, beyond the model family (e.g. Llama 3 8B and Llama 3 70B).
We support the fine-tuning of all models available to the agents in the platform - see here.
Fine-tuning limitations
Based on our experience, fine-tuning a model on fewer than 100 well-curated examples will not yield the desired results. We recommend using a dataset of at least 1000 examples for fine-tuning the model.
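A quick sanity check against these thresholds, assuming the hypothetical JSONL export used in the earlier sketches:

```python
MIN_VIABLE = 100    # below this, results are unlikely to be useful
RECOMMENDED = 1000  # recommended minimum from the guidance above

def count_examples(path: str) -> int:
    """Count non-empty lines (one JSON record per line) in a JSONL file."""
    with open(path, encoding="utf-8") as f:
        return sum(1 for line in f if line.strip())

n = count_examples("toothfairy_dataset.jsonl")  # hypothetical file name
if n < MIN_VIABLE:
    print(f"{n} examples: too few for fine-tuning to yield desired results.")
elif n < RECOMMENDED:
    print(f"{n} examples: consider curating more ({RECOMMENDED}+ recommended).")
else:
    print(f"{n} examples: meets the recommended minimum.")
```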