
What's New in Version 0.676.0

Released on April 11, 2026

This release brings TF Code to every subscription tier and introduces the Individual plan, Agentic Tooling 2.0 (Beta), three new fine-tuning methods, and a refreshed flat-design interface! 🚀

👩‍💻 TF Code — Now Available to All Subscriptions

Your AI Engineer, Unlocked for Everyone

TF Code — ToothFairyAI's personal AI engineer — is now available on every subscription tier, including Individual. Previously reserved for Business and Enterprise, TF Code lets you build, debug, and deploy using natural language directly from your terminal. Write Python scripts, API integrations, full-stack applications, and database queries through simple conversational instructions — natively integrated with TF MCP and SDK.

What You Can Do:

  • Build Anything - From quick scripts to full-stack apps, describe what you need and TF Code writes it
  • MCP Native - Direct access to ToothFairyAI MCP tools for agents, documents, and workspace operations
  • SDK Powered - Seamless Python SDK integration for programmatic workflows
  • Terminal First - Works in any terminal — no IDE plugins required

Install TF Code globally: npm install -g @toothfairyai/tfcode

📖 Documentation: TF Code

🎯 Agentic Tooling 2.0 (Public Beta)

An Adaptive Digital Worker for Complex, Long-Running Tasks

We're introducing Agentic Tooling 2.0 — a next-generation execution mode where your agent autonomously decides which tools to use, when to use them, and how to combine them, adapting its strategy as it goes.

Three Agent Modalities: When configuring an Operator agent, you can now choose from three modalities:

  • Operator (default) - Standard agent behaviour; responds directly without autonomous tool use. Best for simple Q&A and quick lookups; ideal for widgets.
  • Agentic Tooling (Legacy) - Fixed pipeline: plan, execute, verify. Best for predictable multi-step tasks and backwards compatibility.
  • Agentic Tooling 2.0 (Beta) - Adaptive digital worker with autonomous tool selection, adaptive planning, and parallel execution. Best for complex, long-running tasks that need maximum flexibility.

Key Capabilities:

  • Adaptive Planning - The agent updates its plan on the fly as new information comes in, instead of following a rigid upfront plan
  • Parallel Execution - Independent tasks run simultaneously for faster results
  • Agent Delegation - Hand off tasks to built-in specialists (Researcher, Analyst, Verifier) or your own workspace agents
  • Skills - Built-in skills like Verify Output, Step Back and Rethink, and Structure Response are automatically activated when needed
  • Memory - Save and recall important findings mid-task, not just at the end
  • Execution Presets - Control how long the agent works: Short (30 min), Medium (1 hour), Long (2 hours), Marathon (4 hours)
  • Image, Video & 3D Generation - Operator agents in AT 2.0 can now generate images, videos, and 3D models — previously reserved for Assistant agents only

Safety Mechanisms:

  • Time limits, tool call caps, stuck detection, failure circuit breakers, and duplicate detection keep things running smoothly

Reasoning Models Required

Agentic Tooling 2.0 works exclusively with reasoning models — advanced AI models that can think through problems step-by-step before acting.

📖 Documentation: Agentic Tooling 2.0

💡 New Individual Plan

Starter and Pro Merged Into Individual

We've simplified our subscription lineup by merging the Starter and Pro plans into a single Individual plan — a flat-rate subscription for one user that unlocks every feature at $14.99/month.

What's Changed:

  • Flat-Rate Pricing - One user, one price — no more per-seat calculations for individual users
  • All Features Unlocked - TF Code, MCP, API & Python Tools, Code Hooks, Agent Skills, Job Scheduling, Image/Video/Audio Generation, Private Benchmarks, and Multilingual Support
  • 1,500 UoI Per Month - Same UoI allocation as the former Pro plan
  • Up To 20 Agents - And up to 100 documents in the Knowledge Hub
  • 15 USD Free Intelligence Credits - Up from 5 USD — no credit card required to get started

Availability: The Individual plan is available now. Existing Starter subscribers will be transitioned automatically in 30 days.

📖 Documentation: Units of Intelligence

🎨 Refreshed UX

Flat Design, Sharper Experience

The Chats and Settings interfaces have been completely refreshed with a modern flat-design language — cleaner surfaces, refined spacing, and a more focused workspace that lets your conversations and configuration take centre stage.

🔌 MCP Server v0.7.5

New Skills, Member & Connection Tools, and Backend Fixes

The ToothFairyAI MCP Server has been updated to v0.7.5 with new tools for member and connection creation, comprehensive entity parameter alignment, structured output support, and backend fixes.

🐍 Python SDK v0.6.2

Custom User-Agent Header Support

The ToothFairyAI Python SDK has been updated to v0.6.2 with support for custom User-Agent headers — enabling better request tracing and identification across both REST and streaming API calls.

📖 PyPI: pypi.org/project/toothfairyai

🤖 New Models: Qwen 3.6 Plus & GLM 5.1

Two powerful new reasoning models join the ToothFairyAI lineup this release.

Qwen 3.6 Plus — 800K Context

Qwen 3.6 Plus brings an industry-leading 800,000 token context window — the largest available on ToothFairyAI — ideal for processing extensive documents, long codebases, and multi-document analysis. It features built-in step-by-step reasoning and interleaved reasoning support for Agentic Tooling 2.0 workflows.

GLM 5.1 — 200K Context with Tool Calling

GLM 5.1 is a reasoning model with a 200K context window, interleaved thinking, native tool calling, and planner support — making it a strong choice for Agentic Tooling 2.0 tasks that require both reasoning and tool use.

🔌 New /dispatch API Endpoint

Async Task Dispatch with File Support

We've added a new /dispatch endpoint alongside our existing agent methods, giving all Business and Enterprise customers the ability to dispatch just-in-time async tasks with full feature support.

Key Capabilities:

  • Async Execution - Submit long-running tasks and poll for status via /dispatch/status/{taskId}
  • File Support - Attach images, audio, video, and documents via S3 references
  • Lifecycle Tracking - Full status tracking: QUEUED → RUNNING → COMPLETED/FAILED
  • CronJobs Infrastructure - Leverages the existing CronJobs/Batch infrastructure for reliable execution

📖 API Documentation: apidocs.toothfairyai.com - Async Tasks

💻 Expanded Fine-Tuning

35 Trainable Models & Vision Fine-Tuning

Fine-tuning is now available across 35 models spanning 6 size tiers — from lightweight 1B models for fast experimentation to 120B models for maximum capability.

Supported Model Families:

  • Llama — 3.2 1B/3B, 3.1 8B, 4 Scout 17B, 3.1/3.3 70B
  • Qwen — 2.5 1.5B/3B/7B/14B/32B/72B, 3 4B
  • Gemma — 2 9B, 3 1B/4B/12B/27B, 4 E4B/31B
  • DeepSeek R1 — Distill Qwen 1.5B/14B/32B, Distill Llama 8B/70B
  • Mistral — 7B, Small 3.2 24B
  • GLM-4 — 32B
  • GPT-OSS — 120B

VLM Fine-Tuning: Vision Language Models can now be fine-tuned with image data:

  • Qwen 2.5 VL 3B/7B/32B/72B
  • Qwen 3 VL 4B/8B
  • Llama 3.2 11B Vision

Training Cost: Billed through the same UoI pool as API usage, with training charts automatically generated for every run.

📖 API Documentation: apidocs.toothfairyai.com - Fine-Tuning

💡 Three New Fine-Tuning Methods

GRPO, KTO, and Continued Pretraining

We've expanded fine-tuning from 2 methods (SFT and DPO) to 5 methods, giving you more flexibility in how you train your models.

  • SFT - Supervised Fine-Tuning with conversation data. Best for teaching the model specific behaviours and response styles.
  • DPO - Direct Preference Optimization with paired preference data. Best for aligning the model to preferred outputs over non-preferred ones.
  • GRPO - Group Relative Policy Optimization: the model self-generates completions and scores them with reward functions. Best for improving output quality without human labelling; reinforcement learning from reward signals.
  • KTO - Kahneman-Tversky Optimization: only needs binary good/bad labels per response, not paired preferences. Best when you have simple feedback (thumbs up/down) rather than preference pairs.
  • CPT - Continued Pretraining on raw domain text, with sequence packing for efficiency. Best for baking domain knowledge (legal, medical, financial) into the model's weights.
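To make the GRPO mechanics concrete, here is a toy sketch of reward scoring: completions are scored by reward functions, and each completion's advantage is its reward relative to the group mean. The specific checks and weights are illustrative assumptions, not ToothFairyAI's built-in rewards:

```python
# Toy GRPO-style reward functions. The specific checks and weights below
# are illustrative assumptions, not the platform's built-in rewards.

def format_reward(completion: str) -> float:
    """1.0 if the completion ends with terminal punctuation, else 0.0."""
    return 1.0 if completion.rstrip().endswith((".", "!", "?")) else 0.0


def length_penalty(completion: str, target_words: int = 50) -> float:
    """Penalty that grows as word count drifts from a target length."""
    drift = abs(len(completion.split()) - target_words)
    return min(1.0, drift / target_words)


def composite_reward(completion: str) -> float:
    """Weighted blend: reward good formatting, penalise length drift."""
    return 0.7 * format_reward(completion) - 0.3 * length_penalty(completion)


def group_advantages(rewards: list[float]) -> list[float]:
    """GRPO scores each completion relative to its group's mean reward."""
    mean = sum(rewards) / len(rewards)
    return [r - mean for r in rewards]
```

Because rewards are computed from the outputs themselves, GRPO needs only prompts, which is why no human labelling is required.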

Key Details:

  • GRPO uses configurable reward functions (format compliance, length penalty, and a default composite reward) — no human labels needed, just prompts
  • KTO unpairs DPO preference data into individual responses with true/false labels, making data collection much easier
  • CPT concatenates conversation content into plain text and uses sequence packing for maximum training efficiency
  • All three new methods reuse existing conversation or preference documents — no new data format required
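The KTO unpairing step described above is easy to visualise: each chosen/rejected pair becomes two independent rows with a boolean label. A minimal sketch, where the field names follow the common DPO convention and are assumptions about the stored document format:

```python
# Sketch of unpairing DPO preference data into KTO examples. Field names
# (prompt/chosen/rejected) follow the common DPO convention and are
# assumptions about ToothFairyAI's stored document format.

def unpair_for_kto(pairs: list[dict]) -> list[dict]:
    """Turn each preference pair into two rows with true/false labels."""
    examples = []
    for pair in pairs:
        examples.append({"prompt": pair["prompt"],
                         "completion": pair["chosen"], "label": True})
        examples.append({"prompt": pair["prompt"],
                         "completion": pair["rejected"], "label": False})
    return examples


dpo_rows = [{"prompt": "Define UoI.",
             "chosen": "A usage unit.", "rejected": "No idea."}]
kto_rows = unpair_for_kto(dpo_rows)  # two rows: one labelled True, one False
```

This is also why KTO makes data collection easier: a lone thumbs-up or thumbs-down already yields a training row, with no matching alternative response required.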

📖 Documentation: Training Data

🔌 Fine-Tuning API

6 New API Endpoints for Training Lifecycle Management

Fine-tuning operations are now fully accessible via API, enabling programmatic training workflows.

Available Endpoints:

  • GET /finetuning/models - List all trainable models with instance types and estimated UoI cost
  • POST /finetuning/dataset - Start dataset generation for a training job
  • POST /finetuning/start/{id} - Start training on a dataset-ready job
  • GET /finetuning/status/{id} - Get detailed job status with metrics and download URLs
  • GET /finetuning/jobs - List all training jobs for the workspace
  • POST /finetuning/cancel/{id} - Cancel a running training job

📖 API Documentation: apidocs.toothfairyai.com - Fine-Tuning

Availability: The fine-tuning API is available for Business and Enterprise subscriptions. Individual users can create fine-tuning datasets only.


📋 Summary

This update includes:

  • TF Code for All - Your AI engineer, now available on every subscription tier including Individual
  • Individual Plan - Starter and Pro merged into a flat-rate Individual plan ($14.99/mo, all features, 1,500 UoI, 15 USD free credits)
  • Refreshed Interface - Flat-design overhaul for Chats and Settings with cleaner surfaces and refined spacing
  • Agentic Tooling 2.0 (Beta) - Adaptive digital worker with autonomous tool selection, parallel execution, agent delegation, skills, and memory (reasoning models only)
  • Three Agent Modalities - Choose between Operator, Agentic Tooling (Legacy), and Agentic Tooling 2.0 (Beta)
  • Qwen 3.6 Plus & GLM 5.1 - New reasoning models (800K and 200K context, interleaved thinking, tool calling)
  • Expanded Fine-Tuning - 35 trainable models across 6 size tiers, plus VLM fine-tuning with image data
  • 3 New Fine-Tuning Methods - GRPO (reward-based RL), KTO (unpaired preference), CPT (continued pretraining) added alongside SFT and DPO
  • Fine-Tuning API - 6 new REST API endpoints for programmatic training lifecycle management
  • /dispatch API - Async task dispatch with file support for long-running tasks
  • MCP Server v0.7.5 - New member/connection tools, structured output, entity parameter alignment
  • Python SDK v0.6.2 - Custom User-Agent header support for REST and streaming

Thank you for using ToothFairyAI! We're constantly working to improve your experience. If you have any feedback or encounter any issues, please don't hesitate to reach out to our support team at support@toothfairyai.com.

Keep building magic with AI! 🧚‍♀️✨

v0.676.0