Functions
Functions allow an AI agent to consume data from an API endpoint. Administrators can apply settings to functions that determine how an agent makes requests to an API.
Menu location
Functions can be created or edited from the following menu:
Settings > Functions
Types of Function
The following function types are available:
- API - Calls to retrieve information from an API
- Python - Calls to a Python code template that gets executed dynamically with injectable parameters
- DB - Calls to predefined stored procedures on the SQL database of choice
- MCP - Model Context Protocol integration for connecting to remote MCP servers
- HTML - HTML embedding of elements in the chat
- CHAT - Scripts and predefined next steps for the agent to follow in a chat
HTML functions are available only from the Pro plan onwards, and DB functions are available only on Enterprise plans.
Function Configuration
Core Settings
Every function requires these basic settings:
- Name
- Description
- Callback
When configuring the Description setting it is especially important to clearly state the specific purpose of the function, because the chat agent bases its decision to invoke a function on this field. We recommend including as much relevant detail as possible in this field while staying short and concise.
The Description field is used by the agent to determine when to call the function. The agent calls the function based on the scenario described in the Description field - not so much on what the function actually does, since that information is inferred from the parameters and the code block (if any).
An example is "Use this tool when the user asks for the weather in a specific location" rather than "This function retrieves the weather in a specific location".
Regarding the Callback setting: when it is Off, the function is called when the user asks a question; when it is On, the function is called after the agent has answered the question.
This gives the user more control over when the function is called. For example, if the function needs to log the user's details, it is better to set Callback to On so that the function is called after the agent has answered the question. If instead the function needs to retrieve some information from an external source, it is better to set Callback to Off so that the function is called before the agent answers the question.
API Function specific configuration
- URL - The endpoint of the API for the associated Agent to consume
- Request Type - The type of request to be made to the API (GET, POST, PUT and DELETE) - GET is the default.
- Authorisation type - The type of authorisation to be used for the API request. This is optional. ToothFairyAI supports OAuth2, API key and Bearer token authorisation types
- Authorisation - The authorisation token to be used for the API request. This is optional and can be set in the Authorisations section of the function settings.
API functions assume the response from the API will be a JSON-parsable object. If the response is not a JSON-parsable object, you may need to modify your code to handle the response appropriately, otherwise the request may appear as an error in the Tools details.
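For orientation, the sketch below shows how these settings might come together in an API function definition. It mirrors the JSON layout of the MCP configuration example later on this page; the url points to a hypothetical weather API, and the requestType and headers keys are assumptions used purely for illustration:
{
  "name": "Get weather",
  "description": "Use this tool when the user asks for the weather in a specific location",
  "type": "API",
  "url": "https://api.example.com/weather",
  "requestType": "GET",
  "authorisationType": "bearer",
  "headers": {
    "Accept": "application/json"
  }
}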

Python Function specific configuration
- Execution hook template - The code execution template to be used for the function. This is required for the function to work. It is important to note that all dynamic parameters wrapped in {{}} must be defined in the parameters section of the function definition.
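For example, if a (hypothetical) execution hook template contains the placeholders {{city}} and {{days}}, both must be defined in the parameters section using the same JSON schema format described later on this page - a minimal sketch:
{
  "type": "object",
  "properties": {
    "city": {
      "type": "string",
      "description": "The city to run the code for, e.g. San Francisco"
    },
    "days": {
      "type": "number",
      "description": "Number of days to include"
    }
  },
  "required": ["city", "days"]
}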
DB Function specific configuration
DB Functions allow your agents to execute stored procedures on your SQL databases seamlessly. Before setting up DB functions, ensure you have properly configured your database connections.
- Stored procedure - Name of the database procedure to execute (e.g. getCustomers, updateOrderStatus)
- Connection - Select from your configured database connections. The connection must be set up and network access configured with ToothFairyAI support before use.
Before creating DB functions:
- Contact ToothFairyAI support to configure network access to your database
- Set up a secure database connection
- Never use publicly accessible databases - ensure proper network security
How DB Functions Work
When your agent calls a DB function, ToothFairyAI automatically handles the complex database interaction:
Smart Parameter Detection: ToothFairyAI analyzes your stored procedure to understand its parameters and their correct order, regardless of how you define them in the function parameters.
Automatic Parameter Mapping: The system maps your function parameters to the stored procedure's expected parameters, warning you if any required parameters are missing.
Database-Optimized Execution: Each database type uses its optimal execution method:
- MySQL: Uses the specialized callproc() method for better performance and error handling
- PostgreSQL: Executes with standard CALL statements
- SQL Server: Uses EXEC commands for reliable execution
- Oracle: Wraps calls in proper BEGIN/END blocks
Smart Result Processing: All results are automatically converted to JSON format, handling data type conversions (decimals, dates, multiple result sets) seamlessly.
Secure Connection Management: Database connections are properly managed and closed after each operation for security and performance.
Just like API and Python functions, you need to define parameters in the JSON schema format. The system will automatically map these to your stored procedure's parameters in the correct order.
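For example, for the getCustomers stored procedure mentioned above - assuming, purely for illustration, that it takes a single customerId argument - the parameters could be defined as:
{
  "type": "object",
  "properties": {
    "customerId": {
      "type": "string",
      "description": "The unique identifier of the customer to look up"
    }
  },
  "required": ["customerId"]
}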

MCP Function specific configuration
MCP (Model Context Protocol) functions allow your agents to connect to remote MCP servers and access their tools and capabilities seamlessly.
- MCP Server URL - The full endpoint URL of the remote MCP server (e.g., https://mcp.example.com/server)
- Authorisation type - The type of authorisation to be used for the MCP server connection. Supported types include OAuth, API key, and Bearer token
- Authorisation - The authorisation credentials to be used for the MCP server connection. This must be set in the Authorisations section of the function settings
- Additional headers - Optional headers to include in requests to the MCP server (e.g., custom authentication headers or content-type specifications)
- Static arguments - Optional constant parameters that will be included with every request to the MCP server
- Callback function - Controls when the MCP function is invoked (before or after the agent's response)
Model Context Protocol (MCP) is a standardized protocol for connecting AI agents to external tools and data sources. MCP servers expose their capabilities through a unified interface that ToothFairyAI agents can seamlessly integrate with.
When configuring an MCP function, ensure your MCP server URL is accessible from ToothFairyAI's infrastructure. The server should properly implement the MCP specification and handle authentication according to your chosen authorisation type.
HTML Function specific configuration
- URL - URL of the HTML page to be embedded
- Message on successful submit
- Message on failed submit

Chat Function specific configuration
- Chat script - Script for the chatbot to follow. When the Chat action type selected is suggestion, the user can include in the text a link in markdown format to redirect the user to a custom webpage, overriding the normal behaviour of the suggestion in the chat.
- Chat action - Whether the chatbot should suggest a different question or indicate the next step of the conversation

How ToothFairyAI Efficiently Handles MCP Integration
ToothFairyAI implements a sophisticated MCP (Model Context Protocol) proxy architecture that enables seamless integration with external MCP servers. This universal gateway automatically discovers, loads, and proxies tools from remote MCP servers, making them available to your AI agents as native capabilities.
Architecture Overview
The MCP integration follows a multi-stage workflow that handles all complexity transparently:
External MCP Servers → ToothFairyAI MCP Proxy → Your Agents
(Perplexity, Exa, etc.)   (Universal Gateway)     (Seamless Access)
Stage 1: Configuration Discovery
- Administrators configure external MCP servers in the ToothFairyAI platform
- Each MCP server configuration includes:
- Server URL endpoint
- Transport type (HTTP or SSE)
- Authentication credentials (securely stored in AWS Secrets Manager)
- Optional custom headers
- Static arguments for consistent parameters
Stage 2: Automatic Server Initialization
When the ToothFairyAI infrastructure starts:
- Loads all external MCP server configurations from the database
- Retrieves authentication credentials securely from AWS Secrets Manager
- Establishes connections and initializes MCP sessions with each server
- Fetches available tools via the standard tools/list JSON-RPC method
- Registers each tool locally with a namespace prefix (e.g., perplexity_search from server func-mcp-perplexity)
Stage 3: Agent Tool Discovery
When your ToothFairyAI agents connect:
- Agents call tools/list on the proxy server
- Response includes both native tools and ALL proxied tools from external MCP servers
- Each tool comes with complete JSON schema and descriptive metadata
- Tools appear as native capabilities to the agent - no distinction required
Stage 4: Transparent Tool Execution
When an agent calls a tool:
- The proxy identifies which external MCP server owns the tool
- Forwards the request with proper authentication using cached session credentials
- Handles the response (standard JSON or SSE stream)
- Processes and formats results appropriately
- Returns clean, formatted response to the agent
Stage 5: Intelligent Response Processing
- HTTP Transport: Simple JSON response returned directly
- SSE Transport: Collects multiple event stream messages and extracts the final result
- All responses are formatted consistently for agent consumption
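To make the flow above concrete, the messages exchanged follow the standard MCP JSON-RPC format. The sketch below is illustrative only - the tool name and arguments are hypothetical - showing a tool discovery call followed by an invocation of a proxied tool, which the proxy then forwards to the owning MCP server with the proper authentication:
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "perplexity_search",
    "arguments": { "query": "latest developments in the MCP specification" }
  }
}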
Supported Transport Protocols
ToothFairyAI fully supports both official MCP transport protocols:
1. HTTP Transport (Recommended)
- Standard JSON-RPC over HTTP POST requests
- Simple request-response pattern
- Best for: Most use cases, reliable connections
- Status: Production-ready
2. SSE Transport (Server-Sent Events)
- JSON-RPC over HTTP with streaming responses
- Supports real-time server-to-client messaging
- Best for: Streaming responses, long-running operations
- Status: Fully supported with backward compatibility
- Note: While deprecated in favor of Streamable HTTP in MCP spec 2025-03-26, ToothFairyAI maintains support for existing integrations
Key Technical Features
Multi-Workspace Isolation
- Each workspace can configure its own MCP servers
- Tool namespacing prevents conflicts across workspaces
- Secure credential management per workspace
Session Management
- Automatic session caching and reuse for performance
- Intelligent session refresh when needed
- Connection pooling for high-throughput scenarios
Flexible Authentication
- Bearer token authentication
- OAuth2 flows
- API key-based authentication
- No-auth for public MCP servers
- Custom header support for proprietary auth schemes
Error Handling & Reliability
- Comprehensive error handling with detailed logging
- Automatic retry logic for transient failures
- Graceful degradation when external servers are unavailable
- Clear error messages propagated to agents
Transport Auto-Detection
- Automatically handles JSON or SSE response formats
- No manual configuration required
- Seamless switching based on server capabilities
Tool Namespacing
- Prevents naming conflicts using server ID prefixes
- Example: perplexity_search, exa_search, tavily_search
- Clear tool provenance for debugging and monitoring
Hot Reload Capability
- Can discover new MCP servers without redeployment
- Dynamic tool registration as servers are added
- Zero downtime for configuration updates
Benefits for Agents
From an agent's perspective, MCP tools are completely indistinguishable from native tools. The agent doesn't need to:
- Know where the tool is physically located
- Handle authentication with external services
- Understand which transport protocol to use
- Manage different response formats
- Deal with session management or connection pooling
The proxy handles all complexity transparently, providing a unified interface for all tool capabilities.
Common Use Cases
Search Integration
- Connect to Perplexity for AI-powered search
- Integrate Exa for semantic web search
- Use Tavily for research-grade information retrieval
Data Access
- Link to database MCP servers for structured data retrieval
- Access CRM systems via MCP interfaces
- Query knowledge bases and documentation repositories
API Bridging
- Expose REST APIs through standardized MCP protocol
- Unify disparate API interfaces under common schema
- Add semantic search capabilities to existing APIs
Custom Tools
- Deploy proprietary MCP servers with specialized capabilities
- Create domain-specific tools for your agents
- Build reusable tool libraries across projects
Multi-Agent Collaboration
- Share tools across multiple AI agents in your workspace
- Create specialized agents with access to different tool sets
- Enable complex workflows with coordinated tool usage
External Service Integration
- Connect to SaaS platforms via their MCP endpoints
- Integrate third-party AI services and models
- Access cloud storage and computing resources
Configuration Example
To add an external MCP server to ToothFairyAI, configure a function with these settings:
{
"name": "Perplexity Search",
"description": "AI-powered search via Perplexity MCP server",
"type": "MCP",
"url": "https://perplexity-mcp.vercel.app/",
"authorisationType": "bearer",
"staticArgs": {
"transport": "http",
"enabled": true
},
"headers": {
"X-Custom-Header": "value"
}
}
Authentication credentials are stored securely in the Authorisations section and linked to the function.
Performance Considerations
The MCP proxy architecture is designed for high performance:
- Session caching minimizes authentication overhead
- Connection pooling reduces latency
- Parallel request handling for multiple tool calls
- Efficient JSON-RPC message parsing
- Minimal proxy overhead (typically <50ms added latency)
Security Features
ToothFairyAI's MCP integration prioritizes security:
- All credentials stored in AWS Secrets Manager
- Encrypted communication with external MCP servers
- Workspace-level isolation prevents cross-tenant access
- Audit logging of all MCP tool invocations
- Rate limiting to prevent abuse
- Request validation and sanitization
This architecture enables ToothFairyAI agents to seamlessly leverage the entire ecosystem of MCP-compatible tools and services while maintaining enterprise-grade security, performance, and reliability.
Advanced settings
Dynamic URL (API only)
The dynamic URL setting allows the user to set a dynamic URL for the API endpoint. This is useful when the API endpoint is not static and needs to be generated dynamically based on the user input or other parameters. The dynamic URL can be set using the {{}} syntax, similarly to how it is done in the code execution settings. For example, if the API endpoint is https://api.example.com/weather/{{location}}, the location parameter will be replaced with the actual value provided by the user.
It is important that the name of the dynamic parameter in the URL matches the name of the related variable inside the parameters section of the function settings. The dynamic URL can be used in conjunction with the parameters section to create a fully dynamic API endpoint.
When a variable is used in the URL, it is automatically removed from the body of the request. This means that the variable will not be included in the request body, and only the remaining parameters will be sent in the request body.
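Continuing the example above, a function with the dynamic URL https://api.example.com/weather/{{location}} would need a matching location entry in its parameters section - a minimal sketch:
{
  "type": "object",
  "properties": {
    "location": {
      "type": "string",
      "description": "The city and state, e.g. San Francisco, CA"
    }
  },
  "required": ["location"]
}
With this definition, location is substituted into the URL and, as described above, removed from the request body.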
Parameter settings (API, Python, DB and MCP only)
When creating a function, by default a code block is provided which demonstrates the required settings needed to consume the API URL. The properties object is required and must include the parameters which the API needs to make requests. The code block which is provided by default when creating a function is below:
{
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": ["celsius","fahrenheit"]
}
},
"required": ["location"]
}
For each object within the properties object, a key (or property) which relates to the API's parameters needs to be included. In the example code block above, location is a required parameter for consuming the API. Within each property is a type which must be included. A type is the data type used by this parameter, e.g. "string", "number", "array" or "object".
These properties (or parameter objects) should also contain a description key with details about what the property is. The more descriptive and accurate the description, the better the results an agent can provide. The agent will use these description items as a reference for how to answer questions and consume the API.
The required array is used to list out the mandatory properties needed for making requests.
Each dynamic parameter wrapped in {{}} must be defined in the parameters section of the function definition. This will effectively instruct the agent on how to replace the dynamic parameter with the actual value. The parameters defined in the function should be dictated by the Code execution template selected.
A Python function always requires an Environment to be selected.
Each dynamic parameter wrapped in {{}} defined in the url must be defined in the parameters section of the function definition. This will instruct the agent on how to construct the dynamic URL.
When the API requires a multipart/form-data or application/x-www-form-urlencoded content type, you simply need to include the Content-Type header in the additional headers section of the function settings. The Content-Type header should be set to multipart/form-data or application/x-www-form-urlencoded respectively.
Unless specified otherwise in the Headers section, the Content-Type header is automatically set to application/json when the request type is POST or PUT.
Scope (API and DB only)
Scope is extremely useful when the function is required to persist data for the duration of the conversation and especially when the agents need to authenticate the customer or the case prior to disclosing any sensitive information.
ToothFairyAI abstracts the two key components of such interactions in the chat with the concept of Customer and Case.
This allows complex service interactions or internal workflows to be handled by the agent without overloading the agent with unnecessary information.
The scope setting allows the user to define what kind of the data is being persisted within the chat session for any given agent and function.
The available options are Customer retrieval, Case retrieval, Customer authentication, and Case authentication.
- Customer retrieval - This scope is used to persist data related to the customer for the duration of the conversation. The data is stored in the customerInfo object and can be accessed by the agent throughout the conversation. To take effect, the customer id must be populated manually at the creation of the chat, via widget URL query parameters (tfCustomerId), or a Customer authentication function must be accessible by the agent to store the customer id at runtime based on the information provided by the user in the conversation.
- Case retrieval - This scope is used to persist data related to the case for the duration of the conversation. The data is stored in the caseInfo object and can be accessed by the agent throughout the conversation. To take effect, the case id must be populated manually at the creation of the chat, via widget URL query parameters (tfCaseId), or a Case authentication function must be accessible by the agent to store the case id at runtime based on the information provided by the user in the conversation.
- Customer authentication - This scope is used to persist only the customer id within the session (use the JSON path extractor to correctly set up this workflow when the response is a JSON object). The data is stored in the customerId field and can be used by any agent with a function with Customer retrieval as scope to retrieve the customer information.
- Case authentication - This scope is used to persist only the case id within the session (use the JSON path extractor to correctly set up this workflow when the response is a JSON object). The data is stored in the caseId field and can be used by any agent with a function with Case retrieval as scope to retrieve the case information.
It is recommended to couple the scope setting with the Agent hand-off feature to implement additional information segregation with unauthenticated customer/case.
Thanks to these settings virtually any customer/employee interaction can be handled by the agent with focus on a customer and/or a case (quotation, ticket, meeting etc).
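As an illustration (the response shape is hypothetical), a Customer authentication function calling a verification API that returns the response below could use the JSON path extractor customer.id so that only the customer id is persisted in the customerId field and becomes available to any Customer retrieval function later in the session:
{
  "customer": {
    "id": "12345",
    "name": "John",
    "verified": true
  }
}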
Widget URL Context Integration
Functions with Customer retrieval and Case retrieval scopes can be seamlessly integrated with widget URL context parameters. When users access the agent widget through URLs containing tfCustomerId or tfCaseId query parameters, these IDs are automatically made available to the relevant functions:
Example Integration Scenarios:
Customer Support Widget:
https://agent.toothfairyai.com/workspaceid/agentid?tfCustomerId=12345
Functions with Customer retrieval scope will automatically have access to customer ID 12345 and can retrieve the associated customer information without requiring additional authentication.
Ticket Management Widget:
https://agent.toothfairyai.com/workspaceid/agentid?tfCaseId=TICKET-789
Functions with Case retrieval scope will immediately have access to case ID TICKET-789 for retrieving ticket details and related information.
Combined Context Widget:
https://agent.toothfairyai.com/workspaceid/agentid?tfCustomerId=12345&tfCaseId=TICKET-789
Both customer and case context are available to their respective function scopes simultaneously.
This integration eliminates the need for manual customer/case authentication in many scenarios, streamlining the user experience and enabling immediate access to contextual information from the moment the conversation begins.
JSON path extractor (API and DB only)
The JSON path extractor is an optional setting which allows the user to extract specific data from the API or DB stored procedure response at a specific path. For example, if the API response is:
{
"customer": {
"name": "John",
"tickets": [
{
"id": 1,
"status": "open"
},
{
"id": 2,
"status": "closed"
}
]
}
}
If we are only interested in the id of the first ticket, the JSON path extractor would be customer.tickets.0.id, which returns 1 for the response above.
Request type (API only)
The available options for the request type are limited to GET, POST, PUT and DELETE.
Authorisation type (API only)
This optional setting is available to authenticate the request if it is required.
API keys, OAuth2 credentials, and Bearer tokens must be set in the authorisations section of the function settings.
Once an authorisation is created, select the authorisation type using the dropdown menu.
After an authorisation type is selected, the Authorisation dropdown will appear allowing the user to select the authorisation created.
Additional headers (API only)
This optional setting is available to provide additional headers when making requests if it is required. This can be used to set the Content-Type header to application/x-www-form-urlencoded or multipart/form-data when the API requires it.
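For example, to call an API that expects form-encoded data, the additional headers could be set as follows (sketch):
{
  "Content-Type": "application/x-www-form-urlencoded"
}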
Multipart Request Handling
When sending multipart requests, the data should be structured as a dictionary with specific formats for different types of content.
Base64 File Data
For file data, use base64 data URI format:
{
"field_name": "data:[content-type];base64,[base64-encoded-data]"
}
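For instance (field names and file content are illustrative only), a multipart request uploading a PDF alongside a plain text field could be structured as:
{
  "document": "data:application/pdf;base64,JVBERi0xLjQKJcfs...",
  "reference": "order-789"
}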
Static arguments
This optional setting is available to allow constant settings for the predefined properties when the agent consumes the API. For example, if a property is the same for all requests (eg a userID), then adding those details in this area will ensure that this property is always set to the same value.
Dynamic Context Keywords
ToothFairyAI provides special keywords that can be used as values in static arguments to dynamically inject customer and case IDs when available in the chat context:
- tfCustomerId - Automatically replaced with the current customer ID if available in the chat context
- tfCaseId - Automatically replaced with the current case ID if available in the chat context
Usage Examples:
{
"customerId": "tfCustomerId",
"caseId": "tfCaseId",
"apiKey": "your-static-api-key"
}
When the function is called:
- If a customer ID is available in the chat context (from widget URL parameters, authentication functions, or manual assignment), tfCustomerId will be replaced with the actual customer ID
- If a case ID is available in the chat context, tfCaseId will be replaced with the actual case ID
- If the respective IDs are not available, the keywords will be passed as empty strings
Real-world Example:
{
"customer_id": "tfCustomerId",
"ticket_id": "tfCaseId",
"workspace_id": "your-workspace-123",
"action": "get_customer_details"
}
This feature is particularly useful for APIs that require customer or case identifiers in every request, allowing functions to automatically adapt to the current conversation context without requiring manual parameter configuration.
Hand-off between Agents via Functions
The hand-off setting is available to allow the user to pass the conversation to another chat agent. This is useful when the workspace has multiple agents with multiple skills and the user needs to be transferred to another agent to answer a specific question. A common flow for this setting can be seen below:
- A Receptionist agent is created to answer general questions with very limited access to the customer database (e.g. the agent can only answer questions about the company's products and services or retrieve the customer and case id based on the information provided by users in the chat). In other words, the agent is not able to retrieve any information from the customer database aside from the customer or the case id.
- A Customer service officer agent is created to answer questions with access to the customer database - therefore it can retrieve personal information about the customer and the case.
- The Receptionist agent is assigned to a function which allows it to consume an API that matches a name and phone number or an order id to the customer ID in the customer database.
  - The information required to retrieve customer and case information is completely unopinionated and can be easily customised to meet any verification requirement.
  - For the function to persist the customer and the case information, the function must have Customer authentication or Case authentication set as scope.
  - Regardless of the information required, the function must have Agent hand-off enabled by selecting the agent Customer service officer to hand off to.
- A customer makes an inquiry about their order status, providing the order number or their name and phone number.
- The Receptionist agent will consume the API to retrieve the customer ID and case ID. The agent will then hand off the conversation to the Customer service officer agent, which has access to the customer database.
- Moving forward, the Customer service officer agent will be able to provide the customer with the information they require.
The hand-off setting allows an agent to hand off the conversation to multiple agents in the workspace depending on the interaction with the users in the chat. As it is part of the function settings, the hand-off can be chained across multiple agents in the workspace.
Simple chain: Agent A > Agent B > Agent C > Agent D
Complex chain: Agent A > [Agent B > Agent C] > Agent D
The agent hand-off poses a security risk if the function is not correctly configured. It is important to ensure that the function is correctly configured to prevent any data leakage, as the hand-off passes data to another agent that may not be meant to have access to the information accessible by the initial agent.
Hand-off with Orchestrator agents
Orchestrator agents override the hand-off setting of any function; in other words, no hand-off is executed during the execution of a plan designed by an orchestrator agent.
This means the hand-off is executed only if an orchestrator agent determines, as part of the plan execution, that the task needs to be assigned to another agent. Our recommendation is to use hand-off for more deterministic tasks where the agent is not able to answer the question and needs to pass the conversation to another agent, and to use orchestrator agents for more complex tasks where determining the next step is a purely agentic behaviour.
Hand-off To Humans via Functions
Overview
Human handoff functions in ToothFairyAI enable seamless transition from AI-driven interactions to human agent support when necessary. This feature is crucial for handling complex queries, providing personalised assistance, or managing situations beyond the AI's capabilities.
How It Works
- Configuration: Administrators can set up handoff rules similar to other agent functions in ToothFairyAI. These rules specify which users or teams should be notified when human intervention is required.
- Triggering: When the AI agent determines that human assistance is needed, it activates the handoff function. This action temporarily disables all AI agent behaviors in the conversation.
- Transition: The chat seamlessly switches to human interaction mode. Designated human agents can now provide input and respond to the user directly.
- Notification: Configured users receive an email notification with a direct link to the conversation requiring attention.
- Access and Interaction: Human agents can access these conversations through the Agents section, similar to standard chats. The key difference is that handed-over chats (e.g., from a Widget) become exclusively interactive for human agents.
- Priority and Handling: Human handoff takes the highest priority over other agent configurations. Once triggered, all other automated agent handoffs are disabled for that conversation.
- User Experience: From the user's perspective, the transition from AI to human agent is designed to be smooth. Users are clearly notified that the conversation has been handed off to a human agent and then continue their conversation, now interacting with the human agent instead of the AI.
- Management and Monitoring: All human handoff interactions are logged and can be reviewed in the Agents section. This allows for quality control, training improvements, and performance analysis.
Agent settings for Functions
An agent needs to be created in order to assign a function to it. One agent can be assigned to multiple functions of multiple types.

When configuring the function settings for an agent, it is possible to provide context to the function when choosing which one to call and how to call it. In other words, context allows you to dramatically increase the accuracy of the request by providing the agent with additional information.
Context settings
The available options are:
- Customer - The customer information that is passed to the function. This is useful when the function needs to retrieve information about the customer. This assumes the customer information has been retrieved by the agent using a Customer authentication function or a previous function with Customer retrieval scope.
- Case - The case information that is passed to the function. This is useful when the function needs to retrieve information about the case. This assumes the case information has been retrieved by the agent using a Case authentication function or a previous function with Case retrieval scope.
- Chat - The chat information contains metadata about the conversation such as the current date, time, a summary of the conversation and the user id. This is useful when the function needs to use and manipulate information present in the chat to correctly consume the function.
It is highly recommended to use Chat context when the function is stateful. This will ensure that the function will have access to the chat history and the current state of the conversation including the agent configuration and the metadata of the chat.
Passing information from HTML functions to ToothFairyAI
Information from a displayed HTML element in the chat can be passed back to ToothFairy's AI agent via window.top.postMessage. The data that gets sent to ToothFairy needs to be a JSON object where all elements to be sent are given as a value, in JSON object form, under the key "data". To ensure that ToothFairy knows that the data is coming from an HTML element, the key "tf_event_type" needs to be set to "form_submit". An example of how this can be done is shown below:
// name, surname and drivingLicense are assumed to hold the values collected from the HTML form
var message_form = {
  // all values to pass back to the agent go under the "data" key
  "data": {
    "name": name,
    "surname": surname,
    "driving_license": drivingLicense
  },
  // tells ToothFairy the message comes from an HTML element form submission
  "tf_event_type": "form_submit"
}
// post the message to the parent window hosting the chat
window.top.postMessage(message_form, '*')