The training evaluation section lets you analyze how well your NLU agent recognizes intents and entities. It compares expected results against actual ones and highlights problem areas so you can improve the model.
Unlike manual phrase testing, training evaluation allows you to:
- Run checks for a large set of phrases at once
- Automatically calculate the percentage of successful and unsuccessful recognitions (see the sketch after this list)
- Visually assess the model’s performance statistics
- Eliminate the need to test each phrase manually
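The success percentage is simply the share of test phrases whose predicted intent matches the expected one. Below is a minimal sketch of that arithmetic; the `recognition_stats` helper and the sample data are illustrative assumptions, not the platform’s API:

```python
# Illustrative only: the evaluation's success percentage is the share of
# test phrases whose predicted intent matches the expected one.
def recognition_stats(results):
    """results: list of (expected_intent, predicted_intent) tuples."""
    total = len(results)
    successful = sum(1 for expected, predicted in results if expected == predicted)
    return {
        "successful_pct": 100 * successful / total,
        "unsuccessful_pct": 100 * (total - successful) / total,
    }

stats = recognition_stats([
    ("order_pizza", "order_pizza"),
    ("order_pizza", "check_status"),
    ("greeting", "greeting"),
])
print(stats)  # successful_pct ≈ 66.7, unsuccessful_pct ≈ 33.3
```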
Before you start:
- Log in to your dashboard
- Create an NLU agent
- Add intents and entities
To run a training evaluation:
- Go to the NLU agents section
- Select the desired agent
- Open the Training evaluation tab
- Add test phrases (see the sketch after these steps):
  - Click Create training evaluation or the icon in the top-left corner
  - Enter the test phrase
  - Select the expected intent from the dropdown list
  - (Optional) Select expected entities from the dropdown list
  - Click Save. The phrase appears in the list
- Click the Train button and wait for the training process to complete
- Refresh the page if needed and check the evaluation results
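To make the expected-versus-actual comparison concrete, here is a sketch of what a test phrase record and the success check amount to. The field names and the `evaluate` helper are illustrative assumptions, not the platform’s actual data model:

```python
# Illustrative assumption: each test phrase pairs an utterance with the
# expected intent and, optionally, expected entities. The evaluation marks
# a phrase successful only when the trained agent's prediction matches.
test_phrases = [
    {
        "text": "I want a large pepperoni pizza",
        "expected_intent": "order_pizza",
        "expected_entities": {"size": "large", "topping": "pepperoni"},
    },
    {
        "text": "hello there",
        "expected_intent": "greeting",
        "expected_entities": {},
    },
]

def evaluate(phrase, predicted_intent, predicted_entities):
    """Hypothetical check mirroring what the evaluation results report."""
    intent_ok = predicted_intent == phrase["expected_intent"]
    entities_ok = predicted_entities == phrase["expected_entities"]
    return intent_ok and entities_ok

print(evaluate(test_phrases[1], "greeting", {}))  # True
```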
If recognition accuracy is insufficient, adjust the agent settings and retrain.
Whenever you add new evaluation phrases, always retrain the agent so the results reflect them.
- Regularly update the list of test phrases to reflect new user communication scenarios
- Analyze performance metrics and iterate on the agent to improve model quality
- Use the evaluation tool to compare different model versions and choose the best configuration (a comparison sketch follows)
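One simple way to compare versions is to record the success percentage of each training run and pick the highest. A minimal sketch under that assumption; the run names and figures are made up for illustration:

```python
# Illustrative: pick the configuration with the highest success rate,
# assuming you recorded each run's percentage from the evaluation results.
runs = {
    "v1-baseline": 78.5,
    "v2-more-test-phrases": 84.0,
    "v3-tuned-settings": 81.2,
}
best = max(runs, key=runs.get)
print(f"Best configuration: {best} ({runs[best]}% successful)")
```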