How to successfully train your Agent

Testing the Agent script

After the @Script is assembled and the @Agent is trained (for information on how to successfully train the @Agent, see the corresponding section), it must be tested. To do this, toggle the DEBUG switch to open the @Debug Widget, where you can chat with the @Agent.
Important: if an hour or more has passed since your last message to the @Agent in the @Debug Widget, the @Dialog will be closed and the @Agent will stop responding. You will need to refresh the page for the @Agent to start responding again.
To test the @Agent as effectively as possible, complete the following steps:
Compile the most complete possible list of test questions for the @Agent. Think about how a real @Bot User will formulate questions, and try to cover most of those formulations. The list must also include phrases that the @Agent should not recognize and should send to @fallback.
Run the list of questions through the @Agent, check @Intent recognition, and calculate the percentage of correct answers. If a phrase is not recognized (ends up in @fallback) or is matched to the wrong @Intent, write it down and note the name of the @Intent where it should have ended up. After all the @Intents have been checked, add the written-down phrases to the corresponding @Intents and retrain the @Agent. If irrelevant phrases that the @Agent should not recognize end up in @Intents instead of @fallback, then after completing regression testing, try to select a more optimal @Confidence Threshold value. For more details, see the corresponding section.
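The scoring pass above can be sketched in a few lines of Python. This is a minimal illustration, not part of the platform: `classify` is a hypothetical stand-in for however you query your @Agent (replace it with a real call), and the sample phrases and @Intent names are made up.

```python
def classify(phrase):
    # Hypothetical stub standing in for the real Agent: returns the
    # recognized Intent name, or "fallback" when nothing matches.
    known = {
        "how do i reset my password": "reset_password",
        "what are your opening hours": "opening_hours",
    }
    return known.get(phrase.lower(), "fallback")

def score(test_cases):
    """test_cases: list of (phrase, expected_intent) pairs.
    Returns the percentage of correct answers and the misrecognized phrases."""
    errors = []
    for phrase, expected in test_cases:
        actual = classify(phrase)
        if actual != expected:
            errors.append((phrase, expected, actual))
    correct = len(test_cases) - len(errors)
    return correct / len(test_cases) * 100, errors

cases = [
    ("How do I reset my password", "reset_password"),
    ("What are your opening hours", "opening_hours"),
    ("Tell me a joke", "fallback"),            # should NOT be recognized
    ("I forgot my login", "reset_password"),   # likely to be missed
]
accuracy, errors = score(cases)
print(f"Correct answers: {accuracy:.0f}%")
for phrase, expected, actual in errors:
    print(f"  '{phrase}' -> {actual}, expected {expected}")
```

Each entry in `errors` is exactly what the step asks you to write down: the phrase, the @Intent it should have reached, and where it actually ended up.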
After retraining the @Agent, go through the list of @Intents again, check each one, and recalculate the percentage of correct answers. If @Intents are still being confused with each other, check the @Training Dataset. If the @Training Phrases in the @Training Datasets of different @Intents are too similar, either make the @Training Datasets of these @Intents more distinct from each other (remove similar wording, add more varied phrases) or combine the confused @Intents into one.
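One quick way to spot overly similar @Training Phrases across @Intents is a word-overlap (Jaccard) comparison. The sketch below is illustrative only: the @Intent names, phrases, and the 0.4 threshold are invented for the example, and real training data usually needs a more careful similarity measure.

```python
def jaccard(a, b):
    # Word-overlap similarity between two phrases: |A ∩ B| / |A ∪ B|.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

# Hypothetical Training Datasets of two Intents.
datasets = {
    "delivery_status": ["where is my order", "track my delivery"],
    "cancel_order": ["cancel my order", "i changed my mind"],
}

SIMILARITY_THRESHOLD = 0.4  # illustrative cut-off; tune for your data

# Compare every phrase pair drawn from two different Intents.
flags = []
names = list(datasets)
for i, first in enumerate(names):
    for second in names[i + 1:]:
        for p1 in datasets[first]:
            for p2 in datasets[second]:
                if jaccard(p1, p2) >= SIMILARITY_THRESHOLD:
                    flags.append((first, second, p1, p2))

for first, second, p1, p2 in flags:
    print(f"{first} / {second}: '{p1}' ~ '{p2}'")
```

Flagged pairs are candidates for rewording in one of the two @Training Datasets, or a hint that the two @Intents should be merged.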
If a @Vocabulary is involved in the @Script, check its work as follows: ask questions intended to land in a specific @Script Branch, either one of the branches with a reference or the 'true' @Script Branch. If a phrase does not end up in the @Script Branch it should, go to the @Vocabulary and check whether the word used in the phrase is in the @Vocabulary dataset. If it is not, add it; if it is, check whether the same word also appears in other @Entity datasets.
After saving the changes to the @Vocabulary, go through all the @Script Branches again and, if a phrase still does not fall into the desired @Script Branch, add the missing words to the @Vocabulary.
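The "is this word in the @Vocabulary, and does it also appear in another @Entity" check above can be sketched as a simple lookup. The @Entity names and word lists here are made up for illustration; your real datasets live in the platform.

```python
# Hypothetical Entity datasets: entity name -> set of vocabulary words.
entities = {
    "city":    {"paris", "london", "berlin"},
    "airport": {"heathrow", "orly", "berlin"},  # "berlin" appears in both
}

def lookup(word):
    """Return the names of all Entities whose dataset contains the word."""
    return [name for name, words in entities.items() if word.lower() in words]

print(lookup("Berlin"))  # in two Entities: a likely source of branch confusion
print(lookup("madrid"))  # in none: add it to the appropriate Vocabulary
```

A word that appears in several @Entity datasets at once is a likely cause of phrases landing in the wrong @Script Branch.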
It is also necessary to check the operation of all functionality. Pay particular attention to slots with complex functionality: @Regular Expression Slot, @Memory, etc.
Check the speed of transfer to the operator, if such a transfer is provided.
Check the operation of @External Requests and the speed of integration with external services, if provided.
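Measuring the speed of an @External Request can be as simple as timing the call against a latency budget. In this sketch, `call_service` is a hypothetical stand-in that only simulates latency; replace it with the real integration call, and treat the one-second budget as an example value, not a platform default.

```python
import time

def call_service():
    # Stand-in for a real External Request; simulates network latency.
    time.sleep(0.05)
    return {"status": "ok"}

def timed(fn, budget_seconds=1.0):
    """Run fn, measure how long it takes, and check it against a latency budget."""
    start = time.perf_counter()
    result = fn()
    elapsed = time.perf_counter() - start
    within = elapsed <= budget_seconds
    return result, elapsed, within

result, elapsed, within = timed(call_service)
print(f"response={result} elapsed={elapsed:.3f}s within_budget={within}")
```

Running each @External Request several times this way gives a rough picture of whether the integration is fast enough for a live @Dialog.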
Conduct a test in each involved @Project Channel: messengers, widget, etc.
Check spelling, punctuation and grammar.
Important: for the changes to take effect, do not forget to retrain your @Agent using the Train button in BotBuilder.