Configuring Quality Items

Quality items assign a numerical score to your interactions (calls, emails, chats) so you can monitor your advisors' performance.
Three formats are available:

  1. Binary: an all-or-nothing criterion (OK / NOK / NA)
  2. Levels (multi-level): several degrees of compliance
  3. Compound: a score calculated from other items

1. Binary item

1.1 When to use it

When the criterion is either met or not met.
Example: "Did the advisor announce the amount of the deductible?"

1.2 Possible outputs

OK - NOK - NA

NA (Not applicable) is detected automatically if the subject is not covered. You can optionally add an NA exception ("This item is not applicable if..."), but it is not mandatory.

1.3 Recommended structure

Rule: The advisor must explicitly tell the customer the exact amount of the deductible (€150).

OK examples:
- "You will be responsible for the €150 deductible."
- "You will have to pay a fixed deductible of €150."
- "You will be responsible for only the €150 deductible; you will be compensated for the rest."

NOK examples:
- "Your expenses will be covered. (amount not specified)
- "There will be a small amount left to pay." (amount absent)
- No mention of the deductible.

Guidelines:
- If the amount of €150 is clearly announced → result="OK"
- If the amount is not announced or remains ambiguous → result="NOK"
- If the subject "deductible" is never raised → not_applicable=true; result=null
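
For illustration, these guidelines can be read as a small decision function with three possible outcomes. The Python sketch below is purely hypothetical (the platform classifies items automatically; the field names result and not_applicable simply mirror the guidelines above):

  # Hypothetical sketch of the binary-item logic described above.
  # The platform performs this classification automatically; this only
  # illustrates the three possible outcomes.
  def grade_deductible_item(transcript: str) -> dict:
      """Return a binary-item result following the OK / NOK / NA guidelines."""
      mentions_deductible = "deductible" in transcript.lower()
      states_exact_amount = "150" in transcript  # the exact amount from the rule (€150)

      if not mentions_deductible:
          # The subject is never raised -> not applicable
          return {"not_applicable": True, "result": None}
      if states_exact_amount:
          return {"not_applicable": False, "result": "OK"}
      # The deductible is mentioned but the amount is absent or ambiguous
      return {"not_applicable": False, "result": "NOK"}

  print(grade_deductible_item("You will be responsible for the 150 euro deductible."))
  # {'not_applicable': False, 'result': 'OK'}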

2. Levels item (multi-level)

2.1 When to use it

When the criterion admits several degrees of quality and you want nuanced feedback (Excellent / Average / Insufficient...).

2.2 Parameter setting

  1. General description: context and applicability rules (NA).
  2. Levels:
    • Name (Compliant, Partial, Non-compliant...)
    • Score (e.g. 6 / 3 / 0)
    • Description + examples

2.3 Example

Assess only if the customer has not already provided all the necessary information concerning the claim. Otherwise: NA.
Level 1 - Compliant (6 pts): asks open-ended, pertinent questions, e.g.:
  "What is the date of the damage?"
  "Is the vehicle immobilized?"
Level 2 - Partial (3 pts): closed or awkward questions, but sufficient to deal with the request.
Level 3 - Non-compliant (0 pt): no useful questions, or questions about information already covered.
NA example: a simple follow-up call where no questions are required.
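
As an illustration only, this Levels item can be pictured as a simple data structure mirroring the parameters from 2.2. The field names below (description, na_rule, levels, name, score, criteria) are hypothetical, not the platform's actual schema:

  # Hypothetical sketch of a Levels item, mirroring the parameters in 2.2.
  # Field names are illustrative only, not the platform's actual schema.
  claim_questions_item = {
      "description": "Assess only if the customer has not already provided "
                     "all the necessary information concerning the claim.",
      "na_rule": "Simple follow-up call: no questions required.",
      "levels": [
          {"name": "Level 1 - Compliant", "score": 6,
           "criteria": "Asks open-ended, pertinent questions "
                       "(e.g. 'What is the date of the damage?')."},
          {"name": "Level 2 - Partial", "score": 3,
           "criteria": "Closed or awkward questions, but sufficient to deal with the request."},
          {"name": "Level 3 - Non-compliant", "score": 0,
           "criteria": "No useful questions, or questions about information already covered."},
      ],
  }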

3. Compound item

3.1 When to use it

When the score depends on a logical combination of other items: bonuses, penalties, thresholds, and so on.

3.2 Visual settings

  • Select all / any conditions.
  • Select the expected status of source items (OK / NOK / NA).
  • Define the Level name + points.
  • Add Else if for other combinations.

3.3 Example

IF all of:
▸ Item "Empathy" is OK
▸ Item "Adapted Solution" is OK
THEN
Level name: Premium Service | Points: +2

ELSE IF any of:
▸ Item "Empathy" is NOK
▸ Item "RGPD Compliance" is NOK
THEN
Level name: Penalty | Points: -5
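
This IF / ELSE IF logic is evaluated top to bottom: the first matching rule determines the level and points. The sketch below is a hypothetical illustration of that order, reusing the item names from the example (it is not the platform's implementation):

  # Hypothetical sketch of the compound-item logic from the example above.
  # `results` maps source item names to their status: "OK", "NOK" or "NA".
  def grade_compound(results: dict) -> dict | None:
      # IF all of: "Empathy" is OK AND "Adapted Solution" is OK
      if all(results.get(name) == "OK" for name in ("Empathy", "Adapted Solution")):
          return {"level": "Premium Service", "points": +2}
      # ELSE IF any of: "Empathy" is NOK OR "GDPR Compliance" is NOK
      if any(results.get(name) == "NOK" for name in ("Empathy", "GDPR Compliance")):
          return {"level": "Penalty", "points": -5}
      # No rule matched: the compound item awards nothing
      return None

  print(grade_compound({"Empathy": "OK", "Adapted Solution": "OK"}))
  # {'level': 'Premium Service', 'points': 2}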

4. Grading Method - how points are applied

Each Binary or Levels item has a "Grading Method" parameter that determines how the points are finally awarded:

Automatic (default)
  How it works: the platform analyzes the item, decides OK / NOK / NA, and immediately applies the defined scale (points or level).
  Typical use case: all standard criteria to be scored without human intervention.

Supervisor
  How it works: the AI first classifies the item (OK / NOK / NA), then a human supervisor must validate or correct it before the score is included in the call.
  Typical use case: sensitive criteria (legal compliance, fraud, etc.) where double validation is required.

Ungraded
  How it works: the AI classifies the item (OK / NOK / NA), but no points are applied. The result can nevertheless be used in a Compound item or in your dashboards.
  Typical use cases:
    - Avoid double counting when the item is used as a condition in a Compound item.
    - Track an indicator without impacting the overall score.
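
One hypothetical way to picture the difference: the classification always happens, but the grading method decides whether points flow into the score. The enum and helper below are illustrative only, not the platform's API:

  # Hypothetical illustration of how the grading method affects scoring.
  from enum import Enum

  class GradingMethod(Enum):
      AUTOMATIC = "automatic"    # points applied immediately
      SUPERVISOR = "supervisor"  # points applied once a human validates
      UNGRADED = "ungraded"      # classified, but never adds points

  def points_awarded(method: GradingMethod, status: str, scale: dict,
                     supervisor_validated: bool = False) -> int:
      if method is GradingMethod.UNGRADED:
          return 0  # the result is still usable in Compound items and dashboards
      if method is GradingMethod.SUPERVISOR and not supervisor_validated:
          return 0  # waiting for a supervisor to validate or correct
      return scale.get(status, 0)

  print(points_awarded(GradingMethod.AUTOMATIC, "OK", {"OK": 2, "NOK": 0}))  # 2
  print(points_awarded(GradingMethod.UNGRADED, "OK", {"OK": 2, "NOK": 0}))   # 0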

Good practices

  • Use Ungraded for your "gatekeeper" items (e.g. Fiber Proposal / Customer Acceptance) so that they don't add duplicate points to a Compound item.
  • Reserve Supervisor for the rare criteria where the sanction or bonus must be confirmed manually.

New feature: Dynamic evaluations with call metadata

Radically improve the accuracy of your automatic evaluations by integrating each call's unique data (metadata) directly into your criteria.

Move from generic verification ("Did the agent ask for the phone number?") to factual, dynamic validation ("Did the agent confirm that the number was 769965772?").

How does it work? The {{ }} syntax

The principle is simple: use double curly braces {{ }} to insert a variable matching the exact name of one of your existing metadata fields.

For example, if you send a metadata field named client_phone with each call, you use it in your grid by writing {{client_phone}}.

During evaluation, our system will automatically replace this placeholder with the actual value of the metadata for the call being analyzed.
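
Conceptually, this works like standard template rendering: each {{name}} placeholder is replaced with the call's metadata value of the same name. The regex-based helper below is a minimal sketch of that idea, not the platform's actual code:

  import re

  # Minimal sketch of {{ }} placeholder substitution.
  # The platform does this automatically at evaluation time.
  def render_criterion(text: str, metadata: dict) -> str:
      """Replace every {{name}} with the call's metadata value of the same name."""
      return re.sub(
          r"\{\{\s*(\w+)\s*\}\}",
          lambda m: str(metadata[m.group(1)]),
          text,
      )

  criterion = "The agent verifies that the customer's phone number is {{client_phone}}"
  call_metadata = {"client_phone": "769965772"}
  print(render_criterion(criterion, call_metadata))
  # The agent verifies that the customer's phone number is 769965772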

Putting it into practice

  1. Access your evaluation grid configuration.
  2. Modify a criterion. Include your variable in the description.
    • Concrete example from the video:
      The agent verifies that the customer's phone number is {{client_phone}}
  3. Save. That's all there is to it!

Advanced use: All types of evaluation

This {{ }} syntax is not limited to general item descriptions. You can also use it in the text fields of the other scoring formats to define even more refined conditions.

  • Example for the condition of an "average" level:
    The agent asks for the email {{client_email}} but does not repeat it back for confirmation before ending the call.

The result: improved precision

Once your grid has been updated, the AI will validate the information against the actual call data.

In the video, the item is rated NOK, and the AI's comment confirms why:

"...it has not been repeated or explicitly confirmed as 769965772."

The system compared the call transcript with the actual value of the {{client_phone}} metadata, providing an objective and accurate assessment.
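
In essence, the check boils down to comparing the rendered criterion with what was actually said in the call. A deliberately simplified illustration (the transcript line is invented for the example; this is not the platform's logic):

  # Simplified illustration of the comparison described above: does the
  # exact metadata value appear anywhere in the transcript?
  transcript = "I'll make a note of your request, have a nice day, goodbye."
  expected_number = "769965772"  # actual value of the client_phone metadata

  result = "OK" if expected_number in transcript else "NOK"
  print(result)  # NOK - the number was never repeated or explicitly confirmed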

Resolve "Auto evaluation failed" error

This error occurs when the name you've inserted between {{ }} in your grid doesn't exactly match the actual name of the call metadata.

As the system cannot find the requested data, it cannot evaluate the criterion.

Video example:

The error Cannot evaluate item without metadata cliente_phone1 is caused by a simple typing error.

  • Incorrect variable in the grid: {{cliente_phone1}}
  • Correct metadata name: client_phone
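
A pre-flight check makes this easy to catch: every placeholder used in the grid must match one of the call's metadata names. The helper below is a hypothetical sketch of such a check, not a platform feature:

  import re

  # Hypothetical pre-flight check: flag placeholders that do not match any
  # existing metadata name (the cause of "Auto evaluation failed").
  def missing_metadata(criterion: str, metadata: dict) -> set:
      placeholders = set(re.findall(r"\{\{\s*(\w+)\s*\}\}", criterion))
      return placeholders - metadata.keys()

  call_metadata = {"client_phone": "769965772"}
  print(missing_metadata("... the number is {{cliente_phone1}}", call_metadata))
  # {'cliente_phone1'}  -> typo: the metadata is actually named client_phone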

The golden rule: Copy and paste!

To avoid errors, never type the metadata name from memory.

  1. Go to the call page and view your metadata.
  2. Copy the exact name of the metadata (e.g. client_phone).
  3. Paste it into your evaluation criteria between the {{ }} braces.

This method guarantees a perfect match and error-free evaluations.