Campaign creation

Creating a campaign

The AlloBrain solution lets you create an unlimited number of campaigns at no extra cost. This flexibility is a major advantage, enabling you to organize your evaluations according to your organizational structure. You can create separate campaigns by language, by Team Leader, by partner, or even by service line. Each campaign is associated with a specific quality evaluation grid, enabling you to tailor your evaluation criteria to your needs.

Creation process

Start

To create a new campaign, go to the "Campaigns" tab in your AlloBrain interface. There you'll find a "Create a campaign" button that will guide you through the configuration process.

Basic configuration

The first step is to define the title of your campaign. Choose an explicit title that will enable all users to easily identify the objective and scope of the campaign. You'll then need to select the type of interaction you wish to evaluate: calls, emails or tickets.

Campaign Metadata

Metadata are custom attributes that enable you to organize and filter your campaigns efficiently. Note that they are distinct from call metadata. For example, you could define a "Region" metadata with the value "Europe", or a "Business Unit" metadata with the value "Customer Service". This feature lets you quickly find all campaigns sharing the same characteristics.
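As an illustration, the sketch below (in Python, with hypothetical field names rather than the actual AlloBrain data model) shows how key/value metadata makes it possible to filter campaigns sharing the same characteristics:

  # Illustrative sketch only: hypothetical campaign records, not the AlloBrain data model.
  campaigns = [
      {"title": "FR - Customer Service - Q1",
       "metadata": {"Region": "Europe", "Business Unit": "Customer Service"}},
      {"title": "US - Sales - Q1",
       "metadata": {"Region": "North America", "Business Unit": "Sales"}},
      {"title": "DE - Customer Service - Q1",
       "metadata": {"Region": "Europe", "Business Unit": "Customer Service"}},
  ]

  def filter_campaigns(campaigns, criteria):
      """Return campaigns whose metadata match every key/value pair in criteria."""
      return [
          c for c in campaigns
          if all(c["metadata"].get(key) == value for key, value in criteria.items())
      ]

  # All European customer-service campaigns:
  for campaign in filter_campaigns(campaigns, {"Region": "Europe", "Business Unit": "Customer Service"}):
      print(campaign["title"])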

Customization and Access

The visual aspect of your campaign can be customized by adding a brand image. This image will appear when viewing the evaluation details, reinforcing your organization's visual identity.

The last crucial step is access management. You need to determine which users in your organization will have access to this campaign. You can define different levels of access depending on individual responsibilities, from read-only to full administrative rights.
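The sketch below is a purely illustrative way of picturing these access levels; the level names and the check are hypothetical and do not correspond to the AlloBrain interface or API:

  # Illustrative sketch only: hypothetical access levels, not the AlloBrain API.
  from enum import IntEnum

  class AccessLevel(IntEnum):
      READ_ONLY = 1   # can view evaluations
      ADMIN = 2       # full administrative rights on the campaign

  campaign_access = {
      "alice@example.com": AccessLevel.ADMIN,
      "bob@example.com": AccessLevel.READ_ONLY,
  }

  def can_edit_grid(user: str) -> bool:
      """Only administrators may change the campaign's quality grid."""
      return campaign_access.get(user, AccessLevel.READ_ONLY) >= AccessLevel.ADMIN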

Good Practices

To get the most out of your campaigns, we recommend a few key practices:

  • Adopt a consistent nomenclature for naming your campaigns
  • Maintain clear documentation of the metadata used
  • Systematically check access rights before launch

Once these elements have been configured, your campaign is ready to be used to evaluate the quality of your customer interactions.

Quality Items

Quality Grid Structure

The quality grids form the core of your assessment. They are organized into thematic sections, each containing specific items. An item represents a specific criterion that will be analyzed for each interaction. The system's flexibility enables you to create different types of items to meet your evaluation needs.
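To make this structure concrete, here is a minimal sketch (in Python, with hypothetical field names, not the AlloBrain data model) of a grid organized into thematic sections that each contain items:

  # Illustrative sketch only: a quality grid as sections containing items.
  # Field names are hypothetical; item examples come from this guide.
  quality_grid = {
      "title": "Customer Service Grid",
      "sections": [
          {
              "name": "Call opening",
              "items": [
                  {"name": "Use of standard greeting phrase", "type": "binary", "points": 10},
              ],
          },
          {
              "name": "Need handling",
              "items": [
                  {"name": "Customer reformulation", "type": "multi-level"},
                  {"name": "Termination procedure handled", "type": "compound"},
              ],
          },
      ],
  }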

Item types

Binary item

The binary item represents the simplest form of evaluation: the analyzed criterion is either compliant or non-compliant. The score follows this binary logic: if compliant, the item receives the points defined in the configuration; if non-compliant, the score is 0.

For example, to evaluate the greeting phrase, you could create a binary item "Use of standard greeting phrase". If the agent uses the correct phrase, they get the maximum score (e.g. 10 points); if they don't, they receive 0 points.
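The scoring rule can be summarized in a few lines; the following sketch is purely illustrative (hypothetical function names, not the AlloBrain API):

  # Illustrative sketch only: the scoring rule of a binary item.
  def score_binary_item(compliant: bool, max_points: int = 10) -> int:
      """Compliant -> full points defined in the configuration; otherwise 0."""
      return max_points if compliant else 0

  # "Use of standard greeting phrase", worth 10 points:
  print(score_binary_item(True))   # 10
  print(score_binary_item(False))  # 0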

Multi-level item

This type of item offers a more nuanced assessment, allowing different scores to be awarded depending on the level of compliance observed. This approach is particularly useful for assessing skills that can be partially mastered.

Let's take the example of customer reformulation:

  • Excellence (10 points): Complete, personalized reformulation
  • Satisfactory (7 points): Partial but correct reformulation
  • Basic (4 points): Minimal reformulation
  • Non-compliant (0 points): No reformulation
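Conceptually, a multi-level item is a mapping from the observed level of compliance to a number of points. The sketch below illustrates this with the reformulation example above (hypothetical names, not the AlloBrain API):

  # Illustrative sketch only: a multi-level item maps each observed level
  # of compliance to a different score.
  REFORMULATION_LEVELS = {
      "Excellence": 10,      # complete, personalized reformulation
      "Satisfactory": 7,     # partial but correct reformulation
      "Basic": 4,            # minimal reformulation
      "Non-compliant": 0,    # no reformulation
  }

  def score_multi_level_item(level: str, scale: dict[str, int] = REFORMULATION_LEVELS) -> int:
      """Return the points associated with the observed compliance level."""
      return scale[level]

  print(score_multi_level_item("Satisfactory"))  # 7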

Compound item

The compound item represents a more sophisticated form of evaluation, where the score depends on the combination of several other items. This feature is particularly valuable when evaluating complex sequences or interdependent processes.

Use cases

The compound item is useful in several scenarios:

  1. Sequential procedures: In the case of a termination procedure, success may depend on three elements: verification of identity, confirmation of the reasons for termination, and explanation of the consequences. The compound item will only give the maximum score if these three aspects are correctly handled.
  2. Conditional quality: When evaluating a proposed solution, you could create a compound item that is only activated if the agent has first correctly identified the problem. In this way, the relevance of the solution is assessed only in the context of a good initial understanding.
  3. Performance bonus: You can use a compound item to award bonus points when an agent excels in several related aspects. For example, a bonus could be awarded if the agent demonstrates empathy, delivers a technical solution, and achieves customer satisfaction.

This ability to create compound items enables you to build sophisticated evaluations that accurately reflect the complexity of your customer interactions.
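As a purely illustrative sketch (hypothetical names, not the AlloBrain API), the termination procedure from the first use case can be pictured as a compound item that only awards its points when all of its underlying items are compliant:

  # Illustrative sketch only: a compound item whose score depends on the
  # results of several other items.
  def score_termination_procedure(item_results: dict[str, bool], max_points: int = 15) -> int:
      """Award full points only if every required element is handled correctly."""
      required = ["identity_verified", "reasons_confirmed", "consequences_explained"]
      return max_points if all(item_results.get(item, False) for item in required) else 0

  print(score_termination_procedure({
      "identity_verified": True,
      "reasons_confirmed": True,
      "consequences_explained": False,
  }))  # 0 -- one element is missing, so no points are awarded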

Calibration

Calibration objective

Calibration is a crucial step in setting up your quality assessment system. Its aim is to align the assessments made by AI with those made by your human supervisors. This phase ensures consistent, reliable results that accurately reflect your quality standards.

Why calibration is important

Although the AI on the AlloBrain platform is capable of evaluating calls according to your imported quality grid, it requires enriched context to achieve performance comparable to that of a human evaluator. The AI needs to understand not only the evaluation criteria, but also the subtleties of your processes and the specific expectations of your organization.

Writing item descriptions

To optimize AI comprehension, each item must be carefully described. Imagine you're training a new employee: what information would he or she need to fully understand your expectations?

Here are the elements to include in your descriptions:

  • General context of the item
  • Precise compliance criteria
  • Possible exceptions
  • Concrete examples of compliant and non-compliant situations

Test and Adjustment Processes

Evaluation Test

To check the relevance of your descriptions, use the evaluation test function available on the platform. To access it:

  1. Select your campaign
  2. Go to the Evaluation section
  3. Choose an item to test
  4. Click on "Test evaluation

During this test, the AI will analyze your item and provide you with detailed feedback. In particular, it will indicate whether it needs additional information to make an accurate assessment. If the results don't match your expectations, it's time to enrich the item description with more context and details.

The video above shows an example of an unclear description that does not allow for accurate call analysis. The test feature indicates what the AI is missing to evaluate the call as a supervisor would.

Tip: you can think of the AlloBrain platform as a new employee who needs to be trained. You need to give it as many indications as possible so that it can meet your expectations.

Thus, the description of the listening item can evolve from "The agent listens to the customer" to: "The advisor clarified the information provided by the customer in an effective manner when necessary, thus guaranteeing good understanding and avoiding any misunderstanding during the conversation. NOK if: the advisor failed to clarify certain information, causing confusion during the conversation. It is important that the advisor does not try to repeat exactly what the customer said, for example by saying: 'If I've understood correctly...'. NA if: the conversation was fluid, with no need to clarify details. Note: the assessment is applicable regardless of the language of the call."

This second version gives more indications and will produce results equivalent to those of a human supervisor.

Improvement Cycle

Effective calibration requires several iterations. We recommend testing each item on 5 to 10 different calls. This iterative process allows you to:

  • Identify areas for improvement in your descriptions
  • Understand situations where the AI may have difficulties
  • Gradually refine evaluation criteria
  • Achieve optimum consistency with human assessments

Good Practices

For successful calibration, keep these essential principles in mind:

  • Test your items on calls presenting a variety of situations
  • Document the adjustments made for each item
  • Involve your experienced supervisors in the calibration process
  • Schedule regular review and adjustment sessions

Calibration is not a single step, but a continuous process of improvement. The more time you invest in this phase, the more accurate and relevant your automated evaluations will be.

Analysis of Conversation Phases

Introduction to Conversation Phases

In-depth understanding of the temporal structure of calls is a fundamental element in optimizing customer experience and operational performance. Conversation phases enable each call to be broken down into distinct, measurable segments, providing a detailed view of the customer-agent interaction.

Definition and significance

A conversation phase represents a distinct period of the call during which the agent accomplishes a particular task or objective. Analysis of these phases provides crucial information on time management and interaction efficiency. For example, the greeting phase establishes the first contact with the customer, while the discovery phase enables a precise understanding of the customer's needs.

Phase configuration

After selecting a campaign, click on Settings in the top right-hand corner, then select Configure conversation phases.


Configuring conversation phases in AlloBrain is intuitive and flexible. For each phase, you define two essential elements:

Phase name: This should clearly reflect the stage of the call concerned. For example: "Welcome", "Discovery", "Solution Proposal", "Conclusion".

Detailed description: This allows you to specify precisely what this phase should contain. This description guides the automatic analysis and enables relevant results to be obtained. For example, for the "Discovery" phase, the description could include the expected elements of questioning and the information to be collected.
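As an illustration, these two elements can be pictured as a simple list of name/description pairs; the sketch below uses hypothetical field names and does not reflect the actual AlloBrain configuration format:

  # Illustrative sketch only: one entry per conversation phase (name + detailed description).
  conversation_phases = [
      {"name": "Welcome",
       "description": "Greeting, agent introduction, and confirmation of the caller's identity."},
      {"name": "Discovery",
       "description": "Open questioning to understand the customer's need and collect the required information."},
      {"name": "Solution Proposal",
       "description": "The agent presents one or more solutions adapted to the identified need."},
      {"name": "Conclusion",
       "description": "Summary of the agreed actions, next steps, and polite closing."},
  ]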

Analysis and Insights

Once the phases have been configured, AlloBrain automatically analyzes each call to identify and measure these different stages. This analysis produces several types of strategic information:

Average duration per phase: For each defined phase, you obtain the average duration observed across all calls. This metric can be used to identify stages that could potentially be optimized.

Time distribution: You can visualize the distribution of call time between the different phases, enabling you to identify any imbalances in the structure of conversations.

Variations by agent: The system compares phase durations between different agents, highlighting best practices and training opportunities.
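As a purely illustrative sketch (hypothetical data layout, not the AlloBrain API), these three metrics can be derived from per-call phase durations as follows:

  # Illustrative sketch only: deriving phase metrics from per-call durations.
  from collections import defaultdict

  # Each record: (agent, phase, duration in seconds) for one analyzed call.
  phase_durations = [
      ("agent_a", "Welcome", 25), ("agent_a", "Discovery", 140), ("agent_a", "Conclusion", 35),
      ("agent_b", "Welcome", 40), ("agent_b", "Discovery", 90),  ("agent_b", "Conclusion", 30),
  ]

  # Average duration per phase across all calls.
  totals, counts = defaultdict(float), defaultdict(int)
  for _, phase, duration in phase_durations:
      totals[phase] += duration
      counts[phase] += 1
  averages = {phase: totals[phase] / counts[phase] for phase in totals}
  print(averages)  # e.g. {'Welcome': 32.5, 'Discovery': 115.0, 'Conclusion': 32.5}

  # Time distribution: share of total talk time spent in each phase.
  grand_total = sum(totals.values())
  distribution = {phase: totals[phase] / grand_total for phase in totals}

  # Variations by agent: average duration of each phase per agent, to compare practices.
  per_agent = defaultdict(lambda: defaultdict(list))
  for agent, phase, duration in phase_durations:
      per_agent[agent][phase].append(duration)
  per_agent_avg = {
      agent: {phase: sum(values) / len(values) for phase, values in phases.items()}
      for agent, phases in per_agent.items()
  }
  print(per_agent_avg)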