The Crystal Ball Approach to QA

Gaze into my crystal ball. At last week's ACCE Conference, I had a number of conversations with call center professionals who reported that their Quality Assessment (QA) programs were making a major shift toward the subjective. My collective understanding from these conversations is that companies are tired and frustrated from trying to behaviorally define their expectations in a workable form, and are weary of haggling in calibration over every jot and tittle. So they threw away the form and asked supervisors, managers, and QA analysts to simply listen to the call and answer a few questions like:

  • "Did you exceed the customer's expectations?"
  • "Did you represent the brand?"
  • "Did you resolve the call?"
  • "Did the customer have a good experience?"

On the surface it appears simple and less cumbersome. On the back end, I'm afraid it is rife with obstacles. Here are a few concerns:

  • Outcome is analyst dependent. Depending on where the analyst sits on the continuum between "QA Nazi" (e.g. "QA is my opportunity to identify and weed out every single flaw you have and ensure you submit to my personally unattainable high standards in the interest of an altruistic goal of exceptional service through call center domination.") and "QA Hippie" (e.g. "QA is my opportunity to build self-esteem, spread a positive attitude, and avoid any conflict by giving you glowing remarks on every call and politely ignoring those pesky 'rough edges' which I'm sure you'll change all by yourself without me having to say anything and risk you going postal."), the simple, subjective approach to QA will create radically different feedback to CSRs across the call center.
  • Crystal ball required. Determining an individual customer's thoughts, ideas, and expectations about a single call requires a magic crystal ball or ESP. Unless you have a specific customer IVR survey tied to the specific call you're coaching (which some centers do), you don't know for certain what the customer thought. Even if you do have an IVR survey attached, an individual customer's feedback about the agent can be highly correlated with their overall experience with the product or company, which can make the feedback unreliable. The bottom line is that, in most cases, the QA analyst is just making an educated guess filtered through their own bias. Whenever you start guessing about what the customer thought, your analysis has lost any reliable objectivity. Any performance management decision based on the analyst's subjective perception of the customer experience can create all sorts of HR problems.
  • You still go back to behavior. If a CSR gets poor marks on a call, he or she still wants to know, "What do I specifically need to do in order to do a better job?" The analyst must still create a behavioral definition of what a "good job" is to them (e.g. "You need to be more polite by using 'please' and 'thank you.' You need to use the customer's name more often to make the call more personable."). However, now that behavioral feedback is analyst dependent: you have multiple analysts giving their personal prescriptions for what they consider a "good job." You haven't escaped the behavioral list. You've just let each analyst create and control their own individual list. Now you have multiple people in the center applying their own personal definitions of quality rather than driving the center to a unified understanding of quality and of what is expected of CSRs.
  • You have poor data on which to make decisions. CSRs on the Inbound Order team are getting better QA marks this month, but why? What made the difference? Which behaviors did they modify to make the improvement, and what percentage of the time are they displaying those behaviors? How do you know the supervisor isn't just in a better mood now that his divorce is final? If the Training team wants to know which service skills need remedial training and focus, they can see how many times a CSR did not represent the brand, but what specifically the CSRs are doing or not doing is left to the analyst's best recall, is highly dependent on each analyst's definition of what represents the brand, and likely requires someone to go through every form and try to pull some kind of meaningful data from it. You may have simplified the front end of the process, but you have very little actionable data or information on the back end to benefit your team.

This isn't to say there's no silver lining to simple, anecdotal feedback. There is a place for listening to a call and providing an initial reaction based on what was heard. The approach does provide feedback. It gives the CSR a general idea of where they stand and provides an opportunity for communication about service and quality. The subjective approach is, however, a poor substitute for a systematic, data-driven, objective analysis of what CSRs are and are not saying across a random set of calls.

Creative Commons photo courtesy of Flickr and Kraetzsche
