The basic notion of business-to-business CRM is usually described as allowing a larger business to be as responsive to the needs of its customers as a small business. In the early days of CRM, “responsive” too often became translated into “reactive”. Profitable larger businesses understand that they have to be proactive in seeking out the views, concerns, needs and levels of satisfaction of their customers. Paper-based surveys, such as those left in hotel bedrooms, usually have a low response rate and are typically completed by customers with a grievance. Telephone-based interviews are often influenced by the Cassandra phenomenon. Face-to-face interviews are expensive and can be led by the interviewer.
A sizable international hotel chain wished to attract more business travellers. It decided to conduct a customer satisfaction survey to discover what it needed to improve in its services for this type of guest. A written survey was placed in each room and guests were encouraged to fill it in. However, when the survey period was complete, the hotel found that the only people who had filled in the surveys were children and their grandparents!
A large manufacturing company conducted the first year of what was intended to be an annual customer satisfaction survey. The first year, the satisfaction score was 94%. The second year, with the same basic survey topics but a different survey vendor, the satisfaction score dropped to 64%. Ironically, at the same time, the company’s overall revenues doubled!
The questions were simpler and phrased differently. The order of the questions was different. The format of the survey was different. The targeted respondents were at a different management level. The Overall Satisfaction question was placed at the end of the survey.
Although all customer satisfaction surveys are used for gathering people’s opinions, survey designs vary dramatically in length, content and format. Analysis techniques may employ numerous charts, graphs and narrative interpretations. Companies often use a survey to test their business strategies, and many base their business strategy upon their survey’s results. BUT…troubling questions often emerge.
Are the results always accurate? …Sometimes accurate? …At all accurate? Are there “hidden pockets of customer discontent” that the survey overlooks? Can the survey information be trusted enough to take major action with confidence?
As the examples above show, different survey designs, methodologies and population characteristics can dramatically alter the results of a survey. It therefore behoves a business to make absolutely certain that its survey process is accurate enough to produce a true representation of its customers’ opinions. Otherwise, there is no way the company can use the results for precise action planning.
The characteristics of the survey’s design, and the data-collection methodologies employed to conduct it, require careful forethought to ensure comprehensive and accurate results. The discussion that follows summarizes several key “rules of thumb” that must be followed if a survey is to become a company’s most valued strategic business tool.
Survey questions should be categorized into three types: Overall Satisfaction question – “How satisfied are you overall with XYZ Company?” Key Attributes – satisfaction with key areas of the business, e.g. Sales, Marketing, Operations, etc. Drill Down – satisfaction with issues that are unique to each attribute, and upon which action may be taken to directly remedy that Key Attribute’s issues.
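To make the three-tier structure concrete, here is a minimal sketch of how such a question set might be organised in code. The attribute names and drill-down topics are hypothetical, not taken from any real survey.

```python
# Hypothetical three-tier question structure: Key Attributes, the
# Drill Down questions under each attribute, and one Overall
# Satisfaction question.
survey = {
    "key_attributes": {
        "Sales": [
            "responsiveness of your account manager",
            "accuracy of quotations",
        ],
        "Operations": [
            "on-time delivery",
            "completeness of orders",
        ],
    },
    # Placed last so the answer reflects the questions that precede it.
    "overall": "How satisfied are you overall with XYZ Company?",
}
```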
The Overall Satisfaction question is placed at the end of the survey so that its answer is informed by more comprehensive thinking, the respondents having first considered their answers to the other questions. A survey, if constructed properly, will yield a wealth of information. These design elements should be taken into consideration: First, the survey should be kept to a reasonable length. More than 60 questions in a written survey becomes tiring. Anything over 8-12 questions begins to tax the patience of participants in a phone survey.
Second, the questions should use simple sentences with short words. Third, questions should ask for an opinion on only one topic at a time. For example, the question “How satisfied are you with the products and services?” cannot be answered effectively because a respondent may have conflicting opinions on products versus services.
Fourth, superlatives such as “excellent” or “very” should not be used in questions. Such words tend to lead a respondent toward an opinion.
Fifth, “feel good” questions yield subjective answers on which little specific action can be taken. For instance, the question “How do you feel about XYZ Company’s industry position?” produces responses that are of no practical value in terms of improving an operation.
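The wording rules above lend themselves to simple automated checks. The sketch below is a naive illustration, assuming a hand-picked superlative list and treating any “and” as a possible double topic; it is not a substitute for editorial review.

```python
# Naive checks for the question-wording rules of thumb above.
# The superlative list and the "and"-based double-topic check are
# illustrative assumptions, not an exhaustive linguistic test.
SUPERLATIVES = {"excellent", "very", "extremely", "outstanding"}

def lint_question(text):
    """Return a list of rule-of-thumb violations for one question."""
    issues = []
    words = text.lower().rstrip("?").split()
    if SUPERLATIVES & set(words):
        issues.append("leading superlative")
    if "and" in words:
        issues.append("possible double topic")
    if len(words) > 20:
        issues.append("overly long sentence")
    return issues
```

For example, the double-topic question criticised above would be flagged, while a single-subject version would pass cleanly.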
Although the fill-in-the-dots format is one of the most common types of survey, it has significant flaws that can discredit the results. First, all prior answers are visible, which invites comparisons with the current question and undermines candour. Second, some respondents subconsciously look for symmetry in their responses and are guided by the pattern of their answers, not their true feelings. Third, because paper surveys are generally categorized into topic sections, a respondent is far more likely to fill down a column of dots in a category while giving little consideration to each question. Some Internet surveys, constructed in the same “dots” format, lead to the same tendencies, particularly if inconvenient sideways scrolling is required to answer a question.
In one survey conducted by Xerox Corporation, over one third of responses were discarded because the participants had clearly run down the columns in each category instead of carefully considering each question.
TELEPHONE SURVEYS Though a telephone survey yields more accurate responses than a paper survey, it too has inherent flaws that impede quality results:
First, when a respondent’s identity is clearly known, concern over the possibility of later being challenged or confronted about negative responses produces a strong positive bias in their replies (the so-called “Cassandra Phenomenon”).
Second, studies show that people become friendlier as a conversation grows longer, thus influencing question responses.
Third, human nature dictates that people like to be liked. Gender biases, accents, perceived intelligence or compassion can all influence responses. Similarly, senior managers’ egos often emerge as they attempt to convey their wisdom.
Fourth, telephone surveys are intrusive on a senior manager’s time. An unannounced phone call may create an initial negative impression of the survey, and many respondents may be partially focused on the clock instead of the questions. Optimum responses depend on a respondent’s clear mind and free time, two things senior managers often lack. In a recent multi-national survey in which targeted respondents were offered the choice of a phone interview or other methods, ALL selected the other methods.
Taking precautionary steps, such as keeping the survey brief and using only highly trained callers who minimize idle conversation, may reduce the issues described above, but will not eliminate them.
The objective of a survey is to capture a representative cross-section of opinions across a group of people. Unfortunately, unless most of the individuals participate, two factors will influence the results:
First, negative people tend to answer a survey more often than positive people, because human nature encourages “venting” negative emotions. A low response rate will usually produce more negative results (see diagram).
Second, a small percentage of a population is less representative of the whole. For example, if 12 people are asked to take a survey and 25% respond, then the opinions of the other nine people are unknown and may be entirely different. However, if 75% respond, then only three opinions are unknown; the other nine are far more likely to represent the opinions of the whole group. One can assume that the higher the response rate, the more accurate the snapshot of opinions.
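The arithmetic in this example can be expressed as a small helper (the function name is illustrative):

```python
# How many opinions remain unknown at a given response rate.
def unknown_opinions(population, response_rate):
    """Return (respondents, unknown) for a surveyed population."""
    respondents = round(population * response_rate)
    return respondents, population - respondents
```

With a population of 12, a 25% response rate leaves nine opinions unknown, while a 75% rate leaves only three.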
Totally Satisfied vs. Very Satisfied ……Debates have raged over the scales used to depict degrees of customer satisfaction. Recently, however, studies have shown that a “totally satisfied” customer is between 3 and 10 times more likely to initiate a repurchase, and that measuring this “top-box” category is significantly more precise than any other means. Moreover, surveys that measure the percentage of “totally satisfied” customers, rather than the traditional sum of “very satisfied” and “somewhat satisfied”, provide a far more accurate indicator of business growth.
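As a sketch, assuming a hypothetical five-point scale where 5 means “totally satisfied”, the top-box share can be contrasted with a combined lower-box figure:

```python
# Share of ratings at or above a given "box" on the scale.
# box=5 gives the top-box ("totally satisfied") share; box=4 gives a
# combined top-two-box share (the mapping of satisfaction labels to
# numbers is an assumption for illustration).
def box_share(ratings, box=5):
    return sum(r >= box for r in ratings) / len(ratings)
```

A survey that reports only the combined figure can look healthy while the top-box share, the better predictor of repurchase, stays low.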
Other Scale issues…..There are several other rules of thumb often used to ensure more valuable results:
Many surveys offer a “neutral” choice on a five-point scale for people who may not wish to answer a question or are unable to make a decision. This “bail-out” option reduces the number of opinions gathered, thus diminishing the survey’s validity. Surveys that instead use “insufficient information” as a more definitive middle-box choice persuade a respondent to make a decision unless they genuinely lack the knowledge to answer the question.
Scales of 1-10 (or 1-100%) are perceived differently across age groups. Those who were schooled under a percentage grading system often consider a 59% to be “flunking”. Such deep-rooted tendencies can skew different people’s perceptions of survey results.
There are some additional details that can enhance the overall polish of a survey. A survey should be an exercise in communications excellence: the experience of taking it should be positive for the respondent, as well as valuable for the survey sponsor.
First, People – Those responsible for acting upon issues revealed by the survey should be fully engaged in the survey development process. A “team leader” should be accountable for ensuring that all pertinent business categories are included (approximately 10 is ideal), and that designated individuals take responsibility for responding to the results for each Key Attribute.
Second, Respondent Validation – Once the names of potential survey respondents have been selected, each is individually called and “invited” to participate. This step ensures the person is willing to take the survey, and elicits an agreement to do so, thus enhancing the response rate. It also verifies that the person’s name, title and address are correct, an area where inaccuracies are commonplace.
Third, Questions – Open-ended questions are best avoided in favour of simple, concise, single-subject questions. The questions should also be randomised, mixing the topics and forcing the respondent to keep thinking about a different subject rather than building upon an answer to the previous question. Finally, questions should be phrased in positive tones, which not only helps maintain a neutral and uniform attitude while answering the survey, but also allows for uniform interpretation of the results.
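Randomising while still mixing topics can be sketched with a round-robin interleave. This is a minimal illustration; the topic names used in the example are hypothetical.

```python
import random
from itertools import zip_longest

def randomise_questions(questions, seed=None):
    """Shuffle (topic, text) pairs so that consecutive questions come
    from different topics, via a round-robin over the topic groups."""
    rng = random.Random(seed)
    by_topic = {}
    for topic, text in questions:
        by_topic.setdefault(topic, []).append((topic, text))
    groups = list(by_topic.values())
    for group in groups:
        rng.shuffle(group)   # randomise order within each topic
    rng.shuffle(groups)      # randomise the order of the topics
    mixed = []
    # Take one question per topic per round, skipping exhausted topics.
    for round_of_questions in zip_longest(*groups):
        mixed.extend(q for q in round_of_questions if q is not None)
    return mixed
```

With equally sized topic groups, no two adjacent questions share a topic, so a respondent cannot simply carry an answer forward from the previous question.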
Fourth, Results – Each respondent receives a summary of the survey results, either in writing or – preferably – in person. By offering at the outset to discuss the survey results with each respondent, interest in the process is generated, the response rate increases, and the company is left with a standing invitation to return to the customer later and close the communication loop. Not only does this provide a means of exploring and dealing with identified issues at a personal level, it also often increases an individual’s willingness to participate in later surveys.
A well-structured customer satisfaction survey can provide a wealth of invaluable market intelligence that human nature would never otherwise allow access to. Properly done, it can be a means of establishing performance benchmarks, measuring improvement over time, building individual customer relationships, identifying customers at risk of loss, and improving overall customer satisfaction, loyalty and revenues. If a company is not careful, however, it can become a source of misguided direction, wrong decisions and wasted money.