Our recent webinar, Delivering Value from CSat Tracking Research, featured experts from Radius alongside Karen Fuller from Expedia. Watch the recap to see how Karen improved her company’s overall business model with a refined approach to customer satisfaction (CSat) tracking research. The discussion delves into best practices, challenges, and strategies for leveraging CSat data to drive organizational success. Whether you’re looking to fine-tune your current tracking program or seeking insights to implement one, this transcript offers valuable takeaways from industry leaders. Read on for the full conversation.
Paul Donagher:
Hello, everyone. My name is Paul Donagher, and I’m a Director of Client Services here at Radius. Thanks to all of you for attending today’s webinar on delivering value from CSat tracking research. The webinar today will be given by Mike Jennings, who is Senior Vice President of Client Services here at Radius, and Karen Fuller, who is Senior Director of Global Market Research at Expedia. The webinar is scheduled, as always, to last for a maximum of 45 minutes, and if you’d like to do so, please type a question on the screen. Of the 45 minutes that we’ve got allotted, Karen and Mike will speak for about 30 minutes once I’ve finished the introduction, so we should have some time for Q&A at the end. I’ll collect questions as you add them and ask them to the guys at the end.
So, by way of introduction, during today’s webinar and amongst other topics, you’ll come away with an enhanced understanding of best practices in creating corporate value in tracking and CSat tracking in particular. Some of those best practices include how to be fully prepared prior to launch, why it’s important to have a flexible framework, how to exceed expectations, how to consider the corporate information ecosystem, how to avoid survey fatigue across all programs, and how to promote and educate colleagues and stakeholders as to what the tracking system provides. And as always, and I’m sure like some of you, we may be in our home offices, so please forgive the odd noise or even minor technical snafu.
So there are two speakers. Mike is a senior member of our Client Services team and lives in West Palm Beach, Florida with his wife and daughter. Mike is a keen follower of all things motorsports and likes nothing more than to drive his own car around local speedways at quite terrifying speeds—though amazingly, for all the time that I’ve known Mike, I’ve actually never seen a scratch on his car, which is quite something. Karen lives in Austin, Texas, and has 30 years of experience in various research sectors ranging from public policy to education, to consumer goods and services. First at HomeAway, then at VRBO, and now at Expedia, Karen has led all market research activities including customer satisfaction, market sizing, copy testing, brand positioning, and product development. Karen has also worked for other great brands in the industry such as Dell and Reebok. She has been a valued client to us at Radius for many years, and I’m absolutely delighted that she will share her expertise with us throughout this webinar. Karen has many nieces and nephews and just informed me that she, in fact, has a full basketball team of godchildren—so nieces, nephews, and godchildren of Karen. And with that brief introduction, Mike will take it from here.
Mike Jennings:
Thanks, Paul. Appreciate that. Yes, I don’t have scratches on my car; it’s just because I get them fixed promptly before you visit. Okay, <laugh>. So first off, I want to say that I’m delighted that Karen has joined us. We’ve been working with her and her team over at VRBO, formerly HomeAway, for more than a decade, and that includes our customer satisfaction tracking program, which we’ve been working with Karen on since 2010. That program is administered online and runs continuously throughout the year with very tight quota and sample management controls in place to make sure that we’re not duplicating contacts, we have readable response bases for decision-making, and we’re not over-surveying folks. We are currently deployed to multiple audiences that are of interest to VRBO, from owners of properties all the way down to travelers staying in those properties. We are deployed across 13 different countries, the data and insights that we provide are available via web portal, and we produce quarterly insights reports. The program is ever-evolving, and by that, I mean we, Karen, her team, and Radius, are continually looking for ways to improve the program so that we’re yielding the most actionable insights from the data.
As you see here on this screen, we’re showing some of the types of studies that we work on. A lot of this is ad hoc work across different projects, and we’re showing two here: one on tiered pricing, and another on vacation rental market size. And I guess, Karen, along with the tracker, I view these as extremely large initiatives, and I’m wondering to myself, exactly how do you get it all done?
Karen Fuller:
<laugh> Well, with a lot of support. I like to tell people that while we probably do most of the projects ourselves internally—writing surveys, programming them, and writing reports—the biggest projects that we do are things that we contract out for, right? And two of them are highlighted here. You mentioned the tiered pricing work. This was a really important project that allowed us to confidently make a dramatic shift in our underlying business model. It was a really complex menu-based choice model that was so realistic for our participants that some even thought they had changed their subscription with HomeAway and had called customer support to confirm that change. But this might be a case of being careful what you ask for because, as a market research practitioner—not really a statistician—when we were asked to present this work at Sawtooth, it was a little nerve-wracking to say the least. Fortunately, we had very good support from the Radius statistician who did the work. And then, really importantly, there was the global market sizing that provided the key information we needed to size the vacation rental market in terms of both dollars and supply in advance of our initial public offering. Previously, this work had been done by Phocuswright, which is considered the world’s leading travel industry research authority. They had issued reports on the size of the market, but our study indicated that it was 50% higher, which was great if you’re going to an IPO, right, but it maybe also brought a little more scrutiny. Fortunately, we were able to convince the leadership that we had a lot of confidence in our methodology. Ultimately, the image here is from our roadshow deck for that IPO, which was very successful. So, those are just two of the really big projects that we’ve done in addition to our CSat tracker.
Mike Jennings:
Yeah, so thanks for sharing that insight into those studies. And again, for today, we’re going to focus on our customer satisfaction tracker. I want to talk with you a bit about how we maximize the value coming out of that program. So, for those listening, as you probably know from working on any research project, there are some things that you need to get lined up at the start. Of course, that includes budget, which I didn’t list here—that is always critical. These starting items include setting your basic objectives (what you intend to learn) and identifying any supporting information that you may already have available, such as other market research that you’ve conducted, any sales team data, industry reports, et cetera. Getting those things all lined up. Determining what the insights will solve: how are you going to use the data within the organization? And more uniquely for tracking programs such as the one we’re talking about: how often do we need to refresh our view of the data? How often do we need to refresh our insights? Karen, beyond those basics, are there some other important things that one should be taking into account when launching a customer satisfaction program? And, as we say in sprinting, getting on our marks?
Karen Fuller:
Yeah, I’d say it’s really important to get support across your organization, not just from the team sponsoring the work, right? It can’t just be seen as something for marketing. You need to get buy-in from the strategy team, product marketing, product, whoever you think might ultimately use some of these insights. I also think it’s very important in that same context to position the work not as a report card, right? But as a tool that allows you to keep your finger on the pulse of the customer, maybe uncover some things that you might be blind to, right? Often, you have customers who might have the ear of your CEO, and therefore your CEO thinks that’s how all things work. So this helps you uncover and understand if some of those things are larger trends or not. And really, to use it as feedback for continuous improvement. It’s not just a report card to tell someone if they’re doing their job well or not.
Mike Jennings:
Yeah. Those are great additions to the initial basic set of things. So thinking back to 2010, if we can for a moment, when you started out the tracking program, did you take all of these things into consideration at that time, or were those learnings along the way?
Karen Fuller:
Well, I’d like to say that I did, but the program had already started when I joined the company in 2008, so I really can’t speak to that. In fact, the original program that was in place when I joined was supported by our third-party email marketing partner, and quite frankly, it wasn’t clear to me if the program began out of specific goals or if we had really set the objectives and done all those things that you had outlined earlier. Whether it was a vision to try to lead a company that’s insights-led, or simply it was a tool that the email marketing partner had and said, “Hey, we should take advantage of this. Let’s send surveys to your customers.”
Mike Jennings:
Right. So, since the program was already in place, what did you end up changing along the way?
Karen Fuller:
I was thinking this morning about how challenging it can be sometimes when somebody has already written a survey to help us out, and then they want us to program it. We spend a lot of time undoing things because we’re trying to really figure out what their objective is. This was a little bit the same way—we had to fix the basic things, those obvious problems, and then try to align the program more with the objectives of the organization. So the first step was to put some controls in place. Surveys were sent to every customer who made an inquiry about a trip, and you’re going to make multiple inquiries about trips. “Hey, is your property available?”—say five of them, right? You’re going to get surveys on all of those. There wasn’t a sampling plan, so at the beginning it was really just solving that. But even solving that problem, the program spanned, as you said earlier, multiple countries and multiple audiences, and it really couldn’t be managed by one person using Excel. So that’s when I brought Radius in, not just to address that workload issue, right? But to try to build out a stable infrastructure for the whole program. That helped us address those basic things—instituting random sampling, creating rules around contact frequency—but also allowed us to focus on analyzing the data in a way that would turn it from a report card into a tool that we could use for strategic decision-making. And I’d say the driver analysis that we did early on to understand how to improve our NPS probably had the biggest impact and justified that investment early on.
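The basic controls Karen describes (random sampling plus contact-frequency rules) can be sketched in a few lines. To be clear, this is an illustrative sketch only, not VRBO's or Radius's actual system: the cool-off period, sample fraction, and all names here are assumptions.

```python
import random
from datetime import date, timedelta

# Assumed parameters, purely for illustration.
COOL_OFF_DAYS = 90          # minimum gap between survey invitations per customer
SAMPLE_FRACTION = 0.10      # fraction of eligible customers invited per wave

def draw_sample(customers, last_contacted, today, seed=42):
    """Randomly sample eligible customers, skipping anyone contacted recently."""
    cutoff = today - timedelta(days=COOL_OFF_DAYS)
    # A customer is eligible if they were never contacted, or were contacted
    # on or before the cool-off cutoff date.
    eligible = [c for c in customers
                if last_contacted.get(c) is None or last_contacted[c] <= cutoff]
    rng = random.Random(seed)   # seeded only so the demo is reproducible
    k = max(1, int(len(eligible) * SAMPLE_FRACTION))
    return rng.sample(eligible, k)

customers = [f"cust_{i}" for i in range(100)]
last_contacted = {"cust_0": date(2024, 5, 1)}   # contacted recently, so excluded
invited = draw_sample(customers, last_contacted, today=date(2024, 6, 1))
print(len(invited), "cust_0" in invited)
```

A production system would also track invitations per wave and enforce quotas by country and audience, which is exactly the bookkeeping that outgrows a single spreadsheet.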
Mike Jennings:
Yeah. Another thing that I think is important when you’re developing and running programs such as this is to make sure that all of the internal stakeholders within the organization are using the insights. I find it valuable to promote what the program is delivering, and that’s something that needs to be considered right from the get-go—getting those stakeholders on board. What were some of your early wins from your efforts?
Karen Fuller:
So early on, we had the opportunity to present this research at our quarterly business reviews, but it was really that second-level effort to understand some of the data better, I think, that delivered those early wins. So back in 2010, our business was really different, right? What I would call “window shopping” took place online, but really the rest of the transaction took place offline. People would call a property owner, send them an email to try to book a property, maybe negotiate the price of it, but we didn’t have any visibility to what was happening, right? All we knew was that maybe they contacted them, and based on that initial exchange, the owner would confirm a booking. And again, we had no visibility to this, but from our driver analysis, we understood that how quickly that owner responded to the inquiry had a huge impact on the NPS and whether or not somebody booked. So we kept digging. We added some additional questions to really drill down and understand how quickly owners responded, and that allowed us to see this really clear linear relationship between response time, likelihood to book the property, and likelihood to recommend HomeAway. That gave us the opportunity to change some of the messaging to owners along the lines of, “if you respond within three hours, you’re twice as likely to get a booking as if your response takes 12 hours.” So being able to deliver a tactical insight that someone could act on, I think, was really important. So more than just product marketing, it had an impact on the product direction too because we understood we wanted to build a product that allowed owners to respond quickly. So we put in the app an ability for owners—this seems silly now, but remember, in 2010 this wasn’t as common—to allow them to respond from anywhere they were. So I’d say that was a big win early on.
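The relationship Karen describes, owner response time versus likelihood to book, is the kind of pattern a simple bucketed comparison can surface even before a formal driver analysis. The data below is invented for illustration; the real figures (for example, "respond within three hours and you're twice as likely to get a booking") came from their own survey data.

```python
# Hypothetical survey rows: (owner_response_hours, booked?)
responses = [
    (1, True), (2, True), (3, True), (2, False),
    (6, True), (8, False), (10, False),
    (14, False), (20, True), (24, False), (30, False), (18, False),
]

def booking_rate_by_bucket(rows):
    """Group respondents by how fast the owner replied, then compare booking rates."""
    buckets = {"<=3h": [0, 0], "3-12h": [0, 0], ">12h": [0, 0]}  # [booked, total]
    for hours, booked in rows:
        key = "<=3h" if hours <= 3 else "3-12h" if hours <= 12 else ">12h"
        buckets[key][0] += booked
        buckets[key][1] += 1
    return {k: round(b / n, 2) for k, (b, n) in buckets.items()}

print(booking_rate_by_bucket(responses))
```

Even this toy data shows the kind of monotonic decline in booking rate that, per Karen, justified messaging owners about fast responses and building a mobile reply feature.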
Mike Jennings:
So was there any resistance at all with the changes you were making—how you were pushing the insights out to the organization to be used for more than just the marketing part of it?
Karen Fuller:
I have to say, not really. Fortunately, whether it was HomeAway, VRBO, or now Expedia Group, these are organizations that really value data and insights. I used to work somewhere where I wasn’t convinced that my colleagues would act on the insights we provided. Now I worry a little bit more about making sure that we deliver the right insights because people will take action. So if we tell them that the number one message you have to give is, “tell owners to respond quickly,” the next thing you know, that will be in every piece of communication to our owners—which is a great place to work for that reason.
Mike Jennings:
So we’ve thought about setting those basic rules and objectives, putting our controls in place, and building out that platform with analytics that will promote continuous improvement. Now that the groundwork is laid out for us, it’s all smooth sailing, right? Well, maybe, maybe not. It requires a significant amount of continuous effort to ensure that your tracking program is collecting the right data and serving the organization in the right way. To minimize bumps in the road, we often must strike a balance between being singularly focused, measuring satisfaction with our customers on their interactions, versus including other questions that address topics of the moment for various stakeholders. We should manage involvement, encouraging input to the program yet avoiding too many cooks in the kitchen, so to speak. Now that your program is serving both marketing and product teams, is there a need to set expectations with them on what value can and cannot be delivered from the program?
Karen Fuller:
Yeah, absolutely. As I mentioned, our organization is very measurement-focused, and we go through a quarterly process of setting objectives and key results. And for a while, it seemed like everyone included improving NPS as one of the key results of their initiative, and we knew from the driver analysis that was a pretty unlikely outcome. So I spent a lot of time counseling colleagues not to use NPS improvements as their key result. That said, we could look for other ways to help measure the progress of their initiative, at which point I would counsel colleagues to have patience in seeing the results. Perceptions don’t generally get built or change overnight, so even though it might be the most important thing in the world to somebody in our organization, it might be something that, whether it’s a traveler or a partner, they might only see once a month, once a year. And therefore, those things are going to take time to change. So that’s probably the area where I’ve had to manage expectations the most.
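Since NPS is referenced throughout this conversation, here is the standard calculation for readers newer to tracking research: the percentage of promoters (scores of 9 or 10 on the 0-10 likelihood-to-recommend question) minus the percentage of detractors (scores of 0 through 6).

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
    on the standard 0-10 likelihood-to-recommend scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

print(nps([10, 9, 9, 8, 7, 7, 6, 5, 10, 3]))  # 4 promoters, 3 detractors -> 10
```

Because the score nets out two percentages, a team's initiative can genuinely improve the experience of passives (7-8) without moving NPS at all, which is one concrete reason Karen counsels colleagues against using it as a key result.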
Mike Jennings:
Yeah. So I know over the years we’ve added questions and removed some questions, but we’ve pretty much kept our survey tight and true to the core. I believe this is due to your team’s ability to manage internal requests and determine whether there’s an appropriate answer in the tracking data, or whether you need to field some ad hoc research internally for those hot topics. Is that an accurate assumption?
Karen Fuller:
Yes. I think we have to assess whether the issue is literally an issue of the moment. Is this going to be over now, and we can address it some other way? Or is it something that we want to see how it trends over time? So as you said, we’ve tried to be judicious about those changes, and it is hard to distinguish issues of the moment. I think I’ll speak to that a little bit later. I often joke that we can trace the evolution of the company and its leadership by looking at the attributes that have been added and removed from the tracker. And while we have stayed true to the core, we have flexed as our business model has changed, and certainly as the vacation rental market has changed. For example, we added a competitive aspect to understand how our travelers and our partners are using multiple platforms to search for and book vacation rentals, and how owners are listing properties on more than one platform. Understanding those dynamics of competition was something that we hadn’t considered at the beginning, but we added because it is something that we care about long term. It wasn’t just something we could do on an ad hoc basis.
Mike Jennings:
Yeah. It’s one of those continual evolutions of improvement to the program, right? If you were to start a program like this from scratch today, how would you go about it? What teams or people within the organization would you reach out to for their input and their support, to make sure that what you’re delivering to them would be of the highest value?
Karen Fuller:
Highest? I think the obvious ones are brand marketing and product marketing, but one that was not included to the best of my knowledge—and I think was missed—is customer support, both the general support team and account managers. I think it’s helpful to get their input because they’re so close to our customers and they know what issues they care about. And honestly, it’s a little bit of a preemptive move to keep them from setting up their own feedback systems. But the real purpose is that they’re great resources for understanding the issues, needs, and problems that are faced by our customers. In addition, depending on the size of the organization, I would also include user experience and analytics. Some of the feedback we get could be addressed by our user experience and analytics teams, and I think being able to tie those things together always helps improve the insights that we’re able to deliver. And the last thing I’d say is to make sure you thoroughly vet the attributes you include. I get many questions about how respondents interpret a specific attribute, and I’m not convinced that all of our attributes were as clear or singular as they should have been at the outset.
Mike Jennings:
Hmm. In what ways have you been extending the usefulness of the program? That is, staying on top of the needs of the business, changes in the marketplace perhaps, and addressing those through this specific program?
Karen Fuller:
Right. So as I’ve already mentioned, one of the great aspects of this program is that it’s an existing framework for getting feedback in. We can add and tweak things as our business changes and evolves. Over the past 12 years, we’ve moved from a website where most transactions happened offline to one where most transactions take place on the platform, and we’ve been able to adapt to those changes without ever having to take the survey offline. For example, when HomeAway introduced pay-per-booking as an alternative to annual subscriptions for owners, we simply started including those pay-per-booking customers in the program, weighted the data appropriately, and were still able to deliver our CSat for partners. And as a program that was instituted when we were largely offline, we were able to deliver some other insights before we became more integrated with the data warehouse.
What we see here is when we launched, we started asking customers about how many bookings they were getting, what their average rate was, how many bookings they got from us, and this allowed us to develop an estimate of potential earnings. It was featured on what we call the “list your property” page: “Earn up to,” or “our top customers earn this much money.” It became such a popular stat that every quarter when this data got updated, people from all over the world in the different countries where we did this would ask, “Do we have the new number? Do we have the new number?” It was really powerful in helping draw customers in—something that we never saw at the beginning when we launched this thing. And the CSat tracker is really the only source of truth that we have for our customer demographics, and really importantly, our traveler demographics. It’s obviously important for our marketing team as they think about who they’ll be marketing to. But as issues of inclusion and diversity became a key focus for Expedia, we were able to deliver the first insights into race and the vacation rental industry simply because of information that we’d already been collecting.
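The weighting step Karen mentions, folding a new pay-per-booking segment into the tracker without skewing the blended score, can be illustrated with simple segment weights. The segment names, shares, and scores below are entirely hypothetical; the point is only that each segment's responses are weighted to its known share of the customer base rather than its share of the sample.

```python
# Assumed population mix and hypothetical per-segment survey results.
population_share = {"subscription": 0.8, "pay_per_booking": 0.2}
sample = {
    "subscription":    {"n": 400, "mean_csat": 8.0},
    "pay_per_booking": {"n": 100, "mean_csat": 7.0},
}

def weighted_csat(sample, population_share):
    """Blend per-segment mean scores using population shares, not sample sizes."""
    return round(sum(population_share[seg] * s["mean_csat"]
                     for seg, s in sample.items()), 2)

print(weighted_csat(sample, population_share))  # 0.8*8.0 + 0.2*7.0 = 7.8
```

Note that the unweighted sample here is 80/20 by coincidence; if the new segment had been oversampled to get a readable base, the weighting is what keeps the blended CSat comparable to earlier waves.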
Mike Jennings:
That’s great. It’s great to hear that the program has grown beyond what the original intent probably was into something broader: broadening the scope by including different customer groups, and looking at where else the data might be able to support the business. We’re learning that CSat programs should really be more than just asking questions about satisfaction; they should be used to answer wider business questions and keep the business interested. How have you tried to keep the program itself of interest to your internal stakeholders? What has worked out really well?
Karen Fuller:
So I like to think of this as the gift that keeps on giving because we constantly find new insights from it. It’s almost like an endless supply of things that we’re able to pull out of the data. Whether that’s something that we’re proactively looking for in the data ourselves or someone who asks us a specific question, it just seems like we’re often able to uncover things with data that we’re currently collecting. But what’s really powerful is we have the flexibility to add questions, right? This has proven again and again to be a way we can address some really important issues. And the pandemic is a perfect example of that.
So, March 13th is the day that most people would cite as the start of the pandemic for them, and by April 8th, we had added questions to our tracker for both travelers and partners related to the pandemic. At this point, I have to give a shoutout to Rachel Kaufman on my team, because it probably would have been August if it had been left up to me; she really drove getting this changed. And that was at a time, remember, when we didn’t know how long this thing was going to last. We got some really general feedback on how VRBO was handling the crisis—whether customers felt supported, whether they received the right communication from us. But we also collected some more tactical information that helped future strategy and communication throughout the pandemic.
For example, we asked travelers why they had booked a vacation rental. The number one reason was they wanted to get away from their current environment or that they had canceled a different type of accommodation. So we could use that in our marketing as we started to reach out to customers and encourage them to use vacation rentals. But then we also got trip information: Why did they change their trip? Did they have dates? Did they have other reasons to cancel? And really importantly, we asked owners what they were doing in response to the pandemic. One interesting response was that some of them were using that downtime to make some changes to their property—maybe some repairs and upgrades that they’d been thinking about. But the number one thing was providing flexible cancellation, and that was a response that grew dramatically over time. So we knew we had to change our product to support that because that was coming as a demand from our partners that we could see through these questions we had added to the tracker.
Mike Jennings:
That’s great. It’s great that we’ve been able to include some of that in the tracking program because it’s such a large program reaching so many different folks, so many different audiences across different countries. To be able to deploy a question or series of questions like that, that can get some deep insights quickly, speaks to the flexibility of the program. I want to touch on how we might avoid trouble when starting a tracker. So, given your experience and your expertise, Karen, have you any precautionary tales, surprises to be avoided, or things to take into account when starting and maintaining a customer satisfaction tracking program?
Karen Fuller:
Before we get to the precautionary things or avoiding the trouble, I think the number one thing is to try to get it set up right to begin with. Have that cross-functional support and buy-in, and really understand what your objectives are. Understand how people are interpreting the questions that you’re asking them. But on the precautionary side, I would say it is a delicate balance between being flexible and responsive while not allowing the tracker to become overloaded with your issues of the moment. Whether that’s moving things in and out, or really honing your ability to understand what’s an issue today that’s not going to matter tomorrow. And along those same lines, really keep an eye out for those strong personalities in your organization. They can have an outsized influence on what’s included in the survey. This is basic research stuff, but I think you see it a lot more in tracking research. I mean, you can look back across 10 years of data and cringe a little bit that something had snuck in, and you know why. And then finally, I would say work closely with HR, who are responsible for onboarding new employees. We know there’s huge workforce turnover everywhere, and you really can’t rely on your tenured employees to know about the tracker and keep that information and those insights front and center in the business discussion, so make sure that you are educating your new employees about the resource, what’s available, and how they can use it.
Mike Jennings:
Great stuff. So, look into your crystal ball and think about what might be next, whether it’s something you’re actively doing now or what might be the future for your CSat program. What are you thinking about next that you think others listening in today can learn from?
Karen Fuller:
I think we need to really think hard about the role of customer satisfaction tracking in an environment with multiple listening posts. Just from a tool perspective, there’s a proliferation of tools that any organization can use to get customer feedback, whether that’s a feedback widget, onsite surveys, email surveys, text surveys—there are many, many ways. And there are third-party external resources, whether that’s social listening or third-party feedback tools like Trustpilot or Feefo. So it’s really important to understand where your CSat tracker sits within that whole ecosystem and to educate people on what’s unique and different about it and how it delivers value. And then, of course, ensuring that it fulfills that value—that it’s not just more noise. But really, as I think about the future, I’m increasingly troubled by the proliferation of feedback requests I get as a consumer. I can’t go to the doctor, the dentist, the hardware store, or an online retailer without being asked for feedback, and this is the environment in which we’re launching these kinds of things.
Mike Jennings:
Yeah.
Karen Fuller:
Recently, I went out to West Texas and spent one night in the SpringHill Suites, and I got a survey from Expedia asking me about my check-in experience. It wasn’t an experience; I just got the key and went to the room. And I was only there one night, and I checked out the next morning. I wasn’t even there for 12 hours, and I got a survey from Expedia asking me how my stay was. Then, when I got back two days later, I got a survey from SpringHill Suites asking me about my stay. That all felt like a bit too much, right? But while all that was going on about my one stay, my inbox was also filling up with a feedback request from Ace Hardware because I bought leaf bags, and then a scam one pretending to be from Ace Hardware (apparently $90 is some special amount that gets you through the spam filters), plus ones from Costco, where I don’t shop, and Southwest, where I do. So this is the environment in which we’re launching customer satisfaction requests, and I think it’s really hard for consumers. One, there are too many, and it’s hard for them to differentiate between what’s real and what’s spam. The three on the right that are spam probably look like spam to everybody on this call, but they don’t look like spam to the rest of the world. And I think it’s a lot to ask. So that’s what’s on my mind: how do we ensure that we’re adding value within the other things we’re doing? And then how do we judiciously use these tools given the environment we’re launching them into now?
Mike Jennings:
Yeah, I concur. When I travel on an airline, I get a survey for every leg: “How was your trip from Atlanta to Houston? From Houston to LA?” It’s a single trip, but I’m getting surveyed for every leg. I think it’s certainly an area the industry is thinking about, and it’s an area of concern for many. Just to recap our tips for delivering the most value from a CSat program very quickly here: First, follow the Boy Scout motto and be prepared. Set out your goals and get input and buy-in from teams throughout the organization as early as during development; you want to anticipate those needs upfront. Ensure that your program is sufficiently flexible in supporting changes within the organization, within the marketplace, et cetera. Continually look for ways to deliver more than what’s expected and ways to improve the program and its insights. And consider where the program sits in that ecosystem of information, and what information can support it.
Paul Donagher:
Hi, everyone. Looks like we just lost Mike, although we are on the last page, Karen, so that’s fortuitous. If I may, I’ll just jump straight to a few questions that we’ve got, and hopefully Mike will join us back. Karen, I’ve combined a few of the questions that came through for us. The first one speaks to your point about a flexible framework. I’m kind of combining this: How do you stop the survey from becoming the kitchen sink for everything that everybody wants to know? In particular, I think the point is, how do you add something but then have the conversation, “We’re going to take it away”? You’ve had your two waves of that question, and now we’re going to take it out. So, do you have any tips or best practices around that?
Karen Fuller:
Yeah, we’re really fortunate because we also manage a research community, a research panel with travelers and partners. So for those sort of one-off things, we have a way to get that feedback pretty quickly, which helps us avoid overloading the survey. With respect to “how do you then get it out?”, one way to think about it is: I have real estate, and I tell you when you can put something in. You can put it in for one quarter or two quarters, and then I have to give that real estate to somebody else, that kind of idea.
Paul Donagher:
So you manage expectations upfront about what may happen with their question of the month. It’s not going to be in there come December, but you can have it for now, just to manage their expectations.
Karen Fuller:
Yeah, because even though we have a tool, we’re in 13 countries, so every time we want to put something in, we have to go through translation. We do the translations, we get them approved, and it’s not the easiest thing. We have a framework for doing it, but we do want to be careful about it as well, not just throw things in all the time.
Paul Donagher:
Second question here. Again, I’m combining a couple, and thanks for all the questions on the screen, folks. So you’ve had this tracker for more than 10 years or at least 10 years. This is a very practical question. How do you ensure that it always gets funded every year? How do you beat the budget cuts? Any best practices that you can give for some of your corporate peers as to how to make sure a key data source is funded year after year?
Karen Fuller:
You know, this was incredibly important, especially during Covid. We’re in the travel industry, so you can imagine that budgets got slashed, but we had this framework already up and running, and for a while it was the only way we were getting any feedback at all, right? So they slashed budgets, and then they started asking us questions. “What do travelers think about this? What are partners doing?” I’m like, “Come on, we just took away all the money.” But to be honest, I think a lot of it is that we’re a little bit under the radar somehow. While we deliver these insights to everybody, the budget belongs to research. That’s probably the answer, right? We own the budget. It doesn’t sit with the marketing team or the brand team or the supply team. It’s one of the few things that is just ours, so it’s a little bit easier for us to protect it.
Mike Jennings:
We just add to it.
Karen Fuller:
<laugh>
Mike Jennings:
We express the value of the project and the data that it provides, and add, “You guys want to cut, but we’re thinking we need to add more countries or add more equipment.” <laugh> “You’re thinking in the wrong way.” <laugh>
Paul Donagher:
I like that, Mike. You touched on this a little bit, Karen, but there’s a question here. How do you ensure that stakeholders focus more on the nuances of why an NPS or a CSat or key metrics score might change, and not get overly focused on minor changes in these outcome variables? Any tips or best practices around thinking of why something changed as opposed to focusing on a small change in the outcome itself?
Karen Fuller:
This has been a key way in which we try to deliver the information each quarter. It’s not just saying how the NPS has moved, but looking at those key drivers and understanding how those have moved as well. We also implemented, partway through, some open-ended feedback, so if people give really low or high scores on those attributes (we don’t do it for all of them, and we’ll only ever ask anyone about one attribute), but if someone is saying that something is really bad, we ask an open-ended question so we can give some color to that story. So if someone is saying something like, “Well, your payment processing, I’m not satisfied with your payment processing,” we’ll say, “Why?” So we can come back and say, “Look, NPS is being driven down by dissatisfaction with the payment processing, and most notably with someone’s ability to get what we call early payout,” versus some other issue or the fact that it’s too expensive or something like that.
Karen Fuller:
That’s really what we focus on a lot because people will get hung up on whether NPS is the right measure, whether it’s satisfaction, whether it’s loyalty, and I think having the driver analysis—regardless of which dependent variable you choose—keeps them focused on really understanding the whys of it versus what that actual ultimate dependent variable is.
Paul Donagher:
That’s helpful. Maybe one last question in the couple of minutes that we have left, Karen, and this speaks to what you were talking about in terms of an ecosystem of data and also how you had used data for the initial public offering. Do you utilize and statistically align any operational metrics to your survey data? Are you using anything else internally, maybe to triangulate or to add flavor to the survey data based on your other internal metrics?
Karen Fuller:
It’s interesting because when this started, this was actually the only place we even got some of those, what you might consider to be operational metrics, in terms of how many bookings partners were getting, how many properties a traveler was inquiring on, or even whether they took a trip. We didn’t even know if they went on a trip in the past. So initially, this provided some of those operational things as well, and we’ve transitioned away from that. We’re trying to pull that information from the data warehouse now into the contact information that we send to Radius, so we can then go back and say, “Okay, this person is saying that they’re satisfied. Where do they fall? How much revenue are they really earning from us, and how does that compare to their satisfaction?” So we can do those kinds of things.
Paul Donagher:
Go ahead, Mike.
Mike Jennings:
One of the things we’ve been trying to tie together is movement in NPS and movement in revenue. As you asked in the question, triangulating that relationship. So we’re always looking at ways we can beef up what we’re doing and explain things. It’s a nonstop process.
Paul Donagher:
That’s time, folks. Mike and Karen were great. Thanks for doing this, Karen. We appreciate the relationship, as always. Thanks, everyone, for your time this afternoon. Hope you enjoyed the webinar. We’ll be back with another one over the summer. In the meantime, enjoy the rest of your day. Thanks, everybody. Bye-bye.
Mike Jennings:
Bye.
Is your tracking program fully aligned with your business and growth goals? We can help — contact us.