
Optimize Survey Response Quality

by Radius

Advanced technologies are expanding research possibilities, especially in survey methodologies. But how much trust should researchers place in these tools? Paul Donagher, Director of Client Services at Radius, moderates a discussion with Michael Patterson, Chief Research Officer at Radius, and Georgeanna Liu, Head of Global Market Research and Insights at AMD. Together, they explore strategies for designing primary research that ensures data integrity, sharing real-world examples and best practices.


Paul Donagher:

Good afternoon, everybody. My name is Paul Donagher, and I’m responsible for client services here at Radius. And thanks to all of you for attending today’s webinar, which will be on the topic of optimizing respondent quality in survey research. The webinar today is a discussion between Mike Patterson, who is Chief Research Officer here at Radius, and Georgeanna Liu, who is head of Global Market Research and Insights at AMD. As always, the webinar today will last for a maximum of 45 minutes. You can type a question on the screen if you have one, if you’d like to ask Georgeanna and Mike a question that is, and of the 45 minutes allotted the discussion should last for around 30 minutes or so. So we should have some time for Q&A at the end, when I will pick the questions that are in there and ask them.

By way of introduction, among other topics in today’s webinar you’ll come away with an enhanced understanding of: methods to use and questions to ask to ensure that the respondents completing the survey are qualified, strategies for finding representative target populations to ensure results are generalizable, types of questions to ask in order to keep respondents engaged and obtain quality data, and also approaches that minimize bias in the results you get.

So as an introduction to today’s speakers: Mike Patterson is Chief Research Officer here at Radius. Mike actually built his own business before merging it in to become part of the Radius family. Mike lives in Texas, and he loves the game of golf when he is not developing techniques for us to use here at Radius. And on to Georgeanna.

Georgeanna manages all of AMD’s global primary research programs, and she supports various product lines and business functions globally. The insights generated from the research studies are widely adopted to inform product planning, branding, messaging, go-to-market execution and partnership development. Georgeanna loves to travel, and one of her favorite things to do is to sample different flavors of KitKats, Oreos and Lay’s chips from around different parts of the world. I have to ask, Georgeanna, where is your favorite KitKat from? Do you have a favorite KitKat?

Georgeanna Liu:

I don’t have a favorite one, but I do have a most surprising one, which is when we were in Japan, we picked up some sake flavored KitKats.

Paul Donagher:

Sake flavored KitKats of course.

Georgeanna Liu:

Mmhmm

Paul Donagher:

Well, thanks very much for being part of this, Georgeanna. And so with those brief introductions, Mike will take it from here.

Michael Patterson:

Yep. All right. Thanks Paul. So Georgeanna just to get us kicked off, would you give us just a high level overview of yourself, the role that you play at AMD and then the types of research that you get involved in?

Georgeanna Liu:

Sure. So the research programs we have at AMD really run the whole gamut, right? On the branding front, we have long-term annual brand tracking, and on the product side we have product testing, and we’ve done message testing, creative testing. We’ve done audience research, audience mapping, segmentation, pricing research. So it really just depends on what business questions we’re trying to answer. And in addition to this, we also do research in multiple countries, and the US we usually, obviously, want to include. And for us China is a big market as well, and Germany is a big market as well. So these are our usual suspects, but we often involve other markets like India, Japan, Latin America. It just really depends on what programs we have in these countries. Besides this, we do a mix of consumer research as well as B2B research, and we do a mix of qual and quant. So we really are the <laugh> go-to research support team for internal teams. Yeah.

Michael Patterson:

Okay. That’s great. I’m sure our audience would like to understand some of the real key issues that you face when you’re trying to ensure that you get accurate, reliable results that you can then use to make decisions there at AMD. So could you talk about one of the key factors that you would consider when you’re really initiating a research project?

Georgeanna Liu:

So when we initiate research projects, the first thing I think about is always the objectives, right? Starts with not just the research objective, but the business objective. What problems are we trying to solve here using research? And what insights do we need to solve these problems? Knowing the business problems will help you better define the research objectives, because at the end of the day, the goal is never to run a bunch of focus groups, right? The goal is to have insight that drives actions and the actions will lead to business results. So that’s where I always start. Business objective and then research objective.

And then the second place we think about is, okay, then who do I need to go to? Who can I talk to? Who holds the answer to the questions we have, right? And where are they? What country are they in? Are there any particular demographic groups? Can I reach them? How do I reach them? The third thing we think about, once you identify those people, and the third one is a tricky one, is can I possibly get honest and accurate answers from them? Because that will inform the type of questions you ask. Because often people want to know a lot of things, but can you accurately ask those questions? Can people really tell you, can they recall? Right? So that involves question design. Are the questions clear? And are the questions worded in a neutral way, so that by answering the question, they won’t feel they’re being judged one way or the other? And the response options, do they cover the whole spectrum of the possibilities? So these are the things we tend to think about when we are designing.

Michael Patterson:

Yeah. Okay. So you mentioned trying to make sure that you’re finding qualified respondents. Can you give an example of the steps that you would go through to make sure that you’re identifying the right type of respondent?

Georgeanna Liu:

So for us, the typical targets are buyers of computers or servers. So then you are talking about consumers buying PCs or business buyers, IT buyers for PCs or servers. These are generally easy to target using third party panels. But we do go through additional steps to make sure that they are who they say they are. So here’s some examples. When you think about buyers, what are buyers? Are these people just pulling out the credit card or are they ones that authorize payments? Or do you want people who are also actively involved in making a decision by evaluating different options? And we tend to use both.

And sometimes when we do qualitative, we even have additional qualifying questions, tighter criteria essentially, because in qual we have far fewer samples. So in qual we definitely want to involve the type of users that really know what they’re talking about, and they’re probably more involved in purchase decisions. Just to give you an example, once in a while in focus groups you’ll have people come in and say, “oh, we did this, we did all the research before we buy.” But when they say “we”, they really mean other members of the household. They’re just the one that pulls out the credit card. So what additional questions do you ask to make sure that they are the hands-on one that was doing the shopping and online research, right? So <laugh>,

Michael Patterson:

Are there other things that you do in terms of trying to make sure you’re targeting the right person?

Georgeanna Liu:

You know, other things we think about is, besides targeting, how else do you want to analyze the data?

Michael Patterson:

Hmm, right.

Georgeanna Liu:

What are the subgroups we want to look at? Is it sub-grouped by age, by demographics, or by purchase history (users, non-users), or by brand preference, and what have you? Because you want to make sure that you have sufficient samples for these subgroups, so that when you collect the data, you have enough to analyze these groups properly.

Michael Patterson:

Yeah, exactly. So I mean, you bring up a really great point that once you’ve laid out all of these various audiences, you then want to translate that into a sampling plan so that it’s clear who you want to have included in your survey, but then because you are using quotas and various mechanisms to look at these subgroups right off the bat, we need to start thinking about weighting the data.

Now, when we roll all of these groups together, we want to make sure that they’re proportionate to their size in the population. And so one of the things that I’m always cognizant of is when we’re weighting the data, I want those weights to be light weights, that is, not too extreme. So I don’t want some respondents, for example, to have a weight of say 0.2, so a small weight, and then others to have a larger weight, like a weight of 10, because then you’re weighting some people up far too much and weighting other people down too much. So I like to look at that ratio, the smallest to the largest weight, and have maybe a ratio of five or something. Does that make sense?

Georgeanna Liu:

Yeah, that makes perfect sense. And I just remember, Mike, you are the one that taught me weighting, because I remember we were doing a study, we were trying to get samples, and I forgot what dimension we were looking at, but the distribution of the sample we collected did not match with the actual POS, the sales data. So I remember panicking that the distribution didn’t match, but you’re the one that introduced, “hey, we can weight the data according to the actual sales distribution”. And that has become something we do almost all the time, especially for international studies that involve multiple countries, right?

For countries that represent a bigger proportion of your sales or of the total available market, you probably want to weight that country heavier than others. You don’t want to just do a straight average across all the geos or across all the sub-segments, because if you do that, you may end up underrepresenting some of the bigger markets.
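[Editor’s note: a minimal sketch of the weighting idea described above, assuming pandas is available and using hypothetical country counts and sales shares. It illustrates weighting respondents to the sales distribution and checking Mike’s rule of thumb that the largest-to-smallest weight ratio stays modest; it is not Radius’s actual procedure.]

```python
# Illustrative sketch only: hypothetical numbers, not an actual Radius weighting scheme.
# Post-stratification: weight each respondent so countries contribute in proportion
# to their share of sales, then check that the weights are not too extreme.
import pandas as pd

# Hypothetical sample counts and sales shares by country
sample = pd.DataFrame({
    "country": ["US"] * 400 + ["China"] * 300 + ["Germany"] * 300,
})
sales_share = {"US": 0.55, "China": 0.30, "Germany": 0.15}  # assumed targets

# Achieved proportion of completes per country
sample_share = sample["country"].value_counts(normalize=True)

# Weight = target proportion / achieved proportion
sample["weight"] = sample["country"].map(lambda c: sales_share[c] / sample_share[c])

# Rule of thumb from the discussion: keep the largest-to-smallest weight ratio modest (around 5)
ratio = sample["weight"].max() / sample["weight"].min()
print(sample.groupby("country")["weight"].first())
print(f"max/min weight ratio: {ratio:.2f}")  # if this gets extreme, revisit quotas or trim weights
```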

Michael Patterson:

Yeah, that’s exactly right. I think that’s a crucial point. So what are some other considerations that you have in order to make sure that you’re getting good accurate data?

Georgeanna Liu:

So you definitely want to make sure you do the best you can <laugh> that people you let in are who they say they are. This is basic research hygiene. And I remember earlier on in my career getting asked during presentations, people are like “did you just send out random surveys? Like the credit card surveys my wife gets all the time”? What you don’t want is spending the majority of your presentation defending the data quality, right? So that’s hygiene, it’s very important, especially with a B2B audience, the incentives are higher, so you have higher chances of getting fraudulent respondents. So you really want to make sure that people are qualified.

So to do that, there are a couple of steps we take. That’s something you definitely want to work very closely with your research agency on, and often they’re the ones that work with third party panel partners. So making sure that these are high quality panels, and making sure that they don’t use river sampling. Again, this is something I learned from Mike and Susan a few years ago. River sampling, essentially, is the equivalent of standing at a street corner with a sandwich board inviting people to take surveys, and we really don’t know who’s going to show up. And where you stand makes a big difference in who you invite. The online version is a similar idea. People may put up web banners on different websites to invite people to take surveys, and where the banners go and what people are doing when they see the banner largely influences who you are inviting, and you have no control over that. And that just makes quality control much, much more difficult and then increases the chance of getting non-qualified respondents.

Michael Patterson:

Right. Exactly.

Georgeanna Liu:

So then there are other things you can do in terms of quality control in terms of the design of your screener. So some of the ideas are making sure the right answers are not always obvious. If you want to target people who have made a purchase in the past 12 months, show them different options for when the past purchase was made: six months, 12 months, 18 months or two years. And sometimes we even include quiz or test type questions in there, with absolute right or wrong answers mixed together. And sometimes we may even include ghost brands or decoy answers. If someone picks a brand that completely does not exist in your category, that’s a red flag right there; that’s an easy red flag.

With a B2B audience, we may also ask the same questions twice. For example, job title, that’s an easy one to fake, right? Ask them once in the beginning, once again at the end, do the answers match? If not, again, that’s another red flag. And with job title on top of that, we may even add an open end question asking them to describe what their responsibilities are.

I want to spend a little bit of time talking about open ends. We’re increasingly using them as a way to gauge respondents’ fit for the study, because open ends can tell you a lot. For one, do their answers make sense, or is it just gobbledygook? Do their answers contain proper terms and phrases used in the right context? And are their open end answers consistent with their other closed-end answers? If they say they work in IT as the department, but in their open end they say their job responsibilities are about preparing, for example, financial statements or managing inventories, then you know, that’s a red flag.

And that’s an area I would encourage research practitioners to get really hands on, because you know your audience, you know your product, you know the category, so you’re a lot more sensitive to pick up any off smell, right? Because it’s really a sniff test. And in my experience, looking through the open ends also gives me ideas for other potential ways to enhance the screener, to make it easier to weed out non-qualified respondents. So these are just things that I look for. And I know you and your team have a whole other host of things that you guys look for that I don’t even get involved in.
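[Editor’s note: a minimal sketch of the kinds of screener checks discussed above, with hypothetical question names, brands and rules. It is an illustration only, not an actual Radius or AMD screener.]

```python
# Illustrative sketch: hypothetical field names and rules.
# A few of the screener checks discussed above: a recency question with non-qualifying options
# mixed in, a decoy ("ghost") brand, the same question asked twice, and an open-end sanity check.

REAL_BRANDS = {"AMD", "Intel", "NVIDIA"}            # assumed category brands
DECOY_BRANDS = {"Vextron Micro", "Qualitek CPUs"}   # made-up brands that should never be picked
IT_TERMS = {"server", "deployment", "infrastructure", "network", "security", "workstation"}

def screener_red_flags(resp: dict) -> list[str]:
    """Return the list of red flags raised by one screener response (hypothetical schema)."""
    flags = []

    # Purchase recency: only the last 12 months qualifies in this example
    if resp.get("last_purchase") not in {"0-6 months", "7-12 months"}:
        flags.append("purchase outside qualifying window")

    # Ghost / decoy brand selected
    if DECOY_BRANDS & set(resp.get("brands_owned", [])):
        flags.append("selected a brand that does not exist")

    # Same question asked twice (job title at start and end): the answers should match
    if resp.get("job_title_start", "").strip().lower() != resp.get("job_title_end", "").strip().lower():
        flags.append("job title answers do not match")

    # Open end should look like real IT responsibilities, not gibberish or another function
    open_end = resp.get("responsibilities_open_end", "").lower()
    if resp.get("department") == "IT" and not any(term in open_end for term in IT_TERMS):
        flags.append("open end inconsistent with stated IT role")

    return flags
```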

Michael Patterson:

<laugh>. Yeah, that’s right. I mean, I do think that all of those are critical things to make sure we’re getting qualified respondents. But as you’re pointing out, there are different checks that we will include both during data collection as well as after data collection. Some of them everyone’s doing, right? So we’re going to look at the survey completion time, and we’re going to remove people that are speeders, that have completed the survey too quickly. In some cases we may stay in the field longer, especially with these more difficult to reach audiences, to make sure that we’re really getting a better, more diverse set of respondents.

And you mentioned the open ends. I think open ends have really just become crucial to include, and making sure that you’re getting quality open ends, as you were saying. But then also looking to see whether there are respondents in there, and we’ve seen this often, where you’ll see an identical or just a very similar open end across respondents. And it’s clear that that’s someone that’s trying to game the system to get those higher B2B incentives, as you were talking about. You know, and then we’re also going to look at straight line responses, inconsistent responses across questions, things like that. So anything that you’d like to add there?

Georgeanna Liu:

The other thing I would add is you want to clean the data as you go. So as you are collecting, you want to clean and replace samples, and that will give you the opportunity to tweak your screener quickly. So if you wait until the very end to clean the data, you may realize, well, I have to throw out 20% and I don’t have time to replace them. And also that will be a lot more records that you have to read through, right? So that’s something you want to do. And you may want to ask your research agency about it, because different agencies have different practices and some do it more frequently than others. So that’s just something you want to discuss upfront.
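[Editor’s note: a minimal sketch of the in-field data checks mentioned above, namely speeders, straight-liners and duplicated open ends, assuming pandas and hypothetical column names. The thresholds here are judgment calls for illustration, not stated Radius practice.]

```python
# Illustrative sketch: hypothetical column names, not an actual cleaning pipeline.
# Three of the in-field checks discussed: speeders, straight-liners, and duplicated open ends.
import pandas as pd

def flag_quality_issues(df: pd.DataFrame, grid_cols: list[str]) -> pd.DataFrame:
    """Add boolean flag columns for common data-quality problems (assumed schema)."""
    out = df.copy()

    # Speeders: completion time far below the median (the 40% cutoff is a judgment call)
    median_time = out["completion_seconds"].median()
    out["flag_speeder"] = out["completion_seconds"] < 0.4 * median_time

    # Straight-liners: identical answers across an entire rating grid
    out["flag_straightliner"] = out[grid_cols].nunique(axis=1) == 1

    # Duplicate or near-identical open ends across respondents (often incentive fraud)
    normalized = out["open_end"].str.lower().str.strip()
    out["flag_duplicate_open_end"] = normalized.duplicated(keep=False) & normalized.ne("")

    return out
```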

Michael Patterson:

Yeah, I think that’s a great point. When we’re in the field, we’re literally looking at the data, if not every day, say every other day and going record by record to make sure that the respondents that we have in our data set and our sample are qualified respondents. There’s some other things that are behind the scenes as you mentioned, that you’re not necessarily aware of.

You know, we’re always working with our panel partners to make sure that we’re getting, as I said, good quality respondents. And there’s a number of different things that the panel as well as Radius will do even before those panels are sending us samples. So some of the things that they’re going to do is, one, they’re going to look at the IP address, which is the internet address of each of the respondents, and they’re going to compare that to the country, say, or the state that we’re interested in. So for example, if we’re looking for respondents in the US, well, I want to see that those IP addresses originate from the US and not some other country. Or if we’re targeting certain states, we’re able to make sure that the respondents that we’re looking for come from those states. Our panels are also going to exclude respondents that have performed poorly previously on surveys, and so have not given what was judged to be good responses, and so they’re going to exclude them.

And then we also encourage our panel partners to work with other third party vendors, so companies like Relevant ID, Research Defender, and these companies have algorithms that allow them to essentially develop a fraud profile for each of the respondents. And that’s going to help us weed out those bots and individual companies, clients, things like that, that aren’t real people; they’re just computers. Other things that we’ll do to weed out bots are honeypot questions. Those are questions that are hidden in the actual survey, so our respondents will never see those questions, but the bot or the computer will try and answer them. And so we know that’s a fraudulent respondent.

And then finally, we’ll also include CAPTCHA, or these Turing test questions, that help us weed out bots. So there’s just a whole host of things that we’re always doing, and vendors should be doing, to make sure that we’re getting real qualified respondents. And I think at the end of the day it’s important, as we do with you, to work hand in glove with your panel partner and work with your research agency, as you were saying, to make sure that you’re coming up with a solid screener and identifying that list of disqualifying criteria. And on some of those disqualifying criteria, you’ll terminate the respondents immediately.

So for example, the IP address. But in other cases, what we’re going to do is we’re going to have a list of criteria and we’re going to essentially count up the number of red flags. So the number of times that they answer something in a disqualifying fashion. And if they exceed a certain number, say three or so, we’re going to exclude that respondent.
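[Editor’s note: a minimal sketch of the red-flag tallying Mike describes, with assumed flag names. The split between immediate disqualifiers and counted flags, and the threshold of three, follow his examples; the details are illustrative only.]

```python
# Illustrative sketch: assumed flag columns; the "three or so" threshold comes from the discussion.
# Hard disqualifiers (e.g. out-of-country IP, an answered honeypot) terminate immediately;
# softer signals are counted, and respondents over the threshold are excluded.
import pandas as pd

HARD_FLAGS = ["flag_foreign_ip", "flag_honeypot_answered"]
SOFT_FLAGS = ["flag_speeder", "flag_straightliner", "flag_duplicate_open_end",
              "flag_decoy_brand", "flag_job_title_mismatch"]
MAX_SOFT_FLAGS = 3  # exclude anyone who exceeds this many red flags

def apply_exclusions(df: pd.DataFrame) -> pd.DataFrame:
    """Drop respondents who hit any hard flag or exceed the soft-flag threshold."""
    hard_hit = df[HARD_FLAGS].any(axis=1)
    soft_count = df[SOFT_FLAGS].sum(axis=1)
    keep = ~hard_hit & (soft_count <= MAX_SOFT_FLAGS)
    return df.loc[keep].copy()
```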

Georgeanna Liu:

Yeah.

Michael Patterson:

There’s just a lot of things that we have to do. So switching gears a little bit, are there certain things that you do in the actual survey itself to make sure that you’re getting quality data or really minimizing bias?

Georgeanna Liu:

Right? So now what we’re talking about is the importance of asking the questions and getting honest and accurate feedback. I think you always have to worry about, I don’t want to say “worry” because my kids always say, “oh, ma, you worry too much”, but I think these are things you want to consider, you want to be conscious of.

I’m just going to rattle off some of the examples and ideas. You want to keep the question wording neutral. Instead of asking, “why don’t you shop at this store?” Or, “why don’t you consider this brand?” Maybe asking, “what are the reasons that may lead you to shop at this store?” Or, “what are the reasons that may lead you to do that?” So it’s more neutral and also have a range of possible response options, plus “don’t know” or “others”, because you never know if you are able to capture all the possibilities. If you don’t, you want to give people a way out so they aren’t forced to select what you provide and then that may skew the results.

And sometimes populating those response options can get tricky, right? Especially for an audience or topic that we’ve never done research on. So sometimes we would do some qual study prior to that to help us identify those possible response options. Or sometimes you can talk to your colleagues and friends and families. I talk to my kid, he’s a gamer, right? So that’s a handy way. Or sometimes just do Google research, and now you can even use ChatGPT to populate those response options. So these are just some ideas. I want to make sure ChatGPT is thrown into any conversations I’m having this week. <laugh>.

Also you want to make sure that the scale is balanced. I think we’ve all probably seen or heard those political polling questions, like, “how likely are you to vote for candidate X?” And the response options were “extremely likely”, “very likely”, “likely”. And that was it <laugh>. Also, I did not write that survey question <laugh>. And you want to make sure that the questions are clear and easy to answer. There’s no jargon, there’s no special terminology. They don’t need to have an EE degree to take your survey, okay? If you are not sure, provide a description or show a picture.

Here’s an example. We do research with different types of computers, and there’s one type of computer that’s called “all in one”. Some of you may or may not know it, right? That’s essentially a desktop computer, but all the inner workings of the computer sit behind your monitor or your screen, so you don’t have a separate box sitting underneath your desk. And we call that “all in one”, but obviously we can’t assume everyone knows what that is. So we show a picture and some descriptions.

Another thing to think about is to make sure that the questions are actually possible to answer. So what do I mean by that? Here’s an example. Buyer journey has been a very hot topic recently, and I get asked by people, they always want to know, “oh, can you do a survey or can you tell me exactly which websites people go to that trigger them to realize, ‘Hey, I need a computer’. What’s that moment? You know, what websites did they go to? What did they look for during the awareness phase versus what websites did they go to during the consideration phase?” And my question has always been, “do you really think people can accurately recall?” Is a survey the best way to get to that? So maybe there are other methods we should explore.

And then the last one, just do a pretest with your colleagues, with people who are not intimately involved. And I know the Radius IT team has done quite a bit of pretesting for us. We’re talking to IT, but does that make sense to IT people? So those are just some examples of ways to help you spot any potential risks and ways to improve.

Michael Patterson:

Yeah, I think those are great. And in terms of minimizing bias, one of the things that we will often do, especially with these really hard to reach audiences, not with consumers, but low incidence audiences, is to use a mixed methodology approach. And so that’s where we will collect data both using panels, so online samples as well as doing telephone interviews or maybe a phone to web recruit. And again, that’s just going to help minimize that bias, expand your population, and give you a much richer and better set of respondents.

Georgeanna Liu:

Yes, definitely.

Michael Patterson:

So we’ve touched on a number of different topics: defining the appropriate population, making sure that we have good respondents, minimizing, as we were just talking about, survey bias. Are there other things that, say, keep you up at night, that you have concerns about?

Georgeanna Liu:

<laugh>. Another thing I feel like we have to think about more and more lately is respondent fatigue, because I noticed during the pandemic it just has taken longer and longer to hit the quota numbers we want. And I see that across the board, right? With consumers, with B2B and B2E. That really forces us to think about, are we designing questions just for us? I mean, what’s in it for them? Are we also providing a positive experience for them? So that’s something you want to pay attention to, to make sure that the survey experience is positive, it’s engaging, and that way they’re more likely to give you accurate, honest, and also thoughtful responses.

So on that front, the things we pay attention to include length. First thing, is the survey too long? And that’s always a struggle. Internally, sometimes when people hear, “oh, you’re doing a survey, you’re paying for it, can I throw in some questions?”, you end up with the kitchen sink, with all the topics that may or may not be relevant, right? That’s just a tendency, and I don’t blame anybody, I would do that too. But I think this is when it’s really important to have very clear research objectives upfront. Are we asking questions that are relevant to what we set out to answer? And that makes it easier to justify when you have to cut some questions: these are nice to have versus these are must have.

So length is one, and another one is question flow. Does the question flow make sense? Are the questions jumping around? You probably want to group similar questions together so respondents stay in the mode of answering about, for example, their social media activity, and that’s where you want to ask the questions about YouTube, Instagram and Facebook, what have you, instead of jumping around, which makes it difficult for them.

And then another thing we started doing more and more is to gamify the questionnaire. Gamification, just change things up. Some questions multiple choice, some a question grid, some points allocation. And for points allocation, maybe you can do a slider scale if it’s an allocation between two items. And then ranking. We all like ranking, but for a shorter list you can do drag and drop; when the list gets really long, you may want to use MaxDiff. And MaxDiff is a very handy tool. In MaxDiff, instead of having people rank say 15 or 20 items from one through 20, you present them with four items at a time, and then out of the four they pick their favorite and least favorite. The exercise repeats multiple times, and by the end you piece together the ranking. Again, that’s something, Mike, I learned from you, right? <laugh>. I mean, Mike comes up with all kinds of new ideas all the time.
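[Editor’s note: a minimal sketch of how MaxDiff picks can be pieced together into a ranking, using simple best-minus-worst counts and made-up items. Production studies typically fit a choice model rather than raw counts; this only illustrates the mechanics Georgeanna describes.]

```python
# Illustrative sketch: a simple count-based MaxDiff score with hypothetical tasks and items.
# Each task shows a few items; the respondent picks the best and the worst, and the
# repeated picks are combined into an overall ranking.
from collections import defaultdict

tasks = [
    {"shown": ["battery", "price", "weight", "screen"], "best": "battery", "worst": "weight"},
    {"shown": ["price", "screen", "keyboard", "ports"], "best": "price", "worst": "ports"},
    {"shown": ["battery", "keyboard", "weight", "ports"], "best": "battery", "worst": "ports"},
]

counts = defaultdict(lambda: {"shown": 0, "best": 0, "worst": 0})
for task in tasks:
    for item in task["shown"]:
        counts[item]["shown"] += 1
    counts[task["best"]]["best"] += 1
    counts[task["worst"]]["worst"] += 1

# Score = (times chosen best - times chosen worst) / times shown; higher means more preferred
scores = {item: (c["best"] - c["worst"]) / c["shown"] for item, c in counts.items()}
for item, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{item:10s} {score:+.2f}")
```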

Michael Patterson:

<laugh>, yeah, I mean, I just love MaxDiff. You’re right, because it’s such a great technique for prioritizing items or ordering lists especially in comparison to ranking. So I think all of those are really great points. We’re getting close to time here, so is there anything else you’d like to share?

Georgeanna Liu:

I would say there are things you may want to ask your research partners, based on what we’ve said. Like, how are you sourcing your panel respondents? What methodologies are you using to detect fraud and validate authenticity? And I can tell you from my own experience, not everyone is equally diligent. Some agencies will do the bare minimum, like speeder or straight-lining checks, but nothing else. So you want to ask those questions, like, “how often do you review and purge the data, and who’s the one that does the quality check?” You want someone that’s familiar with your product category, so they understand the nuances.

And then the last one, I would say treat your research vendors as partners. This relationship is not transactional. You are not here to buy pounds of data. You are here to build a relationship. Because at the end of the day, you have a business goal to achieve, and the more they learn about your business, the better they can support it. That also gives you the opportunity to learn more about research practices, and it’s rewarding for both parties.

Michael Patterson:

Yeah, I love that you bring up the idea of partnership, because I do think it’s fundamental for us getting quality data. We understand your business, what you’re trying to achieve, the audience that you’re interested in, and then we identify the best ways to get that audience, the best way to ask questions, the type of techniques and approaches that we should be using. So I think all of those are great. Hopefully our audience feels like they’ve been provided with some really good information. So Paul, do we have any questions or anything like that from the audience?

Paul Donagher:

We sure do, so thanks to everybody who typed in a question. There really are a lot here, and I’m going to try and get through as many as I possibly can. The first topic that so many of you have asked about is AI, and really, sort of generally, what AI might mean in terms of respondent quality. I’m trying to piece together a few questions here: how can researchers minimize AI-generated responses, or be aware of them, in open-ended questions, and how can we guard against those types of fraudsters in open ends and what they’re putting together? There are a lot of questions around that, Mike and Georgeanna, but thoughts on AI and respondent quality, and open ends specifically?

Michael Patterson:

Yeah, I’ll jump in, and then Georgeanna, if you want to add anything. I really think AI is a blessing and a curse. From a fraud perspective, certainly we worry about bots, as we’ve talked about. We worry about if we ask an open-ended question and someone uses ChatGPT to answer that question, it’s going to look like a good response. So there are lots of issues. In terms of open ends, there are ways of essentially scoring or assessing the probability that a response is AI generated. There are tools out there now, and so one of the things that we’re going to be doing is building those tools into our surveys so that we’re catching those fraudulent respondents right off the bat. But I think AI also offers a lot of benefits, because it allows us, as I was saying, to start to weed out potential bad actors, but also, as Georgeanna was pointing out, it helps us with our questionnaire design. Do we have an exhaustive list of items? Is there a better way of phrasing things? So I think we just need to be very cognizant of both the benefits as well as the potential drawbacks associated with AI. Georgeanna, do you have anything to add?

Georgeanna Liu:

I would only add, I think, Mike, you guys are definitely ahead of me in terms of this, and I’m just thinking as someone that’s using the data. I think that’s why you want to have multiple checks in place, right? You want to look for different possible red flags. And I think the other possible way to look at it is the pattern in the data, and hopefully you’ve done enough studies with these audiences that you get a good sense of the trend, the pattern of their responses over the years. So when something wildly off comes up, it won’t pass your sniff test, and then that will maybe alert you to dig deeper or find opportunities to verify. And I think this is where qual can come in handy. There are a lot of ways of doing qual now a lot faster and more cost efficiently. So just some thoughts.

Michael Patterson:

And related to that, another way that we’re looking at AI is to integrate qualitative more with our quantitative. And so what I’ve been calling it is intelligent probing. So for example, in a quantitative survey, we always ask open ends “what did you like about the concept?” Things like that. And oftentimes the results that we get back, the open-end responses are, “eh, they’re not great. They can be okay, but they’re not great.” And so I think we can use this AI-based more intelligent probing to dig deeper. What was it about the concept that you liked? “Oh, I liked the color blue.” Well, what was it about the color blue that you really liked and asking question after question to really peel back the onion and gain a much deeper and richer understanding. Paul, any other questions?

Paul Donagher:

Yeah, so there were other AI-related questions, but I hope those answers helped answer a few of those other questions around AI. I’ll move on to some other topics here. A few people have written in that they feel that the issue of fraudulent responses has been getting worse over the last couple of years. And combining, again, a couple of questions here, I would say: how important do we feel technology versus human approaches are to make sure that panel participants are who they say they are? So I think you guys talked around that, but how important do we feel technology versus human catches are in the fight that we have against fraudulent responses?

Michael Patterson:

Again, I think we’ve touched on a lot of the technology related approaches and criteria that we will embed in our surveys to try and weed out people even before they complete a survey. And there are other things that we can do after they’ve completed the survey to score them. But at the end of the day, I think, and Georgeanna, you noted this, you really have to dig into the data. Unfortunately, go record by record, respondent by respondent, and look at the data to make sure that you’re getting quality open ends and that results make sense. So if I’m looking at one question’s answer in comparison to another answer, those should align. And so, unfortunately, I do think there’s still an onus on all of us as researchers to make sure that the respondents that we have in our data set are good quality respondents. So it is definitely a mix of, I think, technology, but then also just human intervention required.

Georgeanna Liu:

Yeah. So I think as researchers, we need to be ready to set aside time to go through those records, right? There’s no easy fix; like Mike said, going through record by record, you just need to make sure you do that, and that you set aside the time to do it.

Michael Patterson:

Right. And as you pointed out, that’s why we feel it’s important to make sure that you’re looking at this data while you’re in field and not waiting until the last minute because then you’re sort of behind the eight ball. You don’t really have the time to go in and really adequately clean and examine your data.

Georgeanna Liu:

Yes. And don’t do that when you are cranky and tired. <laugh>

Michael Patterson:

<laugh> Yeah. Otherwise you’d throw everybody out. Right?

Georgeanna Liu:

That’s right. And I’ve done that. <laugh>

Paul Donagher:

Probably time for another couple of questions here, folks. Sort of related to this, we’ve got one here. We’ve talked about what goes on prior to the survey; there’s a combination of questions here about good respondents that go bad during a survey, for example. They come in, they’re who we want them to be, but then their attention span or whatever that might be goes away. Is there an ideal survey length, or what can we do to mitigate against good respondents going bad during a survey?

Michael Patterson:

Georgeanna, do you want to jump in or do you want me to start off?

Georgeanna Liu:

Sure, I can start. I would say survey length has a lot to do with incentive. In my experience, for ITDMs, if we’re paying quite a bit, maybe we can push it a little bit, but you really don’t want to push it. And I definitely heard a few years ago, “maybe we can do 20 minutes.” Now it’s more like 15 or 12 minutes. So you really, really have to prioritize what’s important to you. And another way is how you design the survey questions. If you’re worried about people getting tired, then make sure that the key questions you want get put in the front, and put demographics, income, education level, those questions, towards the back. And we talked about making that experience pleasant. And I know the Radius team, you always ask this at the end of every survey, you ask for feedback from the respondent, right? How the survey experience was. So Mike, I don’t know if you have anything to add.

Michael Patterson:

Yeah, I think you’re exactly right. And part of it is just making the survey engaging, as you were pointing out earlier, Georgeanna, really asking different types of questions and not just screen after screen of a monotonous question after question sort of thing. Mixing it up, asking those more engaging types of questions. And then just encouraging the respondent, “oh, you’re doing a great job, we really appreciate the information that you’re giving us”, things like that. The other thing is to make the survey a little more interactive. And so, as I was saying, this intelligent probing, where you’re really encouraging them to provide you with good information and having more of a conversation rather than just question after question after question. So I do think there’s a number of things that we can do, but yeah, shorter is generally better. If it’s a “nice to have” question, probably don’t include it. Or if you do include it, ask it at the very end. Things like that.

Paul Donagher:

Well folks, I have 12:45 on my clock here. That was the 45 minutes that we had allocated to this. Thank you all for your questions, and apologies if I didn’t get to yours. And Georgeanna, thank you so much. Mike also, thank you, and thanks everybody for attending. We will be sending out a link to this so that you can see it or share it with colleagues. And then we’ll be back with another webinar, probably at the start of Q3. Thanks everybody.

Georgeanna Liu:

Thank you. Thanks. Bye.

 

Survey quality is essential to your brand’s success. Contact us to learn more about our approach.
