[Image: a young Black woman drinking milk after a workout]

AI and Synthetic Respondents: 3 Case Studies with MilkPEP


by Michael Patterson, PhD

Chief Research Officer

Kikke Riedel, Vice President, Strategy & Insights at MilkPEP, recently sat down with us to talk about her experiences working with AI. Watch the replay now, or read the transcript below.


Mike Patterson:

Hello, everyone. My name is Mike Patterson, and I’m the Chief Research Officer here at Radius. Thanks to all of you for joining and attending today’s webinar. The topic that we’ll be covering is called “Testing the potential of AI-generated Synthetic Respondents.” I have the pleasure of being joined today by Kikke Riedel, who I’ll introduce in just a minute. First, I wanted to let you know that the webinar is scheduled for about 45 minutes. As we’re going through the presentation, please feel free to ask any questions using the chat function. At the end of the presentation, we’ll answer any questions that were posed. So, in today’s webinar, we’ll be reviewing our thoughts on synthetic respondents, and we’ll share with you three case studies that demonstrate their use in market research.

So let me introduce our speaker. Kikke Riedel joined MilkPEP in 1998 and provides consumer insights and strategic direction across all the national program elements for the iconic “Got Milk?” campaign. From consumer immersions to commercial analytics, Kikke helps MilkPEP create award-winning campaigns that continue to resonate with audiences and move the needle. Originally from Denmark, Kikke came to the States in 1989. She’s a graduate of American University and lives with her family in Bethesda, Maryland. As for myself, over the years, I’ve spent time both on the supplier side and the client side of the business. I live in Texas and I enjoy golf in my free time. And with that, let’s begin the presentation. Kikke, would you like to kick us off?

Kikke Riedel:

I sure will. So first of all, thank you so much for having me, Michael, and thank you to the whole Radius team who helped put on this webinar and helped put together these case studies behind the scenes to get us ready for today. I expect that many in the audience are facing some of the same challenges I am in leading insights over at MilkPEP and, importantly, managing internal stakeholders, right? Agencies want insights yesterday, the CFO wants cheap and cheerful, right? And the marketing team is always looking for that elusive aha or that unlock. What we all want is true insights, right? So as you can see here, some of these challenges that we laid out, those are what we face all the time in managing internal stakeholders and coming up with insights that are true and accurate. So the question that we asked ourselves is, is there a better way? There must be, right, Michael?

What are the different meanings of “synthetic respondents” for market research?

Mike Patterson:

That’s what we’ll be covering today. So what we want to do is introduce the idea of synthetic respondents, and as I mentioned, we will share some results from studies that we’ve done. But before that, I want to give you some background on how we view synthetic respondents and how we think about them. The term synthetic respondents can actually refer to a number of different things, and there are a number of different ways of describing them: virtual respondents, digital personas, silicon samples, digital twins. So there are a number of different names that this concept and this approach can go by. But the general idea is that they’re “individuals,” and I use air quotes there, that are generated using AI, generally a large language model. And those can be constructed either using real data or simulated data.

And once they’re constructed, they can be asked to answer qualitative questions or potentially go through a survey and answer survey questions just like a real respondent would. The way that we approach it here at Radius is we take real survey respondents where we know their demographics, their attitudes, and their behaviors, and we essentially feed them into a large language model. So we feed those demographics, those attitudes, et cetera, into that large language model. And for each of the respondents, we ask the AI to take on the persona, to take on the characteristics, associated with that real respondent and essentially become that respondent. Then we can ask questions of that now-synthetic respondent. And what we’re able to do, and what we will show you, is how the results from those synthetic respondents compare to our real respondents.
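To make the mechanics concrete, here is a minimal sketch of how one real respondent profile might be turned into an LLM persona and asked a survey question. The profile fields, prompt wording, model choice, and use of the OpenAI chat API are illustrative assumptions, not a description of Radius’s actual pipeline.

```python
# A minimal sketch: turn one real respondent's profile into an LLM persona
# and ask it a survey question. Field names, prompt wording, and the model
# are illustrative assumptions, not the actual Radius pipeline.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

respondent = {
    "age": 34,
    "gender": "female",
    "region": "Midwest",
    "attitudes": "health-conscious, price-sensitive, shops weekly for groceries",
    "behaviors": "drinks milk 3-4 times per week, mostly with breakfast",
}

persona_prompt = (
    "Take on the persona of the survey respondent described below and answer "
    "every question as that person would, in the first person.\n"
    f"Age: {respondent['age']}\n"
    f"Gender: {respondent['gender']}\n"
    f"Region: {respondent['region']}\n"
    f"Attitudes: {respondent['attitudes']}\n"
    f"Behaviors: {respondent['behaviors']}"
)

question = (
    "You are shown the following product concept: <concept description here>. "
    "On a scale of 1 to 5, how likely would you be to seek more information "
    "about it? Answer with a single number."
)

response = client.chat.completions.create(
    model="gpt-4o",  # model choice is an assumption
    messages=[
        {"role": "system", "content": persona_prompt},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```

Run once per real respondent, this produces a matched synthetic answer for each real answer, which is what makes the comparisons in the case studies below possible.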

Case Study 1:
Testing concepts with synthetic respondents.

Mike Patterson:

So the natural question is, you know, how comparable are they? And can we really replace real respondents with these synthetic respondents? Various companies will suggest that you can. There are companies out there now that are selling synthetic respondents, and there have also been various publications purporting that the results from synthetic respondents can match those from real respondents. So what we wanted to do was test that out. In our first study, we looked at a quantitative concept test, and we do a number of concept tests here at Radius. Essentially, we took one of our existing concept tests, a test of various products related to gaming controllers. In the study, we had six different concepts, and each concept was evaluated by about 200 consumers. We would display the concept and then ask the real respondent how likely they would be to seek information about it. After we had completed the study, we took the characteristics associated with the real respondents, used them to create corresponding synthetic respondents, exposed those synthetic respondents to the concepts, and asked the same question. What I want to do now is share the results with you.

So on the left hand side, we show the individual level results. What I mean by that is, because we have real respondents and we know what their ratings are, and we generate corresponding synthetic respondents, we can compare the results one to one. If a real respondent gave a rating of a five and the synthetic respondent gave a rating of a five, that’s a match. But if the real respondent gave a rating of a five and the synthetic respondent rated it, say, a four, that’s not a match. Because we have five scale points, by chance we would expect those to match about 20% of the time. And if there were perfect alignment between our real respondents and synthetic respondents, that overlap would be a hundred percent. So as you can see here on the left hand side, about a third of the time, our real respondents match the synthetic respondents.
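As a rough illustration of that individual-level comparison, the sketch below computes the share of matched pairs whose synthetic rating exactly equals the real rating on the five-point scale and contrasts it with the roughly 20% expected by chance. The ratings are invented example data, not the study’s.

```python
# Illustrative only: the ratings below are invented, not the study data.
import numpy as np

real = np.array([5, 4, 3, 5, 2, 4, 1, 5, 3, 4])        # real respondents' ratings
synthetic = np.array([5, 3, 3, 4, 2, 4, 2, 5, 3, 5])   # their synthetic twins' ratings

match_rate = np.mean(real == synthetic)  # exact one-to-one agreement
chance_rate = 1 / 5                      # five scale points, so ~20% by chance

print(f"Individual-level match rate: {match_rate:.0%} (chance is about {chance_rate:.0%})")
```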

So it’s not great. It is above chance, but it’s not great. I’d really like to see that number about twice as high; I’d like to see more congruence between our real respondents and our synthetic respondents. On the right hand side, we show aggregate level results. As most of you know, this is normally how we would report our data, so we might show top two box scores, and that’s what we’ve done on the right hand side: we’ve looked at the proportions that are fives and fours on our scale. And as you can see, there’s really nice congruence. The results are very similar for four out of the six concepts. For example, with concept A, we see that 76% of our real respondents gave a top two box score, and 75% of the synthetic respondents in aggregate gave a top two box score. So very close, and that’s true for four out of the six concepts. For the other two, we do see some differences. So there’s not perfect congruence in aggregate, but it’s pretty close. That encouraged us, but we wanted to take the research further. We see that with concept tests, it might work. We also wanted to try other types of research. So in this case, we explored a quantitative messaging or positioning study, which we’ll talk about next, and we also wanted to look at this in the realm of qualitative research to see how similar they were.
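The aggregate view is simply a top-two-box proportion per concept for each group. Here is a small sketch under assumed data; the data frame, ratings, and column names are hypothetical.

```python
# Illustrative top-two-box comparison per concept; the data are invented.
import pandas as pd

df = pd.DataFrame({
    "concept":          ["A", "A", "A", "A", "B", "B", "B", "B"],
    "real_rating":      [5, 4, 5, 2, 3, 2, 5, 1],
    "synthetic_rating": [5, 5, 4, 3, 2, 2, 4, 1],
})

def top_two_box(ratings):
    """Share of ratings that are a 4 or a 5 on the 1-5 scale."""
    return (ratings >= 4).mean()

summary = df.groupby("concept").agg(
    real_t2b=("real_rating", top_two_box),
    synthetic_t2b=("synthetic_rating", top_two_box),
)
print(summary)  # compare the aggregate scores side by side for each concept
```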

Case Study 2:
MilkPEP Positioning and quantitative testing.

Mike Patterson:

So this next case study is actually one that we did with MilkPEP and involved positioning and a quantitative test. Kikke, would you like to give us some background on this?

Kikke Riedel:

Sure, Michael. So milk has been around for ages, right? And milk messaging has been around for almost as long. I’ve been with the campaign for 25 plus years, and between “Body by Milk” and all sorts of other work, the messaging goes back decades, even before I joined. So we are always looking at how we can freshen our messages, make them more compelling, and make them resonate more with our target audiences. In this particular case study, we were talking to moms, and rather than just going ahead and asking, you know, do you like it, or are you aware of this, have you seen it, we really wanted to come at it from slightly different angles, right? So believability is always important.

It doesn’t necessarily have to be believable before you start messaging something, right? You can message in a way that makes it believable. But we wanted to get a sense of that. What was also important to us was how surprising it was, again, having talked about milk for a long, long time. More benefits have been discovered about milk in the period I’ve been at MilkPEP; we went from nine recognized nutrients to 13, for example. So there are other things we can say about it, but, you know, a lot of our audience don’t think there’s anything very surprising to say about milk. So that was an important factor for us. And then, frankly, the reason why we exist is to drive the consumption of milk among American consumers, so that was also a factor. So we did this study with our actual respondents, and then Michael did a version of it.

Mike Patterson:

Right. So we took the real respondents from our MilkPEP study, and we had demographics, attitudes, behaviors. Just like in that first study, we characterized those real respondents and had AI take on those characteristics, and then we were able to compare the results. That’s what we’ll share with you now. On this first slide, we show you the results related to believability, and you can see we’ve got our real respondents and our synthetic respondents again. If we look at this first statement, we see that both groups rated it as the most believable. If we look at the bottom statement, we also see that it is the least believable statement for the two groups. And then in the middle, there are some similarities, but also definitely some differences between the two groups. So overall, we find, as I just said, some similarities, some congruence in results, but also some differences.

Kikke Riedel:

So in looking at these results, what we thought were really interesting, do you hear me okay?

Mike Patterson:

Yes.

Kikke Riedel:

Okay, good. Suddenly, I got a pop-up that said I had been muted, which I thought was interesting. <Laugh>. So looking at this, right, on the milk side, you look at the top statement, right? I mean, these should have been matching, and they are fairly close to each other, right? This is what everybody’s grown up knowing, that milk can help you grow up healthy and strong. It’s what our moms told us. It’s what the doctors told us and what they told us in school. It’s what the marketing has been able to tell us. So that one we would frankly have expected to be the same. If you then just jump down to the bottom, already there, I think, is where at least I started to raise some eyebrows.

I would not necessarily have expected that the real respondents would’ve found this very believable. It’s a relatively new claim for us, right? We have not put a ton of dollars against it. It is not part of the vernacular, right? You’re not being told to drink eight glasses of milk a day; you’re being taught to drink eight glasses of water. So this didn’t surprise me for the real respondents, but it did surprise me somewhat, and it flagged that I should keep a close eye on the synthetic respondents here, because frankly, you would think that with AI they had access, right, to all the information and the literature that’s out there. So for the synthetic respondents to go even lower than the real respondents here was a little bit of a flag for me.

Mike Patterson:

Right. So we also measured surprise, as Kikke mentioned. And what we see here is really very little congruence or similarity between our real respondents and our synthetic respondents. You see that we’ve highlighted two in particular. This top statement, the one related to hydration: the real respondents overwhelmingly found that statement to be surprising, and, as Kikke noted, it should be. But interestingly, you see a zero here. So literally none of the synthetic respondents rated that as a surprising statement. A huge miss by our synthetic respondents there. And if you look down below, conversely, very few of the real respondents felt that this statement was a surprise, again because it’s messaged often, whereas the synthetic respondents felt that it was very surprising. In fact, it’s the most surprising statement to them. So there’s little match in terms of the consistency between the two groups.

Kikke Riedel:

So here, I think what happened is we had that first dimension we just talked about, what is out there in the ether that everybody has known versus something that maybe few people know, and then you have this extra layer, I think, of us evaluating on surprise. I know I’m oversimplifying AI and synthetic respondents or whatever, but surprise, I mean, that’s a human emotion, right? That’s not an artificial reaction. So I just wonder if that led the synthetic respondents even more astray on this factor, as I mentioned.

Mike Patterson:

Yeah. The final thing that we wanted to evaluate is the extent to which each of these statements would increase consumption. And as Kikke noted, this is kind of the key metric, right? Again, we don’t see a lot of overlap or similarity in these results. In particular, we’ve highlighted this one statement, again the hydration statement. You can see that for the real respondents, it’s compelling. They really do feel like that statement is going to lead them to increase their consumption of milk relative to the other statements. But look at our synthetic respondents. Hardly any selected that as a statement that would increase their consumption of milk. So again, it’s a pretty big miss for our synthetic respondents compared to the real respondents.

Kikke Riedel:

It also would’ve been a pretty big miss if we had just used synthetic respondents here, right? Because then we would have gone out without a hydration message, which, when you looked at the three factors combined, was the top performer. It delivered the combination of surprising, believable, and compelling that really made it rise to the top versus all the other ones. So again, we would have completely missed that opportunity if we’d gone down the synthetic route.

Case Study 3:
Evaluating synthetic respondents in qualitative research.

Mike Patterson:

Yeah, exactly. Okay. So we focused on a couple of quantitative studies. Well, we also, as I mentioned, wanted to evaluate synthetic respondents in the context of qualitative research. And so Kikke you wanna explain this research?

Kikke Riedel:

Yes, absolutely. So, milk is a very good recovery beverage after exercise. We’ve been involved with Rock ’n’ Roll marathons over the years, and the Ironman. In recent years, we made a shift in that we wanted to sponsor women, not races, right? And we got to that after being part of some of the big marathons out there, talking to a lot of women, and understanding the challenges they faced and why they were not participating, whether they were not runners or what the challenges were when they were on the course. So we decided that we wanted to put on our own marathon, and it was solely for women. And we obviously wanted to design it around women in a way that reflected what they needed and in a way that connected with them.

So what we did, we did a tremendous amount of research. A, we had a very short time to put this race on if it was to happen. But we put together an online board and recruited 36 women at different stages of running ability. So you had seasoned runners, you had aspiring runners, and then you had the runners that, I can’t quite remember what we called them, but it was basically like, I highly doubt that I’m gonna do this, but I’ll never say never. That’s basically the group I would fall into, right? The only ones who were not in here were the ones saying there’s absolutely no way, over my dead body, I can’t do it, whatever. So we had a breadth of women in here, right?

And we were really trying to understand their needs if they were thinking about running a marathon, their perceptions, their preferences for running races in general or a marathon specifically. And then we also had some creative ideas that we wanted to get their feedback on. So we had our 36 respondents, and it was led by an all-female research team, from the moderator to the whole Radius team to the whole MilkPEP and agency team. It was an exciting week, and we had some very engaged women participating on this board. So we launched the marathon with Amanda Gorman, and we’re off to the races, so to speak.

Mike Patterson:

Right. And so once this research was conducted and completed, we wanted to understand, you know, if we had conducted it with synthetic respondents, what would the results have looked like? So we’ll walk you through that. What we did is we took the attitudes, the demographics, and so forth of six of the women that participated in those online boards. We took those characteristics, fed them into the large language model, again asked AI to assume each of those personas, and then asked it a series of questions, just like we did with our real respondents. And so what we’re able to do is then compare and contrast the results that come back.
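For the qualitative variant, the same persona idea simply gets open-ended questions instead of scaled ones. Below is a hedged sketch of what that loop might look like; the profiles, prompt wording, and model are invented for illustration and are not the actual study setup.

```python
# Illustrative sketch: ask each synthetic persona an open-ended question and
# collect the answers for side-by-side comparison with the real online boards.
# Profiles, prompt wording, and model choice are assumptions, not the study's.
from openai import OpenAI

client = OpenAI()

profiles = [
    "Seasoned runner, 41, mother of two, runs several races a year",
    "Aspiring runner, 29, exercises casually, has never entered a race",
    "Reluctant non-runner, 35, 'I highly doubt it, but never say never'",
    # ...remaining personas would be built the same way from board participants
]

question = "When you exercise or work on your fitness, what are your primary goals?"

answers = []
for profile in profiles:
    reply = client.chat.completions.create(
        model="gpt-4o",  # assumption
        messages=[
            {"role": "system",
             "content": f"Answer every question in the first person as this woman: {profile}"},
            {"role": "user", "content": question},
        ],
    )
    answers.append(reply.choices[0].message.content)

for profile, answer in zip(profiles, answers):
    print(f"{profile}\n  -> {answer}\n")
```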

So one of the things that we wanted to understand was: as a woman, as you’re exercising and doing things related to fitness, what are your primary goals? On the left hand, sort of middle, section, we can see what our synthetic respondents, our synthetic personas, suggested to us. When we looked at the results, we found that the answers those synthetic respondents gave us were very functional in nature: I want to build my endurance, I wanna increase my strength, my flexibility, et cetera. So very, very functional in nature. We also had different groups in that research, as Kikke alluded to, and we found very little difference among the synthetic respondents. They were all very similar in terms of the responses they gave back. There wasn’t a lot of nuance in comparison to the actual respondents. And Kikke, would you like to explain what we heard from the real respondents?

Kikke Riedel:

Yes, yes, absolutely. So first, to build on this functional thing, right? I mean, it’s highly unlikely, right, that a couch potato who may be playing with the dream of running a marathon has the same benefits in mind; it’s quite a different driver than for somebody who runs 5, 10, 15 races a year. So that one-tone, one-note thing was very interesting, because we had a really broad range, right? We also had an age range. So that was just very one note. And meanwhile, you saw the actual respondents here. If you just read over these bullet points that we have here, and we are not even writing out sentences, it’s all about emotional benefits. It’s very holistic, it’s very personal, it’s very internal to themselves. One that’s not actually on here, but was a verbatim that we saw in a couple of places: why do you wanna run a race? “I wanna tell my daughter I could.” You know, you would not get that from a synthetic respondent. Synthetic respondents do not have synthetic children. So even that, right, that emotion, like, I want to set a good example for my daughter, this is why I’m doing it. So you can see the distinct difference here between a synthetic persona and an actual respondent.

Mike Patterson:

And then we asked some other questions and, honestly, found similar sorts of results for our synthetic respondents. We asked them what their concerns were, and for the synthetic respondents the answers were, again, very functional in nature: I wanna make sure I can stay motivated, that I don’t get injured. We also didn’t see nuance within this group of synthetic respondents. They were all very similar in terms of the answers and responses that we got back.

Kikke Riedel:

The other thing that you can see here, when you compare the synthetic to the actual respondents, and I can say this, maybe Mike could say it too, but he wouldn’t, is that some of these concerns that the actual respondents have are tied to some of the stereotypes about women, right? “Am I good enough?” “I don’t wanna fail.” “I don’t wanna do these things.” “I don’t wanna be crowded in.” So some of these things you see here, right? It’s almost like, you know, AI or ChatGPT is a male, right? It’s not getting into some of the things that are very innate to being a woman, or to being a mom, or not feeling that you’re good enough, some of those maybe stereotypical but still more prevalent concerns. The synthetic personas did not get there; it could be anybody answering the question on the left for the synthetic personas.

Mike Patterson:

Yeah, great point. And then finally, we also wanted to test potential names. And, and Kikke, do you want to talk about this?

How synthetic respondents perform in ideating creative.

Kikke Riedel:

Yeah. So again, this is maybe less important, and it doesn’t, I think, highlight the same differences that we just covered. It’s just to say, again, that the synthetic personas were all over the place, whereas with the actual respondents, Every Woman’s Marathon was a clear pick among the names. So it shouldn’t be a surprise to people on this webinar that that’s the name that we went with. And here we are: Every Woman’s Marathon, powered by Team Milk. It takes place November 16th in Savannah. We already have over 6,000 registrants for it, and we are actually capping it at 7,000. This is our inaugural race, right? So we want to make it a really good experience. We wanna land it, have all our women loving every second of it, and cater to them.

So beyond the name, right, I think you sense a theme here. There are a lot of things we would’ve missed, whether in even deciding to create a marathon, but also in terms of how we defined it. So it’s all about authenticity. It is designed for real women by real women: the running coaches we have, the team that is creating the race, everything. And it’s based on runners of all different levels and experience, so that authenticity really comes through. We also already talked about this emotional connection, right? So what is it? How can we design a race that pulls a woman in, making her feel like this is something she can do or should do, even the non-runners out there? What is it that we need to deliver to really inspire women to sign up for such a crazy endeavor? And because we got all that nuance across all the different levels and abilities, the race is not just one size fits all. It is being designed to be very inclusive and to cater to women of all levels and abilities, and there’s a very generous time limit on the course.

You can almost walk it. You have to walk, and I’m probably misquoting here, like a 14-minute mile or something like that to do it. We actually have 41% of registrants who are first-timers and have never run a marathon before. So I think we really designed it well for that. And then it’s all built around community. There’s Team Milk popping up everywhere, running clubs for women. And then finally, we want it to be more than a race. Oh, the last thing, I almost forgot my little sneakers down here. In part of the research, we also found out that obviously there are women’s running shoes, but what they are is just men’s running shoes in a smaller size. So we actually worked with this company, and we have a special Every Woman’s Marathon sneaker, and it is designed around the shape and the arch of a woman’s foot.

Mike Patterson:

That’s great.

Kikke Riedel:

Mm-Hmm, <affirmative>. And then finally, this is probably one of my favorite pieces of what came out of all the ideation and the research and the insights, and just the conversations that the entire team and the agency have had with women over the last couple of years: that idea, again, of being able to connect the aspirational with reality, right? We’re not all standing at the top of a mountain, putting a flag down, and yelling like a superhero or something, right? A lot of people are not natural runners. So one of the things that came out of it is that we actually partnered with this wonderful woman, Abi Ayres. She’s an influencer, and she was a committed non-runner.

A certified non-runner, I think she calls herself. And we have actually been following her on this journey. We have started; it’s like three episodes so far. I highly suggest that you check them out; they’re on YouTube. We did it as a cut of three short little episodes, and it’s all about how running sucks and yet how she’s getting exposed to it. She runs for women all over Manhattan. She has an episode that’s called “What the Hill” instead of “What the Hell.” It’s just a very charming, real, authentic representation of somebody who has the aspiration to run, even though they never felt like a runner.

What we learned comparing synthetic respondents and real respondents.

Mike Patterson:

Yeah, that’s great. We will be sharing this deck, so you’ll have the link, and in addition, I believe we’ve copied the link into the chat, so you can access this video. I really encourage you to watch it. It’s very funny, and very moving, really. It’s a great series that they put together. Okay, so let’s wrap up and conclude. So what do we think? What are the conclusions that we’ve drawn having done these three studies? I’ll also note that we’re presenting these three studies, but here at Radius we’ve also done other research comparing synthetic respondents and real respondents and found similar sorts of things. So pretty much universally, we’re finding that real respondents just perform better than synthetic respondents, at least at this point. You know, maybe synthetic respondents can evaluate existing ideas and concepts under certain conditions, but when it comes to generating new ideas and new concepts, I don’t really know if they’re there at this point.

We do feel like the results can work, in some cases, in aggregate. But in the first study that we showed you, as well as other testing that we’ve done, when you’re looking at an individual respondent, we don’t really find that synthetic respondents can mirror that particular respondent. So individual level data is not great. As we noted with the qualitative work, synthetic respondents might help you uncover some of the functional benefits, but when it comes to emotion and those things that really make us human, synthetic respondents just aren’t uncovering those more emotional benefits yet. We also lose a lot of nuance with synthetic respondents, and that’s important for research with consumers, you know, ’cause we’re all unique and you lose that uniqueness. So what are the implications that we’ve drawn? I do feel like at some point, and in some cases, synthetic respondents can be used, but I really feel like you have to validate the results. You have to take real respondents, characterize them, and compare their answers against the synthetic respondents to make sure that there is a match, whether in terms of the industry that you’re in, the topic area, the type of study you’re conducting, the metrics that you’re assessing, and things like that. Kikke, do you have some concluding thoughts?

Kikke Riedel:

Yeah, well, you know, as a research buyer, and I think there are probably a lot of them on this webinar, just be on your toes: if it sounds too good to be true, it probably is. I think we all walk the floors at the different conferences, and there are a lot of people plugging a lot of AI solutions. And I think there are probably use cases out there in terms of actually generating respondents. Not where I would be going right now. Obviously, if you are going to explore this, work with trusted partners such as Radius, who I’ve worked with for 20 years, and then explore potential different use cases. You know, by all means, play around with it. Seek out opportunities to experiment like we did here. But frankly, in short, I prefer my respondents like I prefer my milk, and that’s real.

Mike Patterson:

Great. So, you know, I guess the bottom line when it comes to synthetic respondents is I don’t know that we’re really there yet. We might get there in the future; we’ll have to see. But AI will obviously fundamentally shape and change the way that we’ve traditionally done market research, and there are a number of ways I think it will benefit us as researchers. It’s going to help us increase efficiency, whether that’s taking a bunch of open ends or unstructured text and analyzing it, or taking audio transcripts or videos, transcribing those, and then analyzing that data. Using AI to analyze vast amounts of data, both quantitative and qualitative, all of that is fair game, and those are things that we’re currently seeing.

And will continue to see. We’ll also see, I think, increases in data quality. We’re doing some things ourselves to enhance the respondent experience, which then increases the quality of the data that we get from our respondents. We can also use it in, you know, making sure that we’re screening the right respondents and getting good information from them. And then I also think there are other tools that will be developed. So we’re looking at incorporating AI into, say, our choice models, our conjoint studies, so that we get better data from our respondents and it’s potentially a less taxing experience. I think there will be huge developments in the future, one of which may be more use of synthetic respondents, but really, at this point, it’s early in the game, and I do believe validation testing is completely warranted and necessary. Okay, I believe that was our final slide. Kikke, do you want to say anything else about the marathon?

Kikke Riedel:

Well, just consider joining us. There are, I think, maybe 500 spots left and counting. We are gonna shut it down at 7,000, maxing out at 7,000. It’s November 16th in beautiful Savannah. Our entire team will be there, and we’ll have a Radius representative maybe running part of the race with us. So I hope to see you. You can also use the little QR code at the bottom to go to the website.

Mike Patterson:

Yeah, great. We’ve got a couple more minutes, and I see we’ve got some questions. Let me pull those up. Okay. Do you think there will be a difference in using synthetic respondents for consumer-based studies versus B2B research? My personal opinion is that we’re more likely to have success with these kinds of consumer-based studies, given the training that these models have undergone. I think they’ve been trained more on consumers than on a B2B, more technical audience. That’s not to say that you couldn’t train models on B2B, but I do think, at this point, at this juncture, I would think of these models as more consumer-based than B2B-based. Let’s see. Then, Kikke, I think this would be a question for you. It sounds like you are very wary about conducting research with synthetic respondents. Can you envision changing your mind? And if so, what would persuade you to do so?

Kikke Riedel:

Oh, boy. I think I would need to do the work, whether it was with Radius or somebody else, to do a proof of concept on it. I also think it sort of depends on the study, and under the circumstances of the work that we just shared here, it’ll take a while. You never say never, right? Five years ago, if somebody had said “synthetic respondents,” I would’ve thought you were talking about pantyhose, perhaps, right? So, you know, things can change and things can evolve and get better. At this point, with the layer that we are trying to dig into, I think there are other use cases for it, right? I think we can use it to generate hypotheses. And even just as a researcher, well, that’s not really a synthetic respondent, but AI in general, right? Just in terms of how we store and make our insights available to our teams and our members and so on, I think there’s room for that. For synthetic respondents, I’m in “wait and see” mode; I’ll let Michael convince me in a couple of years.

Mike Patterson:

That’s great. Yeah, I agree. I think we’re still pretty early on in their use, and as I said, I think, you know, we really do need to validate them. And then there’s one more; I think we’ll have time for this real quick. Do you believe AI will eventually replace real respondents? I mean, who knows? Way, way down the road, potentially. But, you know, if I’m to give my honest opinion, I could see synthetic respondents supplementing real respondents. I do believe we always need to put the human element first, whether that’s as researchers or in using respondents; we need to base AI on humans and apply our human knowledge and intelligence. So I don’t know that they’re ever going to completely replace real respondents, but I do think they might supplement them. Kikke, any thoughts on that question?

Kikke Riedel:

I’m aligned with you, Michael.

Mike Patterson:

<Laugh>. Perfect. Alright, well, as I mentioned, we will share this deck out to everyone. Kikke, really appreciate you joining us today. It’s great to hear your perspective. And thank you for being on this journey, and being a good sport to participate. We really appreciate it. And thank you to everyone that’s joined us today. Sorry if we didn’t get to your question. We will take the questions and reply to you individually. Thank you everyone.

 

Does your AI research need assistance from humans? Contact us to explore options to enhance your research.