Episode 102
Synthetic Research vs Human Insights in Marketing
A recent study comparing AI-driven research to traditional focus groups found synthetic research is faster, more efficient, and better at spotting trends in massive datasets than human-to-human methods.
Elena, Angela, and Rob explore how synthetic audiences are revolutionizing market research, from creative pretesting to consumer behavior prediction. Learn how Marketing Architects developed ScriptSooth, an AI-powered creative testing platform proven to predict TV commercial success, and discover practical ways marketers can leverage synthetic audiences to enhance their strategies. Plus, the team plays 'Guess the Focus Group Fail' to highlight why even traditional research methods aren't infallible.
Topics Covered
• [01:00] How synthetic audiences simulate real customer behavior
• [04:00] Why ScriptSooth took thousands of iterations to perfect
• [06:00] Using AI for consumer focus groups and insights
• [09:00] Quick wins for marketers using synthetic research
• [14:00] Overcoming skepticism about AI-driven insights
• [17:00] The future of agentic AI in marketing
• [19:00] Historic marketing research failures and lessons learned
Resources:
2025 WARC Article
Today's Hosts

Elena Jasper
VP Marketing

Rob DeMars
Chief Product Architect

Angela Voss
Chief Executive Officer
Transcript
Angela: There's the argument out there that synthetic isn't real or isn't human. And I think that just misunderstands how AI models are trained and what they actually represent. AI generated synthetic audiences are not invented from thin air. They're built on the aggregation and analysis of real human behavior.
Elena: Hello and welcome to the Marketing Architects, a research-first podcast dedicated to answering your toughest marketing questions.
I'm Elena Jasper. I run the marketing team here at Marketing Architects. And I'm joined by my co-hosts, Angela Voss, the CEO of Marketing Architects and Rob DeMars, the chief product architect of Misfits and Machines.
Angela: Hello.
Do we have a real Snoop Rob or is it a synthetic Snoop Rob with us today?
Rob: Domo arigato, Mr. Roboto.
Elena: So it's the real thing.
Angela: Oh, got it.
Elena: We're back with our thoughts on some recent marketing news, always trying to root our opinions in data, research, and what drives results.
Today, we're talking about synthetic audiences: AI-generated groups of virtual consumers that simulate real customer behavior, which helps marketers test strategies, refine messaging, and predict outcomes before launching. We did cover this topic, but it was about a year ago. And as you can imagine, a lot has changed, and we have some firsthand experience using synthetic audiences to pre-test marketing creative that we wanted to talk about.
So it should be interesting and an important topic for really any marketer. So let's dive in. I chose an article from WARC to feature today. It's titled "Comparing the Quality of Synthetic Research to Human Insights." This article summarizes a study on synthetic research by digital design and engineering studio Siberia in partnership with Synthetic Users and the James Beard Foundation.
They set up a head-to-head test: synthetic research powered by AI versus human-to-human research like focus groups and interviews, all using the same psychographic profiles and research questions. Synthetic research relies on AI, large data sets, and natural language processing to generate insights.
This is all without direct human interaction. They wanted to know if AI could truly replace the deep, nuanced understanding that human research provides. And what they found was AI-driven research was faster, it was more efficient than human research when it came to gathering raw insights. It was great at sifting through massive data sets and spotting trends that a human might miss. They did find that AI couldn't replicate body language or emotional cues.
So for research that required deep understanding, or research that was highly competitive, they recommended human-to-human methods. So that is what just one study found. Rob, I wanted to ask you, how much do these findings align or not align with our own experience with synthetic research? And do you share those same concerns about the more human areas of research not being ideal for AI?
Rob: Yeah. AI is definitely, I mean, I know I'm going to probably state the obvious here, but it's a quantum leap forward in what you're able to do with research. I think there's really three key components they talked about in the research: speed.
Right. When you compress what can take weeks, sometimes months, into seconds, it's like comparing a sundial to an Apple Watch, you know, when you think about just the leap forward. Then cost. I mean, you're literally taking an entire research process that can cost hundreds of thousands of dollars and driving that cost down to almost zero.
And then finally there's the accuracy part, and we've definitely found that synthetic audiences are oftentimes more accurate at predicting in-market outcomes. When it comes to the body language part, I don't have a lot to offer there. I've never seen a huge consumer insight that transformed the growth of a brand come through, like, somebody wiping their brow, or, what is this called? When you do this with your arms,
Elena: It's an audio podcast, Rob. So
Rob: I know, but what is that? When people do that, when they cross their arms, right. In a focus group, I'm like, I don't know. I guess I don't have a strong opinion there, and I'm totally open to being corrected if that led to a huge insight, just the mass data that you're able to aggregate with AI and synthesize is just not even a comparison in my opinion.
Elena: Yeah. That is a good point about body language. Like, how good are humans even at reading someone's body language? Well, we're going to talk more about what we've found these synthetic audiences are good at, but before we get too deep into that, I thought it might be helpful to talk through how we actually use this technology, so listeners know where we're coming from and what our experience is. So Ange, would you mind walking through how Marketing Architects is currently using synthetic audiences?
Angela: Yeah, I mean, I think we could spend a lot of time in this area, but I can hit several key ways we use it to enhance our marketing strategies and optimize client outcomes. I think one of the most powerful applications has just been in that consumer focus group type environment, right, for brainstorming and uncovering new insights.
You know, it's funny, in the marketing space, somehow we got to a place where we've felt like if we were to put 12 people in a room, we're going to produce gold insights. It's sort of funny to think about: it's 12 people, or it's 20 people, like, out of all of the consumers, you know, and you go, are you really getting quality out of that?
But in this case, you know, we can use synthetic groups, where AI-driven research allows us to rapidly test early-stage concepts, campaign directions, and emerging trends. Instead of relying solely on that traditional focus group method, we can simulate diverse consumer segments as we think about broad marketing principles. I think often marketers go, well, I really need to understand my core, that's the most important. Well, why do you?
Why do you go there? Because research is expensive and it takes a lot of time. So of course you focus in on that low-hanging fruit, so to speak. But if we need to reach new audiences and create relevance with a broader group of people, this is a great way to do that, you know, uncovering brand positioning strategies. And then the biggest one, and Rob was instrumental in this: one of our most innovative applications is ScriptSooth, our proprietary AI creative pre-testing platform. It's been proven to be highly predictive of in-market TV results. And I think that's a key piece. And I know we're going to get into trust later, but we were sitting on a bed of performance history in terms of both radio and television.
And so being able to validate that we could leverage large language models and test as many scripts as necessary against synthetic audiences to identify the most effective creative concepts before production, you know, it ensures that the commercials we bring to market are already optimized for maximum effectiveness, which helps reduce risk and increase the likelihood of campaign success. So that's been a huge one for us and for our clients.
Elena: And Angie mentioned that we used a lot of our own history to build ScriptSooth. A tool like that is custom-built, it's pretty complex. Like, I know it took us a long time to build it, and the data you put in matters a lot. This isn't as simple as just asking ChatGPT what commercial is going to perform the best.
Angela: Agree. And I think that's where you hit the trust problem: people go, well, of course I can throw anything into ChatGPT and ask it what it thinks, but how do I trust that it's predictive, or validated, or representative of whatever that consumer sentiment might be?
Rob: Yeah, when we were developing ScriptSooth, there were really three key components that I feel like helped us unlock it. One you've mentioned, which is the proprietary data set. I mean, that was really used to calibrate the prompt engineering that works behind the tool. Second, building a methodology that can withstand the instability of the LLMs, because they are their own beasts.
And you have to make sure that you're accounting for that, or your results won't hold up. And then I think last, how do you operationalize it in a way that's scalable? You shouldn't have to be a deep AI scientist to be able to use the tool. The tools that you build have to be usable by everybody on your team.
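To make that second component concrete: here is a minimal sketch of one common way to account for LLM instability, assuming the OpenAI Python client. The model name, rubric, and scoring prompt are illustrative, not ScriptSooth's actual method; the idea is simply to sample the same question several times and aggregate rather than trust a single completion.

```python
# Illustrative only: stabilize an LLM "creative score" by repeated sampling.
from statistics import mean, stdev

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def score_script(script: str, runs: int = 5) -> dict:
    """Ask the model to rate a TV script 1-10 several times, then aggregate."""
    scores = []
    for _ in range(runs):
        response = client.chat.completions.create(
            model="gpt-4o",  # hypothetical model choice
            messages=[
                {"role": "system",
                 "content": "You are a panel of TV viewers. Rate this script "
                            "from 1 to 10 on likelihood to drive response. "
                            "Reply with only the number."},
                {"role": "user", "content": script},
            ],
            temperature=1.0,  # keep the sampling variability, then average it out
        )
        scores.append(float(response.choices[0].message.content.strip()))
    # A wide stdev flags a script the model is unstable on, worth re-testing.
    return {"mean": mean(scores), "stdev": stdev(scores), "runs": runs}
```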
Elena: And Rob, I know it took us like thousands of different iterations of this, right? To get to one that was usable. Like, this took us a long time. One assumption that's hard for me when I'm talking about it with marketers is, because the output happens so much faster, people assume there wasn't a lot of proprietary work or, you know, an advantage going into building it. It's not that simple.
Rob: No, it took us at least 4 months and thousands of prompt considerations for us to get to the ultimate tool that we were able to create. And again, even all of that would have been really just a theory if we didn't have our own proprietary data to stress test everything against. We needed to make sure that we were grading our own homework and that the tool was actually accurate at the end of the day. And to Angie's point that we could trust it.
Elena: Yeah, I think that's probably one of the big challenges right now with using any synthetic audiences: how do you know if it's accurate? Commercial pre-testing is kind of our favorite example because we're a TV agency, but there are a lot of other opportunities for marketers to use AI.
And I can speak to one. Every year at our agency, we run an annual survey asking marketers what they think about a variety of topics. Things like, what channels are you interested in, and what do you look for when you're talking to other agencies, just a lot of general questions. And this year we ran our traditional survey, and then we also ran a synthetic survey, just using ChatGPT and plugging the demographics of our audience into it.
And it was scary accurate. It was accurate on every single question, and we had, like, 20 to 30 questions. The only ones where it was a little bit off were the average TV test size a marketer's interested in, and then, obviously, any general questions about our brand awareness. It was very generous with how many people knew Marketing Architects, which I don't think is totally accurate.
But besides that, for our survey next year, for these general questions, I'm definitely just going to use the synthetic audience because it was so accurate. But I was hoping you could also share other ways you think a brand might use synthetic audiences. Not everybody listening to this podcast is running TV commercials, so any sort of quick wins might be helpful.
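For a sense of what that synthetic survey looks like in practice, here is a minimal sketch, again assuming the OpenAI Python client; the persona and questions are invented for illustration, not the agency's actual survey.

```python
# Illustrative only: answer a survey as a demographically defined persona.
from openai import OpenAI

client = OpenAI()

PERSONA = ("Answer as a U.S. marketing director at a mid-size consumer brand, "
           "age 35-50, managing a seven-figure annual media budget.")

QUESTIONS = [
    "Which marketing channels are you most interested in testing this year?",
    "What do you look for when evaluating a new agency?",
]

def run_synthetic_survey(questions: list[str]) -> dict[str, str]:
    """Pose each survey question to the persona and collect the answers."""
    answers = {}
    for question in questions:
        response = client.chat.completions.create(
            model="gpt-4o",  # hypothetical model choice
            messages=[
                {"role": "system", "content": PERSONA},
                {"role": "user", "content": question},
            ],
        )
        answers[question] = response.choices[0].message.content
    return answers

if __name__ == "__main__":
    for q, a in run_synthetic_survey(QUESTIONS).items():
        print(q, a, sep="\n", end="\n\n")
```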
Angela: Yeah, and you didn't mention this one, but some of our own marketing, you know, LinkedIn ads, et cetera, we've also used synthetic audiences for, and it's been predictive of what we find in terms of in-market results. So from a digital perspective, that could be helpful. I would also say competitive analysis and scenario testing: by training synthetic audiences on real-world behavioral data, brands can model different competitive landscapes and predict how consumers might react to changes in pricing, changes in positioning, or even major market disruptions.
We're hoping not to have any of those in the near future, but we did have that happen, obviously, during COVID. And then I think with AI, to your point, Rob, it's a quantum leap forward in terms of what we can go do, compared to what we might have been constrained by in the past in terms of the amount of content that we can push out there.
So, things like personalization and dynamic content testing: brands can use synthetic audiences to pre-test different variations of personalized ad creative, whether that's email subject lines or landing pages, to determine which messaging is most likely to drive engagement and conversion among specific customer segments.
So there's a lot of opportunity, and spending dedicated time as a marketing team thinking through what those opportunities are, charting out, you know, what's the low-hanging fruit versus what we can gain with the bigger moves, and putting those on the roadmap for the future, is time well spent.
Rob: Yeah, I'll say, continue to look at the tools you do decide to use and stress test them against some of the frontier tools that are coming out. So for instance, both Google and ChatGPT have released deep research functions now that use advanced AI, and they're really powerful. You can even go back to some of the tools you used before, that you're paying a lot of money for, and go, wow, these things are crushing those. And they only cost like $20 a month. I actually laugh at that. Sometimes you'll be like, hey, have you tried Google's new deep research? Well, I don't want to pay $20 a month. It's like, well, that's like the cost of the M&Ms you put out in the focus group to, you know, get research that didn't even matter. Like, come on.
Angela: Totally. Yep. Yep,
Elena: I think one takeaway from this is, if you're a marketer and you aren't using ChatGPT, Gemini, some sort of large language model, and you don't have your audience plugged in there, you're missing out, because you could be asking it questions. What do you think about this? We have an audience within a custom GPT where I can just go ask this quote-unquote marketer questions.
What would you think of this campaign? What would you think of this email, this messaging? I mean, if you're a brand and you don't have your audience in one of these, there's just no reason in my mind not to do it, because all it's going to do is help you feel more confident about your campaigns, and you can ask it anytime. If you don't have that, that's just low-hanging fruit for anybody. One thing I wanted to talk about was skepticism, because that, I think, is probably the biggest barrier right now for marketers trying to use synthetic audiences.
And it might be because people think AI lacks human creativity. Maybe that the models aren't sophisticated enough. So let's talk about that. What do we think has to change for more marketers and brands to feel comfortable using this stuff?
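A minimal sketch of that custom-GPT habit, assuming the OpenAI Python client: hold the audience in a system prompt and ask it creative questions on demand. The persona and the subject line below are invented for illustration.

```python
# Illustrative only: a reusable "synthetic audience" you can question anytime.
from openai import OpenAI

client = OpenAI()

AUDIENCE = ("You are a composite of our target customer: a 30-55 year old "
            "homeowner who researches purchases carefully and streams TV "
            "most evenings. React honestly, in the first person.")

def ask_audience(question: str) -> str:
    """Put one campaign, email, or messaging question to the persona."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[
            {"role": "system", "content": AUDIENCE},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_audience(
    "What would you think of this email subject line: "
    "'Your furnace is costing you $40 a month'?"))
```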
Angela: I think a lot of it is just usage. You know, it's just like anything else. There's an adoption curve. So you've got early innovators or early adopters. Rob for sure is in that bucket. I would say us as an agency, we are definitely in that bucket. But I think a lot of the skepticism stems from the fear that AI oversimplifies consumer behavior.
Right, so reducing people to data sets that lack emotion or nuance. And I think to overcome that, marketers need clear evidence that synthetic research can produce insights that are just as actionable, and in some cases more precise, than traditional methods. And they're just getting better and better and better.
And then I think there's the argument out there that synthetic isn't real or isn't human. And I think that just misunderstands how AI models are trained and what they actually represent. AI generated synthetic audiences are not invented from thin air. They're built on the aggregation and analysis of real human behavior.
It's just collected at scale through these vast data sets. The foundation of any AI model, whether it's used for research or prediction or creative generation, is historical human data. And I think there's still a bit of a misunderstanding: when you can do something in seconds instead of weeks or months, and you can do it for far less, it feels like it's fabricated or somehow unreal.
And I think usage, and becoming more accustomed to understanding the nuance of LLMs and how you need to prompt them, is part of what's on you. Like, we've got to come across the bridge. You have to be able to do it well in order to gain the insight that's going to be useful to the brand. But in getting there, I think it unlocks a world of potential in terms of growth, frankly, for brands.
Rob: Yeah, I think that's a great answer: usage and time, right? A year ago, people thought we had a third eyeball when we talked about ScriptSooth, and now it's, yeah, of course it works. I think you've had some of your spicier LinkedIn posts related to synthetic testing, if I recall.
Elena: People get really upset about synthetic testing.
Rob: And now it's like, yeah, of course it is. I mean, it was the same thing with generative video, when people were like, oh, that sucks, it's crazy, it looks like people's faces are melting off their heads. And now it's like, oh, okay, yeah, of course we're going to be making TV commercials that way. We are making them. So it's usage and time.
Elena: I agree with you. It's hard to find data on how many companies allow ChatGPT. The last thing I saw was around 40% to 50% just allow it in general, but I think that was an estimate of companies in the U.S. overall. I found another stat from late 2024 that 77% of marketers are using something like ChatGPT. And that's where I think, how is it not a hundred percent?
If you're a company that's not allowing your employees to use ChatGPT, I mean, first of all, they probably are, they're probably using it without you knowing it, but it's such a mistake. You're losing such an advantage. And yeah, I wonder, too, if part of the challenge marketers face is just C-suite buy-in to data like this, because there's just skepticism.
So a lot of marketers, I think, might not want to run something like this and present it, because they're going to get pushback, and people are just more comfortable with data that comes from humans, even if you can prove it's more accurate. There's just that sense of wanting the human data. All right. Well, AI has changed a lot in general in the past year, in the past few weeks, even in the last couple of days. Everything is changing constantly.
So Rob, I wanted to ask you what improvements have we already seen with these sorts of models that impact synthetic audiences? And what do you think might still be to come in this area? What can marketers get excited about in the future?
Rob: I am super excited about what's going to happen with agentic AI. You're already starting to see that happen. Now, if you're not familiar with the term, it's basically AI being a super smart assistant for you. You're able to send it off to do multi-level tasks. And if you check out the recent release from OpenAI and their Operator, you can get a demo of how simple yet effective the technology can be. But when you take that and multiply it by ten, you can see all the potential possibilities.
Things like, how can it go and collect data in ways that current search can't accomplish? You can literally send out AI agents to collect and synthesize and analyze consumer data and behavior like we've never been able to do before. Or, we talked about focus groups and using synthetic audiences, but now imagine having agentic focus groups, where you're able to have different types of agents debating each other on particular products, features, and benefits.
And then also using agents to do real-time A/B testing on your campaign, so literally having agents go stress test the flows on your website. I mean, all of these things are just on the fringes of happening. So I'm really optimistic that it's going to continue to empower marketers to do things with the same criteria we talked about before, right? Speed and cost and accuracy are just going to get better.
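A minimal sketch of the agentic focus group Rob imagines, assuming the OpenAI Python client: two persona agents take turns reacting to a concept and to each other. The personas, model name, and concept are invented for illustration.

```python
# Illustrative only: two persona agents "debate" a product concept in turns.
from openai import OpenAI

client = OpenAI()

CONCEPT = "A clear, caffeine-free cola marketed as a lighter alternative."

PERSONAS = {
    "Skeptic": "You are a price-sensitive shopper who distrusts novelty products.",
    "Enthusiast": "You are an early adopter who loves trying new beverages.",
}

def run_debate(rounds: int = 2) -> str:
    """Alternate the personas, feeding each one the running transcript."""
    transcript = [f"Moderator: React to this concept: {CONCEPT}"]
    for _ in range(rounds):
        for name, persona in PERSONAS.items():
            response = client.chat.completions.create(
                model="gpt-4o",  # hypothetical model choice
                messages=[
                    {"role": "system",
                     "content": persona + " Keep replies to two sentences."},
                    {"role": "user", "content": "\n".join(transcript)},
                ],
            )
            transcript.append(f"{name}: {response.choices[0].message.content}")
    return "\n".join(transcript)

print(run_debate())
```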
Elena: Yeah, I think we've got to keep our eyes on agents. I'm so excited. There are a couple of use cases in my job that I am just waiting on. I know that ChatGPT has Operator and, you know, we can start testing stuff like that, but I'm very excited for what agents could potentially do. All right. To finish up here, I've put together a little game for us.
This one is called "Guess the Focus Group Fail." Now I'm not trying to pick on focus groups, but I thought this would be fun. And I think it's good to remind us that, you know, AI makes mistakes, but so do humans. And sometimes they're big ones. So I'm going to talk about a focus group sort of failure and give you three options for what the failure was, and I want you to guess what you think it was.
Angela: Okay.
Rob: Let's do it.
Elena: First one, this has to do with New Coke, back in 1985. Some say this is the biggest focus group fail of all time. So, 1985: Coca-Cola reformulates their iconic soda based on taste tests. But what did these focus groups fail to consider? A, the formula tasted too artificial. B, people actually loved the original Coke's brand and emotional connection. Or C, the formula contained an ingredient that turned people's tongues blue.
Angela: I'd have to go B on that one.
Rob: It was B for sure.
Elena: Yeah, it was B. So the focus group, they loved the taste, but they didn't realize people had this deep attachment to the brand.
Rob: Don't mess with my Coca-Cola.
Elena: Why change?
Angela: Why are you trying to fix something that's not broken!
Elena: Not broken. Okay. Example two. This one has to do with Pepsi. Alright, this is about Crystal Pepsi. In fact, I don't know if you remember it.
Rob: Oh, I do.
Elena: Okay. This is the 90s. Pepsi launched a clear soda called Crystal Pepsi. What was the fatal flaw in the focus group testing? A, people thought clear meant healthier, but it wasn't. B, it tasted identical to Pepsi, making it pointless. Or C, the bottle exploded when shaken.
Rob: Wow. What do you think, Angela?
Angela: That's a good one. I'm going to go with A. People thought for some reason it was better.
Rob: I'll go B.
Elena: It was A. So the focus groups, they associated clear drinks with healthier beverages, but it was still just Pepsi. So it flopped.
Angela: It was like when Miller came out with clear beer. I don't know if you guys remember that. They did the same thing. I was in college at the time. It still made you drunk. Sometimes humans, I don't know, go, oh, it's clear, it must be like water, it's healthy. Like, come on.
Rob: Yeah.
Angela: Come on.
Elena: Yeah. I don't, I mean, I might think the same thing to be honest, but
Rob: It was like Zima. Is Zima still around? That was like the clear one.
Angela: Yep. Malt liquor. Yep. Yep. I know Zima. Used to drop a Jolly Rancher in it.
Elena: Zima sounds like some sort of, like, disease or something to me.
Angela: I don't know if Zima's around. If they are, like, props to them, because I have not seen any marketing. I don't know how they could continue to exist.
Elena: Okay, I've got two more for you. So this next example has to do with Ford. In 1957, Ford had spent millions researching and testing the Edsel, E-D-S-E-L, I'm not sure how to pronounce that, Edsel, but it became one of the biggest automotive flops in history. What did the focus groups get wrong? A, the car was too futuristic for the average buyer. B, the focus groups didn't predict that consumers wouldn't like the unusual styling and name. Or C, the steering wheel was designed backwards, making it hard to drive. I think the C answers are kind of throwaways, if you haven't guessed.
Angela: Or C, the car didn't have wheels.
Rob: It's funny though, cause I'm actually, I remember the Edsel was a big flop, but I thought it was cause it was really bad at driving, like it was a bad car.
Angela: It didn't have wheels!
Rob: I'm going to go with, I almost went with C for a minute, because I don't remember why it didn't work, but I thought the thing was, like, it was a lemon.
Angela: Okay, I'll go B.
Elena: That was the correct answer. So they thought this unique style and branding would be a hit, but consumers found the name strange and the styling unattractive. It shows the limits of consumer testing, even from a brand like Ford. Okay, I've got one more. This one disgusts me just talking about it, but this is about Colgate frozen dinners. So Colgate, which we know for toothpaste.
Rob: Oh, yeah.
Elena: Yuck! They once launched a line of frozen dinners. What did the focus groups fail to predict? A, people associated the brand too much with toothpaste, making the food seem unappetizing. B, the meals contained an ingredient that caused mild mouth numbness. Or C, the packaging design made it look like a medical product.
Angela: I mean, just based on Rob's reaction and my internal reaction, I've got to go with A.
Rob: I do too. I think I got a stomach ache when, you know, you're not supposed to swallow your toothpaste, and yet you're going to, yeah, I've got to go with A too.
Elena: Yeah, that is correct. I don't know how you get that wrong. But the lesson is that even, you know, focus groups, humans, sometimes we get it wrong, and it can't prevent flops. So give AI, you know, give it some grace.
Angela: And we don't know what the quality of these focus groups was. How many people were they talking to? And were they representative of the broad sample?
Elena: So there we go. You guys are getting good at these games.
Rob: I'm old. I just remember all these things because I'm so old.