Qualtrics X4 and the New Questions Facing Insights Leaders

Qualtrics X4 2026 marked a pivot from AI hype to trust and action. Explore how insights leaders navigate synthetic research and the evolving role of judgment.

Editor’s Note: While the prevailing market story about Qualtrics focuses on debt volatility and AI’s threat to software defensibility, this piece keeps the focus on the current "boots on the ground" reality. Moving past high-level financial analysis, these seven critical questions distill key takeaways from X4 into a roadmap for insights leaders as synthetic research transitions from hype to operational necessity.


At Qualtrics X4 2026, the most important story was not simply that AI is here. It was that the questions facing insights leaders are getting sharper, more practical, and harder to avoid.

If you spent any time at the event, you heard plenty about AI (that part was expected). What felt more important, though, was how much the conversation has moved on from whether AI will shape research and customer experience (it already is).

The real questions now are about what to trust, where synthetic research fits, what should still be human-led, and how insights leaders keep their footing as more teams gain direct access to AI-powered tools.

Those are not abstract questions anymore; they're operational ones. Organizational ones. In some cases, existential ones.

And after several days of keynotes, interviews, and product sessions, that was my clearest takeaway: the conversation is getting more specific, more credible, and, in some cases, more honest.

The loudest themes at X4 were not just speed and automation. They were trust, context, judgment, and action.

The First Question: How Do Insights Teams Move from Measurement to Action?

Remember, this is an experience management summit. So I'm starting there.

One of the clearest messages at X4 was that measurement alone is no longer enough.

Qualtrics President of Product and Engineering, Brad Anderson, put it plainly: “Experience is no longer a feature of your business. Experience is your business and your brand.”

The emphasis was not just on listening to customers or employees, but on closing the gap between understanding and acting while decisions are still live.

Anderson also argued that “AI without context is blind.” The key problem with automation is that without context, without the data, history, and situational understanding a decision requires, it just isn't useful.

That idea surfaced in customer examples from sectors as different as restaurants, lawn care, telecom, and healthcare. The common promise was this: connect more signals, understand them more quickly, and help organizations act sooner and more precisely.

For insights leaders, that raises an immediate question: if the business is moving faster, how does research become more decision-useful without becoming shallower?

The Second Question: What Does Context Actually Look Like in Practice?

So, I'll point this out, since I've already used the word a bunch: if there was one word I heard over and over again, it was context.

I heard it in keynote sessions, product sessions, and in the interviews the Qualtrics team set up for me.

Carly Rintisch, Senior Manager, Consumer Insights at T-Mobile, described to me how her team rebuilt an internal customer experience program that had become bloated, repetitive, and, in her words, “not trusted” inside the organization. 

One of the biggest changes was deceptively simple: they needed to stop asking customers to repeat facts the company already had answers to. “We needed to stop asking what we already know,” she told me. “We only ask what we don’t know.”

Here's a practical example of how that shift changed things at T-Mobile: They reduced a survey from more than 30 questions down to nine, cut it from around 10 minutes to two, and created a simpler system for comparing journeys across channels.

And in dramatically shortening the survey, they also rebuilt the team's internal credibility.

In this case, context in practice meant treating the existing customer data as the foundation for the conversation; by acknowledging what was already known, the team transformed the survey from a repetitive administrative task into a high-value gap analysis that respected the customer's time and the researcher's expertise.

That feels like an important lesson for insights leaders well beyond telecom. As organizations get better at connecting behavioral, transactional, and experience data, the value of research may shift further toward explanation, interpretation, and guidance, not just collection.

The Third Question: Where Is Synthetic Research Truly Useful Right Now?

No topic at X4 carried more energy, or more tension, than synthetic research.

Qualtrics has clearly made it a strategic priority. In an interview, Ali Henriques, Head of Qualtrics Market Research, told me directly that the company had to make a choice. “We had to place a bet on something,” she said. “And that was synthetic.”

Henriques also argued that much of the market still misunderstands what Qualtrics is trying to build. “We built the hard thing first,” she said. “Selling it as quantitative output isn’t sexy.”

So Qualtrics chose to focus first on a model that takes survey input and returns record-level structured data, rather than jumping straight to more marketable interfaces. That may not be the flashiest story to tell on stage, but it is a meaningful one for research buyers who are concerned about quality, validity, and use-case fit.

I continued the conversation with Jordan Harper, Principal AI Thought Leader at Qualtrics. He was careful not to frame synthetic as a wholesale replacement for human research. “I think it’s an additive tool to researchers,” he said.

That distinction came up repeatedly at X4. The strongest case being made was not that synthetic can do everything. It was that it can increasingly do some things well, especially broad directional work, early testing, faster iteration, and lower-risk exploration.

So the question for insights leaders is not whether to care about synthetic. It is how to decide where it belongs. Which leads us to the next question.

The Fourth Question: What Kinds of Research Still Depend on Human Discovery?

Even amid all the excitement around synthetic research, some of the most thoughtful comments I heard at X4 were about its limits.

Eva Ng of Schneider Electric offered me one of the clearest distinctions of the event when we spoke. There are, she said, two ways to think about research.

“One is using research to prove you have a strong hypothesis,” she told me. “Then there’s a second way to look at research which is to ... to truly search.”

That difference is not academic. It goes directly to what synthetic can and cannot do well.

“If you have a question,” Ng said, “it is something that you know you don’t know. But when you don’t know what you don’t know, you can’t even have a question.”

Sidebar: That concept resonated with me, reminding me of a stage of competence called unconscious incompetence: the stage at which you don't know that you don't know something.

Anyway, Ng identified something many researchers know instinctively: some of the most valuable work does not begin with a cleanly framed problem. It begins with the odd comment, the unexpected use case, the side path, the thing no one thought to ask until a real person surfaced it.

That kind of discovery is harder to reduce to a prompt.

Ng is not anti-synthetic, quite the opposite. She sees synthetic research as something practitioners should learn, test, and blend into their work.

“We will learn together,” she said. “Those who are unwilling and stand on the side scared and terrified, sorry, the world will move on without you.”

That feels like one of the more useful messages insights leaders could take from X4.


The Fifth Question: How Is the Role of the Researcher Changing?

One of the more candid threads running through my conversations at X4 was that all this change requires a shift not just in mindset, but in the role of the research practitioner.

Henriques said as much when we talked about what AI is doing to internal power dynamics around research. “This is a very pivotal year for the insights professional,” she told me. “We are losing power.”

What she meant was not that researchers are disappearing. It was that as more stakeholders gain direct access to tools, data, and AI-enabled workflows, they can bypass the traditional research request process altogether. Agencies can go directly to marketing teams. Product teams can experiment with new tools. Providers can put AI-powered interfaces in front of non-research users. And that raises the stakes for insights leaders.

So the practitioner's task is not just to execute studies anymore. It is to guide method choice, establish trust, pressure-test outputs, and help organizations avoid making consequential decisions on top of bad or overconfident AI-generated work.

That same theme surfaced in one of the thought leadership sessions late on Day 2, where speakers argued that the scarce resource is no longer insight itself, but knowing which insights to trust.

That framing may sound dramatic, but it fits the moment. As it becomes easier to generate answers, summaries, themes, and even recommendations, the value of the researcher may move further up the chain, from production to architecture, interpretation, and governance.

The Sixth Question: How Do Insights Leaders Make the Business Case in Financial Terms?

Another important throughline at X4 was the increasing insistence that experience and insight work be tied directly to financial performance.

Ben Dunham, CFO of TruGreen, gave one of the clearest examples in his keynote presentation. Retention, he said, was the unlock for the company’s growth strategy, and customer and employee experience were central to improving it.

He walked through the economics directly. One point of customer retention, he said, was worth about $10 million in revenue and $5 million in EBITDA in the first year alone. Over time, that compounds materially.

He closed with two simple equations: “Relationships plus trust equals retention,” he said. “And hopefully from this presentation, you can see that retention equals profitable growth.”

Inspire Brands CEO Paul Brown offered a related perspective in his keynote address, describing how the company built a unified data platform across brands, transactions, customer feedback, and operational systems because “data ultimately was going to be a core strategic asset.”

These are not soft arguments. They are arguments about infrastructure, economics, and competitive advantage.

For insights leaders, that may be one of the most practical takeaways from X4: the business case is there, but it has to be made in operational and financial language, not just in the language of awareness or satisfaction.

The Seventh Question: Does Human Judgment Become Less Important, or More?

While AI dominated the technical sessions, the keynote stage repeatedly returned to a truth that technology cannot replicate: the necessity of human interpretation in turning data into direction. Let me share three standout concepts:

  • Las Vegas Raiders President Sandra Douglas Morgan said in her keynote that the hard part of leadership is not getting to perfect certainty. It is knowing when to move. Data, she said, “provides the what,” but leaders still have to act. “You have to be able to provide some decisive action knowing that you’re never going to get everything to get to 100%.”
  • Priya Parker, author of The Art of Gathering, argued that experience design is not only about physical logistics or technical systems. “We must also design software of connection,” she said.
  • And Jay Shetty argued that one of the most important leadership capabilities is the ability to make connections others do not see. 

Individually, these talks were inspiring, of course. But taken together, they highlight that while AI can synthesize vast amounts of information, it cannot replace what only a human leader can do: see the connections others miss, make room for them, and act on them.

The Questions Now Facing Insights Leaders

The easy version of this story is that AI has arrived in research and experience management. That is true, but it is not new.

What X4 revealed is that the next phase of this shift is going to be defined by harder questions:

  • How do insights teams become more action-oriented without sacrificing rigor?
  • What level of context is required before automation becomes useful rather than risky?
  • Where does synthetic research belong, and where does it not?
  • What kinds of discovery still depend on real human conversation?
  • How do researchers retain influence as tools spread beyond the function?
  • How do leaders make the business case in financial terms?
  • And what kinds of judgment become more valuable as answers become easier to generate?

That's the work ahead. And insights professionals, the practitioners themselves, have to do the work. Because the future of insights will not belong to those who simply move fastest. It will belong to those who know what deserves speed, what requires care, and how to hold both at once.


Karen Lynch

Head of Content at Greenbook

Disclaimer

The views, opinions, data, and methodologies expressed above are those of the contributor(s) and do not necessarily reflect or represent the official policies, positions, or beliefs of Greenbook.
