10 Questions to Ask Every Research Sample Supplier

Bad sample has always been the silent killer of good research. AI just made it a lot harder to detect.

Your Research Is Only As Good As Your Sample

Your methodology can be flawless. Your questionnaire can be brilliant. Your analysis can be award-worthy. None of it matters if the people answering your survey aren't real — or aren't trying.

Sample quality has always been the foundation everything else sits on. But something shifted in the last two years. Generative AI didn't just change what researchers can do — it changed what bad actors can do. The old fraud playbook (bots, click farms, professional respondents gaming incentive systems) has been upgraded. Synthetic respondents can now pass attention checks, write convincing open-ended responses, and maintain consistent personas across a survey. Traditional quality controls weren't designed for this.

At the same time, AI is giving serious panel suppliers new tools to fight back: real-time behavioral analysis, natural language scoring of open ends, anomaly detection at scale. The gap between suppliers who are investing in these capabilities and those who aren't is widening fast.

The questions below are designed to help you tell the difference — and to protect your research before a single survey launches.

1. Where does your panel actually come from?

This is the most important question, and the one suppliers are least prepared for. Push past the marketing answer. Ask specifically: what are your primary recruitment channels? What percentage of your panel is recruited organically vs. sourced from third parties?

Why it matters: Many panels don't own their respondents — they rent them from other panels. If you use Panel A and Panel B to "diversify," you may unknowingly be drawing from the same underlying pool. Worse, neither panel may have visibility into how the originating source was recruited. Always ask for origination transparency.

2. How are you detecting AI-generated responses?

This is the new front line of sample quality, and most suppliers are underprepared. Generative AI can now produce open-ended survey responses that are grammatically correct, contextually plausible, and nearly indistinguishable from genuine human answers — at scale, in milliseconds.

Ask your supplier specifically what they're doing about it. Look for answers that include AI detection tools applied to open-end responses, perplexity scoring (which flags unnaturally consistent or fluent text), and flagging of response patterns that suggest templated or synthetic generation. Vague reassurances aren't enough.
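To make perplexity scoring concrete, here is a minimal sketch using an off-the-shelf language model. The model choice, the threshold, and the flagging rule are all illustrative assumptions, not any supplier's actual pipeline; real systems calibrate against a baseline of known-human responses.

```python
# Minimal perplexity-scoring sketch (illustrative only).
# Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of text under GPT-2; lower means more model-like fluency."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

# Hypothetical rule: unusually LOW perplexity relative to the panel's
# human baseline suggests machine-fluent text. Calibrate per study.
PPL_FLOOR = 25.0  # illustrative threshold, not a standard

open_ends = [
    "i liked it becuase it was cheap n did the job mostly",
    "The product offers exceptional value, combining affordability with "
    "consistent, reliable performance across everyday use cases.",
]
for text in open_ends:
    ppl = perplexity(text)
    flag = "REVIEW" if ppl < PPL_FLOOR else "ok"
    print(f"{flag:>6}  ppl={ppl:7.1f}  {text[:48]}")
```

Perplexity is a weak signal on its own; in practice it works as one feature among many, alongside behavioral and device-level evidence.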

Warning sign: If a supplier says "we use attention checks" as their primary AI defense, they're fighting a 2026 problem with 2019 tools. AI agents have learned to pass standard attention checks.

3. What does your fraud detection stack look like?

A credible supplier can describe their fraud detection in specific, technical terms — not just "we take quality seriously." Look for a layered answer that includes digital fingerprinting, IP geolocation verification, device reputation scoring, third-party ID validation, and behavioral anomaly detection during the survey itself.

The sophistication of this answer tells you a lot. Suppliers who have invested in real fraud infrastructure can describe their stack in detail. Those who haven't will pivot to talking about panel tenure and community engagement.
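As an illustration of what "layered" means in practice, here is a minimal sketch of how such signals might be combined into a single risk decision. Every field name, weight, and threshold below is a made-up assumption; production stacks use vendor-specific services and calibrated models rather than hand-set weights.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    # Each field stands in for one layer of the stack described above.
    fingerprint_seen_before: bool     # digital fingerprinting
    ip_matches_claimed_country: bool  # IP geolocation verification
    device_reputation: float          # 0 (bad) to 1 (good), third-party score
    id_validated: bool                # third-party ID validation
    behavior_anomaly: float           # 0 (normal) to 1 (anomalous), in-survey model

def risk_score(s: SessionSignals) -> float:
    """Combine the layers into one score in [0, 1]; weights are illustrative."""
    score = 0.0
    if s.fingerprint_seen_before:
        score += 0.30
    if not s.ip_matches_claimed_country:
        score += 0.25
    score += 0.20 * (1.0 - s.device_reputation)
    if not s.id_validated:
        score += 0.10
    score += 0.15 * s.behavior_anomaly
    return min(score, 1.0)

session = SessionSignals(False, True, 0.9, True, 0.8)
print(f"risk={risk_score(session):.2f}")  # e.g. route anything above 0.5 to review
```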

4. How do you handle professional respondents — and AI-assisted ones?

Professional respondents — people who take dozens of surveys a month purely for incentives — have been a chronic problem in online research for years. AI assistance makes them more effective and harder to detect. A respondent using an AI tool to answer questions can complete surveys faster, write more convincing open ends, and maintain greater internal consistency than they could alone.

Ask how the panel identifies and manages high-frequency respondents. Frequency caps and cross-panel de-duplication matter. So does behavioral profiling over time: does a respondent's answer quality degrade at high survey volumes? Are their open ends suspiciously consistent across multiple surveys?
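A minimal sketch of how frequency caps and cross-panel de-duplication can work, assuming panels share hashed identifiers rather than raw PII; the cap, the hash inputs, and the in-memory registry are all illustrative.

```python
import hashlib
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical in-memory registry; real de-duplication services share
# hashed identifiers across panels instead of raw PII.
completes: dict[str, list[datetime]] = defaultdict(list)

MONTHLY_CAP = 10  # illustrative frequency cap, not an industry standard

def stable_id(email: str, device_fingerprint: str) -> str:
    """Hash identity inputs so panels can match respondents without sharing PII."""
    raw = f"{email.strip().lower()}|{device_fingerprint}"
    return hashlib.sha256(raw.encode()).hexdigest()

def allow_entry(rid: str, now: datetime) -> bool:
    """Enforce a rolling 30-day cap for one respondent id."""
    window_start = now - timedelta(days=30)
    completes[rid] = [t for t in completes[rid] if t >= window_start]
    return len(completes[rid]) < MONTHLY_CAP

rid = stable_id("jane@example.com", "fp_4f2a")  # hypothetical respondent
now = datetime.now()
if allow_entry(rid, now):
    completes[rid].append(now)  # record the new complete
```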

5. What quality control checks run during fielding?

Real-time quality control during data collection should be table stakes. Look for the following checks (a short detection sketch for the first two follows this list):

  • Response time monitoring: flagging completions that are too fast to be genuine — though note that AI-assisted respondents may actually slow down to appear more human

  • Straight-lining and pattern detection: identifying respondents who click through rating scales without differentiation

  • Open-end quality scoring: reviewing text responses for gibberish, copy-paste, AI-generated filler, or suspiciously high fluency

  • Behavioral consistency checks: cross-referencing responses within a single survey for logical contradictions

  • Device and session anomalies: flagging unusual patterns like virtual machines, emulators, or headless browsers
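Here is the promised sketch of the first two checks, response-time monitoring and straight-lining, over a toy pandas DataFrame of completes. The column names and cutoffs are assumptions for illustration, not standard values.

```python
import pandas as pd

# Toy completes: grid ratings q1..q5 on 1-5 scales plus duration in seconds.
df = pd.DataFrame({
    "rid":     ["r1", "r2", "r3", "r4"],
    "q1":      [4, 3, 5, 3],
    "q2":      [4, 1, 5, 4],
    "q3":      [4, 5, 5, 2],
    "q4":      [4, 2, 5, 4],
    "q5":      [4, 4, 5, 3],
    "seconds": [610, 540, 95, 480],
})

grid = df[["q1", "q2", "q3", "q4", "q5"]]

# Straight-lining: zero variance across a rating grid.
df["straightline"] = grid.std(axis=1).eq(0)

# Speeding: finishing in under a third of the median duration
# (the one-third cutoff is illustrative, not a standard). Remember the
# caveat above: AI-assisted respondents may deliberately avoid this flag.
df["speeder"] = df["seconds"] < df["seconds"].median() / 3

print(df[["rid", "straightline", "speeder"]])
```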

6. Do you use trap questions — and can we add our own?

Trap questions remain one of the most effective tools in the quality arsenal: fake brand awareness questions, reverse-coded scale items, impossible factual claims. The best suppliers use them by default. The best research partners let you add your own.

One important caveat: AI-assisted respondents are increasingly capable of identifying and correctly answering standard trap questions — especially well-known ones that have circulated in the research community. The most effective traps are novel and project-specific. If your supplier is using the same red herrings they designed in 2018, they may not be catching what they think they are.
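For illustration, project-specific traps can be expressed as data alongside a simple scoring rule. The items, the invented brand, and the two-strike policy below are all hypothetical.

```python
# Hypothetical project-specific trap items. "Vantrelle" is an invented
# brand, so claiming awareness of it is an automatic failure.
TRAPS = {
    "aware_vantrelle": {"fail_if": lambda v: v == "yes"},  # fake brand awareness
    "attn_pick_two":   {"fail_if": lambda v: v != 2},      # "select option 2" item
    "bought_50_times_yesterday": {"fail_if": lambda v: v == "yes"},  # impossible claim
}

def trap_failures(answers: dict) -> list[str]:
    """Return the trap questions this respondent failed."""
    return [q for q, trap in TRAPS.items()
            if q in answers and trap["fail_if"](answers[q])]

resp = {"aware_vantrelle": "yes", "attn_pick_two": 2,
        "bought_50_times_yesterday": "no"}
failed = trap_failures(resp)
if len(failed) >= 2:  # illustrative two-strike removal policy
    print("remove:", failed)
else:
    print("keep, flag for review:", failed)
```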

7. What is your incidence rate methodology?

How does the supplier estimate whether a project is feasible before quoting? If they give you a confident IR estimate without thoroughly reviewing your screener, they're guessing. A rigorous conversation about feasibility — including how they'll manage over-quota segments and what happens if IR comes in lower than expected — signals operational maturity.
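For reference, incidence rate (IR) is the share of screened entrants who qualify, and it drives feasibility arithmetic directly. A small worked sketch with made-up numbers, assuming a completion rate for qualifiers:

```python
def entrants_needed(target_completes: int, incidence: float,
                    completion_rate: float = 0.85) -> int:
    """How many screener entrants a target requires.

    incidence: share of screened entrants who qualify (the IR)
    completion_rate: share of qualifiers who finish (assumed, tune per study)
    """
    return round(target_completes / (incidence * completion_rate))

# 400 completes at a quoted 20% IR needs ~2,353 entrants; if the real IR
# comes in at 8%, the same target suddenly needs ~5,882.
print(entrants_needed(400, incidence=0.20))  # 2353
print(entrants_needed(400, incidence=0.08))  # 5882
```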

Also ask: what is their policy when bots or fraudulent respondents are removed post-field? Do they replace those completes at no cost to you?

8. How do you handle post-survey data validation?

Raw data from even the best panels needs cleaning. Ask what happens after fielding closes: what systematic checks are applied, what removal criteria are used, and whether you'll receive a data quality report alongside deliverables. Critically, ask whether flagged respondents are removed silently or documented — you should be able to see what was pulled and why.

AI-enhanced data cleaning is increasingly available and genuinely useful here: automated detection of response patterns that suggest fraud, clustering analysis to identify respondent segments behaving anomalously, and NLP-based scoring of open-end quality. Ask whether your supplier is using any of these.
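As one example of the clustering approach, here is a minimal sketch using scikit-learn's IsolationForest over per-respondent features. The features, the synthetic data, and the contamination rate are illustrative assumptions; real pipelines use far richer signals.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy per-respondent features: [seconds, mean open-end length,
# grid std-dev, trap failures]. Real pipelines use many more signals.
rng = np.random.default_rng(0)
normal = rng.normal([480, 120, 1.1, 0.1], [90, 40, 0.3, 0.3], size=(200, 4))
suspect = rng.normal([100, 300, 0.05, 1.5], [20, 30, 0.05, 0.5], size=(10, 4))
X = np.vstack([normal, suspect])

# contamination is the expected share of fraud, an assumption to tune.
clf = IsolationForest(contamination=0.05, random_state=0)
labels = clf.fit_predict(X)  # -1 = anomalous, 1 = normal

print(f"flagged {int((labels == -1).sum())} of {len(X)} respondents for review")
```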

9. Can you describe a project where you caught a quality problem mid-field?

This question separates suppliers who talk about quality from those who actively manage it. A strong answer includes a specific situation, the signal they caught, the action they took, and what the data would have looked like without intervention. Hesitation, vagueness, or a redirect to policy documents tells you something important.

10. How do you handle subcontracting — and who are your partners?

This one surprises clients every time. Many suppliers, when they can't fill a quota in-house, subcontract to other panels — sometimes without disclosure. Ask directly: do you subcontract? Under what conditions? Who are your primary partners, and do they meet the same fraud and AI detection standards you've described?

A supplier who has invested in rigorous quality controls is only as strong as the weakest panel they pull from. If they can't or won't answer this question, that's your answer.

Conclusion

The sample quality problem isn't new. What's new is the pace at which the threat is evolving. AI has lowered the cost and raised the sophistication of fraudulent survey-taking — and the industry's defenses are playing catch-up.

The suppliers who will still be worth working with in five years are the ones investing in AI-native fraud detection now: real-time behavioral modeling, generative response detection, synthetic identity screening. That investment is visible if you ask the right questions.

Using a panel isn't inherently risky. But buyers who don't ask hard questions don't get hard answers — and they don't get clean data either.

At Panalytics, we field these questions on every project, hold our suppliers accountable to the answers, and apply our own post-field validation layer regardless of the source. If you'd like a second opinion on your current panel setup, we're happy to take a look.
