Data quality varies significantly depending on how sample is sourced and delivered. Many factors can wreak havoc on research, from poor respondent pools and fraudulent respondents to intentionally misleading answers. Poor study design and survey structure can also play a role.
With the bulk of research today conducted online, we see ourselves as stewards of data quality. To that end, we focus on four critical elements that ensure the data we collect is of the highest quality:
1. Panel Selection
Blending panels is one way to stabilize data sources. The premise is that an average across sources is more reliable than relying on a single source that may have unusual nuances or disparities.
With apps representing 60% of online usage time in the U.S., it’s important to ensure that your recruiting can also engage respondents through mobile apps.
To manage quotas from blended sources, we use proprietary sourcing templates that allow seamless integration of multiple sources and ensure maximum diversity and reach with our sampling.
As for panel members, they must be recruited using permission-based techniques, not via river sampling or unsolicited emails. This minimizes bias in your sample mix, ensures a diverse panelist profile, and yields engaged respondents. Everyone included in your survey must be re-contactable and traceable, which rules out river sample.
Using panel partners that employ Relevant ID, an invisible process that gathers a range of data points from a respondent’s device and runs them through deterministic algorithms to create a unique digital fingerprint of each computer, helps eliminate duplicate and fraudulent responses.
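Relevant ID’s internals are proprietary, but the general idea of deterministic device fingerprinting can be sketched simply: canonicalize a set of device attributes, hash them into a stable key, and reject any key seen before. The attribute names below are hypothetical; real tools collect far more signals.

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    """Deterministically hash a device's attributes into a stable key.
    (Field names are illustrative; commercial tools use many more signals.)"""
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

seen_devices: set[str] = set()

def admit(attributes: dict) -> bool:
    """Admit a respondent only if this device fingerprint is new."""
    fp = fingerprint(attributes)
    if fp in seen_devices:
        return False  # duplicate device: likely a repeat or fraudulent entry
    seen_devices.add(fp)
    return True
```

Because the hash is deterministic, the same device always produces the same key regardless of attribute ordering, which is what makes cross-survey deduplication possible.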
2. Sample Management
Click-balancing should be a key component of your process: it maximizes sample integrity by reducing the bias inherent in virtually all types of sampling. It monitors the key demographics in your screener and continuously adjusts outgoing sample invitations to the appropriate groups.
The optimal scenario is to create a sample frame within each target group you are recruiting. This allows you to boost the sample outgo based on how different audiences respond (e.g., response rate for men is about half that of women). Then, as sample comes in, click-balancing will ensure a representation that closely matches your targets.
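The outgo-boosting logic above can be sketched as a simple calculation per demographic cell: divide the remaining completes needed by that cell’s observed response rate. The quota figures and rates below are illustrative assumptions, not real benchmarks.

```python
import math

def invitations_needed(target: int, completed: int, response_rate: float) -> int:
    """Estimate additional invitations for one demographic cell, given its
    observed response rate. Numbers here are illustrative only."""
    remaining = max(target - completed, 0)
    return math.ceil(remaining / response_rate) if remaining else 0

# Example: men respond at roughly half the rate of women, so their
# outgo must be boosted to land the same number of completes.
quotas = {
    "men":   {"target": 200, "completed": 80,  "rate": 0.05},
    "women": {"target": 200, "completed": 150, "rate": 0.10},
}
plan = {cell: invitations_needed(q["target"], q["completed"], q["rate"])
        for cell, q in quotas.items()}
```

Re-running this as completes arrive is, in effect, what continuous click-balancing does: the lower-responding cell automatically receives a proportionally larger share of each new batch of invitations.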
3. Survey Design
Regardless of how we’re connecting with them, we must put respondents first. Since the bulk of web activity occurs on mobile devices, our surveys must be mobile-friendly to keep respondents engaged. Otherwise, we run the risk of people abandoning the survey, which hurts sample representativeness. At Radius, we have identified best practices that keep people engaged, and our surveys are specifically designed around them.
4. Data Reviews
All data must be thoroughly reviewed for speeders, “straightliners,” and illogical open-ended responses. This is done through a series of “sense checks”: a review of different combinations of answers and/or a review of paradata metrics (administrative data about the survey, such as completion time).
Respondents with questionable data (e.g., a small company size paired with very large revenue, or an identical IP address with the same browser information) are flagged, removed, and restricted from future participation.
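The checks described above lend themselves to simple automated rules. A minimal sketch, with hypothetical field names and thresholds (e.g., treating anyone under a third of the median completion time as a speeder):

```python
def flag_respondent(record: dict, median_seconds: float,
                    seen_sessions: set) -> list:
    """Apply illustrative sense checks to one survey record.
    Field names and thresholds are assumptions, not fixed standards."""
    flags = []
    # Speeder: finished in under a third of the median completion time.
    if record["duration_sec"] < median_seconds / 3:
        flags.append("speeder")
    # Straightliner: identical answer on every item of a rating grid.
    grid = record["grid_answers"]
    if len(grid) > 1 and len(set(grid)) == 1:
        flags.append("straightliner")
    # Duplicate session: same IP paired with the same browser string.
    session = (record["ip"], record["user_agent"])
    if session in seen_sessions:
        flags.append("duplicate_session")
    else:
        seen_sessions.add(session)
    return flags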
We’ve created programming templates that layer software checks against duplication and fraud. For example, a proprietary honeypot, or system trap, is embedded in each survey to prevent unauthorized usage and respondent fraud.
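The exact trap is proprietary, but a common honeypot pattern works like this: a form field is hidden from human respondents (typically via CSS), so only bots that auto-fill every input will populate it. Any non-empty value marks the submission as automated. The field name below is a hypothetical placeholder.

```python
def is_bot(form_data: dict, honeypot_field: str = "company_website") -> bool:
    """Return True if the hidden honeypot field was filled in.
    Humans never see the field, so only automated scripts populate it.
    (Field name is a hypothetical example.)"""
    return bool(form_data.get(honeypot_field, "").strip())
```

In practice the server simply discards or flags any submission where this check fires, without telling the bot why.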
Finally, unique URL links are sent to each respondent and expire once a respondent is terminated or completes the survey, preventing the link from being used again.
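Single-use links can be sketched as a token that is issued once and invalidated on first redemption. The base URL and class below are hypothetical illustrations of the pattern, not a real system.

```python
import secrets

class SurveyLinks:
    """Minimal sketch of single-use survey URLs (base URL is hypothetical)."""

    def __init__(self, base_url: str):
        self.base_url = base_url
        self.active: set = set()  # tokens not yet redeemed

    def issue(self) -> str:
        """Generate a unique, unguessable link for one respondent."""
        token = secrets.token_urlsafe(16)
        self.active.add(token)
        return f"{self.base_url}?t={token}"

    def redeem(self, token: str) -> bool:
        """Valid exactly once: the token expires when the respondent
        completes or is terminated, so the link cannot be reused."""
        if token in self.active:
            self.active.remove(token)
            return True
        return False
```

Using a cryptographically random token (rather than, say, a sequential ID) also prevents respondents from guessing other people’s links.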
Contact us to discuss how you can achieve the highest levels of sample quality and management.