Behavioural Insights into Fraudulent Survey Respondents

6-minute read

Online surveys have become a cornerstone of data collection, but fraudulent respondents are increasingly undermining their validity. Survey fraud has skyrocketed in recent years – from perhaps ~15% of responses a few years ago to roughly 80% in some recent online studies, with certain surveys finding that virtually every response was fake. This alarming rise poses a significant threat to the integrity of survey data and the decisions drawn from it.

Motivations and Deceptive Tactics

The primary motive behind survey fraud is usually financial. Fraudulent respondents – whether real individuals or automated bots – deliberately provide false or misleading answers to obtain rewards or incentives. Many will outright lie about their demographics or eligibility to meet a study’s criteria and gain access to paid surveys. Researchers have noted ineligible individuals enrolling in studies under false pretences purely to claim the offered incentive. For example, someone might pretend to have a certain medical condition or belong to a target age group just to qualify for a high-paying survey. In some cases, fraudsters even become pushy or aggressive in pursuing the promised payment, underscoring that their interest is the reward – not the survey itself.

Once they qualify, fraudulent participants use various tactics to maximise their gains. A common behaviour is duplicate entries – the same person taking a survey multiple times (often by creating multiple accounts) to collect multiple payouts. Others attempt as many surveys as possible, regardless of truthfulness, essentially “farming” surveys for money. Another ploy is inattentive or speeding behaviour: rushing through questions, straight-lining answers (e.g. selecting the same option all the way down a rating scale), or entering gibberish in open-ended responses.

These respondents might not even read the questions; they aim to finish quickly for the incentive. Notably, many inattentive answers aren’t driven by malice – research finds respondents often tune out due to survey fatigue from repetitive or lengthy questionnaires. However, whether born of boredom or blatant dishonesty, such careless responses can seriously degrade data quality.
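Several of these careless-response signals – speeding, straight-lining, gibberish open-ends – can be flagged with simple rules during data cleaning. Below is a minimal sketch, assuming each response is a dict with a completion time, a battery of scale items, and an open-ended field; the field names and thresholds are illustrative assumptions, not a standard:

```python
# Illustrative quality flags for a single survey response.
# Column names, thresholds, and the scale battery are assumptions.

def flag_suspect(response, min_seconds=120, scale_items=("q1", "q2", "q3", "q4", "q5")):
    """Return a list of quality flags for one response dict."""
    flags = []
    # Speeding: finished implausibly fast for the questionnaire length.
    if response["duration_seconds"] < min_seconds:
        flags.append("speeder")
    # Straight-lining: the same answer chosen for every item in the battery.
    answers = [response[item] for item in scale_items]
    if len(set(answers)) == 1:
        flags.append("straight-liner")
    # Gibberish open-end: very short, or containing no vowels at all.
    text = response.get("open_end", "").strip()
    if text and (len(text) < 3 or not any(c in "aeiou" for c in text.lower())):
        flags.append("gibberish_open_end")
    return flags

suspect = flag_suspect({
    "duration_seconds": 45,
    "q1": 3, "q2": 3, "q3": 3, "q4": 3, "q5": 3,
    "open_end": "asdf",
})
print(suspect)  # ['speeder', 'straight-liner']
```

In practice such rules are combined with attention checks and reviewed by a human rather than used to auto-reject, since genuine but fatigued respondents trip the same flags.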

Impact on Data Quality

Both careless responding and deliberate fraud skew results and threaten the validity of research. A “lazy” but genuine respondent who isn’t paying attention introduces noise and bias, while a determined fraudster may fabricate data entirely.

In practice, both phenomena can “tank the validity and reliability” of survey findings. If unchecked, widespread fraudulent responses have the potential to undermine data integrity, discredit research conclusions, and lead to poor decisions based on corrupted data. Businesses and researchers could be basing strategies on insights that are, in fact, built on a sandcastle of falsified responses. This makes it crucial to understand and address fraudulent behaviours proactively.

Bots: Automated Survey Fraudsters

A growing share of survey fraud comes from automated bots. These are software programs designed to mimic human respondents and fill out surveys en masse. Bots can submit hundreds of surveys in a short time, often leaving behind a trail of identical or near-identical answers that clearly indicate automation. For example, investigators have reported bizarre answer patterns (e.g. an implausible surge of people professing the same unusual food preference) that turned out to be the work of bots. In some fields, bots have been found to comprise up to 60% of online survey responses – a staggering figure that underscores the scale of the problem. While early bots were relatively easy to spot by their nonsensical outputs, modern bots have grown far more sophisticated.

Fraudsters now employ AI-driven bots that rotate through IP addresses and use machine learning to imitate human answering behaviour. These advanced bots can generate credible-sounding answers (even to open-ended questions) and stay consistent with any fake profile details they’ve given, making them hard to distinguish from genuine respondents. The arms race between bot developers and survey defenders continues to intensify as technology evolves.
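One simple bot signature mentioned above – batches of identical or near-identical answers – can be checked programmatically. The sketch below groups open-ended answers by a normalised form and surfaces suspiciously large groups; the normalisation rules and group-size threshold are illustrative assumptions:

```python
# Illustrative duplicate-answer detector for open-ended responses.
import re
from collections import defaultdict

def duplicate_answer_groups(answers, min_group=3):
    """Group answers by a normalised form; return suspiciously large groups."""
    groups = defaultdict(list)
    for i, text in enumerate(answers):
        # Normalise: lower-case, drop punctuation, collapse whitespace.
        key = " ".join(re.sub(r"[^a-z0-9 ]", "", text.lower()).split())
        groups[key].append(i)
    return {key: idxs for key, idxs in groups.items() if len(idxs) >= min_group}

answers = [
    "I love pineapple pizza!",
    "i love  pineapple pizza",
    "I LOVE pineapple pizza.",
    "Dark chocolate, mostly.",
]
suspicious = duplicate_answer_groups(answers)
print(suspicious)  # {'i love pineapple pizza': [0, 1, 2]}
```

Exact-match grouping like this catches older bots; the AI-driven bots described above require fuzzier comparisons (e.g. similarity scoring) and behavioural signals as well.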

Organised Fraud and “Survey Farms”

Not all fraud is at the individual level; there are also organised rings of human respondents who collude to exploit surveys for profit. So-called survey farms comprise groups of coordinated individuals who systematically participate in surveys to earn money. These actors collaborate to exploit incentives, often sharing tactics and even using automated tools for efficiency. They provide carefully crafted (yet false) answers designed to appear legitimate, making detection difficult. Investigations have uncovered systematic operations of this kind – for example, hundreds of responses originating from the same IP addresses in a short time-frame – indicating an almost cottage-industry style of coordinated fake survey taking.
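As a rough illustration of how such same-IP clustering might be surfaced in a post-survey audit (the field names, window, and per-IP threshold below are illustrative assumptions, not an industry standard):

```python
# Illustrative sketch: flag IPs with suspiciously many submissions in a window.
from collections import defaultdict
from datetime import datetime, timedelta

def flag_ip_clusters(responses, window=timedelta(hours=24), max_per_ip=3):
    """Return the set of IPs exceeding max_per_ip submissions within any window."""
    by_ip = defaultdict(list)
    for r in responses:
        by_ip[r["ip"]].append(r["submitted_at"])
    flagged = set()
    for ip, times in by_ip.items():
        times.sort()
        # Slide a window over the sorted timestamps; too many hits flags the IP.
        for i, start in enumerate(times):
            if len([t for t in times[i:] if t - start <= window]) > max_per_ip:
                flagged.add(ip)
                break
    return flagged

responses = [
    {"ip": "203.0.113.7", "submitted_at": datetime(2024, 3, 1, 12, m)}
    for m in range(5)  # five submissions from one IP within five minutes
] + [{"ip": "198.51.100.2", "submitted_at": datetime(2024, 3, 1, 12, 0)}]
print(flag_ip_clusters(responses))  # {'203.0.113.7'}
```

IP checks alone are weak evidence (households and offices share addresses, and fraudsters rotate IPs), so in practice they are weighed alongside device fingerprints and answer-quality signals.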

The rise of global online work platforms since around 2018 has amplified this issue, with professional survey takers on services like Amazon Mechanical Turk contributing to the surge in fraudulent data. Researchers have observed that a significant number of fraudulent responses come from regions where the economic incentive is particularly strong – for instance, cases of large-scale fake entries trace back to countries like India or Venezuela, reflecting how global income disparities can drive people to engage in survey fraud as a form of livelihood.

In essence, an underground economy of survey fraudsters has emerged, blending human effort with technology to generate high volumes of bogus data.

Scale of the Problem

The prevalence of fraud means researchers must often throw out substantial portions of collected data. Market research firms report that they are filtering out a large fraction of responses due to quality concerns. For example, in late 2022 Kantar’s data quality team found that up to 38% of online survey data had to be discarded because of fraud or suspect quality indicators. Academic studies have reported even more startling figures for open online surveys: one recent study found 89% of its purported respondents were fraudulent once rigorous checks were applied. Indeed, multiple investigations have encountered scenarios where the majority of responses in a dataset – in some cases nearly 100% – turned out to be fake when scrutinised.

This represents not only wasted effort and incentive costs, but also a serious analytical risk: if undetected fakes slip through, they can distort findings and mislead decision-making. It’s a sobering realisation that, without countermeasures, one might be analysing data that is more fiction than fact.

Conclusion

Behavioural insights into why and how people (or bots) engage in survey fraud are crucial in mounting an effective response. Understanding the fraudster’s mindset – from the lure of easy money to the tricks used to avoid detection – allows researchers and organisations to anticipate problems and design smarter solutions.

For example, knowing that tedious surveys invite inattentiveness can push designers to make surveys shorter or more engaging, thereby pre-empting some fraud. Recognising that higher incentives attract more fraudsters might lead to adjusting incentive schemes or investing more in verification for high-reward studies. Awareness of common deception tactics (like straight-lining or IP hopping) means analytical teams can specifically watch for those patterns. In essence, fighting survey fraud is now an integral part of survey methodology. Experts agree that a multi-pronged strategy is required – combining careful survey design, robust identity verification, technical detection tools, and thorough post-survey data audits.

By applying these measures and remaining alert to fraudsters’ evolving behaviours, researchers and businesses can greatly mitigate the impact of fraudulent respondents. This ensures that decisions and insights drawn from survey research rest on a foundation of genuine, reliable data, rather than on the shaky ground of deception.

Want to know how Yesty can help?

Our latest whitepaper, The Incentive Blueprint: Designing Reward Systems That Attract Real Respondents and Ensure Data Quality, dives deeper into how smart incentive strategies can prevent fraud and improve data reliability. Download the whitepaper here to learn more, or reach out to us – we’re happy to show how our solutions can strengthen your fraud prevention and participant experience.