One of the truisms I remember from my undergraduate research methods courses was that to get more accurate data, you ask the same question in different ways. The "tell" then came from analyzing whether there was a significant difference in responses across like questions.
Low variation = more accurate
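To make that consistency check concrete, here is a minimal sketch, with all names and data hypothetical: pairs of "like questions" are scored on a 1-5 scale, the spread within each pair is measured, and a respondent whose pairs stay within a tolerance is treated as answering consistently.

```python
# Hypothetical illustration of the "like questions" check: q7 rephrases q1,
# and q12 rephrases q3. Low spread within a pair suggests a more reliable
# respondent; high spread is the "tell" that answers may be noise.

def pair_spreads(responses, pairs):
    """Absolute difference in rating within each pair of like questions."""
    return [abs(responses[a] - responses[b]) for a, b in pairs]

def is_consistent(responses, pairs, tolerance=1):
    """True if no pair of like questions differs by more than the tolerance."""
    return all(d <= tolerance for d in pair_spreads(responses, pairs))

# One respondent's answers on a 1-5 scale, keyed by question id (invented data).
answers = {"q1": 4, "q7": 5, "q3": 2, "q12": 2}
like_pairs = [("q1", "q7"), ("q3", "q12")]

print(pair_spreads(answers, like_pairs))
print(is_consistent(answers, like_pairs))
```

The trade-off the post goes on to describe is visible even here: every pair added for a reliability check doubles the length of that part of the instrument.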
It was around this same time that I was regularly voluntold to participate in research sessions as a subject. Most of these sessions were not terribly interesting; more often than not I was, well, subjected to taking surveys, many with hundreds of questions.
It all seemed a bit much, but I was given to understand that it was necessary for the accuracy of the data. The same went for the course evaluation surveys, which often ran over a hundred questions.
After leaving education research and transitioning into practice, I quickly learned that more is usually not better. More is typically worse. Students hate long surveys and tests, they do worse on them, and so the performance data didn't seem to capture their abilities. Long was, and is, fatiguing.
Accuracy came at the cost of validity.
Less is More
"In as few as six or eight questions people are already answering in such a way that you're already worse off if you're trying to predict real-world behavior,"
I've advocated previously for creating surveys, in particular instructional design surveys, with a minimalist ethos. The mental effort to complete a survey should be as distilled as possible, which is why I often propose a simple +/=/- value approach to survey feedback and direct, barebones question items.
Collect only what you need. Collect only what is useful.
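The +/=/- approach can be sketched in a few lines. This is a hypothetical illustration, not a prescribed implementation: each learner marks each item "+" (working), "=" (neutral), or "-" (not working), and the net score per item is simply the plus count minus the minus count.

```python
from collections import Counter

# Hypothetical sketch of +/=/- survey feedback. The item names and
# responses below are invented for illustration.

def net_score(marks):
    """Net sentiment for one item: count of "+" minus count of "-"."""
    counts = Counter(marks)
    return counts["+"] - counts["-"]

feedback = {
    "pacing":   ["+", "+", "=", "-", "+"],
    "examples": ["-", "-", "=", "+", "-"],
}

for item, marks in feedback.items():
    print(item, net_score(marks))
```

Because each item costs the respondent almost nothing, the same tiny instrument can be run repeatedly, which is the point of the next paragraph.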
By using a short, minimal-response questionnaire, you can employ it more frequently to grow the data pool instead of relying on like questions, which bloat the instrument.
I haven't written on this yet, but I think the same approach can also be applied to testing validity more often than it currently is. This paragraph from the article I think speaks to that:
"the research suggests that to maximize the validity of preference measurement surveys, researchers could use an ensemble of methods, preferably using multiple means of measurement, such as questions that involve choosing between options available at different times, matching questions, and a variety of contexts"
Those who know me know my disdain for marketing as an industry, but here I'm giving credit where it is due. Instead of using these lessons to build more addictive junk food, perhaps we can use them to build better assessment tools that benefit learners.
Surveys with repetitive questions yield bad data, study finds (2022, January 28). Retrieved 29 January 2022 from https://phys.org/news/2022-01-surveys-repetitive-yield-bad.
"... is gathering more data in surveys always better, or could asking too many questions lead to respondents providing less useful responses as they adapt to the survey,"