Precision and Disclosure in Text and Voice Interviews on Smartphones.

Schober MF, Conrad FG, Antoun C, Ehlen P, Fail S, Hupp AL, Johnston M, Vickers L, Yan HY, Zhang C - PLoS ONE (2015)

Bottom Line: Text respondents also reported a strong preference for future interviews by text. The findings suggest that people interviewed on mobile devices at a time and place that is convenient for them, even when they are multitasking, can give more trustworthy and accurate answers than those in more traditional spoken interviews. The findings also suggest that answers from text interviews, when aggregated across a sample, can tell a different story about a population than answers from voice interviews, potentially altering the policy implications from a survey.


Affiliation: Department of Psychology, New School for Social Research, The New School, New York, New York, United States of America.

ABSTRACT
As people increasingly communicate via asynchronous non-spoken modes on mobile devices, particularly text messaging (e.g., SMS), longstanding assumptions and practices of social measurement via telephone survey interviewing are being challenged. In the study reported here, 634 people who had agreed to participate in an interview on their iPhone were randomly assigned to answer 32 questions from US social surveys via text messaging or speech, administered either by a human interviewer or by an automated interviewing system. Ten interviewers from the University of Michigan Survey Research Center administered voice and text interviews; automated systems launched parallel text and voice interviews at the same time as the human interviews were launched. The key question was how the interview mode affected the quality of the response data, in particular the precision of numerical answers (how many were not rounded), variation in answers to multiple questions with the same response scale (differentiation), and disclosure of socially undesirable information. Texting led to higher quality data (fewer rounded numerical answers, more differentiated answers to a battery of questions, and more disclosure of sensitive information) than voice interviews, both with human and automated interviewers. Text respondents also reported a strong preference for future interviews by text. The findings suggest that people interviewed on mobile devices at a time and place that is convenient for them, even when they are multitasking, can give more trustworthy and accurate answers than those in more traditional spoken interviews. The findings also suggest that answers from text interviews, when aggregated across a sample, can tell a different story about a population than answers from voice interviews, potentially altering the policy implications from a survey.

No MeSH data available.


Related in: MedlinePlus

pone.0128337.g007: Interview duration and median number of turns per survey question. These timelines display the median duration of question-answer sequences with the median number of turns after each question.

Mentions: As one might expect, interviews via text had an entirely different rhythm than interviews via voice: they were far longer and less "dense," with fewer conversational turns after survey questions, than were voice interviews (see Fig 7). Text interviews took substantially longer to complete than voice interviews (Mann-Whitney U, z = -19.02, p < .0001), but they involved reliably fewer conversational turns (F[1,617] = 1297.01, p < .0001), reflecting a slower back-and-forth compared to the relatively rapid fire of voice (as measured by turns per minute; F[1,617] = 3111.75, p < .0001). (Degrees of freedom for these comparisons reflect that 13 voice interviews were not fully audio-recorded and 2 human text interviews were each missing one sequence.) The number of turns following each question in voice was more variable than in text, where most questions were answered in a single turn (Levene's test: F[1,617] = 86.37, p < .0001). Questions 7-12, which asked about sexual behavior and orientation, averaged longer response times than other questions in all four modes, presumably due to their sensitivity [33].
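The comparisons above pair a nonparametric location test (Mann-Whitney U, for duration) with a test of unequal variability (Levene's test, for turns per question). A minimal sketch of how such tests are run in SciPy, using synthetic durations that merely mimic the reported pattern (the samples, means, and sizes here are illustrative assumptions, not the study's data):

```python
# Illustrative only: synthetic interview durations mimicking the reported
# pattern (text interviews longer and more variable than voice interviews).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical durations in minutes for two interview modes.
text_durations = rng.normal(loc=30, scale=8, size=300)
voice_durations = rng.normal(loc=12, scale=4, size=300)

# Mann-Whitney U: rank-based test of whether one mode's durations tend
# to exceed the other's, without assuming normality.
u_stat, u_p = stats.mannwhitneyu(text_durations, voice_durations)

# Levene's test: do the two modes differ in spread (analogous to the
# paper's comparison of variability in turns per question)?
lev_stat, lev_p = stats.levene(text_durations, voice_durations)

print(f"Mann-Whitney U p = {u_p:.3g}, Levene p = {lev_p:.3g}")
```

With samples this well separated, both tests return very small p-values; on real survey paradata the same two calls apply unchanged.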

