Questionnaire instruments are frequently administered in digital, largely web-based, formats without much systematic investigation of possible effects from these administration methods. Furthermore, little attention has been given to the contextual lack of control over extraneous factors that may influence user responses. In this study, 263 university students were randomly assigned to one of two administration formats, web-based (WBA) or paper-based (PBA), to complete a set of questionnaires in an environment of their choice. Participants reported the characteristics of their chosen contexts along three parameters: location, companions, and concurrent activities (including help-seeking). Outcomes of interest included the location and conditions of user-chosen contexts, instrument performance, quantity and quality of generative data, independence of completion, administrative efficiency, and participant affect. Participants chose and allowed distracters in their contexts-of-use, completing the questionnaires while engaged in multiple social and asocial concurrent activities. Differences in instrument performance and user response characteristics by administration method and context-of-use were generally small but significant. Participant comfort and data return were both higher for PBA than for WBA. The quantity of generative data returned was higher for WBA, while its overall quality (completeness, coherence, correctness) did not differ significantly between formats. These findings highlight implications of administration methods and contextual influences that can inform measurement professionals' selection and design of administration systems and conditions for research and evaluation data collection.