Digital platforms consist of algorithms and rules that shape consumer behaviour. When faced with these embodiments of the platform's interests, how do consumers protect their own? Through multi-method, qualitative fieldwork with commuters using ride-hailing platforms in Metro Manila, this paper shows that consumers develop strategies to secure better terms for themselves. The paper contributes to the literature on algorithmic control and user agency in two ways. First, it proposes a fine-grained typology of consumer strategies on algorithmic digital platforms, consisting of five major types and 18 sub-types. Second, the typology sheds light on the distinct characteristics of these consumer strategies and their implications. Future studies of user strategies, algorithmic systems, and digital platforms will benefit from the typology and implications laid out here.
The nascent literature on platform trust in Global South contexts does not yet explain how trust in technology platforms persists or erodes in the presence of issues that invite distrust, such as the controversial use of surge pricing by ride-hailing platforms. This paper uses in-depth interviews with 30 users of ride-hailing platforms in Manila to study how attitudes toward surge pricing shape attitudes toward the platforms themselves. It finds that trust in ride-hailing platforms persists despite respondents' negative attitudes toward surge pricing and their doubts about its fairness and transparency. This persistence is partly due to users' rational experience that ride-hailing platforms provide net benefits even when surge pricing may be unfair, especially when the alternative is relying on Manila's poor transport infrastructure. However, trust also persists because of cognitive biases, reflected in constructs such as _acceptance of limited transparency_, _perceived control_, and the ideational appeal of technological systems. These cognitive biases, found in the data, deepen our understanding of why trust in platform technologies may persist even when besieged by distrust. The findings can also sharpen our vigilance over the cognitive biases that can be exploited to create trust on less-than-meritorious grounds and to keep a firm grip on users' trust even as users begin to harbour healthy scepticism about fairness and transparency.