Few studies investigate how information gatekeeping through the solutions and services enabled by algorithms, hereafter referred to as algorithmic technologies (AT), creates negative consequences for users. To fill this gap, this state‐of‐the‐art review analyzes 229 relevant articles from diverse academic disciplines. We employed thematic analysis to identify, analyze, classify, and reveal the chain reactions among the negative consequences. We found that the gatekeeping of information (text, audio, video, and graphics) through AT such as artificial intelligence (e.g., chatbots, large language models, machine learning, robots), decision support systems (used by banks, grocery stores, police, etc.), hashtags, online gaming platforms, search technologies (e.g., voice assistants, ChatGPT), and Web 3.0 (e.g., Internet of Things, non‐fungible tokens) creates or reinforces cognitive vulnerability, economic divide and financial vulnerability, information divide, physical vulnerability, psychological vulnerability, and social divide, both online and in the offline world. Theoretical implications include a hierarchical depiction of the chain reactions among primary, secondary, and tertiary divides and vulnerabilities. To mitigate these negative consequences, we call for concerted efforts: top‐down strategies for governments, organizations, and technology experts to attain greater transparency, accountability, ethical behavior, and moral practice, and bottom‐up strategies for users to become more alert, discerning, critical, and proactive.