Artificial companions and digital assistants have been investigated for several decades, from research on autonomous agents and social robots to the highly popular voice-enabled digital assistants already in widespread use (e.g., Siri and Alexa). Although these companions provide valuable information and services to people, they remain reactive entities that operate in isolated environments, waiting to be asked for help. The Web is now emerging as a uniform hypermedia fabric that interconnects everything (e.g., devices, physical objects, abstract concepts, digital services), thereby enabling unprecedented levels of automation and comfort in our professional and private lives. However, this also results in increasingly complex environments that are becoming unintelligible to everyday users. To ameliorate this situation, we envision proactive Digital Companions that take advantage of this new generation of pervasive hypermedia environments to provide assistance and protection to people. In addition to perceiving a person's environment through vision and sound, Digital Companions can use pervasive hypermedia environments to further contextualize the situation by exploiting information from available connected devices, and can draw on rich knowledge bases that allow them to derive relevant actions and recommendations.