Virtual Reality (VR) has been adopted as a leading technology for the metaverse, yet most previous VR systems provide one-size-fits-all experiences to users. Context awareness in VR enables personalized experiences in the metaverse, such as improved embodiment and deeper integration of the real and virtual worlds. Personalization requires context data from diverse sources. We propose ManySense VR, a reusable and extensible context data collection framework that unifies data collection from diverse sources for VR applications. ManySense VR was implemented in Unity on top of extensible context data managers that collect data from sources such as an eye tracker, an electroencephalogram headset, pulse, respiration, and galvanic skin response sensors, a facial tracker, and OpenWeatherMap. We used ManySense VR to build a context-aware embodiment VR scene in which the user’s avatar is synchronized with their bodily actions. A performance evaluation showed that ManySense VR performs well in terms of processor usage, frame rate, and memory footprint. Additionally, we conducted a qualitative formative evaluation by interviewing five developers (two male, three female; mean age: 22) after they used and extended ManySense VR. The participants reported advantages (e.g., ease of use, learnability, familiarity, quickness, and extensibility) and disadvantages (e.g., an inconvenient and error-prone data query method and a lack of diversity in callback methods), and offered future application ideas and improvement suggestions that indicate the framework’s potential and can guide its future development. In conclusion, ManySense VR is an efficient tool that lets researchers and developers easily integrate context data into their Unity-based VR applications for the metaverse.