In recent years, assistive technology and digital accessibility for blind and visually impaired people (BVIP) have improved significantly. Yet group discussions, especially in a business context, remain challenging, as non-verbal communication (NVC) is often depicted on digital whiteboards, including deictic gestures paired with visual artifacts. Because NVC relies heavily on visual perception and conveys a large amount of detail, an adaptive approach is required that identifies the most relevant information for BVIP. Additionally, visual artifacts usually rely on spatial properties such as position, orientation, and dimensions to convey essential information such as hierarchy, cohesion, and importance, which is often not accessible to BVIP. In this paper, we investigate the requirements of BVIP during brainstorming sessions and, based on our findings, provide an accessible multimodal tool that uses non-verbal and spatial cues as an additional layer of information. Further, we contribute a set of input and output modalities that encode and decode information with respect to the individual demands of BVIP and the requirements of different use cases.