Decoding brain signals can not only reveal Metaverse users' expectations but also enable early detection of error-related behaviors such as stress, drowsiness, and motion sickness. To this end, this article proposes a pioneering framework that uses a wireless/over-the-air Brain-Computer Interface (BCI) to assist the creation of virtual avatars as human representations in the Metaverse. Specifically, to eliminate the computational burden on Metaverse users' devices, we leverage Wireless Edge Servers (WES), which are prevalent in 5G architectures with Ultra-Reliable Low-Latency Communication (URLLC) and enhanced mobile broadband features, to obtain and process the brain activities, i.e., electroencephalography (EEG) signals, via uplink wireless channels. As a result, the WES can learn human behaviors, adapt system configurations, and allocate radio resources to create individualized settings and enhance user experiences. Despite the potential of BCI, the inherently noisy/fading wireless channels and the uncertainty in Metaverse users' demands and behaviors make the related resource allocation and learning/classification problems particularly challenging. We formulate the joint learning and resource allocation problem as a Quality-of-Experience (QoE) maximization problem that takes into account the latency, the brain-signal classification accuracy, and the resources of the system. To tackle this mixed-integer programming problem, we then propose two novel algorithms: (i) a hybrid learning algorithm to maximize the user QoE and (ii) a meta-learning algorithm to exploit the neurodiversity of the brain signals among multiple Metaverse users. Extensive experimental results with different BCI datasets show that our proposed algorithms not only provide low delay for virtual reality (VR) applications but also achieve high classification accuracy on the collected brain signals.
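To give a rough sense of the kind of objective described above, a QoE maximization of this type might weigh EEG classification accuracy against end-to-end latency under radio-resource constraints; the symbols, weights, and constraint forms below are illustrative assumptions only, not the exact formulation used in this article:
% Illustrative sketch (assumed notation): A_u is the EEG classification accuracy
% for user u, D_u the end-to-end latency, x_{u,r} in {0,1} the radio-resource
% allocation, theta the learning-model parameters, and w_1, w_2 trade-off weights.
\begin{align}
  \max_{\{x_{u,r}\},\,\theta} \quad & \sum_{u \in \mathcal{U}} \Big( w_1\, A_u(\theta) - w_2\, D_u\big(\{x_{u,r}\}\big) \Big) \\
  \text{s.t.} \quad & \sum_{u \in \mathcal{U}} x_{u,r} \le 1, \quad x_{u,r} \in \{0,1\}, \quad \forall r \in \mathcal{R}, \\
  & D_u\big(\{x_{u,r}\}\big) \le D_{\max}, \quad \forall u \in \mathcal{U}.
\end{align}
The binary allocation variables together with the continuous model parameters are what make such a problem a mixed-integer program.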