Rapid progress in the expansion of internet services has provided an alternative to traditional classroom learning. With multiple learning options available, evaluating each option and identifying its best use case plays a vital role. One of the most important characteristics the human brain utilizes during the process of learning is cognition, which involves attention and retention. Students' attention spans and situational interest during learning have long been a subject of research. Apart from classroom learning, e-learning (MOOC-based learning) is the other most preferred mode of learning. The objective of this study is therefore to assess the attention levels of a learner in MOOC (Massive Open Online Courses) learning environments and compare them with conventional classroom learning using brain signals. The proposed method captures electroencephalogram (EEG) frequency bands of different subjects while they follow a short lecture in a MOOC/e-learning environment and in a classroom environment. The captured data points were annotated for attentiveness manually by referring to the subjects' feedback and video clips. A support vector machine (SVM) classification model was used to classify each student's mental state as attentive or non-attentive. Promising results were obtained, and the experiments revealed that higher attention levels were maintained in the MOOC learning environment than in the traditional classroom approach.
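The classification step can be illustrated with a minimal sketch, assuming per-window EEG band-power features (delta, theta, alpha, beta, gamma) and binary attentive/non-attentive labels; the feature layout, kernel choice, and hyperparameters below are illustrative assumptions, not the pipeline reported in the study.

```python
# Hedged sketch: SVM classification of attentive vs. non-attentive EEG windows.
# Feature layout and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

# Assume each row holds band-power features for one EEG window:
# [delta, theta, alpha, beta, gamma]; labels: 1 = attentive, 0 = non-attentive.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))        # placeholder for real band-power features
y = rng.integers(0, 2, size=200)     # placeholder for the manual annotations

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Standardize features, then fit an RBF-kernel SVM (kernel choice is an assumption).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```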
Light Field (LF) imaging offers unique advantages such as post-capture refocusing and depth estimation, but low-light conditions severely limit these capabilities. To restore low-light LFs we should harness the geometric cues present in different LF views, which is not possible using single-frame low-light enhancement techniques. We therefore propose a deep neural network architecture for Low-Light Light Field (L3F) restoration, which we refer to as L3Fnet. The proposed L3Fnet not only performs the necessary visual enhancement of each LF view but also preserves the epipolar geometry across views. We achieve this by adopting a two-stage architecture for L3Fnet. Stage-I looks at all the LF views to encode the LF geometry. This encoded information is then used in Stage-II to reconstruct each LF view. To facilitate learning-based techniques for low-light LF imaging, we collected a comprehensive LF dataset of various scenes. For each scene, we captured four LFs: one with near-optimal exposure and ISO settings, and the others at different levels of low-light conditions varying from low to extremely low-light settings. The effectiveness of the proposed L3Fnet is supported by both visual and numerical comparisons on this dataset. To further analyze the performance of low-light reconstruction methods, we also propose an L3F-wild dataset that contains LFs captured late at night with almost zero lux values. No ground truth is available for this dataset. To perform well on the L3F-wild dataset, any method must adapt to the light level of the captured scene. To do this we propose a novel pre-processing block that makes L3Fnet robust to various degrees of low-light conditions. Lastly, we show that L3Fnet can also be used for low-light enhancement of single-frame images, despite being engineered for LF data. We do so by converting the single-frame DSLR image into a form suitable for L3Fnet, which we call a pseudo-LF.
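The two-stage idea can be sketched as follows, assuming the LF is stored as a tensor of V sub-aperture views; the channel counts, depths, and view-fusion scheme are illustrative assumptions and do not reproduce the published L3Fnet architecture.

```python
# Hedged sketch of a two-stage low-light LF restoration network.
# Layer sizes and the fusion scheme are illustrative assumptions,
# not the published L3Fnet architecture.
import torch
import torch.nn as nn

class StageI(nn.Module):
    """Encode a shared LF geometry representation by looking at all views jointly."""
    def __init__(self, num_views: int, feat: int = 32):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(num_views * 3, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, lf_views):             # (B, V*3, H, W)
        return self.encode(lf_views)         # (B, feat, H, W) shared geometry code

class StageII(nn.Module):
    """Restore one view, conditioned on the shared geometry code."""
    def __init__(self, feat: int = 32):
        super().__init__()
        self.restore = nn.Sequential(
            nn.Conv2d(3 + feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 3, 3, padding=1),
        )

    def forward(self, view, geometry_code):  # (B, 3, H, W), (B, feat, H, W)
        return self.restore(torch.cat([view, geometry_code], dim=1))

# Usage: restore each of V views of a low-light LF with the shared geometry code.
B, V, H, W = 1, 25, 64, 64
lf = torch.rand(B, V, 3, H, W)               # low-light LF, views stacked
stage1, stage2 = StageI(num_views=V), StageII()
code = stage1(lf.view(B, V * 3, H, W))
restored = torch.stack([stage2(lf[:, v], code) for v in range(V)], dim=1)
print(restored.shape)                        # torch.Size([1, 25, 3, 64, 64])
```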