Large language models have recently been shown to attain reasonable zero-shot generalization on a diverse set of tasks (Brown et al., 2020). It has been hypothesized that this is a consequence of implicit multitask learning in language model training. Can zero-shot generalization instead be directly induced by explicit multitask learning? To test this question at scale, we develop a system for easily mapping general natural language tasks into a human-readable prompted form. We convert a large set of supervised datasets, each with multiple prompts written in varying natural language. These prompted datasets allow for benchmarking the ability of a model to perform completely unseen tasks specified in natural language. We fine-tune a pretrained encoder-decoder model on this multitask mixture covering a wide variety of tasks. The model attains strong zero-shot performance on several standard datasets, often outperforming models up to 16× its size. Further, our approach attains strong performance on a subset of tasks from the BIG-Bench benchmark, outperforming models up to 6× its size. All prompts and trained models are available at github.com/bigscience-workshop/promptsource/ and huggingface.co/bigscience/T0pp.
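The core idea of mapping a supervised example into a human-readable prompted form can be sketched as follows. This is a minimal illustration, not the actual promptsource implementation; the template strings and field names below are hypothetical.

```python
# Sketch: turn one supervised example into several prompted text-to-text
# pairs by filling natural-language templates with the example's fields.
# Templates and field names here are illustrative, not from promptsource.
def apply_prompt(example, template):
    """Fill an input and target template with the fields of one example."""
    source = template["input"].format(**example)
    target = template["target"].format(**example)
    return source, target

# One NLI-style example, phrased with two different prompts.
example = {"premise": "A man is playing a guitar.",
           "hypothesis": "A person is making music.",
           "label": "yes"}

templates = [
    {"input": "Suppose {premise} Can we infer that {hypothesis}?",
     "target": "{label}"},
    {"input": "{premise} Question: {hypothesis} True or false?",
     "target": "{label}"},
]

pairs = [apply_prompt(example, t) for t in templates]
```

Each (source, target) pair can then be fed to a text-to-text model, so the same dataset contributes several differently worded training signals.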
In this work we present the preliminary results of fusing Photonic Mixer Device (PMD) and CMOS cameras for driver assistance applications. Although the algorithms are demonstrated mainly for pedestrians, they apply to other objects on the street as well. The PMD camera delivers a 3D object list. Object coordinates are then projected into the CMOS image plane, where classification is performed using Support Vector Machines (SVMs). Compared to the PMD camera, the CMOS camera has a higher resolution, which makes finer object detection, separation, and classification possible. As the feature vector, we use the Quadruple Haar Discrete Wavelet Transform (QH DWT). The speed of the SVM in the testing phase (necessary for real-time implementation) is improved with Burges' Reduced Set Vector Method (BRSVM), accelerating classification by nearly a factor of 70. We achieve a pedestrian detection rate of 80%.
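The projection step described above — mapping a 3D object position from the PMD sensor into the CMOS image plane — can be sketched with a standard pinhole camera model. The intrinsic matrix and the PMD-to-CMOS extrinsics below are illustrative placeholders, not the calibration used in the paper.

```python
import numpy as np

# Illustrative camera parameters (NOT the paper's calibration values).
K = np.array([[800.0,   0.0, 320.0],   # fx, skew, cx
              [  0.0, 800.0, 240.0],   # fy, cy
              [  0.0,   0.0,   1.0]])  # pinhole intrinsics
R = np.eye(3)                           # PMD -> CMOS rotation (assumed aligned)
t = np.array([0.1, 0.0, 0.0])           # PMD -> CMOS translation in meters

def project(point_3d):
    """Map a 3D point in the PMD frame to CMOS pixel coordinates (u, v)."""
    p_cam = R @ point_3d + t            # transform into the CMOS camera frame
    uvw = K @ p_cam                     # apply intrinsics
    return uvw[:2] / uvw[2]             # perspective divide

# A pedestrian candidate 10 m straight ahead of the PMD camera.
u, v = project(np.array([0.0, 0.0, 10.0]))
```

The resulting (u, v) pixel location defines the image region of the higher-resolution CMOS frame on which the SVM classifier is then run.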