In the Japanese language, the word "Kansei" is used to indicate mental states or activities that occur in response to stimulation from the outside world. For example, emotion is defined as a component of Kansei. In the West, words such as sensibility and perceptiveness are often used with meanings similar to Kansei. Furthermore, in the Japanese field of Kansei engineering, designers strive to integrate human feelings and sensibilities more closely with manufacturing and production activities. Accordingly, if Kansei-related information can be extracted in various situations, distinctive design, development, and evaluation processes become possible. In this paper, we describe the construction of an inexpensive, near real-time system capable of extracting Kansei information from user facial expressions. To accomplish this, we developed a computer vision system using the free OpenCV image-processing library. The characteristic values calculated from these inputs were the variations (normalized polygon-area change ratios) of four features: the inner ends of the eyebrows, the upper and lower parts of the eyes, and the corners of the mouth, all computed from seven facial nodes. Three Kansei information types were targeted: positive Kansei information (appreciation, fun, and happiness), negative Kansei information (unpleasantness and distaste), and Kansei information associated with shock or surprise. With the assistance of 12 test subjects, we began working to obtain specific Kansei knowledge, and quickly determined that the relationships between facial expressions and Kansei information were highly individual in nature. Therefore, in the next stage, we formulated an algorithm to discriminate the three Kansei information types while focusing on a single subject, and obtained good results.
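The normalized polygon-area change ratio described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the landmark coordinates, the choice of polygon, and the normalization against a neutral (template) frame are all assumptions made for the example.

```python
# Sketch: normalized polygon-area change ratio for a facial feature region.
# The polygon is spanned by face-node (landmark) coordinates; its area is
# compared against the area in a neutral template frame. All coordinates
# below are made-up illustrative values, not measured data.

def polygon_area(points):
    """Area of a simple polygon given as (x, y) pairs, via the shoelace formula."""
    n = len(points)
    area = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def area_change_ratio(neutral_points, current_points):
    """Ratio of the current polygon area to the neutral (template) area."""
    base = polygon_area(neutral_points)
    return polygon_area(current_points) / base if base else 0.0

# Example: a region around the mouth corners widening in a smile
neutral = [(0, 0), (4, 0), (4, 2), (0, 2)]   # template frame, area 8
smiling = [(0, 0), (5, 0), (5, 2), (0, 2)]   # current frame, area 10
print(area_change_ratio(neutral, smiling))   # -> 1.25
```

In practice the node coordinates would come from a face-tracking step (e.g. OpenCV-based detection against the per-subject template images the paper describes); only the ratio computation is shown here.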
However, since facial reactions vary significantly among individuals, it is currently necessary to prepare a seven-point template image in advance for each person to be examined, and then to adjust the algorithm parameters after measuring that person's facial expressions in order to obtain Kansei information. Despite these initial difficulties, our results show that an inexpensive, near real-time computer vision system capable of extracting Kansei information from facial expressions can be constructed. We believe that such systems can make it possible to apply Kansei information decision support to a variety of automatic evaluation and design processes.
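The per-subject discrimination step can be sketched as a simple thresholding of the area-change ratios. The threshold values and the mapping from features to categories below are hypothetical placeholders standing in for the per-subject parameter adjustment the paper describes; they are not the authors' tuned values.

```python
# Sketch: discriminating the three Kansei information types from the
# area-change ratios of the measured facial features. Thresholds are
# illustrative defaults that would be re-tuned for each subject against
# their seven-point template, as noted in the paper.

def classify_kansei(eyebrow_ratio, eye_ratio, mouth_ratio,
                    surprise_thresh=1.3, positive_thresh=1.1,
                    negative_thresh=0.9):
    """Map normalized area-change ratios to a Kansei category label."""
    if eye_ratio >= surprise_thresh:      # widened eyes: shock/surprise
        return "surprise"
    if mouth_ratio >= positive_thresh:    # raised mouth corners: positive
        return "positive"
    if eyebrow_ratio <= negative_thresh:  # lowered inner eyebrows: negative
        return "negative"
    return "neutral"

print(classify_kansei(1.0, 1.4, 1.0))  # -> surprise
print(classify_kansei(1.0, 1.0, 1.2))  # -> positive
print(classify_kansei(0.8, 1.0, 1.0))  # -> negative
```

Because the relationships between expressions and Kansei information proved highly individual, the keyword arguments exist precisely so each subject can receive their own threshold set after a calibration measurement.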