Computer interfaces for severely limb-disabled persons are an important research issue. Head, eye, and mouth tracking are the main sources of information for such interfaces. Because a severely limb-disabled person cannot use body parts below the neck, it is important to enrich the information that can be obtained from the head alone. This paper explores the use of mouth information, more precisely cheek information. Although recent progress in speech recognition technology has enabled interfaces that control various machines accurately, limb-disabled persons who cannot speak cannot use such speech-based interfaces. Computer interfaces must be developed according to the degree of disability and must make effective use of the body parts that the user can still move. Because the degrees of disability vary widely among handicapped persons, it is desirable to prepare a variety of computer interfaces, each using different body parts to cope with different handicaps. From this viewpoint, this paper proposes an interface based on visual changes in the appearance of the cheek captured by a web camera. Experimental results with a two-layer convolutional deep learning network show an average recognition accuracy of 97%. In addition, the effects of the image size and the deep learning network structure on recognition performance are reported in this paper.
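
To illustrate the kind of two-layer convolutional network referred to above, the following is a minimal sketch in PyTorch. The layer widths, kernel sizes, the assumed 64x64 grayscale input resolution, and the number of output classes are illustrative assumptions, not the exact configuration used in the paper.

```python
# Minimal sketch of a two-layer convolutional classifier for webcam cheek images.
# Layer sizes, kernel sizes, the 64x64 grayscale input, and the number of classes
# are illustrative assumptions, not the paper's exact network configuration.
import torch
import torch.nn as nn


class CheekCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # first convolutional layer
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),   # second convolutional layer
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)


if __name__ == "__main__":
    model = CheekCNN(num_classes=2)
    dummy = torch.randn(1, 1, 64, 64)   # one grayscale cheek image
    print(model(dummy).shape)           # torch.Size([1, 2])
```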