Artificial tactile systems can improve the quality of life of people who have lost the sense of touch. These systems use sensors and digital, battery-operated embedded units for data processing; accordingly, the processing must run on low-power, resource-constrained devices. This paper presents a framework based on 1-D convolutional neural networks (CNNs) that tackles the problem of classifying touch modalities while limiting the number of architecture parameters. The paper also considers the computational cost of the pre-processing stage that handles tactile-sensor data before classification; this pre-processing unit affects resource occupancy, computational cost, and ultimately classification accuracy. The experimental evaluation involved a state-of-the-art real-world dataset containing three touch modalities. The 1-D CNN outperformed existing solutions in terms of accuracy and showed a satisfactory trade-off between accuracy, computational cost, and resource occupancy. An implementation of the 1-D CNN classifier on an Arduino Nano 33 BLE device yielded real-time performance.
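To make the setting concrete, the sketch below shows what a parameter-limited 1-D CNN for a three-class touch-modality problem might look like in Keras. It is a minimal illustration, not the paper's architecture: the input length, filter counts, and kernel sizes are assumptions chosen to keep the parameter count small enough for microcontroller deployment; a model of this kind would typically be converted with TensorFlow Lite for on-device inference on a board such as the Arduino Nano 33 BLE.

```python
# Hypothetical sketch of a compact 1-D CNN touch-modality classifier.
# The input length, filter counts, and kernel sizes are illustrative
# assumptions, not the architecture described in the paper.
import tensorflow as tf

NUM_CLASSES = 3   # three touch modalities, as in the dataset
INPUT_LEN = 400   # assumed length of one pre-processed tactile time series


def build_model() -> tf.keras.Model:
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(INPUT_LEN, 1)),
        # Small filter counts keep the parameter budget low for MCU use.
        tf.keras.layers.Conv1D(8, kernel_size=5, activation="relu"),
        tf.keras.layers.MaxPooling1D(pool_size=4),
        tf.keras.layers.Conv1D(16, kernel_size=5, activation="relu"),
        # Global pooling avoids a large dense layer after flattening.
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])


model = build_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # inspect the parameter count against the MCU memory budget

# After training, tf.lite.TFLiteConverter can export the model for
# TensorFlow Lite for Microcontrollers on the target embedded device.
```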