Computer vision-based traffic sign detection and recognition is an active field of research, but the task becomes challenging when the sign of interest is partially occluded by nearby objects such as a tree, pole, or vehicle. Another difficulty, posed especially in developing countries, is the lost-colors problem that arises from aging and poor maintenance. This work presents an automatic technique that focuses only on the visible parts of a sign and suppresses the occluded portions. Features are extracted using a convolutional neural network-inspired invariant feature extraction technique, augmented with feature-interaction-based dimensionality reduction. Further, an adaptive system for continuous learning is proposed through the use of dynamic parameter estimation. Since the effect of partial occlusion has not been thoroughly studied, no benchmark database is available for this purpose. We have prepared two datasets by combining naturally and synthetically occluded images taken from field surveys and from the well-known GTSRB database. Experiments reveal that our technique outperforms state-of-the-art recognition methods previously used for visible and occluded signs, achieving average precision of 0.81 and recall of 0.79. The proposed method also maintains a remarkably low error rate as the amount of occlusion increases.