This thesis focuses on resolving issues in high-dimensional learning for neural networks. The curse of dimensionality has limited the growth of artificial neural network applications. While the curse is evident in current artificial systems, the stability and plasticity of the biological human brain show no sign of this problem; much of the information processing within the Prefrontal Cortex operates on semantically transformed information. The work presented here is divided into four stages. At the basic stratum, an original stable platform is defined, the K-iterations Fast Learning Artificial Neural Network (KFLANN), upon which hierarchical systems can be built. Effort was spent on defining a platform that provides consistent clustering behavior regardless of the Data Presentation Sequence (DPS). The second stage presents an original attempt to fuse the KFLANN with an approximation system by combining it with Canonical Correlation Analysis (CCA). CCA is similar to PCA, except that it captures correlations between sets of variables rather than the variance within a single set. The resultant hierarchical system is called the HieFLANN. The third stage presents a second hierarchical model, but one that differs from the HieFLANN in concept. The resultant HieFLANN_BP model cascades atomized network models that process segments of the original problem, and then recombines the processed information at varying hierarchical layers.
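The following is not the thesis's KFLANN algorithm, whose details are not given in this section; it is a minimal Python sketch, with hypothetical names `cluster_pass` and `stable_clustering`, of the general idea behind DPS-insensitive clustering: repeat a tolerance-based assignment pass, reseeding from the previous centroids, until the centroid set reaches a fixed point, so that the final clusters no longer depend on the order in which the data were presented.

```python
import numpy as np

def cluster_pass(data, centroids, tolerance):
    """One pass: assign each point to the nearest centroid within
    `tolerance`, or spawn a new centroid; then recompute centroids."""
    centroids = [c.copy() for c in centroids]
    assignments = []
    for x in data:
        if centroids:
            dists = [np.linalg.norm(x - c) for c in centroids]
            j = int(np.argmin(dists))
            if dists[j] <= tolerance:
                assignments.append(j)
                continue
        centroids.append(np.asarray(x, dtype=float).copy())
        assignments.append(len(centroids) - 1)
    # Recompute each surviving centroid as the mean of its members.
    new_centroids = []
    for j in range(len(centroids)):
        members = [x for x, a in zip(data, assignments) if a == j]
        if members:
            new_centroids.append(np.mean(members, axis=0))
    return new_centroids

def stable_clustering(data, tolerance, max_iters=50):
    """Iterate passes until the centroids stop changing; the fixed
    point, not the presentation order, determines the clustering."""
    centroids = []
    for _ in range(max_iters):
        new_centroids = cluster_pass(data, centroids, tolerance)
        if len(new_centroids) == len(centroids) and all(
                np.allclose(a, b) for a, b in zip(centroids, new_centroids)):
            break
        centroids = new_centroids
    return centroids

# Example: cluster 100 random 5-dimensional points.
rng = np.random.default_rng(0)
data = rng.normal(size=(100, 5))
print(len(stable_clustering(data, tolerance=2.5)))
```

For reference on the CCA step, and as a reminder of the standard formulation only (the covariance notation below is ours, not drawn from the thesis), CCA seeks paired projection directions $\mathbf{w}_x$ and $\mathbf{w}_y$ that maximize the correlation between the projections of two variable sets $X$ and $Y$:

\[
\max_{\mathbf{w}_x,\,\mathbf{w}_y}\;
\rho \;=\;
\frac{\mathbf{w}_x^{\top}\boldsymbol{\Sigma}_{xy}\mathbf{w}_y}
{\sqrt{\mathbf{w}_x^{\top}\boldsymbol{\Sigma}_{xx}\mathbf{w}_x}\;
 \sqrt{\mathbf{w}_y^{\top}\boldsymbol{\Sigma}_{yy}\mathbf{w}_y}}
\]

where $\boldsymbol{\Sigma}_{xx}$ and $\boldsymbol{\Sigma}_{yy}$ are the within-set covariance matrices and $\boldsymbol{\Sigma}_{xy}$ the between-set covariance. PCA, by contrast, maximizes the variance $\mathbf{w}^{\top}\boldsymbol{\Sigma}_{xx}\mathbf{w}$ within a single set.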