To overcome the extremely time-consuming training of deep learning (DL), the broad learning system (BLS) was proposed as an alternative. This model is simple, fast, and easy to update. To ensure the fitting and generalization ability of BLS, however, the hidden layer is often configured with far more neurons than are actually needed. In this paper, greedy BLS (GBLS) is proposed to address this redundancy of the hidden layer in BLS from another perspective. Unlike BLS, the structure of GBLS can be viewed as a combination of unsupervised multi-layer feature representation and supervised classification or regression. GBLS is trained with a greedy learning scheme: principal component analysis (PCA) is performed on the previous hidden layer to form a set of compressed nodes, which are then transformed and passed through nonlinear activation functions to produce enhancement nodes. The new hidden layer consists of all newly generated compressed nodes and enhancement nodes, and the process repeats. The last hidden layer of the network contains the higher-order, abstract essential features of the original data and is connected to the output layer. Each time a new layer is added to the model, there is no need to retrain from scratch; only the newly added layer is trained. Experimental results demonstrate that the proposed GBLS model outperforms BLS in both classification and regression.
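The following is a minimal sketch of one greedy layer-construction step as described above, not the authors' implementation: it assumes NumPy, a tanh activation, randomly drawn enhancement weights, and a ridge-regression readout for the output layer; the specific activation, weight initialization, number of nodes, and output solver used in GBLS are assumptions here.

```python
import numpy as np

def add_greedy_layer(H_prev, n_components, n_enhance, rng):
    """One greedy step (sketch): PCA-compress the previous hidden layer,
    then generate nonlinear enhancement nodes from the compressed nodes."""
    # PCA via SVD on the centered previous hidden layer -> compressed nodes
    H_centered = H_prev - H_prev.mean(axis=0)
    _, _, Vt = np.linalg.svd(H_centered, full_matrices=False)
    C = H_centered @ Vt[:n_components].T          # compressed nodes

    # Randomly weighted transform of the compressed nodes followed by a
    # nonlinear activation (tanh as a placeholder) -> enhancement nodes
    W = rng.standard_normal((C.shape[1], n_enhance))
    b = rng.standard_normal(n_enhance)
    E = np.tanh(C @ W + b)

    # New hidden layer = concatenation of compressed and enhancement nodes
    return np.hstack([C, E])

def fit_output_weights(H_last, Y, reg=1e-3):
    """Ridge-regression readout from the last hidden layer (an assumption;
    BLS-style models typically solve the output weights in closed form)."""
    A = H_last.T @ H_last + reg * np.eye(H_last.shape[1])
    return np.linalg.solve(A, H_last.T @ Y)

# Usage sketch: stack layers greedily; only the newest layer is built each time,
# earlier layers are left untouched.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))               # toy input data
Y = rng.standard_normal((200, 1))                # toy regression targets
H = X
for _ in range(3):                                # three greedy layers
    H = add_greedy_layer(H, n_components=10, n_enhance=20, rng=rng)
W_out = fit_output_weights(H, Y)
prediction = H @ W_out
```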