Recently, considerable research attention has been paid to network embedding, a popular approach for constructing feature vectors of vertices in a latent space. Due to the curse of dimensionality and the sparsity of graph data, this approach has become indispensable for machine learning tasks over large networks. The majority of the existing literature considers this technique under the assumption that the network is static. However, in many applications, including social networks, collaboration networks, and recommender systems, nodes and edges accrue to a growing network as a stream. Moreover, high-throughput production machine learning systems must promptly generate representations for newly arriving vertices. A small number of very recent results have addressed the problem of embedding dynamic networks. However, they either rely on knowledge of vertex attributes, suffer from high time complexity, or require retraining because no closed-form update exists. Thus, adapting existing methods designed for static or dynamic networks to the streaming setting faces non-trivial technical challenges. These challenges motivate the development of new approaches to streaming network embedding. In this paper, we propose a new framework that generates latent features for new vertices with high efficiency and low complexity under a specified number of iteration rounds. We formulate a constrained optimization problem for modifying the representation upon each stream arrival. We show that this problem has no closed-form solution and instead develop an online approximate solution. Our solution follows three steps: (1) identifying the vertices affected by newly arrived ones, (2) generating latent features for the new vertices, and (3) updating the latent features of the most affected vertices. The generated representations are provably feasible and close to the optimal ones in expectation.
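The three-step online update can be sketched as follows. This is a minimal illustration only: it assumes a simple first-order-proximity objective and restricts the affected set to the new vertex's existing neighbors; the class name, hyperparameters, and update rule are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

class StreamingEmbedder:
    """Hypothetical sketch of a streaming embedding update.

    Assumes a first-order-proximity loss 0.5 * ||z_v - z_u||^2 over
    connected pairs; the real framework solves a constrained
    optimization problem with an online approximate solution.
    """

    def __init__(self, dim=16, lr=0.05, rounds=5, seed=0):
        self.dim, self.lr, self.rounds = dim, lr, rounds
        self.rng = np.random.default_rng(seed)
        self.emb = {}   # vertex -> latent feature vector
        self.adj = {}   # vertex -> set of neighbors

    def add_vertex(self, v, edges):
        # Step 1: identify affected vertices -- here, simply the
        # already-embedded neighbors of the newly arrived vertex.
        affected = set(edges) & self.emb.keys()
        self.adj.setdefault(v, set()).update(edges)
        for u in edges:
            self.adj.setdefault(u, set()).add(v)
        # Step 2: generate a latent feature for the new vertex,
        # initialized near the mean of its neighbors' embeddings.
        if affected:
            init = np.mean([self.emb[u] for u in affected], axis=0)
        else:
            init = np.zeros(self.dim)
        self.emb[v] = init + 0.01 * self.rng.standard_normal(self.dim)
        # Step 3: run a fixed number of iteration rounds updating the
        # new vertex and the most affected vertices, pulling each
        # connected pair closer in latent space.
        for _ in range(self.rounds):
            for u in affected:
                g = self.emb[v] - self.emb[u]  # gradient of the pair loss
                self.emb[v] -= self.lr * g
                self.emb[u] += self.lr * g
```

Because each arrival touches only a bounded affected set for a fixed number of rounds, the per-arrival cost stays low, which is the property the framework targets.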
Multi-class classification and clustering experiments on five real-world networks demonstrate that our model efficiently updates vertex representations while achieving comparable or even better performance than model retraining.