We analyze networks that feature reputational learning: how agents form links under incomplete information, how they learn about their neighbors through those links, and how links may ultimately be broken. We show that the type of information agents can access, and the speed at which they learn about one another, have substantial repercussions for both the evolution of the network and overall social welfare. In particular, faster learning can harm the network as a whole when agents are myopic: such agents fail to fully internalize the benefits of experimentation and sever links too quickly. Consequently, preventing two agents from linking can be socially beneficial, even when both are initially believed to be of high quality, because fewer connections slow the rate at which the network learns about these agents and thereby reduce premature severance. An alternative remedy for the informational problem is to impose costs on breaking links, which incentivizes agents to experiment more patiently.
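To make the mechanism concrete, the following is a minimal Monte Carlo sketch, not the paper's model: an agent holds a Bayesian belief about a partner's hidden quality and, being myopic, severs the link as soon as the expected flow payoff turns negative. All names and values here (`PRIOR_GOOD`, the success probabilities `p_good`/`p_bad` whose gap proxies for learning speed, the payoffs, and the horizon) are illustrative assumptions. Widening the signal gap speeds learning, and under a myopic cutoff it can also raise the share of high-quality links severed after an unlucky early outcome.

```python
import random

# Toy illustration of myopic link-breaking under Bayesian learning.
# Parameter values are illustrative assumptions, not taken from the paper.
PRIOR_GOOD = 0.6       # prior probability the partner is high quality
PAYOFF_SUCCESS = 1.0   # flow payoff from a successful interaction
PAYOFF_FAILURE = -1.0  # flow payoff from a failed interaction
HORIZON = 30           # number of interaction periods
N_SIMS = 20_000        # Monte Carlo replications


def simulate_link(p_good: float, p_bad: float) -> tuple[float, bool]:
    """Simulate one link until the myopic agent breaks it or the horizon ends.

    p_good / p_bad are the per-period success probabilities of a high- and
    low-quality partner; a larger gap means more informative signals, i.e.
    faster learning. Returns (realized total payoff, whether a link to a
    *good* partner was broken)."""
    is_good = random.random() < PRIOR_GOOD
    p_true = p_good if is_good else p_bad
    belief = PRIOR_GOOD
    total = 0.0
    flow_good = p_good * PAYOFF_SUCCESS + (1 - p_good) * PAYOFF_FAILURE
    flow_bad = p_bad * PAYOFF_SUCCESS + (1 - p_bad) * PAYOFF_FAILURE
    for _ in range(HORIZON):
        # Myopic rule: keep the link only while the expected flow payoff
        # exceeds the outside option (normalized to 0); no option value of
        # continued experimentation is taken into account.
        if belief * flow_good + (1 - belief) * flow_bad <= 0:
            return total, is_good  # link broken; flag if partner was good
        success = random.random() < p_true
        total += PAYOFF_SUCCESS if success else PAYOFF_FAILURE
        # Bayesian update of the belief on the observed outcome.
        like_good = p_good if success else 1 - p_good
        like_bad = p_bad if success else 1 - p_bad
        belief = belief * like_good / (belief * like_good + (1 - belief) * like_bad)
    return total, False  # horizon reached without breaking the link


def run(label: str, p_good: float, p_bad: float) -> None:
    payoffs, broken_good = [], 0
    for _ in range(N_SIMS):
        pay, broke_good = simulate_link(p_good, p_bad)
        payoffs.append(pay)
        broken_good += broke_good
    print(f"{label}: mean payoff {sum(payoffs) / N_SIMS:+.3f}, "
          f"good links broken {broken_good / N_SIMS:.1%}")


if __name__ == "__main__":
    random.seed(0)
    run("slow learning (weak signals)  ", p_good=0.60, p_bad=0.45)
    run("fast learning (strong signals)", p_good=0.80, p_bad=0.30)
```

Whether faster learning lowers average welfare in this toy setting depends on the chosen parameters; the point of the sketch is only to exhibit the premature-severance channel (high-quality links dropped after early bad signals) on which the welfare results build.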