Over the last 20 years, the presentation and transmission of information over the Web in forms such as e-mail, instant messaging, documents, blogs, and news has led to a dramatic increase in the amount of data in digital environments, which in turn has increased the importance of research on knowledge extraction from unstructured data. Since the 2000s, one of the primary goals of researchers in the field of artificial intelligence has been to extract knowledge from heterogeneous data sources on the World Wide Web, including real-life entities and the semantic relationships between them, and to represent this knowledge in a machine-readable format. Advances in natural language processing and information extraction have increased the importance of large-scale knowledge bases in complex applications and have enabled scalable information extraction, together with the detection of entities and relationships, from semi-structured and unstructured heterogeneous data sources on the Web. These advances made possible the automatic construction of prominent knowledge bases in this field such as DBpedia, YAGO, NELL, Freebase, Probase, Google Knowledge Vault, and IBM Watson, which contain millions of semantic relationships between hundreds of thousands of entities and represent the extracted knowledge in a machine-readable format. Within the scope of this article, Web-scale (end-to-end) knowledge extraction from heterogeneous data sources is surveyed, and its methods, challenges, and opportunities are presented.