In this digital era, images are an essential medium of communication. They appear on magazine covers, in legal evidence, in newspapers, and on social media. With the advancement of powerful editing tools, image manipulation poses a serious concern for the spread of misinformation. Image splicing is one of the leading strategies for image forgery. Although several approaches address the problem, many suffer from shortcomings such as inaccurate localization of the forged regions. We present an effective image-splicing forgery localization model called VGG16Unet. VGG16Unet is an encoder-decoder architecture for localizing spliced regions: the pre-trained VGG16 serves as the encoder, and the decoder is inspired by the UNet architecture. The forged images and their corresponding ground-truth masks are fed to the network as patches to train the model. The model is evaluated on widely used image-splicing forgery datasets, namely NIST and CASIA v2. The experimental results show that VGG16Unet localizes spliced forged regions more effectively than prior methods.
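To illustrate the encoder-decoder design described above, the following is a minimal sketch (not the authors' exact implementation) of a VGG16 encoder combined with a UNet-style decoder that predicts a per-pixel forgery mask. The patch size, filter counts, skip-connection choices, and training settings are illustrative assumptions rather than values taken from the paper.

```python
# Minimal VGG16Unet-style sketch: ImageNet-pretrained VGG16 encoder,
# UNet-inspired decoder, sigmoid output for the binary splicing mask.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

def build_vgg16_unet(input_shape=(256, 256, 3)):
    # Encoder: pre-trained VGG16 without its classification head.
    vgg = VGG16(include_top=False, weights="imagenet", input_shape=input_shape)

    # Skip connections from the last conv layer of each VGG16 block.
    skips = [vgg.get_layer(name).output for name in
             ("block1_conv2", "block2_conv2", "block3_conv3", "block4_conv3")]
    bottleneck = vgg.get_layer("block5_conv3").output

    # Decoder: upsample, concatenate the matching skip, then two conv layers,
    # mirroring the UNet expansion path.
    x = bottleneck
    for skip, filters in zip(reversed(skips), (512, 256, 128, 64)):
        x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skip])
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

    # One-channel sigmoid output: per-pixel probability of being spliced.
    mask = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return Model(inputs=vgg.input, outputs=mask, name="VGG16Unet")

model = build_vgg16_unet()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(patch_images, patch_masks, ...)  # patches of forged images and their masks
```

In this sketch the forged-image patches and their mask patches would be supplied as training pairs, matching the patch-based training described in the abstract; the specific loss and optimizer shown are placeholders.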