Strong Lensing is a powerful probe of the matter distribution in galaxies and clusters and a relevant tool for cosmography. Deep Learning analyses of strong gravitational lenses have become a popular approach due to the rarity and image complexity of these astronomical objects. Next-generation surveys will provide more opportunities to derive science from these objects and an increasing data volume to be analyzed. However, finding strong lenses is challenging, as their number densities are orders of magnitude below those of galaxies. Therefore, specific Strong Lensing search algorithms are required to discover the highest number of systems possible with high purity and a low false alarm rate. The need for better algorithms has prompted the development of an open community data science competition named the Strong Gravitational Lensing Challenge (SGLC). This work presents the Deep Learning strategies and methodology used to design the highest-scoring algorithm in the II SGLC. We discuss the approach used for this dataset, the choice of a suitable architecture, particularly the use of a network with two branches to process images at different resolutions, and its optimization. We also discuss the detectability limit, the lessons learned, and the prospects for designing an architecture tailored to a specific survey in contrast to a general one. Finally, we release the models and discuss the best choices for adapting the model to a dataset representing a survey with a different instrument. This work helps to take a step towards efficient, adaptable, and accurate analyses of strong lenses with deep learning frameworks.
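The two-branch idea mentioned above can be illustrated with a minimal NumPy sketch. This is not the competition model: the pooling sizes, untrained zero weights, and concatenation-plus-sigmoid head are placeholders chosen only to show how features extracted from a low-resolution and a high-resolution cutout of the same object can be fused into a single lens-probability score.

```python
import numpy as np

def branch_features(img, pool=2):
    """Toy branch: average-pool a 2D cutout and flatten it to a feature
    vector. A real branch would be a stack of convolutional layers."""
    h, w = img.shape
    img = img[:h - h % pool, :w - w % pool]          # crop to a multiple of pool
    pooled = img.reshape(h // pool, pool, w // pool, pool).mean(axis=(1, 3))
    return pooled.ravel()

def two_branch_score(img_lowres, img_highres, weights=None, bias=0.0):
    """Fuse both resolution branches by concatenation and score the result
    with a single sigmoid unit (a stand-in for the dense classification head)."""
    feats = np.concatenate([branch_features(img_lowres, pool=2),
                            branch_features(img_highres, pool=4)])
    if weights is None:
        weights = np.zeros_like(feats)               # untrained placeholder weights
    logit = feats @ weights + bias
    return 1.0 / (1.0 + np.exp(-logit))

# Example: a 32x32 low-resolution and a 64x64 high-resolution cutout
rng = np.random.default_rng(0)
score = two_branch_score(rng.random((32, 32)), rng.random((64, 64)))
print(score)  # untrained zero weights give 0.5
```

Pooling each branch with a stride matched to its resolution (2 for the 32x32 input, 4 for the 64x64 input) yields feature vectors of equal length, so neither resolution dominates the concatenated representation.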