Reconstruction of submerged targets from multiple views (called multi-look imaging, or ML) is a technology adapted from machine vision and tomographic imaging whose utility is emerging in airborne battlefield surveillance applications. In summary, ML integrates a sufficient number of views, each of which may have different portions of a target or scene visible. The integration process is designed to support semi-automatic derivation of a solid model that portrays the imaged target or scene. This approach is advantageous for target and feature location, especially in the presence of partial obscuration due to turbid optical media, overlying cover, or occlusion by other target objects.

In previous research, we analyzed the application of ML techniques to airborne imaging of surface and underwater targets [1][2][3]. Part 1 of this two-paper series presents an error analysis of imaging through the sea surface (also called trans-MBL imaging, for Marine Boundary Layer). Emphasis was placed on target location errors resulting from optical effects such as scattering and absorption, as well as from sampling and estimation of the interfacial topography. In this paper, we extend that research to include algorithms for estimation of target submergence depth and lateral feature position. Increased accuracy is obtained by partially restoring the received image to reduce spatial dissociation due to interfacial refraction and blurring due to scattering within aqueous or aerosol media; a simplified sketch of the underlying refraction geometry follows this section.

Performance analysis is conducted with received imagery simulated by our existing computer models. Algorithms are expressed in image algebra, a rigorous, concise notation for signal and image processing that unifies linear and nonlinear mathematics in the image domain. Image algebra has been implemented on numerous workstations and parallel processors; hence, our algorithms are widely portable.
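To make the refraction geometry concrete, the following minimal Python sketch illustrates two effects referred to above for an idealized flat air-water interface: the lateral dissociation between a feature's apparent and true positions induced by Snell's-law refraction, and a two-look triangulation of submergence depth. The function names, the flat-interface and single-plane geometry, and the refractive indices (n_air = 1.00, n_water = 1.34) are illustrative assumptions introduced here; this is not the restoration or estimation algorithm developed in the paper.

    import math

    N_AIR = 1.00      # assumed refractive index of air
    N_WATER = 1.34    # assumed refractive index of seawater (illustrative value)

    def refracted_angle(theta_i):
        """Snell's law at a flat air-water interface: return the underwater
        (transmitted) angle from the vertical for an in-air incidence angle
        theta_i, both in radians."""
        return math.asin(N_AIR * math.sin(theta_i) / N_WATER)

    def lateral_dissociation(theta_i, depth):
        """Horizontal offset between the apparent (unrefracted) and true
        positions of a feature at the given depth, as seen in a single look."""
        theta_t = refracted_angle(theta_i)
        return depth * (math.tan(theta_i) - math.tan(theta_t))

    def depth_from_two_looks(x_s1, theta_i1, x_s2, theta_i2):
        """Triangulate submergence depth from two looks of the same feature.

        x_s1, x_s2         horizontal coordinates where each line of sight
                           crosses the (flat) surface, in a common vertical plane
        theta_i1, theta_i2 signed in-air incidence angles of the two rays

        Solves x_s1 + d*tan(theta_t1) = x_s2 + d*tan(theta_t2) for the depth d.
        """
        t1 = math.copysign(math.tan(refracted_angle(abs(theta_i1))), theta_i1)
        t2 = math.copysign(math.tan(refracted_angle(abs(theta_i2))), theta_i2)
        return (x_s2 - x_s1) / (t1 - t2)

    if __name__ == "__main__":
        # A feature 5 m below a flat surface, viewed at +30 deg and -20 deg:
        # the surface-crossing points are about -2.01 m and +1.32 m, and the
        # two-look estimate recovers the 5 m submergence depth.
        print(lateral_dissociation(math.radians(30), 5.0))       # ~0.88 m offset
        print(depth_from_two_looks(-2.0109, math.radians(30),
                                   1.3199, math.radians(-20)))   # ~5.0 m

This sketch omits the wave-induced surface topography, scattering, and blur treated in the body of the paper; it only shows why looks at different incidence angles carry depth information once the refraction-induced dissociation is modeled.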