2021
DOI: 10.3390/rs13214220

MADNet 2.0: Pixel-Scale Topography Retrieval from Single-View Orbital Imagery of Mars Using Deep Learning

Abstract: The High-Resolution Imaging Science Experiment (HiRISE) onboard the Mars Reconnaissance Orbiter provides the highest-spatial-resolution remotely sensed imagery of the surface of Mars, at 25–50 cm/pixel. However, because the spatial resolution is so high, the total area covered by HiRISE targeted stereo acquisitions is very limited. This results in a lack of high-resolution digital terrain models (DTMs) better than 1 m/pixel. Such high-resolution DTMs have always been considere…

Cited by 13 publications (30 citation statements)
References: 62 publications
“…MARSGAN [30] and MADNet [29] are both based on the relativistic GAN framework [72,73] that involves training of a generator network and a relativistic discriminator network in parallel. For MARSGAN SRR, the generator network is trained to produce potential SRR estimations, whilst the discriminator network is trained in parallel (and updated in an alternating manner with the generator network) to estimate the probability of the given training higher-resolution images being relatively more realistic than the generated SRR images on average (within a small batch), whereas for MADNet SDE, the generator network is trained to produce per-pixel relative heights, and the discriminator network is trained to distinguish the predicted heights from the ground-truth heights.…”
Section: Overview of MARSGAN SRR and MADNet SDE (mentioning)
confidence: 99%
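The relativistic average GAN objective described in the quoted statement can be sketched compactly. The following is a minimal illustration, assuming PyTorch; the function names and the logit tensors (real_logits, fake_logits) are placeholders of my own and are not taken from the published MARSGAN or MADNet code.

import torch
import torch.nn.functional as F

def ragan_discriminator_loss(real_logits: torch.Tensor,
                             fake_logits: torch.Tensor) -> torch.Tensor:
    # The discriminator scores how much more realistic each real sample is
    # than the batch-average generated sample, and vice versa.
    loss_real = F.binary_cross_entropy_with_logits(
        real_logits - fake_logits.mean(), torch.ones_like(real_logits))
    loss_fake = F.binary_cross_entropy_with_logits(
        fake_logits - real_logits.mean(), torch.zeros_like(fake_logits))
    return 0.5 * (loss_real + loss_fake)

def ragan_generator_loss(real_logits: torch.Tensor,
                         fake_logits: torch.Tensor) -> torch.Tensor:
    # The generator is rewarded when its outputs appear more realistic than
    # the batch-average real sample (the symmetric, "relativistic" form).
    loss_real = F.binary_cross_entropy_with_logits(
        real_logits - fake_logits.mean(), torch.zeros_like(real_logits))
    loss_fake = F.binary_cross_entropy_with_logits(
        fake_logits - real_logits.mean(), torch.ones_like(fake_logits))
    return 0.5 * (loss_real + loss_fake)

The two losses are minimised in alternation, matching the alternating generator/discriminator updates described in the quoted statement; for MADNet SDE the generator output would be a per-pixel relative height map rather than a super-resolved image.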
“…Apart from the globally available lower resolution (~463 m/pixel) Mars Orbiter Laser Altimeter (MOLA) DTM [11,12], higher resolution Mars DTMs are typically produced from the 12.5-50 m/pixel Mars Express High Resolution Stereo Camera (HRSC) images [13], the 6 m/pixel Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) images [14], the ~4.6 m/pixel (4 m/pixel nominal resolution) ExoMars Trace Gas Orbiter (TGO) Colour and Stereo Surface Imaging System (CaSSIS) images [15,16], and the ~30 cm/pixel (25 cm/pixel nominal resolution) MRO High Resolution Imaging Science Experiment (HiRISE) images [17]. The DTM products derived from these imaging sources often have different effective resolutions and spatial coverage, depending on the properties of the input images and the DTM retrieval methods, which include traditional photogrammetric methods [18][19][20][21], photoclinometry methods [22][23][24][25], and deep learning-based methods [26][27][28][29].…”
Section: Introduction (mentioning)
confidence: 99%
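As a quick sense of scale for the resolutions quoted in this statement, a back-of-the-envelope comparison can be made; the sketch below is illustrative only, using the nominal values quoted above.

# Ground sampling distances (GSD) quoted in the statement above, in m/pixel.
gsd_m_per_pixel = {
    "MOLA DTM": 463.0,   # ~463 m/pixel global DTM
    "HRSC": 12.5,        # 12.5-50 m/pixel (best case)
    "CTX": 6.0,          # 6 m/pixel
    "CaSSIS": 4.6,       # ~4.6 m/pixel
    "HiRISE": 0.30,      # ~30 cm/pixel
}

hirise = gsd_m_per_pixel["HiRISE"]
for name, gsd in gsd_m_per_pixel.items():
    linear = gsd / hirise  # how many times coarser than HiRISE, per axis
    print(f"{name:9s} {gsd:7.2f} m/px -> {linear:7.1f}x coarser than HiRISE "
          f"(~{linear**2:,.0f} HiRISE pixels per pixel footprint)")

One MOLA DTM post therefore spans roughly 1,500 HiRISE pixels per axis, which is the resolution gap that the image-based DTM retrieval methods cited here aim to close.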