2022
DOI: 10.1016/j.rama.2022.03.007
Evaluating Mesquite Distribution Using Unpiloted Aerial Vehicles and Satellite Imagery

Cited by 2 publications (8 citation statements) · References 56 publications
“…Imagery captured by the drone was processed in Pix4Dmapper (Pix4D S. A., Prilly, Switzerland), which enabled us to stitch overlapping images together to create 2‐D orthomosaics and 3‐D models of each flight (DiMaggio et al 2020, Page et al 2022). Pix4Dmapper uses the structure from motion algorithm to create 3‐D photogrammetric meshes and 3‐D point cloud datasets (X, Y, Z), generating a digital surface model (DSM) and digital terrain model (DTM; Kuzelka and Surovy 2018, DiMaggio et al 2020, Page et al 2022). The DSM represents height values of the vegetation canopy and the DTM depicts elevation values of the terrain (Jimenez‐Jimenez et al 2021).…”
Section: Methods
confidence: 99%
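The DSM/DTM relationship described in the excerpt implies a canopy height model obtained by subtracting terrain elevation from surface elevation (CHM = DSM − DTM). A minimal NumPy sketch of that step, assuming the two rasters are already co-registered arrays in meters (the function name and the toy grids are illustrative, not from the paper):

```python
import numpy as np

def canopy_height_model(dsm: np.ndarray, dtm: np.ndarray) -> np.ndarray:
    """Estimate vegetation canopy height by subtracting the digital
    terrain model (DTM) from the digital surface model (DSM)."""
    chm = dsm - dtm
    # Clamp small negative values caused by interpolation noise in the
    # photogrammetric models.
    return np.clip(chm, 0.0, None)

# Toy 2x2 elevation grids (meters); real inputs would be the Pix4D rasters.
dsm = np.array([[102.5, 101.0], [100.2, 103.0]])
dtm = np.array([[100.0, 100.5], [100.3, 100.0]])
chm = canopy_height_model(dsm, dtm)
print(chm)  # → [[2.5 0.5] [0.  3. ]]
```

In practice the rasters would be read from the Pix4D GeoTIFF outputs with a library such as rasterio before this subtraction.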
“…The 25‐class raster and the RGB orthomosaic were used as input for the Export Training Data for Deep Learning tool, with the metadata format set to classified tiles. The output creates a folder containing images, labels, maps, and stats of the created training data or image chips identified in the drone imagery (2.9 cm²; Page et al 2022). We applied the Train Deep Learning Model tool (Esri 2022) to the RGB orthomosaic and image chips using the following parameters: max epochs (number of times the dataset will be passed back and forth through the neural network) set at a default value of 20, model type was U‐Net pixel classification, batch size of training samples processed at a time is the default size of 2, backbone model is the default ResNet‐34 (a preconfigured model trained on more than 1 million images and 34 layers deep), and validation percentage at a default value of 10 (10 percent of the training samples will be used to validate the model; Esri 2022).…”
Section: Methods
confidence: 99%
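The training parameters listed in the excerpt can be collected into a single configuration for reproducibility. A sketch assuming a hypothetical chip count of 500; the key names are illustrative, while the values come from the quoted text:

```python
# Hyperparameters reported in the excerpt for Esri's Train Deep Learning
# Model tool (key names are illustrative; values are from the text).
training_config = {
    "model_type": "U-Net pixel classification",
    "backbone": "ResNet-34",   # 34-layer pretrained backbone
    "max_epochs": 20,          # full passes over the training chips
    "batch_size": 2,           # chips processed per training step
    "validation_percent": 10,  # share of chips held out for validation
}

# Chips held out for validation under the 10% split
# (500 is a hypothetical chip count, not from the paper):
n_chips = 500
n_val = n_chips * training_config["validation_percent"] // 100
print(n_val)  # → 50
```

The defaults quoted (batch size 2, 20 epochs) reflect the tool's out-of-the-box settings rather than values tuned for this dataset.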