2021
DOI: 10.3390/su132011450

Review of Transit Data Sources: Potentials, Challenges and Complementarity

Abstract: Public transport has become one of the major transport options, especially when it comes to reducing motorized individual transport and achieving sustainability while reducing emissions, noise and so on. The use of public transport data has evolved and rapidly improved over the past decades. Indeed, the availability of data from different sources, coupled with advances in analytical and predictive approaches, has contributed to increased attention being paid to the exploitation of available data to improve pub…

Cited by 26 publications (8 citation statements)
References 197 publications
“…We also used data from GitHub, an open-source database (McGovern 2016 ; NYTimes COVID-19 data bot and Sun 2020 ). Using data drawn from widely varying sources in order to analyze transit crowding standards is an increasingly common transportation research method (Ge et al 2021 ).…”
Section: Methodsmentioning
confidence: 99%
“…For an exception see Ang Pik Yoke et al (2021). In all these cases data availability seems a major issue; for a recent survey regarding public transport see Ge et al (2021).…”
Section: Integrated Vehicle and Crew Schedulingmentioning
confidence: 99%
“…Therefore, we have performed an automated web data extraction method: web scraping. Individual scripts were developed to scrape the warehouse information from the HTML files and assemble one dataset for each metropolitan area (Ge et al, 2021; Gharahighehi et al, 2021; Jiao et al, 2021; Luo and He, 2021; Pineda-Jaramillo and Pineda-Jaramillo, 2021; Ploessl et al, 2021). For this paper, the method comprised seven steps: (i) find the URL where the data are published; (ii) inspect the webpage to locate the data in its source code; (iii) build a prototype script, written in the R language with the rvest package, to extract and prepare the data; (iv) generalize the code with functions, loops and debugging, and run it across the US metropolitan areas; (v) store the data as an organized data frame; (vi) check and clean the gathered data; (vii) geocode the information on each warehouse.…”
Section: Unstructured Web Data Extraction -Warehouse Informationmentioning
confidence: 99%
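The seven-step scraping workflow quoted above can be sketched compactly. The citing paper used R with the rvest package; the following is a hedged stdlib-Python analogue, not the authors' actual code. The HTML snippet, the `WarehouseParser` class, and the field names (`name`, `address`) are hypothetical illustrations of steps (ii)–(vi): inspect the markup, extract the cells, assemble rows into a structured dataset, and clean it.

```python
# Minimal sketch of an HTML-table scrape using only the standard library.
# All names and markup here are illustrative, not from the cited paper.
from html.parser import HTMLParser

SAMPLE_HTML = """
<table id="warehouses">
  <tr><td class="name">Depot A</td><td class="address">1 Main St</td></tr>
  <tr><td class="name">Depot B</td><td class="address">2 Oak Ave</td></tr>
</table>
"""

class WarehouseParser(HTMLParser):
    """Collects <td> cell text keyed by its class attribute, row by row."""
    def __init__(self):
        super().__init__()
        self.rows = []        # assembled dataset (step v)
        self._row = {}        # cells of the row currently being parsed
        self._field = None    # class of the <td> currently open

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = {}
        elif tag == "td":
            self._field = dict(attrs).get("class")

    def handle_endtag(self, tag):
        if tag == "tr" and self._row:
            self.rows.append(self._row)
        elif tag == "td":
            self._field = None

    def handle_data(self, data):
        if self._field and data.strip():
            self._row[self._field] = data.strip()

def scrape(html):
    """Extract rows from the page source and drop incomplete records (step vi)."""
    parser = WarehouseParser()
    parser.feed(html)
    return [row for row in parser.rows if row.get("name")]

if __name__ == "__main__":
    for row in scrape(SAMPLE_HTML):
        print(row)
```

Fetching the page (step i) and geocoding (step vii) are omitted to keep the sketch self-contained; in practice the URL would be retrieved with an HTTP client and each cleaned record passed to a geocoding service.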