2019
DOI: 10.1109/access.2019.2905263
Smart Mobile Crowdsensing With Urban Vehicles: A Deep Reinforcement Learning Perspective

Abstract: Mobile crowdsensing (MCS) is a promising sensing paradigm based on mobile nodes that provides a cost-effective way to perform urban data collection. To monitor the urban environment and facilitate municipal administration, more and more applications adopt vehicles as participants to carry out MCS tasks. The performance of these applications depends heavily on the sensing data, which is in turn influenced by the vehicle recruiting strategy. In this paper, we propose a novel vehicle selection alg…
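The abstract above is truncated, so the paper's concrete algorithm is not reproduced here. Purely as an illustration of the problem setting it describes, the sketch below frames vehicle recruitment as a sequential decision problem in Python; the state, action, and reward definitions are hypothetical assumptions, not the paper's formulation.

```python
# Hypothetical toy environment for vehicle recruitment as a sequential
# decision problem. The state/action/reward design is an illustrative
# assumption, not the formulation of the cited paper.
import numpy as np

class VehicleSelectionEnv:
    def __init__(self, trajectories, budget):
        # trajectories[v] = set of (cell, time_slot) pairs vehicle v will visit
        self.trajectories = trajectories
        self.budget = budget               # number of vehicles we may recruit
        self.reset()

    def reset(self):
        self.covered = set()               # (cell, time_slot) pairs covered so far
        self.recruited = set()
        return self._state()

    def _state(self):
        # Binary indicator of which candidate vehicles are already recruited.
        return np.array([v in self.recruited for v in self.trajectories],
                        dtype=np.float32)

    def step(self, vehicle_id):
        # Reward = spatio-temporal cells newly covered by recruiting this vehicle.
        new_cells = self.trajectories[vehicle_id] - self.covered
        reward = len(new_cells)
        self.covered |= new_cells
        self.recruited.add(vehicle_id)
        done = len(self.recruited) >= self.budget
        return self._state(), reward, done
```

A DQN-style agent could be trained against such an environment, where the reward of each recruitment step is the marginal spatio-temporal coverage the chosen vehicle adds.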

Cited by 27 publications (6 citation statements)
References 24 publications
“…Step 2: Iterative Matching
(9)  while ∃ ϕ(s_i) = ∅ do
(10)   for s_i ∈ S do
(11)     ∀ s_i ∈ S makes a request to its most preferred r_j according to F_i.
(12)   end for
(13)   Add the relay nodes selected by more than one source node into Ω
(14)   if Ω ≠ ∅ then
(15)     for r_j ∈ Ω do
(16)       r_j raises its price as (17).…”
Section: Simulation Results
Mentioning (confidence: 99%)
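The fragment quoted above is an auction-style matching step: every unmatched source proposes to its most preferred relay, and any relay requested by more than one source raises its price until the conflict dissolves. Since only this fragment is quoted, the Python sketch below fills in the surrounding data structures (preference scores, prices, a price step) with hypothetical assumptions; it illustrates the idea rather than the citing paper's algorithm.

```python
# Illustrative sketch of the quoted auction-style matching step. Preference
# scores, the price increment, and the data structures are assumptions; only
# the pseudocode fragment above comes from the citing paper.
def iterative_matching(sources, relays, preference, price, step=0.1):
    """preference[s][r]: value of relay r to source s; price[r]: current ask."""
    match = {}                                   # phi: source -> matched relay
    while len(match) < len(sources):             # some source is still unmatched
        taken = set(match.values())
        requests = {}
        for s in sources:
            if s not in match:
                available = [r for r in relays if r not in taken]
                # Request the most preferred relay, net of its current price.
                best = max(available, key=lambda r: preference[s][r] - price[r])
                requests.setdefault(best, []).append(s)
        for r, reqs in requests.items():
            if len(reqs) > 1:
                price[r] += step                 # contested relay raises its price
            else:
                match[reqs[0]] = r               # unique request: match accepted
    return match, price
```

Raising the price of contested relays is what gradually breaks ties between competing sources; the loop terminates once every unmatched source requests a distinct relay, assuming there are at least as many relays as sources.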
“…To fully utilize the spectrum and energy resources, relay selection needs to be optimized dynamically according to the network state and service requirements. However, relay selection optimization in SPIoT still faces several critical challenges as below [10].…”
Section: Introduction
Mentioning (confidence: 99%)
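As a point of contrast for the dynamic optimization discussed in this passage, the snippet below shows the simplest state-dependent baseline one could write: greedily choosing the relay whose current two-hop channel quality is best. The SNR bookkeeping is a hypothetical assumption, and this is not the scheme of the citing paper, which additionally accounts for spectrum and energy constraints.

```python
# Minimal greedy baseline for state-dependent relay selection, assuming
# snr[(a, b)] holds the current SNR of link a -> b. Illustrative only.
def greedy_relay_selection(source, destination, relays, snr):
    # A two-hop relay path is only as good as its weaker hop.
    return max(relays, key=lambda r: min(snr[(source, r)], snr[(r, destination)]))
```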
“…However, in our case we found the epoch's Nash Equilibrium for all the vehicles working on a given task. Following a reinforcement learning approach, Wang et al. [16] model the problem of vehicle recruitment for spatio-temporal coverage as a Markov Decision Process. In the reinforcement learning approach, an agent observes the system and takes actions to maximize the long-term reward, and these actions take the agent to a new state.…”
Section: Background and Related Work
Mentioning (confidence: 99%)
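The observe-act-reward-transition cycle described in this passage can be written down generically. The tabular Q-learning loop below is such a generic illustration (states are assumed hashable, and the environment is assumed to expose Gym-like reset()/step() methods); it is not the specific deep RL algorithm of Wang et al. [16].

```python
# Generic tabular Q-learning loop illustrating the observe-act-reward-transition
# cycle described in the quoted passage. Not the algorithm of [16]; env is
# assumed to return (next_state, reward, done) from step() and states are
# assumed to be hashable.
import random
from collections import defaultdict

def q_learning(env, actions, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    Q = defaultdict(float)                       # Q[(state, action)] -> value estimate
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            # One-step temporal-difference update toward the long-term return.
            best_next = max(Q[(next_state, a)] for a in actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```

A deep RL method replaces the Q table with a neural network so that the same update generalizes across large state spaces.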
“…Wang et al. formulate the maximum spatial-temporal coverage optimization problem as a deep reinforcement learning process. Deep reinforcement learning based vehicle scheduling is adopted to produce an optimal solution and maximize the spatial-temporal coverage [33]. Wei et al. train a deterministic policy gradient algorithm on an abstracted structure to imitate the deformation of the path under external force.…”
Section: Literature Review
Mentioning (confidence: 99%)
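For concreteness, the spatial-temporal coverage that [33] maximizes can be measured as the fraction of (cell, time-slot) pairs sensed by at least one scheduled vehicle. The helper below is a hypothetical illustration of that metric and is not taken from either cited paper.

```python
# Hypothetical helper: spatial-temporal coverage achieved by a set of scheduled
# vehicles, measured as the fraction of (cell, time_slot) pairs that at least
# one vehicle senses. Not taken from either cited paper.
def spatio_temporal_coverage(schedules, cells, time_slots):
    """schedules[v] is the set of (cell, time_slot) pairs vehicle v senses."""
    covered = set().union(*schedules.values()) if schedules else set()
    target = {(c, t) for c in cells for t in time_slots}
    return len(covered & target) / len(target)
```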