2020
DOI: 10.1109/jiot.2020.2976778

A Learning-Based Credible Participant Recruitment Strategy for Mobile Crowd Sensing

Abstract: Mobile Crowd Sensing (MCS) is a key component of the Internet of Things (IoT) and has attracted much attention. In an MCS system, participants play an important role, since all the data are collected and provided by them. It is challenging but essential to recruit credible participants and motivate them to contribute high-quality data. In this paper, we propose a learning-based credible participant recruitment strategy (LC-PRS), which aims to maximize the platform's and participants' profits at the same time v…

Cited by 35 publications (8 citation statements). References 49 publications.
“…Here, we also drop the task index t for the same reason. Following our previous work [60], the quality of sensing data contributed by a participant is modeled as a discrete-time semi-Markov process.…”
Section: Task Recommendation Methods
Mentioning confidence: 99%
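The modeling idea in the statement above, a participant's data quality evolving as a discrete-time semi-Markov process, can be illustrated with a minimal simulation. Everything below (the three quality states, the transition matrix, and the geometric holding times) is an invented example for illustration, not the parameterization used in [60].

```python
import random

# Hypothetical quality states and parameters; the actual state space,
# transition probabilities, and holding-time distributions in [60] may differ.
STATES = ["low", "medium", "high"]

# Embedded-chain transition probabilities P[s][s'] (each row sums to 1).
TRANSITIONS = {
    "low":    {"low": 0.0, "medium": 0.7, "high": 0.3},
    "medium": {"low": 0.2, "medium": 0.0, "high": 0.8},
    "high":   {"low": 0.1, "medium": 0.4, "high": 0.5},
}

# Holding time (in sensing rounds) spent in a state before the next jump,
# drawn from a state-dependent geometric distribution.
HOLDING_P = {"low": 0.5, "medium": 0.3, "high": 0.2}


def simulate_quality(start="medium", rounds=20, seed=0):
    """Return one participant's per-round quality trajectory."""
    rng = random.Random(seed)
    state, trajectory = start, []
    while len(trajectory) < rounds:
        # Geometric holding time: stay in `state` for `hold` rounds.
        hold = 1
        while rng.random() > HOLDING_P[state]:
            hold += 1
        trajectory.extend([state] * hold)
        # Jump to the next state according to the embedded Markov chain.
        r, acc = rng.random(), 0.0
        for nxt, p in TRANSITIONS[state].items():
            acc += p
            if r <= acc:
                state = nxt
                break
    return trajectory[:rounds]


if __name__ == "__main__":
    print(simulate_quality())
```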
“…Following the former work [60], the reward paid for sensing acts as a signal to reflect sensing data supply and demand, which depends on the demand of the platform and the supply of participants. Here we employ the maximum offered reward to decide the maximum reward offered to the participants.…”
Section: Maximum Offered Reward Decision Mechanism
Mentioning confidence: 99%
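The mechanism described above treats the reward as a supply/demand signal bounded by a maximum offered reward. The rule below is a hypothetical sketch of that idea; the linear scaling and the base_reward/max_offered_reward parameters are assumptions made for illustration, not the mechanism from [60].

```python
def offered_reward(demand, supply, base_reward=1.0, max_offered_reward=5.0):
    """Scale the per-task reward with the demand/supply ratio, capped by
    the maximum reward the platform is willing to offer.

    demand -- number of sensing-data units the platform still needs
    supply -- number of units participants are currently willing to provide
    """
    if supply <= 0:
        # No willing participants: offer the ceiling to attract supply.
        return max_offered_reward
    reward = base_reward * (demand / supply)
    return min(max(reward, 0.0), max_offered_reward)


# Scarce supply pushes the offer toward the cap,
# abundant supply pulls it back toward zero.
print(offered_reward(demand=40, supply=10))   # 4.0
print(offered_reward(demand=10, supply=40))   # 0.25
```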
“…Discussions of specific parts of RL solution design problems occur in a smaller number of cases, but these kinds of publications demonstrate that constructing an appropriate RL application is not always trivial. We can highlight state space design [12,25,33,107,144,179,193,208,217,220,222,224,227,266,267], action space design [109,220,246,268], reward construction [14,76,110,199,220,226,246,269–273], and exploration strategy planning [86,274], which can be determinants from the whole-application point of view. [11,13,17,20,21,24,38,43,61,62,66,69,82,89,…”
Section: Complexity
Mentioning confidence: 99%
“…We can highlight state space design [12,25,33,107,144,179,193,208,217,220,222,224,227,266,267], action space design [109,220,246,268], reward construction [14,76,110,199,220,226,246,269–273], and exploration strategy planning [86,274], which can be determinants from the whole-application point of view. [11,13,17,20,21,24,38,43,61,62,66,69,82,89,93], Allocation, assignment, resource management [20,22,…”
Section: Complexity
Mentioning confidence: 99%
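The survey statement above singles out four recurring RL design choices: state space, action space, reward construction, and exploration strategy. The toy tabular Q-learning skeleton below only makes those four choices explicit in code; the states, actions, dynamics, and reward are invented for illustration and do not come from any of the cited papers.

```python
import random

# 1) State space design: a hypothetical discretised "battery level" 0..4.
STATES = range(5)
# 2) Action space design: sense now or defer to a later round.
ACTIONS = ["sense", "defer"]
# 4) Exploration strategy: epsilon-greedy with fixed epsilon.
EPSILON, ALPHA, GAMMA = 0.1, 0.5, 0.9

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
rng = random.Random(0)


def reward(state, action):
    # 3) Reward construction: value collected data, penalise an empty battery.
    return (1.0 if action == "sense" else 0.0) - (0.5 if state == 0 else 0.0)


def step(state, action):
    # Toy dynamics: sensing drains the battery, deferring recharges it.
    return max(state - 1, 0) if action == "sense" else min(state + 1, 4)


state = 4
for _ in range(1000):
    if rng.random() < EPSILON:                      # explore
        action = rng.choice(ACTIONS)
    else:                                           # exploit
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    r, nxt = reward(state, action), step(state, action)
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
    state = nxt

# Greedy policy learned for each battery level.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES})
```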