2018
DOI: 10.3390/fi10110108

Quality of Experience in Cyber-Physical Social Systems Based on Reinforcement Learning and Game Theory

Abstract: This paper addresses the problem of optimizing museum visitors' Quality of Experience (QoE) by viewing and treating the museum environment as a cyber-physical social system. To achieve this goal, we harness visitors' internal ability to intelligently sense their environment and make choices that improve their QoE, in terms of which museum touring option is best for them and how much time to spend on their visit. We model the museum setting as a distributed non-cooperative game where visitors selfishly…

Cited by 10 publications (7 citation statements)
References 20 publications
“…In [24] the focus is on maximizing the satisfaction of the proposed recommendations. User satisfaction is strongly related to the Quality of Experience (QoE) [27][28][29] when dealing with systems dedicated to museums. In [29] there is an effort to understand the visitors' behavior within CH spaces, especially targeting their optimal visiting time along with the maximization of their perceived satisfaction.…”
Section: Related Work
confidence: 99%
“…u,s is defined as the probability that end-user u selects MEC server s to offload its data. Based on the theory of stochastic learning automata [22], the rule for updating the end-users' action probabilities at the SDN controller is defined as follows.…”
Section: MEC as a Learning System
confidence: 99%
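The probability-update rule referenced above is not reproduced in the excerpt. A minimal sketch of the linear reward-inaction scheme commonly used with stochastic learning automata is shown below; the function name, learning rate, and reward normalization are illustrative assumptions, not taken from the cited paper.

```python
def update_probabilities(probs, chosen, reward, lr=0.1):
    """Linear reward-inaction update for a stochastic learning automaton.

    probs:  current action-probability vector (sums to 1), one entry per server
    chosen: index of the action (MEC server) just selected
    reward: normalized environment feedback in [0, 1]
    lr:     learning rate (step size) -- an assumed hyperparameter
    """
    updated = []
    for i, p in enumerate(probs):
        if i == chosen:
            # reinforce the chosen action in proportion to the reward received
            updated.append(p + lr * reward * (1 - p))
        else:
            # shrink the remaining probabilities so the vector still sums to 1
            updated.append(p - lr * reward * p)
    return updated
```

With a reward of 1 the chosen action's probability moves toward 1 while the others decay proportionally, so the vector remains a valid distribution after every update.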
“…The ESCAPE service was developed based on the principles of reinforcement learning and game theory, and consists of two decision-making layers. At the first layer, the evacuees, acting as stochastic learning automata [6][7][8], decide which evacuation route to join based on their past decisions during the current evacuation and on the (limited) information available from the disaster area through the ESCAPE service, e.g., the evacuation rate per route, the evacuees already on the route, and the capacity of the route. The latter information can easily be made available in a real implementation scenario through sensors deployed along the evacuation routes.…”
Section: Contributions and Outline
confidence: 99%
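In the first-layer decision described above, each evacuee-automaton samples a route according to its current action probabilities. A minimal roulette-wheel sampling sketch follows; the function name is hypothetical and the scheme is the standard way an automaton realizes a probabilistic action choice, not code from the ESCAPE paper.

```python
import random

def choose_route(route_probs):
    """Sample an evacuation-route index according to the automaton's
    current action-probability vector (roulette-wheel selection)."""
    r = random.random()          # uniform draw in [0, 1)
    cumulative = 0.0
    for i, p in enumerate(route_probs):
        cumulative += p
        if r < cumulative:       # r falls inside this route's probability mass
            return i
    # guard against floating-point round-off in the cumulative sum
    return len(route_probs) - 1
```

Repeating this choice and then applying a reward-based probability update closes the learning loop: routes that yield good evacuation feedback are selected more often in subsequent rounds.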