2020 3rd International Conference on Machine Learning and Machine Intelligence
DOI: 10.1145/3426826.3426838
On Finding the Best Learning Model for Assessing Confidence in Speech

Cited by 8 publications (7 citation statements)
References 3 publications
“…AI enhances its performance by optimizing a given objective function, which serves as a criterion for effective operation. This function acts as a performance metric, representing how well AI is functioning (Nair et al 2020). AI learns to maximize or minimize the objective function based on information acquired from learning data.…”
Section: Process of AI Goal Setting and Subgoal Derivation
confidence: 99%
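To make the quoted description concrete, here is a minimal hedged sketch of the pattern: a scalar objective function acts as the performance metric, and the learner improves by stepping against its gradient on learning data. The linear model, toy data, and step size are illustrative assumptions, not details from Nair et al (2020).

```python
# Minimal sketch (illustrative only): a learner improves its parameter by
# minimizing an objective function -- the "performance metric" in the quote.
import numpy as np

def objective(theta, x, y):
    # Mean squared error of a linear predictor; lower means better performance.
    return np.mean((theta * x - y) ** 2)

def gradient(theta, x, y):
    # d/dtheta of the mean squared error above.
    return np.mean(2 * x * (theta * x - y))

# Toy learning data generated by a true parameter of 3.0 plus noise.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + 0.1 * rng.normal(size=100)

theta = 0.0
for _ in range(200):
    theta -= 0.1 * gradient(theta, x, y)  # step against the gradient

print(f"learned theta = {theta:.3f}, objective = {objective(theta, x, y):.4f}")
```

Maximizing an objective is the same loop with the sign flipped; which direction counts as "better" is purely a convention of the chosen function.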
“…In the computational reinforcement-learning literature, this reality has called into question longstanding approaches to model-based reinforcement learning (Littman, 2015; Sutton, 1991; Sutton & Barto, 1998) which use standard maximum-likelihood estimation techniques that endeavor to learn the exact model (𝒰, 𝒯) that governs the underlying MDP. The end result has been a flurry of recent work (Abachi et al, 2020; Asadi et al, 2018; Ayoub et al, 2020; Cui et al, 2020; D’Oro et al, 2020; Farahmand, 2018; Farahmand et al, 2017; Grimm et al, 2020, 2021, 2022; Nair et al, 2020; Nikishin et al, 2022; Oh et al, 2017; Schrittwieser et al, 2020; Silver et al, 2017; Voelcker et al, 2022) which eschews the traditional maximum-likelihood objective in favor of various surrogate objectives which restrict the focus of the agent’s modeling towards specific aspects of the environment. As the core goal of endowing a decision-making agent with its own internal model of the world is to facilitate model-based planning (Bertsekas, 1995), central among these recent approaches is the value-equivalence principle (Grimm et al, 2020, 2021, 2022) which provides mathematical clarity on how surrogate models can still enable lossless planning relative to the true model of the environment.…”
Section: Problem Formulation
confidence: 99%
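To ground the distinction this quote draws, the following is a hedged toy sketch (my own construction, not code from any of the cited works): a tabular, single-action MDP in which a maximum-likelihood transition model is compared against a model fit only to reproduce Bellman backups for a small basis of value functions, in the spirit of the value-equivalence principle. The reward function is assumed known, and all names and sizes are invented.

```python
# Toy illustration: maximum-likelihood model vs. a value-equivalence-style fit
# that only has to match Bellman backups for two chosen value functions.
import numpy as np

rng = np.random.default_rng(1)
S, gamma = 4, 0.9                       # states; discount (one action, for brevity)

P = rng.dirichlet(np.ones(S), size=S)   # true transition matrix (rows sum to 1)
r = rng.normal(size=S)                  # reward function, assumed known below

# Sample transitions from the true MDP for the maximum-likelihood fit.
counts = np.zeros((S, S))
for _ in range(300):
    s = rng.integers(S)
    counts[s, rng.choice(S, p=P[s])] += 1
P_mle = counts / counts.sum(axis=1, keepdims=True)

# Value-equivalence-style fit: choose P_ve so that r + gamma * P_ve @ V matches
# the true backup r + gamma * P @ V, but only for the 2-function basis V_basis.
V_basis = rng.normal(size=(S, 2))
targets = r[:, None] + gamma * (P @ V_basis)
P_ve = np.zeros((S, S))
for s in range(S):
    # Underdetermined system: any row that reproduces both backups will do.
    p, *_ = np.linalg.lstsq(gamma * V_basis.T, targets[s] - r[s], rcond=None)
    P_ve[s] = p   # note: not constrained to be a proper probability distribution

for name, M in (("max-likelihood", P_mle), ("value-equivalent", P_ve)):
    backup_err = np.abs(r[:, None] + gamma * M @ V_basis - targets).max()
    model_err = np.abs(M - P).max()
    print(f"{name}: backup error {backup_err:.2e}, model error {model_err:.2f}")
```

The value-equivalent model reproduces the chosen backups essentially exactly while its rows can differ substantially from the true transition probabilities, which is the sense in which such surrogate objectives restrict modeling to "specific aspects of the environment."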
“…Lynch et al [8] and Nair et al [9] tackle the challenge of achieving a manipulation task composed of several subtasks. Lynch et al [8] uses the pairs of current and goal images to estimate the entire sequence of actions to achieve the goal, by encoding the task in a latent space.…”
Section: A. Related Work
confidence: 99%
“…Lynch et al [8] uses the pairs of current and goal images to estimate the entire sequence of actions to achieve the goal, by encoding the task in a latent space. Nair et al [9] also uses difference between a current image and a goal image to infer the latent space, but to select the following action. Florensa et al [10] used Reinforcement Learning to learn all feasible goal paths in the environment for locomotion.…”
Section: A. Related Work
confidence: 99%
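As a concrete, hedged reading of the pattern these quotes describe (my own sketch; the architecture, latent dimension, and discrete action space are invented, not taken from [8] or [9]): encode the current and goal images into a shared latent space, then condition a small policy head on both latents to select the next action.

```python
# Sketch of a goal-conditioned latent policy: embed current and goal images,
# then pick the next action from the pair of latents.
import torch
import torch.nn as nn

class ImageEncoder(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, latent_dim),
        )

    def forward(self, img):
        return self.net(img)

class GoalConditionedPolicy(nn.Module):
    """Maps (current latent, goal latent) to logits over the next action."""
    def __init__(self, latent_dim=32, n_actions=6):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(2 * latent_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, z_cur, z_goal):
        # The concatenated latents carry the current-vs-goal "difference"
        # information the quote refers to.
        return self.head(torch.cat([z_cur, z_goal], dim=-1))

encoder = ImageEncoder()
policy = GoalConditionedPolicy()
cur = torch.randn(1, 3, 64, 64)    # dummy current observation
goal = torch.randn(1, 3, 64, 64)   # dummy goal image
logits = policy(encoder(cur), encoder(goal))
action = logits.argmax(dim=-1)     # greedy choice of the following action
```

Selecting one action at a time from the two latents follows the description attributed to [9]; the alternative attributed to [8] would instead decode an entire action sequence from a task latent.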