2018 IEEE International Conference on Big Data (Big Data)
DOI: 10.1109/bigdata.2018.8622598

When Machine Learning Meets Blockchain: A Decentralized, Privacy-preserving and Secure Design

Cited by 167 publications (121 citation statements)
References 17 publications
“…The idea of learning to learn was inspired by psychology studies [Ward, 1937], in which researchers found that human beings are innately able to learn new skills by drawing on previous learning experiences. The idea of learning to learn has been widely explored in various machine learning systems, such as meta-learning, life-long learning and continuous learning, as well as in many optimization problems, including learning to optimize hyperparameters [Maclaurin et al., 2015], learning to optimize neural networks [Ge et al., 2017] and learning to optimize loss functions [Houthooft et al., 2018].…”
Section: Learning To Learn
confidence: 99%
“…We notice that this procedure maps naturally onto a recurrent neural network (RNN), which learns to use its internal memory to store information about previous processes and function evaluations, and learns to access this memory to produce the current output. To address vanishing or exploding gradients when dealing with long sequential data, in this paper we apply two popular and efficient variants of the standard RNN: the RNN with Long Short-Term Memory units (LSTM) [Hochreiter and Schmidhuber, 1997] and the RNN with Gated Recurrent Units (GRU) [Chung et al., 2014].…”
Section: Recurrent Neural Network Based Aggregator
confidence: 99%
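The excerpt above describes the recurrent aggregator only at a high level. The snippet below is a minimal sketch, not the cited paper's implementation: it shows, in PyTorch, how an LSTM- or GRU-based aggregator can carry an internal hidden state across a sequence and read out an aggregated result at the final step. The class name, dimensions, and final-step readout are illustrative assumptions.

```python
# Hedged sketch (assumed names and dimensions, not the paper's code):
# an RNN-based aggregator whose hidden state serves as the "internal memory"
# that accumulates information about previous steps and function evaluations.
import torch
import torch.nn as nn


class RecurrentAggregator(nn.Module):
    def __init__(self, input_dim: int, hidden_dim: int, output_dim: int, cell: str = "lstm"):
        super().__init__()
        # Choose between the two gated variants mentioned in the excerpt;
        # gating is what mitigates vanishing/exploding gradients on long sequences.
        rnn_cls = nn.LSTM if cell == "lstm" else nn.GRU
        self.rnn = rnn_cls(input_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, output_dim)

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (batch, steps, input_dim). The RNN updates its hidden state at
        # every step, then the last step's representation is used as the output.
        out, _ = self.rnn(seq)
        return self.head(out[:, -1, :])


if __name__ == "__main__":
    agg = RecurrentAggregator(input_dim=8, hidden_dim=32, output_dim=4, cell="gru")
    updates = torch.randn(2, 10, 8)   # toy batch: 2 sequences of 10 steps each
    print(agg(updates).shape)         # torch.Size([2, 4])
```

Reading out only the final hidden state is one simple design choice; a real aggregator could equally pool over all steps or emit an output at every step, depending on how the surrounding optimization loop consumes it.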