2019 34th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS)
DOI: 10.1109/lics.2019.8785665
Backprop as Functor: A compositional perspective on supervised learning

Abstract: A supervised learning algorithm searches over a set of functions A → B, parametrised by a space P, to find the best approximation to some ideal function f : A → B. It does this by taking examples (a, f(a)) ∈ A × B and updating the parameter according to some rule. We define a category where these update rules may be composed, and show that gradient descent (with respect to a fixed step size and an error function satisfying a certain property) defines a monoidal functor from a category of parametrised functions t…
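The compositional structure sketched in the abstract can be made concrete. Below is a minimal sketch in Python, assuming the usual presentation of the construction: a learner A → B is a tuple of implement/update/request maps, learners compose sequentially, and gradient descent with quadratic error yields a learner whose request is gradient descent on the input. All names here (Learner, compose, gd_linear_learner) are illustrative, not from the paper.

```python
# A minimal sketch, assuming the usual presentation of the paper's construction:
# a learner A -> B is a tuple (P, implement, update, request). All names here
# (Learner, compose, gd_linear_learner) are illustrative, not from the paper.
import random
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Learner:
    implement: Callable[[Any, Any], Any]        # I : P x A -> B
    update: Callable[[Any, Any, Any], Any]      # U : P x A x B -> P
    request: Callable[[Any, Any, Any], Any]     # r : P x A x B -> A
    param: Any                                  # current point of P

def compose(g: Learner, f: Learner) -> Learner:
    """Sequential composite of f : A -> B and g : B -> C, a learner A -> C
    whose parameter space is the product of the two parameter spaces."""
    def implement(pq, a):
        p, q = pq
        return g.implement(q, f.implement(p, a))
    def update(pq, a, c):
        p, q = pq
        b = f.implement(p, a)
        # g trains on (b, c); f trains on (a, g's request) -- i.e. backprop.
        return (f.update(p, a, g.request(q, b, c)), g.update(q, b, c))
    def request(pq, a, c):
        p, q = pq
        b = f.implement(p, a)
        return f.request(p, a, g.request(q, b, c))
    return Learner(implement, update, request, (f.param, g.param))

def gd_linear_learner(w0: float, eps: float = 0.1) -> Learner:
    """Gradient descent for the 1-d linear map I(w, a) = w * a with quadratic
    error E = (w * a - b)^2 / 2; for this error the request is a - dE/da."""
    return Learner(
        implement=lambda w, a: w * a,
        update=lambda w, a, b: w - eps * (w * a - b) * a,   # w - eps * dE/dw
        request=lambda w, a, b: a - (w * a - b) * w,        # a - dE/da
        param=w0,
    )

# Train the composite of two linear learners on examples of f(a) = 6 * a.
model = compose(gd_linear_learner(1.0), gd_linear_learner(1.5))
for _ in range(500):
    a = random.uniform(-1.0, 1.0)
    model.param = model.update(model.param, a, 6.0 * a)
print(model.implement(model.param, 2.0))   # should be close to 12.0
```

Note how the composite's update rule threads the downstream learner's request back into the upstream learner; this is the compositional reading of backpropagation that the paper's title refers to.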

Cited by 62 publications (135 citation statements). References 17 publications.
“…To manage the extended search possibilities, it makes sense to parameterize the space of transformations as a family of mappings get_p : A → B indexed over some parameter space p ∈ P. For example, we may define the IT-departments view to be parameterized by the experience of employees shown in the view (including any experience as a special parameter value). Then we have two interrelated propagation operations that map an update B → B′ to a parameter update p → p′ and a source update A → A′ (called, resp., an update and a request in [1]). Thus, the extended search space allows for new update policies that look for updating the parameter as an update propagation possibility.…”
Section: Bx Needs Learning Capabilities (mentioning)
confidence: 99%
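To make the parameterized-view idea in the quotation concrete, here is a small hypothetical sketch of a view get_p indexed by a parameter, with one propagation policy that updates the parameter and one that updates the source. The data model, the experience-threshold parameter, and the names get, put_upd, put_req are all illustrative, not taken from [1].

```python
# Hypothetical sketch of a parameterized view get_p : A -> B with two update
# propagation policies: put_upd changes the parameter, put_req changes the
# source. Data model and names are illustrative, not from [1].

# Source model: IT-department employees with their years of experience.
source = {"Ann": 7, "Bob": 3, "Cal": 5}

def get(p, src):
    """View: the employees with at least p years of experience."""
    return {name for name, exp in src.items() if exp >= p}

def put_upd(p, src, wanted):
    """Parameter policy: pick a new threshold p' so that the unchanged source
    yields (at least) the requested view."""
    eligible = [exp for name, exp in src.items() if name in wanted]
    return min(eligible) if eligible else p

def put_req(p, src, wanted):
    """Source policy: adjust experience values in the source so the requested
    employees pass the unchanged threshold p."""
    new_src = dict(src)
    for name in wanted:
        new_src[name] = max(new_src.get(name, 0), p)
    return new_src

p = 5
print(get(p, source))                           # {'Ann', 'Cal'}
wanted = {"Ann", "Bob", "Cal"}                  # desired new view
print(get(put_upd(p, source, wanted), source))  # propagate to the parameter
print(get(p, put_req(p, source, wanted)))       # propagate to the source
```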
“…As model spaces are themselves categories, the entire search space is a product of categories, P×A (or even P×A×B if we consider amendments), and thus codiscreteness may "affect" only P or only A or both. For example, for learners described in [1], spaces A, B, P are sets, get is a parameterized function, and put consists of two families of discrete operations described in [1]: put_upd updates the parameter and put_req updates the source value, thus making a request to the previous layer. In general, a learning lens from a model space (category) A to a model space B is a pair of operations (get, put): A →_P B, where get: P×A → B is a functor considered (via currying) as a family of functors get: P → B^A, and put is a family of operations providing some sort of inverse map for the functor get.…”
Section: Categorical Vs Codiscrete Learning (mentioning)
confidence: 99%
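The following sketch assumes the learning-lens shape described in the quotation above: get : P×A → B, curried into a family get_p : A → B, with put split into put_upd (parameter update) and put_req (request to the previous layer). The class name and the concrete linear instance are illustrative; only the shape of the operations follows the quotation.

```python
# A minimal sketch of the learning-lens signature: get : P x A -> B (by currying,
# a family get_p : A -> B), with put split into put_upd and put_req.
# LearningLens and the linear instance are illustrative names, not from [1].
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar

P = TypeVar("P")
A = TypeVar("A")
B = TypeVar("B")

@dataclass
class LearningLens(Generic[P, A, B]):
    get: Callable[[P, A], B]          # get : P x A -> B
    put_upd: Callable[[P, A, B], P]   # parameter update : P x A x B -> P
    put_req: Callable[[P, A, B], A]   # request to previous layer : P x A x B -> A

    def curried_get(self, p: P) -> Callable[[A], B]:
        """The currying step: fix the parameter to obtain get_p : A -> B."""
        return lambda a: self.get(p, a)

# The gradient-descent learner sketched under the abstract fits this signature.
eps = 0.1
linear = LearningLens(
    get=lambda w, a: w * a,
    put_upd=lambda w, a, b: w - eps * (w * a - b) * a,  # parameter update
    put_req=lambda w, a, b: a - (w * a - b) * w,        # request to prev layer
)
g2 = linear.curried_get(2.0)
print(g2(3.0), linear.put_upd(2.0, 3.0, 9.0), linear.put_req(2.0, 3.0, 9.0))
```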