2024
DOI: 10.1016/j.comnet.2024.110188

Practicality of in-kernel/user-space packet processing empowered by lightweight neural network and decision tree

Takanori Hara, Masahiro Sasabe

Cited by 3 publications (2 citation statements)
References 19 publications
“…9b that the maximum end-to-end delay worsens as N increases. More specifically, the maximum end-to-end delay almost exhibits a constant value in a specific range of N (e.g., N = [5,9] and N = [10,16]).…”
Section: Maximum End-to-end Delay (mentioning)
confidence: 99%
“…In-network learning (INL), a form of distributed ML, performs both learning and inference over a network by dividing an ML model into small ML models and deploying them across the network [4][5][6]. INL is expected to be compatible with programmable network devices that have computational and hardware resource constraints, since it deploys small ML models on physical nodes in the network [7][8][9].…”
Section: Introduction (mentioning)
confidence: 99%
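The excerpt above describes INL as splitting an ML model into small sub-models deployed on nodes along a network path, with inference proceeding hop by hop. As a rough sketch of that idea only (not the cited papers' implementation), the Python example below splits a tiny MLP layer-wise across a three-node path; the NodeModel class, the split_inference function, and all dimensions are hypothetical.

```python
# Illustrative sketch of layer-wise model splitting for in-network inference.
# Assumption: each network node hosts one small layer and forwards its
# activation to the next node on the path (names/dimensions are hypothetical).
import numpy as np

rng = np.random.default_rng(0)

class NodeModel:
    """A lightweight sub-model deployed on a single network node."""
    def __init__(self, in_dim, out_dim):
        self.W = rng.standard_normal((in_dim, out_dim)) * 0.1
        self.b = np.zeros(out_dim)

    def forward(self, x):
        # ReLU layer; in a real deployment the output would be sent
        # over a link to the next node instead of returned locally.
        return np.maximum(0.0, x @ self.W + self.b)

def split_inference(path, features):
    """Run inference by passing intermediate activations along the path."""
    h = features
    for node in path:
        h = node.forward(h)
    return h

# Example: a three-node path, each node holding one small layer.
path = [NodeModel(8, 16), NodeModel(16, 8), NodeModel(8, 2)]
packet_features = rng.standard_normal(8)       # per-packet feature vector
scores = split_inference(path, packet_features)
print(scores)                                  # output of the last node
```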