2010
DOI: 10.1007/s00521-010-0442-0
A neural network proxy cache replacement strategy and its implementation in the Squid proxy server

Abstract: As the Internet has become a more central aspect of information technology, so have concerns with supplying enough bandwidth and serving web requests to end users in an appropriate time frame. Web caching was introduced in the 1990s to help decrease network traffic, lessen user-perceived lag, and reduce loads on origin servers by storing copies of web objects on servers closer to end users rather than forwarding all requests to the origin servers. Since web caches have limited space, web caches must effecti…
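To make the replacement problem described in the abstract concrete, here is a minimal sketch (not from the paper and not Squid code; the class and method names are invented for illustration) of a fixed-capacity cache that evicts the least recently used object when a new object does not fit, a common baseline that learning-based strategies such as the one in this paper aim to improve on.

```python
from collections import OrderedDict

class LRUProxyCache:
    """Minimal illustrative proxy cache with least-recently-used replacement."""

    def __init__(self, capacity_bytes):
        self.capacity_bytes = capacity_bytes
        self.used_bytes = 0
        self.objects = OrderedDict()  # url -> (body, size in bytes)

    def get(self, url):
        if url not in self.objects:
            return None  # miss: a real proxy would now forward the request upstream
        self.objects.move_to_end(url)  # mark as most recently used
        return self.objects[url][0]

    def put(self, url, body):
        size = len(body)
        if size > self.capacity_bytes:
            return  # object larger than the whole cache; do not store it
        if url in self.objects:  # replacing an existing copy frees its space first
            self.used_bytes -= self.objects.pop(url)[1]
        # Replacement policy: evict least recently used objects until the new one fits.
        while self.used_bytes + size > self.capacity_bytes:
            _, (_, evicted_size) = self.objects.popitem(last=False)
            self.used_bytes -= evicted_size
        self.objects[url] = (body, size)
        self.used_bytes += size
```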

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
2
1
1
1

Citation Types

0
24
0

Year Published

2012
2012
2023
2023

Publication Types

Select...
6
4

Relationship

0
10

Authors

Journals

Cited by 48 publications (24 citation statements). References 31 publications.
“…A survey on applications of neural networks and evolutionary techniques in web caching can be found in [27]. References [28][29][30][31][32] propose the use of a backpropagation neural network in a Web proxy cache for making replacement decisions. A predictor that learns the access patterns of Web pages and predicts future accesses is presented in [33].…”
Section: Related Work (mentioning)
confidence: 99%
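As a rough illustration of the idea cited above, the sketch below trains a small feed-forward network by plain backpropagation to score cached objects, so that low-scoring objects are evicted first. The feature set (recency, frequency, log-size), the labels, and all names are assumptions made for illustration; they are not taken from the quoted papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-object features: [recency, frequency, log-size], scaled to [0, 1].
X = rng.random((256, 3))
# Hypothetical labels: 1 if the object was re-requested within some window, else 0.
y = (X[:, 1] > 0.5).astype(float).reshape(-1, 1)

# One hidden layer of sigmoid units, trained by backpropagation on squared error.
W1, b1 = rng.normal(0.0, 0.5, (3, 8)), np.zeros(8)
W2, b2 = rng.normal(0.0, 0.5, (8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    h = sigmoid(X @ W1 + b1)               # forward pass, hidden layer
    out = sigmoid(h @ W2 + b2)             # forward pass, output layer
    d_out = (out - y) * out * (1.0 - out)  # error gradient at the output
    d_h = (d_out @ W2.T) * h * (1.0 - h)   # gradient backpropagated to the hidden layer
    W2 -= 0.5 * (h.T @ d_out) / len(X)
    b2 -= 0.5 * d_out.mean(axis=0)
    W1 -= 0.5 * (X.T @ d_h) / len(X)
    b1 -= 0.5 * d_h.mean(axis=0)

def replacement_score(features):
    """Higher score = more likely to be re-requested; evict low-scoring objects first."""
    return sigmoid(sigmoid(features @ W1 + b1) @ W2 + b2)
```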
“…NNPCR [54] and its extension, NNPCR-2 [55], apply artificial neural networks to rate object cacheability. The networks were trained, in a supervised fashion, to learn cacheable patterns from valid HTTP status codes and object sizes.…”
Section: Object Cacheability (mentioning)
confidence: 99%
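The quoted passage says the NNPCR networks learn cacheability from HTTP status codes and object sizes. The sketch below shows one plausible way such raw metadata could be encoded into network inputs and combined with a score threshold to decide whether to admit an object; the status-code list, the scaling, and the function names are assumptions for illustration, not NNPCR's actual preprocessing.

```python
import math

# Status codes often treated as cacheable; this set is an assumption for illustration,
# not the rule set used by NNPCR or Squid.
CACHEABLE_STATUS = {200, 203, 300, 301, 410}

def encode_object(status_code, size_bytes, max_log_size=25.0):
    """Map raw HTTP metadata into [0, 1] inputs a small rating network could consume."""
    status_ok = 1.0 if status_code in CACHEABLE_STATUS else 0.0
    log_size = min(math.log2(size_bytes + 1) / max_log_size, 1.0)
    return [status_ok, log_size]

def should_cache(status_code, size_bytes, rate, threshold=0.5):
    """`rate` stands in for a trained rating network returning a score in [0, 1]."""
    features = encode_object(status_code, size_bytes)
    return features[0] == 1.0 and rate(features) >= threshold
```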
“…The Very Fast Decision Tree (VFDT) learning method is integrated with Greedy Dual Size Frequency [3,7,8] to improve the byte hit ratio. In this method, the VFDT takes as inputs the recency and frequency of the web object, computed with a sliding-window mechanism, as well as its retrieval time and size, and the classifier produces a target output indicating whether the web object will be visited again in the future (see Equation 5).…”
Section: Very Fast Decision Tree - Greedy Dual Size Frequency (VFDT-GDSF) (mentioning)
confidence: 99%
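For context, the standard Greedy Dual Size Frequency policy assigns each object a priority of the form L + frequency x cost / size, where L is an aging (inflation) term set to the priority of the last evicted object. The sketch below combines that priority with a classifier's revisit prediction; the multiplicative boost is an assumed way of integrating the two and is not Equation 5 of the cited VFDT-GDSF method.

```python
import heapq

class GDSFQueue:
    """GDSF-style eviction queue, optionally boosted by a classifier prediction."""

    def __init__(self, boost=2.0):
        self.clock = 0.0    # inflation term L in the GDSF priority
        self.boost = boost  # assumed multiplier applied when a revisit is predicted
        self.heap = []      # (priority, url), lowest priority evicted first

    def priority(self, frequency, cost, size_bytes, predicted_revisit):
        base = self.clock + frequency * cost / max(size_bytes, 1)
        return base * self.boost if predicted_revisit else base

    def admit(self, url, frequency, cost, size_bytes, predicted_revisit):
        heapq.heappush(
            self.heap,
            (self.priority(frequency, cost, size_bytes, predicted_revisit), url),
        )

    def evict(self):
        priority, url = heapq.heappop(self.heap)
        self.clock = priority  # age remaining objects relative to the evicted one
        return url
```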