
Decoupling NDN caches via CCndnS: Design, analysis, and application

Authors :
Y. C. Tay
Mostafa Rezazad
Source :
Computer Communications. 151:338-354
Publication Year :
2020
Publisher :
Elsevier BV, 2020.

Abstract

In-network caching is considered a vital part of the Internet for future applications (e.g., the Internet of Things). One proposal that has attracted interest in recent years, Named Data Networking (NDN), aims to facilitate in-network caching by locating content by name. However, the efficiency of in-network caching has been questioned by experts. Data correlation among caches builds strong dependencies between caches at the edge and in the core, and that dependency makes analyzing network performance difficult. This paper proposes CCndnS (Content Caching strategy for NDN with Skip), a caching policy that breaks the dependencies among caches and thus facilitates the design of an efficient data placement algorithm. Specifically, each cache, regardless of its location in the network, should receive an independent set of requests; otherwise, only misses from downstream caches reach the upstream caches, a filtering effect that induces correlation among the caches. CCndnS breaks a file into smaller segments and spreads them along the path between requester and publisher, so that the head of the file (the first segment) is cached at the edge router close to the requester and the tail is cached farther away, towards the content provider. A request for a segment skips the other caches on its path and searches only the cache that holds the segment of interest. This reduces the number of futile cache checks, and thus the delay from memory accesses. It also decouples the caches, which yields a simple analytical model for cache performance in the network. We illustrate an application of the model: enforcing a Service Level Agreement (SLA) between a content provider and the caching system proposed in this paper. The model can be used for cache provisioning in two ways: (1) to specify the cache size to be reserved for specific content to reach some desired performance; for instance, if the client of an SLA requires a 50% cache hit rate for its content at each router, the model can determine the cache size that must be reserved to reach that hit rate; and (2) to calculate the effect of such reservations on other content that uses the routers covered by the SLA. The design, analysis, and application are tested with extensive simulations.
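To make the placement and skip mechanism concrete, the following is a minimal Python sketch of how segments of a file might be mapped to hops on the requester-to-publisher path and then looked up at a single designated cache. The function names, the linear head-to-tail mapping, and the dictionary content store are illustrative assumptions for this sketch, not the paper's actual algorithm or API.

```python
# Hypothetical sketch of CCndnS-style segment placement and skip lookup.
# All names and the linear mapping below are assumptions for illustration;
# the paper's own placement rule and data structures may differ.

def assign_segment_to_hop(segment_index: int, num_segments: int, path_length: int) -> int:
    """Map a file segment to a hop on the requester-to-publisher path.

    Segment 0 (the head of the file) maps to hop 0, the edge router next to
    the requester; later segments map to routers progressively closer to the
    content provider (hop path_length - 1).
    """
    if num_segments <= 1 or path_length <= 1:
        return 0
    position = segment_index / (num_segments - 1)   # 0.0 (head) .. 1.0 (tail)
    return round(position * (path_length - 1))


def lookup(segment_index: int, num_segments: int, path, content_store):
    """Skip intermediate caches and query only the hop assigned to the segment."""
    hop = assign_segment_to_hop(segment_index, num_segments, len(path))
    router = path[hop]
    key = (router, segment_index)
    if key in content_store:        # hit at the designated cache
        return content_store[key]
    return None                     # miss: forward the request toward the publisher


if __name__ == "__main__":
    path = ["edge", "r1", "r2", "core"]             # requester-side edge .. publisher-side core
    store = {("edge", 0): b"head", ("core", 3): b"tail"}
    print(assign_segment_to_hop(0, 4, len(path)))   # -> 0 (head cached at the edge)
    print(assign_segment_to_hop(3, 4, len(path)))   # -> 3 (tail cached near the publisher)
    print(lookup(0, 4, path, store))                # -> b"head"
    print(lookup(1, 4, path, store))                # -> None (miss, forwarded upstream)
```

Under this sketch, each router sees requests only for the segments assigned to it rather than the filtered misses of downstream caches, which is the decoupling property the abstract describes as enabling a simple per-cache analytical model.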

Details

ISSN :
0140-3664
Volume :
151
Database :
OpenAIRE
Journal :
Computer Communications
Accession number :
edsair.doi...........9bad900b7ce66a3c776cfddda63700a8
Full Text :
https://doi.org/10.1016/j.comcom.2019.12.053