1. Optimizing L1 cache for embedded systems through grammatical evolution
- Author
- J. Manuel Colmenar, Josefa Díaz Álvarez, José L. Risco-Martín, Oscar Garnica, and Juan Lanchares
- Subjects
- Computer Science - Artificial Intelligence (cs.AI), Computer Science - Neural and Evolutionary Computing (cs.NE), CPU cache, cache algorithms, cache invalidation, cache pollution, cache-oblivious algorithms, bus sniffing, embedded systems, parallel computing
- Abstract
Nowadays, embedded systems are provided with cache memories that are large enough to influence both performance and energy consumption as never before in this kind of system. In addition, the cache memory system has been identified as a component that can improve both metrics by adapting its configuration to the memory access patterns of the applications being run. However, because cache memories have many parameters, each of which may take many different values, designers face a vast and time-consuming exploration space. In this paper, we propose an optimization framework based on Grammatical Evolution (GE) that efficiently finds the best cache configurations for a given set of benchmark applications. This metaheuristic substantially reduces the optimization runtime, obtaining good results in a small number of generations. The reduction is further increased by efficiently storing the results of previously evaluated cache configurations. Moreover, we selected GE because the plasticity of the grammar eases the creation of phenotypes that form the call to the cache simulator required to evaluate the different configurations. Experimental results for the Mediabench suite show that our proposal finds cache configurations that achieve an average improvement of 62% over a real-world baseline configuration.
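The abstract notes that GE's grammar eases building phenotypes that form the call to the cache simulator. The sketch below illustrates the standard GE genotype-to-phenotype mapping with a toy grammar; the parameter names, value choices, and simulator flags are assumptions for illustration, not the authors' actual grammar or simulator interface.

```python
# Toy BNF-style grammar: each non-terminal maps to a list of productions.
# The flags (--isize, --iassoc, ...) and value sets are illustrative only.
GRAMMAR = {
    "<config>": [["--isize=", "<size>", " --iassoc=", "<assoc>",
                  " --dsize=", "<size>", " --dassoc=", "<assoc>",
                  " --policy=", "<policy>"]],
    "<size>":   [["1024"], ["2048"], ["4096"], ["8192"], ["16384"]],
    "<assoc>":  [["1"], ["2"], ["4"], ["8"]],
    "<policy>": [["lru"], ["fifo"], ["random"]],
}

def map_genotype(genotype, start="<config>", max_expansions=1000):
    """Map a list of integer codons to a phenotype string (here, the
    command-line arguments for a cache simulator).

    Each time a non-terminal is expanded, the next codon modulo the
    number of productions selects the rule; the codon index wraps
    around the genotype, as in standard GE."""
    output = []
    stack = [start]   # symbols pending expansion, front = next
    i = 0             # next codon to consume (wraps around)
    expansions = 0
    while stack:
        sym = stack.pop(0)
        if sym in GRAMMAR:
            expansions += 1
            if expansions > max_expansions:
                raise RuntimeError("mapping did not terminate")
            choices = GRAMMAR[sym]
            codon = genotype[i % len(genotype)]
            i += 1
            production = choices[codon % len(choices)]
            stack = list(production) + stack
        else:
            output.append(sym)  # terminal symbol: emit verbatim
    return "".join(output)

# A 6-codon genotype fully determines one cache configuration:
print(map_genotype([3, 1, 0, 2, 4, 2]))
# → --isize=2048 --iassoc=1 --dsize=4096 --dassoc=1 --policy=random
```

Because the mapping is deterministic, the resulting command-line string can serve as a key into a store of already-evaluated configurations, which is how the runtime reduction from reusing previous simulations could be realized.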
- Published
- 2023