370 results for "memory allocation"
Search Results
2. Neural ensembles: role of intrinsic excitability and its plasticity.
- Author
-
Hansel, Christian and Yuste, Rafael
- Subjects
PURKINJE cells, SENSORY perception, LONG-term potentiation, NEUROPLASTICITY, MOTOR ability - Abstract
Synaptic connectivity defines groups of neurons that engage in correlated activity during specific functional tasks. These co-active groups of neurons form ensembles, the operational units involved in, for example, sensory perception, motor coordination and memory (then called an engram). Traditionally, ensemble formation has been thought to occur via strengthening of synaptic connections through long-term potentiation (LTP) as a plasticity mechanism. This synaptic theory of memory arises from the learning rules formulated by Hebb and is consistent with many experimental observations. Here, we propose, as an alternative, that the intrinsic excitability of neurons and its plasticity constitute a second, non-synaptic mechanism that could be important for the initial formation of ensembles. Indeed, enhanced neural excitability is widely observed in multiple brain areas subsequent to behavioral learning. In cortical structures and the amygdala, excitability changes are often reported as transient, even though they can last tens of minutes to a few days. Perhaps for this reason, they have traditionally been considered modulatory, merely supporting ensemble formation by facilitating LTP induction, without further involvement in memory function (memory allocation hypothesis). We suggest here, based on two lines of evidence, that beyond modulating LTP allocation, enhanced excitability plays a more fundamental role in learning. First, enhanced excitability constitutes a signature of active ensembles and, because of it, subthreshold synaptic connections become suprathreshold in the absence of synaptic plasticity (iceberg model). Second, enhanced excitability promotes the propagation of dendritic potentials toward the soma and allows for enhanced coupling of EPSP amplitude (LTP) to the spike output (and thus ensemble participation).
This permissive gate model implies a need for permanently increased excitability, which seems at odds with its traditional characterization as a short-lived mechanism. We propose that longer-lasting modifications in excitability are made possible by a low threshold for intrinsic plasticity induction, suggesting that excitability might be modulated on and off at short intervals. Consistent with this, in cerebellar Purkinje cells, enhanced excitability lasts days to weeks, which shows that in some circuits the duration of the phenomenon is not a limiting factor in the first place. In our model, synaptic plasticity defines the information content received by neurons through the connectivity network in which they are embedded. However, the plasticity of cell-autonomous excitability could dynamically regulate the ensemble participation of individual neurons as well as the overall activity state of an ensemble. [ABSTRACT FROM AUTHOR]
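The iceberg model in this abstract can be illustrated with a toy threshold computation (a hedged sketch, not from the paper; all voltages are arbitrary round numbers): a synaptic input of fixed strength is subthreshold at baseline but drives spiking once intrinsic excitability is enhanced, with no synaptic change at all.

```python
# Toy illustration of the "iceberg model": the EPSP amplitude never changes
# (no LTP); only an intrinsic excitability boost moves the cell across
# spike threshold.

def spikes(epsp_mv, resting_mv, threshold_mv, excitability_boost_mv=0.0):
    """Return True if the depolarized membrane crosses spike threshold."""
    membrane = resting_mv + excitability_boost_mv + epsp_mv
    return membrane >= threshold_mv

EPSP = 10.0      # fixed synaptic input (mV), unchanged throughout
REST = -70.0     # resting potential (mV)
THRESH = -55.0   # spike threshold (mV)

baseline = spikes(EPSP, REST, THRESH)                             # -60 mV
enhanced = spikes(EPSP, REST, THRESH, excitability_boost_mv=6.0)  # -54 mV

print(baseline, enhanced)  # False True
```

The same previously silent ("submerged") connection becomes suprathreshold purely through the excitability term, which is the essence of the model.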
- Published
- 2024
- Full Text
- View/download PDF
3. Memory Analysis
- Author
-
Kävrestad, Joakim, Birath, Marcus, Clarke, Nathan, Hazzan, Orit, Series Editor, and Maurer, Frank, Series Editor
- Published
- 2024
- Full Text
- View/download PDF
4. An Empirical Study of Memory Pool Based Allocation and Reuse in CUDA Graph
- Author
-
Qian, Ruyi, Gao, Mengjuan, Shi, Qinwen, Xu, Yuanchao, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Tari, Zahir, editor, Li, Keqiu, editor, and Wu, Hongyi, editor
- Published
- 2024
- Full Text
- View/download PDF
5. Optimized memory allocation in edge-PLCs using Deep Q-Networks and bidirectional LSTM with Quantum Genetic Algorithm
- Author
-
N. Naveen Kumar, S. Saravana, S. Balamurugan, P. Seshu Kumar, and S. Suresh
- Subjects
Reinforcement learning, Memory allocation, Edge-PLCs, BiLSTM networks, Quantum Genetic Algorithm Optimization, Electrical engineering. Electronics. Nuclear engineering, TK1-9971 - Abstract
This paper offers a fresh perspective on memory allocation in edge-PLCs within industrial IoT environments, leveraging a hybrid architecture that incorporates Deep Q-Networks (DQN) and Bidirectional Long Short-Term Memory (BiLSTM) networks, complemented by Quantum Genetic Algorithm Optimization. In addressing the dynamic challenges of memory management, our method combines the reinforcement learning capabilities of DQN with the temporal contextualization provided by BiLSTM networks. The DQN component learns to optimize memory allocation policies based on immediate rewards and feedback, while the BiLSTM network captures long-term dependencies in data, enhancing predictive modeling for future data arrival rates. Moreover, we introduce Quantum Genetic Algorithm Optimization, a cutting-edge approach that infuses quantum-inspired principles into the traditional genetic algorithm framework, to further refine the memory allocation process. By leveraging principles of quantum computing, this optimization algorithm explores the solution space more efficiently, allowing for faster convergence and improved performance. Through simulation experiments, we demonstrate the effectiveness of our hybrid approach in reducing data loss probability and enhancing system performance in edge-PLCs. Our findings underscore the significance of integrating advanced machine learning techniques with quantum-inspired optimization algorithms to address complex challenges in industrial IoT environments, offering a promising avenue for enhancing memory allocation efficiency in edge computing systems.
- Published
- 2024
- Full Text
- View/download PDF
6. Local memory allocation recruits memory ensembles across brain regions.
- Author
-
Lavi, Ayal, Sehgal, Megha, de Sousa, Andre, Ter-Mkrtchyan, Donara, Sisan, Fardad, Luchetti, Alessandro, Okabe, Anna, Bear, Cameron, and Silva, Alcino
- Subjects
CREB, auditory fear conditioning, conditioned taste aversion, cross-regional recruitment, memory allocation, memory coordination, memory ensemble, rabies, retrograde mechanism, Mice, Animals, Memory, Learning, Brain, Neurons - Abstract
Memories are thought to be stored in ensembles of neurons across multiple brain regions. However, whether and how these ensembles are coordinated at the time of learning remains largely unknown. Here, we combined CREB-mediated memory allocation with transsynaptic retrograde tracing to demonstrate that the allocation of aversive memories to a group of neurons in one brain region directly affects the allocation of interconnected neurons in upstream brain regions in a behavioral- and brain region-specific manner in mice. Our analysis suggests that this cross-regional recruitment of presynaptic neurons is initiated by downstream memory neurons through a retrograde mechanism. Together with statistical modeling, our results indicate that in addition to the anterograde flow of information between brain regions, the establishment of interconnected, brain-wide memory traces relies on a retrograde mechanism that coordinates memory ensembles at the time of learning.
- Published
- 2023
7. Intrinsic Neural Excitability Biases Allocation and Overlap of Memory Engrams.
- Author
-
Delamare, Geoffroy, Feitosa Tomé, Douglas, and Clopath, Claudia
- Subjects
RECOLLECTION (Psychology), RECURRENT neural networks, MEMORY, AMYGDALOID body, NEUROPLASTICITY - Abstract
Memories are thought to be stored in neural ensembles known as engrams that are specifically reactivated during memory recall. Recent studies have found that memory engrams of two events that happened close in time tend to overlap in the hippocampus and the amygdala, and these overlaps have been shown to support memory linking. It has been hypothesized that engram overlaps arise from the mechanisms that regulate memory allocation itself, involving neural excitability, but the exact process remains unclear. Indeed, most theoretical studies focus on synaptic plasticity and little is known about the role of intrinsic plasticity, which could be mediated by neural excitability and serve as a complementary mechanism for forming memory engrams. Here, we developed a rate-based recurrent neural network that includes both synaptic plasticity and neural excitability. We obtained structural and functional overlap of memory engrams for contexts that are presented close in time, consistent with experimental and computational studies. We then investigated the role of excitability in memory allocation at the network level and unveiled competitive mechanisms driven by inhibition. This work suggests mechanisms underlying the role of intrinsic excitability in memory allocation and linking, and yields predictions regarding the formation and the overlap of memory engrams. [ABSTRACT FROM AUTHOR]
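The allocation-and-overlap mechanism the authors model can be caricatured in a few lines (a hypothetical sketch, not the authors' rate-based recurrent network; the neuron count, noise levels, boost size, and decay constant are all invented): the most driven and most excitable neurons win allocation, learning transiently boosts their excitability, and contexts presented close in time therefore tend to recruit overlapping ensembles.

```python
import random

random.seed(0)

N, K = 100, 10            # neurons, engram size
BOOST = 2.0               # learning-induced excitability increase (arbitrary)
DECAY = 0.5               # per-interval relaxation of excitability

excitability = [random.gauss(0.0, 0.3) for _ in range(N)]

def present_context():
    """Allocate the K most responsive neurons (input drive + excitability)
    to this context's engram, then transiently boost their excitability."""
    drive = [random.gauss(0.0, 1.0) + e for e in excitability]
    engram = set(sorted(range(N), key=lambda i: -drive[i])[:K])
    for i in engram:
        excitability[i] += BOOST
    return engram

def wait(intervals):
    """Let excitability relax toward baseline between contexts."""
    for _ in range(intervals):
        for i in range(N):
            excitability[i] *= DECAY

a = present_context()
wait(1)                   # short delay: residual boost biases allocation
b = present_context()
wait(10)                  # long delay: the boost has decayed away
c = present_context()

# Engram overlap for contexts close in time vs. far apart; the close pair
# typically (though not deterministically) shows the larger overlap.
print(len(a & b), len(b & c))
```

This reproduces only the qualitative prediction of excitability-biased allocation; the paper's network adds recurrent synaptic plasticity and inhibition-driven competition on top of this.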
- Published
- 2024
- Full Text
- View/download PDF
8. Off-Chip Memory Allocation for Neural Processing Units
- Author
-
Andrey Kvochko, Evgenii Maltsev, Artem Balyshev, Stanislav Malakhov, and Alexander Efimov
- Subjects
NPU, memory allocation, neural network runtime, tiling, strip-packing problem, Electrical engineering. Electronics. Nuclear engineering, TK1-9971 - Abstract
Many modern Systems-on-Chip (SoCs) are equipped with specialized Machine Learning (ML) accelerators that use both on-chip and off-chip memory to execute neural networks. While on-chip memory usually has a hard limit, off-chip memory is often considered large enough to hold the network’s inputs, outputs, weights, and any intermediate results that may occur during model execution. This assumption may not hold for edge devices, such as smartphones, which usually have a limit on the amount of memory a process can use. In this study, we propose a novel approach for minimizing a neural network’s off-chip memory usage by introducing a tile-aware allocator capable of reusing memory occupied by parts of a tensor before the entire tensor expires. We describe the necessary conditions for such an off-chip memory allocation approach and provide the results, showing that it can save up to 33% of the peak off-chip memory usage in some common network architectures.
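The liveness-based reuse underlying such allocators can be sketched with a classic greedy offset-assignment planner (a simplified illustration only, not the paper's tile-aware algorithm, which additionally reclaims parts of a tensor before the whole tensor expires): tensors whose lifetimes do not overlap may share the same addresses, so peak memory can be well below the sum of tensor sizes.

```python
def plan(tensors):
    """Greedy offset assignment for tensors with known lifetimes.
    Each tensor is placed at the lowest offset that does not overlap,
    in both time and address, any already-placed tensor.
    tensors: list of (size, start_step, end_step); end is exclusive."""
    placed = []                                    # (offset, size, start, end)
    for size, start, end in sorted(tensors, key=lambda t: -t[0]):
        # address ranges of tensors whose lifetimes intersect this one
        busy = sorted((off, off + sz) for off, sz, s, e in placed
                      if s < end and start < e)
        offset = 0
        for lo, hi in busy:
            if offset + size <= lo:
                break                              # fits in the gap before lo
            offset = max(offset, hi)
        placed.append((offset, size, start, end))
    peak = max((off + sz for off, sz, _, _ in placed), default=0)
    return placed, peak

# Two size-4 tensors with disjoint lifetimes share offset 0; only the
# size-2 tensor that overlaps both needs a fresh region.
layout, peak = plan([(4, 0, 2), (4, 2, 4), (2, 1, 3)])
print(peak)  # 6, versus 10 if every tensor got its own region
```

The paper's contribution is finer-grained than this: by tracking tile (sub-tensor) lifetimes, regions can be reused even earlier, which is where the reported 33% peak savings come from.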
- Published
- 2024
- Full Text
- View/download PDF
9. Initialization Methods for FPGA-Based EMT Simulations
- Author
-
Xin Ma and Xiao-Ping Zhang
- Subjects
Initialization, electromagnetic transient (EMT), FPGA, COE file, memory allocation, Distribution or transmission of electric power, TK3001-3521, Production of electric energy or power. Powerplants. Central stations, TK1001-1841 - Abstract
FPGA has become a very powerful platform for real-time Electromagnetic Transient (EMT) solutions due to its much lower investment costs in comparison to other existing real-time platforms. Existing off-line initialization methods cannot be applied to real-time FPGA directly owing to timing constraints and resource utilization. Without appropriate initialization, FPGA-based EMT simulations can diverge and produce inaccurate results. To provide real-time initialization, this paper presents four initialization methods for FPGA-based EMT: physical interface (Method 1), signal declaration (Method 2), signal assignment (Method 3), and Coefficient (COE) file (Method 4). The performance of these four methods is compared, and Method 4 can initialize instantly with the simplest code. To improve hardware adaptability, optimized strategies are developed for address sequence, interface, update modes, and dataflow. To accelerate initialization, a software-to-hardware algorithm and structure are developed to automate initialization data sources for different topologies. A case study shows that Methods 2-4 all initialize successfully on the FPGA platform, while Method 4 achieves the best timing and routing performance. To verify scalability, Method 4 is extended to initialize a 4-machine 11-bus system, reducing significant error to less than 5% with a timing constraint of 0.005 ns.
- Published
- 2024
- Full Text
- View/download PDF
10. Dimensions and mechanisms of memory organization
- Author
-
de Sousa, André F, Chowdhury, Ananya, and Silva, Alcino J
- Subjects
Mental Health, Neurosciences, Underpinning research, 1.2 Psychological and socioeconomic processes, Animals, Brain, Humans, Memory, Neurons, engram overlap, inferential reasoning, memory allocation, memory linking, memory organization, mnemonic structures, Psychology, Cognitive Sciences, Neurology & Neurosurgery - Abstract
Memory formation is dynamic in nature, and acquisition of new information is often influenced by previous experiences. Memories sharing certain attributes are known to interact so that retrieval of one increases the likelihood of retrieving the other, raising the possibility that related memories are organized into associative mnemonic structures of interconnected representations. Although the formation and retrieval of single memories have been studied extensively, very little is known about the brain mechanisms that organize and link related memories. Here we review studies that suggest the existence of mnemonic structures in humans and animal models. These studies suggest three main dimensions of experience that can serve to organize related memories: time, space, and perceptual/conceptual similarities. We propose potential molecular, cellular, and systems mechanisms that might support organization of memories according to these dimensions.
- Published
- 2021
11. Real-Time Multi-Task ADAS Implementation on Reconfigurable Heterogeneous MPSoC Architecture
- Author
-
Guner Tatar and Salih Bayar
- Subjects
ADAS, deep learning, deep processing unit, memory allocation, multi-task learning, MPSoC-FPGA architecture, Electrical engineering. Electronics. Nuclear engineering, TK1-9971 - Abstract
The rapid adoption of Advanced Driver Assistance Systems (ADAS) in modern vehicles, aiming to elevate driving safety and experience, necessitates the real-time processing of high-definition video data. This requirement brings about considerable computational complexity and memory demands, highlighting a critical research void for a design integrating high FPS throughput with optimal Mean Average Precision (mAP) and Mean Intersection over Union (mIoU). Performance improvement at lower costs, multi-tasking ability on a single hardware platform, and flawless incorporation into memory-constrained devices are also essential for boosting ADAS performance. Addressing these challenges, this study proposes an ADAS multi-task learning hardware-software co-design approach underpinned by the Kria KV260 Multi-Processor System-on-Chip Field Programmable Gate Array (MPSoC-FPGA) platform. The approach facilitates efficient real-time execution of deep learning algorithms specific to ADAS applications. Utilizing the BDD100K+Waymo, KITTI, and CityScapes datasets, our ADAS multi-task learning system endeavours to provide accurate and efficient multi-object detection, segmentation, and lane and drivable area detection in road images. The system deploys a segmentation-based object detection strategy, using a ResNet-18 backbone encoder and a Single Shot Detector architecture, coupled with quantization-aware training to augment inference performance without compromising accuracy. The ADAS multi-task learning offers customization options for various ADAS applications and can be further optimized for increased precision and reduced memory usage. Experimental results showcase the system's capability to perform real-time multi-class object detection, segmentation, line detection, and drivable area detection on road images at approximately 25.4 FPS using a 1920×1080p Full HD camera.
Impressively, the quantized model has demonstrated a 51% mAP for object detection, 56.62% mIoU for image segmentation, 43.86% mIoU for line detection, and 81.56% IoU for drivable area identification, reinforcing its high efficacy and precision. The findings underscore that the proposed ADAS multi-task learning system is a practical, reliable, and effective solution for real-world applications.
- Published
- 2023
- Full Text
- View/download PDF
12. Lightweight Array Contraction by Trace-Based Polyhedral Analysis
- Author
-
Thievenaz, Hugo, Kimura, Keiji, Alias, Christophe, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Anzt, Hartwig, editor, Bienz, Amanda, editor, Luszczek, Piotr, editor, and Baboulin, Marc, editor
- Published
- 2022
- Full Text
- View/download PDF
13. MAFF: Self-adaptive Memory Optimization for Serverless Functions
- Author
-
Zubko, Tetiana, Jindal, Anshul, Chadha, Mohak, Gerndt, Michael, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Montesi, Fabrizio, editor, Papadopoulos, George Angelos, editor, and Zimmermann, Wolf, editor
- Published
- 2022
- Full Text
- View/download PDF
14. Web-Based Simulator for Operating Systems
- Author
-
Prajwal, K., Navaneeth, P., Tharun, K., Chandak, Trupti, Kumar, M. Anand, Bansal, Jagdish Chand, Series Editor, Deep, Kusum, Series Editor, Nagar, Atulya K., Series Editor, Agarwal, Basant, editor, Rahman, Azizur, editor, Patnaik, Srikant, editor, and Poonia, Ramesh Chandra, editor
- Published
- 2022
- Full Text
- View/download PDF
15. Memory management optimization strategy in Spark framework based on less contention.
- Author
-
Song, Yixin, Yu, Junyang, Wang, JinJiang, and He, Xin
- Subjects
INDUSTRIAL efficiency, MEMORY, PARALLEL programming, CACHE memory - Abstract
The parallel computing framework Spark 2.x adopts a unified memory management model. Under a memory bottleneck, the memory allocation of active tasks and the RDD (Resilient Distributed Datasets) cache causes memory contention, which may reduce computing resource utilization and persistence acceleration effects, thus affecting program execution efficiency. To this end, we propose a less-contention management strategy, abbreviated as MCM, to reduce the negative impact of memory contention. MCM has two steps. First, the task minimum-memory priority guarantee algorithm gives priority to meeting tasks' minimum resource requirements for execution, optimizing the number of active tasks. Second, taking contention costs into account, the persisted-location selection algorithm dynamically selects the best storage location to improve the effect of persistence acceleration. The experimental results confirm that MCM has good adaptability and scalability. Under a severe memory bottleneck, MCM significantly reduces job execution time. Compared with similar works, such as only_memory, only_disk, memory_and_disk, DMAOM and SACM, MCM reduces the execution time by 28.3%. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
16. SoK: Secure Memory Allocation
- Author
-
Novković, Bojan, Golub, Marin, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Conti, Mauro, editor, Stevens, Marc, editor, and Krenn, Stephan, editor
- Published
- 2021
- Full Text
- View/download PDF
17. A Dynamic Protection Mechanism for GPU Memory Overflow
- Author
-
Yang, Yaning, Wang, Xiaoqi, Peng, Shaoliang, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, He, Xin, editor, Shao, En, editor, and Tan, Guangming, editor
- Published
- 2021
- Full Text
- View/download PDF
18. Fast and Memory-Efficient TFIDF Calculation for Text Analysis of Large Datasets
- Author
-
Senbel, Samah, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Fujita, Hamido, editor, Selamat, Ali, editor, Lin, Jerry Chun-Wei, editor, and Ali, Moonis, editor
- Published
- 2021
- Full Text
- View/download PDF
19. Compilation for Real-Time Systems a Decade After Predator
- Author
-
Falk, Heiko, Jadhav, Shashank, Luppold, Arno, Muts, Kateryna, Oehlert, Dominic, Piontek, Nina, Roth, Mikko, and Chen, Jian-Jia, editor
- Published
- 2021
- Full Text
- View/download PDF
20. Dynamic Buffer Management in Massively Parallel Systems: The Power of Randomness.
- Author
-
Pham M, Yuan Y, Li H, Mou C, Tu Y, Xu Z, and Meng J
- Abstract
Massively parallel systems, such as Graphics Processing Units (GPUs), play an increasingly crucial role in today's data-intensive computing. The unique challenges associated with developing system software for massively parallel hardware to support numerous parallel threads efficiently are of paramount importance. One such challenge is the design of a dynamic memory allocator to allocate memory at runtime. Traditionally, memory allocators have relied on maintaining a global data structure, such as a queue of free pages. However, in the context of massively parallel systems, accessing such global data structures can quickly become a bottleneck even with multiple queues in place. This paper presents a novel approach to dynamic memory allocation that eliminates the need for a centralized data structure. Our proposed approach revolves around letting threads employ random search procedures to locate free pages. Through mathematical proofs and extensive experiments, we demonstrate that the basic random search design achieves lower latency than the best-known existing solution in most situations. Furthermore, we develop more advanced techniques and algorithms to tackle the challenge of warp divergence and further enhance performance when free memory is limited. Building upon these advancements, our mathematical proofs and experimental results affirm that these advanced designs can yield an order of magnitude improvement over the basic design and consistently outperform the state-of-the-art by up to two orders of magnitude. To illustrate the practical implications of our work, we integrate our memory management techniques into two GPU algorithms: a hash join and a group-by. Both case studies provide compelling evidence of our approach's pronounced performance gains.
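The basic random-search design described above can be sketched as follows (a single-threaded toy model; a real GPU allocator would claim the page with an atomic compare-and-swap rather than a plain write, and the probe budget here is an arbitrary choice):

```python
import random

class RandomSearchAllocator:
    """Toy model of decentralized page allocation by random probing:
    there is no global free-list or queue; an allocating thread simply
    draws random page indices until it hits a free one."""

    def __init__(self, num_pages, seed=42):
        self.free = [True] * num_pages
        self.rng = random.Random(seed)

    def alloc(self, max_probes=10_000):
        for _ in range(max_probes):
            i = self.rng.randrange(len(self.free))
            if self.free[i]:
                self.free[i] = False   # atomicCAS in a real GPU version
                return i
        raise MemoryError("no free page found within probe budget")

    def free_page(self, i):
        self.free[i] = True

alloc = RandomSearchAllocator(128)
pages = [alloc.alloc() for _ in range(100)]
print(len(set(pages)))  # 100 distinct pages claimed
```

The expected number of probes is roughly the reciprocal of the free fraction, which is why the paper's advanced designs kick in when free memory is scarce and naive probing (and warp divergence across probing threads) becomes expensive.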
- Published
- 2025
- Full Text
- View/download PDF
21. ctFS: Replacing File Indexing with Hardware Memory Translation through Contiguous File Allocation for Persistent Memory.
- Author
-
LI, RUIBIN, REN, XIANG, ZHAO, XU, HE, SIWEI, STUMM, MICHAEL, and YUAN, DING
- Subjects
MEMORY, PRICE indexes, COMPUTER systems, RANDOM access memory - Abstract
Persistent byte-addressable memory (PM) is poised to become prevalent in future computer systems. PMs are significantly faster than disk storage, and accesses to PMs are governed by the Memory Management Unit (MMU) just as accesses to volatile RAM are. These unique characteristics shift the bottleneck from I/O to operations such as block address lookup: for example, in write workloads, up to 45% of the overhead in ext4-DAX is due to building and searching extent trees to translate file offsets to addresses on persistent memory. We propose a novel contiguous file system, ctFS, that eliminates most of the overhead associated with indexing structures such as extent trees in the file system. ctFS represents each file as a contiguous region of virtual memory; hence a lookup from file offset to address is simply an offset operation, which can be efficiently performed by the hardware MMU at a fraction of the cost of software-maintained indexes. Evaluating ctFS on real-world workloads such as LevelDB shows it outperforms ext4-DAX and SplitFS by 3.6× and 1.8×, respectively. [ABSTRACT FROM AUTHOR]
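The lookup savings ctFS describes can be shown schematically (a toy sketch; the extent list and base address below are invented, and the real extent search and MMU translation of course happen in the kernel and hardware, not in Python):

```python
import bisect

# Extent-based lookup (ext4-DAX style): each file keeps a sorted list of
# extents mapping a file-offset range to a device address; every access
# pays for a search over this index.
extents = [(0, 0x1000, 4096), (4096, 0x8000, 8192)]  # (file_off, dev_addr, len)

def extent_lookup(file_off):
    starts = [e[0] for e in extents]
    i = bisect.bisect_right(starts, file_off) - 1
    off, addr, length = extents[i]
    assert off <= file_off < off + length
    return addr + (file_off - off)

# Contiguous layout (the ctFS idea): the whole file is one contiguous
# region of virtual memory, so the software-visible lookup collapses to a
# single addition; the page-table walk is done by the MMU.
FILE_BASE = 0x7f0000000000   # hypothetical mapping address

def contiguous_lookup(file_off):
    return FILE_BASE + file_off
```

The contrast is the point: the extent path scales with index size and must be maintained on every append, while the contiguous path is a constant-time offset regardless of file size.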
- Published
- 2022
- Full Text
- View/download PDF
22. A multifarious exploration of synaptic tagging and capture hypothesis in synaptic plasticity: Development of an integrated mathematical model and computational experiments
- Author
-
Khan, R, Kulasiri, Don, and Samarasinghe, Sandhya
- Published
- 2023
- Full Text
- View/download PDF
23. Smart Destination-Based Parking for the Optimization of Waiting Time
- Author
-
Balzano, Marco, Balzano, Walter, Sorrentino, Loredana, Stranieri, Silvia, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Barolli, Leonard, editor, Amato, Flora, editor, Moscato, Francesco, editor, Enokido, Tomoya, editor, and Takizawa, Makoto, editor
- Published
- 2020
- Full Text
- View/download PDF
24. Memory Analysis
- Author
-
Kävrestad, Joakim
- Published
- 2020
- Full Text
- View/download PDF
25. Extending the wait-free hierarchy to multi-threaded systems.
- Author
-
Perrin, Matthieu, Mostéfaoui, Achour, Bonin, Grégoire, and Courtillat-Piazza, Ludmila
- Subjects
INFINITE processes, COMPUTER architecture, PROGRAMMING languages, SHARED workspaces - Abstract
In modern operating systems and programming languages adapted to multicore computer architectures, parallelism is abstracted by the notion of execution threads. Multi-threaded systems have two major specificities: on the one hand, new threads can be created dynamically at runtime, so there is no bound on the number of threads participating in long-running executions. On the other hand, threads have access to a memory allocation mechanism that cannot allocate infinite arrays. These specificities make it challenging to adapt some algorithms to multi-threaded systems, in particular those that need to assign one shared register per process. This paper explores the synchronization power of shared objects in multi-threaded systems by extending Herlihy's famous wait-free hierarchy to take these constraints into consideration. It proposes to subdivide the set of objects with an infinite consensus number into nine new degrees, depending on their ability to synchronize a bounded, finite or infinite number of processes, with or without the need to allocate an infinite array. To show the relevance of the proposed extension, each new degree is either proved to be empty or illustrated with an object. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
26. Adapting and Evaluating Buddy Allocators for Use Within ZGC : ZGC's New Best Friend
- Author
-
Casper, Norrbin
- Abstract
In the current software development environment, Java remains one of the major languages, powering numerous applications. Central to Java's effectiveness is the Java Virtual Machine (JVM), with HotSpot being a key implementation. Within HotSpot, garbage collection (GC) is critical for efficient memory management; one such collector is the Z Garbage Collector (ZGC), designed for minimal latency and high throughput. ZGC primarily uses bump-pointer allocation, which, while fast, can lead to fragmentation issues. An alternative allocation strategy involves using free-lists to dynamically manage memory blocks of various sizes, as in the buddy allocator. This thesis explores the adaptation and evaluation of buddy allocators for potential integration within ZGC, aiming to enhance memory allocation efficiency and minimize fragmentation. The thesis investigates the binary buddy allocator, the binary tree buddy allocator, and the inverse buddy allocator, assessing their performance and suitability for ZGC. Although not integrated into ZGC, these exploratory modifications and evaluations provide insight into their behavior and performance in a GC context. The study reveals that while buddy allocators offer promising solutions to fragmentation, they require careful adaptation to handle the unique demands of ZGC. The conclusions drawn from this research highlight the potential of free-list-based allocators to improve memory management in Java applications. These advances could reduce GC-induced latency and enhance the scalability of Java-based systems, addressing the growing demands of modern software applications.
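A binary buddy allocator of the kind evaluated in this thesis can be sketched in a few lines (a generic textbook version, not the thesis code; block offsets stand in for addresses):

```python
class BuddyAllocator:
    """Minimal binary buddy allocator over a power-of-two heap.
    Blocks are split in halves on allocation and coalesced with their
    'buddy' (offset XOR size) on free."""

    def __init__(self, total, min_block=16):
        assert total & (total - 1) == 0, "heap size must be a power of two"
        self.total, self.min_block = total, min_block
        self.free_lists = {total: {0}}   # block size -> set of free offsets
        self.allocated = {}              # offset -> block size

    def _size_for(self, n):
        size = self.min_block
        while size < n:
            size *= 2
        return size

    def alloc(self, n):
        size = self._size_for(n)
        s = size
        while s <= self.total and not self.free_lists.get(s):
            s *= 2                        # find the smallest free block
        if s > self.total:
            raise MemoryError
        off = self.free_lists[s].pop()
        while s > size:                   # split down to the requested size
            s //= 2
            self.free_lists.setdefault(s, set()).add(off + s)
        self.allocated[off] = size
        return off

    def free(self, off):
        size = self.allocated.pop(off)
        while size < self.total:
            buddy = off ^ size            # buddy differs in exactly one bit
            peers = self.free_lists.get(size, set())
            if buddy not in peers:
                break
            peers.remove(buddy)           # coalesce with the free buddy
            off, size = min(off, buddy), size * 2
        self.free_lists.setdefault(size, set()).add(off)
```

Splitting and coalescing are both O(log heap size), and the XOR buddy computation is what keeps external fragmentation bounded, at the cost of internal fragmentation from rounding requests up to powers of two.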
- Published
- 2024
27. Exploring hardware memory allocation in heterogeneous systems
- Author
-
Universitat Politècnica de Catalunya. Departament d'Arquitectura de Computadors, University of California, Santa Barbara, Balkind, Jonathan, and Farrés García, Joan
- Abstract
As heterogeneous systems become more mainstream, data transfer between accelerators and software is becoming increasingly important. However, modern accelerators don't have a clear interface through which to allocate memory, with requests typically handled through interrupts, causing tedious context switches and hurting performance. In this thesis, we take a step forward in showing that hardware and software allocators can coexist in the same memory map. We present Falafel, a hardware memory allocator that can respond to allocation requests through a simple, standard interface, without the need for software interaction, freeing threads from the task of allocating memory for accelerators. Falafel is completely decoupled from the core and interfaces with accelerators and software through memory queues. We integrate Falafel into a manycore system and show that it can offer speedups of around 6-9% in producer/consumer workloads in a system with an external accelerator generating requests.
- Published
- 2024
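The Falafel abstract above describes accelerators requesting memory through queues in shared memory rather than through interrupts. The following C sketch illustrates what a minimal single-producer/single-consumer request queue for such a scheme could look like; all type names, fields, and the queue layout here are invented for illustration and are not the interface defined in the thesis.

```c
#include <stdint.h>

/* Hypothetical allocation-request message an accelerator would enqueue. */
typedef struct {
    uint64_t req_id;   /* caller-chosen tag to match the response */
    uint64_t size;     /* bytes requested */
} AllocRequest;

/* Hypothetical response the hardware allocator would post back. */
typedef struct {
    uint64_t req_id;
    uint64_t addr;     /* address granted, 0 on failure */
} AllocResponse;

#define QCAP 16u  /* power of two, so unsigned head/tail can wrap freely */

/* Single-producer/single-consumer ring holding pending requests. */
typedef struct {
    AllocRequest buf[QCAP];
    unsigned head, tail;
} ReqQueue;

/* Enqueue a request; returns 0 if the queue is full. */
int rq_push(ReqQueue *q, AllocRequest r) {
    if (q->tail - q->head == QCAP) return 0;
    q->buf[q->tail % QCAP] = r;
    q->tail++;
    return 1;
}

/* Dequeue the oldest request; returns 0 if the queue is empty. */
int rq_pop(ReqQueue *q, AllocRequest *out) {
    if (q->tail == q->head) return 0;
    *out = q->buf[q->head % QCAP];
    q->head++;
    return 1;
}
```

In this sketch the hardware allocator would poll the queue and write an `AllocResponse` into a companion response queue; the power-of-two capacity lets the unsigned head/tail counters wrap around without an explicit reset.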
28. Addressing Fragmentation in ZGC through Custom Allocators : Leveraging a Lean, Mean, Free-List Machine
- Author
-
Sikström, Joel
- Abstract
The Java programming language manages memory automatically through the use of a garbage collector (GC). The Java Virtual Machine provides several GCs tuned for different usage scenarios. One such GC is ZGC. Both ZGC and other GCs utilize bump-pointer allocation, which allocates objects compactly but leads to the creation of unusable memory gaps over time, known as fragmentation. ZGC handles fragmentation through relocation, a process which is costly. This thesis proposes an alternative memory allocation method leveraging free-lists to reduce the need for relocation to manage fragmentation. We design and develop a new allocator tailored for ZGC, based on the TLSF allocator by Masmano et al. Previous research on the customization of allocators shows varying results and does not fully investigate usage in complex environments like a GC. Opportunities for enhancements in performance and memory efficiency are identified and implemented through the exploration of ZGC's operational boundaries. The most significant adaptation is the introduction of a 0-byte header, which leverages information within ZGC to significantly reduce internal fragmentation of the allocator. We evaluate the performance of our adapted allocator and compare it to a reference implementation of TLSF. Results show that the adapted allocator performs on par with the reference implementation for single allocations but is slightly slower for single frees and when applying allocation patterns from real-world programs. The findings of this work suggest that customizing allocators for garbage collection is worth considering and may be useful for future integration.
- Published
- 2024
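The allocator above is based on TLSF, which keeps free blocks in segregated free lists indexed by a two-level size mapping: a first-level index from the position of the size's most significant bit, and a second-level index from the next few bits. Below is a minimal C sketch of that index computation, assuming 16 second-level subdivisions and request sizes of at least 16 bytes (real TLSF special-cases smaller blocks and adds further machinery, such as the 0-byte header adaptation the abstract describes):

```c
#include <stdint.h>

#define SL_BITS 4  /* log2 of second-level subdivisions per size class */

/* Position of the highest set bit, i.e. floor(log2(x)); requires x > 0. */
int fls_u32(uint32_t x) {
    int r = 0;
    while (x >>= 1) r++;
    return r;
}

/* Map a request size to TLSF-style (first, second) level list indices.
   Assumes size >= (1 << SL_BITS); small sizes need a separate path. */
void tlsf_mapping(uint32_t size, int *fl, int *sl) {
    *fl = fls_u32(size);
    *sl = (int)((size >> (*fl - SL_BITS)) & ((1u << SL_BITS) - 1));
}
```

For example, a 1000-byte request falls in first-level class 9 (sizes 512-1023) and second-level slot 15, so a free block serving it is found in O(1) with two bitmap scans rather than a list traversal.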
29. The first two decades of CREB-memory research: data for philosophy of neuroscience
- Author
-
John Bickle
- Subjects
cyclic adenosine monophosphate response element-binding protein (creb) ,memory consolidation ,memory allocation ,ruthless reductionism ,mechanism ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
I recount some landmark discoveries that initially confirmed the cyclic AMP response element-binding (CREB) protein-memory consolidation and allocation linkages. This work constitutes one of the successes of the field of Molecular and Cellular Cognition (MCC) but is also of interest to philosophers of neuroscience. Two approaches, “mechanism” and “ruthless reductionism”, claim to account for this case, yet these accounts differ in one crucial way. I explain this difference and argue that both the experiment designs and discussions of these discoveries by MCC scientists better fit the ruthless reductionist's account. This conclusion leads to further philosophical discussion about how discoveries in cellular/molecular neurobiology integrate with systems neuroscience findings.
- Published
- 2021
- Full Text
- View/download PDF
30. Cortico-amygdala interaction determines the insular cortical neurons involved in taste memory retrieval
- Author
-
Konami Abe, Marin Kuroda, Yosuke Narumi, Yuki Kobayashi, Shigeyoshi Itohara, Teiichi Furuichi, and Yoshitake Sano
- Subjects
Memory allocation ,Insular cortex ,Basolateral amygdala ,Conditioned taste aversion ,Taste memory ,Functional interaction ,Neurology. Diseases of the nervous system ,RC346-429 - Abstract
The insular cortex (IC) is the primary gustatory cortex, and it is a critical structure for encoding and retrieving the conditioned taste aversion (CTA) memory. In the CTA, consumption of an appetitive tastant is associated with an aversive experience such as visceral malaise, which results in avoidance of consuming the learned tastant. Previously, we showed that levels of the cyclic-AMP-response-element-binding protein (CREB) determine the insular cortical neurons that proceed to encode a conditioned taste memory. In the amygdala and hippocampus, CREB and neuronal activity have been shown to regulate memory allocation, the neuronal mechanism that determines the specific neurons in a neural network that will store a given memory. However, the cellular mechanism of memory allocation in the insular cortex is not fully understood. In the current study, we manipulated the neuronal activity in a subset of insular cortical and/or basolateral amygdala (BLA) neurons in mice at the time of learning; for this purpose, we used an hM3Dq designer receptor exclusively activated by a designer drug (DREADD) system. Subsequently, we examined whether the neuronal population whose activity is increased during learning is reactivated by memory retrieval, using the expression of the immediate early gene c-fos. When an hM3Dq receptor was activated only in a subset of IC neurons, c-fos expression following memory retrieval was not significantly observed in hM3Dq-positive neurons. Interestingly, the probability of c-fos expression in hM3Dq-positive IC neurons after retrieval was significantly increased when the IC and BLA were co-activated during conditioning. Our findings suggest that functional interactions between the IC and BLA regulate CTA memory allocation in the insular cortex, shedding light on the mechanism of memory allocation regulated by interaction between relevant brain areas.
- Published
- 2020
- Full Text
- View/download PDF
31. Accelerated superpixel image segmentation with a parallelized DBSCAN algorithm.
- Author
-
Loke, Seng Cheong, MacDonald, Bruce A., Parsons, Matthew, and Wünsche, Burkhard Claus
- Abstract
Segmentation of an image into superpixel clusters is a necessary part of many imaging pathways. In this article, we describe a new routine for superpixel image segmentation (F-DBSCAN) based on the DBSCAN algorithm that is six times faster than existing methods, while being competitive in terms of segmentation quality and resistance to noise. The gains in speed are achieved through efficient parallelization of the cluster search process by limiting the size of each cluster, thus enabling the processes to operate in parallel without duplicating search areas. Calculations are performed in large consolidated memory buffers, which eliminate fragmentation and maximize memory cache hits, thus improving performance. When tested on the Berkeley Segmentation Dataset, the average processing speed is 175 frames/s with a Boundary Recall of 0.797 and an Achievable Segmentation Accuracy of 0.944. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
32. LIFO-STACK SIZE DETERMINATION FOR GROWING OF THE IMAGE REGIONS
- Author
-
V. Yu. Tsviatkou
- Subjects
lifo-stack ,image segmentation ,region growing ,stack size ,memory allocation ,Electronics ,TK7800-8360 - Abstract
This paper considers the problem of memory allocation for the organization of the LIFO-stack in an image segmentation algorithm based on region growing. Segmentation divides the image into regions with identical or similar properties and is the most demanding process for the capacity of RAM. Region growing begins from the neighborhoods of pre-selected initial growth pixels and uses stacks to store the coordinates of adjacent pixels attached to the grown region. Stack loading is maximized when the segment size matches the size YX of the image. In the absence of an expression for the exact determination of the size of the stack, it is possible to guarantee stable operation of the region-growing algorithm, eliminating overflow of the memory allocated for processing, if the stack size is assumed equal to YX. However, this approach does not take into account the fact that filling the coordinate stacks is also accompanied by popping from them, which makes the stack size always smaller than YX. The article proposes an expression that allows one to increase the accuracy of determining the required size of the LIFO-stack for storing the coordinates of adjacent pixels depending on the image size. The expression takes into account the conditions of maximum load of the LIFO-stack when: a) a square region with the initial growth pixel in its corner is segmented; b) in the scan window, adjacent pixels are always selected in order, with the first selected pixel located in the corner of the scan window. Using the proposed expression to calculate the required capacity of the LIFO-stack under conditions of its maximum load in the image segmentation algorithm based on region growing provides a 2-fold reduction in the number of LIFO-stack memory cells.
- Published
- 2020
- Full Text
- View/download PDF
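The abstract above concerns sizing the coordinate stack used by region growing, with YX (the pixel count) as the conservative bound. A minimal C sketch of 4-connected region growing with an explicit LIFO stack, instrumented to report the peak stack occupancy so the bound can be compared against actual load (the function name and instrumentation are illustrative, not from the paper):

```c
#include <stdlib.h>

typedef struct { int y, x; } Px;

/* Marks all pixels 4-connected to (sy,sx) that share its value.
   The stack is allocated at the conservative bound Y*X pixels.
   Returns the peak stack occupancy observed, or -1 on allocation failure. */
int grow_region(const int *img, int *mark, int Y, int X, int sy, int sx) {
    Px *stack = malloc((size_t)Y * X * sizeof *stack);
    if (!stack) return -1;
    int top = 0, peak = 0, v = img[sy * X + sx];
    stack[top++] = (Px){sy, sx};
    mark[sy * X + sx] = 1;
    while (top > 0) {
        if (top > peak) peak = top;
        Px p = stack[--top];
        const int dy[] = {-1, 1, 0, 0}, dx[] = {0, 0, -1, 1};
        for (int k = 0; k < 4; k++) {
            int ny = p.y + dy[k], nx = p.x + dx[k];
            if (ny >= 0 && ny < Y && nx >= 0 && nx < X &&
                !mark[ny * X + nx] && img[ny * X + nx] == v) {
                mark[ny * X + nx] = 1;      /* mark when pushed, not popped */
                stack[top++] = (Px){ny, nx};
            }
        }
    }
    free(stack);
    return peak;
}
```

Because pixels are marked when pushed (never pushed twice) and pops interleave with pushes, the observed peak stays well below Y*X in practice, which is the slack the paper's tighter bound exploits.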
33. Memory Management
- Author
-
Kävrestad, Joakim and Kävrestad, Joakim
- Published
- 2018
- Full Text
- View/download PDF
34. Memory allocation anomalies in high‐performance computing applications: A study with numerical simulations.
- Author
-
Gomes, Antônio Tadeu A., Molion, Enzo, Souto, Roberto P., and Méhaut, Jean‐François
- Subjects
SOFTWARE development tools ,PARTIAL differential equations ,COMPUTER simulation ,FINITE element method ,MEMORY ,FOOTPRINTS - Abstract
Summary: A memory allocation anomaly occurs when the allocation of a set of heap blocks imposes an unnecessary overhead on the execution of an application. This overhead is particularly disturbing for high‐performance computing (HPC) applications running on shared resources—for example, numerical simulations running on clusters or clouds—because it may increase either the execution time of the application (contributing to a reduction in the overall efficiency of the shared resource) or its memory consumption (eventually inhibiting its capacity to handle larger problems). In this article, we propose a method for identifying, locating, characterizing and fixing allocation anomalies, and a tool for developers to apply the method. We apply our method and tool to a numerical simulator aimed at approximating the solutions to partial differential equations using a finite element method. We show that taming allocation anomalies in this simulator reduces both its execution time and the memory footprint of its processes, irrespective of the specific heap allocator being employed with it. We conclude that developers of HPC applications can benefit from the method and tool during the software development cycle. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
35. The first two decades of CREB-memory research: data for philosophy of neuroscience.
- Author
-
Bickle, John
- Subjects
- *
SENSE data , *CYCLIC adenylic acid , *NEUROSCIENCES - Abstract
I recount some landmark discoveries that initially confirmed the cyclic AMP response element-binding (CREB) protein-memory consolidation and allocation linkages. This work constitutes one of the successes of the field of Molecular and Cellular Cognition (MCC) but is also of interest to philosophers of neuroscience. Two approaches, "mechanism" and "ruthless reductionism", claim to account for this case, yet these accounts differ in one crucial way. I explain this difference and argue that both the experiment designs and discussions of these discoveries by MCC scientists better fit the ruthless reductionist's account. This conclusion leads to further philosophical discussion about how discoveries in cellular/molecular neurobiology integrate with systems neuroscience findings. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
36. Penalty- and Locality-aware Memory Allocation in Redis Using Enhanced AET.
- Author
-
CHENG PAN, XIAOLIN WANG, YINGWEI LUO, and ZHENLIN WANG
- Subjects
SPACE (Architecture) ,MEMORY ,EVICTION ,WEB services - Abstract
Due to large data volume and low latency requirements of modern web services, the use of an in-memory key-value (KV) cache often becomes an inevitable choice (e.g., Redis and Memcached). The in-memory cache holds hot data, reduces request latency, and alleviates the load on background databases. Inheriting from the traditional hardware cache design, many existing KV cache systems still use recency-based cache replacement algorithms, e.g., least recently used or its approximations. However, the diversity of miss penalty distinguishes a KV cache from a hardware cache. Inadequate consideration of penalty can substantially compromise space utilization and request service time. KV accesses also demonstrate locality, which needs to be coordinated with miss penalty to guide cache management. In this article, we first discuss how to enhance the existing cache model, the Average Eviction Time model, so that it can adapt to modeling a KV cache. After that, we apply the model to Redis and propose pRedis, Penalty- and Locality-aware Memory Allocation in Redis, which synthesizes data locality and miss penalty, in a quantitative manner, to guide memory allocation and replacement in Redis. At the same time, we also explore the diurnal behavior of a KV store and exploit long-term reuse. We replace the original passive eviction mechanism with an automatic dump/load mechanism, to smooth the transition between access peaks and valleys. Our evaluation shows that pRedis effectively reduces the average and tail access latency with minimal time and space overhead. For both real-world and synthetic workloads, our approach delivers an average of 14.0%~52.3% latency reduction over a state-of-the-art penalty-aware cache management scheme, Hyperbolic Caching (HC), and shows more quantitative predictability of performance. Moreover, we can obtain even lower average latency (1.1%~5.5%) when dynamically switching policies between pRedis and HC. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
37. The brain in motion: How ensemble fluidity drives memory-updating and flexibility
- Author
-
William Mau, Michael E Hasselmo, and Denise J Cai
- Subjects
neural ensemble ,engram ,memory flexibility ,memory updating ,memory consolidation ,memory allocation ,Medicine ,Science ,Biology (General) ,QH301-705.5 - Abstract
While memories are often thought of as flashbacks to a previous experience, they do not simply conserve veridical representations of the past but must continually integrate new information to ensure survival in dynamic environments. Therefore, ‘drift’ in neural firing patterns, typically construed as disruptive ‘instability’ or an undesirable consequence of noise, may actually be useful for updating memories. In our view, continual modifications in memory representations reconcile classical theories of stable memory traces with neural drift. Here we review how memory representations are updated through dynamic recruitment of neuronal ensembles on the basis of excitability and functional connectivity at the time of learning. Overall, we emphasize the importance of considering memories not as static entities, but instead as flexible network states that reactivate and evolve across time and experience.
- Published
- 2020
- Full Text
- View/download PDF
38. The Interplay of Synaptic Plasticity and Scaling Enables Self-Organized Formation and Allocation of Multiple Memory Representations
- Author
-
Johannes Maria Auth, Timo Nachstedt, and Christian Tetzlaff
- Subjects
memory allocation ,memory formation ,synaptic plasticity ,synaptic scaling ,network dynamics ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
It is commonly assumed that memories about experienced stimuli are represented by groups of highly interconnected neurons called cell assemblies. This requires allocating and storing information in the neural circuitry, which happens through synaptic weight adaptations at different types of synapses. In general, memory allocation is associated with synaptic changes at feed-forward synapses, while memory storage is linked with adaptation of recurrent connections. It remains, however, largely unknown how memory allocation and storage can be achieved, and how the adaptation of the different synapses involved can be coordinated, to allow for a faithful representation of multiple memories without disruptive interference between them. In this theoretical study, using network simulations and phase space analyses, we show that the interplay between long-term synaptic plasticity and homeostatic synaptic scaling simultaneously organizes the adaptations of feed-forward and recurrent synapses such that a new stimulus forms a new memory and different stimuli are assigned to distinct cell assemblies. The resulting dynamics can reproduce experimental in-vivo data, focusing on how diverse factors, such as neuronal excitability and network connectivity, influence memory formation. Thus, the model presented here suggests that a few fundamental synaptic mechanisms may suffice to implement memory allocation and storage in neural circuitry.
- Published
- 2020
- Full Text
- View/download PDF
39. ML for ML: Learning Cost Semantics by Experiment
- Author
-
Das, Ankush, Hoffmann, Jan, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Legay, Axel, editor, and Margaria, Tiziana, editor
- Published
- 2017
- Full Text
- View/download PDF
40. Formal Verification of a Memory Allocation Module of Contiki with Frama-C: A Case Study
- Author
-
Mangano, Frédéric, Duquennoy, Simon, Kosmatov, Nikolai, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Cuppens, Frédéric, editor, Cuppens, Nora, editor, Lanet, Jean-Louis, editor, and Legay, Axel, editor
- Published
- 2017
- Full Text
- View/download PDF
41. The Interplay of Synaptic Plasticity and Scaling Enables Self-Organized Formation and Allocation of Multiple Memory Representations.
- Author
-
Auth, Johannes Maria, Nachstedt, Timo, and Tetzlaff, Christian
- Subjects
NEUROPLASTICITY ,LONG-term synaptic depression ,NEURAL circuitry ,SYNAPSES ,MEMORY ,PHASE space - Abstract
It is commonly assumed that memories about experienced stimuli are represented by groups of highly interconnected neurons called cell assemblies. This requires allocating and storing information in the neural circuitry, which happens through synaptic weight adaptations at different types of synapses. In general, memory allocation is associated with synaptic changes at feed-forward synapses, while memory storage is linked with adaptation of recurrent connections. It remains, however, largely unknown how memory allocation and storage can be achieved, and how the adaptation of the different synapses involved can be coordinated, to allow for a faithful representation of multiple memories without disruptive interference between them. In this theoretical study, using network simulations and phase space analyses, we show that the interplay between long-term synaptic plasticity and homeostatic synaptic scaling simultaneously organizes the adaptations of feed-forward and recurrent synapses such that a new stimulus forms a new memory and different stimuli are assigned to distinct cell assemblies. The resulting dynamics can reproduce experimental in-vivo data, focusing on how diverse factors, such as neuronal excitability and network connectivity, influence memory formation. Thus, the model presented here suggests that a few fundamental synaptic mechanisms may suffice to implement memory allocation and storage in neural circuitry. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
42. Object-Level Memory Allocation and Migration in Hybrid Memory Systems.
- Author
-
Liu, Haikun, Liu, Renshan, Liao, Xiaofei, Jin, Hai, He, Bingsheng, and Zhang, Yu
- Subjects
- *
DYNAMIC random access memory , *HYBRID systems , *MEMORY , *RANDOM access memory , *SOURCE code - Abstract
Hybrid memory systems composed of emerging non-volatile memory (NVM) and DRAM have drawn increasing attention in recent years. To fully exploit the advantages of both NVM and DRAM, a primary goal is to properly place application data on the hybrid memories. Previous studies have focused on page migration schemes to achieve higher performance and energy efficiency. However, those schemes all rely on online page access monitoring (costly), and data migration at the page granularity may cause additional overhead due to DRAM bandwidth contention and maintenance of cache/TLB consistency. In this article, we present Object-level memory Allocation and Migration (OAM) mechanisms for hybrid memory systems. OAM exploits a profiling tool to characterize objects’ memory access patterns at different execution phases of applications, and applies a performance/energy model to direct the initial static memory allocation and runtime dynamic object migration between NVM and DRAM. Based on our newly-developed programming interfaces for hybrid memory systems, application source codes can be automatically transformed via static code instrumentation. We evaluate OAM on an emulated hybrid memory system, and experimental results show that OAM can significantly reduce system energy-delay-product by 61 percent on average compared to a page-interleaving data placement scheme. It can also significantly reduce data migration overhead by 83 and 69 percent compared to the state-of-the-art page migration scheme CLOCK-DWF and 2PP, respectively, while improving application performance by up to 22 and 10 percent. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
43. Cortico-amygdala interaction determines the insular cortical neurons involved in taste memory retrieval.
- Author
-
Abe, Konami, Kuroda, Marin, Narumi, Yosuke, Kobayashi, Yuki, Itohara, Shigeyoshi, Furuichi, Teiichi, and Sano, Yoshitake
- Subjects
INSULAR cortex ,NEURONS ,TASTE ,DESIGNER drugs ,MEMORY ,AMYGDALOID body - Abstract
The insular cortex (IC) is the primary gustatory cortex, and it is a critical structure for encoding and retrieving the conditioned taste aversion (CTA) memory. In the CTA, consumption of an appetitive tastant is associated with an aversive experience such as visceral malaise, which results in avoidance of consuming the learned tastant. Previously, we showed that levels of the cyclic-AMP-response-element-binding protein (CREB) determine the insular cortical neurons that proceed to encode a conditioned taste memory. In the amygdala and hippocampus, CREB and neuronal activity have been shown to regulate memory allocation, the neuronal mechanism that determines the specific neurons in a neural network that will store a given memory. However, the cellular mechanism of memory allocation in the insular cortex is not fully understood. In the current study, we manipulated the neuronal activity in a subset of insular cortical and/or basolateral amygdala (BLA) neurons in mice at the time of learning; for this purpose, we used an hM3Dq designer receptor exclusively activated by a designer drug (DREADD) system. Subsequently, we examined whether the neuronal population whose activity is increased during learning is reactivated by memory retrieval, using the expression of the immediate early gene c-fos. When an hM3Dq receptor was activated only in a subset of IC neurons, c-fos expression following memory retrieval was not significantly observed in hM3Dq-positive neurons. Interestingly, the probability of c-fos expression in hM3Dq-positive IC neurons after retrieval was significantly increased when the IC and BLA were co-activated during conditioning. Our findings suggest that functional interactions between the IC and BLA regulate CTA memory allocation in the insular cortex, shedding light on the mechanism of memory allocation regulated by interaction between relevant brain areas. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
44. Hippocampus-to-amygdala pathway drives the separation of remote memories of related events.
- Author
-
Concina, Giulia, Milano, Luisella, Renna, Annamaria, Manassero, Eugenio, Stabile, Francesca, and Sacchetti, Benedetto
- Abstract
The mammalian brain can store and retrieve memories of related events as distinct memories and remember common features of those experiences. How it computes this function remains elusive. Here, we show in rats that recent memories of two closely timed auditory fear events share overlapping neuronal ensembles in the basolateral amygdala (BLA) and are functionally linked. However, remote memories have reduced neuronal overlap and are functionally independent. The activity of parvalbumin (PV)-expressing neurons in the BLA plays a crucial role in forming separate remote memories. Chemogenetic blockade of PV preserves individual remote memories but prevents their segregation, resulting in reciprocal associations. The hippocampus drives this process through specific excitatory connections with BLA GABAergic interneurons. These findings provide insights into the neuronal mechanisms that minimize the overlap between distinct remote memories and enable the retrieval of related memories separately. [Display omitted] • Hippocampus drives the pattern separation of fear memories in the amygdala • Parvalbumin cells enable the separation of related fear memories in the amygdala • Recent overlapped memories are transformed into segregated remote memories over time Remembering closely related events poses a brain challenge. Each memory must retain its uniqueness while commonalities are recalled. Concina et al. found that hippocampal neurons guide amygdala inhibitory cells, separating overlapped memories. This insight aids understanding of memory segregation, which is compromised in disorders like schizophrenia and post-traumatic stress disorder. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. Closing the Performance Gap with Modern C++
- Author
-
Heller, Thomas, Kaiser, Hartmut, Diehl, Patrick, Fey, Dietmar, Schweitzer, Marc Alexander, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Taufer, Michela, editor, Mohr, Bernd, editor, and Kunkel, Julian M., editor
- Published
- 2016
- Full Text
- View/download PDF
46. HYDRA : Extending Shared Address Programming for Accelerator Clusters
- Author
-
Sakdhnagool, Putt, Sabne, Amit, Eigenmann, Rudolf, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Shen, Xipeng, editor, Mueller, Frank, editor, and Tuck, James, editor
- Published
- 2016
- Full Text
- View/download PDF
47. Automatic and Efficient Data Host-Device Communication for Many-Core Coprocessors
- Author
-
Ren, Bin, Ravi, Nishkam, Yang, Yi, Feng, Min, Agrawal, Gagan, Chakradhar, Srimat, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Shen, Xipeng, editor, Mueller, Frank, editor, and Tuck, James, editor
- Published
- 2016
- Full Text
- View/download PDF
48. High-Performance and Scalable Agent-Based Simulation with BioDynaMo
- Author
-
Breitwieser, Lukas (author), Hesam, A.S. (author), Rademakers, Fons (author), Luna, Juan Gómez (author), and Mutlu, Onur (author)
- Abstract
Agent-based modeling plays an essential role in gaining insights into biology, sociology, economics, and other fields. However, many existing agent-based simulation platforms are not suitable for large-scale studies due to the low performance of the underlying simulation engines. To overcome this limitation, we present a novel high-performance simulation engine. We identify three key challenges for which we present the following solutions. First, to maximize parallelization, we present an optimized grid to search for neighbors and parallelize the merging of thread-local results. Second, we reduce the memory access latency with a NUMA-aware agent iterator, agent sorting with a space-filling curve, and a custom heap memory allocator. Third, we present a mechanism to omit the collision force calculation under certain conditions. Our evaluation shows an order of magnitude improvement over Biocellion, three orders of magnitude speedup over Cortex3D and NetLogo, and the ability to simulate 1.72 billion agents on a single server. Supplementary Materials, including instructions to reproduce the results, are available at: https://doi.org/10.5281/zenodo.6463816
- Published
- 2023
- Full Text
- View/download PDF
49. Techniques for Memory-Efficient Model Checking of C and C++ Code
- Author
-
Ročkai, Petr, Štill, Vladimír, Barnat, Jiří, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Calinescu, Radu, editor, and Rumpe, Bernhard, editor
- Published
- 2015
- Full Text
- View/download PDF
50. Reverse Code Generation for Parallel Discrete Event Simulation
- Author
-
Schordan, Markus, Jefferson, David, Barnes, Peter, Oppelstrup, Tomas, Quinlan, Daniel, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Krivine, Jean, editor, and Stefani, Jean-Bernard, editor
- Published
- 2015
- Full Text
- View/download PDF