
Your search for "Garofalo, Angelo" returned 127 results.

Search constraint: Author "Garofalo, Angelo"

Search Results

1. Towards Reliable Systems: A Scalable Approach to AXI4 Transaction Monitoring

2. AXI-REALM: Safe, Modular and Lightweight Traffic Monitoring and Regulation for Heterogeneous Mixed-Criticality Systems

3. Open-Source Heterogeneous SoCs for AI: The PULP Platform Experience

4. A Flexible Template for Edge Generative AI with High-Accuracy Accelerated Softmax & GELU

5. vCLIC: Towards Fast Interrupt Handling in Virtualized RISC-V Mixed-criticality Systems

6. Unleashing OpenTitan's Potential: a Silicon-Ready Embedded Secure Element for Root of Trust and Cryptographic Offloading

7. A Gigabit, DMA-enhanced Open-Source Ethernet Controller for Mixed-Criticality Systems

8. SentryCore: A RISC-V Co-Processor System for Safe, Real-Time Control Applications

9. AXI-REALM: A Lightweight and Modular Interconnect Extension for Traffic Regulation and Monitoring of Heterogeneous Real-Time SoCs

10. ITA: An Energy-Efficient Attention and Softmax Accelerator for Quantized Transformers

11. A 3 TOPS/W RISC-V Parallel Cluster for Inference of Fine-Grain Mixed-Precision Quantized Neural Networks

12. A Survey on Deep Learning Hardware Accelerators for Heterogeneous HPC Platforms

13. Echoes: a 200 GOPS/W Frequency Domain SoC with FFT Processor and I2S DSP for Flexible Data Acquisition from Microphone Arrays

14. DARKSIDE: A Heterogeneous RISC-V Compute Cluster for Extreme-Edge On-Chip DNN Inference and Training

15. Cyber Security aboard Micro Aerial Vehicles: An OpenTitan-based Visual Communication Use Case

16. A 64-core mixed-signal in-memory compute chip based on phase-change memory for deep neural network inference

17. End-to-End DNN Inference on a Massively Parallel Analog In Memory Computing Architecture

18. Dustin: A 16-Cores Parallel Ultra-Low-Power Cluster with 2b-to-32b Fully Flexible Bit-Precision and Vector Lockstep Execution Mode

19. A Heterogeneous In-Memory Computing Cluster For Flexible End-to-End Inference of Real-World Deep Neural Networks

20. XpulpNN: Enabling Energy Efficient and Flexible Inference of Quantized Neural Network on RISC-V based IoT End Nodes

21. A Mixed-Precision RISC-V Processor for Extreme-Edge DNN Inference

22. A transprecision floating-point cluster for efficient near-sensor data analytics

23. DORY: Automatic End-to-End Deployment of Real-World DNNs on Low-Cost IoT MCUs

24. Enabling Mixed-Precision Quantized Neural Networks in Extreme-Edge Devices

25. PULP-NN: Accelerating Quantized Neural Networks on Parallel Ultra-Low-Power RISC-V Processors

26. Streamlining the OpenMP Programming Model on Ultra-Low-Power Multi-core MCUs

33. Marsellus: A Heterogeneous RISC-V AI-IoT End-Node SoC with 2-to-8b DNN Acceleration and 30%-Boost Adaptive Body Biasing

37. Marsellus: A Heterogeneous RISC-V AI-IoT End-Node SoC With 2–8 b DNN Acceleration and 30%-Boost Adaptive Body Biasing

40. Flexible Computing Systems For AI Acceleration At The Extreme Edge Of The IoT

49. Work-in-Progress: DORY: Lightweight Memory Hierarchy Management for Deep NN Inference on IoT Endnodes
