Graph neural networks (GNNs) are vital for analyzing real-world problems (e.g., network analysis, drug interaction, electronic design automation, e-commerce) that use graph models. However, efficient GNN acceleration faces multiple challenges related to the high and variable sparsity of input feature vectors, the power-law degree distribution of the adjacency matrix, and the need to maintain load-balanced computation with minimal random memory accesses. This thesis addresses the problem of building fast, energy-efficient inference and training accelerators for GNNs on both static and dynamic graphs.

For inference, this thesis proposes GNNIE, a versatile GNN inference accelerator capable of handling a diverse set of GNNs, including graph attention networks (GATs), graph convolutional networks (GCNs), GraphSAGE, GINConv, and DiffPool. It mitigates workload imbalance by (i) splitting vertex feature operands into blocks, (ii) reordering and redistributing computations, and (iii) using a novel "flexible MAC" architecture. To maximize on-chip data reuse and reduce random DRAM fetches, GNNIE adopts a novel graph-specific, degree-aware caching policy. GNNIE attains substantial speedups over a CPU (7197x), a GPU (17.81x), and prior accelerators such as HyGCN (5x) and AWB-GCN (1.3x) across multiple datasets on GCN, GAT, GraphSAGE, and GINConv.

For training GNNs on large graphs, this research develops a GNNIE-based multicore accelerator. A novel feature vector segmentation approach enables scaling to large graphs using small on-chip buffers, and a graph-specific caching scheme tailored to the multicore platform reduces off-chip and on-chip communication and alleviates random DRAM accesses. Experiments over multiple large datasets and multiple GNNs demonstrate an average training speedup and energy efficiency improvement of 17x and 322x, respectively, over DGL running on a GPU, and a speedup of 14x with 268x lower energy over the GPU-based GNNAdvisor approach. Overall, this research tackles the scalability and versatility challenges of building GNN accelerators while delivering significant speedups and energy efficiency gains.

Finally, this thesis addresses the acceleration of dynamic graph neural networks (DGNNs), which play a crucial role in applications such as social network analytics and urban traffic prediction, where the connectivity and features of the underlying graph evolve over time. The proposed accelerator integrates the GNN and recurrent neural network (RNN) components of DGNNs, which capture spatial and temporal information, respectively, on a unified platform. The contributions encompass optimized cache reuse strategies, a novel caching policy, and an efficient pipelining mechanism. Evaluations across multiple graph datasets and multiple DGNNs demonstrate average energy efficiency gains of 8393x, 183x, and 87x–10x, and inference speedups of 1796x, 77x, and 21x–2.4x, over an Intel Xeon Gold CPU, an NVIDIA V100 GPU, and prior state-of-the-art DGNN accelerators, respectively.
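To make the degree-aware caching idea concrete, the following is a minimal Python sketch, assuming only what the abstract implies: in power-law graphs, high-degree vertices are reused most often, so on eviction the lowest-degree cached feature vector is discarded first. The class name `DegreeAwareCache`, the `fetch` callback, and all parameters are hypothetical illustrations, not the thesis's actual hardware policy.

```python
# Illustrative sketch of a degree-aware caching policy (not the thesis's
# actual design): evict the lowest-degree cached vertex first, since
# high-degree vertices in power-law graphs are reused most often.
import heapq

class DegreeAwareCache:
    def __init__(self, capacity: int):
        self.capacity = capacity      # max number of cached feature vectors
        self.store = {}               # vertex id -> feature vector
        self.heap = []                # min-heap of (degree, vertex id)

    def access(self, vertex: int, degree: int, fetch):
        """Return the vertex's feature vector, fetching from DRAM on a miss."""
        if vertex in self.store:
            return self.store[vertex]         # on-chip hit: no DRAM access
        features = fetch(vertex)              # miss: random DRAM fetch
        while len(self.store) >= self.capacity:
            _, victim = heapq.heappop(self.heap)
            if victim in self.store:          # skip stale heap entries
                del self.store[victim]        # evict lowest-degree vertex
        self.store[vertex] = features
        heapq.heappush(self.heap, (degree, vertex))
        return features

# Toy usage: with capacity 2, the degree-1 vertex is the first evicted.
cache = DegreeAwareCache(capacity=2)
dram = {v: [float(v)] * 4 for v in range(5)}  # stand-in for DRAM contents
for v, deg in [(0, 9), (1, 1), (2, 5), (1, 1)]:
    cache.access(v, deg, dram.__getitem__)
```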
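Similarly, a minimal sketch of what feature vector segmentation could look like, assuming segments are cut along the feature dimension so that each segment of the working set fits in a small on-chip buffer. The function `segmented_aggregate`, the plain neighbor-sum aggregation, and the `seg_len` parameter are all illustrative assumptions rather than the thesis's method.

```python
# Illustrative sketch of feature vector segmentation: aggregate neighbor
# features one feature-dimension segment at a time, so only a seg_len-wide
# slice of the feature matrix must be buffer-resident at once.
import numpy as np

def segmented_aggregate(features: np.ndarray, neighbors: dict, seg_len: int):
    """Sum neighbor features per vertex, one feature segment at a time."""
    num_vertices, feat_dim = features.shape
    out = np.zeros_like(features)
    for start in range(0, feat_dim, seg_len):
        end = min(start + seg_len, feat_dim)
        segment = features[:, start:end]   # models the slice held on-chip
        for v, nbrs in neighbors.items():
            if nbrs:
                out[v, start:end] = segment[list(nbrs)].sum(axis=0)
    return out

# Toy usage: 3 vertices, 8-dim features, processed in 4-wide segments.
feats = np.arange(24, dtype=float).reshape(3, 8)
adj = {0: [1, 2], 1: [0], 2: [0, 1]}
print(segmented_aggregate(feats, adj, seg_len=4))
```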