Network latency is a key performance metric for many networked systems. In small-scale systems, explicit measurements can be carried out to collect the $N \times (N-1)$ latency values covering all pairs of nodes in the network, but this is impractical at large scale because of the significant traffic and processing overhead that end-to-end latency measurements incur. Instead of exhaustive measurement, researchers have therefore proposed methods that estimate the round-trip times (RTTs) between all nodes in a network from a small set of actual RTT measurements. However, such methods not only assume that the network is symmetric, which is not necessarily the case in practice, but also require time to converge. In this work, we present a novel method of network latency estimation using Artificial Intelligence (AI), specifically machine learning, which requires no explicit measurements at estimation time and is drastically faster than existing methods. Our model is trained on the well-known iConnect-Ubisoft dataset of actual RTT measurements and uses the IP address as its primary input. Performance evaluations on two different datasets show that 73.6% and 59.3% of the estimates, respectively, fall within 20% of the measured latency.
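As a purely illustrative aside (the abstract does not specify the model architecture), the general idea of learning RTTs from IP addresses can be sketched as follows: each IPv4 address is turned into a numeric feature vector (here, its four octets scaled to $[0, 1]$), and a regressor is fit on measured (source, destination, RTT) triples. The toy 1-nearest-neighbour estimator and all names below are hypothetical stand-ins, not the paper's method.

```python
import ipaddress


def ip_features(ip: str) -> list[float]:
    # Split an IPv4 address into its four octets, scaled to [0, 1].
    return [b / 255.0 for b in ipaddress.ip_address(ip).packed]


class KNNLatencyEstimator:
    """Toy 1-nearest-neighbour RTT estimator over (src, dst) IP pairs.

    Illustrative only: a real system would use a learned model and
    far richer features than raw address octets.
    """

    def __init__(self) -> None:
        self.samples: list[tuple[list[float], float]] = []

    def fit(self, pairs: list[tuple[str, str]], rtts_ms: list[float]) -> None:
        # Store one feature vector per measured (src, dst) pair.
        self.samples = [
            (ip_features(src) + ip_features(dst), rtt)
            for (src, dst), rtt in zip(pairs, rtts_ms)
        ]

    def predict(self, src: str, dst: str) -> float:
        # Return the RTT of the closest training pair in feature space.
        query = ip_features(src) + ip_features(dst)
        _, rtt = min(
            self.samples,
            key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], query)),
        )
        return rtt


# Usage with made-up measurements: a query pair sharing prefixes with a
# training pair inherits that pair's RTT.
est = KNNLatencyEstimator()
est.fit([("10.0.0.1", "10.0.1.1"), ("192.168.0.1", "10.0.0.2")], [12.0, 48.0])
print(est.predict("10.0.0.2", "10.0.1.2"))  # → 12.0
```

The nearest-neighbour lookup exploits the same intuition the abstract relies on: addresses that share long prefixes often belong to nearby networks, so their measured RTTs are informative about unmeasured pairs.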