The ever-increasing demands of high-performance visual and accelerated computing have resulted in GPUs becoming some of the most complex ASICs built today. The last few years have also seen an explosion in demand for unique silicon designs serving varied markets such as gaming, HPC, healthcare, smart cities, robotics, and automotive. Process scaling has been an important factor in delivering continuous performance gains over the decades. Some of these designs push the limits of current chip manufacturing technology, growing to 80B transistors and beyond. Furthermore, these new designs are implemented with innovative new physical-design methods and are accelerated to reach the market at a staggering pace. Delivering outgoing quality in such an expeditious development environment presents unique test challenges related to test time, cost, power, advanced defectivity, and diagnosability, to name a few.

Current industry practice for structural testing of SOCs requires expensive automatic test equipment (ATE). As chip sizes grow exponentially, the demand for memory per IO increases in order to meet low DPPM (defective-parts-per-million) requirements. In most cases, neither are additional IOs available for test, nor does the speed of test paths through the IOs improve. With 2.5D and 3D chips becoming more mainstream, the IOs available for test have decreased further. Scan compression schemes come at the cost of poor diagnosability or vendor-specific design customizations, thereby increasing chip costs. Together, these factors cause test cost to rise drastically. As chip volumes grow and 3D integration gains adoption, it is critical to catch defects as early in the test flow as possible, given the cost of waste that would otherwise accumulate. This makes ATE to system-level test (SLT) correlation a key factor, and it has historically been a major challenge for test.
In the small fraction of cases where it is possible, running structural tests in an SLT environment is very expensive, which makes the process impractical. Universal EDA solutions for bridging the ATE-SLT correlation gap either do not exist or are not practical.

As a new application in the DFX design space, the automotive and high-performance computing (HPC) markets require periodic in-field testing for safety and reliability. Structural tests provide extremely high coverage compared to functional patterns and are best suited to satisfy the requirements of these segments. Existing schemes for in-field structural testing are limited by long run times and/or low coverage. Additionally, the application of structural patterns needs to be extremely secure to protect confidential assets, which makes the problem even harder. With the advent of sub-5nm transistor technologies, standard solutions that only look at tests’ pass/fail results are not sufficient to catch marginal defects that are hard to detect. In the era of machine learning, we need to look beyond test results, and beyond what human experts can see, into anomalous chip parameters in a sea of complex data.

The DFX design space is expanding and rapidly changing, and it is not yet fully serviced by existing EDA solutions. Hence, current requirements push us to innovate across the spectrum of DFX architectures. This talk will provide a summary of various ideas used at NVIDIA to tackle these challenges: using functional high-speed interfaces for test-data transfer, portable test solutions, programmable test architectures, improved design-for-debug features, emulation for DFX, and using machine learning to solve DFX problems.