At the heart of a statistical analysis, we are interested in drawing conclusions about random variables and the laws they follow. For this we require a sample; our approach is therefore best described as learning from data. In many instances, we already have an intuition about the generating process, so that the space of all possible models reduces to a specific class defined up to a set of unknown parameters. Consequently, learning becomes the task of inferring these parameters from observations. Within this scope, the thesis answers the following two questions.

Why are invariances needed? Among all parameters of a model, we often distinguish between those of interest and the so-called nuisance parameters. The latter carry no meaning for our purposes, but may still play a crucial role in how the model accommodates the parameters of interest. This is a fundamental problem in statistics, commonly addressed by finding suitable transformations under which the model becomes invariant to unidentifiable properties. Often, the application at hand already dictates the necessary requirements: a Euclidean distance matrix, for example, carries no information about translations of the underlying coordinate system (a property made concrete in the sketch at the end of this section).

Why Gaussian models? The normal distribution constitutes an important class in statistics due to its frequent occurrence in nature, making it highly relevant to many research disciplines, including physics, astronomy, and engineering, but also psychology and the social sciences. Besides fundamental results like the central limit theorem, a significant part of its appeal is rooted in convenient mathematical properties which permit closed-form solutions to numerous problems; one standard example is given below. In this work, we develop and discuss generalizations of three established models: the Gaussian mixture model, the Gaussian graphical model, and the Gaussian information bottleneck. All of these are analytically convenient, but they suffer from strict normality requirements which severely limit their range of application. Our focus is therefore to explore solutions that relax these restrictions. We show that, with the addition of invariances, these models gain substantially in generality while retaining the core concepts of their Gaussian foundation.
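As an illustration of the closed-form convenience mentioned above, consider one standard result (not specific to this thesis): conditioning a jointly normal vector again yields a normal distribution with explicit parameters, a property that models such as the Gaussian information bottleneck exploit. With the usual block notation for the mean and covariance,

\[
\begin{pmatrix} x \\ y \end{pmatrix}
\sim \mathcal{N}\!\left(
\begin{pmatrix} \mu_x \\ \mu_y \end{pmatrix},
\begin{pmatrix} \Sigma_{xx} & \Sigma_{xy} \\ \Sigma_{yx} & \Sigma_{yy} \end{pmatrix}
\right)
\;\Longrightarrow\;
x \mid y \sim \mathcal{N}\!\left(
\mu_x + \Sigma_{xy}\Sigma_{yy}^{-1}(y - \mu_y),\;
\Sigma_{xx} - \Sigma_{xy}\Sigma_{yy}^{-1}\Sigma_{yx}
\right).
\]

No numerical optimization is needed: both the conditional mean and the conditional covariance are explicit functions of the joint parameters.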
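Likewise, the translation invariance of a Euclidean distance matrix can be verified directly. The following minimal Python sketch is illustrative only (the variable names and helper function are our own, not taken from the thesis): it shifts every point by the same arbitrary vector and confirms that the pairwise distances are unchanged, so the absolute position of the coordinate system acts as an unidentifiable nuisance parameter.

import numpy as np

rng = np.random.default_rng(0)

def distance_matrix(X):
    """Pairwise Euclidean distances between the rows of X."""
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

# Hypothetical data: five points in three-dimensional space.
X = rng.standard_normal((5, 3))

# Translate every point by the same arbitrary vector.
t = rng.standard_normal(3)
X_shifted = X + t

# The distance matrix is unchanged: translations are invisible
# to the distances and hence unidentifiable from them.
assert np.allclose(distance_matrix(X), distance_matrix(X_shifted))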