Though ubiquitous in applications, global optimization problems are generally among the most computationally intensive, as their solution time grows exponentially with a linear increase in dimension (the well-known 'curse of dimensionality'). In this thesis, we show that these scalability, and sometimes even tractability, challenges can be overcome in the global optimization of functions with low effective dimensionality, which are constant along an (unknown) linear subspace and vary only over the effective (complement) subspace. Such functions often arise in applications, for example, in hyper-parameter optimization for neural networks, in heuristic algorithms for combinatorial optimization problems and in complex engineering simulations. We consider using random embeddings to reduce the problem's dimension and search space while attempting to preserve optimal values. In particular, we investigate two randomly reduced (sub)problem formulations that address the unconstrained and bound-constrained cases of the global optimization problem, respectively.

In our REGO formulation for unconstrained global optimization, a Gaussian random, low-dimensional problem with bound constraints is formulated and solved in a reduced space by means of a(ny) global optimization algorithm. We prove novel probabilistic bounds for the success of REGO in solving the original, low effective-dimensionality unconstrained problem, which show its independence of the (potentially large) ambient dimension and its precise dependence on the dimensions of the effective and random embedding subspaces. These results significantly improve upon existing theoretical analyses by providing the exact distribution of a reduced minimizer and of its Euclidean norm, and by requiring only general assumptions on the problem. We validate our theoretical findings by extensive numerical testing of REGO with three types of global optimization solvers, illustrating its improved scalability compared to the full-dimensional application of the respective solvers.

For the bound-constrained global optimization problem with special structure, we investigate a reduced subproblem formulation that solves the original problem over a Gaussian random low-dimensional subspace subject to affine constraints, so as to preserve feasibility with respect to the given domain. Under reasonable assumptions, we show that the probability that the reduced problem succeeds in solving the original, full-dimensional problem is positive. Furthermore, in the case when the objective's effective subspace is aligned with the coordinate axes, we provide an asymptotic bound on this success probability that captures its algebraic dependence on the effective and, surprisingly, ambient dimensions. We then propose X-REGO, a generic algorithmic framework that uses multiple random embeddings, solving the above reduced problem repeatedly, approximately and, possibly, adaptively. Using the success probability of the reduced subproblems, we prove that X-REGO converges globally, with probability one, and linearly in the number of embeddings, to a neighbourhood of a constrained global minimizer. Our numerical experiments on functions with special structure illustrate our theoretical findings and the improved scalability of X-REGO variants when coupled with state-of-the-art global (and even local) optimization solvers for the subproblems.
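For concreteness, the two reduced formulations can be sketched as follows, with illustrative notation (fixed precisely in the thesis): $f \colon \mathbb{R}^D \to \mathbb{R}$ denotes the objective, $A$ a $D \times d$ Gaussian random matrix with $d \ll D$, $\mathcal{Y} \subset \mathbb{R}^d$ a box of bound constraints, $\mathcal{X} \subset \mathbb{R}^D$ the given feasible domain and $p \in \mathcal{X}$ a given anchor point. The REGO subproblem reads
\[
\min_{y \in \mathcal{Y}} \, f(Ay),
\]
while the constrained reduced subproblem underlying X-REGO takes the form
\[
\min_{y \in \mathbb{R}^d} \, f(Ay + p) \quad \text{subject to} \quad Ay + p \in \mathcal{X}.
\]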
To improve the scalability of solving generic global optimization problems, which may not possess low effective dimensionality, we extend the above random embeddings framework to this setting. For Lipschitz continuous objectives, we develop a novel analysis that estimates the probability of success of the feasible randomly reduced subproblems, drawing on connections to the field of conic integral geometry. To assess the quality of our bound, we compare it, in an asymptotic regime, to the success probability of uniform sampling. Finally, again using our success probability bound, we establish that the X-REGO algorithmic framework, applied to the generic bound-constrained global optimization problem, converges with probability one, and linearly in the number of embeddings, to a neighbourhood of a constrained global minimizer.
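As a rough illustration of the X-REGO loop, the following is a minimal sketch, not the thesis implementation: the affine feasibility constraint $Ay + p \in \mathcal{X}$ is handled crudely by projecting onto the box, a local solver stands in for the approximate (global) subproblem solver, and all names below are hypothetical.

    import numpy as np
    from scipy.optimize import minimize

    def x_rego(f, D, d, lo, hi, n_embeddings=100, seed=0):
        # Minimal X-REGO-style sketch: draw a fresh Gaussian embedding, solve
        # the reduced subproblem approximately, and re-anchor at the best point
        # found so far (the 'adaptive' variant mentioned above).
        rng = np.random.default_rng(seed)
        p = np.full(D, (lo + hi) / 2.0)          # anchor inside the box [lo, hi]^D
        best_x, best_f = p, f(p)
        for _ in range(n_embeddings):
            A = rng.standard_normal((D, d))      # Gaussian random embedding
            # Projection onto the box stands in for the constraint A y + p in X.
            g = lambda y: f(np.clip(A @ y + p, lo, hi))
            res = minimize(g, np.zeros(d), method="Nelder-Mead")  # a(ny) solver
            x = np.clip(A @ res.x + p, lo, hi)
            if f(x) < best_f:
                best_x, best_f = x, f(x)
            p = best_x                           # adaptive re-anchoring
        return best_x, best_f

    # Example: a 100-dimensional objective with effective dimensionality 2.
    f = lambda x: (x[0] - 0.3) ** 2 + (x[1] + 0.2) ** 2
    print(x_rego(f, D=100, d=2, lo=-1.0, hi=1.0, n_embeddings=20))

Re-drawing $A$ at each iteration is what drives the convergence result: if each embedding succeeds with positive probability, then the probability that all $k$ embeddings fail decays geometrically in $k$, which is the linear convergence in the number of embeddings stated above.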