Distributed optimization methods allow us to decompose an optimization problem into smaller, more manageable subproblems that are solved in parallel. For this reason, they are widely used to solve large-scale problems arising in areas as diverse as wireless communications, optimal control, machine learning, artificial intelligence, computational biology, finance, and statistics, to name a few. Moreover, distributed algorithms avoid the cost and fragility associated with centralized coordination, and provide better privacy for the autonomous decision makers. These are desirable properties, especially in applications involving networked robotics, communication or sensor networks, and power distribution systems.

In this thesis we propose the Accelerated Distributed Augmented Lagrangians (ADAL) algorithm, a novel decomposition method for convex optimization problems with a certain separability structure. The method is based on the augmented Lagrangian framework and addresses problems that involve multiple agents optimizing a separable convex objective function subject to convex local constraints and linear coupling constraints. We establish the convergence of ADAL and also show that it has a worst-case O(1/k) convergence rate, where k denotes the number of iterations.

Moreover, we show that ADAL converges to a local minimum of the problem for cases with non-convex objective functions. This is the first published work that formally establishes the convergence of a distributed augmented Lagrangian method for non-convex optimization problems. An alternative way to select the stepsizes used in the algorithm is also discussed.
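The separability structure that ADAL targets can be written in a standard form common to distributed augmented Lagrangian methods (the notation here is illustrative, not necessarily the thesis's own):

```latex
\begin{align*}
\min_{x_1,\dots,x_N} \quad & \sum_{i=1}^{N} f_i(x_i) \\
\text{s.t.} \quad & \sum_{i=1}^{N} A_i x_i = b, \\
& x_i \in X_i, \quad i = 1,\dots,N,
\end{align*}
```

where each agent $i$ controls its local variable $x_i$, $f_i$ is its convex objective, $X_i$ is its convex local constraint set, and the linear equality couples the agents' decisions.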
These two contributions are independent from each other, meaning that convergence of the non-convex ADAL method can still be shown using the stepsizes from the convex case and, similarly, convergence of the convex ADAL method can be shown using the stepsizes proposed in the non-convex proof.

Furthermore, we consider cases where the distributed algorithm needs to operate in the presence of uncertainty and noise, and show that the generated sequences of primal and dual variables converge to their respective optimal sets almost surely. In particular, we are concerned with scenarios where: i) the local computation steps are inexact or are performed in the presence of uncertainty, and ii) the message exchanges between agents are corrupted by noise. In this case, the proposed scheme can be classified as a distributed stochastic approximation method. Compared to the existing literature in this area, our work is the first that utilizes the augmented Lagrangian framework. Moreover, the method allows us to solve a richer class of problems compared to existing methods on distributed stochastic approximation, which consider only consensus constraints.

Extensive numerical experiments have been carried out to validate the novelty and effectiveness of the proposed method in all the areas of the aforementioned theoretical contributions. We examine problems in convex, non-convex, and stochastic settings where uncertainties and noise affect the execution of the algorithm. For the convex cases, we present applications of ADAL to certain popular network optimization problems, as well as to a two-stage stochastic optimization problem. The simulation results suggest that the proposed method outperforms the state-of-the-art distributed augmented Lagrangian methods that are known in the literature.
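To make the framework concrete, here is a minimal, self-contained sketch of a Jacobi-type distributed augmented Lagrangian iteration on a toy two-agent problem. The update rules, the penalty parameter, and the stepsize choice are only illustrative of this family of methods; they are not taken verbatim from the thesis.

```python
# Toy Jacobi-type distributed augmented Lagrangian iteration (a sketch
# in the spirit of ADAL, NOT the thesis's exact update rules) for:
#   minimize (x1 - 1)^2 + (x2 - 2)^2   subject to   x1 + x2 = 2,
# whose solution is x1 = 0.5, x2 = 1.5 with multiplier lam = 1.

rho = 1.0   # penalty parameter of the augmented Lagrangian
tau = 0.5   # damping stepsize; here 1/q with q = 2 agents per constraint
x1, x2, lam = 0.0, 0.0, 0.0

for k in range(500):
    # Each agent minimizes its local augmented Lagrangian in parallel,
    # holding the other agent's variable fixed at its previous value.
    # The local subproblems are quadratic, so they have closed forms:
    x1_hat = (2.0 - lam + rho * (2.0 - x2)) / (2.0 + rho)
    x2_hat = (4.0 - lam + rho * (2.0 - x1)) / (2.0 + rho)

    # Damped primal update, then dual ascent on the coupling residual.
    # (In the stochastic setting studied in the thesis, noise and
    # inexactness would enter here: in the local minimizations and in
    # the exchanged values of x1, x2, and lam.)
    x1 = x1 + tau * (x1_hat - x1)
    x2 = x2 + tau * (x2_hat - x2)
    lam = lam + rho * tau * (x1 + x2 - 2.0)

print(round(x1, 4), round(x2, 4), round(lam, 4))  # prints 0.5 1.5 1.0
```

The iterates converge geometrically on this toy instance; the damping by tau is what permits the fully parallel (Jacobi) local minimizations, in contrast to the sequential sweeps of Gauss-Seidel-style schemes.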
For the non-convex cases, we perform simulations on certain simple non-convex problems to establish that ADAL indeed converges to non-trivial local solutions of the problems; in comparison, the straightforward implementation of the other distributed augmented Lagrangian methods on the same problems does not lead to convergence. For the stochastic setting, we present simulation results of ADAL applied to network optimization problems and examine the effect that noise and uncertainties have on the convergence behavior of the method.

As an extended and more involved application, we also consider the problem of relay cooperative beamforming in wireless communication systems. Specifically, we study the scenario of a multi-cluster network, in which each cluster contains multiple single-antenna source-destination pairs that communicate simultaneously over the same channel. The communications are supported by cooperating amplify-and-forward relays, which perform beamforming. Since the emerging problem is non-convex, we propose an approximate convex reformulation. Based on ADAL, we also discuss two different ways to obtain a distributed solution that allows for autonomous computation of the optimal beamforming decisions by each cluster, while taking into account intra- and inter-cluster interference effects.

Our goal in this thesis is to advance the state-of-the-art in distributed optimization by proposing methods that combine fast convergence, wide applicability, ease of implementation, and low computational complexity, and that are robust with respect to delays, uncertainty in the problem parameters, noise corruption in the message exchanges, and inexact computations.