This paper studies a penalization-gradient algorithm for solving variational inequalities, namely, find x̅∈C such that 〈Ax̅,y-x̅〉≥0 for all y∈C, where A:H→H is a single-valued operator and C is a closed convex subset of a real Hilbert space H. Given Ψ:H→ℝ∪{+∞}, which acts as a penalization function with respect to the constraint x̅∈C, and a penalization parameter βₖ, we consider an algorithm that alternates a proximal step with respect to ∂Ψ and a gradient step with respect to A, and reads as xₖ = (I + λₖβₖ∂Ψ)⁻¹(xₖ₋₁ − λₖAxₖ₋₁). Under mild hypotheses, we obtain weak convergence for an inverse strongly monotone operator and strong convergence for a Lipschitz continuous and strongly monotone operator. Applications to hierarchical minimization and fixed-point problems are also given, and the multivalued case is reached by replacing the multivalued operator by its Yosida approximation, which is always Lipschitz continuous.
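The alternating proximal/gradient iteration described above can be sketched on a toy instance. The choices below are illustrative, not the paper's exact setting: H = ℝ², C the nonnegative orthant, A(x) = x − a (Lipschitz and strongly monotone), and the penalty Ψ(x) = ½ dist(x, C)², whose proximal map has the closed form prox_{tΨ}(z) = (z + t·P_C(z))/(1 + t). The step size λ and the growing penalization parameters βₖ = k are hypothetical choices, not the paper's stated conditions.

```python
def project_C(z):
    """Projection onto the nonnegative orthant C (our illustrative constraint set)."""
    return [max(zi, 0.0) for zi in z]

def prox_penalty(z, t):
    """prox of t * (1/2) dist(., C)^2: moves z a fraction t/(1+t) toward P_C(z)."""
    p = project_C(z)
    return [(zi + t * pi) / (1.0 + t) for zi, pi in zip(z, p)]

def penalization_gradient(a, lam=0.5, iters=200):
    """Iterate x_k = prox_{lam*beta_k*Psi}(x_{k-1} - lam * A(x_{k-1})) with A(x) = x - a."""
    x = [0.0] * len(a)
    for k in range(1, iters + 1):
        beta_k = float(k)  # penalization parameter grows, enforcing the constraint in the limit
        z = [xi - lam * (xi - ai) for xi, ai in zip(x, a)]  # gradient step: x - lam * A(x)
        x = prox_penalty(z, lam * beta_k)                    # proximal step w.r.t. the penalty
    return x

# For A(x) = x - a, the VI solution on C is the projection of a onto C:
x_bar = penalization_gradient([1.0, -2.0])  # approaches [1, 0]
```

Note that the constrained coordinate approaches its limit only at rate O(1/βₖ), which is why growing penalization parameters (βₖ → ∞) are needed for the iterates to reach the constraint set in the limit.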