Server farms generally consume an enormous amount of energy, which not only increases their running costs but also increases their greenhouse gas emissions. One way to improve the energy efficiency of server farms is to dynamically power servers on and off. However, this method suffers from the long setup times required to turn servers back on, which negatively affect job response times and waste additional energy. The situation is further exacerbated by server unreliability, which has become the norm in today's server farms. In this paper, we investigate the impact of dynamically powering servers on and off on energy consumption and performance in a typical server farm environment. Prior work has analyzed similar models for a single server, but no analytical results are known for multiserver systems. We use the matrix-geometric method to analyze this model and derive explicit, computable expressions for the system performance measures. An energy-performance trade-off model is developed to determine the optimal management policy for the server farm. Finally, we discuss several extensions of our model to demonstrate its robustness and to point out avenues for future research. Numerical examples are provided throughout the paper to illustrate the correctness of our analytical results and to validate the optimization approach. Copyright © 2014 John Wiley & Sons, Ltd.