Human interactive proofs (HIPs) are a basic security measure on the Internet to prevent automated attacks. There is an ongoing effort to find a HIP that is secure enough yet easy for humans to solve. Recently, a new HIP was designed with higher security in mind: the Civil Rights CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart). It employs the human capacity for empathy to further strengthen Securimage, a well-known text CAPTCHA. In this paper, we analyze it from a security perspective and find fundamental design flaws. Using several well-known machine learning (ML) algorithms, we measure the extent to which these flaws affect its security. We find that they enable a successful side-channel attack: rather than solving the challenge directly, the attack exploits information leaked by the design. The attack correctly solves the HIP in 20.7% of cases, far more than enough to consider it broken. Thus, we show that there is no need to solve the problems of optical character recognition or empathy analysis for computers to break this particular HIP; ML can break a HIP that relies on both by means of a side-channel attack. This security analysis can be applied to other HIPs to test whether non-evident design flaws cause them to leak too much information through unexpected channels. Copyright © 2015 John Wiley & Sons, Ltd.