With the emergence of OpenAI's ChatGPT, generative artificial intelligence has demonstrated profound development potential. However, the unclear technological and ethical standards arising from its algorithmic rules have ignited intense global discussion and apprehension. "Being responsible" has therefore become a pivotal demand in its development. Nonetheless, the existing literature on regulating and governing artificial intelligence from a "responsible" perspective lags behind the technological advances themselves. In particular, there is a lack of in-depth research into the triggering mechanisms of the ethical crisis of AIGC (AI-generated content) technology, as well as into the formulation of "responsible" AI guidelines and regulatory frameworks. This article primarily analyzes policies and data released by the Artificial Intelligence and Economic Society Research Center of the China Academy of Information and Communications Technology, the official websites of OpenAI, the European Commission, and other sources. Drawing on the latest research on "responsible" artificial intelligence published in the journal Nature, it surveys the legislative efforts of governments worldwide concerning "responsibility" in artificial intelligence, scholarly research on the topic, and the latest governance practices implemented by AI enterprises. On this basis, it investigates the current state of "responsible" artificial intelligence, its regulatory domains, and the governance measures adopted. The research reveals that the ethical crisis of "unintelligibility, uncontrollability, unreliability, and unsustainability" triggered by AIGC is not only harming humanity but also hindering the advancement of AI technology itself.
Thus, it is imperative that humans regulate and govern artificial intelligence responsibly for the benefit of humanity. A basic consensus now exists among governments, academia, and AI enterprises worldwide on developing responsible artificial intelligence, namely to build a practical foundation for applying AI responsibly in the service of humanity. Compared with previous literature, this article extends the discourse in two main respects. First, it explores the triggering mechanisms of the AIGC technology ethical crisis: low transparency in algorithmic operation leads to "unintelligibility"; a lack of subjective goodwill triggers "uncontrollability"; AI hallucinations cause "unreliability" in security; and the absence of established human-machine partnerships invites "unsustainability". Second, in response to this fourfold ethical crisis, the article proposes targeted approaches for academia, government, AI enterprises, and users, respectively: promoting academic research on interpretability from a "technological responsibility" perspective; ethical agreements and regulations from a "social responsibility" perspective; enterprise self-governance from a "user responsibility" perspective; and enhancing algorithmic literacy from a "self-responsibility" perspective. The core contribution of this article is to unveil the triggering mechanisms underlying the AIGC technology ethical crisis and to propose targeted governance strategies. In particular, its detailed analyses of foreign legislation on generative artificial intelligence and of AI enterprise self-governance may offer useful insights for China's legislation and corporate self-regulation in this field.
Additionally, this study of "responsible" artificial intelligence aims to stimulate academic exchange and welcome intellectual debate, while humbly seeking guidance from experts in the field.