20 results for "Blili-Hamelin, Borhane"
Search Results
2. Introducing v0.5 of the AI Safety Benchmark from MLCommons
- Author
Vidgen, Bertie, Agrawal, Adarsh, Ahmed, Ahmed M., Akinwande, Victor, Al-Nuaimi, Namir, Alfaraj, Najla, Alhajjar, Elie, Aroyo, Lora, Bavalatti, Trupti, Bartolo, Max, Blili-Hamelin, Borhane, Bollacker, Kurt, Bommasani, Rishi, Boston, Marisa Ferrara, Campos, Siméon, Chakra, Kal, Chen, Canyu, Coleman, Cody, Coudert, Zacharie Delpierre, Derczynski, Leon, Dutta, Debojyoti, Eisenberg, Ian, Ezick, James, Frase, Heather, Fuller, Brian, Gandikota, Ram, Gangavarapu, Agasthya, Gangavarapu, Ananya, Gealy, James, Ghosh, Rajat, Goel, James, Gohar, Usman, Goswami, Sujata, Hale, Scott A., Hutiri, Wiebke, Imperial, Joseph Marvin, Jandial, Surgan, Judd, Nick, Juefei-Xu, Felix, Khomh, Foutse, Kailkhura, Bhavya, Kirk, Hannah Rose, Klyman, Kevin, Knotz, Chris, Kuchnik, Michael, Kumar, Shachi H., Kumar, Srijan, Lengerich, Chris, Li, Bo, Liao, Zeyi, Long, Eileen Peters, Lu, Victor, Luger, Sarah, Mai, Yifan, Mammen, Priyanka Mary, Manyeki, Kelvin, McGregor, Sean, Mehta, Virendra, Mohammed, Shafee, Moss, Emanuel, Nachman, Lama, Naganna, Dinesh Jinenhally, Nikanjam, Amin, Nushi, Besmira, Oala, Luis, Orr, Iftach, Parrish, Alicia, Patlak, Cigdem, Pietri, William, Poursabzi-Sangdeh, Forough, Presani, Eleonora, Puletti, Fabrizio, Röttger, Paul, Sahay, Saurav, Santos, Tim, Scherrer, Nino, Sebag, Alice Schoenauer, Schramowski, Patrick, Shahbazi, Abolfazl, Sharma, Vin, Shen, Xudong, Sistla, Vamsi, Tang, Leonard, Testuggine, Davide, Thangarasa, Vithursan, Watkins, Elizabeth Anne, Weiss, Rebecca, Welty, Chris, Wilbers, Tyler, Williams, Adina, Wu, Carole-Jean, Yadav, Poonam, Yang, Xianjun, Zeng, Yi, Zhang, Wenhui, Zhdanov, Fedor, Zhu, Jiacheng, Liang, Percy, Mattson, Peter, and Vanschoren, Joaquin
- Subjects
Computer Science - Computation and Language, Computer Science - Artificial Intelligence
- Abstract
This paper introduces v0.5 of the AI Safety Benchmark, which has been created by the MLCommons AI Safety Working Group. The AI Safety Benchmark has been designed to assess the safety risks of AI systems that use chat-tuned language models. We introduce a principled approach to specifying and constructing the benchmark, which for v0.5 covers only a single use case (an adult chatting to a general-purpose assistant in English), and a limited set of personas (i.e., typical users, malicious users, and vulnerable users). We created a new taxonomy of 13 hazard categories, of which 7 have tests in the v0.5 benchmark. We plan to release version 1.0 of the AI Safety Benchmark by the end of 2024. The v1.0 benchmark will provide meaningful insights into the safety of AI systems. However, the v0.5 benchmark should not be used to assess the safety of AI systems. We have sought to fully document the limitations, flaws, and challenges of v0.5. This release of v0.5 of the AI Safety Benchmark includes (1) a principled approach to specifying and constructing the benchmark, which comprises use cases, types of systems under test (SUTs), language and context, personas, tests, and test items; (2) a taxonomy of 13 hazard categories with definitions and subcategories; (3) tests for seven of the hazard categories, each comprising a unique set of test items, i.e., prompts. There are 43,090 test items in total, which we created with templates; (4) a grading system for AI systems against the benchmark; (5) an openly available platform, and downloadable tool, called ModelBench that can be used to evaluate the safety of AI systems on the benchmark; (6) an example evaluation report which benchmarks the performance of over a dozen openly available chat-tuned language models; (7) a test specification for the benchmark.
- Published
- 2024
3. A Safe Harbor for AI Evaluation and Red Teaming
- Author
Longpre, Shayne, Kapoor, Sayash, Klyman, Kevin, Ramaswami, Ashwin, Bommasani, Rishi, Blili-Hamelin, Borhane, Huang, Yangsibo, Skowron, Aviya, Yong, Zheng-Xin, Kotha, Suhas, Zeng, Yi, Shi, Weiyan, Yang, Xianjun, Southen, Reid, Robey, Alexander, Chao, Patrick, Yang, Diyi, Jia, Ruoxi, Kang, Daniel, Pentland, Sandy, Narayanan, Arvind, Liang, Percy, and Henderson, Peter
- Subjects
Computer Science - Artificial Intelligence
- Abstract
Independent evaluation and red teaming are critical for identifying the risks posed by generative AI systems. However, the terms of service and enforcement strategies used by prominent AI companies to deter model misuse disincentivize good faith safety evaluations. This causes some researchers to fear that conducting such research or releasing their findings will result in account suspensions or legal reprisal. Although some companies offer researcher access programs, they are an inadequate substitute for independent research access, as they have limited community representation, receive inadequate funding, and lack independence from corporate incentives. We propose that major AI developers commit to providing a legal and technical safe harbor, indemnifying public interest safety research and protecting it from the threat of account suspensions or legal reprisal. These proposals emerged from our collective experience conducting safety, privacy, and trustworthiness research on generative AI systems, where norms and incentives could be better aligned with public interests, without exacerbating model misuse. We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
- Published
- 2024
4. Evolving AI Risk Management: A Maturity Model based on the NIST AI Risk Management Framework
- Author
Dotan, Ravit, Blili-Hamelin, Borhane, Madhavan, Ravi, Matthews, Jeanna, and Scarpino, Joshua
- Subjects
Computer Science - Computers and Society
- Abstract
Researchers, government bodies, and organizations have repeatedly called for a shift in the responsible AI community from general principles to tangible, operationalizable practices for mitigating the potential sociotechnical harms of AI. Frameworks like the NIST AI RMF embody an emerging consensus on recommended practices for operationalizing sociotechnical harm mitigation. However, private sector organizations currently lag far behind this emerging consensus. Implementation is sporadic and selective at best. At worst, it is ineffective and can risk serving as a misleading veneer of trustworthy processes, providing an appearance of legitimacy to substantively harmful practices. In this paper, we provide a foundation for a framework for evaluating where organizations sit relative to the emerging consensus on sociotechnical harm mitigation best practices: a flexible maturity model based on the NIST AI RMF.
- Published
- 2024
5. A Framework for Assurance Audits of Algorithmic Systems
- Author
Lam, Khoa, Lange, Benjamin, Blili-Hamelin, Borhane, Davidovic, Jovana, Brown, Shea, and Hasan, Ali
- Subjects
Computer Science - Computers and Society
- Abstract
An increasing number of regulations propose AI audits as a mechanism for achieving transparency and accountability for artificial intelligence (AI) systems. Despite some converging norms around various forms of AI auditing, auditing for the purpose of compliance and assurance currently lacks agreed-upon practices, procedures, taxonomies, and standards. We propose the criterion audit as an operationalizable compliance and assurance external audit framework. We model elements of this approach after financial auditing practices, and argue that AI audits should similarly provide assurance to their stakeholders about AI organizations' ability to govern their algorithms in ways that mitigate harms and uphold human values. We discuss the necessary conditions for the criterion audit and provide a procedural blueprint for performing an audit engagement in practice. We illustrate how this framework can be adapted to current regulations by deriving the criteria on which bias audits can be performed for in-scope hiring algorithms, as required by the recently effective New York City Local Law 144 of 2021. We conclude by offering a critical discussion on the benefits, inherent limitations, and implementation challenges of applying practices of the more mature financial auditing industry to AI auditing where robust guardrails against quality assurance issues are only starting to emerge. Our discussion -- informed by experiences in performing these audits in practice -- highlights the critical role that an audit ecosystem plays in ensuring the effectiveness of audits.
- Published
- 2024
- Full Text
- View/download PDF
6. Unsocial Intelligence: an Investigation of the Assumptions of AGI Discourse
- Author
Blili-Hamelin, Borhane, Hancox-Li, Leif, and Smart, Andrew
- Subjects
Computer Science - Computers and Society
- Abstract
Dreams of machines rivaling human intelligence have shaped the field of AI since its inception. Yet, the very meaning of human-level AI or artificial general intelligence (AGI) remains elusive and contested. Definitions of AGI embrace a diverse range of incompatible values and assumptions. Contending with the fractured worldviews of AGI discourse is vital for critiques that pursue different values and futures. To that end, we provide a taxonomy of AGI definitions, laying the ground for examining the key social, political, and ethical assumptions they make. We highlight instances in which these definitions frame AGI or human-level AI as a technical topic and expose the value-laden choices being implicitly made. Drawing on feminist, STS, and social science scholarship on the political and social character of intelligence in both humans and machines, we propose contextual, democratic, and participatory paths to imagining future forms of machine intelligence. The development of future forms of AI must involve explicit attention to the values it encodes, the people it includes or excludes, and a commitment to epistemic justice.
- Published
- 2024
7. Making Intelligence: Ethical Values in IQ and ML Benchmarks
- Author
Blili-Hamelin, Borhane and Hancox-Li, Leif
- Subjects
Computer Science - Machine Learning, Computer Science - Computers and Society
- Abstract
In recent years, ML researchers have wrestled with defining and improving machine learning (ML) benchmarks and datasets. In parallel, some have trained a critical lens on the ethics of dataset creation and ML research. In this position paper, we highlight the entanglement of ethics with seemingly "technical" or "scientific" decisions about the design of ML benchmarks. Our starting point is the existence of multiple overlooked structural similarities between human intelligence benchmarks and ML benchmarks. Both types of benchmarks set standards for describing, evaluating, and comparing performance on tasks relevant to intelligence -- standards that many scholars of human intelligence have long recognized as value-laden. We use perspectives from feminist philosophy of science on IQ benchmarks and thick concepts in social science to argue that values need to be considered and documented when creating ML benchmarks. It is neither possible nor desirable to avoid this choice by creating value-neutral benchmarks. Finally, we outline practical recommendations for ML benchmark research ethics and ethics review. Comment: FAccT 2023, June 12 to 15, 2023, Chicago, IL, USA
- Published
- 2022
- Full Text
- View/download PDF
8. A Framework for Assurance Audits of Algorithmic Systems
- Author
Lam, Khoa, Lange, Benjamin, Blili-Hamelin, Borhane, Davidovic, Jovana, Brown, Shea, and Hasan, Ali
- Published
- 2024
- Full Text
- View/download PDF
9. Unsocial Society
- Author
Blili-Hamelin, Borhane and Särkelä, Arvi
- Published
- 2020
- Full Text
- View/download PDF
10. Open Dataset, Leaving the Academy
- Author
Blili-Hamelin, Borhane and Duckles, Beth
- Published
- 2023
- Full Text
- View/download PDF
11. Next step of: Workshop 1, Leaving the Academy
- Author
Blili-Hamelin, Borhane and Duckles, Beth
- Published
- 2023
- Full Text
- View/download PDF
12. Next step of: Leaving the Academy: A Participatory Open Research Approach
- Author
Blili-Hamelin, Borhane and Duckles, Beth
- Published
- 2023
- Full Text
- View/download PDF
13. Workshop 3, Leaving the Academy
- Author
Blili-Hamelin, Borhane and Duckles, Beth
- Published
- 2023
- Full Text
- View/download PDF
14. Leaving The Academy
- Author
Duckles, Beth M., Blili-Hamelin, Borhane, and Bhattacharya, Mrinmoyee
- Subjects
Career transition, Academia, Post-Academic, Alt-ac
- Abstract
In this paper we report the results of an open participatory research project that surveyed 209 people with a Ph.D. who have left academia, and we share the advice they had for people who have not yet left their academic positions. We found significant themes in this advice. Respondents urged people leaving academia to shift their mindset when thinking about the transition: to reframe the choice to leave as something beneficial, and to begin the process now rather than waiting. We also heard a sense of optimism and positivity around leaving academia, along with suggestions on how to reframe one's career. People leaving academia have found a wide variety of practical strategies helpful during their career transition, including how to explore careers, how to reframe one's skills, the importance of networking and informational interviews, and the differences between job applications outside academia and the CV and cover letter used within it. Finally, we heard advice on the realities of leaving academia: frank advice about money, value, and worth, as well as a discussion of the differences between academic work and industry positions. Our hope is that these discussions of the mindset, practical strategies, and realities of leaving academia will be useful to people weighing the choice to leave and seeking honest accounts from those who have experienced the shift. This project is an experiment in participatory qualitative research. Our open dataset is available here: https://doi.org/10.5281/zenodo.8014779. We encourage anyone wishing to look at the full dataset to contact the lead authors, Beth M. Duckles and Borhane Blili-Hamelin.
- Published
- 2023
- Full Text
- View/download PDF
15. Leaving the Academy: A Participatory Open Research Approach
- Author
Blili-Hamelin, Borhane and Post Academics, Open
- Published
- 2022
- Full Text
- View/download PDF
16. Borhane Blili-Hamelin's collection
- Author
Blili-Hamelin, Borhane
- Published
- 2022
- Full Text
- View/download PDF
17. Toolkit for Cross-Disciplinary Workshops
- Author
Blili-Hamelin, Borhane, Duckles, Beth M., and Monette, Marie-Ève
- Subjects
Open knowledge, Cross-disciplinary, Open science, Open workshops
- Abstract
This toolkit shares insights and strategies from our 2021 pilot Open Problem Workshop series, in which we asked: how might we rethink the ways real-world problems call for the expertise of PhDs from all fields? For those interested in hosting a cross-disciplinary workshop, we have assembled a toolkit of nine activities and suggestions on how to facilitate them in an online, collaborative setting. We uncovered three core insights and five guideposts for building successful cross-disciplinary workshops. We believe that openness requires us to empower other communities to replicate, fork, and remix our insights. *Authors are listed in alphabetical order to denote equal co-authorship.
- Published
- 2022
- Full Text
- View/download PDF
18. Working Open: gdoc Template. Companion material for Schley, Duckles & Blili-Hamelin, 2020
- Author
Blili-Hamelin, Borhane, Schley, Sara, and Duckles, Beth
- Subjects
Flexible online pedagogy
- Abstract
Template for the gdoc model described in: Schley, S., Duckles, B., & Blili-Hamelin, B. (in press). Open Knowledge and Collaborative Documents. Journal of Faculty Development, Sept. 2020. Content also appears here: https://bit.ly/workingopen_gdoc
- Published
- 2020
- Full Text
- View/download PDF
19. Topography of the Splintered World: Hegel and the Disagreements of Right
- Author
Blili-Hamelin, Borhane
- Subjects
Philosophy, Quarreling, FOS: Philosophy, ethics and religion
- Abstract
For Hegel, serious, painful disagreement among reasonable individuals is part of the very fabric of our intellectual, moral, and social lives. Disagreement about what matters cannot be eliminated. Traditionally, this kind of interpretation is thought to be incompatible with Hegel’s epistemic and metaphysical ambitions: that reason has absolute power to explain all there is, leaving no significant question without an adequate answer. But if genuine disagreement cannot be eliminated, then at least some significant practical normative questions must remain without fully adequate answers. I develop a novel strategy for reconciling these two fundamental aspects of his approach to practical norms and values in his Philosophy of Right. Through what I call topographic explanations, Hegel takes on the task of explaining why the world is structured in such a way that (a) some significant questions necessarily remain open to painful disagreement, and that (b) the world remains a worthy home for our deepest aspirations.
- Published
- 2019
- Full Text
- View/download PDF
20. Liberté? : réflexion sur un problème dans l'éthique de Theodor Adorno
- Author
Blili-Hamelin, Borhane and Macdonald, Iain
- Subjects
Moral naturalism, Moral rationalism, Anti-Semitism, Freedom, Ethics, Critique, Action, Categorical imperative
- Abstract
Throughout Theodor Adorno's moral thought runs a paradoxical demand: that morality should be fully rooted in both the liveliest impulses and the keenest reasonings. More than a quirk among Adorno's many, this essay suggests that this problem plays a pivotal role in his ethics. The present research seeks to develop a strategy for articulating these two demands conjointly, "without any sacrifice". To this end, I expound the following hypothesis: the analysis of the problem of freedom and unfreedom set forth in the first of the three "models" of Negative Dialectics makes sense of both the bond and the disparity between the impulsive and rational constituents of Adorno's ethics. The study first focuses on the problem of unfreedom and its embodiment in the concrete phenomenon of anti-Semitism, as well as the animal fear and rage in which it is rooted. It then examines Adorno's conception of freedom in its two facets, "full theoretical consciousness" and "spontaneous impulse". It finally tries to ascertain the more general relevance of this interpretation of the problem of freedom for understanding Adorno's ethics, by making sense on this basis of his "new categorical imperative".
- Published
- 2011