17,143,189 results for "Humans"
Search Results
2. Humans need not apply : a guide to wealth and work in the age of artificial intelligence.
- Author
-
Kaplan, Jerry
- Subjects
Artificial intelligence -- Economic aspects, Artificial intelligence -- Forecasting, Artificial intelligence -- Social aspects - Abstract
Summary: Researchers are finally cracking the code on artificial intelligence. It has the potential to usher in a new age of affluence and leisure -- but as Kaplan warns, the transition may be protracted and brutal unless we address the two great scourges of the modern developed world: volatile labor markets and income inequality. He proposes innovative, free-market adjustments to our economic system and social policies to avoid an extended period of social turmoil. -- Source other than Library of Congress.
- Published
- 2015
3. Embodying Deeply Held Values in Education: Seeking a More Equitable World for Both Humans and Non-Humans
- Author
-
Jing Lin, Shue-kei Joanna Mok, and Virginia Gomes
- Abstract
In this article, we contend that the bedrock of an equitable world lies in the profound recognition of love as the fundamental force permeating the cosmos. We believe that love is built into the essence of who we are. We posit that genuine progress toward an equitable world is elusive unless we place love, both for one another and for the natural world, at the core of our educational endeavors.
- Published
- 2024
4. Enteric Pathogens in Humans, Domesticated Animals, and Drinking Water in a Low-Income Urban Area of Nairobi, Kenya.
- Author
-
Daly, Sean, Chieng, Benard, Araka, Sylvie, Mboya, John, Imali, Christine, Swarthout, Jenna, Njenga, Sammy, Pickering, Amy, and Harris, Angela
- Subjects
TaqMan Array Card; drinking water quality; host-pathogen relationship; low- and middle-income country; microbial source tracking; zoonotic pathogen; Kenya; Drinking Water; Animals; Humans; Feces; Animals, Domestic; Poverty; Escherichia coli; Water Microbiology; Dogs - Abstract
To explore the sources of and risks associated with drinking water contamination in low-income, densely populated urban areas, we collected human feces, domesticated animal feces, and source and stored drinking water samples in Nairobi, Kenya in 2019, and analyzed them using microbial source tracking (MST) and enteric pathogen TaqMan Array Cards (TACs). We established host-pathogen relationships in this setting, including detecting Shigella and Norovirus -- which are typically associated with humans -- in dog feces. We evaluated stored and source drinking water quality using indicator Escherichia coli (E. coli), MST markers, and TACs, detecting pathogen targets in drinking water that were also detected in specific animal feces. This work highlights the need for further evaluation of host-pathogen relationships and the directionality of pathogen transmission to prevent the disease burden associated with unsafe drinking water and domestic animal ownership.
- Published
- 2024
5. Neuropsychobiology of fear-induced bradycardia in humans: progress and pitfalls.
- Author
-
Battaglia, Simone, Nazzi, Claudio, Lonsdorf, Tina, and Thayer, Julian
- Subjects
Fear; Humans; Bradycardia; Heart Rate; Conditioning, Classical; Conditioning, Psychological - Abstract
In the last century, the paradigm of fear conditioning has greatly evolved across a variety of scientific fields. The techniques, protocols, and analysis methods now most used have undergone progressive theoretical and technological development, improving the quality of scientific output. Fear-induced bradycardia is among these techniques and refers to the temporary deceleration of heartbeats in response to negative outcomes. However, it has often been used as a secondary measure of defensive responding to threat, alongside other more popular techniques. In this review, we aim to pave the way for its employment as an additional tool in human fear conditioning experiments. After an overview of the studies carried out throughout the last century, we describe more recent evidence up to the most contemporary research insights. Lastly, we provide guidelines on best practices for human fear conditioning studies that aim to investigate fear-induced bradycardia.
- Published
- 2024
6. Theta phase precession supports memory formation and retrieval of naturalistic experience in humans
- Author
-
Zheng, Jie, Yebra, Mar, Schjetnan, Andrea GP, Patel, Kramay, Katz, Chaim N, Kyzar, Michael, Mosher, Clayton P, Kalia, Suneil K, Chung, Jeffrey M, Reed, Chrystal M, Valiante, Taufik A, Mamelak, Adam N, Kreiman, Gabriel, and Rutishauser, Ueli
- Subjects
Biological Psychology; Psychology; Neurosciences; Clinical Research; Mental Health; 1.2 Psychological and socioeconomic processes; 1.1 Normal biological development and functioning; Mental health; Neurological; Humans; Theta Rhythm; Mental Recall; Male; Memory, Episodic; Female; Adult; Young Adult; Temporal Lobe; Neurons; Motion Pictures; Biomedical and clinical sciences; Health sciences - Abstract
Associating different aspects of experience with discrete events is critical for human memory. A potential mechanism for linking memory components is phase precession, during which neurons fire progressively earlier in time relative to theta oscillations. However, no direct link between phase precession and memory has been established. Here we recorded single-neuron activity and local field potentials in the human medial temporal lobe while participants (n = 22) encoded and retrieved memories of movie clips. Bouts of theta and phase precession occurred following cognitive boundaries during movie watching and following stimulus onsets during memory retrieval. Phase precession was dynamic, with different neurons exhibiting precession in different task periods. Phase precession strength provided information about memory encoding and retrieval success that was complementary with firing rates. These data provide direct neural evidence for a functional role of phase precession in human episodic memory.
- Published
- 2024
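For readers unfamiliar with the measure named in the abstract above, phase precession can be illustrated with a minimal, self-contained sketch. This is not the study's analysis pipeline; the 8 Hz theta frequency, the pure-sine phase model, and the `spike_phase` helper are all illustrative assumptions. The idea: a neuron firing slightly faster than the ongoing theta rhythm produces spikes at progressively earlier theta phases.

```python
THETA_HZ = 8.0  # illustrative theta frequency; real human theta is slower and variable

def spike_phase(t, f=THETA_HZ):
    """Phase (degrees, 0-360) of a spike at time t, relative to an idealized
    pure sinusoidal theta cycle of frequency f."""
    return (t * f % 1.0) * 360.0

# A "precessing" neuron fires slightly faster than theta (8.5 Hz vs. 8 Hz),
# so each successive spike lands at an earlier phase of the theta cycle.
spike_times = [i / 8.5 for i in range(1, 6)]
phases = [spike_phase(t) for t in spike_times]

# Successive phase differences are negative: the hallmark of phase precession.
deltas = [b - a for a, b in zip(phases, phases[1:])]
print(phases)
print(all(d < 0 for d in deltas))
```

In the actual study, phases are of course measured against recorded local field potentials rather than an ideal sinusoid, and precession strength is quantified statistically.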
7. Climate, food and humans predict communities of mammals in the United States
- Author
-
Kays, Roland, Snider, Matthew H., Hess, George, Cove, Michael V., Jensen, Alex, Shamon, Hila, McShea, William J., Rooney, Brigit, Allen, Maximilian L., Pekins, Charles E., Wilmers, Christopher C., Pendergast, Mary E., Green, Austin M., Suraci, Justin, Leslie, Matthew S., Nasrallah, Sophie, Farkas, Dan, Jordan, Mark, Grigione, Melissa, LaScaleia, Michael C., Davis, Miranda L., Hansen, Chris, Millspaugh, Josh, Lewis, Jesse S., Havrda, Michael, Long, Robert, Remine, Kathryn R., Jaspers, Kodi J., Lafferty, Diana J. R., Hubbard, Tru, Studds, Colin E., Barthelmess, Erika L., Andy, Katherine, Romero, Andrea, O'Neill, Brian J., Hawkins, Melissa T. R., Lombardi, Jason V., Sergeyev, Maksim, Fisher-Reid, M. Caitlin, Rentz, Michael S., Nagy, Christopher, Davenport, Jon M., Rega-Brodsky, Christine C., Appel, Cara L., Lesmeister, Damon B., Giery, Sean T., Whittier, Christopher A., Alston, Jesse M., Sutherland, Chris, Rota, Christopher, Murphy, Thomas, Lee, Thomas E., Mortelliti, Alessio, Bergman, Dylan L., Compton, Justin A., Gerber, Brian D., Burr, Jess, Rezendes, Kylie, DeGregorio, Brett A., Wehr, Nathaniel H., Benson, John F., O'Mara, M. Teague, Jachowski, David S., Gray, Morgan, Beyer, Dean E., Belant, Jerrold L., Horan, Robert V., Lonsinger, Robert C., Kuhn, Kellie M., Hasstedt, Steven C. M., Zimova, Marketa, Moore, Sophie M., Herrera, Daniel J., Fritts, Sarah, Edelman, Andrew J., Flaherty, Elizabeth A., Petroelje, Tyler R., Neiswenter, Sean A., Risch, Derek R., Iannarilli, Fabiola, van der Merwe, Marius, Maher, Sean P., Farris, Zach J., Webb, Stephen L., Mason, David S., Lashley, Marcus A., Wilson, Andrew M., Vanek, John P., Wehr, Samuel R., Conner, L. Mike, Beasley, James C., Bontrager, Helen L., Baruzzi, Carolina, Ellis-Felege, Susan N., Proctor, Mike D., Schipper, Jan, Weiss, Katherine C. B., Darracq, Andrea K., Barr, Evan G., Alexander, Peter D., Şekercioğlu, Çağan H., Bogan, Daniel A., Schalk, Christopher M., Fantle-Lepczyk, Jean E., Lepczyk, Christopher A., LaPoint, Scott, Whipple, Laura S., Rowe, Helen Ivy, Mullen, Kayleigh, Bird, Tori, Zorn, Adam, Brandt, LaRoy, Lathrop, Richard G., McCain, Craig, Crupi, Anthony P., Clark, James, and Parsons, Arielle
- Published
- 2024
8. Generating Social and Emotional Skill Items: Humans vs. ChatGPT. ACT Research. Issue Brief
- Author
-
ACT, Inc., Kate E. Walton, and Cristina Anguiano-Carrasco
- Abstract
Large language models (LLMs), such as ChatGPT, are becoming increasingly prominent. They are seeing growing use for simple tasks such as summarizing documents, translating languages, rephrasing sentences, and answering questions. Reports like McKinsey's (Chui & Yee, 2023) estimate that by implementing LLMs, corporations could see a potential $4.4 trillion annually in corporate benefits, while Nielsen (2023) estimates a 66% increase in employee productivity when using LLMs and other forms of generative artificial intelligence (AI). Can we use ChatGPT in the field of social and emotional learning assessment development to enhance our productivity? Some have examined how social and emotional (SE) skills are related to ChatGPT usage, such as cheating in the academic domain (Greitemeyer & Kastenmüller, 2023). In another study, researchers (de Winter et al., 2023) had ChatGPT generate a large number of personas and complete several SE skill measures. They then carried out several analyses, such as a factor analysis and correlations with outcome measures, and determined how similar the results were to previous research using human-completed SE skill measures. In the current study, rather than have ChatGPT complete SE skill measures, we sought to have ChatGPT create SE skill measures. Ultimately, we will compare a ChatGPT-generated assessment with a human-generated assessment in terms of reliability and validity.
- Published
- 2024
9. Safe and Efficient Robot Action Planning in the Presence of Unconcerned Humans
- Author
-
Amiri, Mohsen and Hosseinzadeh, Mehdi
- Subjects
Computer Science - Robotics, Mathematics - Optimization and Control - Abstract
This paper proposes a robot action planning scheme that provides an efficient and probabilistically safe plan for a robot interacting with an unconcerned human -- someone who is either unaware of the robot's presence or unwilling to engage in ensuring safety. The proposed scheme is predictive, meaning that the robot is required to predict human actions over a finite future horizon; such predictions are often inaccurate in real-world scenarios. One possible approach to reducing the uncertainties is to provide the robot with the capability of reasoning about the human's awareness of potential dangers. This paper shows that a binary variable, the so-called danger awareness coefficient, can differentiate between concerned and unconcerned humans, and provides a learning algorithm that determines this coefficient by observing human actions. Moreover, this paper argues that humans rely on predictions of other agents' future actions (including those of robots in human-robot interaction) in their decision-making. It also shows that ignoring this aspect when predicting a human's future actions can significantly degrade the efficiency of the interaction, causing agents to deviate from their optimal paths. The proposed robot action planning scheme is verified and validated via extensive simulation and experimental studies on a LoCoBot WidowX-250.
- Published
- 2025
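A binary coefficient like the one described in the abstract above could, for instance, be estimated by simple Bayesian filtering from observed human behavior. The sketch below is a hypothetical illustration, not the paper's actual learning algorithm; the yield probabilities, the `update_awareness` helper, and the observation model are invented assumptions.

```python
# Sketch: maintain a posterior P(aware) over a binary "danger awareness"
# coefficient and update it from whether the human yields when the robot
# approaches. Both likelihood values below are illustrative assumptions.
P_YIELD_IF_AWARE = 0.9    # an aware, concerned human usually yields
P_YIELD_IF_UNAWARE = 0.2  # an unconcerned human rarely yields

def update_awareness(prior, human_yielded):
    """One Bayes-rule update of P(aware) from a single observation."""
    like_aware = P_YIELD_IF_AWARE if human_yielded else 1 - P_YIELD_IF_AWARE
    like_unaware = P_YIELD_IF_UNAWARE if human_yielded else 1 - P_YIELD_IF_UNAWARE
    evidence = like_aware * prior + like_unaware * (1 - prior)
    return like_aware * prior / evidence

belief = 0.5  # uninformative prior over the binary coefficient
for yielded in [False, False, True, False]:  # mostly non-yielding behavior
    belief = update_awareness(belief, yielded)
print(round(belief, 3))  # belief drops well below 0.5: treat the human as unconcerned
```

Under such a scheme, the planner would switch to a more conservative, probabilistically safe plan once the belief falls below a chosen threshold.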
10. Influencing Humans to Conform to Preference Models for RLHF
- Author
-
Hatgis-Kessell, Stephane, Knox, W. Bradley, Booth, Serena, Niekum, Scott, and Stone, Peter
- Subjects
Computer Science - Machine Learning, Computer Science - Artificial Intelligence, Computer Science - Human-Computer Interaction - Abstract
Designing a reinforcement learning from human feedback (RLHF) algorithm to approximate a human's unobservable reward function requires assuming, implicitly or explicitly, a model of human preferences. A preference model that poorly describes how humans generate preferences risks learning a poor approximation of the human's reward function. In this paper, we conduct three human studies to assess whether one can influence the expression of real human preferences to more closely conform to a desired preference model. Importantly, our approach does not seek to alter the human's unobserved reward function. Rather, we change how humans use this reward function to generate preferences, such that they better match whatever preference model is assumed by a particular RLHF algorithm. We introduce three interventions: showing humans the quantities that underlie a preference model, which is normally unobservable information derived from the reward function; training people to follow a specific preference model; and modifying the preference elicitation question. All intervention types show significant effects, providing practical tools to improve preference data quality and the resultant alignment of the learned reward functions. Overall, we establish a novel research direction in model alignment: designing interfaces and training interventions to increase human conformance with the modeling assumptions of the algorithm that will learn from their input.
- Published
- 2025
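A common concrete choice of preference model in RLHF is the Bradley-Terry model, in which the probability of preferring one trajectory segment over another is a logistic function of the difference in their returns; the "quantities that underlie a preference model" mentioned in the abstract would, under this model, be those returns. The sketch below illustrates the standard Bradley-Terry formulation with invented numbers; it is not claimed to be the specific model or data from the paper.

```python
import math

def preference_prob(return_a, return_b):
    """Bradley-Terry preference model: P(human prefers segment A over B)
    as a logistic function of the difference in segment returns."""
    return 1.0 / (1.0 + math.exp(-(return_a - return_b)))

# Two candidate trajectory segments with illustrative returns.
p = preference_prob(3.0, 1.0)
print(round(p, 3))  # A is preferred with high probability

# Equal returns -> the model predicts indifference (probability 0.5).
print(preference_prob(2.0, 2.0))
```

An RLHF algorithm assuming this model fits a reward function so that the implied return differences reproduce the observed preference frequencies; a human whose choices deviate from the logistic form injects model mismatch, which is what the paper's interventions aim to reduce.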
11. Humans as a Calibration Pattern: Dynamic 3D Scene Reconstruction from Unsynchronized and Uncalibrated Videos
- Author
-
Choi, Changwoon, Kim, Jeongjun, Cha, Geonho, Kim, Minkwan, Wee, Dongyoon, and Kim, Young Min
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
Recent works on dynamic neural field reconstruction assume input from synchronized multi-view videos with known poses. These input constraints are often unmet in real-world setups, making the approach impractical. We demonstrate that unsynchronized videos with unknown poses can generate dynamic neural fields if the videos capture human motion. Humans are one of the most common dynamic subjects whose poses can be estimated using state-of-the-art methods. While noisy, the estimated human shape and pose parameters provide a decent initialization for the highly non-convex and under-constrained problem of training a consistent dynamic neural representation. Given the sequences of pose and shape of humans, we estimate the time offsets between videos, followed by camera pose estimation by analyzing 3D joint locations. Then, we train a dynamic NeRF employing multiresolution grids while simultaneously refining both time offsets and camera poses. The setup still involves optimizing many parameters; therefore, we introduce a robust progressive learning strategy to stabilize the process. Experiments show that our approach achieves accurate spatiotemporal calibration and high-quality scene reconstruction in challenging conditions.
- Published
- 2024
12. Map Imagination Like Blind Humans: Group Diffusion Model for Robotic Map Generation
- Author
-
Song, Qijin and Bai, Weibang
- Subjects
Computer Science - Robotics, Computer Science - Artificial Intelligence - Abstract
Can robots imagine or generate maps like humans do, especially when only limited information can be perceived, as for blind people? To address this challenging task, we propose a novel group diffusion model (GDM) based architecture for robots to generate point cloud maps with very limited input information. Inspired by blind humans' natural capability of imagining or generating mental maps, the proposed method can generate maps without visual perception data or depth data. With additional, super-sparse spatial positioning data, like the extra contact-based positioning information blind individuals can obtain, the map generation quality can be improved even further. Experiments on public datasets are conducted, and the results indicate that our method can generate reasonable maps solely based on path data, and produce even more refined maps upon incorporating exiguous LiDAR data. Compared to conventional mapping approaches, our novel method significantly mitigates sensor dependency, enabling robots to imagine and generate elementary maps without heavy onboard sensory devices.
- Published
- 2024
13. Computational Sociology of Humans and Machines; Conflict and Collaboration
- Author
-
Yasseri, Taha
- Subjects
Computer Science - Computers and Society, Computer Science - Human-Computer Interaction, Computer Science - Social and Information Networks, Physics - Physics and Society - Abstract
This Chapter examines the dynamics of conflict and collaboration in human-machine systems, with a particular focus on large-scale, internet-based collaborative platforms. While these platforms represent successful examples of collective knowledge production, they are also sites of significant conflict, as diverse participants with differing intentions and perspectives interact. The analysis identifies recurring patterns of interaction, including serial attacks, reciprocal revenge, and third-party interventions. These microstructures reveal the role of experience, cultural differences, and topic sensitivity in shaping human-human, human-machine, and machine-machine interactions. The chapter further investigates the role of algorithmic agents and bots, highlighting their dual nature: they enhance collaboration by automating tasks but can also contribute to persistent conflicts with both humans and other machines. We conclude with policy recommendations that emphasize transparency, balance, cultural sensitivity, and governance to maximize the benefits of human-machine synergy while minimizing potential detriments., Comment: Please cite as: Yasseri, T. (2025). Computational Sociology of Humans and Machines; Conflict and Collaboration. In: T. Yasseri (Ed.), Handbook of Computational Social Science. Edward Elgar Publishing Ltd
- Published
- 2024
14. The AI Double Standard: Humans Judge All AIs for the Actions of One
- Author
-
Manoli, Aikaterina, Pauketat, Janet V. T., and Anthis, Jacy Reese
- Subjects
Computer Science - Artificial Intelligence, Computer Science - Computers and Society, Computer Science - Emerging Technologies, Computer Science - Human-Computer Interaction - Abstract
Robots and other artificial intelligence (AI) systems are widely perceived as moral agents responsible for their actions. As AI proliferates, these perceptions may become entangled via the moral spillover of attitudes towards one AI to attitudes towards other AIs. We tested how the seemingly harmful and immoral actions of an AI or human agent spill over to attitudes towards other AIs or humans in two preregistered experiments. In Study 1 (N = 720), we established the moral spillover effect in human-AI interaction by showing that immoral actions increased attributions of negative moral agency (i.e., acting immorally) and decreased attributions of positive moral agency (i.e., acting morally) and moral patiency (i.e., deserving moral concern) to both the agent (a chatbot or human assistant) and the group to which they belong (all chatbot or human assistants). There was no significant difference in the spillover effects between the AI and human contexts. In Study 2 (N = 684), we tested whether spillover persisted when the agent was individuated with a name and described as an AI or human, rather than specifically as a chatbot or personal assistant. We found that spillover persisted in the AI context but not in the human context, possibly because AIs were perceived as more homogeneous due to their outgroup status relative to humans. This asymmetry suggests a double standard whereby AIs are judged more harshly than humans when one agent morally transgresses. With the proliferation of diverse, autonomous AI systems, HCI research and design should account for the fact that experiences with one AI could easily generalize to perceptions of all AIs and negative HCI outcomes, such as reduced trust.
- Published
- 2024
15. Large Language Models show both individual and collective creativity comparable to humans
- Author
-
Sun, Luning, Yuan, Yuzhuo, Yao, Yuan, Li, Yanyan, Zhang, Hao, Xie, Xing, Wang, Xiting, Luo, Fang, and Stillwell, David
- Subjects
Computer Science - Artificial Intelligence - Abstract
Artificial intelligence has, so far, largely automated routine tasks, but what does it mean for the future of work if Large Language Models (LLMs) show creativity comparable to humans? To measure the creativity of LLMs holistically, the current study uses 13 creative tasks spanning three domains. We benchmark the LLMs against individual humans, and also take a novel approach by comparing them to the collective creativity of groups of humans. We find that the best LLMs (Claude and GPT-4) rank in the 52nd percentile against humans, and overall LLMs excel in divergent thinking and problem solving but lag in creative writing. When questioned 10 times, an LLM's collective creativity is equivalent to 8-10 humans. When more responses are requested, two additional responses of LLMs equal one extra human. Ultimately, LLMs, when optimally applied, may compete with a small group of humans in the future of work.
- Published
- 2024
16. A Comprehensive Evaluation of Semantic Relation Knowledge of Pretrained Language Models and Humans
- Author
-
Cao, Zhihan, Yamada, Hiroaki, Teufel, Simone, and Tokunaga, Takenobu
- Subjects
Computer Science - Computation and Language - Abstract
Recently, much work has concerned itself with the enigma of what exactly PLMs (pretrained language models) learn about different aspects of language, and how they learn it. One stream of this type of research investigates the knowledge that PLMs have about semantic relations. However, many aspects of semantic relations were left unexplored, and only one relation was considered, namely hypernymy. Furthermore, previous work did not measure humans' performance on the same task as that solved by the PLMs. This means that at this point in time, there is only an incomplete view of models' semantic relation knowledge. To address this gap, we introduce a comprehensive evaluation framework covering five relations beyond hypernymy, namely hyponymy, holonymy, meronymy, antonymy, and synonymy. We use six metrics (two newly introduced here) for previously untreated aspects of semantic relation knowledge, namely soundness, completeness, symmetry, asymmetry, prototypicality, and distinguishability, and fairly compare humans and models on the same task. Our extensive experiments involve 16 PLMs: eight masked and eight causal language models. Until now, only masked language models had been tested, although causal and masked language models treat context differently. Our results reveal a significant knowledge gap between humans and models for almost all semantic relations. Antonymy is the outlier relation, where all models perform reasonably well. In general, masked language models perform significantly better than causal language models. Nonetheless, both masked and causal language models are likely to confuse non-antonymy relations with antonymy.
- Published
- 2024
17. Molecular characterization and zoonotic potential of Cryptosporidium spp. and Giardia duodenalis in humans and domestic animals in Heilongjiang Province, China
- Author
-
Hao, Yaru, Liu, Aiqin, Li, He, Zhao, Yiyang, Yao, Lan, Yang, Bo, Zhang, Weizhe, and Yang, Fengkun
- Published
- 2024
18. Implicit Causality-biases in humans and LLMs as a tool for benchmarking LLM discourse capabilities
- Author
-
Kankowski, Florian, Solstad, Torgrim, Zarriess, Sina, and Bott, Oliver
- Subjects
Computer Science - Computation and Language - Abstract
In this paper, we compare data generated with mono- and multilingual LLMs spanning a range of model sizes with data provided by human participants in an experimental setting investigating well-established discourse biases. Beyond the comparison as such, we aim to develop a benchmark to assess the capabilities of LLMs with discourse biases as a robust proxy for more general discourse understanding capabilities. More specifically, we investigated Implicit Causality verbs, for which psycholinguistic research has found participants to display biases with regard to three phenomena: the establishment of (i) coreference relations (Experiment 1), (ii) coherence relations (Experiment 2), and (iii) the use of particular referring expressions (Experiments 3 and 4). With regard to coreference biases, we found only the largest monolingual LLM (German Bloom 6.4B) to display more human-like biases. For coherence relations, no LLM displayed the explanation bias usually found for humans. For referring expressions, all LLMs displayed a preference for referring to subject arguments with simpler forms than to objects. However, no bias effect on referring expressions was found, in contrast to recent studies investigating human biases., Comment: 38 pages, 8 figures
- Published
- 2025
19. One Does Not Simply Meme Alone: Evaluating Co-Creativity Between LLMs and Humans in the Generation of Humor
- Author
-
Wu, Zhikun, Weber, Thomas, and Müller, Florian
- Subjects
Computer Science - Human-Computer Interaction - Abstract
Collaboration has been shown to enhance creativity, leading to more innovative and effective outcomes. While previous research has explored the abilities of Large Language Models (LLMs) to serve as co-creative partners in tasks like writing poetry or creating narratives, the collaborative potential of LLMs in humor-rich and culturally nuanced domains remains an open question. To address this gap, we conducted a user study to explore the potential of LLMs in co-creating memes - a humor-driven and culturally specific form of creative expression. The study comprised three groups of 50 participants each: a human-only group creating memes without AI assistance, a human-AI collaboration group interacting with a state-of-the-art LLM, and an AI-only group where the LLM autonomously generated memes. We assessed the quality of the generated memes through crowdsourcing, with each meme rated on creativity, humor, and shareability. Our results showed that LLM assistance increased the number of ideas generated and reduced the effort participants felt. However, it did not improve the quality of the memes when humans collaborated with the LLM. Interestingly, memes created entirely by AI performed better than both human-only and human-AI collaborative memes in all areas on average. However, when looking at the top-performing memes, human-created ones were better in humor, while human-AI collaborations stood out in creativity and shareability. These findings highlight the complexities of human-AI collaboration in creative tasks. While AI can boost productivity and create content that appeals to a broad audience, human creativity remains crucial for content that connects on a deeper level., Comment: to appear in: 30th International Conference on Intelligent User Interfaces (IUI '25), March 24-27, 2025, Cagliari, Italy
- Published
- 2025
20. How to Enable Effective Cooperation Between Humans and NLP Models: A Survey of Principles, Formalizations, and Beyond
- Author
-
Huang, Chen, Deng, Yang, Lei, Wenqiang, Lv, Jiancheng, Chua, Tat-Seng, and Huang, Jimmy Xiangji
- Subjects
Computer Science - Computation and Language, Computer Science - Artificial Intelligence, Computer Science - Human-Computer Interaction - Abstract
With the advancement of large language models (LLMs), intelligent models have evolved from mere tools to autonomous agents with their own goals and strategies for cooperating with humans. This evolution has birthed a novel paradigm in NLP, i.e., human-model cooperation, that has yielded remarkable progress in numerous NLP tasks in recent years. In this paper, we take the first step to present a thorough review of human-model cooperation, exploring its principles, formalizations, and open challenges. In particular, we introduce a new taxonomy that provides a unified perspective to summarize existing approaches. Also, we discuss potential frontier areas and their corresponding challenges. We regard our work as an entry point, paving the way for more breakthrough research in this regard., Comment: 23 pages
- Published
- 2025
21. Detect Changes like Humans: Incorporating Semantic Priors for Improved Change Detection
- Author
-
Gan, Yuhang, Xuan, Wenjie, Luo, Zhiming, Fang, Lei, Wang, Zengmao, Liu, Juhua, and Du, Bo
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
When given two similar images, humans identify their differences by comparing the appearance (e.g., color, texture) with the help of semantics (e.g., objects, relations). However, mainstream change detection models adopt a supervised training paradigm, where the annotated binary change map is the main constraint. Thus, these methods primarily emphasize the difference-aware features between bi-temporal images and neglect the semantic understanding of the changed landscapes, which undermines accuracy in the presence of noise and illumination variations. To this end, this paper explores incorporating semantic priors to improve the ability to detect changes. Firstly, we propose a Semantic-Aware Change Detection network, namely SA-CDNet, which transfers the common knowledge of visual foundation models (i.e., FastSAM) to change detection. Inspired by the human visual paradigm, a novel dual-stream feature decoder is derived to distinguish changes by combining semantic-aware features and difference-aware features. Secondly, we design a single-temporal semantic pre-training strategy to enhance the semantic understanding of landscapes, which brings further improvements. Specifically, we construct pseudo-change detection data from public single-temporal remote sensing segmentation datasets for large-scale pre-training, where an extra branch is also introduced for the proxy semantic segmentation task. Experimental results on five challenging benchmarks demonstrate the superiority of our method over existing state-of-the-art methods. The code is available at https://github.com/thislzm/SA-CD.
- Published
- 2024
22. The Digital Ecosystem of Beliefs: does evolution favour AI over humans?
- Author
-
Bossens, David M., Feng, Shanshan, and Ong, Yew-Soon
- Subjects
Computer Science - Artificial Intelligence, Computer Science - Multiagent Systems, Computer Science - Neural and Evolutionary Computing - Abstract
As AI systems are integrated into social networks, there are AI safety concerns that AI-generated content may dominate the web, e.g. in popularity or impact on beliefs. To understand such questions, this paper proposes the Digital Ecosystem of Beliefs (Digico), the first evolutionary framework for controlled experimentation with multi-population interactions in simulated social networks. The framework models a population of agents which change their messaging strategies due to evolutionary updates following a Universal Darwinism approach, interact via messages, influence each other's beliefs through dynamics based on a contagion model, and maintain their beliefs through cognitive Lamarckian inheritance. Initial experiments with an abstract implementation of Digico show that: a) when AIs have faster messaging, evolution, and more influence in the recommendation algorithm, they get 80% to 95% of the views, depending on the size of the influence benefit; b) AIs designed for propaganda can typically convince 50% of humans to adopt extreme beliefs, and up to 85% when agents believe only a limited number of channels; c) a penalty for content that violates agents' beliefs reduces propaganda effectiveness by up to 8%. We further discuss implications for control (e.g. legislation) and Digico as a means of studying evolutionary principles.
- Published
- 2024
23. Do Multimodal Large Language Models See Like Humans?
- Author
-
Lin, Jiaying, Ye, Shuquan, and Lau, Rynson W. H.
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
Multimodal Large Language Models (MLLMs) have achieved impressive results on various vision tasks, leveraging recent advancements in large language models. However, a critical question remains unaddressed: do MLLMs perceive visual information similarly to humans? Current benchmarks lack the ability to evaluate MLLMs from this perspective. To address this challenge, we introduce HVSBench, a large-scale benchmark designed to assess the alignment between MLLMs and the human visual system (HVS) on fundamental vision tasks that mirror human vision. HVSBench curated over 85K multimodal samples, spanning 13 categories and 5 fields in HVS, including Prominence, Subitizing, Prioritizing, Free-Viewing, and Searching. Extensive experiments demonstrate the effectiveness of our benchmark in providing a comprehensive evaluation of MLLMs. Specifically, we evaluate 13 MLLMs, revealing that even the best models show significant room for improvement, with most achieving only moderate results. Our experiments reveal that HVSBench presents a new and significant challenge for cutting-edge MLLMs. We believe that HVSBench will facilitate research on human-aligned and explainable MLLMs, marking a key step in understanding how MLLMs perceive and process visual information., Comment: Project page: https://jiaying.link/HVSBench/
- Published
- 2024
24. Can OpenAI o1 outperform humans in higher-order cognitive thinking?
- Author
-
Latif, Ehsan, Zhou, Yifan, Guo, Shuchen, Shi, Lehong, Gao, Yizhu, Nyaaba, Matthew, Bewerdorff, Arne, Yang, Xiantong, and Zhai, Xiaoming
- Subjects
Computer Science - Computers and Society ,Computer Science - Artificial Intelligence - Abstract
This study evaluates the performance of OpenAI's o1-preview model in higher-order cognitive domains, including critical thinking, systematic thinking, computational thinking, data literacy, creative thinking, logical reasoning, and scientific reasoning. Using established benchmarks, we compared the o1-preview model's performance to human participants from diverse educational levels. o1-preview achieved a mean score of 24.33 on the Ennis-Weir Critical Thinking Essay Test (EWCTET), surpassing undergraduate (13.8) and postgraduate (18.39) participants (z = 1.60 and 0.90, respectively). In systematic thinking, it scored 46.1, SD = 4.12 on the Lake Urmia Vignette, significantly outperforming the human mean (20.08, SD = 8.13, z = 3.20). For data literacy, o1-preview scored 8.60, SD = 0.70 on Merk et al.'s "Use Data" dimension, compared to the human post-test mean of 4.17, SD = 2.02 (z = 2.19). On creative thinking tasks, the model achieved originality scores of 2.98, SD = 0.73, higher than the human mean of 1.74 (z = 0.71). In logical reasoning (LogiQA), it outperformed humans with average 90%, SD = 10% accuracy versus 86%, SD = 6.5% (z = 0.62). For scientific reasoning, it achieved near-perfect performance (mean = 0.99, SD = 0.12) on the TOSLS, exceeding the highest human scores of 0.85, SD = 0.13 (z = 1.78). While o1-preview excelled in structured tasks, it showed limitations in problem-solving and adaptive reasoning. These results demonstrate the potential of AI to complement education in structured assessments but highlight the need for ethical oversight and refinement for broader applications.
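The z-values quoted above follow from the standardized-difference formula z = (model score − human mean) / human SD. A minimal sketch (not from the paper; it reproduces only the Lake Urmia and "Use Data" figures, using the means and SDs reported in the abstract):

```python
def z_score(model_score, human_mean, human_sd):
    """Standardized difference between a model score and the human mean."""
    return (model_score - human_mean) / human_sd

# Lake Urmia Vignette (systematic thinking)
print(round(z_score(46.1, 20.08, 8.13), 2))  # 3.2
# Merk et al. "Use Data" dimension (data literacy)
print(round(z_score(8.60, 4.17, 2.02), 2))   # 2.19
```

Both values match the z-scores reported in the abstract; the remaining comparisons depend on pooled SDs not fully stated there.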
- Published
- 2024
25. Human Variability vs. Machine Consistency: A Linguistic Analysis of Texts Generated by Humans and Large Language Models
- Author
-
Zanotto, Sergio E. and Aroyehun, Segun
- Subjects
Computer Science - Computation and Language - Abstract
The rapid advancements in large language models (LLMs) have significantly improved their ability to generate natural language, making texts generated by LLMs increasingly indistinguishable from human-written texts. Recent research has predominantly focused on using LLMs to classify text as either human-written or machine-generated. In our study, we adopt a different approach by profiling texts spanning four domains based on 250 distinct linguistic features. We select the M4 dataset from the Subtask B of SemEval 2024 Task 8. We automatically calculate various linguistic features with the LFTK tool and additionally measure the average syntactic depth, semantic similarity, and emotional content for each document. We then apply a two-dimensional PCA reduction to all the calculated features. Our analyses reveal significant differences between human-written texts and those generated by LLMs, particularly in the variability of these features, which we find to be considerably higher in human-written texts. This discrepancy is especially evident in text genres with less rigid linguistic style constraints. Our findings indicate that humans write texts that are less cognitively demanding, with higher semantic content, and richer emotional content compared to texts generated by LLMs. These insights underscore the need for incorporating meaningful linguistic features to enhance the understanding of textual outputs of LLMs.
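The study's central comparison, higher cross-document variability of linguistic features in human-written text, can be sketched as a per-feature standard deviation over documents. A toy illustration (feature names and values below are invented, not drawn from the M4 dataset or the LFTK tool):

```python
import statistics

def feature_variability(docs):
    """Per-feature standard deviation across a list of documents.

    docs: list of dicts mapping feature name -> numeric value.
    """
    features = docs[0].keys()
    return {f: statistics.stdev(d[f] for d in docs) for f in features}

# Made-up feature profiles: human texts spread out more than LLM texts.
human = [{"depth": 4.1, "ttr": 0.62}, {"depth": 7.8, "ttr": 0.41}, {"depth": 2.9, "ttr": 0.75}]
llm = [{"depth": 5.0, "ttr": 0.58}, {"depth": 5.2, "ttr": 0.56}, {"depth": 4.9, "ttr": 0.60}]

print(feature_variability(human)["depth"] > feature_variability(llm)["depth"])  # True
```

The paper additionally projects the 250-dimensional feature profiles with a two-dimensional PCA before comparing groups; the sketch above shows only the variability computation.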
- Published
- 2024
26. ChatCollab: Exploring Collaboration Between Humans and AI Agents in Software Teams
- Author
-
Klieger, Benjamin, Charitsis, Charis, Suzara, Miroslav, Wang, Sierra, Haber, Nick, and Mitchell, John C.
- Subjects
Computer Science - Human-Computer Interaction ,Computer Science - Artificial Intelligence - Abstract
We explore the potential for productive team-based collaboration between humans and Artificial Intelligence (AI) by presenting and conducting initial tests with a general framework that enables multiple human and AI agents to work together as peers. ChatCollab's novel architecture allows agents - human or AI - to join collaborations in any role, autonomously engage in tasks and communication within Slack, and remain agnostic to whether their collaborators are human or AI. Using software engineering as a case study, we find that our AI agents successfully identify their roles and responsibilities, coordinate with other agents, and await requested inputs or deliverables before proceeding. In relation to three prior multi-agent AI systems for software development, we find ChatCollab AI agents produce comparable or better software in an interactive game development task. We also propose an automated method for analyzing collaboration dynamics that effectively identifies behavioral characteristics of agents with distinct roles, allowing us to quantitatively compare collaboration dynamics in a range of experimental conditions. For example, in comparing ChatCollab AI agents, we find that an AI CEO agent generally provides suggestions 2-4 times more often than an AI product manager or AI developer, suggesting agents within ChatCollab can meaningfully adopt differentiated collaborative roles. Our code and data can be found at: https://github.com/ChatCollab., Comment: Preprint, 25 pages, 7 figures
- Published
- 2024
27. Occurrence and antimicrobial susceptibility of Salmonella enterica in milk along the supply chain, humans, and the environment in Woliata Sodo, Ethiopia
- Author
-
Ayichew, Seblewengel, Zewdu, Ashagrie, Megerrsa, Bekele, Sori, Teshale, and Gutema, Fanta D
- Published
- 2024
- Full Text
- View/download PDF
33. Toxoplasma gondii in humans, animals and in the environment in Morocco: a literature review
- Author
-
Atif, Ilham, Touloun, Oulaid, and Boussaa, Samia
- Published
- 2024
- Full Text
- View/download PDF
29. A Comparison of Rapid Rule-Learning Strategies in Humans and Monkeys.
- Author
-
Goudar, Vishwa, Kim, Jeong-Woo, Liu, Yue, Dede, Adam, Jutras, Michael, Skelin, Ivan, Ruvalcaba, Michael, Chang, William, Ram, Bhargavi, Fairhall, Adrienne, Lin, Jack, Knight, Robert, Buffalo, Elizabeth, and Wang, Xiao-Jing
- Subjects
Animals ,Female ,Male ,Humans ,Adult ,Macaca mulatta ,Learning ,Young Adult ,Species Specificity ,Choice Behavior ,Reaction Time - Abstract
Interspecies comparisons are key to deriving an understanding of the behavioral and neural correlates of human cognition from animal models. We perform a detailed comparison of the strategies of female macaque monkeys to male and female humans on a variant of the Wisconsin Card Sorting Test (WCST), a widely studied and applied task that provides a multiattribute measure of cognitive function and depends on the frontal lobe. WCST performance requires the inference of a rule change given ambiguous feedback. We found that well-trained monkeys infer new rules three times more slowly than minimally instructed humans. Input-dependent hidden Markov model-generalized linear models were fit to their choices, revealing hidden states akin to feature-based attention in both species. Decision processes resembled a win-stay, lose-shift strategy with interspecies similarities as well as key differences. Monkeys and humans both test multiple rule hypotheses over a series of rule-search trials and perform inference-like computations to exclude candidate choice options. We quantitatively show that perseveration, random exploration, and poor sensitivity to negative feedback account for the slower task-switching performance in monkeys.
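The win-stay, lose-shift strategy that both species' choices resembled has a standard minimal form; the following is a hypothetical sketch of that textbook rule, not the paper's input-dependent HMM-GLM model:

```python
import random

def wsls_choice(prev_choice, prev_reward, options):
    """Win-stay, lose-shift: repeat a rewarded choice; after a loss
    (or on the first trial), switch to a different option at random."""
    if prev_choice is not None and prev_reward:
        return prev_choice
    alternatives = [o for o in options if o != prev_choice]
    return random.choice(alternatives)

print(wsls_choice("color", True, ["color", "shape", "number"]))   # stays: color
print(wsls_choice("color", False, ["color", "shape", "number"]))  # shifts to shape or number
```

In the WCST context the options correspond to candidate sorting rules, and the paper's modeling shows how perseveration and random exploration deviate from this idealized rule.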
- Published
- 2024
30. Small Language Models can Outperform Humans in Short Creative Writing: A Study Comparing SLMs with Humans and LLMs
- Author
-
Marco, Guillermo, Rello, Luz, and Gonzalo, Julio
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence - Abstract
In this paper, we evaluate the creative fiction writing abilities of a fine-tuned small language model (SLM), BART-large, and compare its performance to human writers and two large language models (LLMs): GPT-3.5 and GPT-4o. Our evaluation consists of two experiments: (i) a human study in which 68 participants rated short stories from humans and the SLM on grammaticality, relevance, creativity, and attractiveness, and (ii) a qualitative linguistic analysis examining the textual characteristics of stories produced by each model. In the first experiment, BART-large outscored average human writers overall (2.11 vs. 1.85), a 14% relative improvement, though the slight human advantage in creativity was not statistically significant. In the second experiment, qualitative analysis showed that while GPT-4o demonstrated near-perfect coherence and used fewer cliché phrases, it tended to produce more predictable language, with only 3% of its synopses featuring surprising associations (compared to 15% for BART). These findings highlight how model size and fine-tuning influence the balance between creativity, fluency, and coherence in creative writing tasks, and demonstrate that smaller models can, in certain contexts, rival both humans and larger models., Comment: Accepted as Main Conference Paper at COLING 2025
- Published
- 2024
31. AgoraSpeech: A multi-annotated comprehensive dataset of political discourse through the lens of humans and AI
- Author
-
Sermpezis, Pavlos, Karamanidis, Stelios, Paraschou, Eva, Dimitriadis, Ilias, Yfantidou, Sofia, Kouskouveli, Filitsa-Ioanna, Troboukis, Thanasis, Kiki, Kelly, Galanopoulos, Antonis, and Vakali, Athena
- Subjects
Computer Science - Computation and Language - Abstract
Political discourse datasets are important for gaining political insights, analyzing communication strategies or social science phenomena. Although numerous political discourse corpora exist, comprehensive, high-quality, annotated datasets are scarce. This is largely due to the substantial manual effort, multidisciplinarity, and expertise required for the nuanced annotation of rhetorical strategies and ideological contexts. In this paper, we present AgoraSpeech, a meticulously curated, high-quality dataset of 171 political speeches from six parties during the Greek national elections in 2023. The dataset includes annotations (per paragraph) for six natural language processing (NLP) tasks: text classification, topic identification, sentiment analysis, named entity recognition, polarization and populism detection. A two-step annotation was employed, starting with ChatGPT-generated annotations and followed by exhaustive human-in-the-loop validation. The dataset was initially used in a case study to provide insights during the pre-election period. However, it has general applicability by serving as a rich source of information for political and social scientists, journalists, or data scientists, while it can be used for benchmarking and fine-tuning NLP and large language models (LLMs).
- Published
- 2025
32. How do Humans take an Object from a Robot: Behavior changes observed in a User Study
- Author
-
Khanna, Parag, Yadollahi, Elmira, Leite, Iolanda, Björkman, Mårten, and Smith, Christian
- Subjects
Computer Science - Robotics - Abstract
To facilitate human-robot interaction and gain human trust, a robot should recognize and adapt to changes in human behavior. This work documents different human behaviors observed while taking objects from an interactive robot in an experimental study, categorized across two dimensions: pull force applied and handedness. We also present the changes observed in human behavior upon repeated interaction with the robot to take various objects.
- Published
- 2025
- Full Text
- View/download PDF
33. Reconstructing contact and a potential interbreeding geographical zone between Neanderthals and anatomically modern humans
- Author
-
Guran, Saman H., Yousefi, Masoud, Kafash, Anooshe, and Ghasidian, Elham
- Published
- 2024
- Full Text
- View/download PDF
34. Symbolic metaprogram search improves learning efficiency and explains rule learning in humans.
- Author
-
Rule, Joshua, Piantadosi, Steven, Cropper, Andrew, Ellis, Kevin, Nye, Maxwell, and Tenenbaum, Joshua
- Subjects
Humans ,Learning ,Algorithms - Abstract
Throughout their lives, humans seem to learn a variety of rules for things like applying category labels, following procedures, and explaining causal relationships. These rules are often algorithmically rich but are nonetheless acquired with minimal data and computation. Symbolic models based on program learning successfully explain rule-learning in many domains, but performance degrades quickly as program complexity increases. It remains unclear how to scale symbolic rule-learning methods to model human performance in challenging domains. Here we show that symbolic search over the space of metaprograms, programs that revise programs, dramatically improves learning efficiency. On a behavioral benchmark of 100 algorithmically rich rules, this approach fits human learning more accurately than alternative models while also using orders of magnitude less search. The computation required to match median human performance is consistent with conservative estimates of human thinking time. Our results suggest that metaprogram-like representations may help human learners to efficiently acquire rules.
- Published
- 2024
35. Hemorrhage at high altitude: impact of sustained hypobaric hypoxia on cerebral blood flow, tissue oxygenation, and tolerance to simulated hemorrhage in humans.
- Author
-
Rosenberg, Alexander, Anderson, Garen, McKeefer, Haley, Bird, Jordan, Pentz, Brandon, Byman, Britta, Jendzjowsky, Nicholas, Wilson, Richard, Day, Trevor, and Rickards, Caroline
- Subjects
Central hypovolemia ,Cerebral blood velocity ,Hypoxia ,Internal carotid artery blood flow ,Lower body negative pressure ,Humans ,Cerebrovascular Circulation ,Male ,Altitude ,Adult ,Female ,Hypoxia ,Hemorrhage ,Oxygen ,Oxygen Consumption ,Carotid Artery ,Internal ,Oxygen Saturation ,Lower Body Negative Pressure - Abstract
With ascent to high altitude (HA), compensatory increases in cerebral blood flow and oxygen delivery must occur to preserve cerebral metabolism and consciousness. We hypothesized that this compensation in cerebral blood flow and oxygen delivery preserves tolerance to simulated hemorrhage (via lower body negative pressure, LBNP), such that tolerance is similar during sustained exposure to HA vs. low altitude (LA). Healthy humans (4F/4 M) participated in LBNP protocols to presyncope at LA (1130 m) and 5-7 days following ascent to HA (3800 m). Internal carotid artery (ICA) blood flow, cerebral delivery of oxygen (CDO2) through the ICA, and cerebral tissue oxygen saturation (ScO2) were determined. LBNP tolerance was similar between conditions (LA: 1276 ± 304 s vs. HA: 1208 ± 306 s; P = 0.58). Overall, ICA blood flow and CDO2 were elevated at HA vs. LA (P ≤ 0.01) and decreased with LBNP under both conditions (P
- Published
- 2024
36. Urban birds tolerance towards humans was largely unaffected by COVID-19 shutdown-induced variation in human presence.
- Author
-
Mikula, Peter, Bulla, Martin, Blumstein, Daniel, Benedetti, Yanina, Floigl, Kristina, Jokimäki, Jukka, Kaisanlahti-Jokimäki, Marja-Liisa, Markó, Gábor, Morelli, Federico, Møller, Anders, Siretckaia, Anastasiia, Szakony, Sára, Weston, Michael, Zeid, Farah, Tryjanowski, Piotr, and Albrecht, Tomáš
- Subjects
Animals ,COVID-19 ,Humans ,SARS-CoV-2 ,Birds ,Fear ,Escape Reaction ,Pandemics ,Cities - Abstract
The coronavirus disease 2019 (COVID-19) pandemic and respective shutdowns dramatically altered human activities, potentially changing human pressures on urban-dwelling animals. Here, we use such COVID-19-induced variation in human presence to evaluate, across multiple temporal scales, how urban birds from five countries changed their tolerance towards humans, measured as escape distance. We collected 6369 escape responses for 147 species and found that human numbers in parks at a given hour, day, week or year (before and during shutdowns) had little effect on birds' escape distances. All effects centered around zero, except for the actual human numbers during the escape trial (hourly scale), which correlated negatively, albeit weakly, with escape distance. The results were similar across countries and most species. Our results highlight the resilience of birds to changes in human numbers on multiple temporal scales, the complexities of linking animal fear responses to human behavior, and the challenge of quantifying both simultaneously in situ.
- Published
- 2024
37. Ethnography and ethnohistory support the efficiency of hunting through endurance running in humans
- Author
-
Morin, Eugène and Winterhalder, Bruce
- Subjects
Biomedical and Clinical Sciences ,Health Sciences ,Psychology ,Humans ,Running ,Anthropology ,Cultural ,Physical Endurance ,Animals ,Biological Evolution ,Hominidae ,Biomedical and clinical sciences ,Health sciences - Abstract
Humans have two features rare in mammals: our locomotor muscles are dominated by fatigue-resistant fibres and we effectively dissipate through sweating the metabolic heat generated through prolonged, elevated activity. A promising evolutionary explanation of these features is the endurance pursuit (EP) hypothesis, which argues that both traits evolved to facilitate running down game by persistence. However, this hypothesis has faced two challenges: running is energetically costly and accounts of EPs among late twentieth century foragers are rare. While both observations appear to suggest that EPs would be ineffective, we use foraging theory to demonstrate that EPs can be quite efficient. We likewise analyse an ethnohistoric and ethnographic database of nearly 400 EP cases representing 272 globally distributed locations. We provide estimates for return rates of EPs and argue that these are comparable to other pre-modern hunting methods in specified contexts. EP hunting as a method of food procurement would have probably been available and attractive to Plio/Pleistocene hominins.
- Published
- 2024
38. Assessing Social Alignment: Do Personality-Prompted Large Language Models Behave Like Humans?
- Author
-
Zakazov, Ivan, Boronski, Mikolaj, Drudi, Lorenzo, and West, Robert
- Subjects
Computer Science - Computers and Society ,Computer Science - Artificial Intelligence ,Computer Science - Machine Learning - Abstract
The ongoing revolution in language modelling has led to various novel applications, some of which rely on the emerging "social abilities" of large language models (LLMs). Already, many turn to the new "cyber friends" for advice during pivotal moments of their lives and trust them with their deepest secrets, implying that accurate shaping of LLMs' "personalities" is paramount. Leveraging the vast diversity of data on which LLMs are pretrained, state-of-the-art approaches prompt them to adopt a particular personality. We ask (i) if personality-prompted models behave (i.e. "make" decisions when presented with a social situation) in line with the ascribed personality, and (ii) if their behavior can be finely controlled. We use classic psychological experiments - the Milgram Experiment and the Ultimatum Game - as social interaction testbeds and apply personality prompting to GPT-3.5/4/4o-mini/4o. Our experiments reveal failure modes of the prompt-based modulation of the models' "behavior", thus challenging the feasibility of personality prompting with today's LLMs., Comment: Accepted to NeurIPS 2024 Workshop on Behavioral Machine Learning
- Published
- 2024
39. Fearful Falcons and Angry Llamas: Emotion Category Annotations of Arguments by Humans and LLMs
- Author
-
Greschner, Lynn and Klinger, Roman
- Subjects
Computer Science - Computation and Language - Abstract
Arguments evoke emotions, influencing the effect of the argument itself. Not only the emotional intensity but also the category influence the argument's effects, for instance, the willingness to adapt stances. While binary emotionality has been studied in arguments, there is no work on discrete emotion categories (e.g., "Anger") in such data. To fill this gap, we crowdsource subjective annotations of emotion categories in a German argument corpus and evaluate automatic LLM-based labeling methods. Specifically, we compare three prompting strategies (zero-shot, one-shot, chain-of-thought) on three large instruction-tuned language models (Falcon-7b-instruct, Llama-3.1-8B-instruct, GPT-4o-mini). We further vary the definition of the output space to be binary (is there emotionality in the argument?), closed-domain (which emotion from a given label set is in the argument?), or open-domain (which emotion is in the argument?). We find that emotion categories enhance the prediction of emotionality in arguments, emphasizing the need for discrete emotion annotations in arguments. Across all prompt settings and models, automatic predictions show a high recall but low precision for predicting anger and fear, indicating a strong bias toward negative emotions.
- Published
- 2024
40. Ask Humans or AI? Exploring Their Roles in Visualization Troubleshooting
- Author
-
Shen, Shuyu, Lu, Sirong, Shen, Leixian, Sheng, Zhonghua, Tang, Nan, and Luo, Yuyu
- Subjects
Computer Science - Human-Computer Interaction - Abstract
Visualization authoring is an iterative process requiring users to modify parameters like color schemes and data transformations to achieve desired aesthetics and effectively convey insights. Due to the complexity of these adjustments, users often create defective visualizations and require troubleshooting support. In this paper, we examine two primary approaches for visualization troubleshooting: (1) Human-assisted support via forums, where users receive advice from other individuals, and (2) AI-assisted support using large language models (LLMs). Our goal is to understand the strengths and limitations of each approach in supporting visualization troubleshooting tasks. To this end, we collected 889 Vega-Lite cases from Stack Overflow. We then conducted a comprehensive analysis to understand the types of questions users ask, the effectiveness of human and AI guidance, and the impact of supplementary resources, such as documentation and examples, on troubleshooting outcomes. Our findings reveal a striking contrast between human- and AI-assisted troubleshooting: Human-assisted troubleshooting provides tailored, context-sensitive advice but often varies in response quality, while AI-assisted troubleshooting offers rapid feedback but often requires additional contextual resources to achieve desired results., Comment: 14 pages, 7 figures
- Published
- 2024
41. TriDi: Trilateral Diffusion of 3D Humans, Objects, and Interactions
- Author
-
Petrov, Ilya A., Marin, Riccardo, Chibane, Julian, and Pons-Moll, Gerard
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
Modeling 3D human-object interaction (HOI) is a problem of great interest for computer vision and a key enabler for virtual and mixed-reality applications. Existing methods work in a one-way direction: some recover plausible human interactions conditioned on a 3D object; others recover the object pose conditioned on a human pose. Instead, we provide the first unified model - TriDi - which works in any direction. Concretely, we generate Human, Object, and Interaction modalities simultaneously with a new three-way diffusion process, allowing to model seven distributions with one network. We implement TriDi as a transformer attending to the various modalities' tokens, thereby discovering conditional relations between them. The user can control the interaction either as a text description of HOI or a contact map. We embed these two representations into a shared latent space, combining the practicality of text descriptions with the expressiveness of contact maps. Using a single network, TriDi unifies all the special cases of prior work and extends to new ones, modeling a family of seven distributions. Remarkably, despite using a single model, TriDi-generated samples surpass one-way specialized baselines on GRAB and BEHAVE in terms of both qualitative and quantitative metrics, and demonstrate better diversity. We show the applicability of TriDi to scene population, generating objects for human-contact datasets, and generalization to unseen object geometry. The project page is available at: https://virtualhumans.mpi-inf.mpg.de/tridi.
- Published
- 2024
42. Homogeneous Dynamics Space for Heterogeneous Humans
- Author
-
Liu, Xinpeng, Liang, Junxuan, Zhang, Chenshuo, Cai, Zixuan, Lu, Cewu, and Li, Yong-Lu
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Artificial Intelligence ,Computer Science - Machine Learning - Abstract
Analyses of human motion kinematics have achieved tremendous advances. However, the production mechanism, known as human dynamics, remains underexplored. In this paper, we aim to push data-driven human dynamics understanding forward. We identify a major obstacle to this as the heterogeneity of existing human motion understanding efforts. Specifically, heterogeneity exists not only in the diverse kinematics representations and hierarchical dynamics representations but also in the data from different domains, namely biomechanics and reinforcement learning. With an in-depth analysis of the existing heterogeneity, we propose to emphasize the underlying homogeneity: all of them represent the same fact of human motion, though from different perspectives. Given this, we propose Homogeneous Dynamics Space (HDyS) as a fundamental space for human dynamics by aggregating heterogeneous data and training a homogeneous latent space with inspiration from the inverse-forward dynamics procedure. Leveraging the heterogeneous representations and datasets, HDyS achieves decent mapping between human kinematics and dynamics. We demonstrate the feasibility of HDyS with extensive experiments and applications. The project page is https://foruck.github.io/HDyS., Comment: Cewu Lu and Yong-Lu Li are the corresponding authors
- Published
- 2024
43. BOTracle: A framework for Discriminating Bots and Humans
- Author
-
Kadel, Jan, See, August, Sinha, Ritwik, and Fischer, Mathias
- Subjects
Computer Science - Machine Learning ,I.2 ,I.5 ,D.2 - Abstract
Bots constitute a significant portion of Internet traffic and are a source of various issues across multiple domains. Modern bots often become indistinguishable from real users, as they employ similar methods to browse the web, including using real browsers. We address the challenge of bot detection in high-traffic scenarios by analyzing three distinct detection methods. The first method operates on heuristics, allowing for rapid detection. The second method utilizes well-known technical features, such as IP address, window size, and user agent. It serves primarily for comparison with the third method. In the third method, we rely solely on browsing behavior, omitting all static features and focusing exclusively on how clients behave on a website. In contrast to related work, we evaluate our approaches using real-world e-commerce traffic data, comprising 40 million monthly page visits. We further compare our methods against another bot detection approach, Botcha, on the same dataset. Our performance metrics, including precision, recall, and AUC, reach 98 percent or higher, surpassing Botcha., Comment: Bot Detection; User Behaviour Analysis; Published at ESORICS International Workshops 2024
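The reported precision and recall metrics have standard definitions over true/false positives; a minimal sketch with made-up labels (1 = bot, 0 = human), not the paper's detectors:

```python
def precision_recall(y_true, y_pred):
    """Precision and recall for binary labels (1 = bot, 0 = human)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fp), tp / (tp + fn)

# Toy labels: three bots, two humans; detector misses one bot, flags one human.
prec, rec = precision_recall([1, 1, 1, 0, 0], [1, 1, 0, 0, 1])
print(round(prec, 2), round(rec, 2))  # 0.67 0.67
```

At the paper's reported 98-percent-plus levels, both false positives and false negatives are rare relative to the true positives.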
- Published
- 2024
44. Interactions Between Humans and White-Tailed Deer in Illinois: A Cross-Sectional Survey
- Author
-
Pratt, Ambrielle, Prezioso, Tara, Mateus-Pinilla, Nohra, Pepin, Kimberly M., and Smith, Rebecca
- Published
- 2025
- Full Text
- View/download PDF
45. CRISPR-Cas9 genetic modification of humans; a new era of medicine
- Author
-
McKean, Natasha
- Published
- 2024
46. Structural variation in humans and our primate kin in the era of telomere-to-telomere genomes and pangenomics.
- Author
-
L Rocha, Joana, Lou, Runyang, and Sudmant, Peter
- Subjects
Humans ,Animals ,Primates ,Telomere ,Genomics ,Genome ,Human ,Genome ,Evolution ,Molecular ,Genomic Structural Variation - Abstract
Structural variants (SVs) account for the majority of base pair differences both within and between primate species. However, our understanding of inter- and intra-species SV has been historically hampered by the quality of draft primate genomes and the absence of genome resources for key taxa. Recently, advances in long-read sequencing and genome assembly have begun to radically reshape our understanding of SVs. Two landmark achievements include the publication of a human telomere-to-telomere (T2T) genome as well as the development of the first human pangenome reference. In this review, we first look back to the major works laying the foundation for these projects. We then examine the ways in which T2T genome assemblies and pangenomes are transforming our understanding of and approach to primate SV. Finally, we discuss what the future of primate SV research may look like in the era of T2T genomes and pangenomics.
- Published
- 2024
47. An algorithmic account for how humans efficiently learn, transfer, and compose hierarchically structured decision policies
- Author
-
Li, Jing-Jing and Collins, Anne GE
- Subjects
Information and Computing Sciences ,Cognitive and Computational Psychology ,Machine Learning ,Psychology ,Clinical Research ,Basic Behavioral and Social Science ,Behavioral and Social Science ,Mental health ,Quality Education ,Decision-making ,Reinforcement learning ,Computational cognitive modeling ,Abstraction ,Hierarchy ,Compositionality ,Meta-learning ,Transfer learning ,Psychology and Cognitive Sciences ,Language ,Communication and Culture ,Experimental Psychology - Abstract
Learning structures that effectively abstract decision policies is key to the flexibility of human intelligence. Previous work has shown that humans use hierarchically structured policies to efficiently navigate complex and dynamic environments. However, the computational processes that support the learning and construction of such policies remain insufficiently understood. To address this question, we tested 1026 human participants, who made over 1 million choices combined, in a decision-making task where they could learn, transfer, and recompose multiple sets of hierarchical policies. We propose a novel algorithmic account for the learning processes underlying observed human behavior. We show that humans rely on compressed policies over states in early learning, which gradually unfold into hierarchical representations via meta-learning and Bayesian inference. Our modeling evidence suggests that these hierarchical policies are structured in a temporally backward, rather than forward, fashion. Taken together, these algorithmic architectures characterize how the interplay between reinforcement learning, policy compression, meta-learning, and working memory supports structured decision-making and compositionality in a resource-rational way.
- Published
- 2025
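The trajectory this abstract describes, an initially compressed policy over states that gradually unfolds into a hierarchical representation via Bayesian inference, can be caricatured with a toy model-comparison sketch. The code below is my illustration, not the authors' model: the two-context task, the `correct_action` rule, and the noise level `eps` are all hypothetical assumptions.

```python
import math

def correct_action(context, stimulus):
    # Hierarchical ground truth: context 1 reverses the stimulus-action map.
    return stimulus if context == 0 else 1 - stimulus

def flat_prediction(context, stimulus):
    # Flat ("compressed") hypothesis: ignore context entirely.
    return stimulus

def posterior_hierarchical(trials, eps=0.1, prior_h=0.5):
    """Posterior probability of the hierarchical hypothesis given
    (context, stimulus, observed_correct_action) feedback trials."""
    log_h = math.log(prior_h)
    log_f = math.log(1.0 - prior_h)
    for context, stimulus, observed in trials:
        log_h += math.log(1 - eps if correct_action(context, stimulus) == observed else eps)
        log_f += math.log(1 - eps if flat_prediction(context, stimulus) == observed else eps)
    m = max(log_h, log_f)  # normalise in log space for numerical stability
    p_h = math.exp(log_h - m)
    return p_h / (p_h + math.exp(log_f - m))

# Twelve trials across both contexts; feedback reveals the true hierarchical action.
trials = [(c, s, correct_action(c, s)) for c in (0, 1) for s in (0, 1)] * 3
print(round(posterior_hierarchical(trials), 4))  # → 1.0
```

With a fifty-fifty prior, both hypotheses fit the context-0 trials equally well, so the posterior only tips toward the hierarchical hypothesis once context-1 evidence accumulates, a rough analogue of compressed policies "unfolding" with experience.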
48. Taste Bud-Derived Stem Cells in Humans
- Published
- 2025
49. Aligning Generalisation Between Humans and Machines
- Author
-
Ilievski, Filip, Hammer, Barbara, van Harmelen, Frank, Paassen, Benjamin, Saralajew, Sascha, Schmid, Ute, Biehl, Michael, Bolognesi, Marianna, Dong, Xin Luna, Gashteovski, Kiril, Hitzler, Pascal, Marra, Giuseppe, Minervini, Pasquale, Mundt, Martin, Ngomo, Axel-Cyrille Ngonga, Oltramari, Alessandro, Pasi, Gabriella, Saribatur, Zeynep G., Serafini, Luciano, Shawe-Taylor, John, Shwartz, Vered, Skitalinskaya, Gabriella, Stachl, Clemens, van de Ven, Gido M., and Villmann, Thomas
- Subjects
Computer Science - Artificial Intelligence - Abstract
Recent advances in AI -- including generative approaches -- have resulted in technology that can support humans in scientific discovery and decision support but may also disrupt democracies and target individuals. The responsible use of AI increasingly shows the need for human-AI teaming, necessitating effective interaction between humans and machines. A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise. In cognitive science, human generalisation commonly involves abstraction and concept learning. In contrast, AI generalisation encompasses out-of-domain generalisation in machine learning, rule-based reasoning in symbolic AI, and abstraction in neuro-symbolic AI. In this perspective paper, we combine insights from AI and cognitive science to identify key commonalities and differences across three dimensions: notions of generalisation, methods for generalisation, and evaluation of generalisation. We map the different conceptualisations of generalisation in AI and cognitive science along these three dimensions and consider their role in human-AI teaming. This results in interdisciplinary challenges across AI and cognitive science that must be tackled to provide a foundation for effective and cognitively supported alignment in human-AI teaming scenarios.
- Published
- 2024
50. Learning to Cooperate with Humans using Generative Agents
- Author
-
Liang, Yancheng, Chen, Daphne, Gupta, Abhishek, Du, Simon S., and Jaques, Natasha
- Subjects
Computer Science - Machine Learning ,Computer Science - Artificial Intelligence ,Computer Science - Multiagent Systems - Abstract
Training agents that can coordinate zero-shot with humans is a key mission in multi-agent reinforcement learning (MARL). Current algorithms focus on training simulated human partner policies which are then used to train a Cooperator agent. The simulated human is produced either through behavior cloning over a dataset of human cooperation behavior, or by using MARL to create a population of simulated agents. However, these approaches often struggle to produce a Cooperator that can coordinate well with real humans, since the simulated humans fail to cover the diverse strategies and styles employed by people in the real world. We show that learning a generative model of human partners can effectively address this issue. Our model learns a latent variable representation of the human that can be regarded as encoding the human's unique strategy, intention, experience, or style. This generative model can be flexibly trained from any (human or neural policy) agent interaction data. By sampling from the latent space, we can use the generative model to produce different partners to train Cooperator agents. We evaluate our method, Generative Agent Modeling for Multi-agent Adaptation (GAMMA), on Overcooked, a challenging cooperative cooking game that has become a standard benchmark for zero-shot coordination. We conduct an evaluation with real human teammates, and the results show that GAMMA consistently improves performance, whether the generative model is trained on simulated populations or human datasets. Further, we propose a method for posterior sampling from the generative model that is biased towards the human data, enabling us to efficiently improve performance with only a small amount of expensive human interaction data.
- Published
- 2024
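The pipeline the abstract describes, encode partner behaviour into a latent "style", then sample latents (optionally biased toward human data) to generate diverse training partners, can be sketched in miniature. This is my toy illustration under stated assumptions, not the GAMMA implementation: the encoder/decoder stand-ins, the binary action space, and the mixture-style bias toward human latents are all hypothetical.

```python
import random

def infer_latent(action_history):
    # Encoder stand-in: a partner's latent "style" is its empirical
    # frequency of choosing action 1.
    return sum(action_history) / len(action_history)

def generate_partner(latent, rng):
    # Decoder stand-in: the latent parameterises a stochastic policy.
    return lambda: int(rng.random() < latent)

def sample_partner_latent(rng, human_latents=None, bias=0.0, noise=0.05):
    """Sample a partner latent: with probability `bias`, draw near a latent
    inferred from human data (a crude analogue of posterior-biased sampling);
    otherwise draw from a broad uniform prior."""
    if human_latents and rng.random() < bias:
        z = rng.choice(human_latents) + rng.gauss(0.0, noise)
        return min(1.0, max(0.0, z))
    return rng.random()

rng = random.Random(0)
# Hypothetical human interaction data: two humans with distinct styles.
human_latents = [infer_latent([1, 1, 1, 0]), infer_latent([0, 0, 1, 0])]
# Partner population for training a Cooperator, biased toward human-like styles.
latents = [sample_partner_latent(rng, human_latents, bias=0.8) for _ in range(100)]
partners = [generate_partner(z, rng) for z in latents]
```

Raising `bias` concentrates the sampled population near the styles seen in human data, which mirrors the paper's motivation: a broad prior yields diverse training partners, while posterior-biased sampling spends the Cooperator's training budget on human-like ones.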