10,489 results for "Humphrey P"
Search Results
2. P675: IMATINIB THERAPY IN PREVIOUSLY UNTREATED CHRONIC MYELOID LEUKAEMIA PATIENTS WHO ACHIEVE MMR AFTER 12 MONTHS THERAPY WITH DASATINIB: A STRATEGY TO AVOID LONG TERM OFF TARGET TOXICITY
- Author
- Lucy Pemberton, Katrina Sharples, Yujin Kim, Emma-Jane Mcdonald, Humphrey Pullon, Bart Baker, Merit Hanna, Gordon Royle, Shahidul Islam, Victoria Campion, Michael Findlay, and Peter Browett
- Subjects
Diseases of the blood and blood-forming organs, RC633-647.5
- Published
- 2023
- Full Text
- View/download PDF
3. Quantifying neuro-motor correlations during awake deep brain stimulation surgery using markerless tracking
- Author
- Anand Tekriwal, Sunderland Baker, Elijah Christensen, Humphrey Petersen-Jones, Rex N. Tien, Steven G. Ojemann, Drew S. Kern, Daniel R. Kramer, Gidon Felsen, and John A. Thompson
- Subjects
Medicine, Science
- Abstract
The expanding application of deep brain stimulation (DBS) therapy both drives and is informed by our growing understanding of disease pathophysiology and innovations in neurosurgical care. Neurophysiological targeting, a mainstay for identifying optimal, motor responsive targets, has remained largely unchanged for decades. Utilizing deep learning-based computer vision and related computational methods, we developed an effective and simple intraoperative approach to objectively correlate neural signals with movements, automating and standardizing the otherwise manual and subjective process of identifying ideal DBS electrode placements. Kinematics are extracted from video recordings of intraoperative motor testing using a trained deep neural network and compared to multi-unit activity recorded from the subthalamic nucleus. Neuro-motor correlations were quantified using dynamic time warping, with the strength of a given comparison measured by comparing against a null distribution composed of related neuro-motor correlations. This objective measure was then compared to clinical determinations as recorded in surgical case notes. In seven DBS cases for treatment of Parkinson's disease, 100 distinct motor testing epochs were extracted for which clear clinical determinations were made. Neuro-motor correlations derived by our automated system compared favorably with expert clinical decision making in post-hoc comparisons, although follow-up studies are necessary to determine if improved correlation detection leads to improved outcomes. By improving the classification of neuro-motor relationships, the automated system we have developed will enable clinicians to maximize the therapeutic impact of DBS while also providing avenues for improving continued care of treated patients.
- Published
- 2022
- Full Text
- View/download PDF
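The abstract above compares movement kinematics against neural activity using dynamic time warping (DTW). As a rough, self-contained illustration of that alignment measure (not the authors' implementation; function name and toy sequences are invented for this sketch):

```python
def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance
    between two 1-D sequences, using absolute difference as the
    local cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = best cumulative cost aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

A time-warped copy of a signal (e.g. the same movement performed slower) yields a near-zero DTW distance even though a pointwise comparison would not, which is why DTW suits comparing kinematic traces to firing-rate traces of different tempo.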
4. Euclid preparation. XLIX. Selecting active galactic nuclei using observed colours
- Author
- Euclid Collaboration, Bisigello, L., Massimo, M., Tortora, C., Fotopoulou, S., Allevato, V., Bolzonella, M., Gruppioni, C., Pozzetti, L., Rodighiero, G., Serjeant, S., Cunha, P. A. C., Gabarra, L., Feltre, A., Humphrey, A., La Franca, F., Landt, H., Mannucci, F., Prandoni, I., Radovich, M., Ricci, F., Salvato, M., Shankar, F., Stern, D., Spinoglio, L., Vergani, D., Vignali, C., Zamorani, G., Yung, L. Y. A., Charlot, S., Aghanim, N., Amara, A., Andreon, S., Auricchio, N., Baldi, M., Bardelli, S., Battaglia, P., Bender, R., Bonino, D., Branchini, E., Brau-Nogue, S., Brescia, M., Camera, S., Capobianco, V., Carbone, C., Carretero, J., Casas, S., Castander, F. J., Castellano, M., Cavuoti, S., Cimatti, A., Congedo, G., Conselice, C. J., Conversi, L., Copin, Y., Corcione, L., Courbin, F., Courtois, H. M., Cropper, M., Da Silva, A., Degaudenzi, H., Di Giorgio, A. M., Dinis, J., Dupac, X., Dusini, S., Ealet, A., Farina, M., Farrens, S., Ferriol, S., Frailis, M., Franceschi, E., Franzetti, P., Fumana, M., Galeotta, S., Garilli, B., Gillis, B., Giocoli, C., Granett, B. R., Grazian, A., Grupp, F., Guzzo, L., Haugan, S. V. H., Holmes, W., Hook, I., Hormuth, F., Hornstrup, A., Jahnke, K., Keihänen, E., Kermiche, S., Kiessling, A., Kilbinger, M., Kitching, T., Kümmel, M., Kunz, M., Kurki-Suonio, H., Ligori, S., Lilje, P. B., Lindholm, V., Lloro, I., Maiorano, E., Mansutti, O., Marggraf, O., Markovic, K., Martinet, N., Marulli, F., Massey, R., Maurogordato, S., Medinaceli, E., Mei, S., Mellier, Y., Meneghetti, M., Merlin, E., Meylan, G., Moresco, M., Moscardini, L., Munari, E., Niemi, S. -M., Padilla, C., Paltani, S., Pasian, F., Pedersen, K., Percival, W. J., Pettorino, V., Polenta, G., Poncet, M., Raison, F., Rebolo, R., Renzi, A., Rhodes, J., Riccio, G., Romelli, E., Roncarelli, M., Rossetti, E., Saglia, R., Sapone, D., Sartoris, B., Schirmer, M., Schneider, P., Schrabback, T., Secroun, A., Seidel, G., Serrano, S., Sirignano, C., Sirri, G., Stanco, L., Surace, C., Tallada-Crespí, P., Taylor, A. N., Tereno, I., Toledo-Moreo, R., Torradeflot, F., Tutusaus, I., Valentijn, E. A., Valenziano, L., Vassallo, T., Wang, Y., Zoubian, J., Zucca, E., Biviano, A., Bozzo, E., Colodro-Conde, C., Di Ferdinando, D., Fabbian, G., Graciá-Carpio, J., Marcin, S., Mauri, N., Sakr, Z., Scottez, V., Tenti, M., Akrami, Y., Baccigalupi, C., Ballardini, M., Bethermin, M., Blanchard, A., Borgani, S., Borla, A. S., Bruton, S., Burigana, C., Cabanac, R., Calabro, A., Cappi, A., Carvalho, C. S., Castignani, G., Castro, T., Chambers, K. C., Cooray, A. R., Coupon, J., Cucciati, O., Davini, S., De Lucia, G., Desprez, G., Díaz-Sánchez, A., Di Domizio, S., Dole, H., Vigo, J. A. Escartin, Escoffier, S., Ferrero, I., Finelli, F., Ganga, K., García-Bellido, J., Giacomini, F., Gozaliasl, G., Gregorio, A., Hildebrandt, H., Muñoz, A. Jiminez, Kajava, J. J. E., Kansal, V., Karagiannis, D., Kirkpatrick, C. C., Legrand, L., Loureiro, A., Macias-Perez, J., Maggio, G., Magliocchetti, M., Mainetti, G., Maoli, R., Martinelli, M., Martins, C. J. A. P., Matthew, S., Maurin, L., Metcalf, R. B., Migliaccio, M., Monaco, P., Morgante, G., Nadathur, S., Patrizii, L., Popa, V., Porciani, C., Potter, D., Pöntinen, M., Rocci, P. -F., Sánchez, A. G., Schneider, A., Sereno, M., Simon, P., Stadel, J., Stanford, S. A., Steinwagner, J., Testera, G., Tewes, M., Teyssier, R., Toft, S., Tosi, S., Troja, A., Tucci, M., Valiviita, J., Viel, M., and Zinchenko, I. A.
- Subjects
Astrophysics - Astrophysics of Galaxies
- Abstract
Euclid will cover over 14000 deg$^2$ with two optical and near-infrared spectro-photometric instruments, and is expected to detect around ten million active galactic nuclei (AGN). This unique data set will make a considerable impact on our understanding of galaxy evolution and AGN. In this work we identify the best colour selection criteria for AGN, based only on Euclid photometry or including ancillary photometric observations, such as the data that will be available with the Rubin Legacy Survey of Space and Time (LSST) and observations already available from Spitzer/IRAC. The analysis is performed for unobscured AGN, obscured AGN, and composite (AGN and star-forming) objects. We make use of the spectro-photometric realisations of infrared-selected targets at all-z (SPRITZ) to create mock catalogues mimicking both the Euclid Wide Survey (EWS) and the Euclid Deep Survey (EDS). Using these catalogues we estimate the best colour selection, maximising the harmonic mean (F1) of completeness and purity. The selection of unobscured AGN in both Euclid surveys is possible with Euclid photometry alone with F1=0.22-0.23, which can increase to F1=0.43-0.38 if we limit the sample to z>0.7. This selection improves once the Rubin/LSST filters (a combination of the u, g, r, or z filters) are considered, reaching F1=0.84 and 0.86 for the EDS and EWS, respectively. The combination of a Euclid colour with the [3.6]-[4.5] colour, which is possible only in the EDS, results in an F1-score of 0.59, improving on the results obtained with Euclid filters alone but remaining worse than the selection combining Euclid and LSST. The selection of composite ($f_{{\rm AGN}}$=0.05-0.65 at 8-40 $\mu$m) and obscured AGN is challenging, with F1<0.3 even when including ancillary data. This is driven by the similarities between the broad-band spectral energy distributions of these AGN and star-forming galaxies in the wavelength range 0.3-5 $\mu$m., Comment: 25 pages, 28 figures, accepted for publication in A&A
- Published
- 2024
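The selection metric this abstract optimises, the F1-score as the harmonic mean of completeness (recall) and purity (precision), is a one-liner; a generic illustration (not the collaboration's code):

```python
def f1_score(completeness, purity):
    """Harmonic mean of completeness and purity; 0 when either is 0."""
    if completeness + purity == 0:
        return 0.0
    return 2 * completeness * purity / (completeness + purity)
```

The harmonic mean punishes imbalance: a selection that is 90% complete but only 20% pure scores far lower than one at 50%/50%, which is why it is a common single-number target for colour cuts.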
5. GradBias: Unveiling Word Influence on Bias in Text-to-Image Generative Models
- Author
- D'Incà, Moreno, Peruzzo, Elia, Mancini, Massimiliano, Xu, Xingqian, Shi, Humphrey, and Sebe, Nicu
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
Recent progress in Text-to-Image (T2I) generative models has enabled high-quality image generation. As performance and accessibility increase, these models are gaining significant traction and popularity: ensuring their fairness and safety is a priority to prevent the dissemination and perpetuation of biases. However, existing studies in bias detection focus on closed sets of predefined biases (e.g., gender, ethnicity). In this paper, we propose a general framework to identify, quantify, and explain biases in an open set setting, i.e. without requiring a predefined set. This pipeline leverages a Large Language Model (LLM) to propose biases starting from a set of captions. Next, these captions are used by the target generative model for generating a set of images. Finally, Vision Question Answering (VQA) is leveraged for bias evaluation. We show two variations of this framework: OpenBias and GradBias. OpenBias detects and quantifies biases, while GradBias determines the contribution of individual prompt words to biases. OpenBias effectively detects both well-known and novel biases related to people, objects, and animals and highly aligns with existing closed-set bias detection methods and human judgment. GradBias shows that neutral words can significantly influence biases and it outperforms several baselines, including state-of-the-art foundation models. Code available here: https://github.com/Moreno98/GradBias., Comment: Under review. Code: https://github.com/Moreno98/GradBias
- Published
- 2024
6. Eagle: Exploring The Design Space for Multimodal LLMs with Mixture of Encoders
- Author
- Shi, Min, Liu, Fuxiao, Wang, Shihao, Liao, Shijia, Radhakrishnan, Subhashree, Huang, De-An, Yin, Hongxu, Sapra, Karan, Yacoob, Yaser, Shi, Humphrey, Catanzaro, Bryan, Tao, Andrew, Kautz, Jan, Yu, Zhiding, and Liu, Guilin
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Artificial Intelligence, Computer Science - Machine Learning, Computer Science - Robotics
- Abstract
The ability to accurately interpret complex visual information is a crucial capability for multimodal large language models (MLLMs). Recent work indicates that enhanced visual perception significantly reduces hallucinations and improves performance on resolution-sensitive tasks, such as optical character recognition and document analysis. A number of recent MLLMs achieve this goal using a mixture of vision encoders. Despite their success, there is a lack of systematic comparisons and detailed ablation studies addressing critical aspects, such as expert selection and the integration of multiple vision experts. This study provides an extensive exploration of the design space for MLLMs using a mixture of vision encoders and resolutions. Our findings reveal several underlying principles common to various existing strategies, leading to a streamlined yet effective design approach. We discover that simply concatenating visual tokens from a set of complementary vision encoders is as effective as more complex mixing architectures or strategies. We additionally introduce Pre-Alignment to bridge the gap between vision-focused encoders and language tokens, enhancing model coherence. The resulting family of MLLMs, Eagle, surpasses other leading open-source models on major MLLM benchmarks. Models and code: https://github.com/NVlabs/Eagle, Comment: Github: https://github.com/NVlabs/Eagle, HuggingFace: https://huggingface.co/NVEagle
- Published
- 2024
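The design finding this abstract reports, that simply concatenating visual tokens from complementary encoders rivals more complex fusion, can be shown with a framework-free toy sketch (shapes and the function name are invented; Eagle's actual implementation operates on real encoder outputs as tensors):

```python
def channel_concat(token_sets):
    """For each spatial position, join the feature vectors produced by
    every encoder into one longer token (channel-wise concatenation).
    token_sets is a list of per-encoder token sequences, each the same
    length but with possibly different feature dimensions."""
    return [sum((list(feats) for feats in position), [])
            for position in zip(*token_sets)]
```

For example, an encoder emitting 2-channel tokens fused with one emitting 1-channel tokens yields 3-channel tokens, with no learned mixing weights at all.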
7. Collaborative Vision-Text Representation Optimizing for Open-Vocabulary Segmentation
- Author
- Jiao, Siyu, Zhu, Hongguang, Huang, Jiannan, Zhao, Yao, Wei, Yunchao, and Shi, Humphrey
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
Pre-trained vision-language models, e.g. CLIP, have been increasingly used to address the challenging Open-Vocabulary Segmentation (OVS) task, benefiting from their well-aligned vision-text embedding space. Typical solutions involve either freezing CLIP during training to unilaterally maintain its zero-shot capability, or fine-tuning the CLIP vision encoder to achieve perceptual sensitivity to local regions. However, few of them incorporate vision-text collaborative optimization. Based on this, we propose the Content-Dependent Transfer to adaptively enhance each text embedding by interacting with the input image, which presents a parameter-efficient way to optimize the text representation. Besides, we additionally introduce a Representation Compensation strategy, reviewing the original CLIP-V representation as compensation to maintain the zero-shot capability of CLIP. In this way, the vision and text representations of CLIP are optimized collaboratively, enhancing the alignment of the vision-text feature space. To the best of our knowledge, we are the first to establish the collaborative vision-text optimization mechanism within the OVS field. Extensive experiments demonstrate our method achieves superior performance on popular OVS benchmarks. In open-vocabulary semantic segmentation, our method outperforms the previous state-of-the-art approaches by +0.5, +2.3, +3.4, +0.4 and +1.1 mIoU, respectively, on A-847, A-150, PC-459, PC-59 and PAS-20. Furthermore, in a panoptic setting on ADE20K, we achieve the performance of 27.1 PQ, 73.5 SQ, and 32.9 RQ. Code will be available at https://github.com/jiaosiyu1999/MAFT-Plus.git., Comment: ECCV 2024
- Published
- 2024
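For reference, the mIoU figures quoted in this abstract are per-class intersection-over-union scores averaged over classes. A minimal, framework-free version operating on flattened label lists (illustrative only; real evaluation code works on 2-D label maps):

```python
def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union across classes that appear in
    either the prediction or the ground truth."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        union = sum(1 for p, g in zip(pred, gt) if p == c or g == c)
        if union:  # skip classes absent from both pred and gt
            ious.append(inter / union)
    return sum(ious) / len(ious) if ious else 0.0
```

Because each class contributes equally regardless of pixel count, a +1 mIoU gain on a 459-class benchmark like PC-459 is harder to achieve than on a 20-class one.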
8. Implicit versus Explicit First Impressions in Performance-Based Assessment: Will Raters Overcome Their First Impressions When Learner Performance Changes?
- Author
- Timothy J. Wood, Vijay J. Daniels, Debra Pugh, Claire Touchie, Samantha Halman, and Susan Humphrey-Murto
- Abstract
First impressions can influence rater-based judgments, but their contribution to rater bias is unclear. Research suggests raters can overcome first impressions in experimental exam contexts with explicit first impressions, but these findings may not generalize to a workplace context with implicit first impressions. The study had two aims: first, to assess whether first impressions affect raters' judgments when workplace performance changes; second, whether explicitly stating these impressions affects subsequent ratings compared to implicitly formed first impressions. Physician raters viewed six videos where learner performance either changed (Strong to Weak or Weak to Strong) or remained consistent. Raters were assigned to one of two groups. Group one (n = 23, Explicit) made a first impression global rating (FIGR), then scored learners using the Mini-CEX. Group two (n = 22, Implicit) scored learners at the end of the video solely with the Mini-CEX. For the Explicit group, in the Strong to Weak condition, the FIGR (M = 5.94) was higher than the Mini-CEX Global rating (GR) (M = 3.02, p < 0.001). In the Weak to Strong condition, the FIGR (M = 2.44) was lower than the Mini-CEX GR (M = 3.96, p < 0.001). There was no difference between the FIGR and the Mini-CEX GR in the consistent condition (M = 6.61 and M = 6.65 respectively, p = 0.84). There were no statistically significant differences in any of the conditions when comparing both groups' Mini-CEX GR. Therefore, raters adjusted their judgments based on the learners' performances. Furthermore, raters who made their first impressions explicit showed similar rater bias to raters who followed a more naturalistic process.
- Published
- 2024
- Full Text
- View/download PDF
9. Transdisciplinary Perspectives on 'the Narrative' and 'the Analytical' for Critical Literacy
- Author
- Sally Humphrey, Dragana Stosic, Therese Barrington, Nicki Brake, and Rebecca Pagano
- Abstract
This paper reports on the design of a multimodal metalanguage developed by teacher education researchers to support pre-service teachers' understandings of critical literacy and critical health literacies in a changing communication landscape. The design of the metalanguage constitutes the first stage of an ongoing transdisciplinary project, Multiliteracies Across Teaching Areas (MATA), which aims to design and implement cohesive disciplinary multiliteracies pedagogies across teaching areas of an initial teacher education (ITE) programme. The focus on metalanguage design is motivated by concerns shared by critical literacy scholars and scholars in health literacy to balance deconstruction of texts with actionable response through the 'deep moral grammar of narrative' (Kindenberg and Freebody, "Australian Journal of Language and Literacy, 44"(2), 90-99, 2021). Such concerns also align with recent World Health Organisation calls for the use of solution-oriented literacies to empower communities. These sociocultural understandings of critical literacy and Kindenberg and Freebody's call to balance 'the analytical' and 'the narrative' in critical literacy practice provide the starting point for designing metalanguage for transdisciplinary research in Health and Physical Education (HPE) and English. We first review relevant models of the critical informing both subject areas to establish synergistic understandings and then analyse expectations of critical practice in descriptions and elaborations of the HPE and English curricula. We provide an overview of the semiotic resources available for transdisciplinary conversations within the English curriculum with clarifying 'bridging' terminology informed by social semiotic descriptions. Through close analysis of four representative texts selected for critical literacy practice by English and HPE teacher educators in the MATA project, we demonstrate how such metalanguage was shared to build understandings of both critical analysis and actionable response. Along with analytical features to build and analyse issues according to disciplinary criteria, we show how stories are used to build rapport with their diverse audiences and to motivate their peers to take positive health action.
- Published
- 2024
- Full Text
- View/download PDF
10. Frontotemporal lobar degeneration targets brain regions linked to expression of recently evolved genes
- Author
- Pasquini, Lorenzo, Pereira, Felipe L, Seddighi, Sahba, Zeng, Yi, Wei, Yongbin, Illán-Gala, Ignacio, Vatsavayai, Sarat C, Friedberg, Adit, Lee, Alex J, Brown, Jesse A, Spina, Salvatore, Grinberg, Lea T, Sirkis, Daniel W, Bonham, Luke W, Yokoyama, Jennifer S, Boxer, Adam L, Kramer, Joel H, Rosen, Howard J, Humphrey, Jack, Gitler, Aaron D, Miller, Bruce L, Pollard, Katherine S, Ward, Michael E, and Seeley, William W
- Subjects
Biological Psychology, Psychology, Acquired Cognitive Impairment, Neurodegenerative, Alzheimer's Disease including Alzheimer's Disease Related Dementias (AD/ADRD), Genetics, Brain Disorders, Rare Diseases, Alzheimer's Disease Related Dementias (ADRD), Frontotemporal Dementia (FTD), Alzheimer's Disease, Dementia, Aging, Neurosciences, 2.1 Biological and endogenous factors, Neurological, Humans, Frontotemporal Lobar Degeneration, Brain, Male, Female, Aged, DNA-Binding Proteins, Middle Aged, tau Proteins, Atrophy, Animals, Evolution, Molecular, Gene Expression, TDP-43, cryptic exon, frontotemporal lobar degeneration, gene expression, human accelerated regions, tau, Medical and Health Sciences, Psychology and Cognitive Sciences, Neurology & Neurosurgery, Biomedical and clinical sciences, Health sciences
- Abstract
In frontotemporal lobar degeneration (FTLD), pathological protein aggregation in specific brain regions is associated with declines in human-specialized social-emotional and language functions. In most patients, disease protein aggregates contain either TDP-43 (FTLD-TDP) or tau (FTLD-tau). Here, we explored whether FTLD-associated regional degeneration patterns relate to regional gene expression of human accelerated regions (HARs), conserved sequences that have undergone positive selection during recent human evolution. To this end, we used structural neuroimaging from patients with FTLD and human brain regional transcriptomic data from controls to identify genes expressed in FTLD-targeted brain regions. We then integrated primate comparative genomic data to test our hypothesis that FTLD targets brain regions linked to expression levels of recently evolved genes. In addition, we asked whether genes whose expression correlates with FTLD atrophy are enriched for genes that undergo cryptic splicing when TDP-43 function is impaired. We found that FTLD-TDP and FTLD-tau subtypes target brain regions with overlapping and distinct gene expression correlates, highlighting many genes linked to neuromodulatory functions. FTLD atrophy-correlated genes were strongly enriched for HARs. Atrophy-correlated genes in FTLD-TDP showed greater overlap with TDP-43 cryptic splicing genes and genes with more numerous TDP-43 binding sites compared with atrophy-correlated genes in FTLD-tau. Cryptic splicing genes were enriched for HAR genes, and vice versa, but this effect was due to the confounding influence of gene length. Analyses performed at the individual-patient level revealed that the expression of HAR genes and cryptically spliced genes within putative regions of disease onset differed across FTLD-TDP subtypes. Overall, our findings suggest that FTLD targets brain regions that have undergone recent evolutionary specialization and provide intriguing potential leads regarding the transcriptomic basis for selective vulnerability in distinct FTLD molecular-anatomical subtypes.
- Published
- 2024
11. Euclid preparation. LI. Forecasting the recovery of galaxy physical properties and their relations with template-fitting and machine-learning methods
- Author
- Euclid Collaboration, Enia, A., Bolzonella, M., Pozzetti, L., Humphrey, A., Cunha, P. A. C., Hartley, W. G., Dubath, F., Paltani, S., Lopez, X. Lopez, Quai, S., Bardelli, S., Bisigello, L., Cavuoti, S., De Lucia, G., Ginolfi, M., Grazian, A., Siudek, M., Tortora, C., Zamorani, G., Aghanim, N., Altieri, B., Amara, A., Andreon, S., Auricchio, N., Baccigalupi, C., Baldi, M., Bender, R., Bodendorf, C., Bonino, D., Branchini, E., Brescia, M., Brinchmann, J., Camera, S., Capobianco, V., Carbone, C., Carretero, J., Casas, S., Castander, F. J., Castellano, M., Castignani, G., Cimatti, A., Colodro-Conde, C., Congedo, G., Conselice, C. J., Conversi, L., Copin, Y., Corcione, L., Courbin, F., Courtois, H. M., Da Silva, A., Degaudenzi, H., Di Giorgio, A. M., Dinis, J., Dupac, X., Dusini, S., Fabricius, M., Farina, M., Farrens, S., Ferriol, S., Fosalba, P., Fotopoulou, S., Frailis, M., Franceschi, E., Fumana, M., Galeotta, S., Gillis, B., Giocoli, C., Grupp, F., Haugan, S. V. H., Holmes, W., Hook, I., Hormuth, F., Hornstrup, A., Jahnke, K., Joachimi, B., Keihänen, E., Kermiche, S., Kiessling, A., Kubik, B., Kümmel, M., Kunz, M., Kurki-Suonio, H., Ligori, S., Lilje, P. B., Lindholm, V., Lloro, I., Maiorano, E., Mansutti, O., Marggraf, O., Markovic, K., Martinelli, M., Martinet, N., Marulli, F., Massey, R., McCracken, H. J., Medinaceli, E., Mei, S., Melchior, M., Mellier, Y., Meneghetti, M., Merlin, E., Meylan, G., Moresco, M., Moscardini, L., Munari, E., Neissner, C., Niemi, S. -M., Nightingale, J. W., Padilla, C., Pasian, F., Pedersen, K., Pettorino, V., Polenta, G., Poncet, M., Popa, L. A., Raison, F., Rebolo, R., Renzi, A., Rhodes, J., Riccio, G., Romelli, E., Roncarelli, M., Rossetti, E., Saglia, R., Sakr, Z., Sapone, D., Schneider, P., Schrabback, T., Scodeggio, M., Secroun, A., Sefusatti, E., Seidel, G., Serrano, S., Sirignano, C., Sirri, G., Stanco, L., Steinwagner, J., Surace, C., Tallada-Crespí, P., Tavagnacco, D., Taylor, A. N., Teplitz, H. I., Tereno, I., Toledo-Moreo, R., Torradeflot, F., Tutusaus, I., Valenziano, L., Vassallo, T., Kleijn, G. Verdoes, Veropalumbo, A., Wang, Y., Weller, J., Zucca, E., Biviano, A., Boucaud, A., Burigana, C., Calabrese, M., Vigo, J. A. Escartin, Gracia-Carpio, J., Mauri, N., Pezzotta, A., Pöntinen, M., Porciani, C., Scottez, V., Tenti, M., Viel, M., Wiesmann, M., Akrami, Y., Allevato, V., Anselmi, S., Ballardini, M., Bergamini, P., Bethermin, M., Blanchard, A., Blot, L., Borgani, S., Bruton, S., Cabanac, R., Calabro, A., Canas-Herrera, G., Cappi, A., Carvalho, C. S., Castro, T., Chambers, K. C., Contarini, S., Contini, T., Cooray, A. R., Cucciati, O., Davini, S., De Caro, B., Desprez, G., Díaz-Sánchez, A., Di Domizio, S., Dole, H., Escoffier, S., Ferrari, A. G., Ferreira, P. G., Ferrero, I., Finoguenov, A., Fornari, F., Gabarra, L., Ganga, K., García-Bellido, J., Gautard, V., Gaztanaga, E., Giacomini, F., Gianotti, F., Gozaliasl, G., Hall, A., Hemmati, S., Hildebrandt, H., Hjorth, J., Muñoz, A. Jimenez, Joudaki, S., Kajava, J. J. E., Kansal, V., Karagiannis, D., Kirkpatrick, C. C., Graet, J. Le, Legrand, L., Loureiro, A., Macias-Perez, J., Maggio, G., Magliocchetti, M., Mancini, C., Mannucci, F., Maoli, R., Martins, C. J. A. P., Matthew, S., Maurin, L., Metcalf, R. B., Monaco, P., Moretti, C., Morgante, G., Walton, Nicholas A., Patrizii, L., Popa, V., Potter, D., Risso, I., Rocci, P. -F., Sahlén, M., Schneider, A., Schultheis, M., Sereno, M., Simon, P., Mancini, A. Spurio, Stanford, S. A., Tanidis, K., Tao, C., Testera, G., Teyssier, R., Toft, S., Tosi, S., Troja, A., Tucci, M., Valieri, C., Valiviita, J., Vergani, D., Verza, G., Zinchenko, I. A., Rodighiero, G., and Talia, M.
- Subjects
Astrophysics - Astrophysics of Galaxies
- Abstract
Euclid will collect an enormous amount of data during the mission's lifetime, observing billions of galaxies in the extragalactic sky. Along with traditional template-fitting methods, numerous machine learning algorithms have been presented for computing their photometric redshifts and physical parameters (PPs), requiring significantly less computing effort while producing equivalent performance measures. However, their performance is limited by the quality and amount of input information, to the point where the recovery of some well-established physical relationships between parameters might not be guaranteed. To forecast the reliability of Euclid photo-$z$ and PP calculations, we produced two mock catalogs simulating Euclid photometry. We simulated the Euclid Wide Survey (EWS) and Euclid Deep Fields (EDF). We tested the performance of a template-fitting algorithm (Phosphoros) and four ML methods in recovering photo-$z$s, PPs (stellar masses and star formation rates), and the star-forming main sequence (SFMS). To mimic the Euclid processing as closely as possible, the models were trained with Phosphoros-recovered labels. For the EWS, we found that the best results are achieved with a mixed labels approach, training the models with wide survey features and labels from the Phosphoros results on deeper photometry, that is, with the best possible set of labels for a given photometry. This imposes a prior, helping the models to better discern cases in degenerate regions of feature space, that is, when galaxies have similar magnitudes and colors but different redshifts and PPs, with performance metrics even better than those found with Phosphoros. We found no more than 3% performance degradation using a COSMOS-like reference sample or removing u band data, which will not be available until after data release DR1. The best results are obtained for the EDF, with appropriate recovery of photo-$z$, PPs, and the SFMS., Comment: 26 pages, 13 figures. Accepted for publication in A&A
- Published
- 2024
12. Everything to the Synthetic: Diffusion-driven Test-time Adaptation via Synthetic-Domain Alignment
- Author
- Guo, Jiayi, Zhao, Junhao, Ge, Chunjiang, Du, Chaoqun, Ni, Zanlin, Song, Shiji, Shi, Humphrey, and Huang, Gao
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
Test-time adaptation (TTA) aims to enhance the performance of source-domain pretrained models when tested on unknown shifted target domains. Traditional TTA methods primarily adapt model weights based on target data streams, making model performance sensitive to the amount and order of target data. Recently, diffusion-driven TTA methods have demonstrated strong performance by using an unconditional diffusion model, which is also trained on the source domain to transform target data into synthetic data as a source domain projection. This allows the source model to make predictions without weight adaptation. In this paper, we argue that the domains of the source model and the synthetic data in diffusion-driven TTA methods are not aligned. To adapt the source model to the synthetic domain of the unconditional diffusion model, we introduce a Synthetic-Domain Alignment (SDA) framework to fine-tune the source model with synthetic data. Specifically, we first employ a conditional diffusion model to generate labeled samples, creating a synthetic dataset. Subsequently, we use the aforementioned unconditional diffusion model to add noise to and denoise each sample before fine-tuning. This process mitigates the potential domain gap between the conditional and unconditional models. Extensive experiments across various models and benchmarks demonstrate that SDA achieves superior domain alignment and consistently outperforms existing diffusion-driven TTA methods. Our code is available at https://github.com/SHI-Labs/Diffusion-Driven-Test-Time-Adaptation-via-Synthetic-Domain-Alignment., Comment: GitHub: https://github.com/SHI-Labs/Diffusion-Driven-Test-Time-Adaptation-via-Synthetic-Domain-Alignment
- Published
- 2024
13. Zero-Painter: Training-Free Layout Control for Text-to-Image Synthesis
- Author
- Ohanyan, Marianna, Manukyan, Hayk, Wang, Zhangyang, Navasardyan, Shant, and Shi, Humphrey
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
We present Zero-Painter, a novel training-free framework for layout-conditional text-to-image synthesis that facilitates the creation of detailed and controlled imagery from textual prompts. Our method utilizes object masks and individual descriptions, coupled with a global text prompt, to generate images with high fidelity. Zero-Painter employs a two-stage process involving our novel Prompt-Adjusted Cross-Attention (PACA) and Region-Grouped Cross-Attention (ReGCA) blocks, ensuring precise alignment of generated objects with textual prompts and mask shapes. Our extensive experiments demonstrate that Zero-Painter surpasses current state-of-the-art methods in preserving textual details and adhering to mask shapes.
- Published
- 2024
14. Kinematic Model of Magnetic Domain Wall Motion for Fast, High-Accuracy Simulations
- Author
- Doleh, Kristi, Humphrey, Leonard, Linseisen, Chandler M., Kitcher, Michael D., Martin, Joanna M., Cui, Can, Incorvia, Jean Anne C., Garcia-Sanchez, Felipe, Hassan, Naimul, Edwards, Alexander J., and Friedman, Joseph S.
- Subjects
Computer Science - Emerging Technologies, Condensed Matter - Mesoscale and Nanoscale Physics
- Abstract
Domain wall (DW) devices have garnered recent interest for diverse applications including memory, logic, and neuromorphic primitives; fast, accurate device models are therefore imperative for large-scale system design and verification. Extant DW motion models are sub-optimal for large-scale system design, either over-consuming compute resources with physics-heavy equations or oversimplifying the physics, drastically reducing model accuracy. We propose a DW model inspired by the phenomenological similarities between motions of a DW and a classical object being acted on by forces like air resistance or static friction. Our proposed phenomenological model predicts DW motion within 1.2% on average compared with micromagnetic simulations that are 400 times slower. Additionally, our model is seven times faster than extant collective coordinate models and 14 times more accurate than extant hyper-reduced models, making it an essential tool for large-scale DW circuit design and simulation. The model is publicly posted along with scripts that automatically extract model parameters from user-provided simulation or experimental data to extend the model to alternative micromagnetic parameters.
- Published
- 2024
15. Wavefront Threading Enables Effective High-Level Synthesis
- Author
-
Pelton, Blake, Sapek, Adam, Eguro, Ken, Lo, Daniel, Forin, Alessandro, Humphrey, Matt, Xi, Jinwen, Cox, David, Karandikar, Rajas, Licht, Johannes de Fine, Babin, Evgeny, Caulfield, Adrian, and Burger, Doug
- Subjects
Computer Science - Programming Languages
- Abstract
Digital systems are growing in importance and computing hardware is growing more heterogeneous. Hardware design, however, remains laborious and expensive, in part due to the limitations of conventional hardware description languages (HDLs) like VHDL and Verilog. A longstanding research goal has been programming hardware like software, with high-level languages that can generate efficient hardware designs. This paper describes Kanagawa, a language that takes a new approach to combine the programmer productivity benefits of traditional High-Level Synthesis (HLS) approaches with the expressibility and hardware efficiency of Register-Transfer Level (RTL) design. The language's concise syntax, matched with a hardware design-friendly execution model, permits a relatively simple toolchain to map high-level code into efficient hardware implementations., Comment: Accepted to PLDI'24
- Published
- 2024
- Full Text
- View/download PDF
16. Identifying type II quasars at intermediate redshift with few-shot learning photometric classification
- Author
-
Cunha, P. A. C., Humphrey, A., Brinchmann, J., Morais, S. G., Carvajal, R., Gomes, J. M., Matute, I., and Paulino-Afonso, A.
- Subjects
Astrophysics - Instrumentation and Methods for Astrophysics, Astrophysics - Astrophysics of Galaxies
- Abstract
We aim to identify QSO2 candidates in the redshift desert using optical and infrared photometry. At this intermediate redshift range, most of the prominent optical emission lines in QSO2 sources (e.g. CIV1549; [OIII]4959,5008) fall either outside the wavelength range of the SDSS optical spectra or in particularly noisy wavelength ranges, making QSO2 identification challenging. Therefore, we adopted a semi-supervised machine learning approach to select candidates in the SDSS galaxy sample. Recent applications of machine learning in astronomy focus on problems involving large data sets, with small data sets often being overlooked. We developed a few-shot learning approach for the identification and classification of rare-object classes using limited training data (200 sources). The new AMELIA pipeline uses a transfer-learning-based approach with decision trees, distance-based, and deep learning methods to build a classifier capable of identifying rare objects on the basis of an observational training data set. We validated the performance of AMELIA by addressing the problem of identifying QSO2s at 1 $\leq$ z $\leq$ 2 using SDSS and WISE photometry, obtaining an F1-score above 0.8 in a supervised approach. We then used AMELIA to select new QSO2 candidates in the redshift desert and examined the nature of the candidates using SDSS spectra, when available. In particular, we identified a sub-population of [NeV]3426 emitters at z $\sim$ 1.1, which are highly likely to contain obscured AGNs. We used X-ray and radio cross-matching to validate our classification and investigated the performance of photometric criteria from the literature, showing that our candidates have an inherent dusty nature. Finally, we derived physical properties for our QSO2 sample using photoionisation models and verified the AGN classification using SED fitting., Comment: 20 pages, 9 figures, Accepted for publication in A&A
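The distance-based branch of a few-shot pipeline like AMELIA can be approximated by a nearest-centroid classifier over already embedded photometric features. This toy uses synthetic data and is only a sketch of the idea, not the pipeline itself; in the real setting the inputs would be SDSS/WISE colours passed through a pretrained embedding.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_centroids(X, y):
    """Fit one prototype (mean vector) per class: a distance-based
    few-shot classifier that works with only tens of labelled sources."""
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict(X, classes, centroids):
    # Assign each source to the class of its nearest prototype.
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return classes[d2.argmin(axis=1)]

# Tiny synthetic 'rare object vs. contaminant' problem: 20 labelled
# sources per class, mimicking a limited training set.
X_train = np.concatenate([rng.normal(0.0, 0.3, (20, 4)),
                          rng.normal(1.0, 0.3, (20, 4))])
y_train = np.array([0] * 20 + [1] * 20)
classes, centroids = fit_centroids(X_train, y_train)
```

The appeal for rare-object searches is that a prototype per class needs far fewer labelled examples than training a deep classifier from scratch.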
- Published
- 2024
- Full Text
- View/download PDF
17. Euclid. I. Overview of the Euclid mission
- Author
-
Euclid Collaboration, Mellier, Y., Abdurro'uf, Barroso, J. A. Acevedo, Achúcarro, A., Adamek, J., Adam, R., Addison, G. E., Aghanim, N., Aguena, M., Ajani, V., Akrami, Y., Al-Bahlawan, A., Alavi, A., Albuquerque, I. S., Alestas, G., Alguero, G., Allaoui, A., Allen, S. W., Allevato, V., Alonso-Tetilla, A. V., Altieri, B., Alvarez-Candal, A., Alvi, S., Amara, A., Amendola, L., Amiaux, J., Andika, I. T., Andreon, S., Andrews, A., Angora, G., Angulo, R. E., Annibali, F., Anselmi, A., Anselmi, S., Arcari, S., Archidiacono, M., Aricò, G., Arnaud, M., Arnouts, S., Asgari, M., Asorey, J., Atayde, L., Atek, H., Atrio-Barandela, F., Aubert, M., Aubourg, E., Auphan, T., Auricchio, N., Aussel, B., Aussel, H., Avelino, P. P., Avgoustidis, A., Avila, S., Awan, S., Azzollini, R., Baccigalupi, C., Bachelet, E., Bacon, D., Baes, M., Bagley, M. B., Bahr-Kalus, B., Balaguera-Antolinez, A., Balbinot, E., Balcells, M., Baldi, M., Baldry, I., Balestra, A., Ballardini, M., Ballester, O., Balogh, M., Bañados, E., Barbier, R., Bardelli, S., Baron, M., Barreiro, T., Barrena, R., Barriere, J. -C., Barros, B. J., Barthelemy, A., Bartolo, N., Basset, A., Battaglia, P., Battisti, A. J., Baugh, C. M., Baumont, L., Bazzanini, L., Beaulieu, J. -P., Beckmann, V., Belikov, A. N., Bel, J., Bellagamba, F., Bella, M., Bellini, E., Benabed, K., Bender, R., Benevento, G., Bennett, C. L., Benson, K., Bergamini, P., Bermejo-Climent, J. R., Bernardeau, F., Bertacca, D., Berthe, M., Berthier, J., Bethermin, M., Beutler, F., Bevillon, C., Bhargava, S., Bhatawdekar, R., Bianchi, D., Bisigello, L., Biviano, A., Blake, R. P., Blanchard, A., Blazek, J., Blot, L., Bosco, A., Bodendorf, C., Boenke, T., Böhringer, H., Boldrini, P., Bolzonella, M., Bonchi, A., Bonici, M., Bonino, D., Bonino, L., Bonvin, C., Bon, W., Booth, J. T., Borgani, S., Borlaff, A. S., Borsato, E., Bose, B., Botticella, M. T., Boucaud, A., Bouche, F., Boucher, J. S., Boutigny, D., Bouvard, T., Bouwens, R., Bouy, H., Bowler, R. A. 
A., Bozza, V., Bozzo, E., Branchini, E., Brando, G., Brau-Nogue, S., Brekke, P., Bremer, M. N., Brescia, M., Breton, M. -A., Brinchmann, J., Brinckmann, T., Brockley-Blatt, C., Brodwin, M., Brouard, L., Brown, M. L., Bruton, S., Bucko, J., Buddelmeijer, H., Buenadicha, G., Buitrago, F., Burger, P., Burigana, C., Busillo, V., Busonero, D., Cabanac, R., Cabayol-Garcia, L., Cagliari, M. S., Caillat, A., Caillat, L., Calabrese, M., Calabro, A., Calderone, G., Calura, F., Quevedo, B. Camacho, Camera, S., Campos, L., Canas-Herrera, G., Candini, G. P., Cantiello, M., Capobianco, V., Cappellaro, E., Cappelluti, N., Cappi, A., Caputi, K. I., Cara, C., Carbone, C., Cardone, V. F., Carella, E., Carlberg, R. G., Carle, M., Carminati, L., Caro, F., Carrasco, J. M., Carretero, J., Carrilho, P., Duque, J. Carron, Carry, B., Carvalho, A., Carvalho, C. S., Casas, R., Casas, S., Casenove, P., Casey, C. M., Cassata, P., Castander, F. J., Castelao, D., Castellano, M., Castiblanco, L., Castignani, G., Castro, T., Cavet, C., Cavuoti, S., Chabaud, P. -Y., Chambers, K. C., Charles, Y., Charlot, S., Chartab, N., Chary, R., Chaumeil, F., Cho, H., Chon, G., Ciancetta, E., Ciliegi, P., Cimatti, A., Cimino, M., Cioni, M. -R. L., Claydon, R., Cleland, C., Clément, B., Clements, D. L., Clerc, N., Clesse, S., Codis, S., Cogato, F., Colbert, J., Cole, R. E., Coles, P., Collett, T. E., Collins, R. S., Colodro-Conde, C., Colombo, C., Combes, F., Conforti, V., Congedo, G., Conseil, S., Conselice, C. J., Contarini, S., Contini, T., Conversi, L., Cooray, A. R., Copin, Y., Corasaniti, P. -S., Corcho-Caballero, P., Corcione, L., Cordes, O., Corpace, O., Correnti, M., Costanzi, M., Costille, A., Courbin, F., Mifsud, L. Courcoult, Courtois, H. M., Cousinou, M. -C., Covone, G., Cowell, T., Cragg, C., Cresci, G., Cristiani, S., Crocce, M., Cropper, M., Crouzet, P. E, Csizi, B., Cuby, J. -G., Cucchetti, E., Cucciati, O., Cuillandre, J. -C., Cunha, P. A. 
C., Cuozzo, V., Daddi, E., D'Addona, M., Dafonte, C., Dagoneau, N., Dalessandro, E., Dalton, G. B., D'Amico, G., Dannerbauer, H., Danto, P., Das, I., Da Silva, A., da Silva, R., Doumerg, W. d'Assignies, Daste, G., Davies, J. E., Davini, S., Dayal, P., de Boer, T., Decarli, R., De Caro, B., Degaudenzi, H., Degni, G., de Jong, J. T. A., de la Bella, L. F., de la Torre, S., Delhaise, F., Delley, D., Delucchi, G., De Lucia, G., Denniston, J., De Paolis, F., De Petris, M., Derosa, A., Desai, S., Desjacques, V., Despali, G., Desprez, G., De Vicente-Albendea, J., Deville, Y., Dias, J. D. F., Díaz-Sánchez, A., Diaz, J. J., Di Domizio, S., Diego, J. M., Di Ferdinando, D., Di Giorgio, A. M., Dimauro, P., Dinis, J., Dolag, K., Dolding, C., Dole, H., Sánchez, H. Domínguez, Doré, O., Dournac, F., Douspis, M., Dreihahn, H., Droge, B., Dryer, B., Dubath, F., Duc, P. -A., Ducret, F., Duffy, C., Dufresne, F., Duncan, C. A. J., Dupac, X., Duret, V., Durrer, R., Durret, F., Dusini, S., Ealet, A., Eggemeier, A., Eisenhardt, P. R. M., Elbaz, D., Elkhashab, M. Y., Ellien, A., Endicott, J., Enia, A., Erben, T., Vigo, J. A. Escartin, Escoffier, S., Sanz, I. Escudero, Essert, J., Ettori, S., Ezziati, M., Fabbian, G., Fabricius, M., Fang, Y., Farina, A., Farina, M., Farinelli, R., Farrens, S., Faustini, F., Feltre, A., Ferguson, A. M. N., Ferrando, P., Ferrari, A. G., Ferré-Mateu, A., Ferreira, P. G., Ferreras, I., Ferrero, I., Ferriol, S., Ferruit, P., Filleul, D., Finelli, F., Finkelstein, S. L., Finoguenov, A., Fiorini, B., Flentge, F., Focardi, P., Fonseca, J., Fontana, A., Fontanot, F., Fornari, F., Fosalba, P., Fossati, M., Fotopoulou, S., Fouchez, D., Fourmanoit, N., Frailis, M., Fraix-Burnet, D., Franceschi, E., Franco, A., Franzetti, P., Freihoefer, J., Frenk, C. . S., Frittoli, G., Frugier, P. -A., Frusciante, N., Fumagalli, A., Fumagalli, M., Fumana, M., Fu, Y., Gabarra, L., Galeotta, S., Galluccio, L., Ganga, K., Gao, H., García-Bellido, J., Garcia, K., Gardner, J. 
P., Garilli, B., Gaspar-Venancio, L. -M., Gasparetto, T., Gautard, V., Gavazzi, R., Gaztanaga, E., Genolet, L., Santos, R. Genova, Gentile, F., George, K., Gerbino, M., Ghaffari, Z., Giacomini, F., Gianotti, F., Gibb, G. P. S., Gillard, W., Gillis, B., Ginolfi, M., Giocoli, C., Girardi, M., Giri, S. K., Goh, L. W. K., Gómez-Alvarez, P., Gonzalez-Perez, V., Gonzalez, A. H., Gonzalez, E. J., Gonzalez, J. C., Beauchamps, S. Gouyou, Gozaliasl, G., Gracia-Carpio, J., Grandis, S., Granett, B. R., Granvik, M., Grazian, A., Gregorio, A., Grenet, C., Grillo, C., Grupp, F., Gruppioni, C., Gruppuso, A., Guerbuez, C., Guerrini, S., Guidi, M., Guillard, P., Gutierrez, C. M., Guttridge, P., Guzzo, L., Gwyn, S., Haapala, J., Haase, J., Haddow, C. R., Hailey, M., Hall, A., Hall, D., Hamaus, N., Haridasu, B. S., Harnois-Déraps, J., Harper, C., Hartley, W. G., Hasinger, G., Hassani, F., Hatch, N. A., Haugan, S. V. H., Häußler, B., Heavens, A., Heisenberg, L., Helmi, A., Helou, G., Hemmati, S., Henares, K., Herent, O., Hernández-Monteagudo, C., Heuberger, T., Hewett, P. C., Heydenreich, S., Hildebrandt, H., Hirschmann, M., Hjorth, J., Hoar, J., Hoekstra, H., Holland, A. D., Holliman, M. S., Holmes, W., Hook, I., Horeau, B., Hormuth, F., Hornstrup, A., Hosseini, S., Hu, D., Hudelot, P., Hudson, M. J., Huertas-Company, M., Huff, E. M., Hughes, A. C. N., Humphrey, A., Hunt, L. K., Huynh, D. D., Ibata, R., Ichikawa, K., Iglesias-Groth, S., Ilbert, O., Ilić, S., Ingoglia, L., Iodice, E., Israel, H., Israelsson, U. E., Izzo, L., Jablonka, P., Jackson, N., Jacobson, J., Jafariyazani, M., Jahnke, K., Jain, B., Jansen, H., Jarvis, M. J., Jasche, J., Jauzac, M., Jeffrey, N., Jhabvala, M., Jimenez-Teja, Y., Muñoz, A. Jimenez, Joachimi, B., Johansson, P. H., Joudaki, S., Jullo, E., Kajava, J. J. E., Kang, Y., Kannawadi, A., Kansal, V., Karagiannis, D., Kärcher, M., Kashlinsky, A., Kazandjian, M. 
V., Keck, F., Keihänen, E., Kerins, E., Kermiche, S., Khalil, A., Kiessling, A., Kiiveri, K., Kilbinger, M., Kim, J., King, R., Kirkpatrick, C. C., Kitching, T., Kluge, M., Knabenhans, M., Knapen, J. H., Knebe, A., Kneib, J. -P., Kohley, R., Koopmans, L. V. E., Koskinen, H., Koulouridis, E., Kou, R., Kovács, A., Kovačić, I., Kowalczyk, A., Koyama, K., Kraljic, K., Krause, O., Kruk, S., Kubik, B., Kuchner, U., Kuijken, K., Kümmel, M., Kunz, M., Kurki-Suonio, H., Lacasa, F., Lacey, C. G., La Franca, F., Lagarde, N., Lahav, O., Laigle, C., La Marca, A., La Marle, O., Lamine, B., Lam, M. C., Lançon, A., Landt, H., Langer, M., Lapi, A., Larcheveque, C., Larsen, S. S., Lattanzi, M., Laudisio, F., Laugier, D., Laureijs, R., Laurent, V., Lavaux, G., Lawrenson, A., Lazanu, A., Lazeyras, T., Boulc'h, Q. Le, Brun, A. M. C. Le, Brun, V. Le, Leclercq, F., Lee, S., Graet, J. Le, Legrand, L., Leirvik, K. N., Jeune, M. Le, Lembo, M., Mignant, D. Le, Lepinzan, M. D., Lepori, F., Reun, A. Le, Leroy, G., Lesci, G. F., Lesgourgues, J., Leuzzi, L., Levi, M. E., Liaudat, T. I., Libet, G., Liebing, P., Ligori, S., Lilje, P. B., Lin, C. -C., Linde, D., Linder, E., Lindholm, V., Linke, L., Li, S. -S., Liu, S. J., Lloro, I., Lobo, F. S. N., Lodieu, N., Lombardi, M., Lombriser, L., Lonare, P., Longo, G., López-Caniego, M., Lopez, X. Lopez, Alvarez, J. Lorenzo, Loureiro, A., Loveday, J., Lusso, E., Macias-Perez, J., Maciaszek, T., Maggio, G., Magliocchetti, M., Magnard, F., Magnier, E. A., Magro, A., Mahler, G., Mainetti, G., Maino, D., Maiorano, E., Malavasi, N., Mamon, G. A., Mancini, C., Mandelbaum, R., Manera, M., Manjón-García, A., Mannucci, F., Mansutti, O., Outeiro, M. Manteiga, Maoli, R., Maraston, C., Marcin, S., Marcos-Arenal, P., Margalef-Bentabol, B., Marggraf, O., Marinucci, D., Marinucci, M., Markovic, K., Marleau, F. R., Marpaud, J., Martignac, J., Martín-Fleitas, J., Martin-Moruno, P., Martin, E. L., Martinelli, M., Martinet, N., Martin, H., Martins, C. J. A. 
P., Marulli, F., Massari, D., Massey, R., Masters, D. C., Matarrese, S., Matsuoka, Y., Matthew, S., Maughan, B. J., Mauri, N., Maurin, L., Maurogordato, S., McCarthy, K., McConnachie, A. W., McCracken, H. J., McDonald, I., McEwen, J. D., McPartland, C. J. R., Medinaceli, E., Mehta, V., Mei, S., Melchior, M., Melin, J. -B., Ménard, B., Mendes, J., Mendez-Abreu, J., Meneghetti, M., Mercurio, A., Merlin, E., Metcalf, R. B., Meylan, G., Migliaccio, M., Mignoli, M., Miller, L., Miluzio, M., Milvang-Jensen, B., Mimoso, J. P., Miquel, R., Miyatake, H., Mobasher, B., Mohr, J. J., Monaco, P., Monguió, M., Montoro, A., Mora, A., Dizgah, A. Moradinezhad, Moresco, M., Moretti, C., Morgante, G., Morisset, N., Moriya, T. J., Morris, P. W., Mortlock, D. J., Moscardini, L., Mota, D. F., Mottet, S., Moustakas, L. A., Moutard, T., Müller, T., Munari, E., Murphree, G., Murray, C., Murray, N., Musi, P., Nadathur, S., Nagam, B. C., Nagao, T., Naidoo, K., Nakajima, R., Nally, C., Natoli, P., Navarro-Alsina, A., Girones, D. Navarro, Neissner, C., Nersesian, A., Nesseris, S., Nguyen-Kim, H. N., Nicastro, L., Nichol, R. C., Nielbock, M., Niemi, S. -M., Nieto, S., Nilsson, K., Noller, J., Norberg, P., Nouri-Zonoz, A., Ntelis, P., Nucita, A. A., Nugent, P., Nunes, N. J., Nutma, T., Ocampo, I., Odier, J., Oesch, P. A., Oguri, M., Oliveira, D. Magalhaes, Onoue, M., Oosterbroek, T., Oppizzi, F., Ordenovic, C., Osato, K., Pacaud, F., Pace, F., Padilla, C., Paech, K., Pagano, L., Page, M. J., Palazzi, E., Paltani, S., Pamuk, S., Pandolfi, S., Paoletti, D., Paolillo, M., Papaderos, P., Pardede, K., Parimbelli, G., Parmar, A., Partmann, C., Pasian, F., Passalacqua, F., Paterson, K., Patrizii, L., Pattison, C., Paulino-Afonso, A., Paviot, R., Peacock, J. A., Pearce, F. R., Pedersen, K., Peel, A., Peletier, R. F., Ibanez, M. Pellejero, Pello, R., Penny, M. T., Percival, W. 
J., Perez-Garrido, A., Perotto, L., Pettorino, V., Pezzotta, A., Pezzuto, S., Philippon, A., Pierre, M., Piersanti, O., Pietroni, M., Piga, L., Pilo, L., Pires, S., Pisani, A., Pizzella, A., Pizzuti, L., Plana, C., Polenta, G., Pollack, J. E., Poncet, M., Pöntinen, M., Pool, P., Popa, L. A., Popa, V., Popp, J., Porciani, C., Porth, L., Potter, D., Poulain, M., Pourtsidou, A., Pozzetti, L., Prandoni, I., Pratt, G. W., Prezelus, S., Prieto, E., Pugno, A., Quai, S., Quilley, L., Racca, G. D., Raccanelli, A., Rácz, G., Radinović, S., Radovich, M., Ragagnin, A., Ragnit, U., Raison, F., Ramos-Chernenko, N., Ranc, C., Rasera, Y., Raylet, N., Rebolo, R., Refregier, A., Reimberg, P., Reiprich, T. H., Renk, F., Renzi, A., Retre, J., Revaz, Y., Reylé, C., Reynolds, L., Rhodes, J., Ricci, F., Ricci, M., Riccio, G., Ricken, S. O., Rissanen, S., Risso, I., Rix, H. -W., Robin, A. C., Rocca-Volmerange, B., Rocci, P. -F., Rodenhuis, M., Rodighiero, G., Monroy, M. Rodriguez, Rollins, R. P., Romanello, M., Roman, J., Romelli, E., Romero-Gomez, M., Roncarelli, M., Rosati, P., Rosset, C., Rossetti, E., Roster, W., Rottgering, H. J. A., Rozas-Fernández, A., Ruane, K., Rubino-Martin, J. A., Rudolph, A., Ruppin, F., Rusholme, B., Sacquegna, S., Sáez-Casares, I., Saga, S., Saglia, R., Sahlén, M., Saifollahi, T., Sakr, Z., Salvalaggio, J., Salvaterra, R., Salvati, L., Salvato, M., Salvignol, J. -C., Sánchez, A. G., Sanchez, E., Sanders, D. B., Sapone, D., Saponara, M., Sarpa, E., Sarron, F., Sartori, S., Sartoris, B., Sassolas, B., Sauniere, L., Sauvage, M., Sawicki, M., Scaramella, R., Scarlata, C., Scharré, L., Schaye, J., Schewtschenko, J. A., Schindler, J. 
-T., Schinnerer, E., Schirmer, M., Schmidt, F., Schmidt, M., Schneider, A., Schneider, M., Schneider, P., Schöneberg, N., Schrabback, T., Schultheis, M., Schulz, S., Schuster, N., Schwartz, J., Sciotti, D., Scodeggio, M., Scognamiglio, D., Scott, D., Scottez, V., Secroun, A., Sefusatti, E., Seidel, G., Seiffert, M., Sellentin, E., Selwood, M., Semboloni, E., Sereno, M., Serjeant, S., Serrano, S., Setnikar, G., Shankar, F., Sharples, R. M., Short, A., Shulevski, A., Shuntov, M., Sias, M., Sikkema, G., Silvestri, A., Simon, P., Sirignano, C., Sirri, G., Skottfelt, J., Slezak, E., Sluse, D., Smith, G. P., Smith, L. C., Smith, R. E., Smit, S. J. A., Soldano, F., Solheim, B. G. B., Sorce, J. G., Sorrenti, F., Soubrie, E., Spinoglio, L., Mancini, A. Spurio, Stadel, J., Stagnaro, L., Stanco, L., Stanford, S. A., Starck, J. -L., Stassi, P., Steinwagner, J., Stern, D., Stone, C., Strada, P., Strafella, F., Stramaccioni, D., Surace, C., Sureau, F., Suyu, S. H., Swindells, I., Szafraniec, M., Szapudi, I., Taamoli, S., Talia, M., Tallada-Crespí, P., Tanidis, K., Tao, C., Tarrío, P., Tavagnacco, D., Taylor, A. N., Taylor, J. E., Taylor, P. L., Teixeira, E. M., Tenti, M., Idiago, P. Teodoro, Teplitz, H. I., Tereno, I., Tessore, N., Testa, V., Testera, G., Tewes, M., Teyssier, R., Theret, N., Thizy, C., Thomas, P. D., Toba, Y., Toft, S., Toledo-Moreo, R., Tolstoy, E., Tommasi, E., Torbaniuk, O., Torradeflot, F., Tortora, C., Tosi, S., Tosti, S., Trifoglio, M., Troja, A., Trombetti, T., Tronconi, A., Tsedrik, M., Tsyganov, A., Tucci, M., Tutusaus, I., Uhlemann, C., Ulivi, L., Urbano, M., Vacher, L., Vaillon, L., Valageas, P., Valdes, I., Valentijn, E. A., Valenziano, L., Valieri, C., Valiviita, J., Broeck, M. Van den, Vassallo, T., Vavrek, R., Vega-Ferrero, J., Venemans, B., Venhola, A., Ventura, S., Kleijn, G. 
Verdoes, Vergani, D., Verma, A., Vernizzi, F., Veropalumbo, A., Verza, G., Vescovi, C., Vibert, D., Viel, M., Vielzeuf, P., Viglione, C., Viitanen, A., Villaescusa-Navarro, F., Vinciguerra, S., Visticot, F., Voggel, K., von Wietersheim-Kramsta, M., Vriend, W. J., Wachter, S., Walmsley, M., Walth, G., Walton, D. M., Walton, N. A., Wander, M., Wang, L., Wang, Y., Weaver, J. R., Weller, J., Wetzstein, M., Whalen, D. J., Whittam, I. H., Widmer, A., Wiesmann, M., Wilde, J., Williams, O. R., Winther, H. -A., Wittje, A., Wong, J. H. W., Wright, A. H., Yankelevich, V., Yeung, H. W., Yoon, M., Youles, S., Yung, L. Y. A., Zacchei, A., Zalesky, L., Zamorani, G., Vitorelli, A. Zamorano, Marc, M. Zanoni, Zennaro, M., Zerbi, F. M., Zinchenko, I. A., Zoubian, J., Zucca, E., and Zumalacarregui, M.
- Subjects
Astrophysics - Cosmology and Nongalactic Astrophysics, Astrophysics - Astrophysics of Galaxies, Astrophysics - Instrumentation and Methods for Astrophysics
- Abstract
The current standard model of cosmology successfully describes a variety of measurements, but the nature of its main ingredients, dark matter and dark energy, remains unknown. Euclid is a medium-class mission in the Cosmic Vision 2015-2025 programme of the European Space Agency (ESA) that will provide high-resolution optical imaging, as well as near-infrared imaging and spectroscopy, over about 14,000 deg^2 of extragalactic sky. In addition to accurate weak lensing and clustering measurements that probe structure formation over half of the age of the Universe, its primary probes for cosmology, these exquisite data will enable a wide range of science. This paper provides a high-level overview of the mission, summarising the survey characteristics, the various data-processing steps, and data products. We also highlight the main science objectives and expected performance., Comment: Accepted for publication in the A&A special issue `Euclid on Sky'
- Published
- 2024
18. Designing Adaptive User Interfaces for mHealth applications targeting chronic disease: A User-Centric Approach
- Author
-
Wang, Wei, Grundy, John, Khalajzadeh, Hourieh, Madugalla, Anuradha, and Obie, Humphrey O.
- Subjects
Computer Science - Human-Computer Interaction, Computer Science - Software Engineering
- Abstract
mHealth interventions show significant potential to help in the self-management of chronic diseases, but their underuse remains a problem. Considering the substantial diversity among individuals dealing with chronic diseases, tailored strategies are essential. Adaptive User Interfaces (AUIs) may help address the diverse and evolving needs of this demographic. To investigate this approach, we developed an AUI prototype informed by existing literature findings. We then used this prototype as the basis for focus group discussions and interview studies with 22 participants managing various chronic diseases, and follow-up surveys of all participants. Through these investigations, we pinpointed key challenges related to the use of AUIs, strategies to improve adaptation design, and potential trade-offs between these challenges and strategies. Concurrently, a quantitative survey of 90 further participants was conducted to extract preferences for AUIs in chronic disease-related applications. This uncovered participants' preferences for various adaptations, data types, collection methods, and involvement levels. Finally, we synthesised these insights and categories, aligning them with existing guidelines and design considerations for mHealth app adaptation design. This resulted in nine guidelines that we refined through a final feedback survey with 20 participants.
- Published
- 2024
19. CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts
- Author
-
Li, Jiachen, Wang, Xinyao, Zhu, Sijie, Kuo, Chia-Wen, Xu, Lu, Chen, Fan, Jain, Jitesh, Shi, Humphrey, and Wen, Longyin
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
Recent advancements in Multimodal Large Language Models (LLMs) have focused primarily on scaling by increasing text-image pair data and enhancing LLMs to improve performance on multimodal tasks. However, these scaling approaches are computationally expensive and overlook the significance of improving model capabilities from the vision side. Inspired by the successful applications of Mixture-of-Experts (MoE) in LLMs, which improves model scalability during training while keeping inference costs similar to those of smaller models, we propose CuMo. CuMo incorporates Co-upcycled Top-K sparsely-gated Mixture-of-experts blocks into both the vision encoder and the MLP connector, thereby enhancing the multimodal LLMs with minimal additional activated parameters during inference. CuMo first pre-trains the MLP blocks and then initializes each expert in the MoE block from the pre-trained MLP block during the visual instruction tuning stage. Auxiliary losses are used to ensure a balanced loading of experts. CuMo outperforms state-of-the-art multimodal LLMs across various VQA and visual-instruction-following benchmarks using models within each model size group, all while training exclusively on open-sourced datasets. The code and model weights for CuMo are open-sourced at https://github.com/SHI-Labs/CuMo.
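A top-K sparsely-gated MoE block of the kind this abstract describes can be sketched as follows; the co-upcycling step, in which each expert is initialised from one pretrained MLP, is mimicked here by copying a base weight matrix and adding small noise. All dimensions and the tanh MLP are invented for illustration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class TopKMoE:
    """Sparsely-gated mixture-of-experts MLP block (illustrative only).

    Each token is routed to its top-k experts and their outputs are
    combined with gate weights renormalised over the chosen experts.
    'Upcycling' is mimicked by initialising every expert from one base
    matrix plus small noise; a real block would copy a pretrained MLP.
    """
    def __init__(self, d, n_experts=4, k=2, seed=0):
        rng = np.random.default_rng(seed)
        base = rng.normal(0.0, 0.1, (d, d))  # stand-in for the pretrained MLP
        self.experts = [base + rng.normal(0.0, 0.01, (d, d))
                        for _ in range(n_experts)]
        self.gate = rng.normal(0.0, 0.1, (d, n_experts))
        self.k = k

    def __call__(self, x):                       # x: (n_tokens, d)
        logits = x @ self.gate                   # (n_tokens, n_experts)
        topk = np.argsort(logits, axis=1)[:, -self.k:]
        out = np.zeros_like(x)
        for i, row in enumerate(x):
            w = softmax(logits[i, topk[i]])      # renormalised gate weights
            for weight, e in zip(w, topk[i]):
                out[i] += weight * np.tanh(row @ self.experts[e])
        return out
```

Only k experts run per token, which is what keeps inference cost close to a dense model with the same number of activated parameters; an auxiliary load-balancing loss would additionally penalise the gate for routing every token to the same experts.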
- Published
- 2024
20. Musculoskeletal injuries in real tennis
- Author
-
Humphrey JA, Humphrey PP, Greenwood AS, Anderson JL, Markus HS, and Ajuied A
- Subjects
Epidemiology, real (court) tennis, musculoskeletal injuries, Sports medicine, RC1200-1245
- Abstract
JA Humphrey,1 PP Humphrey,2 AS Greenwood,3 JL Anderson,4 HS Markus,5 A Ajuied6. 1Orthopaedic Department, Milton Keynes University Hospital, Milton Keynes, MK6 5LD, UK; 2School of Pharmacy, University College London, London, WC1N 1AX, UK; 3Department of Sport and Exercise Sciences, St Mary’s University, Twickenham, TW1 4SX, UK; 4Medical Education Department, University of Brighton, Brighton, BN1 9PH, UK; 5Department of Clinical Neurosciences, University of Cambridge, Cambridge CB2 2PY, UK; 6Orthopaedic Department, Guys’ and St Thomas’ NHS Trust, London SE1 9RT, UK. Introduction: Real tennis is a growing, unique, and well-established sport. To date, there has been no epidemiological data on real tennis injuries. The primary aim of this retrospective study is to record the incidence of, and document any trends in, real tennis musculoskeletal injuries, so as to improve injury awareness of common and possibly preventable injuries. Methods: A surveillance questionnaire was e-mailed to 2,036 Tennis & Rackets Association members to retrospectively capture injuries sustained by amateur and professional real tennis players over their playing careers. Results: A total of 485 (438 males and 47 females) questionnaires were fully completed over 4 weeks. A total of 662 musculoskeletal injuries were recorded, with a mean of 1.4 injuries per player (range 0–7). The incidence of sustaining an acute real tennis musculoskeletal injury is 0.4/1000 hrs. The three anatomical locations most often reported injured were the elbow, 15.6% (103/662); knee, 11.6% (77/662); and face, 10.0% (66/662). The most common structures reported injured were muscle, 24% (161/661); tendon, 23.4% (155/661); ligament, 7.0% (46/661); soft-tissue bruising, 6.5% (43/661); and eye, 6.2% (41/661). The majority of upper limb injuries were of gradual onset (64.7%, 143/221), and the majority of lower limb injuries were of sudden onset (72.0%, 188/261). Conclusion: This study uniquely provides valuable preliminary data on the incidence and patterns of musculoskeletal injuries in real tennis players. In addition, it highlights a number of reported eye injuries. The study is also a benchmark for future prospective studies on academy and professional real tennis players. Keywords: epidemiology, musculoskeletal injuries, real tennis
- Published
- 2019
21. Predictors of Listening-Related Fatigue in Adolescents with Hearing Loss
- Author
-
Kelsey E. Klein, Lauren A. Harris, Elizabeth L. Humphrey, Emily C. Noss, Autumn M. Sanderson, and Kelly R. Yeager
- Abstract
Purpose: Self-reported listening-related fatigue in adolescents with hearing loss (HL) was investigated. Specifically, the extent to which listening-related fatigue is associated with school accommodations, audiologic characteristics, and listening breaks was examined. Method: Participants were 144 adolescents with HL ages 12-19 years. Data were collected online via Qualtrics. The Vanderbilt Fatigue Scale-Child was used to measure listening-related fatigue. Participants also reported on their use of listening breaks and school accommodations, including an Individualized Education Program (IEP) or 504 plan, remote microphone systems, closed captioning, preferential seating, sign language interpreters, live transcriptions, and notetakers. Results: After controlling for age, HL laterality, and self-perceived listening difficulty, adolescents with an IEP or a 504 plan reported lower listening-related fatigue compared to adolescents without an IEP or a 504 plan. Adolescents who more frequently used remote microphone systems or notetakers reported higher listening-related fatigue compared to adolescents who used these accommodations less frequently, whereas increased use of a sign language interpreter was associated with decreased listening-related fatigue. Among adolescents with unilateral HL, higher age was associated with lower listening-related fatigue; no effect of age was found among adolescents with bilateral HL. Listening-related fatigue did not differ based on hearing device configuration. Conclusions: Adolescents with HL should be considered at risk for listening-related fatigue regardless of the type of hearing devices used or the degree of HL. The individualized support provided by an IEP or 504 plan may help alleviate listening-related fatigue, especially by empowering adolescents with HL to be self-advocates in terms of their listening needs and accommodations in school. 
Additional research is needed to better understand the role of specific school accommodations and listening breaks in addressing listening-related fatigue.
- Published
- 2024
- Full Text
- View/download PDF
22. Implementing Targeted Social and Emotional Learning Interventions in Schools--Are More Specific Models Needed?
- Author
-
Caroline Bond, Vanessa Evans, and Neil Humphrey
- Abstract
Schools are increasingly encouraged to adopt evidence-based or evidence informed interventions and implement them using insights from implementation science. The literature relating to implementation of interventions in schools has focused largely on universal interventions, particularly for social and emotional learning (SEL), which are designed for all children and young people. In contrast, targeted interventions provide additional support for those pupils who may require small group or individual support over and above that provided at the universal level. To date there has been limited consideration of factors which are important for the implementation of targeted SEL interventions. Data from an exploratory case study with two schools implementing Lego therapy are used to illustrate the implementation factors relevant to this targeted intervention. Findings indicate similarities in universal and targeted intervention core components and factors but also a number of distinct elements that are important to consider when implementing Lego therapy and potentially other targeted SEL interventions. Key considerations include the interaction with the wider school system, the pivotal role of the intervention champion, and the importance of external support for problem solving and sustainability. The resulting model may inform further development of implementation frameworks for Lego therapy and other targeted SEL interventions.
- Published
- 2024
- Full Text
- View/download PDF
23. MP17-06 IMPACT OF SUBSEQUENT FELLOWSHIP ON CHIEF RESIDENT CASE LOG VOLUMES
- Author
-
Mercedes, Raidizon, Corey, Zachary, Gaither, Talmadge, Lehman, Erik, Lemack, Gary E, Clifton, Marisa M, Klausner, Adam P, Mehta, Akanksha, Atiemo, Humphrey, Lee, Richard, Sorensen, Matthew, Smith, Ryan, Buckley, Jill, Thompson, R Houston, Breyer, Benjamin N, Badalato, Gina M, Wallen, Eric M, and Raman, Jay D
- Subjects
Biomedical and Clinical Sciences, Clinical Sciences
- Published
- 2024
24. MP17-04 TRENDS IN CHIEF RESIDENT CASE LOGS VERSUS SUBSEQUENT CASE LOG DATA IN CLINICAL PRACTICE
- Author
-
Corey, Zachary, Lehman, Erik, Lemack, Gary E, Clifton, Marisa M, Klausner, Adam P, Mehta, Akanksha, Atiemo, Humphrey, Lee, Richard, Sorensen, Mathew, Smith, Ryan, Buckley, Jill, Thompson, R Houston, Breyer, Benjamin N, Badalato, Gina M, Wallen, Erik M, Cain, Mark, Wolf, J Stuart, and Raman, Jay D
- Subjects
Biomedical and Clinical Sciences, Clinical Sciences
- Published
- 2024
25. UVMap-ID: A Controllable and Personalized UV Map Generative Model
- Author
-
Wang, Weijie, Zhang, Jichao, Liu, Chang, Li, Xia, Xu, Xingqian, Shi, Humphrey, Sebe, Nicu, and Lepri, Bruno
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
Recently, diffusion models have made significant strides in synthesizing realistic 2D human images based on provided text prompts. Building upon this, researchers have extended 2D text-to-image diffusion models into the 3D domain for generating human textures (UV maps). However, some important problems with UV map generative models remain unsolved, i.e., how to generate personalized texture maps for any given face image, and how to define and evaluate the quality of these generated texture maps. To solve these problems, we introduce a novel method, UVMap-ID, a controllable and personalized UV map generative model. Unlike traditional large-scale training methods in 2D, we propose to fine-tune a pre-trained text-to-image diffusion model integrated with a face fusion module to achieve ID-driven customized generation. To support the fine-tuning strategy, we introduce a small-scale, attribute-balanced training dataset including high-quality textures with labeled text and face IDs. Additionally, we introduce metrics to evaluate multiple aspects of the generated textures. Finally, both quantitative and qualitative analyses demonstrate the effectiveness of our method in controllable and personalized UV map generation. Code is publicly available via https://github.com/twowwj/UVMap-ID., Comment: Accepted to ACMMM2024
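One way to picture the ID-driven conditioning is a fusion module that projects a face-recognition embedding into a few extra conditioning tokens appended to the text embedding. This is a hypothetical sketch with invented dimensions and names, not the released UVMap-ID code.

```python
import numpy as np

class FaceFusion:
    """Hypothetical ID-conditioning step (invented shapes and names).

    Projects a face-recognition embedding into a few extra conditioning
    tokens that are appended to the text-encoder output, so a diffusion
    UNet's cross-attention can attend to identity as well as text.
    """
    def __init__(self, id_dim=512, text_dim=768, n_id_tokens=4, seed=0):
        rng = np.random.default_rng(seed)
        self.proj = rng.normal(0.0, 0.02, (id_dim, n_id_tokens * text_dim))
        self.n_id_tokens, self.text_dim = n_id_tokens, text_dim

    def __call__(self, text_tokens, face_id):
        # text_tokens: (T, text_dim); face_id: (id_dim,)
        id_tokens = (face_id @ self.proj).reshape(self.n_id_tokens, self.text_dim)
        return np.concatenate([text_tokens, id_tokens], axis=0)
```

During fine-tuning, only a small module like this (plus adapter weights) would be trained while the pretrained diffusion backbone stays mostly frozen, which is what makes a small attribute-balanced dataset sufficient.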
- Published
- 2024
26. FSGe: A fast and strongly-coupled 3D fluid-solid-growth interaction method
- Author
-
Pfaller, Martin R., Latorre, Marcos, Schwarz, Erica L., Gerosa, Fannie M., Szafron, Jason M., Humphrey, Jay D., and Marsden, Alison L.
- Subjects
Computer Science - Computational Engineering, Finance, and Science - Abstract
Equilibrated fluid-solid-growth (FSGe) is a fast, open source, three-dimensional (3D) computational platform for simulating interactions between instantaneous hemodynamics and long-term vessel wall adaptation through mechanobiologically equilibrated growth and remodeling (G&R). Such models can capture evolving geometry, composition, and material properties in health and disease and following clinical interventions. In traditional G&R models, this feedback is modeled through highly simplified fluid solutions, neglecting local variations in blood pressure and wall shear stress (WSS). FSGe overcomes these inherent limitations by strongly coupling the 3D Navier-Stokes equations for blood flow with a 3D equilibrated constrained mixture model (CMMe) for vascular tissue G&R. CMMe allows one to predict long-term evolved mechanobiological equilibria from an original homeostatic state at a computational cost equivalent to that of a standard hyperelastic material model. In illustrative computational examples, we focus on the development of a stable aortic aneurysm in a mouse model to highlight key differences in growth patterns between FSGe and solid-only G&R models. We show that FSGe is especially important in blood vessels with asymmetric stimuli. Simulation results reveal greater local variation in fluid-derived WSS than in intramural stress (IMS). Thus, differences between FSGe and G&R models become more pronounced with the growing influence of WSS relative to pressure. Future applications in highly localized disease processes, such as for lesion formation in atherosclerosis, can now include spatial and temporal variations of WSS.
- Published
- 2024
- Full Text
- View/download PDF
27. OpenBias: Open-set Bias Detection in Text-to-Image Generative Models
- Author
-
D'Incà, Moreno, Peruzzo, Elia, Mancini, Massimiliano, Xu, Dejia, Goel, Vidit, Xu, Xingqian, Wang, Zhangyang, Shi, Humphrey, and Sebe, Nicu
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Artificial Intelligence - Abstract
Text-to-image generative models are becoming increasingly popular and accessible to the general public. As these models see large-scale deployments, it is necessary to deeply investigate their safety and fairness so that they do not disseminate or perpetuate biases. However, existing works focus on detecting closed sets of biases defined a priori, limiting studies to well-known concepts. In this paper, we tackle the challenge of open-set bias detection in text-to-image generative models, presenting OpenBias, a new pipeline that identifies and quantifies the severity of biases agnostically, without access to any precompiled set. OpenBias has three stages. In the first phase, we leverage a Large Language Model (LLM) to propose biases given a set of captions. Secondly, the target generative model produces images using the same set of captions. Lastly, a Vision Question Answering model recognizes the presence and extent of the previously proposed biases. We study the behavior of Stable Diffusion 1.5, 2, and XL, emphasizing new biases never investigated before. Via quantitative experiments, we demonstrate that OpenBias agrees with current closed-set bias detection methods and human judgement., Comment: CVPR 2024 Highlight - Code: https://github.com/Picsart-AI-Research/OpenBias
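The three-stage pipeline the abstract describes (LLM proposes biases, the target model generates images, a VQA model checks them) can be sketched as a skeleton. All three model calls below are hypothetical stubs, and the skew-based severity score is an illustrative stand-in, not the paper's metric; the real implementation is at the linked repository.

```python
from collections import Counter

def propose_biases(captions):
    # Stage 1 (stub): an LLM would propose candidate biases per caption,
    # each with a probing question and candidate answer classes.
    return [{"bias": "gender", "question": "What is the person's gender?",
             "classes": ["male", "female"]} for _ in captions]

def generate_image(caption):
    # Stage 2 (stub): the target text-to-image model under study.
    return f"<image for: {caption}>"

def vqa_answer(image, question, classes):
    # Stage 3 (stub): a VQA model picks one of the candidate classes;
    # here chosen deterministically within a run for illustration.
    return classes[hash(image) % len(classes)]

def openbias_severity(captions):
    """Toy severity: skew of the VQA answer distribution over generations
    (0.5 = perfectly balanced for two classes, 1.0 = fully skewed)."""
    counts = Counter()
    for caption, prop in zip(captions, propose_biases(captions)):
        image = generate_image(caption)
        counts[vqa_answer(image, prop["question"], prop["classes"])] += 1
    return max(counts.values()) / sum(counts.values())

print(openbias_severity([f"a photo of a doctor #{i}" for i in range(20)]))
```

Swapping the stubs for real LLM, diffusion, and VQA calls recovers the overall shape of the pipeline: the bias set is proposed at run time rather than precompiled.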
- Published
- 2024
28. Learning Trimaps via Clicks for Image Matting
- Author
-
Zhang, Chenyi, Hu, Yihan, Ding, Henghui, Shi, Humphrey, Zhao, Yao, and Wei, Yunchao
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
Despite significant advancements in image matting, existing models heavily depend on manually-drawn trimaps for accurate results in natural image scenarios. However, the process of obtaining trimaps is time-consuming, lacking user-friendliness and device compatibility. This reliance greatly limits the practical application of all trimap-based matting methods. To address this issue, we introduce Click2Trimap, an interactive model capable of predicting high-quality trimaps and alpha mattes with minimal user click inputs. Through analyzing real users' behavioral logic and characteristics of trimaps, we successfully propose a powerful iterative three-class training strategy and a dedicated simulation function, making Click2Trimap exhibit versatility across various scenarios. Quantitative and qualitative assessments on synthetic and real-world matting datasets demonstrate Click2Trimap's superior performance compared to all existing trimap-free matting methods. Notably, in the user study, Click2Trimap achieves high-quality trimap and matting predictions in just an average of 5 seconds per image, demonstrating its substantial practical value in real-world applications.
- Published
- 2024
29. Benchmarking Object Detectors with COCO: A New Path Forward
- Author
-
Singh, Shweta, Yadav, Aayan, Jain, Jitesh, Shi, Humphrey, Johnson, Justin, and Desai, Karan
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
The Common Objects in Context (COCO) dataset has been instrumental in benchmarking object detectors over the past decade. Like every dataset, COCO contains subtle errors and imperfections stemming from its annotation procedure. With the advent of high-performing models, we ask whether these errors of COCO are hindering its utility in reliably benchmarking further progress. In search of an answer, we inspect thousands of masks from COCO (2017 version) and uncover different types of errors such as imprecise mask boundaries, non-exhaustively annotated instances, and mislabeled masks. Due to the prevalence of COCO, we choose to correct these errors to maintain continuity with prior research. We develop COCO-ReM (Refined Masks), a cleaner set of annotations with visibly better mask quality than COCO-2017. We evaluate fifty object detectors and find that models that predict visually sharper masks score higher on COCO-ReM, affirming that they were being incorrectly penalized due to errors in COCO-2017. Moreover, our models trained using COCO-ReM converge faster and score higher than their larger variants trained using COCO-2017, highlighting the importance of data quality in improving object detectors. With these findings, we advocate using COCO-ReM for future object detection research. Our dataset is available at https://cocorem.xyz, Comment: Technical report. Dataset website: https://cocorem.xyz and code: https://github.com/kdexd/coco-rem
- Published
- 2024
30. StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text
- Author
-
Henschel, Roberto, Khachatryan, Levon, Hayrapetyan, Daniil, Poghosyan, Hayk, Tadevosyan, Vahram, Wang, Zhangyang, Navasardyan, Shant, and Shi, Humphrey
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Artificial Intelligence ,Computer Science - Computation and Language ,Computer Science - Machine Learning ,Computer Science - Multimedia ,Electrical Engineering and Systems Science - Image and Video Processing - Abstract
Text-to-video diffusion models enable the generation of high-quality videos that follow text instructions, making it easy to create diverse and individual content. However, existing approaches mostly focus on high-quality short video generation (typically 16 or 24 frames), ending up with hard cuts when naively extended to the case of long video synthesis. To overcome these limitations, we introduce StreamingT2V, an autoregressive approach for long video generation of 80, 240, 600, 1200 or more frames with smooth transitions. The key components are: (i) a short-term memory block called conditional attention module (CAM), which conditions the current generation on the features extracted from the previous chunk via an attentional mechanism, leading to consistent chunk transitions, (ii) a long-term memory block called appearance preservation module, which extracts high-level scene and object features from the first video chunk to prevent the model from forgetting the initial scene, and (iii) a randomized blending approach that makes it possible to apply a video enhancer autoregressively for infinitely long videos without inconsistencies between chunks. Experiments show that StreamingT2V generates videos with a high amount of motion. In contrast, all competing image-to-video methods are prone to video stagnation when applied naively in an autoregressive manner. Thus, we propose with StreamingT2V a high-quality seamless text-to-long-video generator that outperforms competitors in consistency and motion. Our code will be available at: https://github.com/Picsart-AI-Research/StreamingT2V, Comment: https://github.com/Picsart-AI-Research/StreamingT2V
- Published
- 2024
31. GustosonicSense: Towards understanding the design of playful gustosonic eating experiences
- Author
-
Wang, Yan, Obie, Humphrey O., Li, Zhuying, Salim, Flora D., Grundy, John, and Mueller, Florian 'Floyd'
- Subjects
Computer Science - Human-Computer Interaction ,Computer Science - Multimedia - Abstract
The pleasure that often comes with eating can be further enhanced with intelligent technology, as the field of human-food interaction suggests. However, knowledge on how to design such pleasure-supporting eating systems is limited. To begin filling this knowledge gap, we designed "GustosonicSense", a novel gustosonic eating system that utilizes wireless earbuds to sense different eating and drinking actions with a machine learning algorithm and triggers playful sounds as a way to facilitate pleasurable eating experiences. We present the findings from our design and a study that revealed how we can support "stimulation", "hedonism", and "reflexivity" for playful human-food interactions. Ultimately, with our work, we aim to support interaction designers in facilitating playful experiences with food., Comment: To appear at CHI'24: The ACM Conference on Human Factors in Computing Systems (CHI), Honolulu, Hawaii, 2024
- Published
- 2024
32. Faster Neighborhood Attention: Reducing the O(n^2) Cost of Self Attention at the Threadblock Level
- Author
-
Hassani, Ali, Hwu, Wen-Mei, and Shi, Humphrey
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Artificial Intelligence ,Computer Science - Machine Learning - Abstract
Neighborhood attention reduces the cost of self attention by restricting each token's attention span to its nearest neighbors. This restriction, parameterized by a window size and dilation factor, draws a spectrum of possible attention patterns between linear projection and self attention. Neighborhood attention, and more generally sliding window attention patterns, have long been bounded by infrastructure, particularly in higher-rank spaces (2-D and 3-D), calling for the development of custom kernels, which have been limited in either functionality, or performance, if not both. In this work, we first show that neighborhood attention can be represented as a batched GEMM problem, similar to standard attention, and implement it for 1-D and 2-D neighborhood attention. These kernels on average provide 895% and 272% improvement in full precision latency compared to existing naive kernels for 1-D and 2-D neighborhood attention respectively. We find certain inherent inefficiencies in all unfused neighborhood attention kernels that bound their performance and lower-precision scalability. We also developed fused neighborhood attention: an adaptation of fused dot-product attention kernels that allow fine-grained control over attention across different spatial axes. Known for reducing the quadratic time complexity of self attention to a linear complexity, neighborhood attention can now enjoy a reduced and constant memory footprint, and record-breaking half precision latency. We observe that our fused kernels successfully circumvent some of the unavoidable inefficiencies in unfused implementations. While our unfused GEMM-based kernels only improve half precision performance compared to naive kernels by an average of 496% and 113% in 1-D and 2-D problems respectively, our fused kernels improve naive kernels by an average of 1607% and 581% in 1-D and 2-D problems respectively., Comment: Project page: https://github.com/SHI-Labs/NATTEN
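As a minimal illustration of the windowed pattern the paper accelerates (not the batched-GEMM or fused kernels themselves), 1-D neighborhood attention can be written naively in NumPy. The boundary handling below (index clamping, which duplicates edge neighbors) is a simplification of NATTEN's actual behavior and is assumed only for this sketch:

```python
import numpy as np

def neighborhood_attention_1d(q, k, v, window=3, dilation=1):
    """Naive 1-D neighborhood attention: each query token attends only
    to `window` keys around it, sampled every `dilation` positions."""
    n, d = q.shape
    half = window // 2
    out = np.zeros_like(v)
    for i in range(n):
        # Neighbor indices around token i, clamped to the sequence bounds
        # (a simplification; real kernels shift the window at the edges).
        idx = np.clip(i + dilation * np.arange(-half, half + 1), 0, n - 1)
        scores = q[i] @ k[idx].T / np.sqrt(d)   # (window,) scaled dot products
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                # softmax over the window only
        out[i] = weights @ v[idx]
    return out

rng = np.random.default_rng(0)
q = rng.standard_normal((8, 4))
k = rng.standard_normal((8, 4))
v = rng.standard_normal((8, 4))
y = neighborhood_attention_1d(q, k, v, window=3, dilation=1)
print(y.shape)  # (8, 4)
```

Because each row's softmax runs over at most `window` entries instead of all `n`, the cost grows linearly in sequence length, which is the property the fused kernels exploit at the threadblock level.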
- Published
- 2024
33. Social Reward: Evaluating and Enhancing Generative AI through Million-User Feedback from an Online Creative Community
- Author
-
Isajanyan, Arman, Shatveryan, Artur, Kocharyan, David, Wang, Zhangyang, and Shi, Humphrey
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
Social reward as a form of community recognition provides a strong source of motivation for users of online platforms to engage and contribute with content. The recent progress of text-conditioned image synthesis has ushered in a collaborative era where AI empowers users to craft original visual artworks seeking community validation. Nevertheless, assessing these models in the context of collective community preference introduces distinct challenges. Existing evaluation methods predominantly center on limited-size user studies guided by image quality and prompt alignment. This work pioneers a paradigm shift, unveiling Social Reward - an innovative reward modeling framework that leverages implicit feedback from social network users engaged in creative editing of generated images. We embark on an extensive journey of dataset curation and refinement, drawing from Picsart: an online visual creation and editing platform, yielding the first million-user-scale dataset of implicit human preferences for user-generated visual art named Picsart Image-Social. Our analysis exposes the shortcomings of current metrics in modeling community creative preference of text-to-image models' outputs, compelling us to introduce a novel predictive model explicitly tailored to address these limitations. Rigorous quantitative experiments and user study show that our Social Reward model aligns better with social popularity than existing metrics. Furthermore, we utilize Social Reward to fine-tune text-to-image models, yielding images that are more favored by not only Social Reward, but also other established metrics. These findings highlight the relevance and effectiveness of Social Reward in assessing community appreciation for AI-generated artworks, establishing a closer alignment with users' creative goals: creating popular visual art.
Codes can be accessed at https://github.com/Picsart-AI-Research/Social-Reward, Comment: 16 pages with 10 figures, accepted at ICLR 2024 as a spotlight, codes can be accessed at https://github.com/Picsart-AI-Research/Social-Reward
- Published
- 2024
34. Euclid: Identifying the reddest high-redshift galaxies in the Euclid Deep Fields with gradient-boosted trees
- Author
-
Signor, T., Rodighiero, G., Bisigello, L., Bolzonella, M., Caputi, K. I., Daddi, E., De Lucia, G., Enia, A., Gabarra, L., Gruppioni, C., Humphrey, A., La Franca, F., Mancini, C., Pozzetti, L., Serjeant, S., Spinoglio, L., van Mierlo, S. E., Andreon, S., Auricchio, N., Baldi, M., Bardelli, S., Battaglia, P., Bender, R., Bodendorf, C., Bonino, D., Branchini, E., Brescia, M., Brinchmann, J., Camera, S., Capobianco, V., Carbone, C., Carretero, J., Casas, S., Castellano, M., Cavuoti, S., Cimatti, A., Cledassou, R., Congedo, G., Conselice, C. J., Conversi, L., Copin, Y., Corcione, L., Courbin, F., Courtois, H. M., Da Silva, A., Degaudenzi, H., Di Giorgio, A. M., Dinis, J., Dubath, F., Dupac, X., Dusini, S., Ealet, A., Farina, M., Farrens, S., Ferriol, S., Fotopoulou, S., Franceschi, E., Galeotta, S., Garilli, B., Gillard, W., Gillis, B., Giocoli, C., Grazian, A., Grupp, F., Guzzo, L., Haugan, S. V. H., Hook, I., Hormuth, F., Hornstrup, A., Jahnke, K., Kümmel, M., Kermiche, S., Kiessling, A., Kilbinger, M., Kitching, T., Kurki-Suonio, H., Ligori, S., Lilje, P. B., Lindholm, V., Lloro, I., Maino, D., Maiorano, E., Mansutti, O., Marggraf, O., Martinet, N., Marulli, F., Massey, R., Medinaceli, E., Melchior, M., Mellier, Y., Meneghetti, M., Merlin, E., Moresco, M., Moscardini, L., Munari, E., Nichol, R. C., Niemi, S. -M., Padilla, C., Paltani, S., Pasian, F., Pedersen, K., Pettorino, V., Pires, S., Polenta, G., Poncet, M., Popa, L. A., Raison, F., Renzi, A., Rhodes, J., Riccio, G., Romelli, E., Roncarelli, M., Rossetti, E., Saglia, R., Sapone, D., Sartoris, B., Schneider, P., Schrabback, T., Secroun, A., Seidel, G., Serrano, S., Sirignano, C., Sirri, G., Stanco, L., Surace, C., Tallada-Crespí, P., Teplitz, H. I., Tereno, I., Toledo-Moreo, R., Torradeflot, F., Tutusaus, I., Valentijn, E. A., Vassallo, T., Veropalumbo, A., Wang, Y., Weller, J., Williams, O. R., Zoubian, J., Zucca, E., Burigana, C., and Scottez, V.
- Subjects
Astrophysics - Cosmology and Nongalactic Astrophysics - Abstract
Dusty, distant, massive ($M_*\gtrsim 10^{11}\,\rm M_\odot$) galaxies are usually found to show a remarkable star-formation activity, contributing on the order of $25\%$ of the cosmic star-formation rate density at $z\approx3$--$5$, and up to $30\%$ at $z\sim7$ from ALMA observations. Nonetheless, they are elusive in classical optical surveys, and current near-infrared surveys are able to detect them only in very small sky areas. Since these objects have low space densities, deep and wide surveys are necessary to obtain statistically relevant results about them. Euclid will be potentially capable of delivering the required information, but, given the lack of spectroscopic features at these distances within its bands, it is still unclear if it will be possible to identify and characterize these objects. The goal of this work is to assess the capability of Euclid, together with ancillary optical and near-infrared data, to identify these distant, dusty and massive galaxies, based on broadband photometry. We used a gradient-boosting algorithm to predict both the redshift and spectral type of objects at high $z$. To perform such an analysis we make use of simulated photometric observations derived using the SPRITZ software. The gradient-boosting algorithm was found to be accurate in predicting both the redshift and spectral type of objects within the Euclid Deep Survey simulated catalog at $z>2$. In particular, we study the analog of HIEROs (i.e. sources with $H-[4.5]>2.25$), combining Euclid and Spitzer data at the depth of the Deep Fields. We found that the dusty population at $3\lesssim z\lesssim 7$ is well identified, with a redshift RMS and OLF of only $0.55$ and $8.5\%$ ($H_E\leq26$), respectively. Our findings suggest that with Euclid we will obtain meaningful insights into the role of massive and dusty galaxies in the cosmic star-formation rate over time., Comment: 18 pages, 13 figures, accepted in A&A
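The gradient-boosted-tree approach used above for redshift prediction can be illustrated with a toy pure-NumPy regressor on synthetic stand-in "colors" (the actual work trains a full gradient-boosting library on simulated SPRITZ photometry; every feature, constant, and function name below is a hypothetical sketch):

```python
import numpy as np

LR = 0.1  # shrinkage applied to every stump (both in fitting and prediction)

def fit_stump(x, residual):
    """Pick the single-feature threshold split minimizing squared error,
    trying a few quantile thresholds per feature."""
    best = None
    for j in range(x.shape[1]):
        for t in np.percentile(x[:, j], [25, 50, 75]):
            left = x[:, j] <= t
            if left.all() or not left.any():
                continue
            pred = np.where(left, residual[left].mean(), residual[~left].mean())
            err = ((residual - pred) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, j, t, residual[left].mean(), residual[~left].mean())
    return best[1:]

def boost(x, y, rounds=100):
    """Squared-loss gradient boosting: each stump fits the current residual."""
    f = np.full(y.shape, y.mean())
    stumps = []
    for _ in range(rounds):
        j, t, left_val, right_val = fit_stump(x, y - f)
        f = f + LR * np.where(x[:, j] <= t, left_val, right_val)
        stumps.append((j, t, left_val, right_val))
    return y.mean(), stumps

def predict(model, x):
    base, stumps = model
    f = np.full(x.shape[0], base)
    for j, t, left_val, right_val in stumps:
        f = f + LR * np.where(x[:, j] <= t, left_val, right_val)
    return f

rng = np.random.default_rng(1)
z = rng.uniform(2.0, 7.0, 300)                 # "true" redshifts
colors = np.column_stack([z + rng.normal(0, 0.3, 300),        # noisy stand-ins
                          0.5 * z + rng.normal(0, 0.3, 300)])  # for photometry
model = boost(colors, z)
pred = predict(model, colors)
# Normalized scatter, a common figure of merit for photometric redshifts
rms = np.sqrt(np.mean(((pred - z) / (1 + z)) ** 2))
print(round(rms, 3))
```

Production codes add regularization, deeper trees, and validation splits, but the core loop is the same: each weak learner corrects the residual of the current ensemble.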
- Published
- 2024
- Full Text
- View/download PDF
35. Pharmacovigilance in Pregnancy Studies, Exposures and Outcomes Ascertainment, and Findings from Low- and Middle-Income Countries: A Scoping Review
- Author
-
Shafi, Jenine, Virk, Maneet K., Kalk, Emma, Carlucci, James G., Chepkemoi, Audrey, Bernard, Caitlin, McHenry, Megan S., Were, Edwin, Humphrey, John, Davies, Mary-Ann, Mehta, Ushma C., and Patel, Rena C.
- Published
- 2024
- Full Text
- View/download PDF
36. CDK5–cyclin B1 regulates mitotic fidelity
- Author
-
Zheng, Xiao-Feng, Sarkar, Aniruddha, Lotana, Humphrey, Syed, Aleem, Nguyen, Huy, Ivey, Richard G., Kennedy, Jacob J., Whiteaker, Jeffrey R., Tomasik, Bartłomiej, Huang, Kaimeng, Li, Feng, D’Andrea, Alan D., Paulovich, Amanda G., Shah, Kavita, Spektor, Alexander, and Chowdhury, Dipanjan
- Published
- 2024
- Full Text
- View/download PDF
37. Computational analysis of heart valve growth and remodeling after the Ross procedure
- Author
-
Middendorp, Elmer, Braeu, Fabian, Baaijens, Frank P. T., Humphrey, Jay D., Cyron, Christian J., and Loerakker, Sandra
- Published
- 2024
- Full Text
- View/download PDF
38. Implicit versus explicit first impressions in performance-based assessment: will raters overcome their first impressions when learner performance changes?
- Author
-
Wood, Timothy J., Daniels, Vijay J., Pugh, Debra, Touchie, Claire, Halman, Samantha, and Humphrey-Murto, Susan
- Published
- 2024
- Full Text
- View/download PDF
39. The impact of universal, school based, interventions on help seeking in children and young people: a systematic literature review
- Author
-
Hayes, Daniel, Mansfield, Rosie, Mason, Carla, Santos, Joao, Moore, Anna, Boehnke, Jan, Ashworth, Emma, Moltrecht, Bettina, Humphrey, Neil, Stallard, Paul, Patalay, Praveetha, and Deighton, Jessica
- Published
- 2024
- Full Text
- View/download PDF
40. A thermodynamic investigation into protein–excipient interactions involving different grades of polysorbate 20 and 80
- Author
-
Whiteley, Joseph, Waters, Laura J., Humphrey, James, and Mellor, Steve
- Published
- 2024
- Full Text
- View/download PDF
41. Analyzing spatiotemporal variations and dynamics of vegetation over Amathole district municipality in South Africa
- Author
-
Afuye, Gbenga Abayomi, Kalumba, Ahmed Mukalazi, Owolabi, Solomon Temidayo, Thamaga, Kgabo Humphrey, Ndou, Naledzani, Sibandze, Phila, and Orimoloye, Israel Ropo
- Published
- 2024
- Full Text
- View/download PDF
42. Fragmentation of care in breast cancer: greater than the sum of its parts
- Author
-
Freeman, Hadley D., Burke, Linnea C., Humphrey, Ja’Neil G., Wilbers, Ashley J., Vora, Halley, Khorfan, Rhami, Solomon, Naveenraj L., Namm, Jukes P., Ji, Liang, and Lum, Sharon S.
- Published
- 2024
- Full Text
- View/download PDF
43. µPhos: a scalable and sensitive platform for high-dimensional phosphoproteomics
- Author
-
Oliinyk, Denys, Will, Andreas, Schneidmadel, Felix R, Böhme, Maximilian, Rinke, Jenny, Hochhaus, Andreas, Ernst, Thomas, Hahn, Nina, Geis, Christian, Lubeck, Markus, Raether, Oliver, Humphrey, Sean J, and Meier, Florian
- Published
- 2024
- Full Text
- View/download PDF
44. Transdisciplinary perspectives on ‘the narrative’ and ‘the analytical’ for critical literacy
- Author
-
Humphrey, Sally, Stosic, Dragana, Barrington, Therese, Brake, Nicki, and Pagano, Rebecca
- Published
- 2024
- Full Text
- View/download PDF
45. Response surface methodology optimization of trimethoprim degradation in wastewater using Eosin-Y sensitized 25%ZnFe2O4-g-C3N4 composite under natural sunlight
- Author
-
Samuel, Humphrey Mutuma, Mecha, Cleophas Achisa, and M’Arimi, Milton M.
- Published
- 2024
- Full Text
- View/download PDF
46. Widespread AGN feedback in a forming brightest cluster galaxy at $z=4.1$ unveiled by JWST
- Author
-
Saxena, Aayush, Overzier, Roderik A., Villar-Martín, Montserrat, Heckman, Tim, Roy, Namrata, Duncan, Kenneth J., Röttgering, Huub, Miley, George, Aydar, Catarina, Best, Philip, Bosman, Sarah E. I., Cameron, Alex J., Gabányi, Krisztina Éva, Humphrey, Andrew, Morais, Sandy, Onoue, Masafusa, Pentericci, Laura, Reynaldi, Victoria, and Venemans, Bram
- Subjects
Astrophysics - Astrophysics of Galaxies - Abstract
We present rest-frame optical spectroscopy using JWST/NIRSpec IFU for the radio galaxy TN J1338-1942 at z=4.1, one of the most luminous galaxies in the early Universe with powerful extended radio jets. Previous observations showed evidence for strong, large-scale outflows on the basis of its large (~150 kpc) halo detected in Ly-alpha, and high velocity [O II] emission features detected in ground-based IFU data. Our NIRSpec/IFU observations spatially resolve the emission line properties across the host galaxy in great detail. We find at least five concentrations of line emission, coinciding with discrete continuum features previously detected in imaging from HST and JWST, over an extent of ~2'' (~15 kpc). The spectral diagnostics enabled by NIRSpec unambiguously trace the activity of the obscured AGN plus interaction between the interstellar medium and the radio jet as the dominant mechanisms for the ionization state and kinematics of the gas in the system. A secondary region of very high ionization lies at roughly 5 kpc distance from the nucleus, and within the context of an expanding cocoon enveloping the radio lobe, this may be explained by strong shock-ionization of the entrained gas. However, it could also signal the presence of a second obscured AGN, which may also offer an explanation for an intriguing outflow feature seen perpendicular to the radio axis. The presence of a dual SMBH system in this galaxy would support that large galaxies in the early Universe quickly accumulated their mass through the merging of smaller units (each with their own SMBH), at the centers of large overdensities. The inferred black hole mass to stellar mass ratio of 0.01-0.1 for TNJ1338 points to a more rapid assembly of black holes compared to the stellar mass of galaxies at high redshifts, consistent with other recent observations., Comment: 17 pages, 11 figures, submitted to MNRAS, comments welcome!
- Published
- 2024
47. VASE: Object-Centric Appearance and Shape Manipulation of Real Videos
- Author
-
Peruzzo, Elia, Goel, Vidit, Xu, Dejia, Xu, Xingqian, Jiang, Yifan, Wang, Zhangyang, Shi, Humphrey, and Sebe, Nicu
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
Recently, several works tackled the video editing task fostered by the success of large-scale text-to-image generative models. However, most of these methods holistically edit the frame using the text, exploiting the prior given by foundation diffusion models and focusing on improving the temporal consistency across frames. In this work, we introduce a framework that is object-centric and is designed to control both the object's appearance and, notably, to execute precise and explicit structural modifications on the object. We build our framework on a pre-trained image-conditioned diffusion model, integrate layers to handle the temporal dimension, and propose training strategies and architectural modifications to enable shape control. We evaluate our method on the image-driven video editing task showing similar performance to the state-of-the-art, and showcasing novel shape-editing capabilities. Further details, code and examples are available on our project page: https://helia95.github.io/vase-website/, Comment: Project Page https://helia95.github.io/vase-website/
- Published
- 2024
48. Funding and Delivery of Syringe Services Programs in the United States, 2022.
- Author
-
Facente, Shelley N, Humphrey, Jamie L, Akiba, Christopher, Patel, Sheila V, Wenger, Lynn D, Tookes, Hansel, Bluthenthal, Ricky N, LaKosky, Paul, Prohaska, Stephanie, Morris, Terry, Kral, Alex H, and Lambdin, Barrot H
- Subjects
Health Services and Systems ,Public Health ,Health Sciences ,Prevention ,Good Health and Well Being ,United States ,Humans ,Needle-Exchange Programs ,Naloxone ,Benchmarking ,Substance Abuse ,Intravenous ,Medical and Health Sciences ,Biomedical and clinical sciences ,Health sciences - Abstract
Objectives. To describe the current financial health of syringe services programs (SSPs) in the United States and to assess the predictors of SSP budget levels and associations with delivery of public health interventions. Methods. We surveyed all known SSPs operating in the United States from February to June 2022 (n = 456), of which 68% responded (n = 311). We used generalized estimating equations to assess factors influencing SSP budget size and estimated the effects of budget size on multiple measures of SSP services. Results. The median SSP annual budget was $100 000 (interquartile range = $20 159‒$290 000). SSPs operating in urban counties and counties with higher levels of opioid overdose mortality had significantly higher budget levels, while SSPs located in counties with higher levels of Republican voting in 2020 had significantly lower budget levels. SSP budget levels were significantly and positively associated with syringe and naloxone distribution coverage. Conclusions. Current SSP funding levels do not meet minimum benchmarks. Increased funding would help SSPs meet community health needs. Public Health Implications. Federal, state, and local initiatives should prioritize sustained SSP funding to optimize their potential in addressing multiple public health crises. (Am J Public Health. 2024;114(4):435-443. https://doi.org/10.2105/AJPH.2024.307583).
- Published
- 2024
49. Practice Readiness? Trends in Chief Resident Year Training Experience Across 13 Residency Programs
- Author
-
Corey, Zachary, Lehman, Erik, Lemack, Gary E, Clifton, Marisa M, Klausner, Adam P, Mehta, Akanksha, Atiemo, Humphrey, Lee, Richard, Sorensen, Mathew, Smith, Ryan, Buckley, Jill, Thompson, Houston, Breyer, Benjamin N, Badalato, Gina M, Wallen, Eric M, and Raman, Jay D
- Subjects
Health Services and Systems ,Biomedical and Clinical Sciences ,Clinical Sciences ,Health Sciences ,Rare Diseases ,Cancer ,Urologic Diseases ,Child ,Humans ,Internship and Residency ,Education ,Medical ,Graduate ,Urology ,Accreditation ,Clinical Competence ,urology ,resident education ,Accreditation Council for Graduate Medical Education ,Clinical sciences ,Public health - Abstract
Introduction. Urology residency prepares trainees for independent practice. The optimal operative chief resident year experience to prepare for practice is undefined. We analyzed the temporal arc of cases residents complete during their residency compared to their chief year in a multi-institutional cohort. Methods. Accreditation Council for Graduate Medical Education case logs of graduating residents from 2010 to 2022 from participating urology residency programs were aggregated. Resident data for 5 categorized index procedures were recorded: (1) general urology, (2) endourology, (3) reconstructive urology, (4) urologic oncology, and (5) pediatric urology. Interactions were tested between the trends for total case exposure in residency training relative to the chief resident year. Results. From a sample of 479 resident graduates, a total of 1,287,433 cases were logged, including 375,703 during the chief year (29%). Urologic oncology cases had the highest median percentage completed during chief year (56%), followed by reconstructive urology (27%), general urology (24%), endourology (17%), and pediatric urology (2%). Across the study period, all categories of cases had a downward trend in median percentage completed during chief year except for urologic oncology. However, only trends in general urology (slope of -0.68, P = .013) and endourology (slope of -1.71, P ≤ .001) were significant. Conclusions. Over 50% of cases completed by chief residents are urologic oncology procedures. Current declining trends indicate that residents are being exposed to proportionally fewer general urology and endourology cases during their chief year prior to entering independent practice.
- Published
- 2024
50. The bii4africa dataset of faunal and floral population intactness estimates across Africa's major land uses.
- Author
-
Clements, Hayley, Do Linh San, Emmanuel, Hempson, Gareth, Linden, Birthe, Maritz, Bryan, Monadjem, Ara, Reynolds, Chevonne, Siebert, Frances, Stevens, Nicola, Biggs, Reinette, De Vos, Alta, Blanchard, Ryan, Child, Matthew, Esler, Karen, Hamann, Maike, Loft, Ty, Reyers, Belinda, Selomane, Odirilwe, Skowno, Andrew, Tshoke, Tshegofatso, Abdoulaye, Diarrassouba, Aebischer, Thierry, Aguirre-Gutiérrez, Jesús, Alexander, Graham, Ali, Abdullahi, Allan, David, Amoako, Esther, Angedakin, Samuel, Aruna, Edward, Avenant, Nico, Badjedjea, Gabriel, Bakayoko, Adama, Bamba-Kaya, Abraham, Bates, Michael, Bates, Paul, Belmain, Steven, Bennitt, Emily, Bradley, James, Brewster, Chris, Brown, Michael, Bryja, Josef, Butynski, Thomas, Carvalho, Filipe, Channing, Alan, Chapman, Colin, Cohen, Callan, Cords, Marina, Cramer, Jennifer, Cronk, Nadine, Cunneyworth, Pamela, Dalerum, Fredrik, Danquah, Emmanuel, Davies-Mostert, Harriet, de Blocq, Andrew, De Jong, Yvonne, Demos, Terrence, Denys, Christiane, Djagoun, Chabi, Doherty-Bone, Thomas, Drouilly, Marine, du Toit, Johan, Ehlers Smith, David, Ehlers Smith, Yvette, Eiseb, Seth, Fashing, Peter, Ferguson, Adam, Fernández-García, José, Finckh, Manfred, Fischer, Claude, Gandiwa, Edson, Gaubert, Philippe, Gaugris, Jerome, Gibbs, Dalton, Gilchrist, Jason, Gil-Sánchez, Jose, Githitho, Anthony, Goodman, Peter, Granjon, Laurent, Grobler, J, Gumbi, Bonginkosi, Gvozdik, Vaclav, Harvey, James, Hauptfleisch, Morgan, Hayder, Firas, Hema, Emmanuel, Herbst, Marna, Houngbédji, Mariano, Huntley, Brian, Hutterer, Rainer, Ivande, Samuel, Jackson, Kate, Jongsma, Gregory, Juste, Javier, Kadjo, Blaise, Kaleme, Prince, Kamugisha, Edwin, Kaplin, Beth, Kato, Humphrey, Kiffner, Christian, and Kimuyu, Duncan
- Subjects
Animals ,Humans ,Ecosystem ,Conservation of Natural Resources ,Biodiversity ,Vertebrates ,Mammals - Abstract
Sub-Saharan Africa is under-represented in global biodiversity datasets, particularly regarding the impact of land use on species population abundances. Drawing on recent advances in expert elicitation to ensure data consistency, 200 experts were convened using a modified-Delphi process to estimate intactness scores: the remaining proportion of an intact reference population of a species group in a particular land use, on a scale from 0 (no remaining individuals) to 1 (same abundance as the reference) and, in rare cases, to 2 (populations that thrive in human-modified landscapes). The resulting bii4africa dataset contains intactness scores representing terrestrial vertebrates (tetrapods: ±5,400 amphibians, reptiles, birds, mammals) and vascular plants (±45,000 forbs, graminoids, trees, shrubs) in sub-Saharan Africa across the region's major land uses (urban, cropland, rangeland, plantation, protected, etc.) and intensities (e.g., large-scale vs smallholder cropland). This dataset was co-produced as part of the Biodiversity Intactness Index for Africa Project. Additional uses include assessing ecosystem condition; rectifying geographic/taxonomic biases in global biodiversity indicators and maps; and informing the Red List of Ecosystems.
- Published
- 2024