5,807,619 results for "Kim, So"
Search Results
2. Disproving the Sophomore Slump Using Data Envelopment Analysis—Focusing on English Premier League’s 2014/2015 to 2018/2019 Seasons
- Author
- Kim, Soo Yong, Kim, Changhee, and Kang, Hee Jay
- Published
- 2024
3. Between World-Imagining and World-Making: Politics of Fin-de-Siècle Universalism and Transimperial Indo-U.S. Brotherhood
- Author
- Kim, Sophie-Jung Hyun
- Published
- 2024
4. Speaking In-Between: Vernacular Spirituality of a Woman in Late Chosǒn Korea
- Author
- Kim, So Jung
- Published
- 2023
5. Commentary and Multilingualism in the Ottoman Reception of Texts: Three Perspectives
- Author
- Gürbüzel, Aslıhan, Kim, Sooyong, and Miller, Jeannie
- Published
- 2023
6. Lost in Aestheticization: Bong Joon-Ho's Parasite
- Author
- Kim, Soo Yeon
- Published
- 2023
7. Reinforcing Media Ideology: Framing Controversy Between South Korea and Japan in Han'gyŏre and Chosŏn ilbo Editorial Translation
- Author
- Mah, Seunghye and Kim, Soon-Young
- Published
- 2023
8. The Racial Gap in Friendships among High-Achieving Students. EdWorkingPaper No. 24-1025
- Author
- Annenberg Institute for School Reform at Brown University, Weonhyeok Chung, and Jeonghyeok Kim
- Abstract
High-achieving minority students have fewer friends than their majority counterparts. Exploring patterns of friendship formation in the Add Health data, we find strong racial homophily in friendship formations as well as strong achievement homophily within race. However, we find that achievement matters less in cross-racial friendships. As a result, high-achieving Black students lose Black friends as they move away from the mean achievement of their group, but do not gain high-achieving White friends in offsetting fashion. We find that high-achieving Black students have 0.9 fewer friends, mainly attributable to the fact that they are exposed to fewer high-achieving peers within their own race. We find that this could account for as much as 5 to 9 percent of the racial wage gap observed among high achievers.
- Published
- 2024
9. Interim Report 2 on the Implementation, Impact, and Cost Effectiveness of Developmental Education Reform in California's Community Colleges
- Author
- Research for Action (RFA), Texas Education Research Center, Kri Burkander, Dae Kim, Mark Duffy, Lindsey Liu, Taylor Stenley, Keerthanya Rajesh, and Sean Vannata
- Abstract
Research for Action (RFA) in partnership with the University of Texas at Austin is engaged in a five-year mixed-methods study of the reforms associated with California AB 705. Over the course of the study, our team will assess the implementation, impact, and cost effectiveness of reforms associated with the law. This second interim report, presented at the conclusion of year three of the study, focuses on gaining a deeper understanding of on-campus implementation through a faculty survey administered to math and English departments across our study sample, an Interrupted Time Series analysis with nine cohorts of FTIC student data, and preliminary data collection for our cost effectiveness study. Collectively, these data highlight significant changes that colleges have made on campus regarding shifting enrollments from developmental education into transfer-level coursework in both English and math, and providing additional supports to students to promote retention and completion. We find that AB 705 has demonstrated notable successes in improving enrollment and completion rates in transfer-level courses, particularly in math, among FTIC students in California's community colleges. While our survey results suggest that faculty believe additional resources and supports would be helpful, most faculty report that implementation supports have been adequate.
- Published
- 2024
10. A Case Study of South Korean Elementary School Teachers' Emergency Remote Teaching
- Author
- Gi Woong Choi, Jieun Lim, Soo Hyeon Kim, Jewoong Moon, and Yong Ju Jung
- Abstract
COVID-19 is an unprecedented pandemic that has impacted the whole world. The pandemic made researchers and educators realize the critical need to prepare for future disasters. This study explored a context-specific case for elementary online learning where we investigated how elementary school teachers transitioned to emergency remote teaching (ERT) from face-to-face to online learning during the pandemic. A case study approach was used to explore South Korean elementary teachers' ERT approaches and experiences during COVID-19. Using the CIPP (Context, Input, Process, and Product) framework, we sought to understand how the transition occurred from the perspectives of the teachers. The analysis uncovered several themes that fall under each category of the framework. In terms of context, limited technological aptitude and lack of training in online instructional design as well as policy issues and socio-economic differences were identified as key factors in assessing the current state of the ERT. In terms of input, instructors' efforts as well as support from in and out of school were discussed. Student interaction and engagement were identified as key factors in understanding the process of ERT. Lastly, learning outcomes, instructional strategies, and systemic transformation emerged as products of ERT.
- Published
- 2024
11. Voices from the Industry: How EdTech Leaders Responded to the COVID-19 Pandemic
- Author
- Deoksoon Kim, Katrina Borowiec, Drina Kei Yatsu, and Stanton Wortham
- Abstract
Purpose: Educational technology ("EdTech") served a pivotal role in keeping schools functioning during the beginning of the COVID-19 pandemic. Little is known about EdTech leaders' roles in shaping this response. This study explores EdTech leaders' perspectives and backgrounds, their response to the pandemic, how they envision their roles as educators, and their perspectives about how technology facilitates educational innovation. Design/Approach/Methods: This study uses a qualitative, phenomenological approach to understand how 11 EdTech leaders experienced the pandemic. Participants were recruited for interviews in summer 2021 via purposive sampling to include diverse backgrounds and perspectives. Data were analyzed inductively. Findings: The findings show that a four-category typology can be used to describe EdTech leaders' diverse backgrounds and experiences. Leaders emphasized equity and open collaboration in their pandemic responses, by expanding access to their tools and adapting their products as users' needs evolved. EdTech leaders anticipate streamlined user experiences, improvements in online learning, and increased adoption of artificial intelligence and simulated learning environments. Originality/Value: This study addresses a gap in the research concerning EdTech leaders' perspectives on their efforts to support educators and their experiences during the pandemic. We hope this study sparks additional research on EdTech leaders' experiences and roles in education.
- Published
- 2024
12. Exploring the Relationship between Test-Optional Admissions and Selectivity and Enrollment Outcomes during the Pandemic. EdWorkingPaper No. 24-982
- Author
- Annenberg Institute for School Reform at Brown University, Kelly Rosinger, Dominique J. Baker, Joseph Sturm, Wan Yu, Julie J. Park, OiYan Poon, Brian Heseung Kim, and Stephanie Breen
- Abstract
Most selective colleges implemented test-optional admissions during the pandemic, making college entrance exam scores optional for applicants. We draw on descriptive, two-way fixed effects, and event study methods to examine variation in test-optional implementation during the pandemic and how implementation relates to selectivity and enrollment. For "test-optional" colleges during the pandemic, we found substantial variation in policy type (e.g., test optional, test free) and whether the policy extended to all applicants and scholarship consideration. Findings suggest test-optional implementation related to increases in Black student enrollment, mostly at moderately selective colleges and when policies extended to all applicants and scholarships. At highly selective colleges, findings suggest test-optional implementation related to an increase in applications but not consistent gains in enrollment.
- Published
- 2024
13. Zoomorphizing the Asterisms: Indigenous Interpretations of the Twenty-Eight Lunar Mansions in the History of China
- Author
- Kim, Soyeon
- Published
- 2022
14. Cascade hot carriers via broad-band resonant tunneling
- Author
- Paul, Kamal Kumar, Mondal, Ashok, Kim, Jae Woo, Kim, Ji-Hee, and Lee, Young Hee
- Subjects
- Physics - Applied Physics
- Abstract
Extraction of hot carriers (HCs) over the band edge is key to harvesting solar energy beyond the Shockley-Queisser limit [1]. Graphene is known as an HC-layered material due to the phonon bottleneck effect near the Dirac point, but is limited by low photocarrier density [2]. Graphene/transition metal dichalcogenide (TMD) heterostructures circumvent this issue by ultrafast carrier transfer from TMD to graphene [2,3]. Nevertheless, efficient extraction of photocurrent by means of HCs together with carrier multiplication (CM) is still missing. Here, we introduce an ultrathin broadband resonant tunneling (BRT) barrier, TiOX, to efficiently extract photocurrent with simultaneous CM and HC measurements in a MoS2/graphene/TiOX heterostructure. The BRT layer gives rise to a boosted open-circuit voltage that is linearly proportional to incident photon energy. Meanwhile, the short-circuit current rises rapidly over 2Eg with an obvious CM feature. This is explained by defining the joint density of states between the graphene and TiOX layers over positive and negative voltage. The broadband resonant tunneling states, inherently constructed from oxidation states varying from Ti3+ to Ti4+, allow the ultrafast HCs to transfer efficiently from graphene to the TiOX layer. We find that the number of available tunneling states is directly proportional to the short-circuit current, which is well corroborated by the TiOX and MoS2 thickness variance. We obtained an optimum BRT-layer thickness of ~2.8 nm, yielding a cascade open-circuit voltage as high as ~0.7 V, two orders of magnitude higher than without the BRT layer, reaching a record efficiency of 5.3% with improved fill factor owing to synergistic HC and CM conversion under 1-SUN illumination with long-term stability.
- Published
- 2024
15. Assessing the Answerability of Queries in Retrieval-Augmented Code Generation
- Author
- Kim, Geonmin, Kim, Jaeyeon, Park, Hancheol, Shin, Wooksu, and Kim, Tae-Ho
- Subjects
- Computer Science - Computation and Language
- Abstract
Thanks to the unprecedented language understanding and generation capabilities of large language models (LLMs), Retrieval-augmented Code Generation (RaCG) has recently been widely adopted among software developers. While this has increased productivity, there are still frequent instances of incorrect code being provided. In particular, plausible yet incorrect code is sometimes generated for user queries that cannot be answered with the retrieved API descriptions. This study proposes a task for evaluating answerability, which assesses whether a valid answer can be generated from a user's query and the retrieved APIs in RaCG. Additionally, we build a benchmark dataset called Retrieval-augmented Code Generability Evaluation (RaCGEval) to evaluate the performance of models on this task. Experimental results show that the task remains very challenging, with baseline models exhibiting a low performance of 46.7%. Furthermore, this study discusses methods that could significantly improve performance.
- Published
- 2024
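The answerability task in entry 15 reduces to binary classification over (query, retrieved APIs) pairs scored against gold labels. The sketch below is illustrative only; the dict-based example format and the `keyword_baseline` heuristic are assumptions, not the RaCGEval benchmark's actual schema or the paper's baseline.

```python
# Minimal sketch of an answerability evaluation loop (hypothetical data
# format; the RaCGEval benchmark's real schema may differ).

def evaluate_answerability(examples, predict):
    """Score a model that labels each (query, retrieved APIs) pair as
    answerable or not. `predict` is any callable returning True/False."""
    correct = 0
    for ex in examples:
        if predict(ex["query"], ex["apis"]) == ex["answerable"]:
            correct += 1
    return correct / len(examples)

# Toy baseline: call a query answerable iff any retrieved API name
# appears verbatim in the query (purely illustrative).
def keyword_baseline(query, apis):
    return any(api.lower() in query.lower() for api in apis)

examples = [
    {"query": "sort a list with quicksort", "apis": ["quicksort"], "answerable": True},
    {"query": "parse a PDF file", "apis": ["quicksort"], "answerable": False},
]
print(evaluate_answerability(examples, keyword_baseline))  # 1.0 on this toy set
```

Any stronger model (e.g., an LLM judge) can be dropped in as `predict` without changing the harness.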
16. KMT-2024-BLG-1044L: A sub-Uranus microlensing planet around a host at the star-brown dwarf mass boundary
- Author
- Han, Cheongho, Ryu, Yoon-Hyun, Lee, Chung-Uk, Gould, Andrew, Albrow, Michael D., Chung, Sun-Ju, Hwang, Kyu-Ha, Jung, Youn Kil, Shvartzvald, Yossi, Shin, In-Gu, Yee, Jennifer C., Yang, Hongjing, Zang, Weicheng, Kim, Doeon, Kim, Dong-Jin, Park, Byeong-Gon, and Pogge, Richard W.
- Subjects
- Astrophysics - Earth and Planetary Astrophysics, Astrophysics - Astrophysics of Galaxies, Astrophysics - Solar and Stellar Astrophysics
- Abstract
We analysed microlensing data to uncover the nature of the anomaly that appeared near the peak of the short-timescale microlensing event KMT-2024-BLG-1044. Despite the anomaly's brief duration of less than a day, it was densely observed through high-cadence monitoring conducted by the KMTNet survey. Detailed modelling of the light curve confirmed the planetary origin of the anomaly and revealed two possible solutions, due to an inner--outer degeneracy. The two solutions provide different measured planet parameters: $(s, q)_{\rm inner} = [1.0883 \pm 0.0027, (3.125 \pm 0.248)\times 10^{-4}]$ for the inner solutions and $(s, q)_{\rm outer} = [1.0327 \pm 0.0054, (3.350 \pm 0.316)\times 10^{-4}]$ for the outer solutions. Using Bayesian analysis with constraints provided by the short event timescale ($t_{\rm E} \sim 9.1$~days) and the small angular Einstein radius ($\theta_{\rm E}\sim 0.16$~mas for the inner solution and $\sim 0.10$~mas for the outer solution), we determined that the lens is a planetary system consisting of a host near the boundary between a star and a brown dwarf and a planet with a mass lower than that of Uranus. The discovery of this planetary system highlights the crucial role of the microlensing technique in detecting planets that orbit substellar brown dwarfs or very low-mass stars.
- Comment
- 8 pages, 10 figures
- Published
- 2024
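The Bayesian mass constraint in entry 16 rests on the standard microlensing relations linking the event timescale $t_{\rm E}$, the angular Einstein radius $\theta_{\rm E}$, the lens mass $M_{\rm L}$, and the relative lens-source parallax $\pi_{\rm rel}$. These textbook formulae are not quoted in the abstract but are the usual basis for such analyses:

```latex
\theta_{\rm E} = \sqrt{\kappa\, M_{\rm L}\, \pi_{\rm rel}}, \qquad
\kappa \equiv \frac{4G}{c^{2}\,\mathrm{au}} \simeq 8.14~\mathrm{mas}\, M_\odot^{-1}, \qquad
t_{\rm E} = \frac{\theta_{\rm E}}{\mu_{\rm rel}},
```

so a short $t_{\rm E}$ combined with a small $\theta_{\rm E}$ pushes the posterior toward low lens masses, consistent with a host at the star-brown dwarf boundary.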
17. Radiopurity measurements of liquid scintillator for the COSINE-100 Upgrade
- Author
- Kim, J., Ha, C., Kim, S. H., Kim, W. K., Kim, Y. D., Ko, Y. J., Lee, E. K., Lee, H., Lee, H. S., Lee, I. S., Lee, J., Lee, S. H., Lee, S. M., Lee, Y. J., and Yu, G. H.
- Subjects
- Physics - Instrumentation and Detectors, High Energy Physics - Experiment
- Abstract
A new 2,400 L liquid scintillator has been produced for the COSINE-100 Upgrade, which is under construction at Yemilab for the next phase of the COSINE dark matter experiment. The linear-alkyl-benzene-based scintillator is designed to serve as a veto for NaI(Tl) crystal targets and as a separate platform for rare event searches. We measured its radiopurity using a sample in a custom-made 445 mL cylindrical Teflon container equipped with two 3-inch photomultiplier tubes. Analyses show activity levels of $0.091 \pm 0.042$ mBq/kg for $^{238}$U and $0.012 \pm 0.007$ mBq/kg for $^{232}$Th.
- Published
- 2024
18. Fast Unconditional Reset and Leakage Reduction of a Tunable Superconducting Qubit via an Engineered Dissipative Bath
- Author
- Kim, Gihwan, Butler, Andreas, Ferreira, Vinicius S., Zhang, Xueyue, Hadley, Alex, Kim, Eunjong, and Painter, Oskar
- Subjects
- Quantum Physics
- Abstract
Rapid and accurate initialization of qubits, reset, is a crucial building block for various tasks in quantum information processing, such as quantum error correction and estimation of statistics of noisy quantum devices with many qubits. We demonstrate unconditional reset of a frequency-tunable transmon qubit that simultaneously resets multiple excited states by utilizing a metamaterial waveguide engineered to provide a cold bath over a wide spectral range, while providing strong protection against Purcell decay of the qubit. We report reset error below 0.13% (0.16%) when prepared in the first (second) excited state of the transmon within 88 ns. Additionally, through the sharp roll-off in the density of states of the metamaterial waveguide, we implement a leakage reduction unit that selectively resets the transmon's second excited state to 0.285(3)% residual population within 44 ns while acting trivially in the computational subspace as an identity operation that preserves encoded information with an infidelity of 0.72(1)%.
- Comment
- 17 pages, 11 figures
- Published
- 2024
19. The JCMT BISTRO Survey: The Magnetic Fields of the IC 348 Star-forming Region
- Author
- Choi, Youngwoo, Kwon, Woojin, Pattle, Kate, Arzoumanian, Doris, Bourke, Tyler L., Hoang, Thiem, Hwang, Jihye, Koch, Patrick M., Sadavoy, Sarah, Bastien, Pierre, Furuya, Ray, Lai, Shih-Ping, Qiu, Keping, Ward-Thompson, Derek, Berry, David, Byun, Do-Young, Chen, Huei-Ru Vivien, Chen, Wen Ping, Chen, Mike, Chen, Zhiwei, Ching, Tao-Chung, Cho, Jungyeon, Choi, Minho, Choi, Yunhee, Coudé, Simon, Chrysostomou, Antonio, Chung, Eun Jung, Dai, Sophia, Debattista, Victor, Di Francesco, James, Diep, Pham Ngoc, Doi, Yasuo, Duan, Hao-Yuan, Duan, Yan, Eswaraiah, Chakali, Fanciullo, Lapo, Fiege, Jason, Fissel, Laura M., Franzmann, Erica, Friberg, Per, Friesen, Rachel, Fuller, Gary, Gledhill, Tim, Graves, Sarah, Greaves, Jane, Griffin, Matt, Gu, Qilao, Han, Ilseung, Hasegawa, Tetsuo, Houde, Martin, Hull, Charles L. H., Inoue, Tsuyoshi, Inutsuka, Shu-ichiro, Iwasaki, Kazunari, Jeong, Il-Gyo, Johnstone, Doug, Karoly, Janik, Könyves, Vera, Kang, Ji-hyun, Lacaille, Kevin, Law, Chi-Yan, Lee, Chang Won, Lee, Hyeseung, Lee, Chin-Fei, Lee, Jeong-Eun, Lee, Sang-Sung, Li, Dalei, Li, Di, Li, Guangxing, Li, Hua-bai, Lin, Sheng-Jun, Liu, Hong-Li, Liu, Tie, Liu, Sheng-Yuan, Liu, Junhao, Longmore, Steven, Lu, Xing, Lyo, A-Ran, Mairs, Steve, Matsumura, Masafumi, Matthews, Brenda, Moriarty-Schieven, Gerald, Nagata, Tetsuya, Nakamura, Fumitaka, Nakanishi, Hiroyuki, Ngoc, Nguyen Bich, Ohashi, Nagayoshi, Onaka, Takashi, Park, Geumsook, Parsons, Harriet, Peretto, Nicolas, Priestley, Felix, Pyo, Tae-Soo, Qian, Lei, Rao, Ramprasad, Rawlings, Jonathan, Rawlings, Mark, Retter, Brendan, Richer, John, Rigby, Andrew, Saito, Hiro, Savini, Giorgio, Seta, Masumichi, Sharma, Ekta, Shimajiri, Yoshito, Shinnaga, Hiroko, Soam, Archana, Kang, Miju, Kataoka, Akimasa, Kawabata, Koji, Kemper, Francisca, Kim, Jongsoo, Kim, Shinyoung, Kim, Gwanjeong, Kim, Kyoung Hee, Kim, Mi-Ryang, Kim, Kee-Tae, Kim, Hyosung, Kirchschlager, Florian, Kirk, Jason, Kobayashi, Masato I. N., Kusune, Takayoshi, Kwon, Jungmi, Tamura, Motohide, Tang, Ya-Wen, Tang, Xindi, Tomisaka, Kohji, Tsukamoto, Yusuke, Viti, Serena, Wang, Hongchi, Wang, Jia-Wei, Wu, Jintai, Xie, Jinjin, Yang, Meng-Zhe, Yen, Hsi-Wei, Yoo, Hyunju, Yuan, Jinghua, Yun, Hyeong-Sik, Zenko, Tetsuya, Zhang, Guoyin, Zhang, Yapeng, Zhang, Chuan-Peng, Zhou, Jianjun, Zhu, Lei, de Looze, Ilse, André, Philippe, Dowell, C. Darren, Eden, David, Eyres, Stewart, Falle, Sam, Gouellec, Valentin J. M. Le, Poidevin, Frédérick, and van Loo, Sven
- Subjects
- Astrophysics - Astrophysics of Galaxies
- Abstract
We present 850 $\mu$m polarization observations of the IC 348 star-forming region in the Perseus molecular cloud as part of the B-fields In STar-forming Region Observation (BISTRO) survey. We study the magnetic properties of two cores (HH 211 MMS and IC 348 MMS) and a filamentary structure of IC 348. We find that the overall field tends to be more perpendicular than parallel to the filamentary structure of the region. The polarization fraction decreases with intensity, and we estimate the trend using power-law fits and the mean of Rice-distribution fits. The power indices for the cores are much smaller than 1, indicative of possible grain growth to micron size in the cores. We also measure the magnetic field strengths of the two cores and the filamentary area separately by applying the Davis-Chandrasekhar-Fermi method and its alternative version for a compressed medium. The estimated mass-to-flux ratios are 0.45-2.20 and 0.63-2.76 for HH 211 MMS and IC 348 MMS, respectively, while the ratio for the filament is 0.33-1.50. This result may suggest that the transition from subcritical to supercritical conditions occurs at the core scale ($\sim$ 0.05 pc) in the region. In addition, we study the energy balance of the cores and find that the relative strength of turbulence to the magnetic field tends to be stronger for IC 348 MMS than for HH 211 MMS. The result could potentially explain the different configurations inside the two cores: a single protostellar system in HH 211 MMS and multiple protostars in IC 348 MMS.
- Comment
- Accepted for publication in ApJ. 21 pages, 12 figures
- Published
- 2024
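The Davis-Chandrasekhar-Fermi estimate applied in entry 19 can be sketched numerically. The formula below is the standard DCF expression with the usual Q ~ 0.5 correction; the density, velocity-dispersion, and angle-dispersion values are illustrative stand-ins, not numbers from the paper.

```python
import math

def dcf_field_strength(rho_g_cm3, sigma_v_cm_s, sigma_theta_rad, Q=0.5):
    """Davis-Chandrasekhar-Fermi estimate of the plane-of-sky magnetic
    field in Gauss (CGS units): B = Q * sqrt(4*pi*rho) * sigma_v / sigma_theta."""
    return Q * math.sqrt(4.0 * math.pi * rho_g_cm3) * sigma_v_cm_s / sigma_theta_rad

# Example inputs (hypothetical): n(H2) ~ 1e5 cm^-3 with mean molecular
# weight 2.8 m_H, velocity dispersion ~ 0.2 km/s, and a polarization-angle
# dispersion of ~ 10 degrees.
m_H = 1.6735575e-24  # hydrogen mass in grams
rho = 2.8 * m_H * 1e5
B = dcf_field_strength(rho, 0.2e5, math.radians(10.0))
print(f"{B * 1e6:.0f} microgauss")  # -> 139 microgauss
```

Mass-to-flux ratios like those quoted in the abstract then follow from comparing this B against the column density of the core or filament.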
20. Physics-Constrained Graph Neural Networks for Spatio-Temporal Prediction of Drop Impact on OLED Display Panels
- Author
- Kim, Jiyong, Park, Jangseop, Kim, Nayong, Yu, Younyeol, Chang, Kiseok, Woo, Chang-Seung, Yang, Sunwoong, and Kang, Namwoo
- Subjects
- Physics - Computational Physics
- Abstract
This study aims to predict the spatio-temporal evolution of physical quantities observed in multi-layered display panels subjected to the drop impact of a ball. To model these complex interactions, graph neural networks have emerged as promising tools, effectively representing objects and their relationships as graph structures. In particular, MeshGraphNets (MGNs) excel at capturing dynamics in physics simulations using irregular mesh data. However, conventional MGNs often suffer from non-physical artifacts, such as the penetration of overlapping objects. To resolve this, we propose a physics-constrained MGN that mitigates these penetration issues while maintaining a high level of accuracy in temporal predictions. Furthermore, to enhance the model's robustness, we explore noise injection strategies with varying magnitudes and different combinations of targeted components, such as the ball, the plate, or both. In addition, our analysis of model stability in spatio-temporal predictions reveals that, during inference, deriving next time-step node positions by predicting relative changes (e.g., displacement or velocity) between the current and future states yields superior accuracy compared to direct absolute position predictions. This approach consistently shows greater stability and reliability in determining subsequent node positions across various scenarios. Building on this validated model, we evaluate its generalization performance by examining its ability to extrapolate with respect to design variables. Furthermore, the physics-constrained MGN serves as a near real-time emulator for the design optimization of multi-layered OLED display panels, where thickness variables are optimized to minimize stress in the light-emitting materials. It outperforms the conventional MGN in optimization tasks, demonstrating its effectiveness for practical design applications.
- Published
- 2024
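The rollout strategy entry 20 reports as more stable, predicting relative changes and integrating them, can be shown in a toy form. This is a sketch only: the "model" is a stand-in returning the true per-step displacement plus small noise, not an MGN.

```python
import numpy as np

# Toy rollout by integrating predicted displacements: the next position is
# the current position plus a predicted delta, rather than a directly
# predicted absolute position.

rng = np.random.default_rng(0)
true_velocity = np.array([1.0, -0.5])  # constant motion for illustration

def rollout_relative(pos, steps, noise=1e-3):
    traj = [pos]
    for _ in range(steps):
        pred_delta = true_velocity + rng.normal(0, noise, 2)  # "model" output
        traj.append(traj[-1] + pred_delta)  # integrate the displacement
    return np.array(traj)

traj = rollout_relative(np.zeros(2), steps=10)
print(traj[-1])  # close to 10 * true_velocity
```

The design intuition: per-step displacement errors stay small and roughly zero-mean, whereas direct absolute-position prediction must get the full state right at every step.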
21. Universal Spin Screening Clouds in Local Moment Phases
- Author
- Kim, Minsoo L., Shim, Jeongmin, Sim, H. -S., and Kim, Donghoon
- Subjects
- Condensed Matter - Mesoscale and Nanoscale Physics
- Abstract
When a local impurity spin interacts with conduction electrons whose density of states (DOS) has a (pseudo)gap or diverges at the Fermi energy, a local moment (LM) phase can be favored over a Kondo phase. By theoretically studying quantum entanglement between the impurity and conduction electrons, we demonstrate that conduction electrons form an ''LM spin cloud'' in general LM phases, which corresponds to, but differs fundamentally from, the Kondo cloud screening the impurity spin in the Kondo phase. The LM cloud decays algebraically with distance from the impurity when the DOS has a pseudogap or divergence, and exponentially when it has a hard gap. We find an ''LM cloud length'', a single length scale characterizing a universal form of the LM cloud. The findings are supported by both analytic theories and numerical computations.
- Comment
- 21 pages, 4 figures for the manuscript; Supplementary material included
- Published
- 2024
22. Bootstrapping Top-down Information for Self-modulating Slot Attention
- Author
- Kim, Dongwon, Kim, Seoyeon, and Kwak, Suha
- Subjects
- Computer Science - Computer Vision and Pattern Recognition, Computer Science - Machine Learning
- Abstract
Object-centric learning (OCL) aims to learn representations of individual objects within visual scenes without manual supervision, facilitating efficient and effective visual reasoning. Traditional OCL methods primarily employ bottom-up approaches that aggregate homogeneous visual features to represent objects. However, in complex visual environments, these methods often fall short due to the heterogeneous nature of visual features within an object. To address this, we propose a novel OCL framework incorporating a top-down pathway. This pathway first bootstraps the semantics of individual objects and then modulates the model to prioritize features relevant to these semantics. By dynamically modulating the model based on its own output, our top-down pathway enhances the representational quality of objects. Our framework achieves state-of-the-art performance across multiple synthetic and real-world object-discovery benchmarks.
- Comment
- Accepted to NeurIPS 2024
- Published
- 2024
23. Hidden dormant phase mediating the glass transition in disordered matter
- Author
- Park, Eunyoung, Kim, Sinwoo, Wang, Melody M., Hwang, Junha, Lee, Sung Yun, Shin, Jaeyong, Heo, Seung-Phil, Choi, Jungchan, Lee, Heemin, Jang, Dogeun, Kim, Minseok, Kim, Kyung Sook, Kim, Sangsoo, Eom, Intae, Nam, Daewoong, Gu, X. Wendy, and Song, Changyong
- Subjects
- Condensed Matter - Disordered Systems and Neural Networks
- Abstract
Metallic glass is a frozen liquid with structural disorder that retains degenerate free energy without spontaneous symmetry breaking to become a solid. For over half a century, this puzzling structure has raised fundamental questions about how structural disorder impacts glass-liquid phase transition kinetics, which remain elusive without direct evidence. In this study, through single-pulse, time-resolved imaging using X-ray free-electron lasers, we visualized the glass-to-liquid transition, revealing a previously hidden dormant phase that does not involve any macroscopic volume change within the crossover regime between the two phases. Although macroscopically inactive, nanoscale redistribution occurs, forming channeled low-density bands within this dormant phase that drive the glass transition. By providing direct microscopic evidence, this work presents a new perspective on the phase transition process in disordered materials, which can be extended to various liquid and solid phases in other complex systems.
- Comment
- 25 pages, 4 figures
- Published
- 2024
24. Finding NeMo: Negative-mined Mosaic Augmentation for Referring Image Segmentation
- Author
- Ha, Seongsu, Kim, Chaeyun, Kim, Donghwa, Lee, Junho, Lee, Sangho, and Lee, Joonseok
- Subjects
- Computer Science - Computer Vision and Pattern Recognition
- Abstract
Referring Image Segmentation is a comprehensive task to segment an object referred to by a textual query from an image. By nature, the level of difficulty in this task is affected by the existence of similar objects and the complexity of the referring expression. Recent RIS models still show a significant performance gap between easy and hard scenarios. We posit that the bottleneck lies in the data, and propose a simple but powerful data augmentation method, Negative-mined Mosaic Augmentation (NeMo). This method augments a training image into a mosaic with three other negative images carefully curated by a pretrained multimodal alignment model, e.g., CLIP, to make the sample more challenging. We discover that it is critical to properly adjust the difficulty level, neither too ambiguous nor too trivial. The augmented training data encourages the RIS model to recognize subtle differences and relationships between similar visual entities and to concretely understand the whole expression to locate the right target better. Our approach shows consistent improvements on various datasets and models, verified by extensive experiments.
- Comment
- Accepted at ECCV 2024. Project page: https://dddonghwa.github.io/NeMo/
- Published
- 2024
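The mosaic construction in entry 24 can be sketched mechanically: pick three negatives whose similarity to the anchor sits in a band that is neither too ambiguous nor too trivial, then tile a 2x2 mosaic. Everything here is a hedged stand-in; the `sims` scores play the role of CLIP similarities, and the band thresholds are made up for illustration.

```python
import numpy as np

def nemo_mosaic(anchor, candidates, sims, lo=0.6, hi=0.9):
    """Sketch of Negative-mined Mosaic Augmentation: choose three negatives
    whose (precomputed) similarity to the anchor lies in [lo, hi], then
    tile anchor + negatives into a 2x2 mosaic."""
    idx = [i for i, s in enumerate(sims) if lo <= s <= hi][:3]
    if len(idx) < 3:
        raise ValueError("not enough negatives in the similarity band")
    a, b, c = (candidates[i] for i in idx)
    top = np.concatenate([anchor, a], axis=1)      # anchor | negative 1
    bottom = np.concatenate([b, c], axis=1)        # negative 2 | negative 3
    return np.concatenate([top, bottom], axis=0)

h = w = 8
anchor = np.zeros((h, w, 3))
candidates = [np.full((h, w, 3), v) for v in (1.0, 2.0, 3.0, 4.0)]
# 0.95 is "too similar" and gets filtered out by the band.
mosaic = nemo_mosaic(anchor, candidates, sims=[0.95, 0.8, 0.7, 0.65])
print(mosaic.shape)  # (16, 16, 3)
```

The referring mask and expression would follow the anchor into its quadrant; only the image-tiling side is shown here.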
25. Hierarchical and Density-based Causal Clustering
- Author
- Kim, Kwangho, Kim, Jisu, Wasserman, Larry A., and Kennedy, Edward H.
- Subjects
- Statistics - Methodology, Computer Science - Machine Learning, Statistics - Machine Learning
- Abstract
Understanding treatment effect heterogeneity is vital for scientific and policy research. However, identifying and evaluating heterogeneous treatment effects pose significant challenges due to the typically unknown subgroup structure. Recently, a novel approach, causal k-means clustering, has emerged to assess heterogeneity of treatment effect by applying the k-means algorithm to unknown counterfactual regression functions. In this paper, we expand upon this framework by integrating hierarchical and density-based clustering algorithms. We propose plug-in estimators that are simple and readily implementable using off-the-shelf algorithms. Unlike k-means clustering, which requires the margin condition, our proposed estimators do not rely on strong structural assumptions on the outcome process. We go on to study their rate of convergence, and show that under minimal regularity conditions, the additional cost of causal clustering is essentially the estimation error of the outcome regression functions. Our findings significantly extend the capabilities of the causal clustering framework, thereby contributing to the progression of methodologies for identifying homogeneous subgroups in treatment response, consequently facilitating more nuanced and targeted interventions. The proposed methods also open up new avenues for clustering with generic pseudo-outcomes. We explore finite sample properties via simulation, and illustrate the proposed methods in voting and employment projection datasets.
- Comment
- 38th Conference on Neural Information Processing Systems (NeurIPS 2024)
- Published
- 2024
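The plug-in idea in entry 25 can be sketched end to end: estimate each unit's effect pseudo-outcome from outcome regressions, then hand those estimates to an off-the-shelf hierarchical clustering. This is a hedged toy; the binned group-mean regressions below stand in for whatever fitted outcome regressions the paper actually plugs in.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
n = 200
x = rng.uniform(0, 1, n)                  # covariate
t = rng.integers(0, 2, n)                 # treatment indicator
tau = np.where(x < 0.4, 0.0, 2.0)         # two true effect subgroups
y = tau * t + rng.normal(0, 0.1, n)       # observed outcome

# Crude outcome regressions mu1(x), mu0(x) via covariate bins (stand-ins
# for the fitted regression functions).
bins = np.minimum((x * 5).astype(int), 4)
mu1 = np.array([y[(bins == b) & (t == 1)].mean() for b in range(5)])
mu0 = np.array([y[(bins == b) & (t == 0)].mean() for b in range(5)])
pseudo = (mu1 - mu0)[bins]                # per-unit plug-in effect estimate

# Off-the-shelf hierarchical clustering on the pseudo-outcomes.
labels = fcluster(linkage(pseudo.reshape(-1, 1), "ward"), 2, "maxclust")
print(len(set(labels.tolist())))  # 2
```

Swapping `linkage`/`fcluster` for a density-based algorithm (e.g., DBSCAN) on `pseudo` gives the other variant the abstract mentions.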
26. PLATYPUS: Progressive Local Surface Estimator for Arbitrary-Scale Point Cloud Upsampling
- Author
- Kim, Donghyun, Kwon, Hyeonkyeong, Kim, Yumin, and Hwang, Seong Jae
- Subjects
- Computer Science - Computer Vision and Pattern Recognition
- Abstract
3D point clouds are increasingly vital for applications like autonomous driving and robotics, yet the raw data captured by sensors often suffer from noise and sparsity, creating challenges for downstream tasks. Consequently, point cloud upsampling becomes essential for improving density and uniformity, with recent approaches showing promise by projecting randomly generated query points onto the underlying surface of sparse point clouds. However, these methods often result in outliers, non-uniformity, and difficulties in handling regions with high curvature and intricate structures. In this work, we address these challenges by introducing the Progressive Local Surface Estimator (PLSE), which more effectively captures local features in complex regions through a curvature-based sampling technique that selectively targets high-curvature areas. Additionally, we incorporate a curriculum learning strategy that leverages the curvature distribution within the point cloud to naturally assess the sample difficulty, enabling curriculum learning on point cloud data for the first time. The experimental results demonstrate that our approach significantly outperforms existing methods, achieving high-quality, dense point clouds with superior accuracy and detail.
- Published
- 2024
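The curvature-based sampling in entry 26 presupposes a per-point curvature score. A common proxy (assumed here; the paper's exact estimator is not given in the abstract) is the surface variation from local PCA: the smallest eigenvalue of a point's neighborhood covariance divided by the eigenvalue sum, which is ~0 on flat patches and larger near spikes and edges.

```python
import numpy as np

def curvature_scores(points, k=8):
    """Surface-variation proxy for curvature: for each point, the smallest
    PCA eigenvalue of its k-nearest-neighborhood divided by the sum."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    nbrs = np.argsort(d2, axis=1)[:, :k]          # includes the point itself
    scores = np.empty(len(points))
    for i, idx in enumerate(nbrs):
        p = points[idx] - points[idx].mean(0)     # centered neighborhood
        w = np.linalg.eigvalsh(p.T @ p)           # ascending eigenvalues
        scores[i] = w[0] / w.sum()
    return scores

# Toy cloud: a flat plane plus one spike; the spike should score higher.
rng = np.random.default_rng(0)
plane = np.c_[rng.uniform(-1, 1, (100, 2)), np.zeros(100)]
pts = np.vstack([plane, np.array([[0.0, 0.0, 0.5]])])
scores = curvature_scores(pts)
print(scores[-1] > np.median(scores))  # True: the spike is high-curvature
```

Sampling query points proportionally to such scores concentrates upsampling effort on high-curvature regions, which is the gist of the selective targeting described above.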
27. Cityscape-Adverse: Benchmarking Robustness of Semantic Segmentation with Realistic Scene Modifications via Diffusion-Based Image Editing
- Author
- Suryanto, Naufal, Adiputra, Andro Aprila, Kadiptya, Ahmada Yusril, Le, Thi-Thu-Huong, Pratama, Derry, Kim, Yongsu, and Kim, Howon
- Subjects
- Computer Science - Computer Vision and Pattern Recognition
- Abstract
Recent advancements in generative AI, particularly diffusion-based image editing, have enabled the transformation of images into highly realistic scenes using only text instructions. This technology offers significant potential for generating diverse synthetic datasets to evaluate model robustness. In this paper, we introduce Cityscape-Adverse, a benchmark that employs diffusion-based image editing to simulate eight adverse conditions, including variations in weather, lighting, and seasons, while preserving the original semantic labels. We evaluate the reliability of diffusion-based models in generating realistic scene modifications and assess the performance of state-of-the-art CNN and Transformer-based semantic segmentation models under these challenging conditions. Additionally, we analyze which modifications have the greatest impact on model performance and explore how training on synthetic datasets can improve robustness in real-world adverse scenarios. Our results demonstrate that all tested models, particularly CNN-based architectures, experienced significant performance degradation under extreme conditions, while Transformer-based models exhibited greater resilience. We verify that models trained on Cityscape-Adverse show significantly enhanced resilience when applied to unseen domains. Code and datasets will be released at https://github.com/naufalso/cityscape-adverse.
- Comment
- 19 pages, under review, code and dataset will be available at https://github.com/naufalso/cityscape-adverse
- Published
- 2024
28. A Simple Remedy for Dataset Bias via Self-Influence: A Mislabeled Sample Perspective
- Author
-
Jung, Yeonsung, Song, Jaeyun, Yang, June Yong, Kim, Jin-Hwa, Kim, Sung-Yub, and Yang, Eunho
- Subjects
Computer Science - Machine Learning ,Computer Science - Computer Vision and Pattern Recognition - Abstract
Learning generalized models from biased data is an important undertaking toward fairness in deep learning. To address this issue, recent studies attempt to identify and leverage bias-conflicting samples free from spurious correlations without prior knowledge of bias or an unbiased set. However, spurious correlation remains an ongoing challenge, primarily due to the difficulty in precisely detecting these samples. In this paper, inspired by the similarities between mislabeled samples and bias-conflicting samples, we approach this challenge from a novel perspective of mislabeled sample detection. Specifically, we delve into Influence Function, one of the standard methods for mislabeled sample detection, for identifying bias-conflicting samples and propose a simple yet effective remedy for biased models by leveraging them. Through comprehensive analysis and experiments on diverse datasets, we demonstrate that our new perspective can boost the precision of detection and rectify biased models effectively. Furthermore, our approach is complementary to existing methods, showing performance improvement even when applied to models that have already undergone recent debiasing techniques.
- Published
- 2024
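The self-influence idea behind this remedy can be made concrete on a toy model — here logistic regression rather than a deep network, with all names hypothetical. Each sample is scored by the influence of a sample on its own loss, g_i^T H^{-1} g_i, where g_i is its loss gradient and H the (regularized) Hessian; mislabeled samples, which the model strains to fit, tend to score highest:

```python
import numpy as np

def fit_logreg(X, y, l2=1e-2, iters=200, lr=0.5):
    """Plain gradient-descent logistic regression (bias folded into X)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * (X.T @ (p - y) / len(y) + l2 * w)
    return w

def self_influence(X, y, w, l2=1e-2):
    """Score each sample by g_i^T H^{-1} g_i. High scores flag samples the
    model strains to fit -- e.g. mislabeled or bias-conflicting ones."""
    p = 1 / (1 + np.exp(-X @ w))
    G = (p - y)[:, None] * X                           # per-sample gradients
    S = p * (1 - p)
    H = (X.T * S) @ X / len(y) + l2 * np.eye(X.shape[1])
    return np.einsum('ij,jk,ik->i', G, np.linalg.inv(H), G)
```

With one deliberately flipped label in a well-separated two-cluster dataset, the flipped sample receives the top self-influence score.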
29. CleaR: Towards Robust and Generalized Parameter-Efficient Fine-Tuning for Noisy Label Learning
- Author
-
Kim, Yeachan, Kim, Junho, and Lee, SangKeun
- Subjects
Computer Science - Machine Learning ,Computer Science - Artificial Intelligence - Abstract
Parameter-efficient fine-tuning (PEFT) has enabled the efficient optimization of cumbersome language models in real-world settings. However, as datasets in such environments often contain noisy labels that adversely affect performance, PEFT methods are inevitably exposed to noisy labels. Despite this challenge, the adaptability of PEFT to noisy environments remains underexplored. To bridge this gap, we investigate various PEFT methods under noisy labels. Interestingly, our findings reveal that PEFT has difficulty in memorizing noisy labels due to its inherently limited capacity, resulting in robustness. However, we also find that such limited capacity simultaneously makes PEFT more vulnerable to interference of noisy labels, impeding the learning of clean samples. To address this issue, we propose Clean Routing (CleaR), a novel routing-based PEFT approach that adaptively activates PEFT modules. In CleaR, PEFT modules are preferentially exposed to clean data while bypassing the noisy ones, thereby minimizing the noisy influence. To verify the efficacy of CleaR, we perform extensive experiments on diverse configurations of noisy labels. The results convincingly demonstrate that CleaR leads to substantially improved performance in noisy environments., Comment: Published at ACL 2024 Main Conference
- Published
- 2024
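The routing mechanism can be caricatured with a small-loss heuristic — a deliberately simplified, hypothetical stand-in for CleaR's learned routing, shown only to make the "expose PEFT modules to clean data, bypass noisy data" idea concrete:

```python
import numpy as np

def clean_routing_mask(losses, keep_ratio=0.7):
    """Small-loss routing heuristic: treat the lowest-loss fraction of a
    batch as likely-clean and activate the PEFT modules only for those
    samples, letting the suspected-noisy rest bypass them."""
    k = max(1, int(len(losses) * keep_ratio))
    threshold = np.sort(losses)[k - 1]
    return losses <= threshold
```

In a training loop the mask would gate which samples pass through the adapter modules; CleaR's actual routing is learned and adaptive rather than a fixed quantile.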
30. Measurement of the time-integrated CP asymmetry in $D^{0}\rightarrow K^{0}_{S}K^{0}_{S}$ decays using Belle and Belle II data
- Author
-
Belle, Collaborations, Belle II, Adachi, I., Aggarwal, L., Ahmed, H., Aihara, H., Akopov, N., Aloisio, A., Althubiti, N., Ky, N. Anh, Asner, D. M., Atmacan, H., Aushev, V., Aversano, M., Ayad, R., Babu, V., Baghel, N. K., Bahinipati, S., Bambade, P., Banerjee, Sw., Bansal, S., Barrett, M., Bartl, M., Baudot, J., Beaubien, A., Becker, J., Bennett, J. V., Bertacchi, V., Bertemes, M., Bertholet, E., Bessner, M., Bettarini, S., Bhuyan, B., Biswas, D., Bobrov, A., Bodrov, D., Bolz, A., Boschetti, A., Bozek, A., Bračko, M., Branchini, P., Briere, R. A., Browder, T. E., Budano, A., Bussino, S., Campagna, Q., Campajola, M., Casarosa, G., Cecchi, C., Cerasoli, J., Chang, M. -C., Chang, P., Cheaib, R., Cheema, P., Chen, C., Cheon, B. G., Chilikin, K., Chirapatpimol, K., Cho, H. -E., Cho, K., Cho, S. -J., Choi, S. -K., Choudhury, S., Cochran, J., Corona, L., Cui, J. X., Das, S., De La Cruz-Burelo, E., De La Motte, S. A., De Pietro, G., de Sangro, R., Destefanis, M., Di Canto, A., Di Capua, F., Dingfelder, J., Doležal, Z., Dong, T. V., Dorigo, M., Dossett, D., Dujany, G., Ecker, P., Eppelt, J., Feichtinger, P., Ferber, T., Fillinger, T., Finck, C., Finocchiaro, G., Fodor, A., Forti, F., Fulsom, B. G., Gabrielli, A., Ganiev, E., Gaudino, G., Gaur, V., Gaz, A., Gellrich, A., Ghevondyan, G., Ghosh, D., Ghumaryan, H., Giakoustidis, G., Giordano, R., Giri, A., Gironell, P. Gironella, Glazov, A., Gobbo, B., Godang, R., Goldenzweig, P., Gradl, W., Graziani, E., Greenwald, D., Gruberová, Z., Guan, Y., Gudkova, K., Haide, I., Hara, T., Hayasaka, K., Hayashii, H., Hazra, S., Hearty, C., Hedges, M. T., Heidelbach, A., de la Cruz, I. Heredia, Villanueva, M. Hernández, Higuchi, T., Hoek, M., Hohmann, M., Hoppe, R., Hsu, C. -L., Humair, T., Iijima, T., Inami, K., Ipsita, N., Ishikawa, A., Itoh, R., Iwasaki, M., Jacobs, W. W., Jaffe, D. E., Jang, E. -J., Ji, Q. P., Jia, S., Jin, Y., Johnson, A., Joo, K. K., Junkerkalefeld, H., Kaliyar, A. 
B., Kandra, J., Karyan, G., Keil, F., Kiesling, C., Kim, C. -H., Kim, D. Y., Kim, J. -Y., Kim, K. -H., Kim, Y. -K., Kinoshita, K., Kodyš, P., Koga, T., Kohani, S., Kojima, K., Korobov, A., Korpar, S., Kovalenko, E., Kowalewski, R., Križan, P., Krokovny, P., Kuhr, T., Kumara, K., Kunigo, T., Kuzmin, A., Kwon, Y. -J., Lacaprara, S., Lai, Y. -T., Lalwani, K., Lam, T., Lange, J. S., Lau, T. S., Laurenza, M., Leboucher, R., Diberder, F. R. Le, Lee, M. J., Lemettais, C., Leo, P., Li, C., Li, L. K., Li, Q. M., Li, W. Z., Li, Y. B., Liao, Y. P., Libby, J., Liu, M. H., Liu, Q. Y., Liu, Y., Liu, Z. Q., Liventsev, D., Longo, S., Lueck, T., Lyu, C., Madaan, C., Maggiora, M., Maharana, S. P., Maiti, R., Mancinelli, G., Manfredi, R., Manoni, E., Mantovano, M., Marcantonio, D., Marcello, S., Marinas, C., Martellini, C., Martens, A., Martini, A., Martinov, T., Massaccesi, L., Masuda, M., Matvienko, D., Maurya, S. K., Maushart, M., McKenna, J. A., Mehta, R., Meier, F., Merola, M., Miller, C., Mirra, M., Mitra, S., Miyabayashi, K., Mohanty, G. B., Mondal, S., Moneta, S., Moser, H. -G., Mussa, R., Nakamura, I., Nakao, M., Nakazawa, Y., Naruki, M., Natkaniec, Z., Natochii, A., Nayak, M., Nazaryan, G., Neu, M., Nishida, S., Ogawa, S., Ono, H., Oxford, E. R., Pakhlova, G., Pardi, S., Parham, K., Park, H., Park, J., Park, K., Park, S. -H., Paschen, B., Passeri, A., Patra, S., Pedlar, T. K., Peruzzi, I., Peschke, R., Piccolo, M., Piilonen, L. E., Podesta-Lerma, P. L. M., Podobnik, T., Praz, C., Prell, S., Prencipe, E., Prim, M. T., Purwar, H., Raiz, S., Rauls, N., Rehman, J. U., Reif, M., Reiter, S., Reuter, L., Herrmann, D. Ricalde, Ripp-Baudot, I., Rizzo, G., Roehrken, M., Roney, J. M., Rostomyan, A., Rout, N., Sanders, D. A., Sandilya, S., Santelj, L., Savinov, V., Scavino, B., Schnepf, M., Schwanda, C., Seino, Y., Selce, A., Senyo, K., Serrano, J., Sevior, M. E., Sfienti, C., Shan, W., Shi, X. D., Shiu, J. -G., Shtol, D., Shwartz, B., Sibidanov, A., Simon, F., Skorupa, J., Sobie, R. 
J., Sobotzik, M., Soffer, A., Sokolov, A., Solovieva, E., Spataro, S., Spruck, B., Starič, M., Stavroulakis, P., Stefkova, S., Stroili, R., Strube, J., Sumihama, M., Sumisawa, K., Svidras, H., Takizawa, M., Tamponi, U., Tanida, K., Tenchini, F., Tittel, O., Tiwary, R., Torassa, E., Trabelsi, K., Ueda, I., Uglov, T., Unger, K., Unno, Y., Uno, K., Uno, S., Urquijo, P., Ushiroda, Y., Vahsen, S. E., van Tonder, R., Varvell, K. E., Veronesi, M., Vinokurova, A., Vismaya, V. S., Vitale, L., Vobbilisetti, V., Volpe, R., Wakai, M., Wallner, S., Wang, M. -Z., Warburton, A., Watanabe, M., Watanuki, S., Wessel, C., Won, E., Yabsley, B. D., Yamada, S., Yan, W., Yelton, J., Yin, J. H., Yoshihara, K., Yuan, J., Zani, L., Zhang, B., Zhilich, V., Zhou, J. S., Zhou, Q. D., Zhu, L., Zhukova, V. I., and Žlebčík, R.
- Subjects
High Energy Physics - Experiment - Abstract
We measure the time-integrated CP asymmetry in $D^{0} \rightarrow K^{0}_{S}K^{0}_{S}$ decays reconstructed in $e^{+}e^{-} \rightarrow c\overline{c}$ events collected by the Belle and Belle II experiments. The corresponding data samples have integrated luminosities of 980 fb$^{-1}$ and 428 fb$^{-1}$, respectively. The $D^{0}$ decays are required to originate from the $D^{*+} \rightarrow D^{0}\pi^{+}$ decay, which determines the charm flavor at production time. A control sample of $D^{0} \rightarrow K^{+}K^{-}$ decays is used to correct for production and detection asymmetries. The result, $(-1.4\pm1.3{\rm(stat)}\pm0.1{\rm (syst)})\%$, is consistent with previous determinations and with CP symmetry., Comment: 10 pages, 3 figures. arXiv admin note: text overlap with arXiv:2410.22961
- Published
- 2024
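As a quick sanity check on the quoted result (an illustrative computation, not part of the paper), the statistical and systematic uncertainties can be combined in quadrature to see how far the central value sits from CP symmetry:

```python
import math

a_cp, stat, syst = -1.4, 1.3, 0.1          # per cent, from the abstract

sigma_tot = math.hypot(stat, syst)         # quadrature combination
n_sigma = abs(a_cp) / sigma_tot            # distance from A_CP = 0

# about 1.1 sigma from zero: consistent with CP symmetry, as stated
print(f"sigma_tot = {sigma_tot:.2f}%  ->  {n_sigma:.2f} sigma")
```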
31. Constant Acceleration Flow
- Author
-
Park, Dogyun, Lee, Sojin, Kim, Sihyeon, Lee, Taehoon, Hong, Youngjoon, and Kim, Hyunwoo J.
- Subjects
Computer Science - Machine Learning ,Computer Science - Artificial Intelligence ,Computer Science - Computer Vision and Pattern Recognition - Abstract
Rectified flow and reflow procedures have significantly advanced fast generation by progressively straightening ordinary differential equation (ODE) flows. They operate under the assumption that image and noise pairs, known as couplings, can be approximated by straight trajectories with constant velocity. However, we observe that modeling with constant velocity and using reflow procedures have limitations in accurately learning straight trajectories between pairs, resulting in suboptimal performance in few-step generation. To address these limitations, we introduce Constant Acceleration Flow (CAF), a novel framework based on a simple constant acceleration equation. CAF introduces acceleration as an additional learnable variable, allowing for more expressive and accurate estimation of the ODE flow. Moreover, we propose two techniques to further improve estimation accuracy: initial velocity conditioning for the acceleration model and a reflow process for the initial velocity. Our comprehensive studies on toy datasets, CIFAR-10, and ImageNet 64x64 demonstrate that CAF outperforms state-of-the-art baselines for one-step generation. We also show that CAF dramatically improves few-step coupling preservation and inversion over Rectified flow. Code is available at https://github.com/mlvlab/CAF.
- Published
- 2024
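The constant-acceleration ansatz at the core of CAF is the kinematic equation x_t = x_0 + v_0 t + a t^2 / 2. A toy sketch (with v_0 and a given rather than produced by learned networks, and all names illustrative) shows that a stepwise sampler using exact kinematic updates lands on the closed-form trajectory:

```python
import numpy as np

def caf_trajectory(x0, v0, a, t):
    """Closed-form constant-acceleration trajectory:
    x_t = x0 + v0*t + 0.5*a*t^2."""
    return x0 + v0 * t + 0.5 * a * t ** 2

def caf_sample(x0, v0, a, n_steps=4):
    """Stepwise sampler from t=0 to t=1. Each step uses the exact kinematic
    update, so for truly constant acceleration the sampler reproduces the
    closed-form trajectory regardless of n_steps."""
    x, v = x0.astype(float), v0.astype(float)
    dt = 1.0 / n_steps
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * a * dt ** 2
        v = v + a * dt
    return x
```

In CAF itself, acceleration is an additional learnable variable; the point of the sketch is only that the second-order parameterization removes the constant-velocity restriction of rectified flow.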
32. C2A: Client-Customized Adaptation for Parameter-Efficient Federated Learning
- Author
-
Kim, Yeachan, Kim, Junho, Mok, Wing-Lam, Park, Jun-Hyung, and Lee, SangKeun
- Subjects
Computer Science - Machine Learning ,Computer Science - Artificial Intelligence ,Computer Science - Cryptography and Security - Abstract
Despite the versatility of pre-trained language models (PLMs) across domains, their large memory footprints pose significant challenges in federated learning (FL), where the training model has to be distributed between a server and clients. One potential solution to bypass such constraints might be the use of parameter-efficient fine-tuning (PEFT) in the context of FL. However, we have observed that typical PEFT tends to severely suffer from heterogeneity among clients in FL scenarios, resulting in unstable and slow convergence. In this paper, we propose Client-Customized Adaptation (C2A), a novel hypernetwork-based FL framework that generates client-specific adapters by conditioning on client information. Because hypernetworks can generate customized weights by learning to adapt to the different characteristics of their inputs, C2A can maximize the utility of shared model parameters while minimizing the divergence caused by client heterogeneity. To verify the efficacy of C2A, we perform extensive evaluations on FL scenarios involving heterogeneity in label and language distributions. Comprehensive evaluation results clearly support the superiority of C2A in terms of both efficiency and effectiveness in FL scenarios., Comment: Published at Findings of ACL 2023
- Published
- 2024
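The hypernetwork mechanism can be sketched in a few lines — a toy, assumption-laden illustration (random weights, no training, hypothetical names) of how one shared network can emit client-specific adapter parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

class HyperAdapter:
    """Toy hypernetwork for client-customized adapters (a sketch of the C2A
    idea, not its implementation): one shared linear hypernetwork maps a
    client descriptor to the weights of a small residual adapter d -> r -> d."""

    def __init__(self, client_dim, d, r):
        self.d, self.r = d, r
        # shared parameters: one matrix emits both adapter projections
        self.W = rng.normal(0.0, 0.05, (client_dim, 2 * d * r))

    def adapter_for(self, client_vec):
        flat = client_vec @ self.W
        down = flat[: self.d * self.r].reshape(self.d, self.r)
        up = flat[self.d * self.r:].reshape(self.r, self.d)
        return down, up

    def forward(self, x, client_vec):
        down, up = self.adapter_for(client_vec)
        return x + np.tanh(x @ down) @ up       # residual adapter
```

Only the hypernetwork's parameters would be shared in FL; each client's adapter is generated on the fly from its descriptor, which is how the divergence from client heterogeneity is absorbed.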
33. Impact of High-Brightness Entangled Photon Pairs on CHSH Inequality Experiment
- Author
-
Kim, Jin-Woo, Lim, Suseong, Kim, Heonoh, and Rhee, June Koo Kevin
- Subjects
Quantum Physics - Abstract
Verifying the violation of Bell's inequality is one of the most representative methods to demonstrate that entangled photon pairs prepared in a quantum optics-based system exhibit quantum properties. While experiments on Bell inequality violations have been theoretically well-established and extensively conducted to implement various quantum information technologies in laboratory settings, mathematical modeling for accurately predicting the distribution of high-intensity entangled photon pairs in high-loss environments remains an issue that requires further research. As the brightness of the entangled photon pairs increases, the influence of multi-photon effects becomes more significant, leading to a decrease in the CHSH value $S$ and also a reduction in the standard deviation of the CHSH value $\Delta S$. Therefore, a new analysis of the $(S-2)/\Delta S$ value is required to more precisely confirm the degree of CHSH inequality violation including the reliability of $S$. In this paper, we propose a mathematical model to predict the $(S-2)/\Delta S$ value as a function of the brightness of the entangled photon pair source, and we also suggest the need to optimize the brightness of this source. Additionally, we provide experimental evidence supporting this model. The experiment confirms that when the mean photon number is $\mu=0.026$ in an entanglement distribution setup with a total loss of $-19.03$ dB, the CHSH value drops to 2.69, while the $(S-2)/\Delta S$ value increases to 60.95., Comment: 11 pages, 1 table, and 4 figures
- Published
- 2024
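For reference, the ideal CHSH combination behind the S values quoted above — a textbook computation, not the paper's multi-photon model. With perfect polarization correlations E(a, b) = cos 2(a - b), the optimal angle settings give S = 2*sqrt(2) (the Tsirelson bound), and a reduced interference visibility V (e.g. from multi-photon emission at high brightness) rescales every correlator, dragging S down to V * 2*sqrt(2):

```python
import numpy as np

def E(a, b):
    """Ideal polarization correlation for a maximally entangled photon pair
    (unit visibility, no multi-photon noise): E(a, b) = cos 2(a - b)."""
    return np.cos(2 * (a - b))

def chsh(a, ap, b, bp, visibility=1.0):
    """CHSH combination; reduced visibility V rescales every correlator,
    so the ideal S = 2*sqrt(2) degrades to V * 2*sqrt(2)."""
    return visibility * (E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp))

deg = np.pi / 180
angles = (0.0, 45 * deg, 22.5 * deg, 67.5 * deg)   # optimal a, a', b, b'
S_ideal = chsh(*angles)                            # 2*sqrt(2) ~ 2.83
```

Violation of the classical bound S <= 2 then requires V > 1/sqrt(2), which is why optimizing source brightness against multi-photon contamination matters.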
34. Learning State Preparation Circuits for Quantum Phases of Matter
- Author
-
Kim, Hyun-Soo, Kim, Isaac H., and Ranard, Daniel
- Subjects
Quantum Physics ,Condensed Matter - Strongly Correlated Electrons - Abstract
Many-body ground state preparation is an important subroutine used in the simulation of physical systems. In this paper, we introduce a flexible and efficient framework for obtaining a state preparation circuit for a large class of many-body ground states. We introduce polynomial-time classical algorithms that take reduced density matrices over $\mathcal{O}(1)$-sized balls as inputs, and output a circuit that prepares the global state. We introduce algorithms applicable to (i) short-range entangled states (e.g., states prepared by shallow quantum circuits in any number of dimensions, and more generally, invertible states) and (ii) long-range entangled ground states (e.g., the toric code on a disk). Both algorithms can provably find a circuit whose depth is asymptotically optimal. Our approach uses a variant of the quantum Markov chain condition that remains robust against constant-depth circuits. The robustness of this condition makes our method applicable to a large class of states, whilst ensuring a classically tractable optimization landscape., Comment: 32 pages, 25 figures + 4 page appendix; corrected typos, added comments based on arXiv:2407.07754, fixed broken links
- Published
- 2024
35. Classical eikonal from Magnus expansion
- Author
-
Kim, Joon-Hwi, Kim, Jung-Wook, Kim, Sungsoo, and Lee, Sangmin
- Subjects
High Energy Physics - Theory - Abstract
In a classical scattering problem, the classical eikonal is defined as the generator of the canonical transformation that maps in-states to out-states. It can be regarded as the classical limit of the log of the quantum S-matrix. In a classical analog of the Born approximation in quantum mechanics, the classical eikonal admits an expansion in oriented tree graphs, where oriented edges denote retarded/advanced worldline propagators. The Magnus expansion, which takes the log of a time-ordered exponential integral, offers an efficient method to compute the coefficients of the tree graphs to all orders. We exploit a Hopf algebra structure behind the Magnus expansion to develop a fast algorithm which can compute the tree coefficients up to the 12th order (over half a million trees) in less than an hour. In a relativistic setting, our methods can be applied to the post-Minkowskian (PM) expansion for gravitational binaries. We demonstrate the methods by computing the 3PM eikonal and find agreement with previous results based on amplitude methods., Comment: 49 pages, 21 figures
- Published
- 2024
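For context, the first terms of the standard Magnus expansion of a time-ordered exponential (textbook form, not specific to this paper's Hopf-algebraic treatment) are:

```latex
% log of the time-ordered exponential
\mathcal{T}\exp\!\Big(\int_0^t A(s)\,ds\Big) = e^{\Omega(t)}, \qquad
\Omega(t) = \sum_{n\ge 1} \Omega_n(t)

% first two terms; all higher orders are nested commutators
\Omega_1(t) = \int_0^t dt_1\, A(t_1), \qquad
\Omega_2(t) = \tfrac{1}{2} \int_0^t dt_1 \int_0^{t_1} dt_2\, \big[A(t_1), A(t_2)\big]
```

The nested-commutator structure of the higher terms is where the oriented-tree combinatorics enters, and organizing it through a Hopf algebra is what makes the 12th-order computation tractable.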
36. Model-independent measurement of $D^0$-$\overline{D}{}^0$ mixing parameters in $D^0\rightarrow K^0_{S}\pi^+\pi^-$ decays at Belle and Belle II
- Author
-
Belle, Collaborations, Belle II, Adachi, I., Aggarwal, L., Ahmed, H., Aihara, H., Akopov, N., Aloisio, A., Althubiti, N., Ky, N. Anh, Asner, D. M., Atmacan, H., Aushev, V., Aversano, M., Ayad, R., Baghel, N. K., Bambade, P., Banerjee, Sw., Bansal, S., Barrett, M., Bartl, M., Baudot, J., Beaubien, A., Becker, J., Bennett, J. V., Bertacchi, V., Bertemes, M., Bertholet, E., Bessner, M., Bettarini, S., Bhuyan, B., Biswas, D., Bodrov, D., Bolz, A., Bondar, A., Boschetti, A., Bozek, A., Bračko, M., Branchini, P., Briere, R. A., Browder, T. E., Budano, A., Bussino, S., Campagna, Q., Campajola, M., Casarosa, G., Cecchi, C., Chang, P., Cheaib, R., Cheema, P., Cheon, B. G., Chilikin, K., Chirapatpimol, K., Cho, H. -E., Cho, K., Cho, S. -J., Choi, S. -K., Choudhury, S., Cochran, J., Corona, L., Cui, J. X., Das, S., De La Cruz-Burelo, E., De La Motte, S. A., De Pietro, G., de Sangro, R., Destefanis, M., Di Canto, A., Di Capua, F., Dingfelder, J., Doležal, Z., Dong, T. V., Dorigo, M., Dossett, D., Dujany, G., Ecker, P., Epifanov, D., Eppelt, J., Feichtinger, P., Ferber, T., Fillinger, T., Finck, C., Finocchiaro, G., Fodor, A., Forti, F., Fulsom, B. G., Gabrielli, A., Ganiev, E., Garcia-Hernandez, M., Garg, R., Gaudino, G., Gaur, V., Gaz, A., Gellrich, A., Ghevondyan, G., Ghosh, D., Ghumaryan, H., Giakoustidis, G., Giordano, R., Giri, A., Gironell, P. Gironella, Glazov, A., Gobbo, B., Godang, R., Goldenzweig, P., Gong, G., Gradl, W., Graziani, E., Greenwald, D., Gruberová, Z., Gudkova, K., Haide, I., Hara, T., Hayasaka, K., Hayashii, H., Hazra, S., Hearty, C., Hedges, M. T., Heidelbach, A., de la Cruz, I. Heredia, Higuchi, T., Hoek, M., Hohmann, M., Hoppe, R., Hsu, C. -L., Humair, T., Iijima, T., Inami, K., Ipsita, N., Itoh, R., Iwasaki, M., Jacobs, W. W., Jang, E. -J., Ji, Q. P., Jin, Y., Johnson, A., Junkerkalefeld, H., Kaliyar, A. B., Kandra, J., Karyan, G., Keil, F., Kiesling, C., Kim, C. -H., Kim, D. Y., Kim, J. -Y., Kim, K. -H., Kim, Y. 
-K., Kinoshita, K., Kodyš, P., Koga, T., Kohani, S., Kojima, K., Korobov, A., Kovalenko, E., Kowalewski, R., Križan, P., Krokovny, P., Kuhr, T., Kumar, R., Kumara, K., Kunigo, T., Kuzmin, A., Kwon, Y. -J., Lalwani, K., Lam, T., Lange, J. S., Lau, T. S., Leboucher, R., Diberder, F. R. Le, Lee, M. J., Lemettais, C., Leo, P., Li, C., Li, L. K., Li, Q. M., Li, W. Z., Li, Y., Li, Y. B., Libby, J., Liu, M. H., Liu, Q. Y., Liu, Z. Q., Liventsev, D., Longo, S., Lueck, T., Lyu, C., Madaan, C., Maggiora, M., Maiti, R., Mancinelli, G., Manfredi, R., Manoni, E., Mantovano, M., Marcello, S., Marinas, C., Martellini, C., Martens, A., Martini, A., Martinov, T., Massaccesi, L., Masuda, M., Matvienko, D., Maurya, S. K., Maushart, M., McKenna, J. A., Meier, F., Merola, M., Miller, C., Mirra, M., Mitra, S., Miyabayashi, K., Mohanty, G. B., Mondal, S., Moneta, S., Moser, H. -G., Mussa, R., Nakamura, I., Nakao, M., Nakazawa, H., Nakazawa, Y., Naruki, M., Natkaniec, Z., Natochii, A., Nayak, M., Nazaryan, G., Neu, M., Nishida, S., Ogawa, S., Ono, H., Oxford, E. R., Pakhlova, G., Pardi, S., Parham, K., Park, H., Park, J., Park, K., Park, S. -H., Paschen, B., Passeri, A., Patra, S., Pedlar, T. K., Peschke, R., Piilonen, L. E., Podesta-Lerma, P. L. M., Podobnik, T., Praz, C., Prell, S., Prencipe, E., Prim, M. T., Purwar, H., Raiz, S., Rehman, J. U., Reif, M., Reiter, S., Reuter, L., Herrmann, D. Ricalde, Ripp-Baudot, I., Rizzo, G., Roehrken, M., Roney, J. M., Rostomyan, A., Rout, N., Sanders, D. A., Sandilya, S., Santelj, L., Savinov, V., Scavino, B., Schwanda, C., Schwartz, A. J., Seino, Y., Selce, A., Senyo, K., Serrano, J., Sevior, M. E., Sfienti, C., Shan, W., Shen, C. P., Shi, X. D., Shillington, T., Shiu, J. -G., Shtol, D., Sibidanov, A., Simon, F., Skorupa, J., Sobie, R. 
J., Sobotzik, M., Soffer, A., Sokolov, A., Solovieva, E., Spataro, S., Spruck, B., Starič, M., Stavroulakis, P., Stefkova, S., Stroili, R., Sumihama, M., Sumisawa, K., Svidras, H., Takizawa, M., Tanida, K., Tenchini, F., Tittel, O., Tiwary, R., Torassa, E., Trabelsi, K., Uchida, M., Ueda, I., Uglov, T., Unger, K., Unno, Y., Uno, K., Uno, S., Urquijo, P., Vahsen, S. E., van Tonder, R., Varvell, K. E., Veronesi, M., Vinokurova, A., Vismaya, V. S., Vitale, L., Volpe, R., Wakai, M., Wallner, S., Wang, M. -Z., Warburton, A., Watanabe, M., Watanuki, S., Wessel, C., Yabsley, B. D., Yamada, S., Yan, W., Yin, J. H., Yoshihara, K., Yuan, J., Zhilich, V., Zhou, J. S., Zhou, Q. D., Zhu, L., Zhukova, V. I., and Žlebčík, R.
- Subjects
High Energy Physics - Experiment - Abstract
We perform a model-independent measurement of the $D^0$-$\overline{D}{}^0$ mixing parameters using samples of $e^+e^-$-collision data collected by the Belle and Belle II experiments that have integrated luminosities of $951\ \text{fb}^{-1}$ and $408\ \text{fb}^{-1}$, respectively. Approximately $2.05\times10^6$ neutral $D$ mesons are reconstructed in the $D^0\rightarrow K^0_{S}\pi^+\pi^-$ channel, with the neutral $D$ flavor tagged by the charge of the pion in the $D^{*+}\rightarrow D^0\pi^+$ decay. Assuming charge-parity symmetry, the mixing parameters are measured to be $ x = (4.0\pm1.7\pm0.4)\times 10^{-3} $ and $ y = (2.9\pm1.4\pm0.3)\times 10^{-3}$, where the first uncertainties are statistical and the second systematic. The results are consistent with previous determinations.
- Published
- 2024
37. Simulation-Free Training of Neural ODEs on Paired Data
- Author
-
Kim, Semin, Yoo, Jaehoon, Kim, Jinwoo, Cha, Yeonwoo, Kim, Saehoon, and Hong, Seunghoon
- Subjects
Computer Science - Machine Learning - Abstract
In this work, we investigate a method for simulation-free training of Neural Ordinary Differential Equations (NODEs) for learning deterministic mappings between paired data. Despite the analogy of NODEs as continuous-depth residual networks, their application in typical supervised learning tasks has not been popular, mainly due to the large number of function evaluations required by ODE solvers and numerical instability in gradient estimation. To alleviate this problem, we employ the flow matching framework for simulation-free training of NODEs, which directly regresses the parameterized dynamics function to a predefined target velocity field. Contrary to generative tasks, however, we show that applying flow matching directly between paired data can often lead to an ill-defined flow that breaks the coupling of the data pairs (e.g., due to crossing trajectories). We propose a simple extension that applies flow matching in the embedding space of data pairs, where the embeddings are learned jointly with the dynamic function to ensure the validity of the flow which is also easier to learn. We demonstrate the effectiveness of our method on both regression and classification tasks, where our method outperforms existing NODEs with a significantly lower number of function evaluations. The code is available at https://github.com/seminkim/simulation-free-node.
- Published
- 2024
38. A Host-SSD Collaborative Write Accelerator for LSM-Tree-Based Key-Value Stores
- Author
-
Kim, KiHwan, Chung, Hyunsun, Ahn, Seonghoon, Park, Junhyeok, Jamil, Safdar, Byun, Hongsu, Lee, Myungcheol, Choi, Jinchun, and Kim, Youngjae
- Subjects
Computer Science - Hardware Architecture - Abstract
Log-Structured Merge (LSM) tree-based Key-Value Stores (KVSs) are widely adopted for their high performance in write-intensive environments, but they often face performance degradation due to write stalls during compaction. Prior solutions, such as regulating I/O traffic or using multiple compaction threads, can cause unexpected drops in throughput or increase host CPU usage, while hardware-based approaches using FPGA, GPU, and DPU aimed at reducing compaction duration introduce additional hardware costs. In this study, we propose KVACCEL, a novel hardware-software co-design framework that eliminates write stalls by leveraging a dual-interface SSD. KVACCEL allocates logical NAND flash space to support both block and key-value interfaces, using the key-value interface as a temporary write buffer during write stalls. This strategy significantly reduces write stalls, optimizes resource usage, and ensures consistency between the host and device by implementing an in-device LSM-based write buffer with an iterator-based range scan mechanism. Our extensive evaluation shows that for write-intensive workloads, KVACCEL outperforms ADOC by up to 1.17x in terms of throughput and performance-to-CPU-utilization efficiency. For mixed read-write workloads, both demonstrate comparable performance., Comment: 11 pages, 14 figures
- Published
- 2024
39. Highly tunable moiré superlattice potentials in twisted hexagonal boron nitrides
- Author
-
Han, Kwanghee, Cho, Minhyun, Kim, Taehyung, Kim, Seung Tae, Kim, Suk Hyun, Park, Sang Hwa, Yang, Sang Mo, Watanabe, Kenji, Taniguchi, Takashi, Menon, Vinod, and Kim, Young Duck
- Subjects
Condensed Matter - Mesoscale and Nanoscale Physics ,Condensed Matter - Materials Science ,Physics - Applied Physics - Abstract
Moiré superlattices of twisted hexagonal boron nitride (hBN) have emerged as an advanced, atomically thin van der Waals platform for interfacial ferroelectricity. Nanoscale periodic ferroelectric moiré domains with out-of-plane potentials in twisted hBN allow remote Coulomb superlattice potentials to be imposed on adjacent two-dimensional materials, tailoring their strongly correlated properties. New strategies for engineering moiré length, angle, and potential strength are therefore essential for developing programmable quantum materials and advanced twistronics devices. Here, we demonstrate the realization of twisted hBN-based moiré superlattice platforms and visualize the moiré domains and ferroelectric properties using Kelvin probe force microscopy (KPFM). We also report KPFM results on regular moiré superlattices over large areas, showing that uniform moiré structures can be reproduced with precise piezo-stage stacking control and heat annealing. We demonstrate the high tunability of twisted hBN moiré platforms and achieve cumulative multi-ferroelectric polarization and multi-level domains with multiple angle-mismatched interfaces. Additionally, we observe quasi-1D anisotropic moiré domains and present a higher-resolution analysis of the local built-in strain between adjacent hBN layers than conventional methods provide. Furthermore, we demonstrate in-situ manipulation of the moiré superlattice potential strength using femtosecond pulse laser irradiation, which induces optical-phonon-driven atomic displacement at the hBN moiré interfaces. Our results pave the way to precisely programmable moiré superlattice platforms and the investigation of strongly correlated physics in van der Waals heterostructures., Comment: 26 pages, 4 figures
- Published
- 2024
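The basic geometry behind this tunability is the moiré period formula for two identical lattices twisted by an angle θ, λ = a / (2 sin(θ/2)) — a standard relation, not taken from the paper; the hBN lattice constant of roughly 0.25 nm is an approximate literature value used only for illustration:

```python
import math

def moire_wavelength(a_nm, theta_deg):
    """Moire superlattice period for two identical lattices twisted by
    theta: lambda = a / (2 sin(theta/2)). For small angles, lambda ~ a/theta,
    so tiny twist changes produce large changes in the superlattice scale."""
    return a_nm / (2 * math.sin(math.radians(theta_deg) / 2))

A_HBN = 0.25                                # hBN lattice constant in nm (approx.)
lam_1deg = moire_wavelength(A_HBN, 1.0)     # ~14 nm at a 1-degree twist
```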
40. Auto-Intent: Automated Intent Discovery and Self-Exploration for Large Language Model Web Agents
- Author
-
Kim, Jaekyeom, Kim, Dong-Ki, Logeswaran, Lajanugen, Sohn, Sungryull, and Lee, Honglak
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence ,Computer Science - Machine Learning - Abstract
In this paper, we introduce Auto-Intent, a method to adapt a pre-trained large language model (LLM) as an agent for a target domain without direct fine-tuning; we focus empirically on web navigation tasks. Our approach first discovers the underlying intents from target-domain demonstrations in an unsupervised manner, expressed in a highly compact form (up to three words). With the extracted intents, we train an intent predictor to predict the next intent given the agent's past observations and actions. In particular, we propose a self-exploration approach in which the top-k most probable intent predictions are provided as hints to the pre-trained LLM agent, leading to enhanced decision-making capabilities. Auto-Intent substantially improves the performance of GPT-{3.5, 4} and Llama-3.1-{70B, 405B} agents on large-scale real-website navigation benchmarks from Mind2Web and on online navigation tasks from WebArena through its cross-benchmark generalization from Mind2Web., Comment: EMNLP 2024 Findings
- Published
- 2024
41. Unified Domain Generalization and Adaptation for Multi-View 3D Object Detection
- Author
-
Chang, Gyusam, Lee, Jiwon, Kim, Donghyun, Kim, Jinkyu, Lee, Dongwook, Ji, Daehyun, Jang, Sujin, and Kim, Sangpil
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
Recent advances in 3D object detection leveraging multi-view cameras have demonstrated their practical and economical value in various challenging vision tasks. However, typical supervised learning approaches face challenges in achieving satisfactory adaptation toward unseen and unlabeled target datasets (i.e., direct transfer) due to the inevitable geometric misalignment between the source and target domains. In practice, we also encounter constraints on resources for training models and collecting annotations for the successful deployment of 3D object detectors. In this paper, we propose Unified Domain Generalization and Adaptation (UDGA), a practical solution to mitigate those drawbacks. We first propose a Multi-view Overlap Depth Constraint that leverages the strong association between multi-view cameras, significantly alleviating geometric gaps due to perspective view changes. Then, we present a Label-Efficient Domain Adaptation approach to handle unfamiliar targets with significantly fewer labels (i.e., 1% and 5%), while preserving well-defined source knowledge for training efficiency. Overall, the UDGA framework enables stable detection performance in both source and target domains, effectively bridging inevitable domain gaps while demanding fewer annotations. We demonstrate the robustness of UDGA on large-scale benchmarks: nuScenes, Lyft, and Waymo, where our framework outperforms the current state-of-the-art methods., Comment: Accepted to NeurIPS 2024
- Published
- 2024
42. Rare-to-Frequent: Unlocking Compositional Generation Power of Diffusion Models on Rare Concepts with LLM Guidance
- Author
-
Park, Dongmin, Kim, Sebin, Moon, Taehong, Kim, Minkyu, Lee, Kangwook, and Cho, Jaewoong
- Subjects
Computer Science - Machine Learning ,Computer Science - Artificial Intelligence ,Computer Science - Computation and Language ,Computer Science - Computer Vision and Pattern Recognition - Abstract
State-of-the-art text-to-image (T2I) diffusion models often struggle to generate rare compositions of concepts, e.g., objects with unusual attributes. In this paper, we show that the compositional generation power of diffusion models on such rare concepts can be significantly enhanced by Large Language Model (LLM) guidance. We start with empirical and theoretical analysis, demonstrating that exposing frequent concepts relevant to the target rare concepts during the diffusion sampling process yields more accurate concept composition. Based on this, we propose a training-free approach, R2F, that plans and executes overall rare-to-frequent concept guidance throughout diffusion inference by leveraging the abundant semantic knowledge in LLMs. Our framework is flexible across any pre-trained diffusion models and LLMs, and can be seamlessly integrated with region-guided diffusion approaches. In extensive experiments on three datasets, including our newly proposed benchmark RareBench, which contains various prompts with rare compositions of concepts, R2F significantly surpasses existing models, including SD3.0 and FLUX, by up to 28.1%p in T2I alignment. Code is available at https://github.com/krafton-ai/Rare2Frequent.
- Published
- 2024
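The rare-to-frequent idea in entry 42's abstract can be caricatured as a prompt schedule over denoising steps: early steps see a frequent surrogate concept, later steps the rare target. A minimal sketch, in which the `rare_to_frequent_schedule` helper, the example prompts, and the fixed switch point are illustrative assumptions (the paper's actual planning is LLM-driven and stage-wise):

```python
def rare_to_frequent_schedule(rare_prompt, frequent_prompt,
                              num_steps, switch_frac=0.4):
    """Return one conditioning prompt per denoising step.

    Early steps are guided by the frequent surrogate concept; after
    switch_frac of the steps, guidance switches to the rare target.
    """
    switch = int(num_steps * switch_frac)
    return [frequent_prompt if t < switch else rare_prompt
            for t in range(num_steps)]

# Rare composition "a furry frog" borrows early guidance from the
# frequent composition "a furry dog".
sched = rare_to_frequent_schedule("a furry frog", "a furry dog",
                                  num_steps=10)
print(sched[0], "->", sched[-1])
```

In a real pipeline each entry of `sched` would condition the corresponding sampling step of a pre-trained diffusion model.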
43. AutoRAG: Automated Framework for optimization of Retrieval Augmented Generation Pipeline
- Author
-
Kim, Dongkyu, Kim, Byoungwook, Han, Donggeon, and Eibich, Matouš
- Subjects
Computer Science - Computation and Language ,H.4.0 - Abstract
Using LLMs (Large Language Models) in conjunction with external documents has made RAG (Retrieval-Augmented Generation) an essential technology. Numerous techniques and modules for RAG are being researched, but their performance can vary across different datasets. Finding RAG modules that perform well on a specific dataset is challenging. In this paper, we propose the AutoRAG framework, which automatically identifies suitable RAG modules for a given dataset. AutoRAG explores and approximates the optimal combination of RAG modules for the dataset. Additionally, we share the results of optimizing a dataset using AutoRAG. All experimental results and data are publicly available and can be accessed through our GitHub repository: https://github.com/Marker-Inc-Korea/AutoRAG_ARAGOG_Paper., Comment: 20 pages
- Published
- 2024
44. Do strong bars exhibit strong non-circular motions?
- Author
-
Kim, Taehyun, Gadotti, Dimitri A., Lee, Yun Hee, López-Cobá, Carlos, Kim, Woong-Tae, Kim, Minjin, and Park, Myeong-gu
- Subjects
Astrophysics - Astrophysics of Galaxies - Abstract
Galactic bars induce characteristic motions deviating from pure circular rotation, known as non-circular motions. As bars are non-axisymmetric structures, stronger bars are expected to show stronger non-circular motions. However, this has not yet been confirmed by observations. We use a bisymmetric model to account for the stellar kinematics of 14 barred galaxies obtained with the Multi-Unit Spectroscopic Explorer (MUSE) and characterize the degree of bar-driven non-circular motions. For the first time, we find tight relations between the bar strength (bar ellipticity and torque parameter) and the degree of stellar non-circular motions. We also find that bar strength is strongly associated with the stellar radial velocity driven by bars. Our results imply that stronger bars exhibit stronger non-circular motions. Non-circular motions beyond the bar are found to be weak, comprising less than 10% of the strength of the circular motions. We find that galaxies with a boxy/peanut (B/P) bulge exhibit a higher degree of non-circular motions and higher stellar radial velocity compared to galaxies without a B/P bulge, by 30-50%. However, this effect could be attributed to the presence of strong bars in galaxies with a B/P feature in our sample, which would naturally result in higher radial motions, rather than to B/P bulges themselves inducing stronger radial motions. More observational studies, utilizing both stellar and gaseous kinematics on statistically complete samples, along with numerical studies, are necessary to draw a comprehensive view of the impact that B/P bulges have on bar-driven non-circular motions., Comment: Accepted for publication in the Astrophysical Journal (ApJ). 23 pages, 10 figures, 1 table
- Published
- 2024
45. Rethinking Reconstruction-based Graph-Level Anomaly Detection: Limitations and a Simple Remedy
- Author
-
Kim, Sunwoo, Lee, Soo Yong, Bu, Fanchen, Kang, Shinhwan, Kim, Kyungho, Yoo, Jaemin, and Shin, Kijung
- Subjects
Computer Science - Machine Learning ,Computer Science - Social and Information Networks - Abstract
Graph autoencoders (Graph-AEs) learn representations of given graphs by aiming to accurately reconstruct them. A notable application of Graph-AEs is graph-level anomaly detection (GLAD), whose objective is to identify graphs with anomalous topological structures and/or node features compared to the majority of the graph population. Graph-AEs for GLAD regard graphs with a high mean reconstruction error (i.e., the mean of errors over all node pairs and/or nodes) as anomalies. Namely, the methods rest on the assumption that they would better reconstruct graphs with characteristics similar to the majority. We, however, report non-trivial counter-examples, a phenomenon we call reconstruction flip, and highlight the limitations of the existing Graph-AE-based GLAD methods. Specifically, we empirically and theoretically investigate when this assumption holds and when it fails. Through our analyses, we further argue that, while the reconstruction errors for a given graph are effective features for GLAD, leveraging multifaceted summaries of the reconstruction errors, beyond just the mean, can further strengthen the features. Thus, we propose a novel and simple GLAD method, named MUSE. The key innovation of MUSE is taking multifaceted summaries of reconstruction errors as graph features for GLAD. This surprisingly simple method obtains SOTA performance in GLAD, performing best overall among 14 methods across 10 datasets., Comment: Published as a conference paper at NeurIPS 2024
- Published
- 2024
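The core idea in entry 45's abstract, summarizing per-node reconstruction errors by more than their mean, can be sketched in a few lines. A toy stand-in detector, assuming hypothetical helper names and a simple z-score distance (not the authors' MUSE implementation):

```python
import numpy as np

def error_summaries(node_errors):
    """Multifaceted summaries (mean, std, max, min) of per-node
    reconstruction errors, used as graph-level features."""
    e = np.asarray(node_errors, dtype=float)
    return np.array([e.mean(), e.std(), e.max(), e.min()])

def anomaly_scores(train_errors, test_errors):
    """Score each test graph by its z-scored distance from the
    training distribution of summaries -- a stand-in detector."""
    train = np.stack([error_summaries(e) for e in train_errors])
    mu, sd = train.mean(axis=0), train.std(axis=0) + 1e-8
    return [float(np.abs((error_summaries(e) - mu) / sd).sum())
            for e in test_errors]

normal = [[0.10 + 0.001 * i, 0.12, 0.09, 0.11] for i in range(20)]
# Roughly the same *mean* error as a normal graph, but a very
# different spread -- exactly the case a mean-only score would miss.
odd = [0.0, 0.0, 0.215, 0.215]
scores = anomaly_scores(normal, [normal[0], odd])
print(scores[1] > scores[0])  # True: the spread-anomaly scores higher
```

The example makes the abstract's argument concrete: the std/max/min features separate the anomaly even though its mean error is unremarkable.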
46. Beyond Trivial Edges: A Fractional Approach to Cohesive Subgraph Detection in Hypergraphs
- Author
-
Kim, Hyewon, Shin, Woocheol, Kim, Dahee, Kim, Junghoon, Lim, Sungsu, and Jeong, Hyunji
- Subjects
Computer Science - Social and Information Networks - Abstract
Hypergraphs serve as a powerful tool for modeling complex relationships across domains like social networks, transactions, and recommendation systems. The (k,g)-core model effectively identifies cohesive subgraphs by assessing internal connections and co-occurrence patterns, but it is susceptible to inflated cohesiveness due to trivial hyperedges. To address this, we propose the (k,g,p)-core model, which incorporates the relative importance of hyperedges for more accurate subgraph detection. We develop both Naïve and Advanced pruning algorithms, demonstrating through extensive experiments that our approach reduces the execution frequency of costly operations by 51.9% on real-world datasets.
- Published
- 2024
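The pruning flavor of entry 46's abstract can be illustrated with a generic peeling loop. This is NOT the authors' (k,g,p)-core definition, only a loose, hypothetical sketch of importance-weighted pruning: a hyperedge counts only if a fraction p of its members survive, and nodes with support below k are peeled iteratively.

```python
def fractional_core(hyperedges, k, p):
    """Illustrative peeling: keep nodes supported by at least k
    hyperedges, where a hyperedge counts only while at least a
    fraction p of its members remain in the candidate core."""
    nodes = set().union(*hyperedges)
    changed = True
    while changed:
        changed = False
        # Discount "trivial" hyperedges whose surviving fraction < p.
        live = [e for e in hyperedges if len(e & nodes) / len(e) >= p]
        support = {v: sum(v in e for e in live) for v in nodes}
        drop = {v for v in nodes if support[v] < k}
        if drop:
            nodes -= drop
            changed = True
    return nodes

H = [frozenset({1, 2, 3}), frozenset({1, 2}),
     frozenset({1, 2, 4}), frozenset({5, 6})]
print(sorted(fractional_core(H, k=2, p=0.5)))  # -> [1, 2]
```

Nodes 1 and 2 co-occur in three hyperedges, while the rest appear only once each and are peeled away.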
47. ANOMIX: A Simple yet Effective Hard Negative Generation via Mixing for Graph Anomaly Detection
- Author
-
Kim, Hwan, Kim, Junghoon, and Lim, Sungsu
- Subjects
Computer Science - Machine Learning ,Computer Science - Artificial Intelligence - Abstract
Graph contrastive learning (GCL) generally requires a large number of samples. One effective way to reduce the number of samples is to use hard negatives (e.g., Mixup). However, designing a mixing-based approach for graph anomaly detection (GAD) can be difficult due to imbalanced data or the limited number of anomalies. We propose ANOMIX, a framework that consists of a novel graph mixing approach, ANOMIX-M, and multi-level contrasts for GAD. ANOMIX-M can effectively mix abnormality and normality from an input graph to generate hard negatives, which are important for efficient GCL. ANOMIX is (a) a first mixing approach: the first attempt at graph mixing to generate hard negatives for GAD, with node- and subgraph-level contrasts to distinguish underlying anomalies; (b) accurate: achieving the highest AUC, up to 5.49% higher, while running 1.76% faster; and (c) effective: reducing the number of samples needed in GCL by nearly 80%. Code is available at https://github.com/missinghwan/ANOMIX.
- Published
- 2024
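The mixing idea in entry 47's abstract can be sketched as a Mixup-style convex combination of node features from a normal and an anomalous (sub)graph. A minimal illustration, assuming aligned feature matrices and a hypothetical helper name (ANOMIX-M itself mixes graphs, not just feature rows):

```python
def anomix_style_mix(normal_feats, anomalous_feats, lam=0.5):
    """Convex combination of node features from a normal and an
    anomalous (sub)graph. The mixed sample lies between the two
    classes, which is what makes it a *hard* negative for GCL."""
    return [[lam * a + (1 - lam) * b for a, b in zip(row_n, row_a)]
            for row_n, row_a in zip(normal_feats, anomalous_feats)]

normal = [[0.0, 0.0], [0.0, 0.0]]
anomalous = [[1.0, 1.0], [1.0, 1.0]]
mixed = anomix_style_mix(normal, anomalous, lam=0.5)
print(mixed)  # -> [[0.5, 0.5], [0.5, 0.5]]
```

With `lam=0.5` the negative sits exactly between normality and abnormality, so a contrastive model must learn a finer decision boundary than random negatives would require.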
48. A Two-Week $IXPE$ Monitoring Campaign on Mrk 421
- Author
-
Maksym, W. Peter, Liodakis, Ioannis, Saade, M. Lynne, Kim, Dawoon E., Middei, Riccardo, Di Gesu, Laura, Kiehlmann, Sebastian, Matzeu, Gabriele, Agudo, Iván, Marscher, Alan P., Ehlert, Steven R., Jorstad, Svetlana G., Kaaret, Philip, Marshall, Herman L., Pacciani, Luigi, Perri, Matteo, Puccetti, Simonetta, Kouch, Pouya M., Lindfors, Elina, Aceituno, Francisco José, Bonnoli, Giacomo, Casanova, Víctor, Escudero, Juan, Agís-González, Beatriz, Husillos, César, Morcuende, Daniel, Otero-Santos, Jorge, Sota, Alfredo, Piirola, Vilppu, Imazawa, Ryo, Sasada, Mahito, Fukazawa, Yasushi, Kawabata, Koji S., Uemura, Makoto, Mizuno, Tsunefumi, Nakaoka, Tatsuya, Akitaya, Hiroshi, McCall, Callum, Jermak, Helen E., Steele, Iain A., Borman, George A., Grishina, Tatiana S., Hagen-Thorn, Vladimir A., Kopatskaya, Evgenia N., Larionova, Elena G., Morozova, Daria A., Savchenko, Sergey S., Shishkina, Ekaterina V., Troitskiy, Ivan S., Troitskaya, Yulia V., Vasilyev, Andrey A., Zhovtan, Alexey V., Myserlis, Ioannis, Gurwell, Mark, Keating, Garrett, Rao, Ramprasad, Pauley, Colt, Angelakis, Emmanouil, Kraus, Alexander, Berdyugin, Andrei V., Kagitani, Masato, Kravtsov, Vadim, Poutanen, Juri, Sakanoi, Takeshi, Kang, Sincheol, Lee, Sang-Sung, Kim, Sang-Hyun, Cheong, Whee Yeon, Jeong, Hyeon-Woo, Song, Chanwoo, Blinov, Dmitry, Shablovinskaya, Elena, Antonelli, Lucio Angelo, Bachetti, Matteo, Baldini, Luca, Baumgartner, Wayne H., Bellazzini, Ronaldo, Bianchi, Stefano, Bongiorno, Stephen D., Bonino, Raffaella, Brez, Alessandro, Bucciantini, Niccoló, Capitanio, Fiamma, Castellano, Simone, Cavazzuti, Elisabetta, Chen, Chien-Ting, Ciprini, Stefano, Costa, Enrico, De Rosa, Alessandra, Del Monte, Ettore, Di Lalla, Niccoló, Di Marco, Alessandro, Donnarumma, Immacolata, Doroshenko, Victor, Dovčiak, Michal, Enoto, Teruaki, Evangelista, Yuri, Fabiani, Sergio, Ferrazzoli, Riccardo, Garcia, Javier A., Gunji, Shuichi, Hayashida, Kiyoshi, Heyl, Jeremy, Iwakiri, Wataru, Karas, Vladimir, Kislat, Fabian, Kitaguchi, 
Takao, Kolodziejczak, Jeffery J., Krawczynski, Henric, La Monaca, Fabio, Latronico, Luca, Maldera, Simone, Manfreda, Alberto, Marin, Frédéric, Marinucci, Andrea, Massaro, Francesco, Matt, Giorgio, Mitsuishi, Ikuyuki, Muleri, Fabio, Negro, Michela, Ng, C. -Y., O'Dell, Stephen L., Omodei, Nicola, Oppedisano, Chiara, Papitto, Alessandro, Pavlov, George G., Peirson, Abel Lawrence, Pesce-Rollins, Melissa, Petrucci, Pierre-Olivier, Pilia, Maura, Possenti, Andrea, Ramsey, Brian D., Rankin, John, Ratheesh, Ajay, Roberts, Oliver J., Romani, Roger W., Sgró, Carmelo, Slane, Patrick, Soffitta, Paolo, Spandre, Gloria, Swartz, Douglas A., Tamagawa, Toru, Tavecchio, Fabrizio, Taverna, Roberto, Tawara, Yuzuru, Tennant, Allyn F., Thomas, Nicholas E., Tombesi, Francesco, Trois, Alessio, Tsygankov, Sergey S., Turolla, Roberto, Vink, Jacco, Weisskopf, Martin C., Wu, Kinwah, Xie, Fei, and Zane, Silvia
- Subjects
Astrophysics - High Energy Astrophysical Phenomena - Abstract
X-ray polarization is a unique new probe of the particle acceleration in astrophysical jets made possible through the Imaging X-ray Polarimetry Explorer. Here we report on the first dense X-ray polarization monitoring campaign on the blazar Mrk 421. Our observations were accompanied by an even denser radio and optical polarization campaign. We find significant short-timescale variability in both X-ray polarization degree and angle, including a $\sim90^\circ$ angle rotation about the jet axis. We attribute this to random variations of the magnetic field, consistent with the presence of turbulence but also unlikely to be explained by turbulence alone. At the same time, the degree of lower-energy polarization is significantly lower and shows no more than mild variability. Our campaign provides further evidence for a scenario in which energy-stratified shock-acceleration of relativistic electrons, combined with a turbulent magnetic field, is responsible for optical to X-ray synchrotron emission in blazar jets., Comment: 23 pages, including 8 pages of appendices. 12 figures, 3 tables. Submitted to ApJ
- Published
- 2024
49. GPT-4o System Card
- Author
-
OpenAI, Hurst, Aaron, Lerer, Adam, Goucher, Adam P., Perelman, Adam, Ramesh, Aditya, Clark, Aidan, Ostrow, AJ, Welihinda, Akila, Hayes, Alan, Radford, Alec, Mądry, Aleksander, Baker-Whitcomb, Alex, Beutel, Alex, Borzunov, Alex, Carney, Alex, Chow, Alex, Kirillov, Alex, Nichol, Alex, Paino, Alex, Renzin, Alex, Passos, Alex Tachard, Kirillov, Alexander, Christakis, Alexi, Conneau, Alexis, Kamali, Ali, Jabri, Allan, Moyer, Allison, Tam, Allison, Crookes, Amadou, Tootoochian, Amin, Tootoonchian, Amin, Kumar, Ananya, Vallone, Andrea, Karpathy, Andrej, Braunstein, Andrew, Cann, Andrew, Codispoti, Andrew, Galu, Andrew, Kondrich, Andrew, Tulloch, Andrew, Mishchenko, Andrey, Baek, Angela, Jiang, Angela, Pelisse, Antoine, Woodford, Antonia, Gosalia, Anuj, Dhar, Arka, Pantuliano, Ashley, Nayak, Avi, Oliver, Avital, Zoph, Barret, Ghorbani, Behrooz, Leimberger, Ben, Rossen, Ben, Sokolowsky, Ben, Wang, Ben, Zweig, Benjamin, Hoover, Beth, Samic, Blake, McGrew, Bob, Spero, Bobby, Giertler, Bogo, Cheng, Bowen, Lightcap, Brad, Walkin, Brandon, Quinn, Brendan, Guarraci, Brian, Hsu, Brian, Kellogg, Bright, Eastman, Brydon, Lugaresi, Camillo, Wainwright, Carroll, Bassin, Cary, Hudson, Cary, Chu, Casey, Nelson, Chad, Li, Chak, Shern, Chan Jun, Conger, Channing, Barette, Charlotte, Voss, Chelsea, Ding, Chen, Lu, Cheng, Zhang, Chong, Beaumont, Chris, Hallacy, Chris, Koch, Chris, Gibson, Christian, Kim, Christina, Choi, Christine, McLeavey, Christine, Hesse, Christopher, Fischer, Claudia, Winter, Clemens, Czarnecki, Coley, Jarvis, Colin, Wei, Colin, Koumouzelis, Constantin, Sherburn, Dane, Kappler, Daniel, Levin, Daniel, Levy, Daniel, Carr, David, Farhi, David, Mely, David, Robinson, David, Sasaki, David, Jin, Denny, Valladares, Dev, Tsipras, Dimitris, Li, Doug, Nguyen, Duc Phong, Findlay, Duncan, Oiwoh, Edede, Wong, Edmund, Asdar, Ehsan, Proehl, Elizabeth, Yang, Elizabeth, Antonow, Eric, Kramer, Eric, Peterson, Eric, Sigler, Eric, Wallace, Eric, Brevdo, Eugene, Mays, Evan, Khorasani, 
Farzad, Such, Felipe Petroski, Raso, Filippo, Zhang, Francis, von Lohmann, Fred, Sulit, Freddie, Goh, Gabriel, Oden, Gene, Salmon, Geoff, Starace, Giulio, Brockman, Greg, Salman, Hadi, Bao, Haiming, Hu, Haitang, Wong, Hannah, Wang, Haoyu, Schmidt, Heather, Whitney, Heather, Jun, Heewoo, Kirchner, Hendrik, Pinto, Henrique Ponde de Oliveira, Ren, Hongyu, Chang, Huiwen, Chung, Hyung Won, Kivlichan, Ian, O'Connell, Ian, Osband, Ian, Silber, Ian, Sohl, Ian, Okuyucu, Ibrahim, Lan, Ikai, Kostrikov, Ilya, Sutskever, Ilya, Kanitscheider, Ingmar, Gulrajani, Ishaan, Coxon, Jacob, Menick, Jacob, Pachocki, Jakub, Aung, James, Betker, James, Crooks, James, Lennon, James, Kiros, Jamie, Leike, Jan, Park, Jane, Kwon, Jason, Phang, Jason, Teplitz, Jason, Wei, Jason, Wolfe, Jason, Chen, Jay, Harris, Jeff, Varavva, Jenia, Lee, Jessica Gan, Shieh, Jessica, Lin, Ji, Yu, Jiahui, Weng, Jiayi, Tang, Jie, Yu, Jieqi, Jang, Joanne, Candela, Joaquin Quinonero, Beutler, Joe, Landers, Joe, Parish, Joel, Heidecke, Johannes, Schulman, John, Lachman, Jonathan, McKay, Jonathan, Uesato, Jonathan, Ward, Jonathan, Kim, Jong Wook, Huizinga, Joost, Sitkin, Jordan, Kraaijeveld, Jos, Gross, Josh, Kaplan, Josh, Snyder, Josh, Achiam, Joshua, Jiao, Joy, Lee, Joyce, Zhuang, Juntang, Harriman, Justyn, Fricke, Kai, Hayashi, Kai, Singhal, Karan, Shi, Katy, Karthik, Kavin, Wood, Kayla, Rimbach, Kendra, Hsu, Kenny, Nguyen, Kenny, Gu-Lemberg, Keren, Button, Kevin, Liu, Kevin, Howe, Kiel, Muthukumar, Krithika, Luther, Kyle, Ahmad, Lama, Kai, Larry, Itow, Lauren, Workman, Lauren, Pathak, Leher, Chen, Leo, Jing, Li, Guy, Lia, Fedus, Liam, Zhou, Liang, Mamitsuka, Lien, Weng, Lilian, McCallum, Lindsay, Held, Lindsey, Ouyang, Long, Feuvrier, Louis, Zhang, Lu, Kondraciuk, Lukas, Kaiser, Lukasz, Hewitt, Luke, Metz, Luke, Doshi, Lyric, Aflak, Mada, Simens, Maddie, Boyd, Madelaine, Thompson, Madeleine, Dukhan, Marat, Chen, Mark, Gray, Mark, Hudnall, Mark, Zhang, Marvin, Aljubeh, Marwan, Litwin, Mateusz, Zeng, Matthew, 
Johnson, Max, Shetty, Maya, Gupta, Mayank, Shah, Meghan, Yatbaz, Mehmet, Yang, Meng Jia, Zhong, Mengchao, Glaese, Mia, Chen, Mianna, Janner, Michael, Lampe, Michael, Petrov, Michael, Wu, Michael, Wang, Michele, Fradin, Michelle, Pokrass, Michelle, Castro, Miguel, de Castro, Miguel Oom Temudo, Pavlov, Mikhail, Brundage, Miles, Wang, Miles, Khan, Minal, Murati, Mira, Bavarian, Mo, Lin, Molly, Yesildal, Murat, Soto, Nacho, Gimelshein, Natalia, Cone, Natalie, Staudacher, Natalie, Summers, Natalie, LaFontaine, Natan, Chowdhury, Neil, Ryder, Nick, Stathas, Nick, Turley, Nick, Tezak, Nik, Felix, Niko, Kudige, Nithanth, Keskar, Nitish, Deutsch, Noah, Bundick, Noel, Puckett, Nora, Nachum, Ofir, Okelola, Ola, Boiko, Oleg, Murk, Oleg, Jaffe, Oliver, Watkins, Olivia, Godement, Olivier, Campbell-Moore, Owen, Chao, Patrick, McMillan, Paul, Belov, Pavel, Su, Peng, Bak, Peter, Bakkum, Peter, Deng, Peter, Dolan, Peter, Hoeschele, Peter, Welinder, Peter, Tillet, Phil, Pronin, Philip, Tillet, Philippe, Dhariwal, Prafulla, Yuan, Qiming, Dias, Rachel, Lim, Rachel, Arora, Rahul, Troll, Rajan, Lin, Randall, Lopes, Rapha Gontijo, Puri, Raul, Miyara, Reah, Leike, Reimar, Gaubert, Renaud, Zamani, Reza, Wang, Ricky, Donnelly, Rob, Honsby, Rob, Smith, Rocky, Sahai, Rohan, Ramchandani, Rohit, Huet, Romain, Carmichael, Rory, Zellers, Rowan, Chen, Roy, Chen, Ruby, Nigmatullin, Ruslan, Cheu, Ryan, Jain, Saachi, Altman, Sam, Schoenholz, Sam, Toizer, Sam, Miserendino, Samuel, Agarwal, Sandhini, Culver, Sara, Ethersmith, Scott, Gray, Scott, Grove, Sean, Metzger, Sean, Hermani, Shamez, Jain, Shantanu, Zhao, Shengjia, Wu, Sherwin, Jomoto, Shino, Wu, Shirong, Shuaiqi, Xia, Phene, Sonia, Papay, Spencer, Narayanan, Srinivas, Coffey, Steve, Lee, Steve, Hall, Stewart, Balaji, Suchir, Broda, Tal, Stramer, Tal, Xu, Tao, Gogineni, Tarun, Christianson, Taya, Sanders, Ted, Patwardhan, Tejal, Cunninghman, Thomas, Degry, Thomas, Dimson, Thomas, Raoux, Thomas, Shadwell, Thomas, Zheng, Tianhao, Underwood, Todd, 
Markov, Todor, Sherbakov, Toki, Rubin, Tom, Stasi, Tom, Kaftan, Tomer, Heywood, Tristan, Peterson, Troy, Walters, Tyce, Eloundou, Tyna, Qi, Valerie, Moeller, Veit, Monaco, Vinnie, Kuo, Vishal, Fomenko, Vlad, Chang, Wayne, Zheng, Weiyi, Zhou, Wenda, Manassra, Wesam, Sheu, Will, Zaremba, Wojciech, Patil, Yash, Qian, Yilei, Kim, Yongjik, Cheng, Youlong, Zhang, Yu, He, Yuchen, Zhang, Yuchen, Jin, Yujia, Dai, Yunxing, and Malkov, Yury
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence ,Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Computers and Society ,Computer Science - Machine Learning ,Computer Science - Sound ,Electrical Engineering and Systems Science - Audio and Speech Processing - Abstract
GPT-4o is an autoregressive omni model that accepts as input any combination of text, audio, image, and video, and generates any combination of text, audio, and image outputs. It's trained end-to-end across text, vision, and audio, meaning all inputs and outputs are processed by the same neural network. GPT-4o can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially better at vision and audio understanding compared to existing models. In line with our commitment to building AI safely and consistent with our voluntary commitments to the White House, we are sharing the GPT-4o System Card, which includes our Preparedness Framework evaluations. In this System Card, we provide a detailed look at GPT-4o's capabilities, limitations, and safety evaluations across multiple categories, focusing on speech-to-speech while also evaluating text and image capabilities, and measures we've implemented to ensure the model is safe and aligned. We also include third-party assessments on dangerous capabilities, as well as discussion of potential societal impacts of GPT-4o's text and vision capabilities.
- Published
- 2024
50. Adversarial Environment Design via Regret-Guided Diffusion Models
- Author
-
Chung, Hojun, Lee, Junseo, Kim, Minsoo, Kim, Dohyeong, and Oh, Songhwai
- Subjects
Computer Science - Machine Learning ,Computer Science - Artificial Intelligence - Abstract
Training agents that are robust to environmental changes remains a significant challenge in deep reinforcement learning (RL). Unsupervised environment design (UED) has recently emerged to address this issue by generating a set of training environments tailored to the agent's capabilities. While prior works demonstrate that UED has the potential to learn a robust policy, their performance is constrained by the capabilities of the environment generator. To this end, we propose a novel UED algorithm, adversarial environment design via regret-guided diffusion models (ADD). The proposed method guides the diffusion-based environment generator with the regret of the agent to produce environments that the agent finds challenging but conducive to further improvement. By exploiting the representation power of diffusion models, ADD can directly generate adversarial environments while maintaining the diversity of training environments, enabling the agent to effectively learn a robust policy. Our experimental results demonstrate that the proposed method successfully generates an instructive curriculum of environments, outperforming UED baselines in zero-shot generalization across novel, out-of-distribution environments. Project page: https://github.com/rllab-snu.github.io/projects/ADD, Comment: 38th Conference on Neural Information Processing Systems
- Published
- 2024
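The regret-guided principle in entry 50's abstract can be caricatured with selection rather than gradient guidance: sample candidate environments from a generator and keep those with the highest estimated regret. Everything here is a toy assumption (the uniform sampler stands in for the diffusion model, and `toy_regret` is a crude proxy, not the paper's regret estimator):

```python
import random

def toy_regret(env_difficulty, agent_skill):
    """Crude proxy: regret = optimal return (assumed 1.0) minus the
    agent's return, which decays with the skill/difficulty gap."""
    agent_return = max(0.0, 1.0 - abs(env_difficulty - agent_skill))
    return 1.0 - agent_return

def regret_guided_batch(agent_skill, num_candidates=200, batch=5, seed=0):
    """Draw candidate environments (a seeded uniform sampler standing
    in for the diffusion generator) and keep the highest-regret ones."""
    rng = random.Random(seed)
    candidates = [rng.random() for _ in range(num_candidates)]
    return sorted(candidates,
                  key=lambda e: toy_regret(e, agent_skill),
                  reverse=True)[:batch]

envs = regret_guided_batch(agent_skill=0.2)
# Selected environments are the ones the agent handles worst.
print(all(toy_regret(e, 0.2) >= 0.5 for e in envs))
```

ADD itself avoids this sample-and-filter loop by injecting the regret signal directly into the diffusion sampling process, which is what preserves generator diversity.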