46,598 results for "WANG, LU"
Search Results
2. Assessing Bear/Cub/Otter identity and history of cardiovascular disease among gay, bisexual, and other men who have sex with men in Metro Vancouver
- Author
- Sang, Jordan M., Greatheart, Marcus, Wang, Lu, Barath, Justin, Lal, Allan, Card, Kiffer G., Blackwell, Everett, Lachowsky, Nathan J., Roth, Eric A., Hogg, Robert S., and Moore, David M.
- Published
- 2021
3. Expanding Chatbot Knowledge in Customer Service: Context-Aware Similar Question Generation Using Large Language Models
- Author
- Hong, Mengze, Song, Yuanfeng, Jiang, Di, Wang, Lu, Guo, Zichang, and Zhang, Chen Jason
- Subjects
- Computer Science - Computation and Language
- Abstract
Reliable responses of service chatbots are often achieved by employing retrieval-based methods that restrict answers to a knowledge base comprising predefined question-answer pairs (QA pairs). To accommodate potential variations in how a customer's query may be expressed, the favored solution is to augment these QA pairs with similar questions that are diverse while remaining semantically consistent. This augmentation task is known as Similar Question Generation (SQG). Traditional methods that rely heavily on human effort or rule-based techniques suffer from limited diversity or significant semantic deviation from the source question, and are capable of producing only a finite number of useful questions. To address these limitations, we propose an SQG approach based on Large Language Models (LLMs), capable of producing a substantial number of diverse questions while maintaining semantic consistency with the source QA pair. This is achieved by leveraging LLMs' natural language understanding capability through fine-tuning with specially designed prompts. Experiments conducted on a real customer-service dataset demonstrate that our method surpasses baseline methods by a significant margin in terms of semantic diversity. Human evaluation further confirms that integrating the answer that reflects the customer's intention is crucial for increasing the number of generated questions that meet business requirements.
- Published
- 2024
4. Closing the Loop: Learning to Generate Writing Feedback via Language Model Simulated Student Revisions
- Author
- Nair, Inderjeet, Tan, Jiaye, Su, Xiaotian, Gere, Anne, Wang, Xu, and Wang, Lu
- Subjects
- Computer Science - Computation and Language
- Computer Science - Artificial Intelligence
- Computer Science - Machine Learning
- Abstract
Providing feedback is widely recognized as crucial for refining students' writing skills. Recent advances in language models (LMs) have made it possible to automatically generate feedback that is actionable and well-aligned with human-specified attributes. However, it remains unclear whether the feedback generated by these models is truly effective in enhancing the quality of student revisions. Moreover, prompting LMs with a precise set of instructions to generate feedback is nontrivial due to the lack of consensus regarding the specific attributes that can lead to improved revising performance. To address these challenges, we propose PROF that PROduces Feedback via learning from LM simulated student revisions. PROF aims to iteratively optimize the feedback generator by directly maximizing the effectiveness of students' overall revising performance as simulated by LMs. Focusing on an economic essay assignment, we empirically test the efficacy of PROF and observe that our approach not only surpasses a variety of baseline methods in effectiveness of improving students' writing but also demonstrates enhanced pedagogical values, even though it was not explicitly trained for this aspect., Comment: Accepted to EMNLP 2024
- Published
- 2024
5. Narrative-of-Thought: Improving Temporal Reasoning of Large Language Models via Recounted Narratives
- Author
- Zhang, Xinliang Frederick, Beauchamp, Nick, and Wang, Lu
- Subjects
- Computer Science - Computation and Language
- Computer Science - Artificial Intelligence
- Abstract
Reasoning about time and temporal relations is an integral aspect of human cognition, essential for perceiving the world and navigating our experiences. Though large language models (LLMs) have demonstrated impressive performance in many reasoning tasks, temporal reasoning remains challenging due to its intrinsic complexity. In this work, we first study an essential task of temporal reasoning -- temporal graph generation, to unveil LLMs' inherent, global reasoning capabilities. We show that this task presents great challenges even for the most powerful LLMs, such as GPT-3.5/4. We also notice a significant performance gap for small models (<10B), which lag behind LLMs by 50%. Next, we study how to close this gap with a budget constraint, e.g., not using model finetuning. We propose a new prompting technique tailored for temporal reasoning, Narrative-of-Thought (NoT), that first converts the event set into a Python class, then prompts a small model to generate a temporally grounded narrative, guiding the final generation of a temporal graph. Extensive experiments showcase the efficacy of NoT in improving various metrics. Notably, NoT attains the highest F1 on the Schema-11 evaluation set, while securing an overall F1 on par with GPT-3.5. NoT also achieves the best structural similarity across the board, even compared with GPT-3.5/4. Our code is available at https://github.com/launchnlp/NoT., Comment: EMNLP'24 Findings
- Published
- 2024
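The abstract's "events as a Python class" encoding is not spelled out; below is a minimal sketch of what such a structured event representation, plus a temporal-graph read-out, might look like. The class name, fields, and the three toy events are all hypothetical illustrations, not NoT's actual prompt format.

```python
from dataclasses import dataclass, field
from graphlib import TopologicalSorter

@dataclass
class EventSet:
    # Hypothetical schema: a list of events plus "happens-before" constraints.
    events: list
    before: dict = field(default_factory=dict)  # later event -> {earlier events}

    def add_before(self, earlier, later):
        self.before.setdefault(later, set()).add(earlier)

    def temporal_order(self):
        # One linearization of the temporal graph implied by the constraints.
        return list(TopologicalSorter(self.before).static_order())

g = EventSet(["bake", "mix", "buy_flour"])
g.add_before("buy_flour", "mix")
g.add_before("mix", "bake")
order = g.temporal_order()  # earlier events come first
```

A topological sort over the `before` relation is one simple way to turn pairwise temporal constraints into a grounded ordering.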
6. Scalable Fine-tuning from Multiple Data Sources: A First-Order Approximation Approach
- Author
- Li, Dongyue, Zhang, Ziniu, Wang, Lu, and Zhang, Hongyang R.
- Subjects
- Computer Science - Computation and Language
- Computer Science - Machine Learning
- Abstract
We study the problem of fine-tuning a language model (LM) for a target task by optimally using the information from $n$ auxiliary tasks. This problem has broad applications in NLP, such as targeted instruction tuning and data selection in chain-of-thought fine-tuning. The key challenge of this problem is that not all auxiliary tasks are useful to improve the performance of the target task. Thus, choosing the right subset of auxiliary tasks is crucial. Conventional subset selection methods, such as forward & backward selection, are unsuitable for LM fine-tuning because they require repeated training on subsets of auxiliary tasks. This paper introduces a new algorithm to estimate model fine-tuning performances without repeated training. Our algorithm first performs multitask training using the data of all the tasks to obtain a meta initialization. Then, we approximate the model fine-tuning loss of a subset using functional values and gradients from the meta initialization. Empirically, we find that this gradient-based approximation holds with remarkable accuracy for twelve transformer-based LMs. Thus, we can now estimate fine-tuning performances on CPUs within a few seconds. We conduct extensive experiments to validate our approach, delivering a speedup of $30\times$ over conventional subset selection while incurring only $1\%$ error of the true fine-tuning performances. In downstream evaluations of instruction tuning and chain-of-thought fine-tuning, our approach improves over prior methods that utilize gradient or representation similarity for subset selection by up to $3.8\%$., Comment: 16 pages
- Published
- 2024
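The core trick described above, estimating post-fine-tuning loss from only function values and gradients at a shared meta initialization, can be illustrated on toy 1-D quadratic task losses. Everything here (the task optima, the single-gradient-step notion of "fine-tuning", the step size) is illustrative, not the paper's algorithm:

```python
import itertools

# Toy task losses L_i(w) = (w - t_i)^2 with hypothetical optima t_i.
tasks = [0.5, 1.0, -0.3, 0.8]
w_meta = sum(tasks) / len(tasks)   # stand-in for the multitask meta initialization
eta = 0.1                          # one gradient step of "fine-tuning"

def subset_loss_and_grad(subset, w):
    L = sum((w - t) ** 2 for t in subset) / len(subset)
    g = sum(2 * (w - t) for t in subset) / len(subset)
    return L, g

def true_finetuned_loss(subset, w):
    # Actually take the gradient step, then re-evaluate (the expensive path).
    _, g = subset_loss_and_grad(subset, w)
    L2, _ = subset_loss_and_grad(subset, w - eta * g)
    return L2

def first_order_estimate(subset, w):
    # L(w - eta*g) ~= L(w) - eta * ||g||^2, using only quantities at w_meta.
    L, g = subset_loss_and_grad(subset, w)
    return L - eta * g * g

errors = [abs(first_order_estimate(s, w_meta) - true_finetuned_loss(s, w_meta))
          for r in (1, 2, 3) for s in itertools.combinations(tasks, r)]
```

Because each true loss is quadratic, the first-order estimate is off by exactly eta^2 * g^2 per subset, which stays small for a small step size; this is the spirit of replacing repeated training with cheap gradient arithmetic at the meta initialization.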
7. Turn Every Application into an Agent: Towards Efficient Human-Agent-Computer Interaction with API-First LLM-Based Agents
- Author
- Lu, Junting, Zhang, Zhiyang, Yang, Fangkai, Zhang, Jue, Wang, Lu, Du, Chao, Lin, Qingwei, Rajmohan, Saravan, Zhang, Dongmei, and Zhang, Qi
- Subjects
- Computer Science - Artificial Intelligence
- Abstract
Multimodal large language models (MLLMs) have enabled LLM-based agents to directly interact with application user interfaces (UIs), enhancing agents' performance in complex tasks. However, these agents often suffer from high latency and low reliability due to the extensive sequential UI interactions. To address this issue, we propose AXIS, a novel LLM-based agent framework that prioritizes actions through application programming interfaces (APIs) over UI actions. This framework also facilitates the creation and expansion of APIs through automated exploration of applications. Our experiments on Office Word demonstrate that AXIS reduces task completion time by 65%-70% and cognitive workload by 38%-53%, while maintaining accuracy of 97%-98% compared to humans. Our work contributes a new human-agent-computer interaction (HACI) framework and a fresh UI design principle for application providers in the era of LLMs. It also explores the possibility of turning every application into an agent, paving the way towards an agent-centric operating system (Agent OS).
- Published
- 2024
8. Attack End-to-End Autonomous Driving through Module-Wise Noise
- Author
- Wang, Lu, Zhang, Tianyuan, Han, Yikai, Fang, Muyang, Jin, Ting, and Kang, Jiaqi
- Subjects
- Computer Science - Machine Learning
- Computer Science - Artificial Intelligence
- Abstract
With recent breakthroughs in deep neural networks, numerous tasks within autonomous driving have exhibited remarkable performance. However, deep learning models are susceptible to adversarial attacks, presenting significant security risks to autonomous driving systems. Presently, end-to-end architectures have emerged as the predominant solution for autonomous driving, owing to their collaborative nature across different tasks. Yet, the implications of adversarial attacks on such models remain relatively unexplored. In this paper, we conduct comprehensive adversarial security research on the modular end-to-end autonomous driving model for the first time. We thoroughly consider the potential vulnerabilities in the model inference process and design a universal attack scheme through module-wise noise injection. We conduct large-scale experiments on the full-stack autonomous driving model and demonstrate that our attack method outperforms previous attack methods. We trust that our research will offer fresh insights into ensuring the safety and reliability of autonomous driving systems.
- Published
- 2024
9. Module-wise Adaptive Adversarial Training for End-to-end Autonomous Driving
- Author
- Zhang, Tianyuan, Wang, Lu, Kang, Jiaqi, Zhang, Xinwei, Liang, Siyuan, Chen, Yuwei, Liu, Aishan, and Liu, Xianglong
- Subjects
- Computer Science - Computer Vision and Pattern Recognition
- Computer Science - Artificial Intelligence
- Abstract
Recent advances in deep learning have markedly improved autonomous driving (AD) models, particularly end-to-end systems that integrate perception, prediction, and planning stages, achieving state-of-the-art performance. However, these models remain vulnerable to adversarial attacks, where human-imperceptible perturbations can disrupt decision-making processes. While adversarial training is an effective method for enhancing model robustness against such attacks, no prior studies have focused on its application to end-to-end AD models. In this paper, we take the first step in adversarial training for end-to-end AD models and present a novel Module-wise Adaptive Adversarial Training (MA2T). However, extending conventional adversarial training to this context is highly non-trivial, as different stages within the model have distinct objectives and are strongly interconnected. To address these challenges, MA2T first introduces Module-wise Noise Injection, which injects noise before the input of different modules, targeting training models with the guidance of overall objectives rather than each independent module loss. Additionally, we introduce Dynamic Weight Accumulation Adaptation, which incorporates accumulated weight changes to adaptively learn and adjust the loss weights of each module based on their contributions (accumulated reduction rates) for better balance and robust training. To demonstrate the efficacy of our defense, we conduct extensive experiments on the widely-used nuScenes dataset across several end-to-end AD models under both white-box and black-box attacks, where our method outperforms other baselines by large margins (+5-10%). Moreover, we validate the robustness of our defense through closed-loop evaluation in the CARLA simulation environment, showing improved resilience even against natural corruption., Comment: 14 pages
- Published
- 2024
10. Scaling Law with Learning Rate Annealing
- Author
- Tissue, Howe, Wang, Venus, and Wang, Lu
- Subjects
- Computer Science - Computation and Language
- Computer Science - Artificial Intelligence
- Computer Science - Machine Learning
- Abstract
We find that the cross-entropy loss curves of neural language models empirically adhere to a scaling law with learning rate (LR) annealing over training steps: $$L(s) = L_0 + A\cdot S_1^{-\alpha} - C\cdot S_2,$$ where $L(s)$ is the validation loss at step $s$, $S_1$ is the area under the LR curve, $S_2$ is the LR annealing area, and $L_0$, $A$, $C$, $\alpha$ are constant parameters. This formulation takes into account two factors: (1) power-law scaling over data size, and (2) the additional loss reduction during LR annealing. Therefore, this formulation can describe the full loss curve at each step, rather than the single loss point at the end of training. Applying the scaling law with LR annealing and fitting only one or two training curves, we can accurately predict the loss at any given step across any learning rate scheduler (LRS). This approach significantly reduces computational cost in formulating scaling laws while providing more accuracy and expressiveness for training dynamics. Extensive experiments demonstrate that our findings hold across a range of hyper-parameters and model architectures, and our equation can extend to the scaling effect of model sizes. Moreover, our formulation provides accurate theoretical verification and explanation for empirical results observed in numerous previous studies, particularly those focusing on LR schedule and annealing. We believe that this work promises to enhance the understanding of LLM training dynamics while greatly democratizing scaling laws, and it can guide researchers in refining training strategies (e.g., critical LRS) for further LLMs., Comment: Add more experiments to consolidate our scaling laws. 29 pages, 29 figures
- Published
- 2024
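Under one simple reading of the abstract's terms, where S1 is the cumulative LR sum and S2 the cumulative LR decrease (the paper's exact S2 involves a decayed sum, so this is a simplification), the equation can be traced along a schedule. The constants below are illustrative, not fitted values:

```python
def loss_curve(lrs, L0=2.0, A=1.0, C=200.0, alpha=0.5):
    """Trace L(s) = L0 + A*S1^-alpha - C*S2 along a learning-rate schedule."""
    losses, S1, S2, prev = [], 0.0, 0.0, None
    for lr in lrs:
        S1 += lr                       # area under the LR curve so far
        if prev is not None and lr < prev:
            S2 += prev - lr            # simplified "annealing area"
        prev = lr
        losses.append(L0 + A * S1 ** (-alpha) - C * S2)
    return losses

# Constant-LR warm phase, then linear annealing to zero.
schedule = [1e-3] * 1000 + [1e-3 * (1 - i / 200) for i in range(1, 201)]
losses = loss_curve(schedule)
```

The trace shows the two effects the formula separates: a power-law decline while the LR is constant (S1 grows, S2 stays zero), then an extra drop once annealing begins (S2 grows).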
11. Multi-Scale Representation Learning for Image Restoration with State-Space Model
- Author
- He, Yuhong, Peng, Long, Yi, Qiaosi, Wu, Chen, and Wang, Lu
- Subjects
- Computer Science - Computer Vision and Pattern Recognition
- Abstract
Image restoration endeavors to reconstruct a high-quality, detail-rich image from a degraded counterpart, which is a pivotal process in photography and various computer vision systems. In real-world scenarios, different types of degradation can cause the loss of image details at various scales and degrade image contrast. Existing methods predominantly rely on CNN and Transformer to capture multi-scale representations. However, these methods are often limited by the high computational complexity of Transformers and the constrained receptive field of CNN, which hinder them from achieving superior performance and efficiency in image restoration. To address these challenges, we propose MS-Mamba, a novel Multi-Scale State-Space Model-based approach for efficient image restoration that enhances the capacity for multi-scale representation learning through our proposed global and regional SSM modules. Additionally, an Adaptive Gradient Block (AGB) and a Residual Fourier Block (RFB) are proposed to improve the network's detail extraction capabilities by capturing gradients in various directions and facilitating learning details in the frequency domain. Extensive experiments on nine public benchmarks across four classic image restoration tasks: image deraining, dehazing, denoising, and low-light enhancement, demonstrate that our proposed method achieves new state-of-the-art performance while maintaining low computational complexity. The source code will be publicly available.
- Published
- 2024
12. Optimizing NOMA Transmissions to Advance Federated Learning in Vehicular Networks
- Author
- Chen, Ziru, Ni, Zhou, Guan, Peiyuan, Wang, Lu, Cai, Lin X., Hashemi, Morteza, and Li, Zongzhi
- Subjects
- Computer Science - Networking and Internet Architecture
- Electrical Engineering and Systems Science - Signal Processing
- Abstract
Diverse critical data, such as location information and driving patterns, can be collected by IoT devices in vehicular networks to improve driving experiences and road safety. However, drivers are often reluctant to share their data due to privacy concerns. The Federated Vehicular Network (FVN) is a promising technology that tackles these concerns by transmitting model parameters instead of raw data, thereby protecting the privacy of drivers. Nevertheless, the performance of Federated Learning (FL) in a vehicular network depends on the joining ratio, which is restricted by the limited available wireless resources. To address these challenges, this paper proposes to apply Non-Orthogonal Multiple Access (NOMA) to improve the joining ratio in an FVN. Specifically, a vehicle selection and transmission power control algorithm is developed to exploit the power domain differences in the received signal to ensure the maximum number of vehicles capable of joining the FVN. Our simulation results demonstrate that the proposed NOMA-based strategy increases the joining ratio and significantly enhances the performance of the FVN., Comment: The paper is accepted by IEEE Globecom 2024
- Published
- 2024
13. Gaussian Mixture based Evidential Learning for Stereo Matching
- Author
- Liu, Weide, Wang, Xingxing, Wang, Lu, Cheng, Jun, Liu, Fayao, and Yang, Xulei
- Subjects
- Computer Science - Computer Vision and Pattern Recognition
- Abstract
In this paper, we introduce a novel Gaussian mixture based evidential learning solution for robust stereo matching. Diverging from previous evidential deep learning approaches that rely on a single Gaussian distribution, our framework posits that individual image data adheres to a mixture-of-Gaussian distribution in stereo matching. This assumption yields more precise pixel-level predictions and more accurately mirrors the real-world image distribution. By further employing the inverse-Gamma distribution as an intermediary prior for each mixture component, our probabilistic model achieves improved depth estimation compared to its counterpart with the single Gaussian and effectively captures the model uncertainty, which enables a strong cross-domain generalization ability. We evaluated our method for stereo matching by training the model using the Scene Flow dataset and testing it on KITTI 2015 and Middlebury 2014. The experimental results consistently show that our method brings improvements over the baseline methods in a trustworthy manner. Notably, our approach achieved new state-of-the-art results on both the in-domain validated data and the cross-domain datasets, demonstrating its effectiveness and robustness in stereo matching tasks.
- Published
- 2024
14. Can LLMs 'Reason' in Music? An Evaluation of LLMs' Capability of Music Understanding and Generation
- Author
- Zhou, Ziya, Wu, Yuhang, Wu, Zhiyue, Zhang, Xinyue, Yuan, Ruibin, Ma, Yinghao, Wang, Lu, Benetos, Emmanouil, Xue, Wei, and Guo, Yike
- Subjects
- Computer Science - Sound
- Computer Science - Computation and Language
- Computer Science - Multimedia
- Electrical Engineering and Systems Science - Audio and Speech Processing
- Abstract
Symbolic Music, akin to language, can be encoded in discrete symbols. Recent research has extended the application of large language models (LLMs) such as GPT-4 and Llama2 to the symbolic music domain including understanding and generation. Yet scant research explores the details of how these LLMs perform on advanced music understanding and conditioned generation, especially from the multi-step reasoning perspective, which is a critical aspect in the conditioned, editable, and interactive human-computer co-creation process. This study conducts a thorough investigation of LLMs' capability and limitations in symbolic music processing. We identify that current LLMs exhibit poor performance in song-level multi-step music reasoning, and typically fail to leverage learned music knowledge when addressing complex musical tasks. An analysis of LLMs' responses highlights distinctly their pros and cons. Our findings suggest achieving advanced musical capability is not intrinsically obtained by LLMs, and future research should focus more on bridging the gap between music knowledge and reasoning, to improve the co-creation experience for musicians., Comment: Accepted by ISMIR2024
- Published
- 2024
15. Micro-Expression Recognition by Motion Feature Extraction based on Pre-training
- Author
- Li, Ruolin, Wang, Lu, Yang, Tingting, Xu, Lisheng, Ma, Bingyang, Li, Yongchun, and Wei, Hongchao
- Subjects
- Computer Science - Computer Vision and Pattern Recognition
- Abstract
Micro-expressions (MEs) are spontaneous, unconscious facial expressions that have promising applications in various fields such as psychotherapy and national security. Thus, micro-expression recognition (MER) has attracted more and more attention from researchers. Although various MER methods have emerged especially with the development of deep learning techniques, the task still faces several challenges, e.g. subtle motion and limited training data. To address these problems, we propose a novel motion extraction strategy (MoExt) for the MER task and use additional macro-expression data in the pre-training process. We primarily pretrain the feature separator and motion extractor using the contrastive loss, thus enabling them to extract representative motion features. In MoExt, shape features and texture features are first extracted separately from onset and apex frames, and then motion features related to MEs are extracted based on the shape features of both frames. To enable the model to more effectively separate features, we utilize the extracted motion features and the texture features from the onset frame to reconstruct the apex frame. Through pre-training, the module is enabled to extract inter-frame motion features of facial expressions while excluding irrelevant information. The feature separator and motion extractor are ultimately integrated into the MER network, which is then fine-tuned using the target ME data. The effectiveness of the proposed method is validated on commonly used datasets, i.e., CASME II, SMIC, SAMM, and CAS(ME)3. The results show that our method performs favorably against state-of-the-art methods.
- Published
- 2024
16. AutoRAG-HP: Automatic Online Hyper-Parameter Tuning for Retrieval-Augmented Generation
- Author
- Fu, Jia, Qin, Xiaoting, Yang, Fangkai, Wang, Lu, Zhang, Jue, Lin, Qingwei, Chen, Yubo, Zhang, Dongmei, Rajmohan, Saravan, and Zhang, Qi
- Subjects
- Computer Science - Computation and Language
- Computer Science - Artificial Intelligence
- Abstract
Recent advancements in Large Language Models have transformed ML/AI development, necessitating a reevaluation of AutoML principles for Retrieval-Augmented Generation (RAG) systems. To address the challenges of hyper-parameter optimization and online adaptation in RAG, we propose the AutoRAG-HP framework, which formulates hyper-parameter tuning as an online multi-armed bandit (MAB) problem and introduces a novel two-level Hierarchical MAB (Hier-MAB) method for efficient exploration of large search spaces. We conduct extensive experiments on tuning hyper-parameters, such as top-k retrieved documents, prompt compression ratio, and embedding methods, using the ALCE-ASQA and Natural Questions datasets. Our evaluation of jointly optimizing all three hyper-parameters demonstrates that MAB-based online learning methods can achieve Recall@5 $\approx 0.8$ for scenarios with prominent gradients in search space, using only $\sim20\%$ of the LLM API calls required by the Grid Search approach. Additionally, the proposed Hier-MAB approach outperforms other baselines in more challenging optimization scenarios. The code will be made available at https://aka.ms/autorag.
- Published
- 2024
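The MAB view of hyper-parameter tuning can be sketched with plain UCB1 over a toy hyper-parameter grid. The arm reward probabilities and the single-level bandit are our simplifications; the paper's Hier-MAB adds a second level over parameter groups:

```python
import math, random

def ucb1(arms, pulls=2000, seed=0):
    """Minimal UCB1: each arm is a candidate hyper-parameter setting whose
    pull yields a Bernoulli reward (e.g. whether Recall@5 cleared a bar)."""
    rng = random.Random(seed)
    counts = [0] * len(arms)
    values = [0.0] * len(arms)   # running mean reward per arm
    for t in range(1, pulls + 1):
        if t <= len(arms):
            i = t - 1            # pull each arm once to initialize
        else:
            i = max(range(len(arms)),
                    key=lambda a: values[a] + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < arms[i] else 0.0
        counts[i] += 1
        values[i] += (reward - values[i]) / counts[i]
    return counts, values

# Hypothetical success probabilities for four top-k settings.
counts, values = ucb1([0.2, 0.5, 0.9, 0.3])
```

UCB1 concentrates pulls on the best arm while still occasionally exploring, which is the budget argument the abstract makes against grid search.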
17. A microwave photonic prototype for concurrent radar detection and spectrum sensing over an 8 to 40 GHz bandwidth
- Author
- Shi, Taixia, Liang, Dingding, Wang, Lu, Li, Lin, Guo, Shaogang, Gao, Jiawei, Li, Xiaowei, Lin, Chulun, Shi, Lei, Ding, Baogang, Liu, Shiyang, Yang, Fangyi, Jiang, Chi, and Chen, Yang
- Subjects
- Physics - Optics
- Electrical Engineering and Systems Science - Signal Processing
- Abstract
In this work, a microwave photonic prototype for concurrent radar detection and spectrum sensing is proposed, designed, built, and investigated. A direct digital synthesizer and an analog electronic circuit are integrated to generate an intermediate frequency (IF) linearly frequency-modulated (LFM) signal with a tunable center frequency from 2.5 to 9.5 GHz and an instantaneous bandwidth of 1 GHz. The IF LFM signal is converted to the optical domain via an intensity modulator and then filtered by a fiber Bragg grating (FBG) to generate only two 2nd-order optical LFM sidebands. In radar detection, the two optical LFM sidebands beat with each other to generate a frequency-and-bandwidth-quadrupled LFM signal, which is used for ranging, radial velocity measurement, and imaging. By changing the center frequency of the IF LFM signal, the radar function can be operated within 8 to 40 GHz. In spectrum sensing, one 2nd-order optical LFM sideband is selected by another FBG, which then works in conjunction with the stimulated Brillouin scattering gain spectrum to map the frequency of the signal under test to time with an instantaneous measurement bandwidth of 2 GHz. By using a frequency shift module to adjust the pump frequency, the frequency measurement range can be adjusted from 0 to 40 GHz. The prototype is comprehensively studied and tested, which is capable of achieving a range resolution of 3.75 cm, a range error of less than $\pm$ 2 cm, a radial velocity error within $\pm$ 1 cm/s, delivering clear imaging of multiple small targets, and maintaining a frequency measurement error of less than $\pm$ 7 MHz and a frequency resolution of better than 20 MHz., Comment: 18 pages, 12 figures, 1 table
- Published
- 2024
18. Thread: A Logic-Based Data Organization Paradigm for How-To Question Answering with Retrieval Augmented Generation
- Author
- An, Kaikai, Yang, Fangkai, Li, Liqun, Lu, Junting, Cheng, Sitao, Si, Shuzheng, Wang, Lu, Zhao, Pu, Cao, Lele, Lin, Qingwei, Rajmohan, Saravan, Zhang, Dongmei, Zhang, Qi, and Chang, Baobao
- Subjects
- Computer Science - Artificial Intelligence
- Abstract
Recent advances in retrieval-augmented generation have significantly improved the performance of question-answering systems, particularly on factoid '5Ws' questions. However, these systems still face substantial challenges when addressing '1H' questions, specifically how-to questions, which are integral to decision-making processes and require dynamic, step-by-step answers. The key limitation lies in the prevalent data organization paradigm, chunk, which divides documents into fixed-size segments, and disrupts the logical coherence and connections within the context. To overcome this, in this paper, we propose Thread, a novel data organization paradigm aimed at enabling current systems to handle how-to questions more effectively. Specifically, we introduce a new knowledge granularity, termed 'logic unit', where documents are transformed into more structured and loosely interconnected logic units with large language models. Extensive experiments conducted across both open-domain and industrial settings demonstrate that Thread outperforms existing paradigms significantly, improving the success rate of handling how-to questions by 21% to 33%. Moreover, Thread exhibits high adaptability in processing various document formats, drastically reducing the candidate quantity in the knowledge base and minimizing the required information to one-fourth compared with chunk, optimizing both efficiency and effectiveness., Comment: Work in progress
- Published
- 2024
19. Enhancing Language Model Factuality via Activation-Based Confidence Calibration and Guided Decoding
- Author
- Liu, Xin, Bayat, Farima Fatahi, and Wang, Lu
- Subjects
- Computer Science - Computation and Language
- Abstract
Calibrating language models (LMs) aligns their generation confidence with the actual likelihood of answer correctness, which can inform users about LMs' reliability and mitigate hallucinated content. However, prior calibration methods, such as self-consistency-based and logit-based approaches, are either limited in inference-time efficiency or fall short of providing informative signals. Moreover, simply filtering out low-confidence responses reduces the LM's helpfulness when the answers are correct. Therefore, effectively using calibration techniques to enhance an LM's factuality remains an unsolved challenge. In this paper, we first propose an activation-based calibration method, ActCab, which trains a linear layer on top of the LM's last-layer activations that can better capture the representations of knowledge. Built on top of ActCab, we further propose CoDec, a confidence-guided decoding strategy to elicit truthful answers with high confidence from LMs. By evaluating on five popular QA benchmarks, ActCab achieves superior calibration performance than all competitive baselines, e.g., by reducing the average expected calibration error (ECE) score by up to 39%. Further experiments on CoDec show consistent improvements in several LMs' factuality on challenging QA datasets, such as TruthfulQA, highlighting the value of confidence signals in enhancing factuality.
- Published
- 2024
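The expected calibration error (ECE) referenced in the abstract above is a standard metric; a minimal binned implementation follows. The bin count and toy inputs are ours, not the paper's setup:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: the count-weighted average of |mean confidence - accuracy|
    over equal-width confidence bins."""
    bins = [[] for _ in range(n_bins)]
    for c, ok in zip(confidences, correct):
        idx = min(int(c * n_bins), n_bins - 1)   # clamp c == 1.0 into the top bin
        bins[idx].append((c, ok))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        conf = sum(c for c, _ in b) / len(b)
        acc = sum(ok for _, ok in b) / len(b)
        ece += len(b) / n * abs(conf - acc)
    return ece

ece = expected_calibration_error([0.9, 0.8, 0.6, 0.3], [1, 1, 0, 0])
```

A well-calibrated model has per-bin confidence tracking per-bin accuracy, so ECE approaches zero; overconfident answers (high confidence, low accuracy) inflate it.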
20. Verifiable Generation with Subsentence-Level Fine-Grained Citations
- Author
- Cao, Shuyang and Wang, Lu
- Subjects
- Computer Science - Computation and Language
- Abstract
Verifiable generation requires large language models (LLMs) to cite source documents supporting their outputs, thereby improving output transparency and trustworthiness. Yet, previous work mainly targets the generation of sentence-level citations, lacking specificity about which parts of a sentence are backed by the cited sources. This work studies verifiable generation with subsentence-level fine-grained citations for more precise location of generated content supported by the cited sources. We first present a dataset, SCiFi, comprising 10K Wikipedia paragraphs with subsentence-level citations. Each paragraph is paired with a set of candidate source documents for citation and a query that triggers the generation of the paragraph content. On SCiFi, we evaluate the performance of state-of-the-art LLMs and strategies for processing long documents designed for these models. Our experimental results reveal key factors that could enhance the quality of citations, including the expansion of the source documents' context accessible to the models and the implementation of specialized model tuning., Comment: NAACL 2024 Findings
- Published
- 2024
21. LanEvil: Benchmarking the Robustness of Lane Detection to Environmental Illusions
- Author
- Zhang, Tianyuan, Wang, Lu, Li, Hainan, Xiao, Yisong, Liang, Siyuan, Liu, Aishan, Liu, Xianglong, and Tao, Dacheng
- Subjects
- Computer Science - Computer Vision and Pattern Recognition
- Abstract
Lane detection (LD) is an essential component of autonomous driving systems, providing fundamental functionalities like adaptive cruise control and automated lane centering. Existing LD benchmarks primarily focus on evaluating common cases, neglecting the robustness of LD models against environmental illusions such as shadows and tire marks on the road. This research gap poses significant safety challenges since these illusions exist naturally in real-world traffic situations. For the first time, this paper studies the potential threats caused by these environmental illusions to LD and establishes the first comprehensive benchmark LanEvil for evaluating the robustness of LD against this natural corruption. We systematically design 14 prevalent yet critical types of environmental illusions (e.g., shadow, reflection) that cover a wide spectrum of real-world influencing factors in LD tasks. Based on real-world environments, we create 94 realistic and customizable 3D cases using the widely used CARLA simulator, resulting in a dataset comprising 90,292 sampled images. Through extensive experiments, we benchmark the robustness of popular LD methods using LanEvil, revealing substantial performance degradation (-5.37% Accuracy and -10.70% F1-Score on average), with shadow effects posing the greatest risk (-7.39% Accuracy). Additionally, we assess the performance of commercial auto-driving systems OpenPilot and Apollo through collaborative simulations, demonstrating that proposed environmental illusions can lead to incorrect decisions and potential traffic accidents. To defend against environmental illusions, we propose the Attention Area Mixing (AAM) approach using hard examples, which achieves a significant robustness improvement (+3.76%) under illumination effects. We hope our paper can contribute to advancing more robust auto-driving systems in the future. Website: https://lanevil.github.io/., Comment: Accepted by ACM MM 2024
- Published
- 2024
22. A Mallows-like Criterion for Anomaly Detection with Random Forest Implementation
- Author
-
Zhao, Gaoxiang, Wang, Lu, and Wang, Xiaoqiang
- Subjects
Statistics - Machine Learning ,Computer Science - Machine Learning - Abstract
The effectiveness of anomaly signal detection can be significantly undermined by the inherent uncertainty of relying on one specified model. Under the framework of model averaging methods, this paper proposes a novel criterion to select the weights for aggregating multiple models, wherein the focal loss function accounts for the classification of extremely imbalanced data. This strategy is further integrated into the Random Forest algorithm by replacing the conventional voting method. We have evaluated the proposed method on benchmark datasets across various domains, including network intrusion. The findings indicate that our proposed method not only surpasses model averaging with typical loss functions but also outstrips common anomaly detection algorithms in terms of accuracy and robustness.
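As a rough sketch of the weighted-aggregation idea above, the snippet below weights candidate models by their mean focal loss on labeled data, so models that handle the rare (anomalous) class well dominate the average. The softmax weighting and all function names are illustrative assumptions; the paper's actual Mallows-like criterion is not reproduced here.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, eps=1e-12):
    # Focal loss for binary labels y in {0, 1}; p is the predicted P(y = 1).
    # The (1 - p_t)^gamma factor down-weights easy, well-classified samples.
    p_t = np.where(y == 1, p, 1.0 - p)
    return -((1.0 - p_t) ** gamma) * np.log(p_t + eps)

def model_average_weights(preds, y, gamma=2.0):
    # Assumed aggregation rule: softmax of negative mean focal loss per model.
    # preds: array of shape (n_models, n_samples) of anomaly probabilities.
    losses = np.array([focal_loss(p, y, gamma).mean() for p in preds])
    shifted = -losses - (-losses).max()  # stabilize the exponent
    w = np.exp(shifted)
    return w / w.sum()

# Toy check: the informative model should receive the larger weight.
y = np.array([0, 0, 1, 0, 1])
preds = np.array([
    [0.1, 0.2, 0.9, 0.1, 0.8],   # informative model
    [0.5, 0.5, 0.5, 0.5, 0.5],   # uninformative model
])
w = model_average_weights(preds, y)
```

Replacing Random Forest's majority vote with such a weighted average over per-tree probabilities mirrors the integration the abstract describes.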
- Published
- 2024
23. JUNO Sensitivity to Invisible Decay Modes of Neutrons
- Author
-
JUNO Collaboration, Abusleme, Angel, Adam, Thomas, Adamowicz, Kai, Ahmad, Shakeel, Ahmed, Rizwan, Aiello, Sebastiano, An, Fengpeng, An, Qi, Andronico, Giuseppe, Anfimov, Nikolay, Antonelli, Vito, Antoshkina, Tatiana, de André, João Pedro Athayde Marcondes, Auguste, Didier, Bai, Weidong, Balashov, Nikita, Baldini, Wander, Barresi, Andrea, Basilico, Davide, Baussan, Eric, Bellato, Marco, Beretta, Marco, Bergnoli, Antonio, Bick, Daniel, Bieger, Lukas, Biktemerova, Svetlana, Birkenfeld, Thilo, Blake, Iwan, Blyth, Simon, Bolshakova, Anastasia, Bongrand, Mathieu, Breton, Dominique, Brigatti, Augusto, Brugnera, Riccardo, Bruno, Riccardo, Budano, Antonio, Busto, Jose, Cabrera, Anatael, Caccianiga, Barbara, Cai, Hao, Cai, Xiao, Cai, Yanke, Cai, Zhiyan, Callier, Stéphane, Calvez, Steven, Cammi, Antonio, Campeny, Agustin, Cao, Chuanya, Cao, Guofu, Cao, Jun, Caruso, Rossella, Cerna, Cédric, Cerrone, Vanessa, Chang, Jinfan, Chang, Yun, Chatrabhuti, Auttakit, Chen, Chao, Chen, Guoming, Chen, Pingping, Chen, Shaomin, Chen, Xin, Chen, Yiming, Chen, Yixue, Chen, Yu, Chen, Zelin, Chen, Zhangming, Chen, Zhiyuan, Chen, Zikang, Cheng, Jie, Cheng, Yaping, Cheng, Yu Chin, Chepurnov, Alexander, Chetverikov, Alexey, Chiesa, Davide, Chimenti, Pietro, Chin, Yen-Ting, Chou, Po-Lin, Chu, Ziliang, Chukanov, Artem, Claverie, Gérard, Clementi, Catia, Clerbaux, Barbara, Molla, Marta Colomer, Di Lorenzo, Selma Conforti, Coppi, Alberto, Corti, Daniele, Csakli, Simon, Cui, Chenyang, Corso, Flavio Dal, Dalager, Olivia, Datta, Jaydeep, De La Taille, Christophe, Deng, Zhi, Deng, Ziyan, Ding, Xiaoyu, Ding, Xuefeng, Ding, Yayun, Dirgantara, Bayu, Dittrich, Carsten, Dmitrievsky, Sergey, Dohnal, Tadeas, Dolzhikov, Dmitry, Donchenko, Georgy, Dong, Jianmeng, Doroshkevich, Evgeny, Dou, Wei, Dracos, Marcos, Druillole, Frédéric, Du, Ran, Du, Shuxian, Duan, Yujie, Dugas, Katherine, Dusini, Stefano, Duyang, Hongyue, Eck, Jessica, Enqvist, Timo, Fabbri, Andrea, Fahrendholz, Ulrike, Fan, Lei, Fang, Jian, Fang, 
Wenxing, Fedoseev, Dmitry, Feng, Li-Cheng, Feng, Qichun, Ferraro, Federico, Fournier, Amélie, Fritsch, Fritsch, Gan, Haonan, Gao, Feng, Garfagnini, Alberto, Gavrikov, Arsenii, Giammarchi, Marco, Giudice, Nunzio, Gonchar, Maxim, Gong, Guanghua, Gong, Hui, Gornushkin, Yuri, Grassi, Marco, Gromov, Maxim, Gromov, Vasily, Gu, Minghao, Gu, Xiaofei, Gu, Yu, Guan, Mengyun, Guan, Yuduo, Guardone, Nunzio, Guizzetti, Rosa Maria, Guo, Cong, Guo, Wanlei, Hagner, Caren, Han, Hechong, Han, Ran, Han, Yang, He, Jinhong, He, Miao, He, Wei, He, Xinhai, Heinz, Tobias, Hellmuth, Patrick, Heng, Yuekun, Herrera, Rafael, Hor, YuenKeung, Hou, Shaojing, Hsiung, Yee, Hu, Bei-Zhen, Hu, Hang, Hu, Jun, Hu, Peng, Hu, Shouyang, Hu, Tao, Hu, Yuxiang, Hu, Zhuojun, Huang, Guihong, Huang, Hanxiong, Huang, Jinhao, Huang, Junting, Huang, Kaixuan, Huang, Shengheng, Huang, Wenhao, Huang, Xin, Huang, Xingtao, Huang, Yongbo, Hui, Jiaqi, Huo, Lei, Huo, Wenju, Huss, Cédric, Hussain, Safeer, Imbert, Leonard, Ioannisian, Ara, Isocrate, Roberto, Jafar, Arshak, Jelmini, Beatrice, Jeria, Ignacio, Ji, Xiaolu, Jia, Huihui, Jia, Junji, Jian, Siyu, Jiang, Cailian, Jiang, Di, Jiang, Guangzheng, Jiang, Wei, Jiang, Xiaoshan, Jiang, Xiaozhao, Jiang, Yixuan, Jing, Xiaoping, Jollet, Cécile, Kang, Li, Karaparabil, Rebin, Kazarian, Narine, Khan, Ali, Khatun, Amina, Khosonthongkee, Khanchai, Korablev, Denis, Kouzakov, Konstantin, Krasnoperov, Alexey, Kuleshov, Sergey, Kumaran, Sindhujha, Kutovskiy, Nikolay, Labit, Loïc, Lachenmaier, Tobias, Lai, Haojing, Landini, Cecilia, Leblanc, Sébastien, Lefevre, Frederic, Lei, Ruiting, Leitner, Rupert, Leung, Jason, Li, Demin, Li, Fei, Li, Fule, Li, Gaosong, Li, Hongjian, Li, Huang, Li, Jiajun, Li, Min, Li, Nan, Li, Qingjiang, Li, Ruhui, Li, Rui, Li, Shanfeng, Li, Shuo, Li, Tao, Li, Teng, Li, Weidong, Li, Weiguo, Li, Xiaomei, Li, Xiaonan, Li, Xinglong, Li, Yi, Li, Yichen, Li, Yufeng, Li, Zhaohan, Li, Zhibing, Li, Ziyuan, Li, Zonghai, Liang, An-An, Liang, Hao, Liao, Jiajun, Liao, Yilin, 
Liao, Yuzhong, Limphirat, Ayut, Lin, Guey-Lin, Lin, Shengxin, Lin, Tao, Ling, Jiajie, Ling, Xin, Lippi, Ivano, Liu, Caimei, Liu, Fang, Liu, Fengcheng, Liu, Haidong, Liu, Haotian, Liu, Hongbang, Liu, Hongjuan, Liu, Hongtao, Liu, Hongyang, Liu, Jianglai, Liu, Jiaxi, Liu, Jinchang, Liu, Min, Liu, Qian, Liu, Qin, Liu, Runxuan, Liu, Shenghui, Liu, Shubin, Liu, Shulin, Liu, Xiaowei, Liu, Xiwen, Liu, Xuewei, Liu, Yankai, Liu, Zhen, Loi, Lorenzo, Lokhov, Alexey, Lombardi, Paolo, Lombardo, Claudio, Loo, Kai, Lu, Chuan, Lu, Haoqi, Lu, Jingbin, Lu, Junguang, Lu, Meishu, Lu, Peizhi, Lu, Shuxiang, Lu, Xianguo, Lubsandorzhiev, Bayarto, Lubsandorzhiev, Sultim, Ludhova, Livia, Lukanov, Arslan, Luo, Fengjiao, Luo, Guang, Luo, Jianyi, Luo, Shu, Luo, Wuming, Luo, Xiaojie, Lyashuk, Vladimir, Ma, Bangzheng, Ma, Bing, Ma, Qiumei, Ma, Si, Ma, Xiaoyan, Ma, Xubo, Maalmi, Jihane, Mai, Jingyu, Malabarba, Marco, Malyshkin, Yury, Mandujano, Roberto Carlos, Mantovani, Fabio, Mao, Xin, Mao, Yajun, Mari, Stefano M., Marini, Filippo, Martini, Agnese, Mayer, Matthias, Mayilyan, Davit, Mednieks, Ints, Meng, Yue, Meraviglia, Anita, Meregaglia, Anselmo, Meroni, Emanuela, Miramonti, Lino, Mohan, Nikhil, Montuschi, Michele, Reveco, Cristobal Morales, Nastasi, Massimiliano, Naumov, Dmitry V., Naumova, Elena, Navas-Nicolas, Diana, Nemchenok, Igor, Thi, Minh Thuan Nguyen, Nikolaev, Alexey, Ning, Feipeng, Ning, Zhe, Nunokawa, Hiroshi, Oberauer, Lothar, Ochoa-Ricoux, Juan Pedro, Olshevskiy, Alexander, Orestano, Domizia, Ortica, Fausto, Othegraven, Rainer, Paoloni, Alessandro, Parker, George, Parmeggiano, Sergio, Patsias, Achilleas, Pei, Yatian, Pelicci, Luca, Peng, Anguo, Peng, Haiping, Peng, Yu, Peng, Zhaoyuan, Percalli, Elisa, Perrin, Willy, Perrot, Frédéric, Petitjean, Pierre-Alexandre, Petrucci, Fabrizio, Pilarczyk, Oliver, Rico, Luis Felipe Piñeres, Popov, Artyom, Poussot, Pascal, Previtali, Ezio, Qi, Fazhi, Qi, Ming, Qi, Xiaohui, Qian, Sen, Qian, Xiaohui, Qian, Zhen, Qiao, Hao, Qin, Zhonghua, Qiu, 
Shoukang, Qu, Manhao, Qu, Zhenning, Ranucci, Gioacchino, Re, Alessandra, Rebii, Abdel, Redchuk, Mariia, Reina, Gioele, Ren, Bin, Ren, Jie, Ren, Yuhan, Ricci, Barbara, Rientong, Komkrit, Rifai, Mariam, Roche, Mathieu, Rodphai, Narongkiat, Romani, Aldo, Roskovec, Bedřich, Ruan, Xichao, Rybnikov, Arseniy, Sadovsky, Andrey, Saggese, Paolo, Sandanayake, Deshan, Sangka, Anut, Sava, Giuseppe, Sawangwit, Utane, Schever, Michaela, Schwab, Cédric, Schweizer, Konstantin, Selyunin, Alexandr, Serafini, Andrea, Settimo, Mariangela, Shao, Junyu, Sharov, Vladislav, Shi, Hexi, Shi, Jingyan, Shi, Yanan, Shutov, Vitaly, Sidorenkov, Andrey, Šimkovic, Fedor, Singhal, Apeksha, Sirignano, Chiara, Siripak, Jaruchit, Sisti, Monica, Smirnov, Mikhail, Smirnov, Oleg, Sokolov, Sergey, Songwadhana, Julanan, Soonthornthum, Boonrucksar, Sotnikov, Albert, Sreethawong, Warintorn, Stahl, Achim, Stanco, Luca, Stankevich, Konstantin, Steiger, Hans, Steinmann, Jochen, Sterr, Tobias, Stock, Matthias Raphael, Strati, Virginia, Strizh, Michail, Studenikin, Alexander, Su, Aoqi, Su, Jun, Sun, Guangbao, Sun, Shifeng, Sun, Xilei, Sun, Yongjie, Sun, Yongzhao, Sun, Zhengyang, Suwonjandee, Narumon, Takenaka, Akira, Tan, Xiaohan, Tang, Jian, Tang, Jingzhe, Tang, Qiang, Tang, Quan, Tang, Xiao, Hariharan, Vidhya Thara, Tkachev, Igor, Tmej, Tomas, Torri, Marco Danilo Claudio, Triossi, Andrea, Trzaska, Wladyslaw, Tung, Yu-Chen, Tuve, Cristina, Ushakov, Nikita, Vedin, Vadim, Venettacci, Carlo, Verde, Giuseppe, Vialkov, Maxim, Viaud, Benoit, Vollbrecht, Cornelius Moritz, von Sturm, Katharina, Vorobel, Vit, Voronin, Dmitriy, Votano, Lucia, Walker, Pablo, Wang, Caishen, Wang, Chung-Hsiang, Wang, En, Wang, Guoli, Wang, Hanwen, Wang, Jian, Wang, Jun, Wang, Li, Wang, Lu, Wang, Meng, Wang, Mingyuan, Wang, Qianchuan, Wang, Ruiguang, Wang, Sibo, Wang, Siguang, Wang, Wei, Wang, Wenshuai, Wang, Xi, Wang, Xiangyue, Wang, Yangfu, Wang, Yaoguang, Wang, Yi, Wang, Yifang, Wang, Yuanqing, Wang, Yuyi, Wang, Zhe, Wang, Zheng, Wang, 
Zhimin, Watcharangkool, Apimook, Wei, Wei, Wei, Wenlu, Wei, Yadong, Wei, Yuehuan, Wen, Liangjian, Weng, Jun, Wiebusch, Christopher, Wirth, Rosmarie, Wu, Chengxin, Wu, Diru, Wu, Qun, Wu, Yinhui, Wu, Yiyang, Wu, Zhi, Wurm, Michael, Wurtz, Jacques, Wysotzki, Christian, Xi, Yufei, Xia, Dongmei, Xian, Shishen, Xiang, Ziqian, Xiao, Fei, Xiao, Xiang, Xie, Xiaochuan, Xie, Yijun, Xie, Yuguang, Xin, Zhao, Xing, Zhizhong, Xu, Benda, Xu, Cheng, Xu, Donglian, Xu, Fanrong, Xu, Hangkun, Xu, Jiayang, Xu, Jilei, Xu, Jing, Xu, Jinghuan, Xu, Meihang, Xu, Xunjie, Xu, Yin, Xu, Yu, Yan, Baojun, Yan, Qiyu, Yan, Taylor, Yan, Xiongbo, Yan, Yupeng, Yang, Changgen, Yang, Chengfeng, Yang, Fengfan, Yang, Jie, Yang, Lei, Yang, Pengfei, Yang, Xiaoyu, Yang, Yifan, Yang, Yixiang, Yang, Zekun, Yao, Haifeng, Ye, Jiaxuan, Ye, Mei, Ye, Ziping, Yermia, Frédéric, You, Zhengyun, Yu, Boxiang, Yu, Chiye, Yu, Chunxu, Yu, Guojun, Yu, Hongzhao, Yu, Miao, Yu, Xianghui, Yu, Zeyuan, Yu, Zezhong, Yuan, Cenxi, Yuan, Chengzhuo, Yuan, Ying, Yuan, Zhenxiong, Yue, Baobiao, Zafar, Noman, Zamogilnyi, Kirill, Zavadskyi, Vitalii, Zeng, Fanrui, Zeng, Shan, Zeng, Tingxuan, Zeng, Yuda, Zhan, Liang, Zhang, Aiqiang, Zhang, Bin, Zhang, Binting, Zhang, Feiyang, Zhang, Hangchang, Zhang, Haosen, Zhang, Honghao, Zhang, Jialiang, Zhang, Jiawen, Zhang, Jie, Zhang, Jingbo, Zhang, Jinnan, Zhang, Junwei, Zhang, Lei, Zhang, Peng, Zhang, Ping, Zhang, Qingmin, Zhang, Shiqi, Zhang, Shu, Zhang, Shuihan, Zhang, Siyuan, Zhang, Tao, Zhang, Xiaomei, Zhang, Xin, Zhang, Xuantong, Zhang, Yibing, Zhang, Yinhong, Zhang, Yiyu, Zhang, Yongpeng, Zhang, Yu, Zhang, Yuanyuan, Zhang, Yumei, Zhang, Zhenyu, Zhang, Zhijian, Zhao, Jie, Zhao, Rong, Zhao, Runze, Zhao, Shujun, Zhao, Tianhao, Zheng, Hua, Zheng, Yangheng, Zhou, Jing, Zhou, Li, Zhou, Nan, Zhou, Shun, Zhou, Tong, Zhou, Xiang, Zhou, Xing, Zhu, Jingsen, Zhu, Kangfu, Zhu, Kejun, Zhu, Zhihang, Zhuang, Bo, Zhuang, Honglin, Zong, Liang, and Zou, Jiaheng
- Subjects
High Energy Physics - Experiment ,High Energy Physics - Phenomenology - Abstract
We explore the decay of bound neutrons into invisible particles (e.g., $n\rightarrow 3 \nu$ or $nn \rightarrow 2 \nu$) in the JUNO liquid scintillator detector. The invisible decay includes two decay modes: $n \rightarrow \mathrm{inv}$ and $nn \rightarrow \mathrm{inv}$. The invisible decays of $s$-shell neutrons in $^{12}{\rm C}$ will leave a highly excited residual nucleus. Subsequently, some de-excitation modes of the excited residual nuclei can produce a time- and space-correlated triple coincidence signal in the JUNO detector. Based on a full Monte Carlo simulation informed by the latest available data, we estimate all backgrounds, including inverse beta decay events of the reactor antineutrino $\bar{\nu}_e$, natural radioactivity, cosmogenic isotopes, and neutral-current interactions of atmospheric neutrinos. Pulse shape discrimination and multivariate analysis techniques are employed to further suppress backgrounds. With two years of exposure, JUNO is expected to give an order-of-magnitude improvement over the current best limits. After 10 years of data taking, the expected JUNO sensitivities at a 90% confidence level are $\tau/B(n \rightarrow \mathrm{inv}) > 5.0 \times 10^{31} \, {\rm yr}$ and $\tau/B(nn \rightarrow \mathrm{inv}) > 1.4 \times 10^{32} \, {\rm yr}$., Comment: 28 pages, 7 figures, 4 tables
- Published
- 2024
24. Safe and Balanced: A Framework for Constrained Multi-Objective Reinforcement Learning
- Author
-
Gu, Shangding, Sel, Bilgehan, Ding, Yuhao, Wang, Lu, Lin, Qingwei, Knoll, Alois, and Jin, Ming
- Subjects
Computer Science - Artificial Intelligence ,Computer Science - Machine Learning - Abstract
In numerous reinforcement learning (RL) problems involving safety-critical systems, a key challenge lies in balancing multiple objectives while simultaneously meeting all stringent safety constraints. To tackle this issue, we propose a primal-based framework that orchestrates policy optimization between multi-objective learning and constraint adherence. Our method employs a novel natural policy gradient manipulation technique to optimize multiple RL objectives and overcome conflicting gradients between different tasks, since a simple weighted-average gradient direction may harm specific tasks' performance when the gradients of different task objectives are misaligned. When there is a violation of a hard constraint, our algorithm steps in to rectify the policy to minimize this violation. We establish theoretical convergence and constraint violation guarantees in a tabular setting. Empirically, our proposed method also outperforms prior state-of-the-art methods on challenging safe multi-objective reinforcement learning tasks.
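The abstract does not spell out the gradient manipulation itself. A common way to handle misaligned task gradients is PCGrad-style projection (project each gradient onto the normal plane of any gradient it conflicts with), sketched below as a stand-in, not as the paper's method; the function names are hypothetical.

```python
import numpy as np

def project_out_conflict(g, g_other):
    # If g opposes g_other (negative inner product), remove from g the
    # component pointing against g_other.
    dot = np.dot(g, g_other)
    if dot < 0:
        g = g - (dot / np.dot(g_other, g_other)) * g_other
    return g

def combined_update(grads):
    # Average the pairwise de-conflicted task gradients (PCGrad-style).
    adjusted = []
    for i, g in enumerate(grads):
        g_adj = g.copy()
        for j, g_other in enumerate(grads):
            if i != j:
                g_adj = project_out_conflict(g_adj, g_other)
        adjusted.append(g_adj)
    return np.mean(adjusted, axis=0)

# Two conflicting task gradients (negative inner product):
g1 = np.array([1.0, 0.0])
g2 = np.array([-1.0, 1.0])
u = combined_update([g1, g2])
```

The resulting update has a non-negative inner product with every original task gradient, so no task is actively harmed by the combined step.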
- Published
- 2024
25. Window and inpainting: dealing with data gaps for TianQin
- Author
-
Wang, Lu, Chen, Hong-Yu, Lyu, Xiangyu, Li, En-Kun, and Hu, Yi-Ming
- Subjects
General Relativity and Quantum Cosmology ,Astrophysics - Astrophysics of Galaxies ,Astrophysics - Instrumentation and Methods for Astrophysics ,Physics - Data Analysis, Statistics and Probability - Abstract
Space-borne gravitational wave detectors like TianQin might encounter data gaps due to factors like micro-meteoroid collisions or hardware failures. Such glitches cause discontinuities in the data and have been observed in LISA Pathfinder. The existence of such data gaps presents challenges to the data analysis for TianQin, especially for massive black hole binary mergers: since the signal-to-noise ratio (SNR) accumulates in a non-linear way, a gap near the merger can lead to a significant loss of SNR. It can also bias the estimate of the noise properties, and consequently the results of the parameter estimation. In this work, using simulated TianQin data with an injected massive black hole binary merger, we study the window function method and, for the first time, the inpainting method to cope with the data gap, and we design an iterative estimation scheme to properly estimate the noise spectrum. We find that both methods can properly estimate noise and signal parameters. The easy-to-implement window function method already performs well, except that it sacrifices some SNR due to the adoption of the window. The inpainting method is slower, but it minimizes the impact of the data gap., Comment: 12 pages, 5 figures, comments welcome
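A minimal sketch of the window-function idea: zero out the gap and taper the data smoothly to zero on each side with a Tukey (tapered cosine) window, so the windowed stream has no discontinuity at the gap edges. The segment handling and parameter choices below are illustrative assumptions, not TianQin's actual pipeline.

```python
import numpy as np

def tukey_window(n, alpha=0.5):
    # Tukey (tapered cosine) window of length n; alpha is the total taper fraction.
    t = np.linspace(0.0, 1.0, n)
    w = np.ones(n)
    edge = alpha / 2
    rise = t < edge
    fall = t > 1 - edge
    w[rise] = 0.5 * (1 + np.cos(np.pi * (2 * t[rise] / alpha - 1)))
    w[fall] = 0.5 * (1 + np.cos(np.pi * (2 * t[fall] / alpha - 2 / alpha + 1)))
    return w

def window_around_gap(data, gap_start, gap_end, taper=64):
    # Zero the gap and taper each remaining segment with a Tukey window
    # (this also tapers the outer ends, which is usually desirable before
    # Fourier transforming).  `taper` is the taper length in samples.
    out = data.astype(float).copy()
    out[gap_start:gap_end] = 0.0
    n1, n2 = gap_start, len(data) - gap_end
    out[:gap_start] *= tukey_window(n1, alpha=2 * taper / n1)
    out[gap_end:] *= tukey_window(n2, alpha=2 * taper / n2)
    return out

data = np.ones(1000)
out = window_around_gap(data, 400, 600, taper=64)
```

The cost of this simplicity is exactly the SNR loss the abstract mentions: every tapered sample contributes less than its full weight.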
- Published
- 2024
26. Maximizing Information Gain in Privacy-Aware Active Learning of Email Anomalies
- Author
-
Chung, Mu-Huan Miles, Li, Sharon, Kongmanee, Jaturong, Wang, Lu, Yang, Yuhong, Giang, Calvin, Jerath, Khilan, Raman, Abhay, Lie, David, and Chignell, Mark
- Subjects
Computer Science - Human-Computer Interaction ,Computer Science - Cryptography and Security ,Computer Science - Machine Learning - Abstract
Redacted emails satisfy most privacy requirements but they make it more difficult to detect anomalous emails that may be indicative of data exfiltration. In this paper we develop an enhanced method of Active Learning using an information gain maximizing heuristic, and we evaluate its effectiveness in a real world setting where only redacted versions of email could be labeled by human analysts due to privacy concerns. In the first case study we examined how Active Learning should be carried out. We found that model performance was best when a single highly skilled (in terms of the labeling task) analyst provided the labels. In the second case study we used confidence ratings to estimate the labeling uncertainty of analysts and then prioritized instances for labeling based on the expected information gain (the difference between model uncertainty and analyst uncertainty) that would be provided by labeling each instance. We found that the information gain maximizing heuristic improved model performance over existing sampling methods for Active Learning. Based on the results obtained, we recommend that analysts should be screened, and possibly trained, prior to implementation of Active Learning in cybersecurity applications. We also recommend that the information gain maximizing sampling method (based on expert confidence) should be used in early stages of Active Learning, provided that well-calibrated confidence can be obtained. We also note that the expertise of analysts should be assessed prior to Active Learning, as we found that analysts with lower labeling skill had poorly calibrated (over-) confidence in their labels., Comment: arXiv admin note: substantial text overlap with arXiv:2303.00870
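The prioritization rule described above (expected information gain = model uncertainty minus analyst uncertainty) can be sketched as follows. Encoding the analyst's self-reported confidence as a probability and using binary entropy for both terms are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def entropy(p, eps=1e-12):
    # Binary entropy (in nats) of a probability p.
    p = np.clip(p, eps, 1 - eps)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def prioritize(model_probs, analyst_conf):
    # Rank unlabeled instances by expected information gain:
    # model uncertainty (entropy of the model's anomaly probability)
    # minus analyst labeling uncertainty (entropy of their confidence).
    gain = entropy(model_probs) - entropy(analyst_conf)
    return np.argsort(-gain)  # indices, highest expected gain first

# The instance the model is most unsure about comes first when the
# analyst is equally confident about both.
order = prioritize(np.array([0.5, 0.9]), np.array([0.99, 0.99]))
```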
- Published
- 2024
27. MIDGARD: Self-Consistency Using Minimum Description Length for Structured Commonsense Reasoning
- Author
-
Nair, Inderjeet and Wang, Lu
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence - Abstract
We study the task of conducting structured reasoning as generating a reasoning graph from natural language input using large language models (LLMs). Previous approaches have explored various prompting schemes, yet they suffer from error propagation due to the autoregressive nature and single-pass-based decoding, which lack error correction capability. Additionally, relying solely on a single sample may result in the omission of true nodes and edges. To counter this, we draw inspiration from self-consistency (SC), which involves sampling a diverse set of reasoning chains and taking the majority vote as the final answer. To tackle the substantial challenge of applying SC to generated graphs, we propose MIDGARD (MInimum Description length Guided Aggregation of Reasoning in Directed acyclic graph), which leverages a Minimum Description Length (MDL)-based formulation to identify consistent properties among the different graph samples generated by an LLM. This formulation helps reject properties that appear in only a few samples, which are likely to be erroneous, while enabling the inclusion of missing elements without compromising precision. Our method demonstrates superior performance compared to baselines across various structured reasoning tasks, including argument structure extraction, explanation graph generation, inferring dependency relations among actions for everyday tasks, and semantic graph generation from natural text., Comment: Accepted at ACL 2024(main)
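MIDGARD's MDL-based selection is more involved than a simple vote, but the underlying intuition (properties seen in only a few samples are rejected) can be illustrated with a frequency-threshold aggregation over sampled edge sets. This is a deliberate simplification, not the paper's algorithm.

```python
from collections import Counter

def aggregate_graphs(samples, min_frac=0.5):
    # Keep an edge iff it appears in at least min_frac of the sampled graphs.
    # samples: list of edge sets, each a set of (src, dst) tuples drawn from
    # repeated LLM generations for the same input.
    counts = Counter(e for g in samples for e in set(g))
    k = len(samples)
    return {e for e, c in counts.items() if c / k >= min_frac}

# Edge (1, 2) appears in all three samples; the other edges are singletons
# and are rejected as likely errors.
edges = aggregate_graphs([{(1, 2), (2, 3)}, {(1, 2)}, {(1, 2), (3, 4)}])
```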
- Published
- 2024
28. Real-time Neural Woven Fabric Rendering
- Author
-
Chen, Xiang, Wang, Lu, and Wang, Beibei
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Graphics - Abstract
Woven fabrics are widely used in applications of realistic rendering, where real-time capability is also essential. However, rendering realistic woven fabrics in real time is challenging due to their complex structure and optical appearance, which cause aliasing and noise unless many samples are taken. The key to this issue is a multi-scale representation of the fabric shading model that allows fast range queries. Some previous neural methods deal with the issue at the cost of training on each material, which limits their practicality. In this paper, we propose a lightweight neural network to represent different types of woven fabrics at different scales. Thanks to the regularity and repetitiveness of woven fabric patterns, our network can encode fabric patterns and parameters as a small latent vector, which is later interpreted by a small decoder, enabling the representation of different types of fabrics. By applying the pixel's footprint as input, our network achieves multi-scale representation. Moreover, our network is fast and occupies little storage because of its lightweight structure. As a result, our method achieves rendering and editing of woven fabrics at nearly 60 frames per second on an RTX 3090, showing a quality close to the ground truth and being free from visible aliasing and noise., Comment: Accepted by SIGGRAPH 2024 Conference Proceedings
- Published
- 2024
29. Balance Reward and Safety Optimization for Safe Reinforcement Learning: A Perspective of Gradient Manipulation
- Author
-
Gu, Shangding, Sel, Bilgehan, Ding, Yuhao, Wang, Lu, Lin, Qingwei, Jin, Ming, and Knoll, Alois
- Subjects
Computer Science - Machine Learning ,Computer Science - Artificial Intelligence - Abstract
Ensuring the safety of Reinforcement Learning (RL) is crucial for its deployment in real-world applications. Nevertheless, managing the trade-off between reward and safety during exploration presents a significant challenge: improving reward performance through policy adjustments may adversely affect safety performance. In this study, we aim to address this conflict by leveraging the theory of gradient manipulation. Initially, we analyze the conflict between reward and safety gradients. Subsequently, we tackle the balance between reward and safety optimization by proposing a soft switching policy optimization method, for which we provide convergence analysis. Based on our theoretical examination, we provide a safe RL framework to overcome the aforementioned challenge, and we develop a Safety-MuJoCo Benchmark to assess the performance of safe RL algorithms. Finally, we evaluate the effectiveness of our method on the Safety-MuJoCo Benchmark and a popular safe RL benchmark, Omnisafe. Experimental results demonstrate that our algorithms outperform several state-of-the-art baselines in terms of balancing reward and safety optimization.
- Published
- 2024
30. Generational differences in sexual behaviour and partnering among gay, bisexual, and other men who have sex with men
- Author
-
Hunt, Giselle, Wang, Lu, Bacani, Nicanor, Card, Kiffer, Sereda, Paul, Lachowsky, Nathan, Roth, Eric, Hogg, Robert, Moore, David, and Armstrong, Heather
- Published
- 2019
31. Older Immigrants' Access to Primary Health Care in Canada: A Scoping Review
- Author
-
Wang, Lu, Guruge, Sepali, and Montana, Gelsomina
- Published
- 2019
32. The coagulation status in women of endometriosis with stage IV.
- Author
-
Wang, Lu, Ling, Jingxian, Zhu, Xianghong, Zhang, Yan, Li, Rong, Huang, Jingjing, Huang, Doudou, Wu, Chan, and Zhou, Huaijun
- Subjects
Endometriosis ,Hypercoagulability ,Inflammation ,Stage IV ,Humans ,Female ,Endometriosis ,Adult ,Retrospective Studies ,Case-Control Studies ,Fibrinogen ,Neutrophils ,Partial Thromboplastin Time ,Blood Coagulation ,Severity of Illness Index ,CA-125 Antigen ,ROC Curve ,Lymphocytes ,Biomarkers - Abstract
BACKGROUND: Endometriosis is considered a systemic disease, with proinflammatory cytokines present in the circulation that drive a hypercoagulable state. Currently, endometriosis is classified into four stages: I (minimal), II (mild), III (moderate) and IV (severe). The aim of this study is to investigate the correlations between inflammatory markers and coagulation factors in patients diagnosed with stage IV endometriosis. METHODS: This retrospective case-control study included 171 stage IV endometriosis patients and 184 controls. Continuous data were expressed as mean ± standard deviation. Mann-Whitney U and χ2 tests were used to compare the medians and frequencies among the groups. Spearman analysis was conducted to determine the correlations among the measured parameters. The diagnostic value of the parameters for differentiating endometriomas was tested by receiver operating characteristic (ROC) curves. RESULTS: The activated partial thromboplastin time (APTT) was decreased, and the concentration of fibrinogen (FIB) and the neutrophil-to-lymphocyte ratio (NLR) were increased, in women with stage IV endometriosis. APTT was negatively correlated with NLR, while the concentration of FIB was positively correlated with NLR. The ROC analysis showed that the area under the curve (AUC) of FIB was 0.766 (95% confidence interval: 0.717-0.814), with sensitivity and specificity reaching 86.5% and 60.9%, respectively. The AUCs of CA125 and CA199 were 0.638 (95% confidence interval: 0.578-0.697) and 0.71 (95% confidence interval: 0.656-0.763), with sensitivity and specificity reaching 40.9% and 91.8%, and 80.7% and 56.5%, respectively. The combination of these factors showed the highest AUC of 0.895 (0.862-0.927), with a sensitivity of 88.9% and a specificity of 77.7%. CONCLUSION: In the present study, we found that inflammatory factors showed significant correlations with APTT or FIB in stage IV endometriosis. Moreover, the coagulation factors combined with CA125 and CA199 were more reliable for identifying stage IV endometriosis.
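AUC values like those reported above can be computed from raw scores with the rank-based (Mann-Whitney U) identity: AUC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative case. A minimal sketch (not the study's actual software):

```python
import numpy as np

def auc(scores, labels):
    # Area under the ROC curve via pairwise comparisons:
    # AUC = P(score of a random positive > score of a random negative),
    # with ties counted as half a win.
    scores = np.asarray(scores, float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

a = auc([3, 1, 2, 4], [0, 0, 1, 1])  # 3 of 4 positive/negative pairs ordered correctly
```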
- Published
- 2024
33. The role of shear flow collapse and enhanced turbulence spreading in edge cooling approaching the density limit
- Author
-
Long, Ting, Diamond, PH, Ke, Rui, Chen, Zhipeng, Xu, Xin, Tian, Wenjing, Hong, Rongjie, Cao, Mingyun, Liu, Yanmin, Xu, Min, Wang, Lu, Yang, Zhoujun, Yuan, Jinbang, Zhou, Yongkang, Yan, Qinghao, Yang, Qinghu, Shen, Chengshuo, Nie, Lin, Wang, Zhanhui, Hao, Guangzhou, Wang, Nengchao, Chen, Zhongyong, Li, Jiquan, Chen, Wei, and Zhong, Wulyu
- Subjects
Nuclear and Plasma Physics ,Physical Sciences ,tokamak ,density limit ,edge cooling ,turbulence spreading ,shear flow ,Atomic ,Molecular ,Nuclear ,Particle and Plasma Physics ,Fluids & Plasmas ,Nuclear and plasma physics - Abstract
Experimental studies of the dynamics of shear flow and turbulence spreading at the edge of tokamak plasmas are reported. Scans of line-averaged density and plasma current are carried out while approaching the Greenwald density limit on the J-TEXT tokamak. In all scans, when the Greenwald fraction $f_G = \bar{n}/n_G = \bar{n}/(I_p/\pi a^2)$ increases, a common feature of enhanced turbulence spreading and edge cooling is found. The result suggests that turbulence spreading is a good indicator of edge cooling, indeed better than turbulent particle transport is. The normalized turbulence spreading power increases significantly when the normalized $E \times B$ shearing rate decreases. This indicates that turbulence spreading becomes prominent when the shearing rate is weaker than the turbulence scattering rate. The asymmetry between positive/negative (blobs/holes) spreading events, turbulence spreading power and shear flow are discussed. These results elucidate the important effects of the interaction between shear flow and turbulence spreading on plasma edge cooling.
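For reference, the Greenwald fraction above uses the standard scaling $n_G = I_p/(\pi a^2)$, with $\bar{n}$ in units of $10^{20}\,\mathrm{m^{-3}}$, plasma current $I_p$ in MA, and minor radius $a$ in m. A one-function sketch:

```python
import math

def greenwald_fraction(n_bar_1e20, I_p_MA, a_m):
    # f_G = n_bar / n_G, with the Greenwald density n_G = I_p / (pi * a^2)
    # in 10^20 m^-3 when I_p is in MA and a is in m.
    n_G = I_p_MA / (math.pi * a_m ** 2)
    return n_bar_1e20 / n_G

# Sanity check: n_bar equal to n_G gives f_G = 1.
f = greenwald_fraction(1 / math.pi, 1.0, 1.0)
```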
- Published
- 2024
34. Finite-sample adjustments for comparing clustered adaptive interventions using data from a clustered SMART
- Author
-
Pan, Wenchu, Almirall, Daniel, Kilbourne, Amy M., Quanbeck, Andrew, and Wang, Lu
- Subjects
Statistics - Methodology - Abstract
Adaptive interventions, aka dynamic treatment regimens, are sequences of pre-specified decision rules that guide the provision of treatment for an individual given information about their baseline and evolving needs, including in response to prior intervention. Clustered adaptive interventions (cAIs) extend this idea by guiding the provision of intervention at the level of clusters (e.g., clinics), but with the goal of improving outcomes at the level of individuals within the cluster (e.g., clinicians or patients within clinics). A clustered sequential multiple-assignment randomized trial (cSMART) is a multistage, multilevel randomized trial design used to construct high-quality cAIs. In a cSMART, clusters are randomized at multiple intervention decision points; at each decision point, the randomization probability can depend on response to prior data. A challenge in cluster-randomized trials, including cSMARTs, is the deleterious effect of small samples of clusters on statistical inference, particularly via estimation of standard errors. This manuscript develops finite-sample adjustment (FSA) methods for making improved statistical inference about the causal effects of cAIs in a cSMART. The paper develops FSA methods that (i) scale variance estimators using a degree-of-freedom adjustment, (ii) reference a t distribution (instead of a normal), and (iii) employ a "bias-corrected" variance estimator. Method (iii) requires extensions that are unique to the analysis of cSMARTs. Extensive simulation experiments are used to test the performance of the methods. The methods are illustrated using the Adaptive School-based Implementation of CBT (ASIC) study, a cSMART designed to construct a cAI for improving the delivery of cognitive behavioral therapy (CBT) by school mental health professionals within high schools in Michigan.
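Methods (i) and (ii) can be caricatured in a few lines: inflate the variance of a cluster-level mean by m/(m − p), where m is the number of clusters and p the number of estimated parameters, and reference a t distribution with m − p degrees of freedom instead of a normal. The bias-corrected estimator (iii) is cSMART-specific and not shown; the function below is a hypothetical sketch, not the paper's estimator.

```python
import numpy as np

def adjusted_se(cluster_means, p_params=1):
    # Standard error of the mean of cluster-level summaries, with a
    # finite-sample degree-of-freedom scaling m / (m - p).  Returns the
    # adjusted SE and the degrees of freedom to use with a t reference.
    r = np.asarray(cluster_means, float)
    m = len(r)
    var = r.var(ddof=1) / m            # variance of the cluster-level mean
    var_adj = var * m / (m - p_params) # FSA-style scale-up
    df = m - p_params
    return np.sqrt(var_adj), df

# With only 4 clusters, the adjustment widens the interval noticeably.
se, df = adjusted_se([1.0, 2.0, 3.0, 4.0], p_params=1)
```

Pairing the inflated SE with a t quantile on df degrees of freedom (rather than a normal quantile) is what guards against the anti-conservative inference that small cluster counts otherwise produce.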
- Published
- 2024
35. Enhanced Language Model Truthfulness with Learnable Intervention and Uncertainty Expression
- Author
-
Bayat, Farima Fatahi, Liu, Xin, Jagadish, H. V., and Wang, Lu
- Subjects
Computer Science - Computation and Language - Abstract
Large language models (LLMs) can generate long-form and coherent text, yet they often hallucinate facts, which undermines their reliability. To mitigate this issue, inference-time methods steer LLM representations toward the "truthful directions" previously learned for truth elicitation. However, applying these truthful directions with the same intensity fails to generalize across different query contexts. We propose LITO, a Learnable Intervention method for Truthfulness Optimization that automatically identifies the optimal intervention intensity tailored to each specific context. LITO explores a sequence of model generations based on increasing levels of intervention intensities. It selects the most accurate response or refuses to answer when the predictions are highly uncertain. Experiments on multiple LLMs and question-answering datasets demonstrate that LITO improves truthfulness while preserving task accuracy. The adaptive nature of LITO counters the limitations of one-size-fits-all intervention methods, maximizing truthfulness by reflecting the model's internal knowledge only when it is confident. Our code is available at https://github.com/launchnlp/LITO., Comment: ACL 2024 Findings (Long paper)
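The LITO selection loop can be sketched abstractly as follows. Here `generate` and `confidence` are stand-ins for the paper's actual components (intervened decoding and the model's self-assessed confidence), and the intensity grid and refusal threshold are assumptions for illustration.

```python
def lito_respond(generate, confidence,
                 intensities=(0.0, 0.5, 1.0, 1.5), tau=0.6):
    # Sweep increasing intervention intensities, keep the most confident
    # candidate, and refuse to answer when even the best candidate falls
    # below the confidence threshold tau.
    best, best_conf = None, -1.0
    for alpha in intensities:
        ans = generate(alpha)       # answer under intervention intensity alpha
        c = confidence(ans)         # self-assessed confidence in [0, 1]
        if c > best_conf:
            best, best_conf = ans, c
    return best if best_conf >= tau else "I don't know"

# Toy stand-ins: the model is only confident at intensity 1.0.
ans = lito_respond(lambda a: f"answer@{a}",
                   lambda s: 0.9 if s.endswith("1.0") else 0.3)
```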
- Published
- 2024
36. Verco: Learning Coordinated Verbal Communication for Multi-agent Reinforcement Learning
- Author
-
Li, Dapeng, Dong, Hang, Wang, Lu, Qiao, Bo, Qin, Si, Lin, Qingwei, Zhang, Dongmei, Zhang, Qi, Xu, Zhiwei, Zhang, Bin, and Fan, Guoliang
- Subjects
Computer Science - Multiagent Systems ,Computer Science - Artificial Intelligence - Abstract
In recent years, multi-agent reinforcement learning algorithms have made significant advancements in diverse gaming environments, leading to increased interest in the broader application of such techniques. To address the prevalent challenge of partial observability, communication-based algorithms have improved cooperative performance through the sharing of numerical embeddings among agents. However, the understanding of how collaborative mechanisms form is still very limited, making the design of a human-understandable communication mechanism a valuable problem to address. In this paper, we propose a novel multi-agent reinforcement learning algorithm that embeds large language models into agents, endowing them with the ability to generate human-understandable verbal communication. The entire framework has a message module and an action module. The message module is responsible for generating and sending verbal messages to other agents, effectively enhancing information sharing among agents. To further enhance the message module, we employ a teacher model to generate message labels from the global view and update the student model through Supervised Fine-Tuning (SFT). The action module receives messages from other agents and selects actions based on current local observations and received messages. Experiments conducted on the Overcooked game demonstrate that our method significantly enhances the learning efficiency and performance of existing methods, while also providing an interpretable tool for humans to understand the process of multi-agent cooperation., Comment: 12 pages, 6 figures
- Published
- 2024
37. Small Language Models Need Strong Verifiers to Self-Correct Reasoning
- Author
-
Zhang, Yunxiang, Khalifa, Muhammad, Logeswaran, Lajanugen, Kim, Jaekyeom, Lee, Moontae, Lee, Honglak, and Wang, Lu
- Subjects
Computer Science - Computation and Language - Abstract
Self-correction has emerged as a promising solution to boost the reasoning performance of large language models (LLMs), where LLMs refine their solutions using self-generated critiques that pinpoint the errors. This work explores whether small (<= 13B) language models (LMs) can self-correct on reasoning tasks with minimal inputs from stronger LMs. We propose a novel pipeline that prompts smaller LMs to collect self-correction data that supports the training of self-refinement abilities. First, we leverage correct solutions to guide the model in critiquing its incorrect responses. Second, the generated critiques, after filtering, are used for supervised fine-tuning of the self-correcting reasoner through solution refinement. Our experimental results show improved self-correction abilities of two models on five datasets spanning math and commonsense reasoning, with notable performance gains when paired with a strong GPT-4-based verifier, though limitations are identified when using a weak self-verifier for determining when to correct., Comment: ACL Findings 2024 - Camera Ready
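The filtering step in the data-collection pipeline can be sketched as below: keep a self-generated critique only when refining the incorrect response with it recovers the gold answer. A minimal sketch; the record fields and examples are mine, not the paper's.

```python
def filter_self_correction_data(records):
    """Keep (question, wrong answer, critique, refined answer) tuples only
    when the refined answer matches the known-correct gold answer."""
    return [r for r in records if r["refined"].strip() == r["gold"].strip()]

records = [
    {"question": "2+3*4?", "wrong": "20", "critique": "Multiply before adding.",
     "refined": "14", "gold": "14"},
    {"question": "7-5?", "wrong": "3", "critique": "Recount.",
     "refined": "1", "gold": "2"},
]
kept = filter_self_correction_data(records)  # only the first record survives
```

The surviving pairs then form the supervised fine-tuning set for the self-correcting reasoner.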
- Published
- 2024
38. Noiseless linear amplification-based quantum Ziv-Zakai bound for phase estimation and its Heisenberg error limits in noisy scenarios
- Author
-
Ye, Wei, Xiao, Peng, Xu, Xiaofan, Zhu, Xiang, Yan, Yunbin, Wang, Lu, Ren, Jie, Zhu, Yuxuan, Xia, Ying, Rao, Xuan, and Chang, Shoukang
- Subjects
Quantum Physics - Abstract
In this work, we address the central problem of how to effectively find the attainable precision limit of unknown parameters. In the framework of the quantum Ziv-Zakai bound (QZZB), we apply noiseless linear amplification (NLA) techniques to an initial coherent state (CS) used as the probe state, and examine whether the phase estimation performance is improved significantly in noisy scenarios involving photon loss and phase diffusion. More importantly, we also obtain two kinds of Heisenberg error limits of the QZZB with the NLA-based CS in these noisy scenarios, making comparisons with both the Margolus-Levitin (ML) type bound and the Mandelstam-Tamm (MT) type bound. Our analytical results show that in the cases of photon loss and phase diffusion, the phase estimation performance of the QZZB can be improved remarkably by increasing the NLA gain factor. In particular, the improvement is more pronounced under severe photon loss. Furthermore, under minimal photon loss, our Heisenberg error limit shows better compactness than the ML-type and MT-type bounds. Our findings provide useful guidance for accomplishing more complex quantum information processing tasks., Comment: 10 pages, 9 figures
- Published
- 2024
39. Existence of monotone Morse flow lines of the expander functional
- Author
-
Bernstein, Jacob, Chen, Letian, and Wang, Lu
- Subjects
Mathematics - Differential Geometry ,Mathematics - Analysis of PDEs ,53E10, 49Q20 - Abstract
Given a smooth asymptotically conical self-expander that is strictly unstable we construct a (singular) Morse flow line of the expander functional that connects it to a stable self-expander. This flow is monotone in a suitable sense and has small singular set., Comment: 46 pages
- Published
- 2024
40. Lower Bounds on Density for Topologically Nontrivial Minimal Cones up to Dimension Six
- Author
-
Bernstein, Jacob and Wang, Lu
- Subjects
Mathematics - Differential Geometry ,53A10, 53E10 - Abstract
We prove lower bounds on the density of regular minimal cones of dimension less than seven provided the complements of the cones are topologically nontrivial., Comment: 19 pages; 1 figure
- Published
- 2024
41. Source-Aware Training Enables Knowledge Attribution in Language Models
- Author
-
Khalifa, Muhammad, Wadden, David, Strubell, Emma, Lee, Honglak, Wang, Lu, Beltagy, Iz, and Peng, Hao
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence - Abstract
Large language models (LLMs) learn a vast amount of knowledge during pretraining, but they are often oblivious to the source(s) of such knowledge. We investigate the problem of intrinsic source citation, where LLMs are required to cite the pretraining source supporting a generated response. Intrinsic source citation can enhance LLM transparency, interpretability, and verifiability. To give LLMs such ability, we explore source-aware training -- a recipe that involves (i) training the LLM to associate unique source document identifiers with the knowledge in each document, followed by (ii) an instruction-tuning stage to teach the LLM to cite a supporting pretraining source when prompted. Source-aware training borrows from existing pretraining/fine-tuning frameworks and requires minimal changes to the model architecture or implementation. Through experiments on synthetic data, we demonstrate that our training recipe can enable faithful attribution to the pretraining data without a substantial impact on the model's perplexity compared to standard pretraining. Our findings also highlight the importance of pretraining data augmentation in achieving attribution. Code and data available here: \url{https://github.com/mukhal/intrinsic-source-citation}, Comment: COLM '24
- Published
- 2024
42. CODA: A COst-efficient Test-time Domain Adaptation Mechanism for HAR
- Author
-
Qiu, Minghui, Huang, Yandao, Chen, Lin, Wang, Lu, and Wu, Kaishun
- Subjects
Computer Science - Machine Learning ,Computer Science - Networking and Internet Architecture - Abstract
In recent years, emerging research on mobile sensing has led to novel scenarios that enhance daily life for humans, but dynamic usage conditions often result in performance degradation when systems are deployed in real-world settings. Existing solutions typically employ one-off adaptation schemes based on neural networks, which struggle to ensure robustness against uncertain drifting conditions in human-centric sensing scenarios. In this paper, we propose CODA, a COst-efficient Domain Adaptation mechanism for mobile sensing that addresses real-time drifts from the data distribution perspective with active learning theory, ensuring cost-efficient adaptation directly on the device. By incorporating a clustering loss and importance-weighted active learning algorithm, CODA retains the relationship between different clusters during cost-effective instance-level updates, preserving meaningful structure within the data distribution. We also showcase its generalization by seamlessly integrating it with Neural Network-based solutions for Human Activity Recognition tasks. Through meticulous evaluations across diverse datasets, including phone-based, watch-based, and integrated sensor-based sensing tasks, we demonstrate the feasibility and potential of online adaptation with CODA. The promising results achieved by CODA, even without learnable parameters, also suggest the possibility of realizing unobtrusive adaptation through specific application designs with sufficient feedback.
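The importance-weighted active-learning component the abstract mentions can be illustrated with an IWAL-style query rule: request a label with probability proportional to model disagreement, and reweight queried samples by the inverse probability so updates stay unbiased. This is my hedged illustration of the general technique, not CODA's actual algorithm or constants.

```python
import random

def query_decision(disagreement, p_min=0.1, rng=random):
    """Query a label with probability p = clip(disagreement, p_min, 1).
    Return (queried, importance_weight); the weight 1/p keeps the
    weighted loss an unbiased estimate of the true loss."""
    p = max(p_min, min(1.0, disagreement))
    queried = rng.random() < p
    return queried, (1.0 / p if queried else 0.0)
```

Clipping at `p_min` bounds the variance of the importance weights, which matters for stable on-device updates.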
- Published
- 2024
43. Hierarchical Gaussian Mixture Normalizing Flow Modeling for Unified Anomaly Detection
- Author
-
Yao, Xincheng, Li, Ruoqi, Qian, Zefeng, Wang, Lu, and Zhang, Chongyang
- Subjects
Computer Science - Machine Learning ,Computer Science - Computer Vision and Pattern Recognition - Abstract
Unified anomaly detection (AD) is one of the most challenging settings for anomaly detection, where a single unified model is trained with normal samples from multiple classes with the objective of detecting anomalies in all of these classes. For such a challenging task, popular normalizing flow (NF) based AD methods may fall into a "homogeneous mapping" issue, where the NF-based AD models are biased toward generating similar latent representations for both normal and abnormal features, leading to a high miss rate for anomalies. In this paper, we propose a novel hierarchical Gaussian mixture normalizing flow modeling method for unified anomaly detection, which we call HGAD. Our HGAD consists of two key components: inter-class Gaussian mixture modeling and intra-class mixed class centers learning. Compared to previous NF-based AD methods, the hierarchical Gaussian mixture modeling approach brings stronger representation capability to the latent space of normalizing flows, so that even a complex multi-class distribution can be well represented and learned in the latent space. In this way, we avoid mapping different class distributions into the same single Gaussian prior, thus effectively avoiding or mitigating the "homogeneous mapping" issue. We further observe that the more distinguishable the class centers are, the more conducive they are to avoiding the bias issue. We therefore also propose a mutual information maximization loss for better structuring the latent feature space. We evaluate our method on four real-world AD benchmarks, where we significantly improve upon previous NF-based AD methods and also outperform the SOTA unified AD methods., Comment: This paper is accepted by ECCV2024
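The contrast between a single Gaussian prior and a Gaussian mixture prior can be made concrete with a toy 1-D log-density. This is a simplified sketch of the general idea (unit-variance components, scalar latents), not the paper's multivariate flow prior.

```python
import math

def gmm_log_prob(z, centers, weights):
    """Log-density of a 1-D Gaussian mixture prior with unit variance:
    log sum_c w_c * N(z; mu_c, 1), computed via the log-sum-exp trick."""
    comps = [math.log(w) - 0.5 * (z - mu) ** 2 - 0.5 * math.log(2 * math.pi)
             for w, mu in zip(weights, centers)]
    m = max(comps)
    return m + math.log(sum(math.exp(c - m) for c in comps))

# With one component this reduces to a standard Gaussian; with several
# well-separated centers, each class can occupy its own mode in latent space.
single = gmm_log_prob(0.0, [0.0], [1.0])
mixture = gmm_log_prob(0.0, [-3.0, 3.0], [0.5, 0.5])
```

Giving each class its own center is what prevents all classes from being squeezed into one mode, the "homogeneous mapping" failure the abstract describes.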
- Published
- 2024
44. TYC 3340-2437-1: A Quadruple System with A Massive Star
- Author
-
Li, Jiao, Liu, Chao, Luo, Changqing, Zhang, Bo, Li, Jiang-Dan, Li, Jia-Dong, Han, Zhan-Wen, Chen, Xue-Fei, Wang, Lu-Qian, Fang, Min, Xing, Li-Feng, Zhang, Xi-Liang, and Jin, Chichuan
- Subjects
Astrophysics - Solar and Stellar Astrophysics - Abstract
Hierarchical massive quadruple systems are ideal laboratories for examining the theories of star formation, dynamical evolution, and stellar evolution. The successive mergers of hierarchical quadruple systems might explain the mass gap between neutron stars and black holes. Searching the light curves of O-type binaries identified by LAMOST, we find a (2+2) quadruple system, TYC 3340-2437-1, located in a stellar bow-shock nebula (SBN). It has a probability of over 99.99\% of being a quadruple system, derived from the surface density of the vicinity stars. Its inner orbital periods are 3.390602(89) days and 2.4378(16) days, respectively, and the total mass is about (11.47 + 5.79) + (5.2 + 2.02) = 24.48 $M_{\odot}$. The line-of-sight inclinations of the inner binaries, B$_1$ and B$_2$, are 55.94 and 78.2 degrees, respectively, indicating that they are not co-planar. Based on observations spanning 34 months and the significance of the astrometric excess noise ($D>2$) in Gaia DR3 data, we conjecture that its outer orbital period might be a few years. If so, the quadruple system might have formed through the disk fragmentation mechanism with an outer eccentricity greater than zero. This eccentricity could be the cause of both the arc-like feature of the SBN and the noncoplanarity of the inner orbits. The outer orbital period and outer eccentricity could be determined with the release of future epoch astrometric data from Gaia.
- Published
- 2024
45. Nissist: An Incident Mitigation Copilot based on Troubleshooting Guides
- Author
-
An, Kaikai, Yang, Fangkai, Lu, Junting, Li, Liqun, Ren, Zhixing, Huang, Hao, Wang, Lu, Zhao, Pu, Kang, Yu, Ding, Hua, Lin, Qingwei, Rajmohan, Saravan, Zhang, Dongmei, and Zhang, Qi
- Subjects
Computer Science - Software Engineering ,Computer Science - Artificial Intelligence ,Computer Science - Computation and Language - Abstract
Effective incident management is pivotal for the smooth operation of enterprise-level cloud services. In order to expedite incident mitigation, service teams compile troubleshooting knowledge into Troubleshooting Guides (TSGs) accessible to on-call engineers (OCEs). While automated pipelines can resolve the most frequent and easy incidents, there still exist complex incidents that require OCEs' intervention. However, TSGs are often unstructured and incomplete, which requires manual interpretation by OCEs, leading to on-call fatigue and decreased productivity, especially among new-hire OCEs. In this work, we propose Nissist, which leverages TSGs and incident mitigation histories to provide proactive suggestions, reducing human intervention. Leveraging Large Language Models (LLMs), Nissist extracts insights from unstructured TSGs and historical incident mitigation discussions, forming a comprehensive knowledge base. Its multi-agent system design enhances proficiency in precisely discerning user queries, retrieving relevant information, and delivering systematic plans consecutively. Through our use case and experiments, we demonstrate that Nissist significantly reduces Time to Mitigate (TTM) in incident mitigation, alleviating operational burdens on OCEs and improving service reliability. Our demo is available at https://aka.ms/nissist_demo., Comment: Work in progress
- Published
- 2024
46. Adaptive Weight Learning for Multiple Outcome Optimization With Continuous Treatment
- Author
-
Wang, Chang and Wang, Lu
- Subjects
Statistics - Methodology ,Mathematics - Statistics Theory - Abstract
To promote precision medicine, individualized treatment regimes (ITRs) are crucial for optimizing the expected clinical outcome based on patient-specific characteristics. However, existing ITR research has primarily focused on scenarios with categorical treatment options and a single outcome. In reality, clinicians often encounter scenarios with continuous treatment options and multiple, potentially competing outcomes, such as medicine efficacy and unavoidable toxicity. To balance these outcomes, a proper weight is necessary, which should be learned in a data-driven manner that considers both patient preference and clinician expertise. In this paper, we present a novel algorithm for developing individualized treatment regimes (ITRs) that incorporate continuous treatment options and multiple outcomes, utilizing observational data. Our approach assumes that clinicians are optimizing individualized patient utilities with sub-optimal treatment decisions that are at least better than random assignment. Treatment assignment is assumed to directly depend on the true underlying utility of the treatment rather than patient characteristics. The proposed method simultaneously estimates the weighting of composite outcomes and the decision-making process, allowing for construction of individualized treatment regimes with continuous doses. The proposed estimators can be used for inference and variable selection, facilitating the identification of informative treatment assignments and preference-associated variables. We evaluate the finite sample performance of our proposed method via simulation studies and apply it to a real-data application in radiation oncology.
- Published
- 2024
47. Dual-Path Coupled Image Deraining Network via Spatial-Frequency Interaction
- Author
-
He, Yuhong, Jiang, Aiwen, Jiang, Lingfang, Wang, Zhifeng, and Wang, Lu
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
Transformers have recently emerged as a significant force in the field of image deraining. Existing image deraining methods build on extensive research into self-attention. Though showcasing impressive results, they tend to neglect critical frequency information, as self-attention is generally less adept at capturing high-frequency details. To overcome this shortcoming, we have developed an innovative Dual-Path Coupled Deraining Network (DPCNet) that integrates information from both spatial and frequency domains through a Spatial Feature Extraction Block (SFEBlock) and a Frequency Feature Extraction Block (FFEBlock). We have further introduced an effective Adaptive Fusion Module (AFM) for the dual-path feature aggregation. Extensive experiments on six public deraining benchmarks and downstream vision tasks have demonstrated that our proposed method not only outperforms existing state-of-the-art deraining methods but also achieves visually pleasing results with excellent robustness on downstream vision tasks.
- Published
- 2024
48. TensoSDF: Roughness-aware Tensorial Representation for Robust Geometry and Material Reconstruction
- Author
-
Li, Jia, Wang, Lu, Zhang, Lei, and Wang, Beibei
- Subjects
Computer Science - Graphics - Abstract
Reconstructing objects with realistic materials from multi-view images is problematic, since it is highly ill-posed. Although neural reconstruction approaches have exhibited impressive reconstruction ability, they are designed for objects with specific materials (e.g., diffuse or specular materials). To this end, we propose a novel framework for robust geometry and material reconstruction, where the geometry is expressed with an implicit signed distance field (SDF) encoded by a tensorial representation, namely TensoSDF. At the core of our method is the roughness-aware incorporation of the radiance and reflectance fields, which enables a robust reconstruction of objects with arbitrary reflective materials. Furthermore, the tensorial representation enhances geometry details in the reconstructed surface and reduces the training time. Finally, we estimate the materials using an explicit mesh for efficient intersection computation and an implicit SDF for accurate representation. Consequently, our method can achieve more robust geometry reconstruction, outperform the previous works in terms of relighting quality, and reduce training time by 50% and inference time by 70%., Comment: Accepted by SIGGRAPH 2024
- Published
- 2024
49. Table-Top Tunable Chiral Photonic Emitter
- Author
-
Wang, Lu, Ciappina, Marcelo Fabián, Brabec, Thomas, and Liu, Xiaojun
- Subjects
Physics - Optics - Abstract
The increasing interest in chiral light stems from its spiral trajectory along the propagation direction, facilitating the interaction between different polarization states of light and matter. Despite tremendous achievements in chiral light-related research, the generation and control of chiral pulses have presented enduring challenges, especially in the terahertz and ultraviolet spectral ranges, due to the lack of suitable optical elements for effective pulse manipulation. Conventionally, chiral light can be obtained from intricate optical systems, by an external magnetic field, or by metamaterials, which necessitate sophisticated optical configurations. Here, we propose a versatile tunable chiral emitter, composed of only two planar Weyl semimetal slabs, addressing the challenges in both spectral ranges. Our results open the way to a compact tunable chiral emitter platform in both the terahertz and ultraviolet frequency ranges. This advancement holds the potential to serve as the cornerstone for integrated chiral photonics.
- Published
- 2024
50. Optimize Individualized Energy Delivery for Septic Patients Using Predictive Deep Learning Models: A Real World Study
- Author
-
Wang, Lu, Chang, Li, Zhang, Ruipeng, Li, Kexun, Wang, Yu, Chen, Wei, Feng, Xuanlin, Sun, Mingwei, Wang, Qi, Lu, Charles Damien, Zeng, Jun, and Jiang, Hua
- Subjects
Quantitative Biology - Other Quantitative Biology - Abstract
Background and Objectives: We aim to establish deep learning models to optimize individualized energy delivery for septic patients. Methods and Study Design: We conducted a study of adult septic patients in the Intensive Care Unit (ICU), collecting 47 indicators for 14 days. After data cleaning and preprocessing, we used statistical analysis to explore energy delivery in deceased and surviving patients. We filtered out nutrition-related features and divided the data into three metabolic phases: acute early, acute late, and rehabilitation. Models were built using data before September 2020 and validated on the rest. We then established optimal energy target models for each phase using deep learning. Results: A total of 277 patients and 3115 records were included in this study. The models indicated that the optimal energy targets in the three phases were 900 kcal/d, 2300 kcal/d, and 2000 kcal/d, respectively. Excessive energy intake rapidly increased mortality in the early period of the acute phase. Insufficient energy in the late period of the acute phase significantly raised the mortality of septic patients. For the rehabilitation phase, both too much and too little energy delivery were associated with high mortality. Conclusion: Our study established time-series prediction models for septic patients to optimize energy delivery in the ICU. This approach indicated the feasibility of developing nutritional tools for critically ill patients. We recommend permissive underfeeding only in the early acute phase. Later, increased energy intake may improve survival and settle energy debts caused by underfeeding.
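The abstract's phase-specific optimal targets can be encoded as a simple lookup. The numbers come directly from the reported results; the phase names and function are my shorthand, not the study's software.

```python
# Optimal energy targets per metabolic phase (kcal/day), as reported above.
ENERGY_TARGETS_KCAL_PER_DAY = {
    "acute_early": 900,      # permissive underfeeding in the early acute phase
    "acute_late": 2300,      # increased intake in the late acute phase
    "rehabilitation": 2000,  # moderate intake during rehabilitation
}

def optimal_energy_target(phase):
    """Return the model-derived optimal energy target (kcal/day) for a phase."""
    return ENERGY_TARGETS_KCAL_PER_DAY[phase]
```

Such a lookup could serve as the decision layer of a bedside nutrition tool, with the phase itself determined clinically.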
- Published
- 2024