60 results
Search Results
2. Standardizing the Software Tag in Japan for Transparency of Development
- Abstract
In this paper, we describe the Software Tag which makes software development visible to software purchasers (users). A software tag is a partial set of empirical data about a software development project shared between the purchaser and developer. The purchaser uses the software tag to evaluate the software project, allowing them to recognize the quality level of the processes and products involved. With Japanese government support, we have successfully standardized the software tag named Software Tag Standard 1.0, and have developed various associated tools for tag data collection and visualization. For its initial evaluation, the software tag has been applied to several projects. This paper also presents various activities aimed at promoting the use of the software tag in Japan and the world., PROFES 2010 : Product-Focused Software Process Improvement, June 21-23, 2010, Limerick, Ireland
- Published
- 2023
3. Neural embedding: learning the embedding of the manifold of physics data
- Abstract
In this paper, we present a method of embedding physics data manifolds with metric structure into lower dimensional spaces with simpler metrics, such as Euclidean and Hyperbolic spaces. We then demonstrate that it can be a powerful step in the data analysis pipeline for many applications. Using progressively more realistic simulated collisions at the Large Hadron Collider, we show that this embedding approach learns the underlying latent structure. With the notion of volume in Euclidean spaces, we provide for the first time a viable solution to quantifying the true search capability of model agnostic search algorithms in collider physics (i.e. anomaly detection). Finally, we discuss how the ideas presented in this paper can be employed to solve many practical challenges that require the extraction of physically meaningful representations from information in complex high dimensional datasets.
- Published
- 2023
4. From Bismarck to Beveridge: the other pension reform in Spain
- Abstract
Ageing is the major challenge for PAYG pension systems in developed countries. Most of them are undergoing reforms in order to adapt to the new demographic reality. The package of reforms implemented includes increasing the retirement age, reducing the replacement rate, or introducing a sustainability factor linking pensions to life expectancy. The aim of this paper is to analyse the potential consequences of a different type of reform that is at a very incipient stage in Spain but that could have a significant impact if fully implemented. This reform, called the ‘silent reform’ because it is imperceptible to citizens in its early stages, basically consists of increasing maximum pensions in line with inflation instead of wage or productivity growth. This policy reduces the replacement rate only for high-earning workers and increases the redistributive component of the system. This paper is the first to quantify and evaluate the potential consequences of this type of reform in Spain. We use an accounting model with heterogeneous agents and overlapping generations to project pension expenditure for the next six decades. The results show that this type of reform could potentially contain future expenditure, but at the cost of changing the nature of the pension system from a contributory or Bismarckian-type system into a pure redistributive or Beveridgean-type one., Ministerio de Ciencia e Innovación (MICINN), Depto. de Análisis Económico y Economía Cuantitativa, Fac. de Ciencias Económicas y Empresariales
- Published
- 2023
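The indexing mechanism behind the ‘silent reform’ can be sketched in a few lines; the growth rates below are illustrative assumptions, not the paper's calibration:

```python
def cap_ratio(years, inflation=0.02, wage_growth=0.035):
    """Ratio of a price-indexed maximum pension to a wage-indexed one
    after `years` years (hypothetical rates, for illustration only)."""
    return ((1 + inflation) / (1 + wage_growth)) ** years
```

With these rates the cap erodes to less than half its wage-relative value over six decades, which is how the reform quietly lowers replacement rates for high earners while leaving lower pensions untouched.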
5. Effort Estimation Based on Collaborative Filtering
- Abstract
Effort estimation methods are important tools for project managers in controlling the human resources of ongoing or future software projects. The estimations require historical project data, including process and product metrics that characterize past projects. In practice, however, it is a problem that historical project data frequently contain substantial missing values. In this paper, we propose an effort estimation method based on Collaborative Filtering to solve this problem. Collaborative Filtering has been developed by information retrieval researchers as an estimation technique for defective data, i.e. data having substantial missing values. The proposed method first evaluates the similarity between a target (ongoing) project and each past project using a vector-based similarity measure. It then predicts the effort of the target project as the weighted sum of the efforts of similar past projects. We conducted an experimental case study to evaluate the estimation performance of the proposed method. The proposed method showed better performance than the conventional regression method when the data had substantial missing values., PROFES 2004 : Product Focused Software Process Improvement, April 5-8, 2004, Kansai Science City, Japan
- Published
- 2023
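The two steps described above (similarity over co-observed metrics, then a similarity-weighted sum of past efforts) can be sketched as follows; the data layout is a hypothetical one, with `None` marking missing metric values:

```python
import math

def similarity(a, b):
    """Cosine similarity computed only over metrics observed in both projects,
    which is how missing values are tolerated rather than imputed."""
    pairs = [(x, y) for x, y in zip(a, b) if x is not None and y is not None]
    if not pairs:
        return 0.0
    dot = sum(x * y for x, y in pairs)
    na = math.sqrt(sum(x * x for x, _ in pairs))
    nb = math.sqrt(sum(y * y for _, y in pairs))
    return dot / (na * nb) if na and nb else 0.0

def estimate_effort(target, history):
    """history: list of (metric_vector, effort) for past projects.
    Predict effort as the similarity-weighted mean of past efforts."""
    weights = [(similarity(target, m), e) for m, e in history]
    total = sum(w for w, _ in weights)
    if total == 0:
        raise ValueError("no comparable past project")
    return sum(w * e for w, e in weights) / total
```

For example, `estimate_effort([2.0, 4.0, 1.0], history)` returns the weighted mean effort of the past projects most similar to the target.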
6. Webjig: An Automated User Data Collection System for Website Usability Evaluation
- Abstract
In order to improve website usability, it is important for developers to understand how users access websites. In this paper, we present Webjig, which is a support system for website usability evaluation in order to resolve the problems associated with the existing systems. Webjig can collect users’ interaction data from static and dynamic websites. Moreover, by using Webjig, developers can precisely identify users’ activities on websites. By performing an experiment to evaluate the usefulness of Webjig, we have confirmed that developers could effectively improve website usability., HCI 2009 : Human-Computer Interaction. New Trends, July 19-24, 2009, San Diego, CA, USA
- Published
- 2023
7. Community Search: A Collaborative Searching Web Application with a User Ranking System
- Abstract
People use search engines in daily life, but most of today's tools treat information-seeking tasks as a transient activity. In this research paper we introduce a web application system that provides collaborative search functions and an expert-finding system. The system helps users organize search results and collaborate with others. With the new iterative algorithm, users also obtain a higher proportion of the results they need, and the system can suggest experts related to the search keyword., OCSC 2011 : Online Communities and Social Computing, July 9-14, 2011, Orlando, FL, USA
- Published
- 2023
8. Exploiting Eye Gaze Information for Operating Services in Home Network System
- Abstract
This paper presents a system which extensively exploits user’s eye gaze information for operating services and appliances in the emerging home network system (HNS). We design and implement the system called AXELLA, which captures user’s gaze, then invokes a service operation, and finally announces the response via voice. AXELLA interprets the gaze information together with supplementary information as a gaze context, and triggers a service module associated by a service rule. Thus, a simple gazing activity can be used for various service operations. Service developers (or even home users) can easily develop context-aware HNS services with the eye-gaze-based UI. We demonstrate a practical service called “See and Know” implemented using AXELLA, where a user can acquire the current status information of every appliance just by looking at the appliance. It was shown that the proposed system can reduce the artificial dependency significantly with respect to ease-of-learning and system scalability., UCS 2006 : Ubiquitous Computing Systems, October 11-13, 2006, Seoul, Korea
- Published
- 2023
9. A Software Process Tailoring System Focusing to Quantitative Management Plans
- Abstract
This paper presents a survey on the use of quantitative management indicators in a Japanese software development organization. The survey was conducted to investigate possible criteria for selecting and customizing organizational standard indicators according to the context of each project. Based on the results of the survey, we propose a process tailoring support system that focuses mainly on quantitative management planning. The system, EPDG+ (Electronic Process Data Guidebook Plus), helps project planners select and customize the indicators to be employed in process control. Derived software project plans, including measurement and analysis activities, can be browsed in detail with this system., PROFES 2006 : Product-Focused Software Process Improvement, June 12-14, 2006, Amsterdam, The Netherlands
- Published
- 2023
10. Impact Analysis of Granularity Levels on Feature Location Technique
- Abstract
Due to the increase in software requirements and features, modern software systems continue to grow in size and complexity. Locating the source code entities required to implement a feature in millions of lines of code is labor- and cost-intensive for developers. To this end, several studies have proposed the use of Information Retrieval (IR) to rank source code entities based on their textual similarity to an issue report. The ranked source code entities can be at the class or function granularity level. Source code entities at the class level are usually large and might contain many functions that do not implement the feature. Hence, we conjecture that class-level feature location requires more effort than function-level feature location. In this paper, we investigate the impact of granularity levels on a feature location technique. We also present a new effort-based evaluation method. The results indicate that the function-level feature location technique outperforms the class-level technique. Moreover, the function-level technique also required seven times less effort than the class-level technique to localize the first relevant source code entity. Therefore, we conclude that feature location at the function level of program elements is effective in practice., APRES 2014 : Asia Pacific Requirements Engineering Symposium, April 28-29, 2014, Auckland, New Zealand
- Published
- 2023
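A minimal sketch of IR-based feature location at the function level, assuming hypothetical function texts; real studies typically use TF-IDF or LSI weighting rather than this plain term-frequency cosine:

```python
import math
from collections import Counter

def rank_functions(issue_text, functions):
    """functions: dict mapping a function name to its source/comment text.
    Returns names ranked by cosine similarity of term-frequency vectors
    to the issue report (a simplified IR ranking)."""
    query = Counter(issue_text.lower().split())
    def cosine(doc):
        tf = Counter(doc.lower().split())
        dot = sum(query[t] * tf[t] for t in query)
        nq = math.sqrt(sum(v * v for v in query.values()))
        nd = math.sqrt(sum(v * v for v in tf.values()))
        return dot / (nq * nd) if nq and nd else 0.0
    return sorted(functions, key=lambda n: cosine(functions[n]), reverse=True)
```

Effort-based evaluation would then count, e.g., how much code a developer must read before reaching the first relevant entity in this ranking.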
11. An Analysis of Eye Movements during Browsing Multiple Search Results Pages
- Abstract
In general, most search engines display a certain number of search results on a page at one time, separating the full set of results into multiple search results pages. Therefore, lower-ranked results (e.g., the 11th-ranked result) may be displayed at the top of the next (second) page and might be more likely to be browsed by users than results displayed at the bottom of the previous (first) page. To better understand users' activities in web search, it is necessary to analyze the effect of the display positions of search results while browsing multiple search results pages. In this paper, we present the results of our analysis of users' eye movements. We conducted an experiment to measure eye movements during web search and analyzed how long users spend viewing each search result. From the analysis, we found that search results displayed at the top of the latter page were viewed for a longer time than those displayed at the bottom of the former page., HCI 2009 : Human-Computer Interaction. New Trends, July 19-24, 2009, San Diego, CA, USA
- Published
- 2023
12. Characterizing Safety of Integrated Services in Home Network System
- Abstract
This paper formalizes three kinds of safety to be satisfied by networked appliances and services in the emerging home network system (HNS). The local safety is defined by safety instructions of individual networked appliances. The global safety is specified as required properties of HNS services, which use multiple appliances simultaneously. The environment safety is derived from residential rules in home and surrounding environments. Based on the safety defined, we propose a modeling/validation framework for the safety. Specifically, we first introduce an object-oriented modeling technique to clarify the relationships among the appliances, the services and the home (environment) objects. We then employ the technique of Design by Contract with JML (Java Modeling Language), which achieves systematic safety validation through testing., ICOST 2007 : Pervasive Computing for Quality of Life Enhancement, June 21-23, 2007, Nara, Japan
- Published
- 2023
13. Abstracting mobility flows from bike-sharing systems
- Abstract
Bicycling has grown significantly in the past ten years. In some regions, the implementation of large-scale bike-sharing systems and improved cycling infrastructure are two of the factors enabling this growth. An increase in non-motorized modes of transportation makes our cities more human, decreases pollution and traffic, and improves quality of life. In many cities around the world, urban planners and policymakers are looking at cycling as a sustainable way of improving urban mobility. Although bike-sharing systems generate abundant data about their users' travel habits, most cities still rely on traditional tools and methods for planning and policy-making. Recent technological advances enable the collection and analysis of large amounts of data about urban mobility, which can serve as a solid basis for evidence-based policy-making. In this paper, we introduce a novel analytical method that can be used to process millions of bike-sharing trips and analyze bike-sharing mobility, abstracting relevant mobility flows across specific urban areas. Backed by a visualization platform, this method provides a comprehensive set of analytical tools to support public authorities in making data-driven policy and planning decisions. This paper illustrates the use of the method with a case study of the Greater Boston bike-sharing system and, as a result, presents new findings about that particular system. Finally, an assessment with expert users showed that this method and tool were considered very useful and relatively easy to use, and that the users intend to adopt the tool in the near future.
- Published
- 2022
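The flow-abstraction step can be sketched as a simple aggregation from station-level trips to zone-level flows; the station names and zoning below are hypothetical, and the full method involves considerably more (spatial clustering, visualization):

```python
from collections import Counter

def abstract_flows(trips, station_zone, min_trips=2):
    """trips: iterable of (origin_station, destination_station).
    station_zone: maps each station to an urban area (hypothetical zoning).
    Returns zone-to-zone flow counts at or above a relevance threshold."""
    flows = Counter()
    for origin, dest in trips:
        zo, zd = station_zone[origin], station_zone[dest]
        if zo != zd:               # keep inter-area mobility flows only
            flows[(zo, zd)] += 1
    return {od: n for od, n in flows.items() if n >= min_trips}
```

Thresholding by `min_trips` is one simple way to surface only the "relevant" flows mentioned in the abstract.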
14. DE-Sinc methods have almost the same convergence property as SE-Sinc methods even for a family of functions fitting the SE-Sinc methods Part I: Definite integration and function approximation
- Abstract
In this paper, the theoretical convergence rate of the trapezoidal rule combined with the double-exponential (DE) transformation is given for a class of functions for which the single-exponential (SE) transformation is suitable. It is well known that the DE transformation enables the rule to achieve a much higher rate of convergence than the SE transformation, and the convergence rate has been analyzed and justified theoretically under a proper assumption. It should be emphasized, however, that this assumption is more severe than the one for the SE transformation, and there exist examples in which the trapezoidal rule with the SE transformation achieves its usual rate whereas the rule with the DE transformation does not. Such cases have been observed numerically, but no theoretical analysis had been given thus far. This paper reveals the theoretical rate of convergence in such cases, and it turns out that the DE's rate is almost the same as, but slightly lower than, that of the SE. Using the analysis technique developed here, the theoretical convergence rate of the Sinc approximation with the DE transformation is also given for a class of functions for which the SE transformation is suitable. The result is quite similar: the convergence rate in the DE case is slightly lower than in the SE case. Numerical examples supporting these two theoretical results are also given.
- Published
- 2022
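The object of study, the trapezoidal rule after the DE (tanh-sinh) transformation x = tanh((π/2) sinh t), can be sketched for integrals over (−1, 1); the step size and truncation point below are illustrative choices, not the paper's analysis:

```python
import math

def de_trapezoid(f, n=30, h=0.1):
    """Trapezoidal rule applied after the double-exponential (tanh-sinh)
    transformation x = tanh((pi/2) sinh t), for integrals over (-1, 1).
    n and h control truncation and step size of the uniform t-grid."""
    total = 0.0
    for k in range(-n, n + 1):
        t = k * h
        x = math.tanh(0.5 * math.pi * math.sinh(t))
        # weight = derivative dx/dt of the DE transformation
        w = 0.5 * math.pi * math.cosh(t) / math.cosh(0.5 * math.pi * math.sinh(t)) ** 2
        total += f(x) * w
    return h * total
```

Because the transformed integrand decays doubly exponentially in t, even this coarse grid gives many correct digits for smooth integrands.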
15. Measurement of CP asymmetries and branching fraction ratios of B− decays to two charm mesons
- Abstract
The CP asymmetries of seven B− decays to two charm mesons are measured using data corresponding to an integrated luminosity of 9 fb−1 of proton-proton collisions collected by the LHCb experiment. Decays involving a $D^{*0}$ or $D_s^{*-}$ meson are analysed by reconstructing only the $D^0$ or $D_s^-$ decay products. This paper presents the first measurement of $\mathcal{A}_{CP}(B^- \to D_s^{*-} D^0)$ and $\mathcal{A}_{CP}(B^- \to D_s^- D^{*0})$, and the most precise measurement of the other five CP asymmetries. There is no evidence of CP violation in any of the analysed decays. Additionally, two ratios between branching fractions of selected decays are measured.
- Published
- 2023
16. OME-Zarr: a cloud-optimized bioimaging file format with international community support
- Abstract
A growing community is constructing a next-generation file format (NGFF) for bioimaging to overcome problems of scalability and heterogeneity. Organized by the Open Microscopy Environment (OME), individuals and institutes across diverse modalities facing these problems have designed a format specification process (OME-NGFF) to address these needs. This paper brings together a wide range of those community members to describe the cloud-optimized format itself—OME-Zarr—along with tools and data resources available today to increase FAIR access and remove barriers in the scientific process. The current momentum offers an opportunity to unify a key component of the bioimaging domain—the file format that underlies so many personal, institutional, and global data management and analysis tasks.
- Published
- 2023
18. On the causality paradox and the Karch-Randall braneworld as an EFT
- Abstract
Holography on cutoff surfaces can appear to be in tension with causality. For example, as argued by Omiya and Wei [1], double holography seemingly allows for superluminal signalling. In this paper we argue that the brane description of double holography should be treated as an effective theory and demonstrate that causality violations due to faster-than-light communication are not visible above the associated cutoff length scale. This suggests that end-of-the-world brane models are consistent with causality and that the apparent superluminal signalling is a UV effect. Moreover, we argue that short distance non-localities generically give rise to apparent faster-than-light propagation of signals in Anti-de Sitter space. Nonetheless, superluminal signalling indicates that the causal structure on holographic cutoff surfaces needs to be modified. We propose and study three different candidate regions that might replace the domain of dependence in the brane EFT of the Karch-Randall model. These regions are defined by unitarity on the brane, through bulk entanglement wedges and through the nice slice criterion, respectively. In all dimensions, these candidate regions exclude those parts of the domain of dependence which are affected by superluminal signalling. While all three definitions agree in two dimensions, they are different in higher dimensions.
- Published
- 2023
19. Pileup and Infrared Radiation Annihilation (PIRANHA): a paradigm for continuous jet grooming
- Abstract
Jet grooming is an important strategy for analyzing relativistic particle collisions in the presence of contaminating radiation. Most jet grooming techniques introduce hard cutoffs to remove soft radiation, leading to discontinuous behavior and associated experimental and theoretical challenges. In this paper, we introduce Pileup and Infrared Radiation Annihilation (Piranha), a paradigm for continuous jet grooming that overcomes the discontinuity and infrared sensitivity of hard-cutoff grooming procedures. We motivate Piranha from the perspective of optimal transport and the Energy Mover’s Distance and review Apollonius Subtraction and Iterated Voronoi Subtraction as examples of Piranha-style grooming. We then introduce a new tree-based implementation of Piranha, Recursive Subtraction, with reduced computational costs. Finally, we demonstrate the performance of Recursive Subtraction in mitigating sensitivity to soft distortions from hadronization and detector effects, and additive contamination from pileup and the underlying event.
- Published
- 2023
20. Cooperative distributed state estimation: resilient topologies against smart spoofers
- Abstract
A network of observers is considered in which, through asynchronous communications (with bounded delay), the observers cooperatively estimate the states of a linear time-invariant (LTI) system. In such a setting, a new type of adversary can affect the observation process by impersonating the identity of a regular node, in violation of communication authenticity. These adversaries also inherit the capabilities of Byzantine nodes, making them more powerful threats called smart spoofers. We show how asynchronous networks are vulnerable to smart spoofing attacks. In the estimation scheme considered in this paper, information flows from sets of source nodes, each of which can detect a portion of the state variables, to the other follower nodes. To avoid being misguided by the threats, the regular nodes distributively filter the extreme values received from the nodes in their neighborhood. Topological conditions based on strong robustness are proposed to guarantee convergence. Two simulation scenarios are provided to verify the results.
- Published
- 2023
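The extreme-value filtering performed by regular nodes can be sketched as a trimmed-mean update, in the spirit of MSR-type resilient consensus algorithms; this is a simplification, not the paper's exact estimation scheme:

```python
def msr_update(own, received, f=1):
    """One trimmed update step: discard the f largest and f smallest
    neighbor values (which may come from spoofers), then average the
    remainder together with the node's own value."""
    vals = sorted(received)
    kept = vals[f:len(vals) - f] if len(vals) > 2 * f else []
    pool = [own] + kept
    return sum(pool) / len(pool)
```

With enough redundant neighbors (the "strong robustness" topological condition), extreme values injected by up to f adversaries per neighborhood cannot drag the update away from the regular nodes' range.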
21. Reconstruction of interactions in the ProtoDUNE-SP detector with Pandora
- Abstract
The Pandora Software Development Kit and algorithm libraries provide pattern-recognition logic essential to the reconstruction of particle interactions in liquid argon time projection chamber detectors. Pandora is the primary event reconstruction software used at ProtoDUNE-SP, a prototype for the Deep Underground Neutrino Experiment far detector. ProtoDUNE-SP, located at CERN, is exposed to a charged-particle test beam. This paper gives an overview of the Pandora reconstruction algorithms and how they have been tailored for use at ProtoDUNE-SP. In complex events with numerous cosmic-ray and beam background particles, the simulated reconstruction and identification efficiency for triggered test-beam particles is above 80% for the majority of particle type and beam momentum combinations. Specifically, simulated 1 GeV/c charged pions and protons are correctly reconstructed and identified with efficiencies of 86.1 ± 0.6% and 84.1 ± 0.6%, respectively. The efficiencies measured for test-beam data are shown to be within 5% of those predicted by the simulation.
- Published
- 2023
22. A burden shared: the financial, psychological, and health-related consequences borne by family members and caregivers of people with cancer in India
- Abstract
In India, approximately 1.4 million new cases of cancer are recorded annually, with 26.7 million people living with cancer in 2021. Providing care for family members with cancer impacts caregivers’ health and financial resources. Effects on caregivers’ health and financial resources, understood as family and caregiver “financial toxicity” of cancer, are important to explore in the Indian context, where family members often serve as caregivers, in light of cultural attitudes towards family. This is reinforced by other structural issues such as grave disparities in socioeconomic status, barriers in access to care, and limited access to supportive care services for many patients. Effects on family caregivers’ financial resources are particularly prevalent in India given the increased dependency on out-of-pocket financing for healthcare, disparate access to insurance coverage, and limitations in public expenditure on healthcare. In this paper, we explore family and caregiver financial toxicity of cancer in the Indian context, highlighting the multiple psychosocial aspects through which these factors may play out. We suggest steps forward, including future directions in (1) health services research, (2) community-level interventions, and (3) policy changes. We underscore that multidisciplinary and multi-sectoral efforts are needed to study and address family and caregiver financial toxicity in India.
- Published
- 2023
23. A spectral metric for collider geometry
- Abstract
By quantifying the distance between two collider events, one can triangulate a metric space and reframe collider data analysis as computational geometry. One popular geometric approach is to first represent events as an energy flow on an idealized celestial sphere and then define the metric in terms of optimal transport in two dimensions. In this paper, we advocate for representing events in terms of a spectral function that encodes pairwise particle angles and products of particle energies, which enables a metric distance defined in terms of one-dimensional optimal transport. This approach has the advantage of automatically incorporating obvious isometries of the data, like rotations about the colliding beam axis. It also facilitates first-principles calculations, since there are simple closed-form expressions for optimal transport in one dimension. Up to isometries and event sets of measure zero, the spectral representation is unique, so the metric on the space of spectral functions is a metric on the space of events. At lowest order in perturbation theory in electron-positron collisions, our metric is simply the summed squared invariant masses of the two event hemispheres. Going to higher orders, we present predictions for the distribution of metric distances between jets in fixed-order and resummed perturbation theory as well as in parton-shower generators. Finally, we speculate on whether the spectral approach could furnish a useful metric on the space of quantum field theories.
- Published
- 2023
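The key computational advantage claimed above, that optimal transport has a closed form in one dimension, can be sketched for equal-weight point sets: sort both samples and pair off quantiles. This is a generic W1 sketch, not the paper's spectral representation itself:

```python
def w1_distance(xs, ys):
    """1D optimal transport (Wasserstein-1) distance between two
    equal-size, equal-weight point sets: sorting realizes the optimal
    monotone coupling, so no linear program is needed."""
    if len(xs) != len(ys):
        raise ValueError("equal-size samples assumed in this sketch")
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)
```

The contrast with two-dimensional transport on the celestial sphere, which requires solving an optimization problem per event pair, is what makes the spectral (1D) representation attractive.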
24. Identifying latent activity behaviors and lifestyles using mobility data to describe urban dynamics
- Abstract
Urbanization and its problems require an in-depth and comprehensive understanding of urban dynamics, especially the complex and diversified lifestyles in modern cities. Digitally acquired data can accurately capture complex human activity, but it lacks the interpretability of demographic data. In this paper, we study a privacy-enhanced dataset of the mobility visitation patterns of 1.2 million people to 1.1 million places in 11 metro areas in the U.S. to detect the latent mobility behaviors and lifestyles in the largest American cities. Despite the considerable complexity of mobility visitations, we found that lifestyles can be automatically decomposed into only 12 latent interpretable activity behaviors on how people combine shopping, eating, working, or using their free time. Rather than describing individuals with a single lifestyle, we find that city dwellers’ behavior is a mixture of those behaviors. Those detected latent activity behaviors are equally present across cities and cannot be fully explained by main demographic features. Finally, we find those latent behaviors are associated with dynamics like experienced income segregation, transportation, or healthy behaviors in cities, even after controlling for demographic features. Our results signal the importance of complementing traditional census data with activity behaviors to understand urban dynamics.
- Published
- 2023
25. Thermal non-line-of-sight imaging from specular and diffuse reflections
- Abstract
This paper presents a non-line-of-sight technique to estimate the position and temperature of an occluded object from a camera via reflection on a wall. Because objects with heat emit far-infrared light with respect to their temperature, positions and temperatures are estimated from reflections on a wall. A key idea is that light paths from a hidden object to the camera depend on the position of the hidden object. The position of the object is recovered from the angular distribution of the specular and diffuse reflection components, and the temperature of the heat source is recovered from the estimated position and the intensity of reflection. The effectiveness of our method is evaluated by conducting real-world experiments, showing that the position and the temperature of the hidden object can be recovered from the reflection destination of the wall by using a conventional thermal camera.
- Published
- 2023
26. On approximations of the PSD cone by a polynomial number of smaller-sized PSD cones
- Abstract
We study the problem of approximating the cone of positive semidefinite (PSD) matrices with a cone that can be described by smaller-sized PSD constraints. Specifically, we ask the question: “how closely can we approximate the set of unit-trace $n \times n$ PSD matrices, denoted by $D$, using at most $N$ PSD constraints of size $k \times k$?” In this paper, we prove lower bounds on $N$ to achieve a good approximation of $D$ by considering two constructions of an approximating set. First, we consider the unit-trace $n \times n$ symmetric matrices that are PSD when restricted to a fixed set of $k$-dimensional subspaces in $\mathbb{R}^n$. We prove that if this set is a good approximation of $D$, then the number of subspaces must be at least exponentially large in $n$ for any $k = o(n)$. Second, we show that any set $S$ that approximates $D$ within a constant approximation ratio must have superpolynomial $S_+^k$-extension complexity. To be more precise, if $S$ is a constant-factor approximation of $D$, then $S$ must have $S_+^k$-extension complexity at least $\exp(C \cdot \min\{\sqrt{n}, n/k\})$, where $C$ is some absolute constant. In addition, we show that any set $S$ with $D \subseteq S$ whose Gaussian width is at most a constant times larger than the Gaussian width of $D$ must have $S_+^k$-extension complexity at least $\exp(C \cdot \min\{n^{1/3}, \sqrt{n/k}\})$. These results imply that the cone of $n \times n$ PSD matrices cannot be approximated by a polynomial number of $k \times k$ PSD constraints for any $k = o(n / \log^2 n)$. These results generalize the recent work of Fawzi (Math Oper Res 46(4):1479–1489, 2021) on the hardness of polyhedral approximations of $S_+^n$, which corresponds to the special case with $k = 1$.
- Published
- 2023
27. Closed string theory without level-matching at the free level
- Abstract
In its traditional form, the string field in closed string field theory is constrained by the level-matching condition, which is imposed beside the action. By analogy with the similar problem for the Ramond sector, it was understood by Okawa and Sakaguchi how to lift this condition and work with an unconstrained field by introducing spurious free fields. These authors also pointed out that new backgrounds may exist thanks to a new gauge field which is trivial on flat space but can generate fluxes on a toroidal background. In this paper, we perform a complete study of the free theory at the tachyonic and massless levels with the aim of setting the stage for studying backgrounds without level-matching.
- Published
- 2023
28. Human-Informed Topology Optimization: interactive application of feature size controls
- Abstract
This paper presents a new topology optimization framework in which design decisions are made by humans and machines in collaboration. The new Human-Informed Topology Optimization approach makes topology optimization tools more accessible and enables improved design identification for the so-called 'everyday' and 'in-the-field' design situations. The framework is based on standard density-based compliance minimization; however, the design engineer can actively use their experience and expertise to locally alter the minimum feature size requirements. This is done by running a short initial solution and prompting the design engineer to evaluate its quality. The user can identify potential areas of concern based on the initial material distribution, and in these areas the minimum feature size requirement can be altered as deemed necessary. The algorithm then rigorously resolves the compliance problem using the updated filtering map, resulting in solutions that eliminate, merge, or thicken topological members of concern. The framework is demonstrated on 2D benchmark examples and the extension to 3D is shown. Its ability to achieve performance improvements with few computational resources is demonstrated on buckling and stress concentration examples.
- Published
- 2023
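The key ingredient, a minimum-feature-size filter whose radius the user can change locally, can be illustrated with a one-dimensional density filter under the usual linear "hat" weighting. This is a hedged sketch of variable-radius filtering in general, not the authors' implementation:

```python
import numpy as np

def variable_radius_filter(x, radii):
    """1D density filter with a per-element radius: each filtered
    density is a hat-weighted average over its own neighborhood, so a
    larger local radius enforces a larger local minimum feature size."""
    n = len(x)
    out = np.empty(n)
    for e in range(n):
        r = radii[e]
        idx = np.arange(max(0, e - int(r)), min(n, e + int(r) + 1))
        w = np.maximum(0.0, r - np.abs(idx - e))  # linear hat weights
        out[e] = np.sum(w * x[idx]) / np.sum(w)
    return out

x = np.zeros(20)
x[10] = 1.0  # a one-element-wide member
small = variable_radius_filter(x, np.full(20, 1.5))
large = variable_radius_filter(x, np.full(20, 4.0))
print(small[10], large[10])  # the larger radius smears the member more
```

In the framework described above, the engineer would effectively paint larger radii onto regions of concern identified in the initial solution, and the optimizer would then resolve the problem with the updated filtering map.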
29. Disjunctive cuts in Mixed-Integer Conic Optimization
- Abstract
This paper studies disjunctive cutting planes in Mixed-Integer Conic Programming. Building on conic duality, we formulate a cut-generating conic program for separating disjunctive cuts, and investigate the impact of the normalization condition on its resolution. In particular, we show that a careful selection of normalization guarantees its solvability and conic strong duality. Then, we highlight the shortcomings of separating conic-infeasible points in an outer-approximation context, and propose conic extensions to the classical lifting and monoidal strengthening procedures. Finally, we assess the computational behavior of various normalization conditions in terms of gap closed, computing time and cut sparsity. In the process, we show that our approach is competitive with the internal lift-and-project cuts of a state-of-the-art solver.
- Published
- 2023
30. Regularity of Solutions to the Muskat Equation
- Abstract
In this paper, we show that if a solution to the Muskat problem in the case of different densities and the same viscosity is sufficiently smooth, then it must be analytic except at the points where a turnover of the fluids happens.
- Published
- 2023
31. Reflected entropy in random tensor networks. Part II. A topological index from canonical purification
- Abstract
In ref. [1], we analyzed the reflected entropy (S_R) in random tensor networks motivated by its proposed duality to the entanglement wedge cross section (EW) in holographic theories, S_R = 2 EW/(4G). In this paper, we discover further details of this duality by analyzing a simple network consisting of a chain of two random tensors. This setup models a multiboundary wormhole. We show that the reflected entanglement spectrum is controlled by representation theory of the Temperley-Lieb algebra. In the semiclassical limit motivated by holography, the spectrum takes the form of a sum over superselection sectors associated to different irreducible representations of the Temperley-Lieb algebra and labelled by a topological index k ∈ ℤ>0. Each sector contributes to the reflected entropy an amount 2k EW/(4G) weighted by its probability. We provide a gravitational interpretation in terms of fixed-area, higher-genus multiboundary wormholes with genus 2k – 1 initial value slices. These wormholes appear in the gravitational description of the canonical purification. We confirm the reflected entropy holographic duality away from phase transitions. We also find important non-perturbative contributions from the novel geometries with k ≥ 2 near phase transitions, resolving the discontinuous transition in S_R. Along with analytic arguments, we provide numerical evidence for our results. We finally speculate that signatures of a non-trivial von Neumann algebra, connected to the Temperley-Lieb algebra, will emerge from a modular flowed version of reflected entropy.
- Published
- 2023
32. 3XOR Games with Perfect Commuting Operator Strategies Have Perfect Tensor Product Strategies and are Decidable in Polynomial Time
- Abstract
We consider 3XOR games with perfect commuting operator strategies. Given any 3XOR game, we show that the existence of a perfect commuting operator strategy for the game can be decided in polynomial time. Previously this problem was not known to be decidable. Our proof leads to a construction, showing a 3XOR game has a perfect commuting operator strategy iff it has a perfect tensor product strategy using a 3-qubit (8-dimensional) GHZ state. This shows that for perfect 3XOR games the advantage of a quantum strategy over a classical strategy (defined by the quantum-classical bias ratio) is bounded. This is in contrast to the general 3XOR case where the optimal quantum strategies can require high dimensional states and there is no bound on the quantum advantage. To prove these results, we first show equivalence between deciding the value of an XOR game and solving an instance of the subgroup membership problem on a class of right angled Coxeter groups. We then show, in a proof that consumes most of this paper, that the instances of this problem corresponding to 3XOR games can be solved in polynomial time.
- Published
- 2023
33. Power counting energy flow polynomials
- Abstract
Power counting is a systematic strategy for organizing collider observables and their associated theoretical calculations. In this paper, we use power counting to characterize a class of jet substructure observables called energy flow polynomials (EFPs). EFPs provide an overcomplete linear basis for infrared-and-collinear safe jet observables, but it is known that in practice, a small subset of EFPs is often sufficient for specific jet analysis tasks. By applying power counting arguments, we obtain linear relationships between EFPs that hold for quark and gluon jets to a specific order in the power counting. We test these relations in the parton shower generator Pythia, finding excellent agreement. Power counting allows us to truncate the basis of EFPs without affecting performance, which we corroborate through a study of quark-gluon tagging and regression.
- Published
- 2022
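For orientation, the simplest nontrivial EFP is the two-point energy correlator: a sum over constituent pairs weighted by energy fractions and an angular distance raised to a power β. A hedged NumPy sketch using transverse-momentum fractions and rapidity-azimuth distances (the function name and the toy two-particle jet are illustrative):

```python
import numpy as np

def efp_two_point(pts, etas, phis, beta=1.0):
    """Simplest nontrivial EFP (the single-edge 'dumbbell' graph):
    sum over all i, j of z_i * z_j * theta_ij^beta, with z the pT
    fractions and theta_ij the rapidity-azimuth pairwise distances."""
    z = np.asarray(pts, float)
    z = z / z.sum()
    deta = np.subtract.outer(etas, etas)
    dphi = np.subtract.outer(phis, phis)
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi  # wrap into (-pi, pi]
    theta = np.hypot(deta, dphi)
    return float(np.sum(np.outer(z, z) * theta ** beta))

# Toy two-particle "jet": equal pT, angular separation 0.4.
val = efp_two_point([1.0, 1.0], [0.0, 0.4], [0.0, 0.0])
print(val)  # = 2 * 0.5 * 0.5 * 0.4 = 0.2
```

Higher EFPs attach more edges and vertices to this pattern; the power counting relations described above are linear relations among such sums.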
34. Robust convex optimization: A new perspective that unifies and extends
- Abstract
Robust convex constraints are difficult to handle, since finding the worst-case scenario is equivalent to maximizing a convex function. In this paper, we propose a new approach to deal with such constraints that unifies most approaches known in the literature and extends them in a significant way. The extension either yields better solutions than those proposed in the literature or provides solutions for classes of problems unaddressed by previous approaches. Our solution is based on an extension of the Reformulation-Linearization Technique, and can be applied to general convex inequalities and general convex uncertainty sets. It generates a sequence of conservative approximations which can be used to obtain both upper and lower bounds for the optimal objective value. We illustrate the numerical benefit of our approach on a robust control and robust geometric optimization example.
- Published
- 2022
35. Single-peaked domains with designer uncertainty
- Abstract
This paper studies single-peaked domains where the designer is uncertain about the underlying alignment according to which the domain is single-peaked. The underlying alignment is common knowledge amongst agents, but preferences are private knowledge. Thus, the state of the world has both a public and private element, with the designer uninformed of both. I first posit a relevant solution concept called implementation in mixed information equilibria, which requires Nash implementation in the public information and dominant strategy implementation in the private information given the public information. I then identify necessary and sufficient conditions for social choice rules (SCRs) to be implementable. The characterization is used to identify unanimous and anonymous implementable SCRs for different forms of designer uncertainty, which basically boils down to picking the right SCRs from the large class identified by Moulin (Public Choice 35(4):437–455, 1980), and hence this result can be seen as identifying which of Moulin’s SCRs are robust to designer uncertainty.
- Published
- 2022
36. Performance enhancements for a generic conic interior point algorithm
- Abstract
In recent work, we provide computational arguments for expanding the class of proper cones recognized by conic optimization solvers, to permit simpler, smaller, more natural conic formulations. We define an exotic cone as a proper cone for which we can implement a small set of tractable (i.e. fast, numerically stable, analytic) oracles for a logarithmically homogeneous self-concordant barrier for the cone or for its dual cone. Our extensible, open-source conic interior point solver, Hypatia, allows modeling and solving any conic problem over a Cartesian product of exotic cones. In this paper, we introduce Hypatia’s interior point algorithm, which generalizes that of Skajaa and Ye (Math. Program. 150(2):391–422, 2015) by handling exotic cones without tractable primal oracles. To improve iteration count and solve time in practice, we propose four enhancements to the interior point stepping procedure of Skajaa and Ye: (1) loosening the central path proximity conditions, (2) adjusting the directions using a third order directional derivative barrier oracle, (3) performing a backtracking search on a curve, and (4) combining the prediction and centering directions. We implement 23 useful exotic cones in Hypatia. We summarize the complexity of computing oracles for these cones and show that our new third order oracle is not a bottleneck. From 37 applied examples, we generate a diverse benchmark set of 379 problems. Our computational testing shows that each stepping enhancement improves Hypatia’s iteration count and solve time. Altogether, the enhancements reduce the geometric means of iteration count and solve time by over 80% and 70% respectively.
- Published
- 2022
37. Evaluating the Security of a DNS Query Obfuscation Scheme for Private Web Surfing
- Abstract
The Domain Name System (DNS) does not provide query privacy. Query obfuscation schemes have been proposed to overcome this limitation, but, so far, they have not been evaluated in a realistic setting. In this paper we evaluate the security of a random set range query scheme in a real-world web surfing scenario. We demonstrate that the scheme does not sufficiently obfuscate characteristic query patterns, which can be used by an adversary to determine the visited websites. We also illustrate how to thwart the attack and discuss practical challenges. Our results suggest that previously published evaluations of range queries may give a false sense of the attainable security, because they do not account for any interdependencies between queries.
- Published
- 2022
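For intuition, a random-set range query can be sketched as follows: each real DNS name is hidden inside a shuffled set of randomly drawn dummies, so the resolver (or an eavesdropper) only learns the set. The hostnames, pool, and set size here are made up; this illustrates the general scheme idea, not the exact construction evaluated in the paper:

```python
import random

def obfuscated_query(real_name, dummy_pool, set_size=4, rng=random):
    """Random-set range query sketch: hide the real DNS name inside a
    shuffled set of randomly drawn dummy names. An observer sees the
    whole set; only the client knows which answer it wanted."""
    dummies = rng.sample([d for d in dummy_pool if d != real_name],
                         set_size - 1)
    query_set = dummies + [real_name]
    rng.shuffle(query_set)
    return query_set

pool = [f"host{i}.example" for i in range(100)]
qs = obfuscated_query("host7.example", pool, set_size=4,
                      rng=random.Random(1))
print(sorted(qs))
```

The attack demonstrated in the paper exploits exactly what this sketch ignores: a real page load emits a characteristic pattern of interdependent queries, and correlating the candidate sets across that pattern can isolate the visited site.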
39. DE-Sinc methods have almost the same convergence property as SE-Sinc methods even for a family of functions fitting the SE-Sinc methods Part II: Indefinite integration
- Abstract
In this paper, the theoretical convergence rate of the Sinc indefinite integration combined with the double-exponential (DE) transformation is given for a class of functions for which the single-exponential (SE) transformation is suitable. Although the DE transformation is considered as an enhanced version of the SE transformation for Sinc-related methods, the function space for which the DE transformation is suitable is smaller than that for SE, and therefore, there exist some examples such that the DE transformation is not better than the SE transformation. Even in such cases, however, some numerical observations in the literature suggest that there is almost no difference in the convergence rates of SE and DE. In fact, recently, the observations have been theoretically explained for two explicit approximation formulas: the Sinc quadrature and the Sinc approximation. The conclusion is that in such cases, the DE’s rate is slightly lower, but almost the same as that of the SE. The contribution of this study is the derivation of the same conclusion for the Sinc indefinite integration. Numerical examples that support the theoretical result are also provided.
- Published
- 2022
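The DE (tanh-sinh) transformation at the heart of these methods is easy to demonstrate for definite integration: substitute x = tanh((π/2)·sinh(t)) and apply the trapezoidal rule in t. A self-contained sketch (the step size and truncation level are ad hoc choices, and this shows the plain quadrature rather than the paper's indefinite-integration formulas):

```python
import math

def de_quadrature(f, h=0.1, level=30):
    """Tanh-sinh (double-exponential) quadrature for the integral of f
    over (-1, 1): substitute x = tanh((pi/2)*sinh(t)) and apply the
    trapezoidal rule in t, whose error decays almost exponentially in
    the number of nodes."""
    total = 0.0
    for k in range(-level, level + 1):
        t = k * h
        s = 0.5 * math.pi * math.sinh(t)
        x = math.tanh(s)
        w = 0.5 * math.pi * math.cosh(t) / math.cosh(s) ** 2  # dx/dt
        total += f(x) * w
    return h * total

# Smooth test integrand: the integral of exp(x) over (-1, 1) is e - 1/e.
approx = de_quadrature(math.exp)
print(abs(approx - (math.e - 1 / math.e)))
```

The SE (single-exponential, tanh) transformation replaces sinh(t) with t; the paper's question is how much accuracy the DE version loses on function classes tailored to SE.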
42. Testing planetary urbanisation: Siberia’s trans-scalar spatial regime of oil production
- Abstract
This paper analyses the extended urbanization of oil production in Siberia in order to test Neil Brenner and Christian Schmid's theory of planetary urbanization. According to these authors, the intensification of the urban process triggered by the consolidation of global neoliberalism since the early 1990s has transformed the planet into a situation of total urbanization. In their view, this planetary condition can be measured by the incorporation of formerly remote wilderness such as the Amazon, the oceans, the deserts or Siberia within urban circuits of production (which they label with the notion of extended urbanization). In this article we test whether Brenner and Schmid's hypothesis applies to the Siberian case. With that goal, we develop a diachronic historical and cartographic analysis which shows, first, the incorporation of Siberia into the Russian Empire; second, the consolidation of East Asia-Russia commercial circuits; and third, the conceptualization of Siberia as primarily an area for resource extraction. This analysis leads us to define three historic spatial regimes for the whole of Siberia, the last of which we study in relation to the notion of planetary urbanization. Our study compares the geographies of oil production in the region first during the Soviet period, and then, following Brenner and Schmid's chronology, after the 1990s. The article concludes that the latter phase certainly implies an unprecedented intensification of extended urbanization and the incorporation of Siberia into trans-scalar global circuits of production. Finally, in order to analyse the relation between this process and the consolidation of neoliberalism, we develop a synchronic study and mapping of the operations of two oil companies: a public one, NOC ROSNEFT, and a private one, LUKOIL. We conclude that both are similarly invested in creating the trans-scalar geographies of uneven development that characterize neoliberalism.
- Published
- 2022
43. A novel adaptive Runge–Kutta controller for nonlinear dynamical systems
- Abstract
This paper introduces a new Runge–Kutta (RK) integration-based adaptive controller for nonlinear MIMO systems that treats the control law as an ODE, requiring only limited information about the control law's structure. Adaptive parameters are adjusted via an RK predictive system model in which the Levenberg–Marquardt (LM) technique is deployed; this adjustment mechanism allows RK to be used in both the adaptive controller and the system model. Performance is evaluated on the Van de Vusse (VdV) system in diverse situations, and reasonable results are obtained for the introduced adaptation mechanism.
- Published
- 2022
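For readers unfamiliar with the building block, this is the classical fourth-order Runge–Kutta step that RK-based schemes integrate with; a textbook sketch, not the paper's controller or its adaptation mechanism:

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate y' = -y from y(0) = 1 to t = 1; the exact answer is exp(-1).
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
print(abs(y - math.exp(-1)))  # global error is O(h^4)
```

The paper's contribution is to treat the control law itself as an ODE stepped in this fashion, with the LM technique tuning the adaptive parameters against an RK predictive model of the plant.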
44. Non-Gaussianities in collider energy flux
- Abstract
The microscopic dynamics of particle collisions is imprinted into the statistical properties of asymptotic energy flux, much like the dynamics of inflation is imprinted into the cosmic microwave background. This energy flux is characterized by correlation functions ⟨ℰ(n₁) ⋯ ℰ(n_k)⟩ of energy flow operators ℰ(n). There has been significant recent progress in studying energy flux, including the calculation of multi-point correlation functions and their direct measurement inside high-energy jets at the Large Hadron Collider (LHC). In this paper, we build on these advances by defining a notion of "celestial non-gaussianity" as a ratio of the three-point function to a product of two-point functions. We show that this celestial non-gaussianity is under perturbative control within jets at the LHC, allowing us to cleanly access the non-gaussian interactions of quarks and gluons. We find good agreement between perturbative calculations of the non-gaussianity and a charged-particle-based analysis using CMS Open Data, and we observe a strong non-gaussianity peaked in the "flattened triangle" regime. The ability to robustly study three-point correlations is a significant step in advancing our understanding of jet substructure at the LHC. We anticipate that the celestial non-gaussianity, and its generalizations, will play an important role in the development of higher-order parton shower simulations and in the hunt for ever more subtle signals of potential new physics within jets.
- Published
- 2022
45. Streaming readout for next generation electron scattering experiments
- Abstract
Current and future experiments at the high-intensity frontier are expected to produce an enormous amount of data that needs to be collected and stored for offline analysis. Thanks to the continuous progress in computing and networking technology, it is now possible to replace the standard ‘triggered’ data acquisition systems with a new, simplified and outperforming scheme. ‘Streaming readout’ (SRO) DAQ aims to replace the hardware-based trigger with a much more powerful and flexible software-based one, that considers the whole detector information for efficient real-time data tagging and selection. Considering the crucial role of DAQ in an experiment, validation with on-field tests is required to demonstrate SRO performance. In this paper, we report results of the on-beam validation of the Jefferson Lab SRO framework. We exposed different detectors (PbWO-based electromagnetic calorimeters and a plastic scintillator hodoscope) to the Hall-D electron-positron secondary beam and to the Hall-B production electron beam, with increasingly complex experimental conditions. By comparing the data collected with the SRO system against the traditional DAQ, we demonstrate that the SRO performs as expected. Furthermore, we provide evidence of its superiority in implementing sophisticated AI-supported algorithms for real-time data analysis and reconstruction.
- Published
- 2022
46. Search for long-lived particles decaying into muon pairs in proton-proton collisions at √s = 13 TeV collected with a dedicated high-rate data stream
- Abstract
A search for long-lived particles decaying into muon pairs is performed using proton-proton collisions at a center-of-mass energy of 13 TeV, collected by the CMS experiment at the LHC in 2017 and 2018, corresponding to an integrated luminosity of 101 fb−1. The data sets used in this search were collected with a dedicated dimuon trigger stream with low transverse momentum thresholds, recorded at high rate by retaining a reduced amount of information, in order to explore otherwise inaccessible phase space at low dimuon mass and nonzero displacement from the primary interaction vertex. No significant excess of events beyond the standard model expectation is found. Upper limits on branching fractions at 95% confidence level are set on a wide range of mass and lifetime hypotheses in beyond the standard model frameworks with the Higgs boson decaying into a pair of long-lived dark photons, or with a long-lived scalar resonance arising from a decay of a b hadron. The limits are the most stringent to date for substantial regions of the parameter space. These results can be also used to constrain models of displaced dimuons that are not explicitly considered in this paper.
- Published
- 2022
47. Sedimentary stratigraphy of Lake Chalco (Central Mexico) during its formative stages
- Abstract
Lake Chalco lies south of the Basin of Mexico and has been the subject of studies related to Late Quaternary climate variability. In 2016, the International Continental Scientific Drilling Program "MexiDrill Project" recovered a 520-m sediment record from Lake Chalco. Magnetic susceptibility measurements revealed substantial changes in sediment physical properties between 343 and 285 m depth, suggesting changes in composition associated with fluctuations in the depositional environment. We targeted sediments in the 343–285 m interval for high-resolution facies analysis, to develop a model of Lake Chalco formation. We identified three facies associations, using sediment composition, texture, mineralogy and micro-morphological characteristics: (1) detrital facies, consisting of laminated silt, massive sand, stratified silty sand, clast-supported gravel and matrix-supported gravel; (2) biogenic facies, which include diatom ooze and bivalve coquina; and (3) volcaniclastic facies, represented by clast-supported pumice deposits. We propose that formation of Lake Chalco occurred in four stages, which we identified by changes in sediment characteristics. The first stage was an alluvial delta environment, in which debris and hyper-concentrated flows were the main sediment transport agents. The second was characterized by turbulent flows in a fluvial deltaic environment, which alternated with laminar flows associated with floodplains. The third stage was a time of fluvio-lacustrine transition in the basin, with formation of the previously identified Paleo-Chalco-I Lake, in response to wet conditions. During the fourth stage, a deep eutrophic lake formed (Paleo-Chalco-II), with an origin that appears to have been related to regional volcanism. Our working age-depth model indicates establishment of the lake at ca. 400 ± 46 ka. This paper presents the only available record of the transition from alluvial to lacustrine sedimentation of Lake Chalco.
- Published
- 2022
48. Linear regression with partially mismatched data: local search with theoretical guarantees
- Abstract
Linear regression is a fundamental modeling tool in statistics and related fields. In this paper, we study an important variant of linear regression in which the predictor-response pairs are partially mismatched. We use an optimization formulation to simultaneously learn the underlying regression coefficients and the permutation corresponding to the mismatches. The combinatorial structure of the problem leads to computational challenges. We propose and study a simple greedy local search algorithm for this optimization problem that enjoys strong theoretical guarantees and appealing computational performance. We prove that under a suitable scaling of the number of mismatched pairs compared to the number of samples and features, and certain assumptions on problem data; our local search algorithm converges to a nearly-optimal solution at a linear rate. In particular, in the noiseless case, our algorithm converges to the global optimal solution with a linear convergence rate. Based on this result, we prove an upper bound for the estimation error of the parameter. We also propose an approximate local search step that allows us to scale our approach to much larger instances. We conduct numerical experiments to gather further insights into our theoretical results, and show promising performance gains compared to existing approaches.
- Published
- 2022
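The flavor of the local search can be conveyed with a toy version: alternate a least-squares refit with a swap of two responses whenever the swap lowers the residual sum of squares. This is a simplified sketch of the alternating idea under a planted noiseless mismatch, not the paper's algorithm, its approximate step, or its guarantees:

```python
import numpy as np

def local_search_mismatch(X, y, max_swaps=100):
    """Toy greedy local search for regression with mismatched pairs:
    refit least squares, perform the first swap of two responses that
    lowers the residual sum of squares, and repeat until no swap helps."""
    y = np.asarray(y, float).copy()
    for _ in range(max_swaps):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        fit = X @ beta
        swapped = False
        for i in range(len(y)):
            for j in range(i + 1, len(y)):
                old = (y[i] - fit[i]) ** 2 + (y[j] - fit[j]) ** 2
                new = (y[j] - fit[i]) ** 2 + (y[i] - fit[j]) ** 2
                if new < old - 1e-9:  # swap strictly reduces the RSS
                    y[i], y[j] = y[j], y[i]
                    swapped = True
                    break
            if swapped:
                break
        if not swapped:
            break
    return np.linalg.lstsq(X, y, rcond=None)[0], y

rng = np.random.default_rng(3)
X = np.vstack([np.eye(2), rng.standard_normal((38, 2))])
beta_true = np.array([2.0, -1.0])
y = X @ beta_true        # noiseless responses
y[[0, 1]] = y[[1, 0]]    # mismatch one pair of rows
beta_hat, y_hat = local_search_mismatch(X, y)
print(np.allclose(beta_hat, beta_true))
```

In this planted noiseless instance the corrective swap is found immediately and the exact coefficients are recovered, mirroring the paper's noiseless result that local search converges to the global optimum.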
49. A binned likelihood for stochastic models
- Abstract
Metrics of model goodness-of-fit, model comparison, and model parameter estimation are the main categories of statistical problems in science. Bayesian and frequentist methods that address these questions often rely on a likelihood function, which is the key ingredient in order to assess the plausibility of model parameters given observed data. In some complex systems or experimental setups, predicting the outcome of a model cannot be done analytically, and Monte Carlo techniques are used. In this paper, we present a new analytic likelihood that takes into account Monte Carlo uncertainties, appropriate for use in the large and small sample size limits. Our formulation performs better than semi-analytic methods, prevents strong claims on biased statements, and provides improved coverage properties compared to available methods.
- Published
- 2022
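As a baseline for what the paper improves on, the standard binned Poisson log-likelihood, which treats the Monte Carlo expectations as exact, looks like this (function and variable names are ours):

```python
import math

def binned_poisson_loglike(observed, expected):
    """Standard binned Poisson log-likelihood, up to the constant
    -log(n_i!) terms: sum over bins of n_i * log(mu_i) - mu_i. The
    paper's contribution is to go beyond this baseline by folding the
    Monte Carlo uncertainty on each mu_i into the likelihood itself."""
    return sum(n * math.log(mu) - mu
               for n, mu in zip(observed, expected))

obs = [3, 7, 5]
ll_best = binned_poisson_loglike(obs, [3.0, 7.0, 5.0])  # mu_i = n_i
ll_flat = binned_poisson_loglike(obs, [5.0, 5.0, 5.0])
print(ll_best > ll_flat)  # per-bin likelihood peaks at mu_i = n_i
```

When the expectations come from finite Monte Carlo samples, treating each mu_i as exact understates the uncertainty, which is the failure mode the proposed analytic likelihood addresses.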
50. Scintillation light detection in the 6-m drift-length ProtoDUNE Dual Phase liquid argon TPC
- Abstract
DUNE is a dual-site experiment for long-baseline neutrino oscillation studies, neutrino astrophysics and nucleon decay searches. ProtoDUNE Dual Phase (DP) is a 6 × 6 × 6 m³ liquid argon time-projection chamber (LArTPC) that recorded cosmic-muon data at the CERN Neutrino Platform in 2019–2020 as a prototype of the DUNE Far Detector. Charged particles propagating through the LArTPC produce ionization and scintillation light. The scintillation light signal in these detectors can provide the trigger for non-beam events. In addition, it adds precise timing capabilities and improves the calorimetry measurements. In ProtoDUNE-DP, scintillation and electroluminescence light produced by cosmic muons in the LArTPC is collected by photomultiplier tubes placed up to 7 m away from the ionizing track. In this paper, the ProtoDUNE-DP photon detection system performance is evaluated with a particular focus on the different wavelength shifters, such as PEN and TPB, and the use of Xe-doped LAr, considering its future use in giant LArTPCs. The scintillation light production and propagation processes are analyzed and a comparison of simulation to data is performed, improving understanding of the liquid argon properties.
- Published
- 2022