7,399 results for "POINT set theory"
Search Results
152. Nonexpansiveness and Fractal Maps in Hilbert Spaces.
- Author
- Navascués, María A.
- Subjects
- *HILBERT space, *NONEXPANSIVE mappings, *POINT set theory
- Abstract
Picard iteration is the basis of a great number of numerical methods and applications of mathematics. However, it has been known since the 1950s that this method of fixed-point approximation may fail to converge for nonexpansive mappings. In this paper, an extension of the concept of nonexpansiveness is first presented. Unlike the classical case, the new maps may be discontinuous, adding an element of generality to the model. Some properties of the set of fixed points of the new maps are studied. Afterwards, two iterative methods of fixed-point approximation are analyzed, in the frameworks of b-metric and Hilbert spaces. In the latter case, it is proved that the symmetrically averaged iterative procedures perform well in the sense of convergence with the least number of operations at each step. As an application, the second part of the article is devoted to the study of fractal mappings on Hilbert spaces defined by means of nonexpansive operators. The paper considers fractal mappings coming from φ-contractions as well. In particular, the new operators are useful for defining an extension of the concept of α-fractal function, enlarging its scope to more abstract spaces and procedures. The fractal maps studied here have quasi-symmetry, in the sense that their graphs are composed of transformed copies of themselves. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
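The classical failure of Picard iteration for nonexpansive maps, and the remedy by averaging, can be seen in a small experiment. The sketch below is illustrative only: a plane rotation serves as the nonexpansive map and a plain Krasnoselskii–Mann average as the symmetric procedure; it does not implement the paper's extended maps.

```python
import math

def rotate(p, theta=math.pi / 2):
    """Rotation about the origin: an isometry, hence nonexpansive, with unique fixed point (0, 0)."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

def picard(T, p, n):
    for _ in range(n):
        p = T(p)
    return p

def averaged(T, p, n):
    # Krasnoselskii-Mann step with weight 1/2: x_{k+1} = (x_k + T x_k) / 2.
    for _ in range(n):
        q = T(p)
        p = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    return p

def norm(p):
    return math.hypot(p[0], p[1])

p0 = (1.0, 0.0)
print(norm(picard(rotate, p0, 1000)))   # stays at 1.0: Picard iterates only circle the fixed point
print(norm(averaged(rotate, p0, 200)))  # ~0: the averaged iterates converge to it
```

The averaged map has contraction factor cos(θ/2) < 1 in this example, which is why the second sequence converges while the first does not.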
153. Symmetry-Enhanced Fuzzy Logic Analysis in Parallel and Cross-Road Scenarios: Optimizing Direction and Distance Weights for Map Matching.
- Author
- Zhou, Weicheng, Ge, Huilin, and Ashraf, Muhammad Awais
- Subjects
- *FUZZY logic, *MEMBERSHIP functions (Fuzzy logic), *FUZZY algorithms, *POINT set theory, *INFORMATION design, *ACCURACY of information
- Abstract
This study addresses the challenges of setting segmentation points in the membership function and determining appropriate weights for different types of information within a fuzzy logic algorithm for map matching. We use linear fitting to derive an empirical formula for setting segmentation points for the information membership function. Furthermore, we evaluate the effects of various weights for direction and distance information in parallel and cross-road scenarios. The research identified the optimal distance that achieves the highest matching accuracy and provided insights into how the weights of connection, direction, and distance information affect this accuracy. The simulations confirmed the critical importance of precise segmentation point settings and weight determinations in enhancing the accuracy of fuzzy logic algorithms for map matching. The results underscore the potency of our tailored parameter-setting strategy and contribute to knowledge of symmetry, offering practical insights for implementing fuzzy logic in map matching with a particular emphasis on the principle of symmetry in algorithm design and information processing. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
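The weighted combination of direction and distance memberships described in the abstract can be sketched generically. The triangular membership, its segmentation points, and the weights below are illustrative placeholders of ours, not the parameters derived in the paper.

```python
def tri_membership(x, a, b, c):
    """Triangular membership with feet a, c and peak b; the segmentation points
    (a, b, c) are what the abstract's empirical formula would tune."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def match_score(heading_err_deg, dist_m, w_dir=0.4, w_dist=0.6):
    """Weighted fuzzy score for one candidate road segment; the weights here
    are illustrative, not the paper's optimized values."""
    mu_dir = tri_membership(heading_err_deg, -90.0, 0.0, 90.0)
    mu_dist = tri_membership(dist_m, -30.0, 0.0, 30.0)
    return w_dir * mu_dir + w_dist * mu_dist

print(match_score(0.0, 0.0))    # 1.0: perfectly aligned, zero offset
print(match_score(45.0, 15.0))  # 0.5: partial membership on both criteria
```

The candidate segment with the highest score would be chosen as the match; the paper's contribution is precisely how to set the segmentation points and weights in parallel- and cross-road scenarios.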
154. A note on a class of Fourier transforms.
- Author
- TWOMEY, J. B.
- Subjects
- *FOURIER transforms, *POINT set theory, *INTEGRALS
- Abstract
We consider functions $f \in L^2(\mathbb{R}^n)$ for which $\int_{\mathbb{R}^n} |\hat{f}(t)|^2 (1 + \log^+ |t|)^{2\beta}\,dt < \infty$, $\beta > 0$, where $\hat{f}$ is the Fourier transform of $f$, and we identify a kernel $K_\beta$ such that $f$ satisfies this integral condition if, and only if, $f(x) = (K_\beta * F)(x) = \int_{\mathbb{R}^n} K_\beta(x - t) F(t)\,dt$ for some function $F \in L^2(\mathbb{R}^n)$. We also address the question of 'Fourier inversion' for this class by showing that certain Bochner–Riesz means of the transforms of $f = K_\beta * F$ converge to $f$ outside small exceptional sets of points in $\mathbb{R}^n$ of capacity zero. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
155. Constrained least square progressive and iterative approximation (CLSPIA) for B-spline curve and surface fitting.
- Author
- Chang, Qingjun, Ma, Weiyin, and Deng, Chongyang
- Subjects
- *LEAST squares, *LAGRANGE multiplier, *CURVE fitting, *COMPUTATIONAL complexity, *LINEAR systems, *INTERPOLATION, *POINT set theory
- Abstract
Combining the Lagrange multiplier method, the Uzawa algorithm, and least square progressive and iterative approximation (LSPIA), we propose constrained least square progressive and iterative approximation (CLSPIA) to solve the problem of B-spline curve and surface fitting with constraints on data interpolation, i.e., computing the control points of a B-spline curve or surface that interpolates one set of input points while approximating another set of given points. Compared with solving the linear system directly, CLSPIA has several advantages, as it inherits all the nice properties of LSPIA. Because of the data-reuse property of LSPIA, CLSPIA saves a great amount of computation. Using the local property of LSPIA, CLSPIA can produce shape-preserving fitting curves. CLSPIA is efficient for fitting large-scale data sets because its computational complexity is linear in the size of the input data. Numerous numerical examples in this paper show the efficiency and effectiveness of CLSPIA. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
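The LSPIA iteration that CLSPIA builds on is easy to sketch. The toy below (pure Python, with a linear hat basis standing in for the B-spline bases used in real fitting; all names are ours) runs the progressive update c_{k+1} = c_k + μ·Bᵀ(Q − B·c_k), which converges to the least-squares fit for 0 < μ < 2/λ_max(BᵀB).

```python
def transpose(M):
    return [list(col) for col in zip(*M)]

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def hat_basis(t):
    # Linear B-spline (hat) basis on knots 0, 0.5, 1 -- a stand-in for the
    # higher-degree B-spline bases used in actual curve fitting.
    return [max(0.0, 1 - 2 * t), 1 - abs(2 * t - 1), max(0.0, 2 * t - 1)]

def lspia(B, Q, mu=0.5, iters=2000):
    """LSPIA-style progressive update c_{k+1} = c_k + mu * B^T (Q - B c_k),
    converging to the least-squares fit for 0 < mu < 2 / lambda_max(B^T B)."""
    Bt = transpose(B)
    c = [0.0] * len(B[0])
    for _ in range(iters):
        r = [q - bc for q, bc in zip(Q, matvec(B, c))]
        c = [ci + mu * si for ci, si in zip(c, matvec(Bt, r))]
    return c

ts = [i / 5 for i in range(6)]
B = [hat_basis(t) for t in ts]     # 6 x 3 collocation matrix
Q = [t * t for t in ts]            # scalar data values to fit
c = lspia(B, Q)
grad = matvec(transpose(B), [q - bc for q, bc in zip(Q, matvec(B, c))])
print(max(abs(g) for g in grad))   # ~0: the normal equations are satisfied
```

The "data reuse" property mentioned in the abstract comes from the fact that B and Q stay fixed across iterations; only the control coefficients c are updated. CLSPIA additionally enforces interpolation constraints via Lagrange multipliers, which this sketch omits.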
156. Latent diffusion transformer for point cloud generation.
- Author
- Ji, Junzhong, Zhao, Runfeng, and Lei, Minglong
- Subjects
- *POINT cloud, *TRANSFORMER models, *OPTICAL scanners, *FEATURE extraction, *POINT processes, *POINT set theory
- Abstract
Diffusion models have recently been applied successfully to point cloud generation tasks. The main idea is to use a forward process that progressively adds noise to point clouds and a reverse process that generates point clouds by denoising. However, since point cloud data is high-dimensional and exhibits complex structures, it is challenging to adequately capture the surface distribution of point clouds. Moreover, point cloud generation methods often resort to sampling methods and local operations to extract features, which inevitably ignores the global structures and overall shapes of point clouds. To address these limitations, we propose a latent diffusion model based on Transformers for point cloud generation. Instead of directly building a diffusion process on the points, we first propose a latent compressor to convert original point clouds into a set of latent tokens before feeding them into the diffusion model. Converting point clouds into latent tokens not only improves expressiveness but also offers better flexibility, since the tokens can adapt to various downstream tasks. We carefully design the latent compressor around an attention-based auto-encoder architecture to capture global structures in point clouds. We then use transformers as the backbone of the latent diffusion module to maintain those global structures. The powerful feature extraction ability of transformers ensures the high quality and smoothness of the generated point clouds. Experiments show that our method achieves superior performance in both unconditional generation on ShapeNet and multi-modal point cloud completion on ShapeNet-ViPC. Our code and samples are publicly available at https://github.com/Negai-98/LDT. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
157. SOME FRACTALS RELATED TO PARTIAL MAXIMAL DIGITS IN LÜROTH EXPANSION.
- Author
- DENG, JIANG, MA, JIHUA, SONG, KUNKUN, and XIE, ZHONGQUAN
- Subjects
- *FRACTAL dimensions, *POINT set theory, *FRACTALS
- Abstract
Let $[d_1(x), d_2(x), \ldots, d_n(x), \ldots]$ be the Lüroth expansion of $x \in (0,1]$, and let $L_n(x) = \max\{d_1(x), \ldots, d_n(x)\}$. It is shown that for any $\alpha \geq 0$, the level set $\{x \in (0,1] : \lim_{n\to\infty} \frac{L_n(x)\log\log n}{n} = \alpha\}$ has Hausdorff dimension one. Certain sets of points for which the sequence $\{L_n(x)\}_{n\geq 1}$ grows more rapidly are also investigated. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
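The digits d_n(x) and partial maxima L_n(x) from this abstract can be computed exactly with rational arithmetic. A small sketch using the standard Lüroth digit map d = ⌊1/x⌋ + 1 and shift T(x) = d(d−1)x − (d−1); the function names are ours.

```python
import math
from fractions import Fraction

def luroth_digits(x, n):
    """First n digits of the Luroth expansion of x in (0, 1]:
    d_k = floor(1/x_k) + 1 (so d_k >= 2), x_{k+1} = d_k (d_k - 1) x_k - (d_k - 1)."""
    digits = []
    for _ in range(n):
        d = math.floor(1 / x) + 1
        digits.append(d)
        x = d * (d - 1) * x - (d - 1)
    return digits

def luroth_value(digits):
    """Value of a finite digit sequence:
    x = sum over k of 1 / (d_1(d_1 - 1) ... d_{k-1}(d_{k-1} - 1) d_k)."""
    total, weight = Fraction(0), Fraction(1)
    for d in digits:
        total += weight / d
        weight /= d * (d - 1)
    return total

def partial_max(digits):
    """The partial maximal digits L_n = max(d_1, ..., d_n) from the abstract."""
    out, m = [], 0
    for d in digits:
        m = max(m, d)
        out.append(m)
    return out

x = Fraction(7, 10)
ds = luroth_digits(x, 12)
print(ds[:4], partial_max(ds)[:4])   # [2, 3, 3, 3] [2, 3, 3, 3]
print(float(x - luroth_value(ds)))   # truncation error shrinks geometrically
```

Using `Fraction` keeps the shift exact; with floats the digits quickly become unreliable because the map expands errors by a factor of d(d−1) per step.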
158. SC-CNN: LiDAR point cloud filtering CNN under slope and copula correlation constraint.
- Author
- Chen, Ruixing, Wu, Jun, Zhao, Xuemei, Luo, Ying, and Xu, Gang
- Subjects
- *POINT cloud, *LIDAR, *INTRACLASS correlation, *POINT set theory, *INFORMATION networks, *CONFIDENCE intervals
- Abstract
To tackle the lack of semantic consistency between ground and non-ground points, as well as the damage to terrain boundary information during network downsampling, we developed a Semantic Consistency-Convolutional Neural Network (SC-CNN) to improve the precision of point cloud filtering under complex terrain conditions. The novel aspects include: (1) farthest point sampling (FPS) with slope constraints, which enhances terrain contour preservation through adaptive subblock partitioning and slope-based sampling; (2) intra-class feature enhancement via copula correlation and attention mechanisms, improving the network's ability to distinguish between ground and non-ground points by focusing on intra-class feature consistency and inter-class differences; and (3) filtering error correction using copula correlation and confidence intervals, refining accuracy by adjusting for negatively correlated point sets. Tested on the ISPRS and 3D Vaihingen datasets, SC-CNN notably outperformed existing methods, reducing the mean total error (MT.E) by 0.17% and 1.93%, respectively, thereby significantly enhancing point-cloud filtering accuracy under complex terrain conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
159. A note on the 2-colored rectilinear crossing number of random point sets in the unit square.
- Author
- Cabello, S., Czabarka, É., Fabila-Monroy, R., Higashikawa, Y., Seidel, R., Székely, L., Tkadlec, J., and Wesolek, A.
- Subjects
- *RANDOM numbers, *RANDOM sets, *POINT set theory, *RAMSEY numbers
- Abstract
Let S be a set of four points chosen independently and uniformly at random from a square. Join every pair of points of S with a straight line segment. Color these edges red if they have positive slope and blue otherwise. We show that the probability that S defines a pair of crossing edges of the same color is equal to $1/4$. This is connected to a recent result of Aichholzer et al. [1], who showed that by 2-coloring the edges of a geometric graph and counting monochromatic crossings instead of crossings, the number of crossings can be more than halved. Our result shows that for the described random drawings, there is a coloring of the edges such that the number of monochromatic crossings is in expectation $\frac{1}{2} - \frac{7}{50}$ of the total number of crossings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
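The stated probability of 1/4 is easy to corroborate by Monte Carlo. A hedged sketch with our own crossing and slope-sign tests (not the authors' proof): four points are in convex position exactly when one of the three perfect matchings crosses, and the crossing pair is then the pair of diagonals.

```python
import random

def orient_sign(a, b, c):
    v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (v > 0) - (v < 0)

def crosses(p, q, r, s):
    # Proper crossing of segments pq and rs (degenerate cases have measure zero).
    return (orient_sign(p, q, r) * orient_sign(p, q, s) < 0 and
            orient_sign(r, s, p) * orient_sign(r, s, q) < 0)

def positive_slope(u, v):
    return (v[1] - u[1]) * (v[0] - u[0]) > 0  # red edge; blue otherwise

def same_color_crossing(a, b, c, d):
    """Of the 3 perfect matchings of 4 points, at most one crosses (exactly one
    when the points are in convex position); report whether that crossing pair
    is monochromatic under the red/blue slope coloring."""
    for (p, q), (r, s) in (((a, b), (c, d)), ((a, c), (b, d)), ((a, d), (b, c))):
        if crosses(p, q, r, s):
            return positive_slope(p, q) == positive_slope(r, s)
    return False

rng = random.Random(1)
n = 100_000
hits = sum(same_color_crossing(*[(rng.random(), rng.random()) for _ in range(4)])
           for _ in range(n))
print(hits / n)  # ~0.25, matching the paper's exact value of 1/4
```

With 100,000 samples the standard error is about 0.0014, so the estimate should sit well within 0.01 of 1/4.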
160. No selection lemma for empty triangles.
- Author
- Fabila-Monroy, R., Hidalgo-Toscano, C., Perz, D., and Vogtenhuber, B.
- Subjects
- *TRIANGLES, *ZERO (The number), *REAL numbers, *COMBINATORIAL geometry, *POINT set theory
- Abstract
Let P be a set of n points in general position in the plane. The Second Selection Lemma states that for any family of $\Theta(n^3)$ triangles spanned by P, there exists a point of the plane that lies in a constant fraction of them. For families of $\Theta(n^{3-\alpha})$ triangles, with $0 \le \alpha \le 1$, there might not be a point in more than $\Theta(n^{3-2\alpha})$ of those triangles. An empty triangle of P is a triangle spanned by P not containing any point of P in its interior. Bárány conjectured that there exists an edge spanned by P that is incident to a super-constant number of empty triangles of P. The number of empty triangles of P might be as low as $\Theta(n^2)$; in such a case, on average, every edge spanned by P is incident to a constant number of empty triangles. The conjecture of Bárány suggests that for the class of empty triangles the above upper bound might not hold. In this paper we show that, somewhat surprisingly, the above upper bound does in fact hold for empty triangles. Specifically, we show that for any integer n and real number $0 \le \alpha \le 1$ there exists a point set of size n with $\Theta(n^{3-\alpha})$ empty triangles such that any point of the plane is only in $O(n^{3-2\alpha})$ empty triangles. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
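A brute-force count makes the definition of an empty triangle concrete. This O(n⁴) sketch (our own helper names; general position assumed) counts triangles spanned by the set with no other point strictly inside.

```python
from itertools import combinations

def orient(a, b, c):
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def strictly_inside(p, a, b, c):
    # p lies strictly inside triangle abc iff all three orientations share a sign.
    s1, s2, s3 = orient(a, b, p), orient(b, c, p), orient(c, a, p)
    return s1 != 0 and s2 != 0 and s3 != 0 and (s1 > 0) == (s2 > 0) == (s3 > 0)

def empty_triangles(pts):
    """Count triangles spanned by pts with no other point of pts strictly inside."""
    return sum(
        1
        for a, b, c in combinations(pts, 3)
        if not any(strictly_inside(p, a, b, c)
                   for p in pts if p not in (a, b, c))
    )

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(empty_triangles(square))                 # 4: all triangles of a convex quadrilateral are empty
print(empty_triangles(square + [(1.7, 2.0)]))  # 8: two corner triangles now contain the inner point
```

With the interior point added there are C(5,3) = 10 triangles in total: the two corner triangles containing the inner point are non-empty, while the remaining eight are empty.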
161. A Simple and Efficient Method for Accelerating Construction of the Gap-Greedy Spanner.
- Author
- Salami, Hosein and Nouri-Baygi, Mostafa
- Subjects
- *GREEDY algorithms, *WRENCHES, *COMPLETE graphs, *POINT set theory
- Abstract
Let G be the complete Euclidean graph on a set of points embedded in the plane. Given a constant $t > 1$, a spanning subgraph $G'$ of G is said to be a $t$-spanner, or simply a spanner, if for any pair of nodes $p, q$ in G there exists a $t$-path in $G'$, i.e., a path between p and q whose length is at most t times their distance in G. The gap-greedy spanner, proposed by Arya and Smid, is a lightweight, bounded-degree spanner in which a pair of points $p, q$ is guaranteed to have a $t$-path if at least one edge satisfying certain special criteria exists in the spanner. Existing algorithms for computing the gap-greedy spanner determine the existence of such an edge for each pair of points by examining the edges of the spanner, which takes $O(n)$ time; in this paper, we present a method by which this task can be done in $O(1)$ time. Using the proposed method and well-separated pair decomposition, we propose a linear-space algorithm that computes the gap-greedy spanner in $O(n^2)$ time. Bakhshesh and Farshi proposed how to use the well-separated pair decomposition to compute this spanner; however, using an example, we show that one of the algorithms they proposed for this purpose is incorrect. We performed various experiments to measure the running time and memory usage of the algorithms for computing this spanner. The results show that the proposed method, without a significant effect on memory consumption compared to previous algorithms, leads to a significant acceleration in the construction time of this spanner. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
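The t-path invariant that the gap-greedy spanner maintains is easiest to see in the classic path-greedy algorithm, which checks the invariant directly with a shortest-path query. The sketch below implements that simpler relative (often attributed to Althöfer et al.), not Arya and Smid's gap criterion or the paper's O(1) test.

```python
import heapq, math, random

def greedy_spanner(points, t=2.0):
    """Classic path-greedy t-spanner: scan pairs by increasing distance and add
    an edge only if the current graph has no t-path between its endpoints.
    The gap-greedy spanner replaces this shortest-path query with an angular
    gap criterion; this sketch only illustrates the t-path invariant."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    adj = {i: [] for i in range(n)}

    def graph_dist(src, dst, bound):
        # Dijkstra, abandoning partial paths longer than the bound.
        best = {src: 0.0}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == dst:
                return d
            if d > best.get(u, math.inf):
                continue
            for v, w in adj[u]:
                nd = d + w
                if nd <= bound and nd < best.get(v, math.inf):
                    best[v] = nd
                    heapq.heappush(heap, (nd, v))
        return math.inf

    edges = []
    for i, j in sorted(((i, j) for i in range(n) for j in range(i + 1, n)),
                       key=lambda e: dist(*e)):
        d = dist(i, j)
        if graph_dist(i, j, t * d) > t * d:
            adj[i].append((j, d))
            adj[j].append((i, d))
            edges.append((i, j))
    return edges

rng = random.Random(7)
pts = [(rng.random(), rng.random()) for _ in range(30)]
E = greedy_spanner(pts, t=2.0)
print(len(E), "edges, versus", 30 * 29 // 2, "in the complete graph")
```

By construction every pair either already had a t-path when it was examined (and later edge insertions only shorten graph distances) or received a direct edge, so the output is always a valid t-spanner.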
162. BIM-based task planning method for wheeled-legged rebar binding robot.
- Author
- Cao, Siyi, Duan, Hao, Guo, Shuai, Wu, Jiajun, Ai, Tengfeng, and Jiang, Haili
- Subjects
- *BUILDING sites, *BUILDING information modeling, *ROBOTS, *MOBILE robots, *CONSTRUCTION planning, *GENETIC algorithms, *POINT set theory
- Abstract
Building Information Modeling (BIM) data has the advantages of high quality, operability, and parametric capabilities. In recent years, the application of BIM in construction planning for building robots has been increasing. Task planning for rebar binding is an important aspect of construction, and there is a growing trend of using robots to improve the accuracy and efficiency of binding operations. However, the complex and dynamic construction site environment poses significant challenges for task planning in rebar binding. Therefore, this paper proposes a new BIM-based task planning method for a wheeled-legged rebar binding robot, which can quickly generate an optimal task sequence. First, an initial set of rebar intersection points is generated from BIM data. Then, considering the execution dead zone of the robot and the variable workspace of the binding mechanism, the rebar intersection point set is decomposed and filtered using morphological opening operations. Finally, a constrained genetic algorithm is employed to order the set of task points. Simulation and real-world experiments were conducted with a wheeled-legged rebar binding robot as the research subject. The results demonstrate that the average binding efficiency is approximately 1500 points per hour, 1.97 times that of manual operation. This validates the applicability and effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
163. Blocking sets, minimal codes and trifferent codes.
- Author
- Bishnoi, Anurag, D'haeseleer, Jozefien, Gijswijt, Dion, and Potukuchi, Aditya
- Subjects
- *LINEAR codes, *FINITE fields, *PROJECTIVE spaces, *POINT set theory
- Abstract
We prove new upper bounds on the smallest size of affine blocking sets, that is, sets of points in a finite affine space that intersect every affine subspace of a fixed codimension. We show an equivalence between affine blocking sets with respect to codimension-2 subspaces that are generated by taking a union of lines through the origin, and strong blocking sets in the corresponding projective space, which in turn are equivalent to minimal codes. Using this equivalence, we improve the current best upper bounds on the smallest size of a strong blocking set in finite projective spaces over fields of size at least 3. Furthermore, using coding theoretic techniques, we improve the current best lower bounds on a strong blocking set. Our main motivation for these new bounds is their application to trifferent codes, which are sets of ternary codes of length $n$ with the property that for any three distinct codewords there is a coordinate where they all have distinct values. Over the finite field $\mathbb{F}_3$, we prove that minimal codes are equivalent to linear trifferent codes. Using this equivalence, we show that any linear trifferent code of length $n$ has size at most $3^{n/4.55}$, improving the recent upper bound of Pohoata and Zakharov. Moreover, we show the existence of linear trifferent codes of length $n$ and size at least $\frac{1}{3}\left(9/5\right)^{n/4}$, thus (asymptotically) matching the best lower bound on trifferent codes. We also give explicit constructions of affine blocking sets with respect to codimension-2 subspaces that are a constant factor bigger than the best known lower bound. By restricting to $\mathbb{F}_3$, we obtain linear trifferent codes of size at least $3^{23n/312}$, improving the current best explicit construction, which has size $3^{n/112}$. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
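The trifferent property is simple to check exhaustively for small codes: every three distinct codewords must have some coordinate where they take all three values 0, 1, 2. A sketch (the example codes below are illustrations of ours, not constructions from the paper):

```python
from itertools import combinations

def is_trifferent(code):
    """True if every 3 distinct codewords have a coordinate taking all of 0, 1, 2."""
    return all(
        any({a[i], b[i], c[i]} == {0, 1, 2} for i in range(len(a)))
        for a, b, c in combinations(code, 3)
    )

# A length-3 trifferent code of size 6 (checked exhaustively by the function).
good = [(0, 0, 0), (1, 1, 1), (2, 2, 2), (0, 1, 2), (1, 2, 0), (2, 0, 1)]
bad = [(0, 0, 0), (1, 1, 0), (0, 1, 1)]  # no coordinate separates all three words
print(is_trifferent(good), is_trifferent(bad))  # True False
```

The check is O(|C|³·n), which is fine for verifying small explicit constructions but of course not how one proves the asymptotic bounds in the abstract.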
164. An inexact regularized proximal Newton method for nonconvex and nonsmooth optimization.
- Author
- Liu, Ruyu, Pan, Shaohua, Wu, Yuqia, and Yang, Xiaoqi
- Subjects
- NONSMOOTH optimization, IMAGE reconstruction, CONVEX functions, POINT set theory, DIFFERENTIABLE functions, NEWTON-Raphson method, MATHEMATICAL regularization
- Abstract
This paper focuses on the minimization of the sum of a twice continuously differentiable function $f$ and a nonsmooth convex function. An inexact regularized proximal Newton method is proposed, based on an approximation to the Hessian of $f$ involving the $\varrho$th power of the KKT residual. For $\varrho = 0$, we justify the global convergence of the iterate sequence for the KL objective function and its R-linear convergence rate for the KL objective function of exponent 1/2. For $\varrho \in (0,1)$, by assuming that cluster points satisfy a locally Hölderian error bound of order $q$ on a second-order stationary point set and a local error bound of order $q > 1 + \varrho$ on the common stationary point set, respectively, we establish the global convergence of the iterate sequence and its superlinear convergence rate with order depending on $q$ and $\varrho$. A dual semismooth Newton augmented Lagrangian method is also developed for seeking an inexact minimizer of the subproblems. Numerical comparisons with two state-of-the-art methods on $\ell_1$-regularized Student's t-regressions, group penalized Student's t-regressions, and nonconvex image restoration confirm the efficiency of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
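The nonsmooth convex term in such composite problems is handled through its proximal operator; for the ℓ1 norm this is coordinatewise soft-thresholding. The sketch below runs a plain proximal-gradient loop, a first-order cousin of the paper's proximal Newton method (which instead uses regularized Hessian models), shown only to illustrate the prox step.

```python
def soft_threshold(v, lam):
    """prox of lam * ||.||_1, coordinatewise: shrink each entry toward 0 by lam."""
    return [max(abs(x) - lam, 0.0) * ((x > 0) - (x < 0)) for x in v]

def proximal_gradient(grad_f, x0, lam, step, iters=500):
    """Proximal gradient for f(x) + lam * ||x||_1 with a fixed step size;
    converges for step < 1/L when grad_f is L-Lipschitz."""
    x = list(x0)
    for _ in range(iters):
        g = grad_f(x)
        x = soft_threshold([xi - step * gi for xi, gi in zip(x, g)], step * lam)
    return x

# Toy problem: f(x) = 0.5 * ||x - b||^2, whose composite minimizer is
# exactly soft_threshold(b, lam).
b = [3.0, -0.5, 1.2, 0.0]
x = proximal_gradient(lambda x: [xi - bi for xi, bi in zip(x, b)],
                      [0.0] * 4, lam=1.0, step=0.5)
print(x)  # ~ [2.0, 0.0, 0.2, 0.0]
```

Entries of b smaller than lam in magnitude are driven exactly to zero, which is the sparsity-inducing behavior that motivates ℓ1 regularization in the regression experiments of the abstract.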
165. Point cloud completion network for 3D shapes with morphologically diverse structures.
- Author
- Si, Chun-Jing, Yin, Zhi-Ben, Fan, Zhen-Qi, Liu, Fu-Yong, Niu, Rong, Yao, Na, Shen, Shi-Quan, Shi, Ming-Deng, and Xi, Ya-Jun
- Subjects
- POINT cloud, POINT set theory, SURFACE structure
- Abstract
Point cloud completion is a challenging task that involves predicting missing parts of incomplete 3D shapes. While existing strategies are effective on point cloud datasets with regular shapes and continuous surfaces, they struggle with the morphologically diverse structures commonly encountered in real-world scenarios. This research proposes a new point cloud completion method, called SegCompletion, to derive complete 3D geometries from a partial shape with different structures and discontinuous surfaces. To achieve this, morphological segmentation is introduced before point cloud completion by deep hierarchical feature learning on point sets, so that a complex morphological structure is segmented into regular shapes and continuous surfaces. Additionally, each instance of a point cloud that belongs to the same type of feature can be effectively identified using HDBSCAN (Hierarchical Density-Based Spatial Clustering of Applications with Noise). Furthermore, a multiscale generative network achieves sophisticated patching of missing point clouds under the same geometric feature based on feature points. To compensate for the variance in the mean distances between the centers of the patches and their closest neighbors, a simple yet effective uniform loss is utilized. Experiments on the ShapeNet and Pheno4D datasets demonstrate the performance of SegCompletion on public data, and its contribution on our own dataset (Cotton3D) is also discussed. The experimental results show that SegCompletion performs better than existing methods reported in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
166. A discussion about the limitations of the Eurocode's high-speed load model for railway bridges.
- Author
- Ferreira, Gonçalo, Montenegro, Pedro, Pinto, José Rui, Henriques, António Abel, and Calçada, Rui
- Subjects
- BOGIES (Vehicles), RAILROAD bridges, LIVE loads, JOINT use of railroad facilities, HIGH speed trains, POINT set theory
- Abstract
High-speed railway bridges are subjected to normative limitations concerning maximum permissible deck accelerations. For the design of these structures, the European norm EN 1991-2 introduces the high-speed load model (HSLM)—a set of point loads intended to include the effects of existing high-speed trains. Yet, the evolution of current trains and the recent development of new load models motivate a discussion regarding the limits of validity of the HSLM. For this study, a large number of randomly generated load models of articulated, conventional, and regular trains are tested and compared with the envelope of HSLM effects. For each type of train, two sets of 100,000 load models are considered: one abiding by the limits of the EN 1991-2 and another considering wider limits. This comparison is achieved using both a bridge-independent metric (train signatures) and dynamic analyses on a case study bridge (the Canelas bridge of the Portuguese Railway Network). For the latter, a methodology to decrease the computational cost of moving loads analysis is introduced. Results show that some theoretical load models constructed within the stipulated limits of the norm can lead to effects not covered by the HSLM. This is especially noted in conventional trains, where there is a relation with larger distances between centres of adjacent vehicle bogies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
167. Fractal dimension of potential singular points set in the Navier–Stokes equations under supercritical regularity.
- Author
- Wang, Yanqing and Wu, Gang
- Subjects
- NAVIER-Stokes equations, POINT set theory, FRACTAL dimensions, HAUSDORFF measures, MATHEMATICS
- Abstract
The main objective of this paper is to answer the questions posed by Robinson and Sadowski [22, p. 505, Commun. Math. Phys., 2010] for the Navier–Stokes equations. Firstly, we prove that the upper box dimension of the potential singular points set $\mathcal{S}$ of a suitable weak solution $u$ belonging to $L^{q}(0,T;L^{p}(\mathbb{R}^{3}))$ for $1\leq \frac{2}{q}+\frac{3}{p}\leq \frac{3}{2}$ with $2\leq q$ and $2\leq p$ is at most $\max\{p,q\}(\frac{2}{q}+\frac{3}{p}-1)$ in this system. Secondly, it is shown that the $(1-2s)$-dimensional Hausdorff measure of the potential singular points set of suitable weak solutions satisfying $u\in L^{2}(0,T;\dot{H}^{s+1}(\mathbb{R}^{3}))$ for $0\leq s\leq \frac{1}{2}$ is zero, whose proof relies on Caffarelli–Silvestre's extension. Inspired by Barker–Wang's recent work [1], this further allows us to discuss the Hausdorff dimension of the potential singular points set of suitable weak solutions when the gradient of the velocity has some supercritical regularity. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
168. On the singularity point in acoustic orthorhombic media.
- Author
- Stovas, Alexey
- Subjects
- ACOUSTIC models, GROUP velocity, POLARIZATION (Social sciences), SHEAR waves, POINT set theory
- Abstract
The acoustic orthorhombic model is widely used in seismic modeling and processing of P-wave data. However, anisotropic acoustic models have so-called S-wave artifacts (one artifact in a transversely isotropic acoustic medium and two artifacts in an orthorhombic acoustic medium). I show that S-wave artifacts can have one singularity point that results in complications in the polarization field and the group velocity surface. The conditions for the existence of this point are defined in terms of anellipticity parameters. This singularity point and its group velocity image are the objects of my analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
169. Enhanced Target Localization in the Internet of Underwater Things through Quantum-Behaved Metaheuristic Optimization with Multi-Strategy Integration.
- Author
- Mei, Xiaojun, Miao, Fahui, Wang, Weijun, Wu, Huafeng, Han, Bing, Wu, Zhongdai, Chen, Xinqiang, Xian, Jiangfeng, Zhang, Yuanyuan, and Zang, Yining
- Subjects
- POINT set theory, QUANTUM computing, QUANTUM theory, LOCATION problems (Programming), METAHEURISTIC algorithms, QUANTUM computers
- Abstract
Underwater localization is considered a critical technique in the Internet of Underwater Things (IoUTs). However, acquiring accurate location information is challenging due to the heterogeneous underwater environment and the hostile propagation of acoustic signals, especially when using received signal strength (RSS)-based techniques. Additionally, most current solutions rely on strict mathematical expressions, which limits their effectiveness in certain scenarios. To address these challenges, this study develops a quantum-behaved meta-heuristic algorithm, called quantum-enhanced Harris hawks optimization (QEHHO), to solve the localization problem without requiring strict mathematical assumptions. The algorithm builds on the original Harris hawks optimization (HHO) by integrating four strategies into its various phases to avoid local minima. The initialization phase incorporates good point set theory and quantum computing to enhance population quality, while a random nonlinear technique is introduced in the transition phase to expand the exploration region in the early stages. A correction mechanism and an exploration enhancement combining the slime mold algorithm (SMA) and quasi-oppositional learning (QOL) are further developed to find an optimal solution. Furthermore, the RSS-based Cramér–Rao lower bound (CRLB) is derived to evaluate the effectiveness of QEHHO. Simulation results demonstrate the superior performance of QEHHO under various conditions compared to other state-of-the-art closed-form-expression- and meta-heuristic-based solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
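The "good point set" initialization mentioned in the abstract can be sketched with one common variant from the metaheuristic literature, γ_j = frac(√p_j) over the first primes; the paper's exact construction and parameters may differ.

```python
import math

def first_primes(k):
    primes, n = [], 2
    while len(primes) < k:
        if all(n % p for p in primes):
            primes.append(n)
        n += 1
    return primes

def good_point_set(n, dim, lo, hi):
    """Low-discrepancy initial population: point k has coordinates
    frac(k * gamma_j) scaled into [lo_j, hi_j], with gamma_j = frac(sqrt(p_j))
    for the first dim primes p_j (one common 'good point set' variant)."""
    gammas = [math.sqrt(p) % 1 for p in first_primes(dim)]
    return [[lo[j] + ((k * gammas[j]) % 1) * (hi[j] - lo[j]) for j in range(dim)]
            for k in range(1, n + 1)]

pop = good_point_set(20, 3, [-10, -10, 0], [10, 10, 5])
print(len(pop), len(pop[0]))  # 20 3
```

Compared with uniform random sampling, such sequences cover the search box more evenly, which is the motivation for using them to seed the QEHHO population.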
170. The exact projective penalty method for constrained optimization.
- Author
- Norkin, Vladimir
- Subjects
- CONSTRAINED optimization, POINT set theory, PROBLEM solving, NONSMOOTH optimization, QUADRATIC programming
- Abstract
A new exact projective penalty method is proposed for the equivalent reduction of constrained optimization problems to nonsmooth unconstrained ones. In the method, the original objective function is extended to infeasible points by summing its value at the projection of an infeasible point onto the feasible set with the distance to the projection. Besides Euclidean projections, a pointed projection in the direction of some fixed interior feasible point can also be used. The equivalence means that the local and global minima of the two problems coincide. Nonconvex sets with multivalued Euclidean projections are admitted, and the objective function may be lower semicontinuous. The particular case of convex problems is included. The resulting unconstrained or box-constrained problem is solved by a version of the branch-and-bound method combined with local optimization. In principle, any local optimizer can be used within the branch-and-bound scheme, but in numerical experiments a sequential quadratic programming method was used successfully. The proposed exact penalty method thus does not assume that the objective function exists outside the feasible region and does not require the selection of a penalty coefficient. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
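The penalty construction described in the abstract, F(x) = f(P(x)) + dist(x, C), is direct to write down whenever the projection P onto the feasible set C is available in closed form. A minimal sketch with a Euclidean ball as the feasible set (all names are ours; the paper's method additionally embeds this F in a branch-and-bound scheme):

```python
import math

def project_ball(x, center, radius):
    """Euclidean projection onto the ball ||x - center|| <= radius."""
    d = math.dist(x, center)
    if d <= radius:
        return list(x)
    return [c + (xi - c) * radius / d for xi, c in zip(x, center)]

def projective_penalty(f, proj):
    """F(x) = f(P(x)) + ||x - P(x)||: the objective is only ever evaluated at
    feasible points, and the distance to the feasible set is the exact penalty."""
    def F(x):
        p = proj(x)
        return f(p) + math.dist(x, p)
    return F

# Toy problem: minimize f(x) = (x1 - 2)^2 + x2^2 over the unit ball.
# The constrained minimizer is (1, 0), and F needs no penalty coefficient.
f = lambda x: (x[0] - 2) ** 2 + x[1] ** 2
F = projective_penalty(f, lambda x: project_ball(x, (0.0, 0.0), 1.0))
print(F([1.0, 0.0]))  # 1.0: the constrained minimum, attained on the boundary
print(F([2.0, 0.0]))  # 2.0: f at the projection (1, 0) plus distance 1 to it
```

Note that F never requires f to be defined outside the ball, which is exactly the property the abstract emphasizes.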
171. AirCore Observations at Northern Tibetan Plateau During the Asian Summer Monsoon.
- Author
- Yi, You, Cai, Zhaonan, Liu, Yi, Tao, Mengchu, Fang, Shuangxi, Yang, Dongxu, Bai, Zhixuan, Liang, Miao, Yao, Bo, Bian, Jianchun, Honomichl, Shawn B., Randel, William J., and Pan, Laura L.
- Subjects
- *CHEMICAL structure, *PHOTOMETRY, *TROPOSPHERE, *TROPOPAUSE, *POINT set theory, *MONSOONS
- Abstract
We present data and analysis of a set of balloon-borne sounding profiles, which includes co-located O3, CO, CH4, and particle measurements, over the northern Tibetan Plateau during an Asian summer monsoon (ASM) season. These novel measurements shed light on ASM transport behavior near the northern edge of the anticyclone. Joint analyses of these species with the temperature and wind profiles, supported by back trajectory modeling, identify three distinct transport processes that dominate the vertical chemical structure in the middle troposphere, upper troposphere (UT), and the tropopause region. The correlated changes in profile structures in the middle troposphere highlight the influence of the strong westerly jet. Elevated constituent concentrations in the UT identify the main level of convective transport at the upstream source regions. Observed higher-altitude maxima for CH4 characterize the airmasses' continued ascent following convection. These data complement constituent observations from other parts of the ASM anticyclone. Plain Language Summary: Asian summer monsoon deep convection transports surface pollutants to the stratosphere. Although satellite data have provided clear evidence of this transport, in situ measurements are critical for characterizing how the monsoon vertically redistributes regional emissions. We report new balloon-borne measurements over the Tibetan Plateau that provide a unique data set on the northern edge of the anticyclone, complementing other observations. Key Points: A novel set of in-situ profile measurements of O3, CO, CH4, and particles from the Tibetan Plateau during the Asian summer monsoon is presented. Joint analyses of the profiles provide insights into transport processes controlling the northern edge of the Asian monsoon anticyclone. Observed CO profile maxima at 13–14 km (∼360–370 K) identify the level of convective transport at the upstream source regions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
172. Homochirality in Ferroelectrochemistry.
- Author
-
Peng, Hang, Qi, Jun‐Chao, Liu, Yu‐Si, Zhang, Jia‐Mei, Liao, Wei‐Qiang, and Xiong, Ren‐Gen
- Subjects
- *
DIVERGENT thinking , *FERROELECTRIC crystals , *CRYSTAL symmetry , *PERSONALITY studies , *POINT set theory , *SOLID-liquid equilibrium - Abstract
What is the most favorite and original chemistry developed in your research group? We originally proposed the design principle for molecular ferroelectrics: ferroelectrochemistry, including quasi‐spherical theory, the introduction of homochirality, and H/F substitution. Ferroelectrochemistry changed the blind search for molecular ferroelectrics into targeted chemical design, which will develop into a new discipline. How do you get into this specific field? Could you please share some experiences with our readers? I have been devoted to the field of molecular ferroelectrics for more than 20 years. In the early stage, I worked on non‐centrosymmetric metal‐organic complexes, which are potential molecular ferroelectrics. This laid a foundation for my further study of molecular ferroelectrics. Non‐centrosymmetric crystal symmetry is only one of the necessary requirements for ferroelectrics, which must adopt one of the 10 polar crystallographic point groups and should also generally undergo symmetry‐breaking phase transitions. Due to the lack of a feasible method, the discovery of molecular ferroelectrics has long depended on blindly searching. This process is like finding a needle in a haystack. After years of exploration in this field, I fully understood the Landau phase transition phenomenological theory, Curie symmetry, and Neumann principle from a chemical perspective, and proposed the design principle for molecular ferroelectrics: ferroelectrochemistry, transforming the discovery of molecular ferroelectrics from blind search to targeted chemical design. Never give up no matter how much difficulty you have met, because maybe there is an opportunity the next second. What is the most important personality for scientific research? Curiosity, divergent thinking, perseverance, team spirit, and gratitude. How do you supervise your students? Emphasis on independent problem‐solving abilities. Encourage students to read professional books frequently while doing research. 
What are your hobbies? What's your favorite book(s)? Jogging, reading, and swimming. My favorite book is The Journey to the West. Comprehensive Summary: Molecular ferroelectrics have attracted tremendous attention in the past decades due to their excellent ferroelectric performance and superiorities of easy processability, mechanical flexibility, and good biocompatibility. However, the discovery of molecular ferroelectrics is a great challenge and has long relied on blind search. This situation changed recently, with the development of ferroelectrochemistry proposed by our group. As a major design approach in ferroelectrochemistry, introducing homochirality, which facilitates the crystallization of materials in polar crystallographic point groups, greatly improves the probability of being ferroelectrics. Various new molecular ferroelectrics with splendid properties have been precisely synthesized by using this efficient and universal strategy. In this review, we summarize the advances in the chemical design of molecular ferroelectrics through the strategy of introducing homochirality. Key Scientists: [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
173. Large norms in group theory.
- Author
-
Ferrara, Maria and Trombetti, Marco
- Subjects
- *
GROUP theory , *SOLVABLE groups , *LATTICE theory , *POINT set theory , *INFINITE groups , *PROFINITE groups , *CARDINAL numbers , *MATRIX norms - Abstract
The introduction of the norm of a group by Reinhold Baer in 1935 was a turning point in group theory. In fact, Baer proved that there is a very strong relationship between the structure of the norm and that of the whole group (see [1], [2], [3], [4], [5]). Since then, the norm has played a very significant role in many aspects of group theory and its applications: it has been used in [43] to describe the connection between Hopf–Galois structures and skew braces; it has been used in [23] to describe some special types of profinite groups; and it has been fundamental in the theory of subgroup lattices of groups (see [40]). In this paper, we weaken the original definition of norm by taking into account only those subgroups that are "large" in some sense. Depending on the chosen concept of largeness, the resulting norm can have an impact on the structure of the whole group that is even greater than that of Baer's norm. This is exactly what happens with the non-polycyclic norm, and in fact, Theorem 4.17 gives a precise description of generalized soluble groups in which the non-polycyclic norm is non-Dedekind (this can be considered the main result of the paper). Other times, the resulting norms have their own peculiar behaviour; this is the case if "large" means "infinite", "having infinite rank", "being non-Černikov", or "having cardinality m" for some given uncountable cardinal number m. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
174. Multi-strategy modified sparrow search algorithm for hyperparameter optimization in arbitrage prediction models.
- Author
-
Cheng, Shenjie, Qin, Panke, Lu, Baoyun, Yu, Jinxia, Tang, Yongli, Zeng, Zeliang, Tu, Sensen, Qi, Haoran, Ye, Bo, and Cai, Zhongqi
- Subjects
- *
SEARCH algorithms , *POINT set theory , *PREDICTION models , *ARBITRAGE , *SPARROWS , *STANDARD deviations - Abstract
Deep learning models struggle to effectively capture data features and make accurate predictions because of the strong non-linear characteristics of arbitrage data. Therefore, to fully exploit the model performance, researchers have focused on network structure and hyperparameter selection using various swarm intelligence algorithms for optimization. Sparrow Search Algorithm (SSA), a classic heuristic method that simulates the sparrows' foraging and anti-predatory behavior, has demonstrated excellent performance in various optimization problems. Hence, in this study, the Multi-Strategy Modified Sparrow Search Algorithm (MSMSSA) is applied to the Long Short-Term Memory (LSTM) network to construct an arbitrage spread prediction model (MSMSSA-LSTM). In the modified algorithm, the good point set theory, the proportion-adaptive strategy, and the improved location update method are introduced to further enhance the spatial exploration capability of the sparrow. The proposed model was evaluated using the real spread data of rebar and hot coil futures in the Chinese futures market. The obtained results showed that the mean absolute percentage error, root mean square error, and mean absolute error of the proposed model had decreased by a maximum of 58.5%, 65.2%, and 67.6% compared to several classical models. The model has high accuracy in predicting arbitrage spreads, which can provide some reference for investors. [ABSTRACT FROM AUTHOR]
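For reference, the three evaluation metrics reported in this abstract (mean absolute percentage error, root mean square error, mean absolute error) can be computed as follows. This is a generic stdlib sketch with function names of our own choosing, not code from the paper:

```python
import math

def mape(actual, predicted):
    """Mean absolute percentage error, in percent (assumes no zero actuals)."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root mean square error."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mae(actual, predicted):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
```

Reported improvements such as "decreased by a maximum of 58.5%" are relative reductions of these quantities against baseline models.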
- Published
- 2024
- Full Text
- View/download PDF
175. Hyper-physiologic mechanical cues, as an osteoarthritis disease-relevant environmental perturbation, cause a critical shift in set points of methylation at transcriptionally active CpG sites in neo-cartilage organoids.
- Author
-
Bloks, Niek G. C., Dicks, Amanda, Harissa, Zainab, Nelissen, Rob G. H. H., Hajmousa, Ghazaleh, Ramos, Yolande F. M., de Almeida, Rodrigo Coutinho, Guilak, Farshid, and Meulenbelt, Ingrid
- Subjects
- *
ECOLOGICAL disturbances , *POINT set theory , *OSTEOARTHRITIS , *GENE expression , *JOINT stiffness , *KNEE - Abstract
Background: Osteoarthritis (OA) is a complex, age-related multifactorial degenerative disease of diarthrodial joints marked by impaired mobility, joint stiffness, pain, and a significant decrease in quality of life. Among other risk factors, such as genetics and age, hyper-physiological mechanical cues are known to play a critical role in the onset and progression of the disease (Guilak in Best Pract Res Clin Rheumatol 25:815–823, 2011). It has been shown that post-mitotic cells, such as articular chondrocytes, heavily rely on methylation at CpG sites to adapt to environmental cues and maintain phenotypic plasticity. However, these long-lasting adaptations may eventually have a negative impact on cellular performance. We hypothesize that hyper-physiologic mechanical loading leads to the accumulation of altered epigenetic markers in articular chondrocytes, resulting in a loss of the tightly regulated balance of gene expression that leads to a dysregulated state characteristic of the OA disease state. Results: We showed that hyper-physiological loading evokes consistent changes in CpGs associated with expression changes (ML-tCpGs) in ITGA5, CAV1, and CD44, among other genes, which together act in pathways such as anatomical structure morphogenesis (GO:0009653) and response to wound healing (GO:0042060). Moreover, by comparing the ML-tCpGs and their associated pathways to tCpGs in OA pathophysiology (OA-tCpGs), we observed a modest but particular interconnected overlap with notable genes such as CD44 and ITGA5. These genes could indeed represent lasting detrimental changes to the phenotypic state of chondrocytes due to mechanical perturbations that occurred earlier in life. The latter is further suggested by the association between methylation levels of ML-tCpGs mapped to CD44 and OA severity. 
Conclusion: Our findings confirm that hyper-physiological mechanical cues evoke changes to the methylome-wide landscape of chondrocytes, concomitant with detrimental changes in positional gene expression levels (ML-tCpGs). Since CAV1, ITGA5, and CD44 are subject to such changes and are central and overlapping with OA-tCpGs of primary chondrocytes, we propose that accumulation of hyper-physiological mechanical cues can evoke long-lasting, detrimental changes in set points of gene expression that influence the phenotypic healthy state of chondrocytes. Future studies are necessary to confirm this hypothesis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
176. Saberes e fazeres matemáticos utilizados por pedreiros no município de Amapá/AP.
- Author
-
Serrão Custódio, Elivaldo and Castro Moreira, Paumiere
- Subjects
- *
MATHEMATICS education , *SET theory , *APPLIED mathematics , *POINT set theory , *TRADITIONAL knowledge - Abstract
This article aims to discuss the mathematical knowledge and practices of bricklayers in the municipality of Amapá, State of Amapá, Brazil, aiming to reinforce that applied mathematics is highly effective in building empirical knowledge. The work also aims to describe how this mathematical knowledge and these practices are transmitted and passed on by these families. The research has a qualitative, exploratory, and descriptive nature. To this end, direct observation was carried out through on-site visits to works being carried out in the municipality. The research data were obtained through informal conversations with a small group of eight bricklayers, selected as the oldest bricklayers in the municipality and those who have been working in the area for the longest time. The results obtained point to a rich set of mathematical knowledge (empirical and traditional) that has been acquired and transmitted over the years. Furthermore, the data lead to reflection on the pedagogical practice of teaching and learning mathematics and the importance of studying culture and its applicability in the classroom as a way of valuing and maintaining local traditions and mathematical knowledge. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
177. GLS‐PIA: n‐Dimensional Spherical B‐Spline Curve Fitting based on Geodesic Least Square with Adaptive Knot Placement.
- Author
-
Zhao, Yuming, Wu, Zhongke, and Wang, Xingce
- Subjects
- *
CURVE fitting , *GEODESICS , *POINT set theory , *SQUARE , *SPHERES , *PARAMETRIC equations - Abstract
Due to the widespread applications of curves on n‐dimensional spheres, fitting curves on n‐dimensional spheres has received increasing attention in recent years. However, due to the non‐Euclidean nature of spheres, curve fitting methods on n‐dimensional spheres often struggle to balance fitting accuracy and curve fairness. In this paper, we propose a new fitting framework, GLS‐PIA, for parameterized point sets on n‐dimensional spheres to address the challenge. Meanwhile, we provide the proof of the method. Firstly, we propose a progressive iterative approximation method based on geodesic least squares which can directly optimize the geodesic least squares loss on the n‐sphere, improving the accuracy of the fitting. Additionally, we use an error allocation method based on contribution coefficients to ensure the fairness of the fitting curve. Secondly, we propose an adaptive knot placement method based on geodesic difference to estimate a more reasonable distribution of control points in the parameter domain, placing more control points in areas with greater detail. This enables B‐spline curves to capture more details with a limited number of control points. Experimental results demonstrate that our framework achieves outstanding performance, especially in handling imbalanced data points. (In this paper, "sphere" refers to n‐sphere (n ≥ 2) unless otherwise specified.) [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
178. BallMerge: High‐quality Fast Surface Reconstruction via Voronoi Balls.
- Author
-
Parakkat, Amal Dev, Ohrhallinger, Stefan, Eisemann, Elmar, and Memari, Pooran
- Subjects
- *
SURFACE reconstruction , *THREE-dimensional printing , *NUMERICAL analysis , *POINT set theory - Abstract
We introduce a Delaunay‐based algorithm for reconstructing the underlying surface of a given set of unstructured points in 3D. The implementation is very simple, and it is designed to work in a parameter‐free manner. The solution builds upon the fact that in the continuous case, a closed surface separates the set of maximal empty balls (medial balls) into an interior and exterior. Based on discrete input samples, our reconstructed surface consists of the interface between Voronoi balls, which approximate the interior and exterior medial balls. An initial set of Voronoi balls is iteratively processed, merging Voronoi‐ball pairs if they fulfil an overlapping error criterion. Our complete open‐source reconstruction pipeline performs up to two quick linear‐time passes on the Delaunay complex to output the surface, making it an order of magnitude faster than the state of the art while being competitive in memory usage and often superior in quality. We propose two variants (local and global), which are carefully designed to target two different reconstruction scenarios for watertight surfaces from accurate or noisy samples, as well as real‐world scanned data sets, exhibiting noise, outliers, and large areas of missing data. The results of the global variant are, by definition, watertight, suitable for numerical analysis and various applications (e.g., 3D printing). Compared to classical Delaunay‐based reconstruction techniques, our method is highly stable and robust to noise and outliers, evidenced via various experiments, including on real‐world data with challenges such as scan shadows, outliers, and noise, even without additional preprocessing. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
179. Circle Actions on Four Dimensional Almost Complex Manifolds With Discrete Fixed Point Sets.
- Author
-
Jang, Donghoon
- Subjects
- *
POINT set theory , *COMPLEX manifolds , *CIRCLE , *INTEGERS - Abstract
We establish a necessary and sufficient condition for pairs of integers to arise as the weights at the fixed points of an effective circle action on a compact almost complex 4-manifold with a discrete fixed point set. As an application, we provide a necessary and sufficient condition for a pair of integers to arise as the Chern numbers of such an action, answering negatively a question by Sabatini whether |$c_{1}^{2}[M] \leq 3 c_{2}[M]$| holds for any such manifold |$M$|. We achieve this by demonstrating that pairs of integers that arise as weights of a circle action also arise as weights of a restriction of a |$\mathbb {T}^{2}$| -action. Furthermore, we discuss applications to circle actions on complex/symplectic 4-manifolds and semi-free circle actions with discrete fixed point sets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
180. Best (orthogonal) fitting ellipsoid with quaternions.
- Author
-
Bektas, Sebahattin
- Subjects
- *
ELLIPSOIDS , *EULER angles , *QUATERNIONS , *IMAGE processing , *POINT set theory - Abstract
The aim of this study is to determine the best-fitting ellipsoid for a given set of points using quaternions. The ellipsoid fitting problem — determining the ellipsoid that best fits a given set of points in 3D — is frequently encountered in image processing, computer games, medicine, engineering and science applications, geodesy, etc. Fitting is generally performed with one of two models: the first is the algebraic method and the second is the orthogonal (geometric) method. In this study, we solve the algebraic and orthogonal ellipsoid fitting problems, previously formulated with Euler angles, over quaternions for the first time. The superiority of quaternions over Euler rotation angles is well known. In addition, the variance–covariance matrix of the parameters of the fitted ellipsoid is also calculated. Numerical applications show that the proposed method can be used successfully. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
181. Rotated points for object detection in remote sensing images.
- Author
-
Wang, Longbao, Shen, Yican, Yang, Jin, Zeng, Hui, and Gao, Hongmin
- Subjects
- *
OBJECT recognition (Computer vision) , *REMOTE sensing , *POINT set theory , *REMOTE-sensing images , *IMAGE processing - Abstract
Object detection in remote sensing images poses great challenges due to the dense distribution, arbitrary orientation, and aspect ratio variations of objects. Most of the existing methods rely on aligned convolutional features, which fail to capture the geometric information of objects effectively and result in the inconsistency between the classification score and localization accuracy. Moreover, densely packed objects suffer from spatial feature aliasing caused by the intersection of reception fields between objects. To address this issue, a deformable convolution‐based method named rotated points is proposed, which consists of two modules: a point set loss module and a high‐quality sample assignment module. The point set loss module can extract geometric features of objects in arbitrary directions with fine‐grained point sets for feature representation and introduce outlier penalties to penalize outlier points. The high‐quality sample assignment module measures the classification and localization ability, orientation quality, and point‐wise correlation of point sets comprehensively to enhance the consistency of classification and regression significantly. Experiments on the DOTA and FAIR1M datasets demonstrate that the proposed method achieves significant improvements over the benchmark model. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
182. On approximating fixed points of strictly pseudocontractive mappings in metric spaces.
- Author
-
SALISU, SANI, BERINDE, VASILE, SRIWONGSA, SONGPON, and KUMAM, POOM
- Subjects
- *
NONEXPANSIVE mappings , *METRIC spaces , *FIXED point theory , *POINT set theory - Abstract
In this work, we analyse the class of strictly pseudocontractive mappings in general metric spaces by providing a comprehensive and appropriate definition of a strictly pseudocontractive mapping, which serves as a natural extension of the existing notion. Moreover, we establish its various characterizations and explore several significant properties of these mappings in relation to fixed point theory in CAT(0) spaces. Specifically, we establish that these mappings are Lipschitz continuous, satisfying the demiclosedness-type property, and possessing a closed convex fixed point set. Furthermore, we show that the fixed points of the mappings can be effectively approximated using an iterative scheme for fixed points of nonexpansive mappings. The results in this work contribute to a deeper understanding of strictly pseudocontractive mappings and their applicability in the context of fixed point theory in metric spaces. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
183. Gentrifiers Evading Stigma: Social Integrationists in the Neighborhood of the Future.
- Author
-
Parker, Jeffrey Nathaniel and Ternullo, Stephanie
- Subjects
- *
REPUTATION , *GENTRIFICATION , *SOCIAL stigma , *IDEOLOGY , *NEIGHBORHOODS , *URBAN sociology , *POINT set theory , *ETHNOLOGY - Abstract
How does the moral calculus of gentrification change for self-conscious newcomers in neighborhoods with a reputation deemed unworthy of preserving? In pointing to a set of practices distinct from "pioneering" accounts of gentrification, Brown-Saracino (2007) identified social preservationists as figures who seek to preserve authentic community and the marginalized old-timers who embody it. Using the deviant case of Bridgeport—a historically White neighborhood in Chicago with a deeply historical and persistent reputation for racism —we examine how self-conscious newcomers orient themselves to the gentrification process when the old-timers are not considered a marginalized group worth protecting, but rather a powerful group with problematic racial views. Whereas Brown-Saracino identified the importance of "selecting the old-timer" among a set of potential representatives of a valorized past, we suggest that in this case, newcomers fight to redefine a neighborhood based on a socially desirable future. Drawing on two distinct sets of ethnographic and interview-based data, we outline how this process has unfolded. We conclude that Bridgeport's story points to the importance of examining how gentrification ideologies emerge from the collision of personal commitments and neighborhood context, as neighborhood newcomers balance their ethics, concerns over personal reputation, and salient aspects of their new homes, including place reputation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
184. Improved Brain Storm Optimization Algorithm Based on Flock Decision Mutation Strategy.
- Author
-
Zhao, Yanchi, Cheng, Jianhua, and Cai, Jing
- Subjects
- *
OPTIMIZATION algorithms , *K-means clustering , *POINT set theory , *DIFFERENTIAL evolution , *PROBLEM solving - Abstract
To tackle the brain storm optimization (BSO) algorithm's limited ability to escape local optima, which contributes to its inadequate optimization precision, we developed a flock decision mutation approach that substantially enhances the efficacy of the BSO algorithm. Furthermore, to address the BSO algorithm's insufficient population diversity, we introduced a strategy that uses the good point set to enhance the quality of the initial population. Simultaneously, we substituted spectral clustering for the K-means clustering approach to improve the clustering accuracy of the algorithm. This work thus introduces an enhanced version of the brain storm optimization algorithm founded on a flock decision mutation strategy (FDIBSO). The improved algorithm was compared against contemporary leading algorithms on the CEC2018 benchmark. The experimental section additionally employs AUV intelligence evaluation as an application case, addressing the combined weight model under various dimensional settings to further substantiate the efficacy of the FDIBSO algorithm. The findings indicate that FDIBSO surpasses BSO and other enhanced algorithms for addressing intricate optimization challenges. [ABSTRACT FROM AUTHOR]
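The good point set initialization mentioned in this abstract (and in entry 174 above) is a standard population-seeding technique that spreads initial candidates more uniformly than random sampling. A minimal sketch of one common construction, using r_i = frac(2·cos(2πi/p)) for a prime p ≥ 2d + 3 — details vary across papers, and this is our illustrative version, not the FDIBSO implementation:

```python
import math

def good_point_set(n, dim, lower, upper):
    """Seed n candidate solutions in [lower, upper]^dim with a good-point-set
    construction: coordinate i of candidate k is frac(k * r_i), where
    r_i = frac(2*cos(2*pi*i/p)) for the smallest prime p >= 2*dim + 3."""
    def is_prime(m):
        return m > 1 and all(m % q for q in range(2, math.isqrt(m) + 1))
    p = 2 * dim + 3
    while not is_prime(p):
        p += 1
    r = [(2.0 * math.cos(2.0 * math.pi * (i + 1) / p)) % 1.0 for i in range(dim)]
    # Map the fractional parts {k * r_i} into the search bounds.
    return [[lower + (upper - lower) * ((k * r[i]) % 1.0) for i in range(dim)]
            for k in range(1, n + 1)]
```

Each candidate stays inside the bounds because the fractional part is always in [0, 1).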
- Published
- 2024
- Full Text
- View/download PDF
185. Prediction interval: A powerful statistical tool for monitoring patients and analytical systems.
- Author
-
Coskun, Abdurrahman
- Subjects
- *
PATIENT monitoring , *FORECASTING , *QUALITY control , *POINT set theory , *CLINICAL pathology - Abstract
Monitoring is indispensable for assessing disease prognosis and evaluating the effectiveness of treatment strategies, both of which rely on serial measurements of patients' data. It also plays a critical role in maintaining the stability of analytical systems, which is achieved through serial measurements of quality control samples. Accurate monitoring can be achieved through data collection, following a strict preanalytical and analytical protocol, and the application of a suitable statistical method. In a stable process, future observations can be predicted based on historical data collected during periods when the process was deemed reliable. This can be evaluated using the statistical prediction interval. Statistically, a prediction interval gives an interval, based on historical data, within which future measurement results will be located with a specified probability, such as 95%. A prediction interval consists of two primary components: (i) the set point and (ii) the total variation around the set point, which determines the upper and lower limits of the interval. Both can be calculated using the repeated measurement results obtained from the process during its steady state. In this paper, (i) the theoretical bases of prediction intervals are outlined, and (ii) their practical application is explained through examples, aiming to facilitate the implementation of prediction intervals in routine laboratory medicine practice as a robust tool for monitoring patients' data and analytical systems. [ABSTRACT FROM AUTHOR]
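A minimal sketch of the two components described in this abstract — a set point plus the total variation around it — for the next single observation of a stable process. This uses a normal-quantile approximation of the usual t-distribution-based prediction interval; the function name and details are ours, not the paper's:

```python
import math
import statistics
from statistics import NormalDist

def prediction_interval(history, coverage=0.95):
    """Approximate prediction interval for the next observation of a stable
    process: set point (mean) +/- z * sd * sqrt(1 + 1/n).  The normal
    quantile z is a large-n stand-in for the exact t-based critical value."""
    n = len(history)
    set_point = statistics.fmean(history)   # component (i): the set point
    sd = statistics.stdev(history)          # component (ii): variation around it
    z = NormalDist().inv_cdf(0.5 + coverage / 2.0)
    half_width = z * sd * math.sqrt(1.0 + 1.0 / n)
    return set_point - half_width, set_point + half_width
```

A future result falling outside the returned limits then flags a potential change in the patient or the analytical system.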
- Published
- 2024
- Full Text
- View/download PDF
186. Connectivity with Uncertainty Regions Given as Line Segments.
- Author
-
Cabello, Sergio and Gajser, David
- Subjects
- *
REAL numbers , *COMPUTABLE functions , *NP-hard problems , *POINT set theory , *COMPUTATIONAL geometry - Abstract
For a set Q of points in the plane and a real number δ ≥ 0, let G_δ(Q) be the graph defined on Q by connecting each pair of points at distance at most δ. We consider the connectivity of G_δ(Q) in the best scenario when the location of a few of the points is uncertain, but we know for each uncertain point a line segment that contains it. More precisely, we consider the following optimization problem: given a set P of n − k points in the plane and a set S of k line segments in the plane, find the minimum δ ≥ 0 with the property that we can select one point p_s ∈ s for each segment s ∈ S such that the corresponding graph G_δ(P ∪ {p_s ∣ s ∈ S}) is connected. It is known that the problem is NP-hard. We provide an algorithm to exactly compute an optimal solution in O(f(k) · n log n) time, for a computable function f(·). This implies that the problem is FPT when parameterized by k. The best previous algorithm uses O((k!)^k · k^(k+1) · n^(2k)) time and computes the solution up to fixed precision. [ABSTRACT FROM AUTHOR]
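The k = 0 base case of this problem (no uncertain segments) reduces to a classical computation: the minimum δ making G_δ(Q) connected is the longest edge of a Euclidean minimum spanning tree. The following stdlib sketch illustrates that base case via Kruskal's algorithm with union-find; it is our illustration, not the paper's FPT algorithm:

```python
import math
from itertools import combinations

def min_delta_connected(points):
    """Smallest delta such that connecting all pairs at distance <= delta
    yields a connected graph: the bottleneck (longest) edge of a Euclidean
    minimum spanning tree, found by Kruskal's algorithm."""
    parent = list(range(len(points)))

    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    edges = sorted((math.dist(p, q), i, j)
                   for (i, p), (j, q) in combinations(enumerate(points), 2))
    components = len(points)
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            components -= 1
            if components == 1:
                return d  # last merge: the bottleneck distance
    return 0.0  # zero or one point is trivially connected
```

Handling the k uncertain segments on top of this is exactly where the NP-hardness and the FPT machinery of the paper come in.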
- Published
- 2024
- Full Text
- View/download PDF
187. A Global-Local Approximation Framework for Large-Scale Gaussian Process Modeling.
- Author
-
Vakayil, Akhil and Joseph, V. Roshan
- Subjects
- *
GAUSSIAN processes , *STATISTICAL correlation , *POINT set theory - Abstract
In this work, we propose a novel framework for large-scale Gaussian process (GP) modeling. Contrary to the global and local approximations proposed in the literature to address the computational bottleneck of exact GP modeling, we employ a combined global-local approach in building the approximation. Our framework uses a subset-of-data approach where the subset is the union of a set of global points, designed to capture the global trend in the data, and a set of local points specific to a given testing location, capturing the local trend around the testing location. The correlation function is also modeled as a combination of a global and a local kernel. The predictive performance of our framework, which we refer to as TwinGP, is comparable to the state-of-the-art GP modeling methods, but at a fraction of their computational cost. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
188. Detection and localization of corrosion using identical-group- velocity Lamb wave modes.
- Author
-
Huang, Liping, Ding, Jiawei, Lin, Jing, and Luo, Zhi
- Subjects
- *
LAMB waves , *GROUP velocity , *VELOCITY , *POINT set theory - Abstract
Multimodality is the prominent characteristic of Lamb waves: a variety of Lamb wave modes propagate in the structure, and each of them has a unique propagation behaviour. For a wideband multimodal Lamb wave signal, many different modes may share the same group velocity at some frequency points, while their dispersion behaviours respond differently to thickness variation. In this study, the diversity and difference of the interaction between corrosion and Lamb wave modes are investigated. On this basis, a corrosion index (CI) is established to evaluate the time gap between two modes at their identical-group-velocity points. The simulation results show that the CI value of a corroded path is obviously larger than that of a healthy path, indicating the presence of the damage independently of any baseline signal. Furthermore, a new reconstruction algorithm for the probabilistic inspection of damage (RAPID) is proposed, in which the signal difference coefficient is replaced by the CI to visualise the damage location. The experimental results demonstrate the effectiveness of the proposed multimodal Lamb wave baseline-free damage detection method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
189. Event pattern analysis: the point density around the appearance and disappearance of points.
- Author
-
Sadahiro, Yukio
- Subjects
- *
CHAIN stores , *EVIDENCE gaps , *RESEARCH questions , *DENSITY , *POINT set theory , *CONVENIENCE stores - Abstract
Point pattern analysis is a fundamental analytical tool in various academic fields related to spatial information science. Basic but important patterns discussed in existing studies are spatially clustered and dispersed points. Analysis of these static patterns aims to reveal their underlying structure, i.e. why and how they are formed. However, this research question is answered more effectively by directly tracking the pattern formation process, and such analysis along the temporal axis has not yet been fully conducted. To fill this research gap, this paper develops a new method of spatiotemporal analysis. We focus on the point density around the appearance and disappearance of points; the research question is whether points appear/disappear in dense or sparse space. Extending Ripley's K-function, we develop four measures that statistically evaluate the point density. We applied the proposed method to the analysis of the competition among convenience store chains in Shibuya-ku, Tokyo. A new method for event pattern analysis is proposed. The method evaluates the point density around the appearance and disappearance of points. The method considers event patterns within a single set of points and between two different sets of points. The method is applied to analyse the competition among convenience store chains in Shibuya-ku, Tokyo, which reveals the dynamic aspects of the competition among store chains. [ABSTRACT FROM AUTHOR]
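The core idea of measuring point density around event locations can be illustrated with a simple count-within-radius statistic in the spirit of Ripley's K-function. This is a toy sketch of the general idea, not the paper's four measures:

```python
import math

def event_density(events, background, r):
    """Average number of background points within distance r of each event
    location (e.g., a store appearance or disappearance).  A coincident
    background point at distance 0 -- typically the event itself -- is
    excluded from the count."""
    if not events:
        return 0.0
    total = sum(1 for e in events for p in background
                if 0 < math.dist(e, p) <= r)
    return total / len(events)
```

Comparing this statistic against its value for randomly placed events (as the K-function approach does) indicates whether points appear or disappear in dense or sparse space.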
- Published
- 2024
- Full Text
- View/download PDF
190. FFD-SLAM: A Real-Time Visual SLAM Toward Dynamic Scenes with Semantic and Optical Flow Information.
- Author
-
Zhang, Hao, Wang, Yu, Zhong, Tianjie, Dong, Fangyan, and Chen, Kewei
- Subjects
- *
OPTICAL flow , *OBJECT recognition (Computer vision) , *DYNAMICAL systems , *PROBLEM solving , *POINT set theory - Abstract
To address the poor localization accuracy and robustness of visual simultaneous localization and mapping (SLAM) systems in highly dynamic environments, this paper proposes a dynamic visual SLAM algorithm, FFD-SLAM, that fuses a target detection network with the optical flow method. The algorithm takes ORB-SLAM2 as its basic framework and adds a semantic thread running in parallel with the tracking thread. The semantic thread first obtains a candidate set of feature points by detecting dynamic objects in real time with YOLOv5; this set is then filtered by the optical flow module, and the remaining static feature points are used for the matching calculation. Experiments showed that the proposed algorithm improved localization accuracy by approximately 97% compared with ORB-SLAM2 in a highly dynamic environment, effectively improving the localization accuracy and robustness of the system, while also achieving higher real-time performance than several strong dynamic SLAM algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
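A hedged sketch of the flow-based filtering step described above: assuming the optical-flow vector of each candidate feature point is already available (the paper computes these in a dedicated module; here they are synthetic), points whose flow deviates strongly from the dominant scene motion are flagged as dynamic and excluded from matching.

```python
import numpy as np

def filter_dynamic_points(points, flows, k=3.0):
    """Keep points whose optical-flow vector is consistent with the
    dominant (median) scene motion; flag the rest as dynamic."""
    median_flow = np.median(flows, axis=0)
    dev = np.linalg.norm(flows - median_flow, axis=1)
    mad = np.median(dev) + 1e-9            # robust scale estimate
    static_mask = dev <= k * mad
    return points[static_mask], points[~static_mask]

# 40 background points moving ~(1, 0); 5 points on a moving object.
pts = np.arange(45.0).reshape(45, 1) * np.ones((1, 2))
flow = np.tile([1.0, 0.0], (45, 1))
flow[:5] = [8.0, 5.0]                      # the moving object's flow
static, dynamic = filter_dynamic_points(pts, flow)
```

The median/MAD rule is one simple choice of consistency test; any robust estimate of the camera-induced motion would serve the same role.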
191. Synchronizing dynamical systems: Their groupoids and C^*-algebras.
- Author
-
Deeley, Robin J. and Stocker, Andrew M.
- Subjects
- *
DYNAMICAL systems , *GROUPOIDS , *ORBIT method , *ALGEBRA , *POINT set theory - Abstract
Building on work of Ruelle and Putnam in the Smale space case, Thomsen defined the homoclinic and heteroclinic C^\ast-algebras for an expansive dynamical system. In this paper we define a class of expansive dynamical systems, called synchronizing dynamical systems, that exhibit hyperbolic behavior almost everywhere. Synchronizing dynamical systems generalize Smale spaces (and even finitely presented systems). Yet they still have desirable dynamical properties such as having a dense set of periodic points. We study various C^\ast-algebras associated with a synchronizing dynamical system. Among other results, we show that the homoclinic algebra of a synchronizing system contains an ideal which behaves like the homoclinic algebra of a Smale space. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
192. Adapting the centred simplex gradient to compensate for misaligned sample points.
- Author
-
Chen, Yiwen and Hare, Warren
- Subjects
POINT set theory ,MATHEMATICAL optimization - Abstract
The centred simplex gradient (CSG) is a popular gradient approximation technique in derivative-free optimization. Its computation requires a perfectly symmetric set of sample points and is known to provide an accuracy of $\mathcal{O}(\Delta^2)$, where $\Delta$ is the radius of the sampling set. In this paper, we consider the situation where the set of sample points is not perfectly symmetric. By adapting the formula for the CSG to compensate for the misaligned points, we define a new Adapted-CSG. We study the error bounds and the numerical stability of the Adapted-CSG. We also present numerical examples to demonstrate its properties relative to each new parameter and make a comparison to an alternative method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
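For the coordinate-aligned special case of a perfectly symmetric sample set $\{x_0 \pm \Delta e_i\}$, the CSG reduces to central differences. This is a sketch of that baseline with $\mathcal{O}(\Delta^2)$ accuracy, not the paper's Adapted-CSG for misaligned points:

```python
import numpy as np

def centred_simplex_gradient(f, x0, delta):
    """CSG over the symmetric sample set {x0 +/- delta*e_i}: component i is
    (f(x0 + delta*e_i) - f(x0 - delta*e_i)) / (2*delta)."""
    x0 = np.asarray(x0, dtype=float)
    g = np.empty_like(x0)
    for i in range(x0.size):
        e = np.zeros_like(x0)
        e[i] = delta
        g[i] = (f(x0 + e) - f(x0 - e)) / (2.0 * delta)
    return g

f = lambda x: x[0] ** 2 + 3.0 * x[1]       # true gradient at (1, 2) is (2, 3)
g = centred_simplex_gradient(f, [1.0, 2.0], delta=1e-3)
```

For this quadratic the central difference is exact up to rounding; the paper's contribution is precisely how to retain second-order accuracy when the $\pm$ pairs are not mirror images of each other.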
193. Random points are optimal for the approximation of Sobolev functions.
- Author
-
Krieg, David and Sonnleitner, Mathias
- Subjects
SOBOLEV spaces ,POINT set theory ,NUMERICAL integration - Abstract
We show that independent and uniformly distributed sampling points are asymptotically as good as optimal sampling points for the approximation of functions from Sobolev spaces $W_p^s(\Omega)$ on bounded convex domains $\Omega \subset \mathbb{R}^d$ in the $L_q$-norm if $q<p$.
- Published
- 2024
- Full Text
- View/download PDF
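The flavour of this result can be seen in a small experiment on $[0,1]$: approximation error for Sobolev functions is governed by a covering-type quantity of the sample set, and i.i.d. uniform points have a largest gap only a logarithmic factor worse than the equispaced optimum. This is a hedged illustration of why random points are competitive, not the paper's proof technique.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

def largest_gap(points):
    """Covering-type quantity on [0, 1]: the largest gap between
    consecutive sample points, including the two boundary gaps."""
    s = np.sort(points)
    return float(np.max(np.diff(np.concatenate(([0.0], s, [1.0])))))

uniform_random = rng.uniform(0.0, 1.0, n)
equispaced = (np.arange(n) + 0.5) / n

gap_random = largest_gap(uniform_random)   # typically on the order of log(n)/n
gap_equi = largest_gap(equispaced)         # exactly 1/n here
```

The random gap exceeds the equispaced one only by roughly a `log n` factor, which is consistent with random points being asymptotically order-optimal in the regime the abstract states.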
194. Sparse optimal control of Timoshenko's beam using a locking‐free finite element approximation.
- Author
-
Hernández, Erwin and Merino, Pedro
- Subjects
LINEAR orderings ,FINITE element method ,POINT set theory ,ACTUATORS ,COST functions - Abstract
This paper addresses the optimal control problem with sparse controls of a Timoshenko beam, its numerical approximation using the finite element method, and its numerical solution via nonsmooth methods. Incorporating sparsity-promoting terms in the cost function is practically useful for beam vibration models and results in the localization of the control action, which facilitates the placement of actuators or control devices. We consider two types of sparsity-inducing penalizers: the $L^1$-norm and the $L^0$-penalizer, which measures the support of a function. We analyze discretized problems utilizing linear finite elements with a locking-free scheme to approximate the states and adjoint states. We confirm that this approximation has the locking-free property required to achieve a linear order of approximation in the $L^1$ control case, and an order depending on the set of switching points in the $L^0$ case. This is similar to the purely $L^2$-norm penalized optimal control, where the order of approximation is independent of the thickness of the beam. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
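The sparsity mechanism itself is generic and can be shown on a toy problem: an $\ell^1$ term in the objective produces minimizers with exact zeros, which is what localizes the control action. Below is a minimal proximal-gradient (ISTA) sketch on a random least-squares problem — an illustration of $L^1$-induced sparsity under made-up data, not the paper's finite element discretization of the beam.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 60))
b = A @ np.concatenate([rng.normal(size=5), np.zeros(55)])  # sparse ground truth

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, alpha, iters=2000):
    """Proximal gradient method for 0.5*||A u - b||^2 + alpha*||u||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    u = np.zeros(A.shape[1])
    for _ in range(iters):
        u = soft_threshold(u - (A.T @ (A @ u - b)) / L, alpha / L)
    return u

u = ista(A, b, alpha=1.0)
sparsity = float(np.mean(u == 0.0))        # fraction of exactly-zero entries
```

The soft-threshold step sets small coefficients exactly to zero, so the recovered `u` has genuinely empty support outside a few entries — the discrete analogue of a control acting only on a small region of the beam.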
195. Design of a decentralized control law for variable area coupled tank systems using H∞ complementary sensitivity function.
- Author
-
K. R., Achu Govind, Mahapatra, Subhasish, and Ranjan Mahapatro, Soumya
- Subjects
POINT set theory ,ROBUST control - Abstract
This paper proposes a novel graphical approach to control the liquid levels of variable area coupled tank systems using a decentralized $H_\infty$ controller. Coupled tank systems are common benchmark systems consisting of two interconnected tanks. The primary goal is to maintain the liquid level in each tank at the desired set point by controlling the flow rate. This is attained with a decentralized method wherein the controller gains are obtained from the $(\mathcal{K}_\mathcal{P}, \mathcal{K}_\mathcal{I})$ plane. Besides, the $H_\infty$ robustness criterion $\|\mathcal{U}_\mathcal{T}(j\omega)\mathcal{F}(j\omega)\|_\infty \le 1$ is enforced to guarantee robustness. The proposed controller design aims to improve the system's disturbance rejection and tracking performance. The decentralized control structure uses a distributed control strategy, reducing the computational load and thereby improving control performance. Loop interactions are minimized with decouplers. Furthermore, first-order plus dead-time (FOPDT) models are derived for each of the decoupled subsystems. The servo and regulatory responses of the controller are verified with input and output disturbances, and the controller's response to system uncertainties and parameter variations is also studied. The simulation results demonstrate the effectiveness and robustness of the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
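An $H_\infty$ bound of the kind enforced above can be checked numerically by a dense frequency sweep. The FOPDT plant parameters and PI gains below are hypothetical placeholders, and the sweep evaluates the unweighted complementary sensitivity of a single loop rather than the paper's weighted multivariable test:

```python
import numpy as np

def hinf_norm(tf, omegas):
    """Approximate the H-infinity norm by a dense frequency sweep."""
    return float(np.max(np.abs([tf(1j * w) for w in omegas])))

# Hypothetical FOPDT plant K*exp(-theta*s)/(tau*s + 1) with a PI controller.
K, tau, theta = 1.0, 10.0, 1.0
Kp, Ki = 2.0, 0.2

def T(s):
    plant = K * np.exp(-theta * s) / (tau * s + 1.0)
    ctrl = Kp + Ki / s
    L = plant * ctrl
    return L / (1.0 + L)                   # complementary sensitivity

omegas = np.logspace(-3, 2, 2000)
peak = hinf_norm(T, omegas)
```

For these placeholder values the sweep peak sits essentially at 1 (the loop has generous stability margins), so a bound of the form $\|T\|_\infty \le 1$ holds; with a nontrivial weight, the same sweep would be applied to the weighted product.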
196. An End-to-End Geometric Characterization-aware Semantic Instance Segmentation Network for ALS Point Clouds.
- Author
-
Wang, Jinhong and Yao, Wei
- Subjects
POINT cloud ,AIRBORNE lasers ,POINT set theory - Abstract
Semantic instance segmentation of scenes plays a crucial role in 3D modelling and scene understanding. Existing state-of-the-art methods conduct semantic segmentation before grouping instances. However, without additional refinement, semantic errors propagate fully into the grouping stage, resulting in low overlap with the ground-truth instances. Furthermore, these methods focus on indoor-level scenes and are limited when directly applied to large-scale outdoor Airborne Laser Scanning (ALS) point clouds: numerous instances, significant object density, and scale variations make ALS point clouds distinct from indoor data. To address these problems, we propose a geometric characterization-aware semantic instance segmentation network, which utilizes both semantic and objectness scores to select potential points for grouping. In the point cloud feature learning stage, hand-crafted geometry features are taken as input for geometric characterization awareness. Moreover, to address errors propagated from previous modules after grouping, we additionally design a per-instance refinement module. To assess semantic instance segmentation, we conducted experiments on an open-source dataset, and we also performed semantic segmentation experiments to evaluate the performance of the proposed point cloud feature learning method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
197. Deterministic sampling based on Kullback–Leibler divergence and its applications.
- Author
-
Wang, Sumin and Sun, Fasheng
- Subjects
DISTRIBUTION (Probability theory) ,CONTINUOUS distributions ,GAUSSIAN processes ,POINT set theory ,PROBABILITY theory - Abstract
This paper introduces a new way to extract a set of representative points from a continuous distribution, focusing on a method in which the selection of points is essentially deterministic, with an emphasis on achieving accurate approximation when the number of points is small. These points are generated by minimizing the Kullback–Leibler divergence, an information-based measure of the disparity between two probability distributions; we refer to them as Kullback–Leibler points. Based on the link between the total variation and the Kullback–Leibler divergence, we prove that the empirical distribution of Kullback–Leibler points converges to the target distribution. Additionally, we illustrate that Kullback–Leibler points have advantages in simulations when compared with representative points generated by Monte Carlo or other representative-point methods. To avoid frequent evaluation of complex functions, a sequential version of Kullback–Leibler points is also proposed, which adaptively updates the representative points by learning about the complex or unknown functions sequentially. Two potential applications of Kullback–Leibler points, in the simulation of complex probability densities and the optimization of complex response surfaces, are discussed and demonstrated with examples. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
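A hedged one-dimensional sketch of the idea, not the paper's algorithm: choose representative points of a standard normal by greedily minimizing the KL divergence between the (discretized) target density and a kernel-smoothed distribution of the chosen points. The grid, bandwidth, and candidate set are assumptions made purely for illustration.

```python
import numpy as np

def normal_pdf(x, mu=0.0, sig=1.0):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2.0 * np.pi))

grid = np.linspace(-4.0, 4.0, 401)
p = normal_pdf(grid)
p /= p.sum()                               # discretized target distribution

def kl(p, q):
    """Kullback-Leibler divergence between two discrete distributions."""
    q = np.maximum(q, 1e-300)
    return float(np.sum(p * np.log(p / q)))

def smoothed(points, h=0.5):
    """Kernel-smoothed distribution of the chosen representative points."""
    q = np.sum([normal_pdf(grid, mu=x, sig=h) for x in points], axis=0)
    return q / q.sum()

# Greedily add the candidate point that most reduces the divergence.
candidates = np.linspace(-3.0, 3.0, 61)
points = []
for _ in range(7):
    best = min(candidates, key=lambda c: kl(p, smoothed(points + [c])))
    points.append(best)
```

Each greedy step can only decrease the divergence (duplicating an existing point reproduces the current mixture), so a handful of points already tracks the target closely — the regime the abstract emphasizes.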
198. Solving multi-attribute decision making problems with incomplete weights: A dual approach.
- Author
-
Byeong Seok Ahn
- Subjects
STATISTICAL decision making ,DECISION making ,LINEAR programming ,MULTIPLE criteria decision making ,POINT set theory - Abstract
This paper proposes a method of ranking discrete alternatives when attribute weights are incompletely known. There are a variety of situations in which it is reasonable to consider incomplete attribute weights, and several techniques have been developed to solve such multi-attribute decision making problems. Most frequently, a linear programming (LP) problem subject to a set of incomplete attribute weights is solved to identify dominance relations between alternatives. In this paper, we explore a dual problem to find a closed-form solution and determine the extreme points of a set of (strictly) ranked attribute weights. A simple investigation of the dual optimal solution often leads to a preferred alternative and, based on the primal-dual relationship, permits finding the optimal attribute weights applicable to the primal. Furthermore, we extend the approach to several examples of incomplete attribute weights and to linear partial information expressed as linear inequalities that satisfy some predefined conditions. Finally, we present a case study demonstrating how the dual approach can establish dominance between alternatives when preference orders are specified for a subset of alternatives. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
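For ranked attribute weights $w_1 \ge w_2 \ge \dots \ge w_n \ge 0$ with $\sum_i w_i = 1$, the extreme points have the well-known closed form $w^{(k)} = (1/k, \dots, 1/k, 0, \dots, 0)$, and by linearity a dominance check over the whole weight set reduces to checking those points. A minimal sketch with made-up attribute scores, not the paper's dual derivation or case study:

```python
import numpy as np

def ranked_weight_extreme_points(n):
    """Extreme points of {w : w1 >= ... >= wn >= 0, sum(w) = 1}:
    w^(k) = (1/k, ..., 1/k, 0, ..., 0) for k = 1..n."""
    return [np.concatenate([np.full(k, 1.0 / k), np.zeros(n - k)])
            for k in range(1, n + 1)]

def dominates(a, b):
    """a dominates b iff a's weighted score is at least b's at every
    extreme point, hence over the entire ranked-weight set."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    return all(float(w @ diff) >= 0.0
               for w in ranked_weight_extreme_points(len(diff)))

a = [0.8, 0.4, 0.6]   # attribute scores of two alternatives (illustrative)
b = [0.5, 0.5, 0.5]
```

Note that `a` dominates `b` here even though `b` scores higher on the second attribute: the ranked-weight structure caps how much weight that attribute can carry.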
199. New good large (n,r)-arcs in PG(2,29) and PG(2,31).
- Author
-
Daskalov, Rumen
- Subjects
PROJECTIVE planes ,FINITE fields ,LINEAR codes ,POINT set theory - Abstract
An (n, r)-arc is a set of n points of a projective plane such that some r, but no r + 1 of them, are collinear. The maximum size of an (n, r)-arc in PG(2, q) is denoted by m_r(2, q). In this article a (477, 18)-arc, a (596, 22)-arc and a (697, 25)-arc in PG(2,29), and a (598, 21)-arc, a (664, 23)-arc, a (699, 24)-arc, a (769, 26)-arc and a (838, 28)-arc in PG(2,31) are presented. The constructed arcs improve the respective lower bounds on m_r(2, 29) and m_r(2, 31) in [6]. As a consequence, there exist eight new three-dimensional linear codes over the respective finite fields. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
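Verifying an (n, r)-arc computationally reduces to a collinearity check: for prime q, three points of PG(2, q) given in homogeneous coordinates are collinear iff their 3×3 determinant vanishes mod q. A sketch on a tiny conic-type arc in PG(2, 3) — toy data, far smaller than the arcs constructed in the article:

```python
from itertools import combinations

def det3(a, b, c):
    """3x3 determinant of three homogeneous coordinate triples."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

def max_collinear(points, q):
    """Largest number of points of the set on a common line of PG(2, q);
    the set is an (n, r)-arc exactly when this value equals r."""
    best = 0
    for i, j in combinations(range(len(points)), 2):
        on_line = 2 + sum(1 for k in range(len(points))
                          if k != i and k != j
                          and det3(points[i], points[j], points[k]) % q == 0)
        best = max(best, on_line)
    return best

# Conic {(t^2, t, 1)} plus the point (1, 0, 0): a (4, 2)-arc in PG(2, 3).
oval = [(0, 0, 1), (1, 1, 1), (1, 2, 1), (1, 0, 0)]
```

Since `max_collinear(oval, 3)` is 2, no three of the four points are collinear, so the set is a (4, 2)-arc; the same check (with field arithmetic for prime powers) scales to certifying the large arcs of the article.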
200. FCDS-DETR: detection transformer based on feature correction and double sampling.
- Author
-
Wang, Min, Jiao, Zhiqiang, Huang, Zhanhua, and Yu, Shihang
- Subjects
- *
BIAS correction (Topology) , *PROBLEM solving , *POINT set theory - Abstract
The recently proposed semantic-aligned matching detection transformer (SAM–DETR) accelerates the convergence of the detection transformer (DETR) by mapping object queries into the same embedding space as the encoder's output feature map. However, SAM–DETR has lower detection accuracy than other DETR variants. We observe that this is caused by an insufficient number of sample points and inaccurate localization of the sample points during re-sampling, which blurs the generated attention map. This paper proposes an object detector based on feature correction and double sampling, FCDS-DETR, to solve this problem. FCDS-DETR takes SAM–DETR as a baseline and adds a feature correction module and a double sampling mechanism, achieving further improvement in detection accuracy with a limited number of additional parameters and without sacrificing convergence speed. First, FCDS-DETR improves the sampling-point localization accuracy by adding a feature correction module that models the inter-channel dependence of the feature maps to be sampled. Second, the number of sampled points is increased by the double sampling mechanism, and the attention weight maps corresponding to the two sets of sampled points are fused to improve their recognizability. The experimental results show that average precision improves by +0.7 on the COCO dataset compared with SAM–DETR, with only a 10.34% increase in the number of parameters, substantially improving the detection performance of the model. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library