28,802 results
Search Results
52. Comment on the Paper by Agnoli et al
- Author
-
N. Veall
- Subjects
Breathing ,Respiratory Dead Space ,Arterial input function ,Mechanics ,Deconvolution ,Mathematics ,Bolus injection - Abstract
The possibility of using a very short 133Xe inhalation time in order to simulate a bolus injection is a tempting one. We tried this modification and abandoned the idea because it introduces a major source of error. Provided the arterial input function is measured and the deconvolution is carried out on the head curve, the values obtained for the fast component clearance rate are independent of the breathing period if it is varied between 1 and 15 min. For shorter periods the values obtained for the fast component are higher. This is due to the fact that, in order to achieve adequate count rates, it is necessary to use a much higher concentration of 133Xe in the breathing mixture. Under these conditions the contribution to the observed count rate due to scattered radiation from 133Xe in the nasopharynx, which is usually negligible, is considerably accentuated. This contribution can be measured quite simply by inhaling 50 to 100 ml of 133Xe at the end of a normal inspiration so that only the respiratory dead space is filled with the tracer.
- Published
- 1969
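The deconvolution step described in the abstract can be sketched numerically: if the sampled arterial input a and the head curve c are related by discrete convolution, the tissue response h is recovered by forward substitution. This is only an illustrative sketch under simplifying assumptions (uniform sampling, noise-free data, a[0] ≠ 0), not the authors' measurement protocol.

```python
def convolve(a, h):
    # discrete convolution, truncated to len(a) samples: c[i] = sum_j a[j] * h[i-j]
    n = len(a)
    return [sum(a[j] * h[i - j] for j in range(i + 1)) for i in range(n)]

def deconvolve(c, a):
    # recover h from c = a * h by forward substitution on the
    # lower-triangular Toeplitz system; assumes a[0] != 0 and no noise
    n = len(c)
    h = [0.0] * n
    for i in range(n):
        s = sum(a[i - j] * h[j] for j in range(i))
        h[i] = (c[i] - s) / a[0]
    return h
```

Real head curves are noisy, so regularized deconvolution would be needed in practice; the sketch only shows why measuring the arterial input function makes the clearance estimate independent of the breathing period.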
53. Comments on the paper 'On a geometric theorem' by Henri Poincaré
- Author
-
Vladimir I. Arnold
- Subjects
Annulus (mathematics) ,Fixed point ,Mathematics::Geometric Topology ,symbols.namesake ,Poincaré conjecture ,symbols ,Geometric theorem ,Mathematics::Differential Geometry ,Mathematics::Symplectic Geometry ,Lagrangian ,Mathematics ,Symplectic geometry ,Symplectic manifold ,Morse theory ,Mathematical physics - Abstract
Associated with Poincaré’s “geometric theorem”, there are a number of proven and unproven propositions on fixed points of symplectic (or, as Poincaré called them, canonical) diffeomorphisms of symplectic manifolds more general than a circular annulus (or, even more generally, on intersections of so-called Lagrangian submanifolds of a symplectic manifold).
- Published
- 1972
54. On Kuramochi's paper 'Potentials on Riemann surfaces'
- Author
-
Makoto Ohtsuka
- Subjects
Riemann–Hurwitz formula ,symbols.namesake ,Extremal length ,Harmonic function ,Riemann surface ,Mathematical analysis ,symbols ,Boundary (topology) ,Function (mathematics) ,Mathematics - Abstract
In 1956 Z. Kuramochi [2] defined a new boundary for any open Riemann surface R. It is now called the Kuramochi boundary. Rigorous treatments of this boundary are found in [1] and [4]. It shares some properties with inner points of R. For instance, the values of every SHS function 2) are defined on the Kuramochi boundary in [1]. This was also done by Kuramochi [3], but his discussions were not quite clear.
- Published
- 1968
55. Contribution to Discussion of the Paper by P. G. Hodge jr
- Author
-
E. H. Lee
- Subjects
Yield (engineering) ,Flow (mathematics) ,Diagonal ,Diagram ,Mathematical analysis ,Boundary value problem ,Plasticity ,Strain rate ,Logarithmic spiral ,Mathematics - Abstract
Professor Hodge has clearly brought out the value of the use of the Tresca yield condition and associated flow rule in obtaining solutions of boundary value problems of plastic flow. In terms of Lode’s variables, this flow criterion appears to differ considerably from the experimental results of Taylor and Quinney: Phil. Trans. Roy. Soc., Lond., Ser. A 230, 323 (1931), and for this reason in the past the Mises flow law has been preferred in theoretical work. However, some recent solutions based on the Tresca criterion have been checked experimentally (J. Foulyes and E. T. Onat, Tests of behaviour of circular plates under transverse load, Brown University Report DA—3172/3, May 1955) and have been found to give remarkable agreement with experiment. In the problem of a loaded circular plate, the characteristics change from radial lines to logarithmic spirals at a certain radius and such a change was observed in markings on the plate surface. It seems therefore that solutions obtained by this method are more satisfactory than had been anticipated. The reason for this may be the concentration of the corresponding points in the Lode diagram in the region of the diagonal in problems of non-homogeneous stress and strain, owing to the freedom of the strain rate vector at the corners of the Tresca yield hexagon. It would seem worthwhile to look into this, for as Professor Hodge has shown, this technique offers an extremely powerful means of solution of plastic flow problems, which will be the more valuable when the basis for its accuracy is better understood.
- Published
- 1956
56. The sum-of-squares hierarchy on the sphere and applications in quantum information theory
- Author
-
Fang, K, Fawzi, H, Fang, Kun [0000-0002-9232-6846], Fawzi, Hamza [0000-0001-6026-4102], Apollo - University of Cambridge Repository, Fang, K [0000-0002-9232-6846], and Fawzi, H [0000-0001-6026-4102]
- Subjects
FOS: Computer and information sciences ,Unit sphere ,4902 Mathematical Physics ,General Mathematics ,0211 other engineering and technologies ,FOS: Physical sciences ,0102 computer and information sciences ,02 engineering and technology ,Computational Complexity (cs.CC) ,90C22 ,90C23 ,01 natural sciences ,Polynomial kernel ,4903 Numerical and Computational Mathematics ,81P42 ,FOS: Mathematics ,Quantum information ,Mathematics - Optimization and Control ,Mathematics ,Discrete mathematics ,Quantum Physics ,021103 operations research ,Hierarchy (mathematics) ,Full Length Paper ,4901 Applied Mathematics ,Explained sum of squares ,4904 Pure Mathematics ,Computer Science - Computational Complexity ,Separable state ,Rate of convergence ,Optimization and Control (math.OC) ,010201 computation theory & mathematics ,Homogeneous polynomial ,49 Mathematical Sciences ,Quantum Physics (quant-ph) ,Software - Abstract
We consider the problem of maximizing a homogeneous polynomial on the unit sphere and its hierarchy of sum-of-squares relaxations. Exploiting the polynomial kernel technique, we obtain a quadratic improvement of the known convergence rates by Reznick and by Doherty and Wehner. Specifically, we show that the rate of convergence is no worse than O(d²/ℓ²) in the regime ℓ = Ω(d), where ℓ is the level of the hierarchy and d the dimension, solving a problem left open in the recent paper by de Klerk and Laurent (arXiv:1904.08828). Importantly, our analysis also works for matrix-valued polynomials on the sphere, which has applications in quantum information for the Best Separable State problem. By exploiting the duality relation between sums of squares and the Doherty–Parrilo–Spedalieri hierarchy in quantum information theory, we show that our result generalizes to nonquadratic polynomials the convergence rates of Navascués, Owari and Plenio.
- Published
- 2020
57. A general double-proximal gradient algorithm for d.c. programming
- Author
-
Radu Ioan Boț and Sebastian Banert
- Subjects
General Mathematics ,Connection (vector bundle) ,Proximal-gradient algorithm ,0211 other engineering and technologies ,65K05 ,010103 numerical & computational mathematics ,02 engineering and technology ,01 natural sciences ,90C26 ,Convergence analysis ,Convergence (routing) ,FOS: Mathematics ,Point (geometry) ,Mathematics - Numerical Analysis ,0101 mathematics ,Mathematics - Optimization and Control ,Mathematics ,49M29 ,021103 operations research ,Concave function ,Toland dual ,Full Length Paper ,Regular polygon ,90C26, 90C30, 65K05 ,Numerical Analysis (math.NA) ,Linear map ,Iterated function ,Optimization and Control (math.OC) ,Convex function ,Algorithm ,d.c. programming ,Software ,Kurdyka–Łojasiewicz property - Abstract
The possibilities of exploiting the special structure of d.c. programs, which consist of optimizing the difference of convex functions, are currently more or less limited to variants of the DCA proposed by Pham Dinh Tao and Le Thi Hoai An in 1997. These assume that either the convex or the concave part, or both, are evaluated by one of their subgradients. In this paper we propose an algorithm which allows the evaluation of both the concave and the convex part by their proximal points. Additionally, we allow a smooth part, which is evaluated via its gradient. In the spirit of primal-dual splitting algorithms, the concave part might be the composition of a concave function with a linear operator, which are, however, evaluated separately. For this algorithm we show that every cluster point is a solution of the optimization problem. Furthermore, we show the connection to the Toland dual problem and prove a descent property for the objective function values of a primal-dual formulation of the problem. Convergence of the iterates is shown if this objective function satisfies the Kurdyka–Łojasiewicz property. In the last part, we apply the algorithm to an image processing model.
- Published
- 2018
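The idea of evaluating the concave part through its proximal point, rather than a subgradient as in the classical DCA, can be illustrated in one dimension. The sketch below is a simplified illustration and not the algorithm of the paper: it minimizes f(x) = x²/2 − |x| (minimizers ±1) by replacing the subgradient of h(x) = |x| with the gradient of its Moreau envelope, which is computed from the proximal point.

```python
def prox_abs(x, gamma):
    # proximal point of h(x) = |x|: soft-thresholding
    return max(abs(x) - gamma, 0.0) * (1.0 if x >= 0 else -1.0)

def dc_proximal_sketch(x0, gamma=0.5, iters=200):
    # minimize f(x) = g(x) - h(x) with g(x) = x**2 / 2 and h(x) = |x|
    x = x0
    for _ in range(iters):
        # (x - prox_{gamma*h}(x)) / gamma is the gradient of the
        # Moreau envelope of h at x (a smoothed subgradient)
        y = (x - prox_abs(x, gamma)) / gamma
        # proximal step on g: prox_{gamma*g}(v) = v / (1 + gamma) for g(x) = x**2/2
        x = (x + gamma * y) / (1.0 + gamma)
    return x
```

Starting from x0 = 3 the iterates decrease monotonically to the stationary point x = 1; starting from −3 they increase to −1.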
58. Towards planning of osteotomy around the knee with quantitative inclusion of the adduction moment: a biomechanical approach
- Author
-
Philipp Damm, Adam Trepczynski, Margit Biehl, Gian M. Salzmann, Stefan Preiss, and Publica
- Subjects
Orthopedic surgery ,Orthodontics ,Supracondylar osteotomy ,Original Paper ,Leg alignment ,Biomechanics of osteotomy ,medicine.medical_treatment ,Osteotomy ,Skeleton (computer programming) ,Medial compartment force ratio ,Target angle ,High tibial osteotomy ,Gait analysis ,medicine ,Adduction moment ,Torque ,Orthopedics and Sports Medicine ,Femur ,Epicondyle ,RD701-811 ,Mathematics - Abstract
Purpose Despite being practised for decades, the planning of osteotomy around the knee, commonly using the Mikulicz-Line, is only empirically based, clinical outcomes are inconsistent and the target angle is still controversial. A better target than the angle of frontal-plane static leg alignment might be the external frontal-plane lever arm (EFL) of the knee adduction moment. Hypothetically assessable from frontal-plane-radiograph skeleton dimensions, it might depend on the leg-alignment angle, the hip-centre-to-hip-centre distance, and the femur and tibia lengths. Methods The target EFL to achieve a medial compartment force ratio of 50% during level walking was identified by relating in-vivo measurement data of knee-internal loads from nine subjects with instrumented prostheses to the same subjects’ EFLs computed from frontal-plane skeleton dimensions. Adduction moments derived from these calculated EFLs were compared to the subjects’ adduction moments measured during gait analysis. Results Highly significant relationships (0.88 ≤ R2 ≤ 0.90) were found relating both the peak adduction moment measured during gait analysis and the medial compartment force ratio measured in vivo to the EFL calculated from frontal-plane skeleton dimensions. Both correlations exceed the respective correlations with the leg-alignment angle, and EFL even predicts the adduction moment’s first peak. The guideline EFL for planning osteotomy was identified as 0.349 times the epicondyle distance, from which formulas for individualized target angles and Mikulicz-Line positions based on full-leg-radiograph skeleton dimensions were deduced. Applied to realistic skeleton geometries, the widespread results explain the inconsistency of correction recommendations, whereas results for average geometries exactly meet the widely accepted “Fujisawa-Point”. Conclusion Osteotomy outcome might be improved by planning re-alignment based on the provided formulas exploiting full-leg-radiograph skeleton dimensions.
- Published
- 2021
59. Positional encoding in cotton-top tamarins (Saguinus oedipus)
- Author
-
Natalie Shelton-May, Jessica R. Rogge, Elisabetta Versace, Andrea Ravignani, Artificial Intelligence, and Informatics and Applied Informatics
- Subjects
0106 biological sciences ,Male ,Similarity (geometry) ,Artificial grammar learning ,Computer science ,Movement ,Rule learning ,Experimental and Cognitive Psychology ,Cotton-top tamarins ,Relative position ,010603 evolutionary biology ,01 natural sciences ,050105 experimental psychology ,Task (project management) ,03 medical and health sciences ,0302 clinical medicine ,Generalization (learning) ,Encoding (memory) ,Animals ,Learning ,0501 psychology and cognitive sciences ,050102 behavioral science & comparative psychology ,Non-adjacent dependency ,Ecology, Evolution, Behavior and Systematics ,Mathematics ,Original Paper ,biology ,business.industry ,05 social sciences ,Pattern recognition ,biology.organism_classification ,Saguinus oedipus ,Positional rule ,Female ,Artificial intelligence ,business ,Absolute position ,Saguinus ,Reinforcement, Psychology ,030217 neurology & neurosurgery - Abstract
Strategies used in artificial grammar learning can shed light on the abilities of different species to extract regularities from the environment. In the A(X)nB rule, A and B items are linked but assigned to different positional categories and separated by distractor items. Open questions are how widespread the ability to extract positional regularities from A(X)nB patterns is, which strategies are used to encode positional regularities, and whether individuals exhibit preferences for absolute or relative position encoding. We used visual arrays to investigate whether cotton-top tamarins (Saguinus oedipus) can learn this rule and which strategies they use. After training on a subset of exemplars, half of the tested monkeys successfully generalized to novel combinations. These tamarins discriminated between categories of tokens with different properties (A, B, X) and detected a positional relationship between non-adjacent items even in the presence of novel distractors. Generalization, though, was incomplete, since we observed a failure with items that during training had always been presented in reinforced arrays. The pattern of errors revealed that successful subjects used visual similarity with training stimuli to solve the task, and that tamarins extracted the relative position of As and Bs rather than their absolute position, similar to what has been observed in other species. Relative position encoding appears to be the default strategy across different tasks and taxa.
- Published
- 2019
60. Second-order work in barodesy
- Author
-
Gertraud Medicus, Dimitrios Kolymbas, and Barbara Schneider-Muntau
- Subjects
Work (thermodynamics) ,Barodesy ,Second-order work ,010102 general mathematics ,Constitutive equation ,0211 other engineering and technologies ,Finite element simulations ,02 engineering and technology ,Mechanics ,Constitutive model ,Geotechnical Engineering and Engineering Geology ,01 natural sciences ,Finite element method ,Stress (mechanics) ,Solid mechanics ,Earth and Planetary Sciences (miscellaneous) ,Boundary value problem ,Limit (mathematics) ,0101 mathematics ,Shear band ,021101 geological & geomatics engineering ,Mathematics ,Research Paper - Abstract
Second-order work analyses, based on elasto-plastic models, have been frequently carried out leading to the result that failure may occur before the limit yield condition is encountered. In this article, second-order work investigations are carried out with barodesy regarding standard element tests and finite element applications. In barodesy, it was shown—like in hypoplasticity and elasto-plasticity—that second-order work may vanish at stress states inside the critical limit surface. For boundary value problems, an end-to-end shear band of vanishing second-order work marks situations, where failure is imminent.
- Published
- 2018
61. Leaner and greener analysis of cannabinoids
- Author
-
Elizabeth Mudge, Paula N. Brown, and Susan J. Murch
- Subjects
Cannabaceae ,Relative standard deviation ,Single-laboratory validation ,Nanotechnology ,Flowers ,Raw material ,Cannabis sativa ,01 natural sciences ,Biochemistry ,Analytical Chemistry ,Limit of Detection ,Chromatography, High Pressure Liquid ,Mathematics ,Cannabis ,Detection limit ,biology ,010405 organic chemistry ,Cannabinoids ,010401 analytical chemistry ,Reproducibility of Results ,Green Chemistry Technology ,Repeatability ,Pulp and paper industry ,biology.organism_classification ,0104 chemical sciences ,Medical services ,Green chemistry ,Solvents ,Medical marijuana ,Research Paper - Abstract
There is an explosion in the number of labs analyzing cannabinoids in marijuana (Cannabis sativa L., Cannabaceae) but existing methods are inefficient, require expert analysts, and use large volumes of potentially environmentally damaging solvents. The objective of this work was to develop and validate an accurate method for analyzing cannabinoids in cannabis raw materials and finished products that is more efficient and uses fewer toxic solvents. An HPLC-DAD method was developed for eight cannabinoids in cannabis flowers and oils using a statistically guided optimization plan based on the principles of green chemistry. A single-laboratory validation determined the linearity, selectivity, accuracy, repeatability, intermediate precision, limit of detection, and limit of quantitation of the method. Amounts of individual cannabinoids above the limit of quantitation in the flowers ranged from 0.02 to 14.9% w/w, with repeatability ranging from 0.78 to 10.08% relative standard deviation. The intermediate precision determined using HorRat ratios ranged from 0.3 to 2.0. The LOQs for individual cannabinoids in flowers ranged from 0.02 to 0.17% w/w. This is a significant improvement over previous methods and is suitable for a wide range of applications including regulatory compliance, clinical studies, direct patient medical services, and commercial suppliers. Electronic supplementary material The online version of this article (doi:10.1007/s00216-017-0256-3) contains supplementary material, which is available to authorized users.
- Published
- 2017
62. The use of B-splines to represent the topography of river networks
- Author
-
Michael Schmidt, Eva Boergens, and Florian Seitz
- Subjects
geography ,geography.geographical_feature_category ,Mean squared error ,Neighbourhood (graph theory) ,Geodesy ,Standard deviation ,Water level ,ddc ,Root mean square ,Modeling and Simulation ,Outlier ,Tributary ,General Earth and Planetary Sciences ,Limit (mathematics) ,Original Paper ,B-Splines ,Directed tree graph ,River network ,River topography ,86A30 Geodesy ,mapping problems ,86A32 Geostatistics ,41A15 Spline approximation ,65D07 Splines ,05C05 Trees ,Mathematics - Abstract
This work presents a new extension to B-splines that enables them to model functions on directed tree graphs such as non-braided river networks. The main challenge of the application of B-splines to graphs is their definition in the neighbourhood of nodes with more than two incident edges. Achieving that the B-splines are continuous at these points is non-trivial. Both for simplicity and in view of our application, we limit the graphs to directed tree graphs. To fulfil the requirement of continuity, the knots defining the B-splines need to be located symmetrically along the edges with the same direction. With such defined B-splines, we approximate the topography of the Mekong River system from scattered height data along the river. To this end, we first test and validate the method successfully with synthetic water-level data, with and without an added annual signal. The quality of the resulting heights is assessed, among other measures, by means of root mean square errors (RMSE) and mean absolute differences (MAD). The RMSE values are 0.26 m and 1.05 m without and with added annual variation, respectively, and the MAD values are even lower at 0.11 m and 0.60 m. For the second test, we use real water-level observations measured by satellite altimetry. Again, we successfully estimate the river topography, but also discuss the shortcomings and problems with unevenly distributed data. The unevenly distributed data leads to some very large outliers close to the upstream ends of the river's tributaries and in regions with rapidly changing topography such as the Mekong Falls. Without outlier removal the standard deviation of the resulting heights can be as large as 50 m, with a mean value of 15.73 m. After outlier removal the mean standard deviation drops to 8.34 m.
- Published
- 2020
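The B-spline basis functions underlying such a model can be evaluated with the standard Cox–de Boor recursion. The snippet below is a generic illustration of that recursion and does not reproduce the paper's graph-specific knot placement.

```python
def bspline_basis(i, k, t, knots):
    # Cox-de Boor recursion: value of the i-th B-spline basis of degree k at t
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    denom = knots[i + k] - knots[i]
    if denom > 0:
        left = (t - knots[i]) / denom * bspline_basis(i, k - 1, t, knots)
    denom = knots[i + k + 1] - knots[i + 1]
    if denom > 0:
        right = (knots[i + k + 1] - t) / denom * bspline_basis(i + 1, k - 1, t, knots)
    return left + right
```

On uniform knots the quadratic basis functions sum to one in the interior of the knot span, which is the continuity property that the symmetric knot placement along edges is designed to preserve at the nodes of the tree graph.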
63. Generating subtour elimination constraints for the TSP from pure integer solutions
- Author
-
Ulrich Pferschy and Rostislav Staněk
- Subjects
90C27 ,0209 industrial biotechnology ,Speedup ,0211 other engineering and technologies ,02 engineering and technology ,Management Science and Operations Research ,Travelling salesman problem ,Combinatorics ,Traveling salesman problem ,020901 industrial engineering & automation ,Integer ,Simple (abstract algebra) ,Euclidean geometry ,FOS: Mathematics ,Cluster analysis ,Mathematics - Optimization and Control ,Mathematics ,Discrete mathematics ,ILP solver ,Original Paper ,021103 operations research ,Subtour elimination constraint ,Complete graph ,Random Euclidean graph ,Optimization and Control (math.OC) ,Branch and cut ,MathematicsofComputing_DISCRETEMATHEMATICS - Abstract
The traveling salesman problem (TSP) is one of the most prominent combinatorial optimization problems. Given a complete graph G = (V, E) and non-negative distances d for every edge, the TSP asks for a shortest tour through all vertices with respect to the distances d. The method of choice for solving the TSP to optimality is a branch and cut approach. Usually the integrality constraints are relaxed first and all separation processes to identify violated inequalities are done on fractional solutions. In our approach we try to exploit the impressive performance of current ILP solvers and work only with integer solutions, without ever interfering with fractional solutions. We stick to a very simple ILP model and relax only the subtour elimination constraints. The resulting problem is solved to integer optimality, violated constraints (which are trivial to find) are added, and the process is repeated until a feasible solution is found. In order to speed up the algorithm we pursue several attempts to find as many relevant subtours as possible. These attempts are based on the clustering of vertices, with additional insights gained from empirical observations and random graph theory. Computational results are reported on test instances taken from the TSPLIB95 and on random Euclidean graphs.
- Published
- 2016
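The separation step the abstract calls "trivial to find" amounts to extracting the connected components of the edges chosen in the current integer solution; each component smaller than the full vertex set witnesses a violated subtour elimination constraint. A minimal sketch of that step (the ILP solver itself is omitted):

```python
def find_subtours(n, edges):
    # edges: pairs (u, v) with x_uv = 1 in the current integer solution
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, tours = set(), []
    for start in range(n):
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.append(v)
            stack.extend(adj[v])
        tours.append(comp)
    # every component S with |S| < n yields the violated cut:
    #   sum of x_e over edges with both ends in S  <=  |S| - 1
    return tours
```

Each such component is turned into a subtour elimination constraint, the ILP is re-solved to integer optimality, and the loop repeats until a single tour covers all vertices.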
64. A stabilized finite element method for finite-strain three-field poroelasticity
- Author
-
Rafel Bordas, David Kay, Simon Tavener, and Lorenz Berger
- Subjects
Original Paper ,Applied Mathematics ,Mechanical Engineering ,Mathematical analysis ,Bandwidth (signal processing) ,Poromechanics ,Fluid flux ,Computational Mechanics ,Ocean Engineering ,010103 numerical & computational mathematics ,01 natural sciences ,Finite element method ,010101 applied mathematics ,Computational Mathematics ,symbols.namesake ,Computational Theory and Mathematics ,Lagrange multiplier ,Finite strain theory ,Compressibility ,symbols ,Boundary value problem ,0101 mathematics ,Mathematics - Abstract
We construct a stabilized finite-element method to compute flow and finite-strain deformations in an incompressible poroelastic medium. We employ a three-field mixed formulation to calculate displacement, fluid flux and pressure directly and introduce a Lagrange multiplier to enforce flux boundary conditions. We use a low order approximation, namely, continuous piecewise-linear approximation for the displacements and fluid flux, and piecewise-constant approximation for the pressure. This results in a simple matrix structure with low bandwidth. The method is stable in both the limiting cases of small and large permeability. Moreover, the discontinuous pressure space enables efficient approximation of steep gradients such as those occurring due to rapidly changing material coefficients or boundary conditions, both of which are commonly seen in physical and biological applications.
- Published
- 2017
65. Inequality in USA mathematics education: the roles race and socio-economic status play
- Author
-
Schmidt, William H., Guo, Siwen, and Sullivan, William F.
- Published
- 2024
66. Combine DPC and TDM for MIMO Broadcast Channels in Circuit Data Scenarios
- Author
-
Guocheng Lv, Da Wang, Ye Jin, Mingke Dong, and Yingbo Li
- Subjects
Scheme (programming language) ,Gaussian ,MIMO ,Data_CODINGANDINFORMATIONTHEORY ,Covariance ,Complex normal distribution ,Power (physics) ,symbols.namesake ,Time-division multiplexing ,symbols ,Electronic engineering ,Dirty paper coding ,computer ,Computer Science::Information Theory ,Mathematics ,computer.programming_language - Abstract
Dirty paper coding (DPC) is shown to achieve the capacity of multiple-input multiple-output (MIMO) Gaussian broadcast channels (BCs). Finding the optimal covariance matrices and user order for maximizing user capacity requires high-complexity computation. To deal with this problem, many researchers use a fixed order of users, e.g. minimum power first (MPF), and obtain subpar performance. Meanwhile, the complexity of the DPC scheme grows linearly with the number of users whose known interference must be cancelled. In this paper we present a scheme combining DPC and traditional Time Division Multiplexing (TDM) that aims to maximize user capacity and reduce implementation complexity. Simulation results show that the proposed scheme achieves better performance than the DPC MPF scheme.
- Published
- 2012
67. The Linear Convergence of a Merit Function Method for Nonlinear Complementarity Problems
- Author
-
Xiaoqin Jiang and Liyong Lu
- Subjects
Mathematical optimization ,Rate of convergence ,Complementarity theory ,Merit function ,Short paper ,Convergence (routing) ,Nonlinear complementarity ,Mixed complementarity problem ,Mathematics - Abstract
Based on a family of generalized merit functions, a merit function method for solving nonlinear complementarity problems was proposed by Lu, Huang and Hu [Properties of a family of merit functions and a merit function method for the NCP, Appl. Math.–J. Chinese Univ., 2010, 25: 379–390], where the global convergence of the method was proved. However, no result on the convergence rate of the method was reported. In this short paper, we show that the method proposed in the above paper is globally linearly convergent under suitable assumptions.
- Published
- 2012
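The abstract does not spell out the family of merit functions; a standard member of such families is the Fischer–Burmeister function, used here only as an illustrative stand-in. For the NCP "find x ≥ 0 with F(x) ≥ 0 and xᵀF(x) = 0", the associated merit function vanishes exactly at solutions, so the NCP reduces to an unconstrained minimization.

```python
import math

def fischer_burmeister(a, b):
    # phi(a, b) = sqrt(a^2 + b^2) - a - b
    # phi(a, b) = 0  iff  a >= 0, b >= 0 and a * b = 0
    return math.sqrt(a * a + b * b) - a - b

def merit(x, F):
    # Psi(x) = 1/2 * sum_i phi(x_i, F_i(x))^2; Psi(x) = 0 iff x solves the NCP
    Fx = F(x)
    return 0.5 * sum(fischer_burmeister(xi, fi) ** 2 for xi, fi in zip(x, Fx))
```

For the one-dimensional example F(x) = x − 1 the unique solution is x = 1, where the merit function is zero; minimizing Psi with a descent method is what the paper shows to converge globally linearly under suitable assumptions.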
68. Lead versus lag-time trade-off variants: does it make any difference?
- Author
-
Federico Augustovski, Mark Oppe, Nancy Devlin, Lucila Rey-Ares, and Vilma Irazola
- Subjects
Quality of life ,Adult ,Male ,medicine.medical_specialty ,Time Factors ,Adolescent ,Health Status ,Economics, Econometrics and Finance (miscellaneous) ,Pilot Projects ,Trade-off ,Time-trade-off ,law.invention ,C93 ,Interviews as Topic ,Young Adult ,Lag time ,Randomized controlled trial ,law ,Surveys and Questionnaires ,Medicine ,Humans ,I10 ,Worse than dead ,Mathematics ,Aged ,Aged, 80 and over ,Original Paper ,Lag-time TTO ,Group interview ,business.industry ,Public health ,Health Policy ,Significant difference ,Public Health, Environmental and Occupational Health ,Mean age ,Middle Aged ,Time trade-off ,humanities ,Health states ,EQ-5D-5L ,Lead-time TTO ,Quota sampling ,D01 ,Female ,business ,Demography - Abstract
Objectives The traditional time trade-off (TTO) method has some problems in the valuation of health states considered worse than dead. The aim of our study is to compare two TTO variants that address this issue: lead-time and lag-time TTO. Methods Quota sampling was undertaken in June 2011 in Buenos Aires as part of the EQ-5D-5L Multinational Pilot Study. Respondents were randomly assigned to one of the TTO variants with two blocks of five EQ-5D-5L health states. Tasks were administered in a group interview using a web-based digital aid (EQ-VT). Results A total of 387 participants were included [mean age 38.85 (SD: 13.97); 53.14 % females]. The mean observed values ranged from 0.44 (0.59) for state 21111 to 0.02 (0.76) for state 53555 in the lead-time group and between 0.53 (0.52) and 0.08 (0.76) in the lag-time group. There were no statistically significant differences in the values between TTO variants, except for a significant difference of 0.19 for state 33133. In both variants, marked peaks were observed around the value 0 across all states, with a higher percentage of 0 responses in the last state valued, suggesting ordering effects. Conclusions No important differences were found between TTO variants regarding values for EQ-5D-5L health states, suggesting that they could be equivalent variants. However, differences between the two methods may have been obscured by other aspects of the study design affecting the characteristics of the data.
- Published
- 2013
69. Wheat and Rice Straw Fibers
- Author
-
Yiqi Yang and Narendra Reddy
- Subjects
animal structures ,biology ,Pulp (paper) ,Fineness ,food and beverages ,Rice straw ,engineering.material ,Straw ,biology.organism_classification ,Kenaf ,chemistry.chemical_compound ,chemistry ,Agronomy ,Fodder ,Ultimate tensile strength ,engineering ,Lignin ,Mathematics - Abstract
Wheat is the fourth most popular crop in the world with a production of 675 million tons in 2012. About 1–1.2 tons of straw are generated per acre and wheat straw accounts for about 50 % by weight of the cereal produced. Straw is mainly used as animal fodder and bedding, for thatching, and for artistic works, and in many countries, wheat straw is burnt to prevent soilborne diseases. Extensive studies have been done to understand the potential of using wheat straw for pulp and paper production. However, wheat straw has a waxy covering on the surface and a unique morphological structure that makes it difficult for alkali to penetrate into the straw and separate fiber bundles with the length, fineness, and tensile properties required for textile and other high-value fibrous applications. As seen in Fig. 3.1, the individual cells or ultimate fibers in wheat straw have serrated edges that get interlocked with each other. It was found that a pretreatment with detergent and mechanical separation with steel balls were necessary before the alkaline treatment to obtain fiber bundles from wheat straw [07Red]. Fiber bundles obtained from wheat straw had tensile properties similar to kenaf as seen in Table 3.1. About 20 % fibers were obtained, but the fiber bundles obtained were considerably coarser than cotton and linen.
- Published
- 2014
70. A Kernel P Systems Survey
- Author
-
Florentin Ipate and Marian Gheorghe
- Subjects
Algebra ,Discrete mathematics ,Development (topology) ,Kernel (statistics) ,Short paper ,Boolean expression ,Mathematics ,Basic class - Abstract
In this short paper we overview the two years of development of kernel P systems (kP systems for short), a basic class of P systems combining features of different variants of such systems. The definition of kP systems is given, some examples illustrate various features of the model, and the most significant results are presented.
- Published
- 2014
71. Real-time estimation of biomass and specific growth rate in physiologically variable recombinant fed-batch processes
- Author
-
Patrick Wechselberger, Christoph Herwig, and Patrick Sagmeister
- Subjects
0106 biological sciences ,Bioconversion ,Biomass ,Bioengineering ,Cell morphology ,01 natural sciences ,7. Clean energy ,Models, Biological ,Process model ,03 medical and health sciences ,Bioreactors ,010608 biotechnology ,Bioreactor ,Bioprocess ,030304 developmental biology ,Mathematics ,0303 health sciences ,Original Paper ,Real-time biomass quantification ,Process analytical technology (PAT) ,business.industry ,General Medicine ,Soft sensor ,Recombinant protein production ,Biotechnology ,Variable (computer science) ,Industrial and production engineering ,business ,Biological system - Abstract
The real-time measurement of biomass has been addressed for many years. The quantification of biomass in the induction phase of a recombinant bioprocess is not straightforward, since the biological burden caused by protein expression can have a significant impact on cell morphology and physiology. This variability potentially leads to poor generalization of the biomass estimation, which is a serious issue in the dynamic field of process development with frequently changing processes and producer lines. We present a method to quantify "biomass" in real time that avoids off-line sampling and the need for representative training data sets. This generally applicable soft sensor, based on first principles, was used for the quantification of biomass in induced recombinant fed-batch processes. Results were compared with state-of-the-art methods for estimating the biomass concentration and the specific growth rate µ. Gross errors, such as wrong stoichiometric assumptions or sensor failure, were detected automatically. In contrast to other process models, this method allows for variable model coefficients such as yields, and hence does not require prior experiments. It can easily be adapted to a different growth stoichiometry, so the method generalizes well, also for induced culture mode. This approach estimates the biomass (or anabolic bioconversion) in induced fed-batch cultures in real time and provides this key variable to process development for control purposes.
- Published
- 2012
72. Formal Concept Analysis
- Author
-
Rokia Missaoui and Jürg Schmid
- Subjects
Theoretical computer science ,Data visualization ,Order theory ,business.industry ,Restructuring ,Thriving ,Formal concept analysis ,Position paper ,Lattice Miner ,business ,Data science ,Conceptual hierarchy ,Mathematics - Abstract
This volume contains selected papers of ICFCA 2006, the 4th International Conference on Formal Concept Analysis. The ICFCA conference series aims to be the prime forum for the publication of advances in applied lattice and order theory and in particular scientific advances related to formal concept analysis. Formal concept analysis is a field of applied mathematics with its mathematical root in order theory, in particular the theory of complete lattices. Researchers had long been aware of the fact that these fields have many potential applications. Formal concept analysis emerged in the 1980s from efforts to restructure lattice theory to promote better communication between lattice theorists and potential users of lattice theory. The key theme was the mathematical formalization of concept and conceptual hierarchy. Since then, the field has developed into a growing research area in its own right with a thriving theoretical community and an increasing number of applications in data and knowledge processing including data visualization, information retrieval, machine learning, data analysis and knowledge management. ICFCA 2006 reflected both practical benefits and progress in the foundational theory of formal concept analysis. This volume contains four lecture notes from invited speakers and 17 regular papers, among them one position paper. All regular papers appearing in these proceedings were refereed by at least two, in most cases three, independent reviewers. The final decision to accept the papers was arbitrated by the Program Chairs based on the referee reports.
- Published
- 2006
73. Origami, Linkages, and Polyhedra: Folding with Algorithms
- Author
-
Erik D. Demaine
- Subjects
Surface (mathematics) ,Polyhedron ,law ,Mathematics of paper folding ,Protein folding ,Linkage (mechanical) ,Folding (DSP implementation) ,Graphics ,Computational geometry ,Algorithm ,law.invention ,Mathematics - Abstract
What forms of origami can be designed automatically by algorithms? What shapes can result by folding a piece of paper flat and making one complete straight cut? What polyhedra can be cut along their surface and unfolded into a flat piece of paper without overlap? When can a linkage of rigid bars be untangled or folded into a desired configuration? Folding and unfolding is a branch of discrete and computational geometry that addresses these and many other intriguing questions. I will give a taste of the many results that have been proved in the past few years, as well as several exciting problems that remain open. Many folding problems have applications in areas including manufacturing, robotics, graphics, and protein folding.
- Published
- 2006
74. Robust Estimation of Amplitude Modification for Scalar Costa Scheme Based Audio Watermark Detection
- Author
-
Siho Kim and Keunsung Bae
- Subjects
Spread spectrum ,Audio signal ,Audio watermark ,Computational complexity theory ,Quantization (signal processing) ,Computer Science::Multimedia ,Electronic engineering ,Watermark ,Dirty paper coding ,Algorithm ,Digital watermarking ,Mathematics - Abstract
Recently, informed watermarking schemes based on Costa's dirty-paper coding have been drawing more attention than spread-spectrum techniques, because such algorithms do not need the original host signal for watermark detection, and the host signal does not affect detection performance. For practical implementation, they mostly use uniform scalar quantizers, which are very vulnerable to amplitude modification. Hence, it is necessary to estimate the amplitude modification, i.e., the modified quantization step size, before watermark detection. In this paper, we propose a robust algorithm for estimating the modified quantization step size over an optimal search interval. It searches for the quantization step size that minimizes the quantization error of the received audio signal. It does not encroach on the space for embedding the watermark message, because it uses only the received signal itself to estimate the quantization step size. The optimal search interval is determined to satisfy both detection performance and computational complexity requirements. Experimental results show that the proposed algorithm can estimate the modified quantization step size accurately under amplitude modification attacks.
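The estimation idea described in the abstract can be illustrated with a small grid-search sketch: pick the candidate step size that minimizes the mean squared quantization error of the received samples. This is a simplified model for illustration only (it ignores the watermark dither and the paper's derivation of the optimal search interval; all names are invented here):

```python
def quantization_error(samples, step):
    # Mean squared distance of each sample to its nearest quantization level.
    return sum((x - step * round(x / step)) ** 2 for x in samples) / len(samples)

def estimate_step(samples, candidates):
    # Grid search: the candidate step size with minimal quantization error.
    return min(candidates, key=lambda d: quantization_error(samples, d))

# Toy demonstration: a signal quantized with step 0.8 is attenuated by a
# factor 0.5 (an amplitude modification); searching the interval [0.3, 0.5]
# should recover the modified step size of about 0.4.
original = [0.8 * k for k in (-3, -1, 0, 2, 5)]
received = [0.5 * x for x in original]
candidates = [0.30 + 0.005 * i for i in range(41)]   # 0.30 ... 0.50
print(estimate_step(received, candidates))           # close to 0.4
```

Restricting the search interval matters: any divisor of the true step also yields zero quantization error, which is one reason the paper emphasizes choosing the interval carefully.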
- Published
- 2005
75. Research on curriculum resources in mathematics education: a survey of the field
- Author
-
Rezat, Sebastian
- Published
- 2024
- Full Text
- View/download PDF
76. Approximate Concepts Based on N-Scale Relation
- Author
-
Qing Wan and Ling Wei
- Subjects
Discrete mathematics ,Algebra ,Knowledge extraction ,Lattice (order) ,Formal concept analysis ,Paper based ,Mathematics - Abstract
As an efficient tool for knowledge discovery, formal concept analysis has received growing attention and has been applied to many fields in recent years. By studying the n-scale relation defined in this paper on a formal context, we obtain new knowledge: left neighborhood sets and right neighborhood sets, which, like the set of extents, belong to the powerset of the partition of the object set. In particular, when n takes a special value, the corresponding left and right neighborhood approximate concepts are join-dense subsets of the property-oriented concept lattice and of the concept lattice of the formal context, respectively. The whole lattices can then be obtained.
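For readers unfamiliar with the machinery, the standard derivation operators underlying concept lattices can be sketched on a tiny formal context (this illustrates ordinary formal concepts, not the n-scale relation defined in the paper; the context below is made up):

```python
# A formal context: each object mapped to its set of attributes.
context = {
    "1": {"a", "b"},
    "2": {"a", "c"},
    "3": {"a", "b", "c"},
}

def intent(objects):
    # Attributes common to all given objects.
    attrs = set.union(*context.values())
    for o in objects:
        attrs &= context[o]
    return attrs

def extent(attributes):
    # Objects possessing all given attributes.
    return {o for o, a in context.items() if attributes <= a}

# A formal concept is a pair (A, B) with extent(B) == A and intent(A) == B.
A = extent({"a", "b"})
print(sorted(A), sorted(intent(A)))   # ['1', '3'] ['a', 'b']
```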
- Published
- 2012
77. Mathematics and Music Boxes
- Author
-
Vi Hart
- Subjects
symbols.namesake ,Music theory ,Paper tape ,symbols ,Literal (computer programming) ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,Möbius strip ,Arithmetic ,Topology (chemistry) ,Musical form ,Mathematics - Abstract
Music boxes that play a paper tape are fantastic tools for visually demonstrating some of the mathematical concepts in musical structure. The written notes of a piece can be transformed physically through reflections and rotations, and then easily played on the music box. Principles of topology can be demonstrated by playing loops and Möbius strips. Written music can also be transformed into different types of canons by sending it through multiple music boxes.
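The reflections and rotations of the paper tape correspond to classical transformations of written music, which can be sketched as coordinate transforms on (time, pitch) pairs (a hypothetical encoding chosen purely for illustration):

```python
# Notes as (time, pitch) pairs; symmetries of the punched tape become
# simple coordinate transforms.
def retrograde(notes):
    # Reflect in time: play the piece backwards.
    end = max(t for t, _ in notes)
    return sorted((end - t, p) for t, p in notes)

def inversion(notes, axis):
    # Reflect pitches about a fixed axis pitch.
    return [(t, 2 * axis - p) for t, p in notes]

def retrograde_inversion(notes, axis):
    # A 180-degree rotation of the tape: both reflections combined.
    return retrograde(inversion(notes, axis))

melody = [(0, 60), (1, 62), (2, 64), (3, 60)]
print(retrograde(melody))      # [(0, 60), (1, 64), (2, 62), (3, 60)]
print(inversion(melody, 62))   # [(0, 64), (1, 62), (2, 60), (3, 64)]
```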
- Published
- 2012
78. A Fast Approximation Scheme for the Multiple Knapsack Problem
- Author
-
Klaus Jansen
- Subjects
Combinatorics ,Knapsack problem ,Bin packing problem ,Bounded function ,Computer Science::Data Structures and Algorithms ,Full paper ,Polynomial-time approximation scheme ,Mathematics ,Running time - Abstract
In this paper we propose an improved efficient approximation scheme for the multiple knapsack problem (MKP). Given a set ${\mathcal A}$ of n items and set ${\mathcal B}$ of m bins with possibly different capacities, the goal is to find a subset $S \subseteq{\mathcal A}$ of maximum total profit that can be packed into ${\mathcal B}$ without exceeding the capacities of the bins. Chekuri and Khanna presented a PTAS for MKP with arbitrary capacities with running time $n^{O(1/\epsilon^8 \log(1/\epsilon))}$ . Recently we found an efficient polynomial time approximation scheme (EPTAS) for MKP with running time $2^{O(1/\epsilon^5 \log(1/\epsilon))} poly(n)$ . Here we present an improved EPTAS with running time $2^{O(1/\epsilon \log^4(1/\epsilon))} + poly(n)$ . If the integrality gap between the ILP and LP objective values for bin packing with different sizes is bounded by a constant, the running time can be further improved to $2^{O(1/\epsilon \log^2(1/\epsilon))} + poly(n)$ .
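The problem statement lends itself to a tiny brute-force reference implementation, exponential in the number of items and purely illustrative of the objective, not of the approximation scheme (all names below are invented for the sketch):

```python
from itertools import product

def mkp_opt(profits, weights, capacities):
    # Exhaustive search over all assignments of items to bins.
    # assignment[i] in {0..m-1} is a bin index; assignment[i] == m
    # means the item is left unpacked.
    n, m = len(profits), len(capacities)
    best = 0
    for assignment in product(range(m + 1), repeat=n):
        load = [0] * m
        profit = 0
        feasible = True
        for i, b in enumerate(assignment):
            if b < m:
                load[b] += weights[i]
                profit += profits[i]
                if load[b] > capacities[b]:
                    feasible = False
                    break
        if feasible:
            best = max(best, profit)
    return best

# Three items (profit, weight) = (10,3), (7,2), (5,2); bins of capacity 4 and 3.
# Packing items 2 and 3 into the first bin and item 1 into the second
# achieves the optimum profit of 22.
print(mkp_opt([10, 7, 5], [3, 2, 2], [4, 3]))  # 22
```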
- Published
- 2012
79. When Can You Fold a Map?
- Author
-
Steven Skiena, Erik D. Demaine, Joseph S. B. Mitchell, Michael A. Bender, Esther M. Arkin, Martin L. Demaine, and Saurabh Sethia
- Subjects
Combinatorics ,Quantitative Biology::Biomolecules ,Efficient algorithm ,Diagonal ,Diagonal matrix ,Geometry ,Iterative reconstruction ,Fold (geology) ,Computer Science::Computational Geometry ,Time complexity ,Full paper ,Decidability ,Mathematics - Abstract
We explore the following problem: given a collection of creases on a piece of paper, each assigned a folding direction of mountain or valley, is there a flat folding by a sequence of simple folds? There are several models of simple folds; the simplest one-layer simple fold rotates a portion of paper about a crease in the paper by ±180°. We first consider the analogous questions in one dimension lower--bending a segment into a flat object--which lead to interesting problems on strings. We develop efficient algorithms for the recognition of simply foldable 1-D crease patterns, and reconstruction of a sequence of simple folds. Indeed, we prove that a 1-D crease pattern is flat-foldable by any means precisely if it is by a sequence of one-layer simple folds. Next we explore simple foldability in two dimensions, and find a surprising contrast: "map" folding and variants are polynomial, but slight generalizations are NP-complete. Specifically, we develop a linear-time algorithm for deciding foldability of an orthogonal crease pattern on a rectangular piece of paper, and prove that it is (weakly) NP-complete to decide foldability of (1) an orthogonal crease pattern on an orthogonal piece of paper, (2) a crease pattern of axis-parallel and diagonal (45-degree) creases on a square piece of paper, and (3) crease patterns without a mountain/valley assignment.
- Published
- 2001
80. An Automata-Theoretic Completeness Proof for Interval Temporal Logic
- Author
-
Ben C. Moszkowski
- Subjects
Discrete mathematics ,Interval temporal logic ,Temporal logic ,Regular expression ,Finite time ,Rule of inference ,Propositional calculus ,Full paper ,Mathematics ,Automaton - Abstract
Interval Temporal Logic (ITL) is a formalism for reasoning about time periods. To date no one has proved completeness of a relatively simple ITL deductive system supporting infinite time and permitting infinite sequential iteration comparable to ω-regular expressions. We have developed a complete axiomatization for such a version of quantified ITL over finite domains and can show completeness by representing finite-state automata in ITL and then translating ITL formulas into them. Here we limit ourselves to finite time. The full paper (and another conference paper [15]) extends the approach to infinite time.
- Published
- 2000
81. Filters on Commutative Residuated Lattices
- Author
-
Michiro Kondo
- Subjects
Pure mathematics ,High Energy Physics::Lattice ,Lattice (order) ,Short paper ,Congruence (manifolds) ,Filter (mathematics) ,Residuated lattice ,Commutative property ,Mathematics - Abstract
In this short paper we define a filter of a commutative residuated lattice and prove that, for any commutative residuated lattice L, the lattice Fil(L) of all filters of L is isomorphic to the congruence lattice Con(L) of L.
- Published
- 2010
82. An Efficient Algorithm for Bilinear Strict Equivalent (BSE)- Matrix Pencils
- Author
-
Athanasios A. Pantelous, Grigoris I. Kalogeropoulos, and Athanasios D. Karageorgos
- Subjects
Discrete mathematics ,Matrix (mathematics) ,Transformation matrix ,Efficient algorithm ,Short paper ,Bilinear interpolation ,Equivalence (measure theory) ,Mathematics - Abstract
In this short paper, we have two main objectives: first, to present the basic elements of strict bilinear equivalence; secondly, to describe an efficient algorithm for investigating the conditions for two homogeneous matrix pencils $sF_1-\hat{s}G_1$ and $sF_2-\hat{s}G_2$ to be bilinear strict equivalent. The proposed problem is interesting since its applications are numerous. The algorithm is implemented in a numerically stable manner, giving efficient results.
- Published
- 2010
83. Ordering of Matrices for Iterative Aggregation - Disaggregation Methods
- Author
-
Ivana Pultarová
- Subjects
Iteration matrix ,Iterative method ,Short paper ,Convergence (routing) ,MathematicsofComputing_NUMERICALANALYSIS ,Stochastic matrix ,Algorithm ,Fast algorithm ,Eigenvalues and eigenvectors ,Mathematics ,Sparse matrix - Abstract
In this short paper we show how the convergence of the iterative aggregation-disaggregation methods for computing the Perron eigenvector of a large sparse irreducible stochastic matrix can be improved by an appropriate ordering of the data and by the choice of a basic iteration matrix. Some theoretical estimates are introduced and a fast algorithm is proposed for obtaining the desired ordering. Numerical examples are presented.
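The object these methods compute, the Perron eigenvector of a stochastic matrix, can be illustrated with plain power iteration, which aggregation-disaggregation schemes accelerate. The sketch below implements neither the ordering nor the aggregation steps from the abstract; it only shows the quantity being sought:

```python
def perron_vector(P, iters=200):
    # Power iteration for the Perron (stationary) vector of a
    # column-stochastic matrix P: repeatedly apply P and renormalize.
    n = len(P)
    x = [1.0 / n] * n
    for _ in range(iters):
        x = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
        s = sum(x)
        x = [v / s for v in x]
    return x

# A 2-state column-stochastic matrix (columns sum to 1).
P = [[0.9, 0.2],
     [0.1, 0.8]]
pi = perron_vector(P)
print(pi)   # approaches [2/3, 1/3], the stationary distribution
```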
- Published
- 2009
84. Firefighting on Trees: (1 − 1/e)–Approximation, Fixed Parameter Tractability and a Subexponential Algorithm
- Author
-
Lin Yang, Leizhen Cai, and Elad Verbin
- Subjects
Linear programming relaxation ,Discrete mathematics ,Combinatorics ,Unit of time ,Firefighting ,Binary logarithm ,Randomized rounding ,Algorithm ,Graph ,Full paper ,Vertex (geometry) ,Mathematics - Abstract
The firefighter problem is defined as follows. Initially, a fire breaks out at a vertex r of a graph G. In each subsequent time unit, a firefighter chooses a vertex not yet on fire and protects it, and the fire spreads to all unprotected neighbors of the vertices on fire. The objective is to choose a sequence of vertices for the firefighter to protect so as to save the maximum number of vertices. The firefighter problem can be used to model the spread of fire, diseases, computer viruses and the like at a macro-control level. In this paper, we study algorithmic aspects of the firefighter problem on trees, which is NP-hard even for trees of maximum degree 3. We present a (1 - 1/e)-approximation algorithm based on LP relaxation and randomized rounding, and give several FPT algorithms using a random separation technique of Cai, Chan and Chan. Furthermore, we obtain a $2^{O(\sqrt{n}\log n)}$-time subexponential algorithm.
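The process is easy to simulate: play out a given protection sequence round by round and count the saved vertices. This is an illustration of the problem dynamics only, not of the paper's approximation or FPT algorithms:

```python
def saved_vertices(adj, root, protect_sequence):
    # adj: adjacency lists of the graph; fire starts at root.
    # Each round: protect one vertex, then the fire spreads to all
    # unprotected neighbours of burning vertices.
    on_fire, protected = {root}, set()

    def spread():
        new = {w for u in on_fire for w in adj[u]
               if w not in on_fire and w not in protected}
        on_fire.update(new)
        return bool(new)

    for v in protect_sequence:
        if v not in on_fire:
            protected.add(v)
        if not spread():
            return len(adj) - len(on_fire)
    while spread():      # firefighters exhausted; fire burns freely
        pass
    return len(adj) - len(on_fire)

# Path 0-1-2-3 with the fire at vertex 0: protecting vertex 1 in the
# first round saves vertices 1, 2 and 3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(saved_vertices(adj, 0, [1]))  # 3
```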
- Published
- 2008
85. Noise Analysis of a SFS Algorithm Formulated under Various Imaging Conditions
- Author
-
A.A. Farag, Abdelrehim Ahmed, Shireen Y. Elhabian, and Aly A. Farag
- Subjects
Brightness ,Photometric stereo ,Partial differential equation ,Point light source ,Robustness (computer science) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Sand-paper ,Image noise ,Algorithm ,Reflectivity ,Mathematics - Abstract
Many different shape from shading (SFS) algorithms have emerged during the last three decades. Recently, we proposed [1] a unified framework that is capable of solving the SFS problem under various settings of imaging conditions, representing the image irradiance equation of each setting as an explicit Partial Differential Equation (PDE). However, the result of any SFS algorithm is mainly affected by errors in the given image brightness, due either to image noise or to modeling errors. In this paper, we quantitatively assess the robustness of our unified approach with respect to these errors. Experimental results revealed promising performance on noisy images, but the approach fell short in reconstructing the correct shape in the presence of errors in the modeling process. This result emphasizes the need for robust surface reflectance estimation algorithms to aid SFS algorithms in producing more realistic shapes.
- Published
- 2008
86. An Illumination Independent Face Verification Based on Gabor Wavelet and Supported Vector Machine
- Author
-
Jianfu Chen, Xingming Zhang, and Dian Liu
- Subjects
business.industry ,Gabor wavelet ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Pattern recognition ,Paper based ,Support vector machine ,Face verification ,Principal component analysis ,Preprocessor ,Wavelet filter ,Computer vision ,Artificial intelligence ,Invariant (mathematics) ,business ,Mathematics - Abstract
Face verification technology is widely used in fields such as public safety and e-commerce. Owing to its insensitivity to varying illumination, a new illumination-invariant face verification method based on Gabor wavelets is presented in this paper. First, the ATICR method is used for light preprocessing of the images. Second, selected Gabor wavelet filters are used to extract image features; the filters are chosen experimentally, since different Gabor wavelet filters do not contribute equally to verification, and the feature dimension is then reduced by Principal Component Analysis. Finally, SVM classifiers are trained on the dimension-reduced data. Experimental results on the IFACE and NIRFACE databases indicate that the algorithm, named the "Selected Paralleled Gabor Method", achieves higher verification performance and better adaptability to variable illumination.
- Published
- 2008
87. On the Choice of the Kernel Function in Kernel Discriminant Analysis Using Information Complexity
- Author
-
Caterina Liberati, Furio Camillo, Hamparsum Bozdogan, Zani, S, Cerioli, A, Riani, M, Vichi, M, Bozdogan, H, Camillo, F, and Liberati, C
- Subjects
business.industry ,Feature vector ,Short paper ,Pattern recognition ,Linear discriminant analysis ,Nonlinear system ,SECS-S/03 - STATISTICA ECONOMICA ,ComputingMethodologies_PATTERNRECOGNITION ,Kernel method ,Kernel Discriminant, Information Complexity, Model Selection, Kernel Parameter ,SECS-S/01 - STATISTICA ,Information complexity ,Artificial intelligence ,Kernel Fisher discriminant analysis ,business ,Classifier (UML) ,Mathematics - Abstract
In this short paper we consider Kernel Fisher Discriminant Analysis (KFDA), which extends the idea of Linear Discriminant Analysis (LDA) to a nonlinear feature space. We present a new method for choosing the optimal kernel function and study its effect on the KFDA classifier using an information-theoretic complexity measure.
- Published
- 2007
88. On the fundamental group of the complement of a hypersurface in ℂn
- Author
-
Vic. S. Kulikov
- Subjects
Algebra ,Fundamental group ,Conjecture ,Hypersurface ,Algebraic space ,Paper based ,Algebraic number ,Commutative property ,Knot (mathematics) ,Mathematics - Abstract
Let a complex algebraic hypersurface in ℂn be given, not passing through a chosen point. The generators of the fundamental group of its complement and the relations among them are described in terms of the real cone over the hypersurface with apex at that point. This description generalizes to the algebraic case Wirtinger's presentation of the fundamental group of a knot. In the second part of the paper, a new proof of Zariski's conjecture on the commutativity of the fundamental group of the complement of a projective nodal curve is given, based on the description of the generators and relations obtained in the first part.
- Published
- 2006
89. Continuity of the Elastic BIE Formulation
- Author
-
J. D. Richardson and T. A. Cruse
- Subjects
Boundary integral equations ,Tangential displacement ,Mathematical analysis ,Elasticity (economics) ,Boundary displacement ,Full paper ,Mathematics - Abstract
The paper presents a brief mathematical investigation of the continuity properties of the Somigliana displacement and stress identities. It is shown that the regularity conditions for the boundary-integral equation are fully consistent with the continuity requirements for the interior displacements and stresses in elasticity. A new stress-based BIE is obtained. The implications of the new stress-based BIE on the continuity of BEM formulations will be discussed in the full paper.
- Published
- 1995
90. Symmetry Breaking in Peaceably Coexisting Armies of Queens
- Author
-
Karen E. Petrie
- Subjects
Equivalence class (music) ,Combinatorics ,White paper ,White (horse) ,TheoryofComputation_ANALYSISOFALGORITHMSANDPROBLEMCOMPLEXITY ,Symmetry breaking ,Algorithm ,GeneralLiterature_MISCELLANEOUS ,MathematicsofComputing_DISCRETEMATHEMATICS ,Mathematics - Abstract
The “Peaceably Coexisting Armies of Queens” problem [1] is a difficult optimisation problem on a chessboard, requiring equal numbers of black and white queens to be placed on the board so that the white queens cannot attack the black queens (and necessarily vice versa).
- Published
- 2002
91. Development of Methods for Olive Oil Quality Evaluation
- Author
-
María Jesús Lerma García
- Subjects
Olive oil quality ,Electronic nose ,Pulp and paper industry ,Data treatment ,Mathematics ,Olive oil - Abstract
In this work, a simple and quick method for classifying olive oil according to its quality grade, based on direct infusion ESI–MS and LDA, was developed. Moreover, mixtures of EVOO and VOO, and binary mixtures of these two oils with olive oils of lower quality grades, were also evaluated using this methodology together with MLR and PLS data treatment.
- Published
- 2012
92. Research on gender and mathematics: exploring new and future directions
- Author
-
Becker, Joanne Rossi and Hall, Jennifer
- Published
- 2024
- Full Text
- View/download PDF
93. Exploring computational thinking as a boundary object between mathematics and computer programming for STEM teaching and learning
- Author
-
Ng, Oi-Lam, Leung, Allen, and Ye, Huiyan
- Published
- 2023
- Full Text
- View/download PDF
94. ADHDP for the pH Value Control in the Clarifying Process of Sugar Cane Juice
- Author
-
Shaojian Song, Shengyong Lei, Chunning Song, Xiaofeng Lin, and Derong Liu
- Subjects
chemistry.chemical_compound ,Sucrose ,Heuristic dynamic programming ,chemistry ,Scientific method ,Sugar cane ,Control (management) ,Value (economics) ,Sugar ,Pulp and paper industry ,Mathematics - Abstract
The clarifying process of sugar cane juice is an important stage of the production process, characterized by strong non-linearity, multiple constraints, time-varying behavior, large time delays, and multiple inputs. Controlling the neutralized pH value within the required range is of vital significance for obtaining high-quality purified juice, reducing energy consumption, and raising sucrose recovery. This article applies the ADHDP (Action-Dependent Heuristic Dynamic Programming) method to optimize and control the neutralized pH value in the clarifying process of sugar cane juice, so as to stabilize the clarifying process, enhance the quality of the purified juice, and ultimately enhance the quality of the product sugar. The method does not need a precise mathematical model of the controlled object and is trained on-line. Simulation results indicate that the method has good application prospects in industry.
- Published
- 2008
95. Nonnumeric Data Applications
- Author
-
Vassilis G. Kaburlasos
- Subjects
chemistry.chemical_compound ,chemistry ,Ammonium nitrate ,Pulp and paper industry ,Linear discriminant analysis ,Sugar production ,Mathematics - Published
- 2006
96. Performance Ratios for the Differencing Method Applied to the Balanced Number Partitioning Problem
- Author
-
Michiels, W.P.A.J., Korst, J.H.M., Aarts, E.H.L., Leeuwen, van, J., Alt, H., Habib, M., and Mathematics and Computer Science
- Subjects
Combinatorics ,Set (abstract data type) ,Cardinality ,Performance ratio ,Cardinal number ,Subset sum problem ,Integer programming ,Full paper ,Mathematics - Abstract
We consider the problem of partitioning a set of n numbers into m subsets of cardinality k = ⌈n/m⌉ or ⌊n/m⌋, such that the maximum subset sum is minimal. We prove that the performance ratios of the Differencing Method of Karmarkar and Karp for k = 3, 4, 5, and 6 are precisely 4/3, 19/12, 103/60, and 643/360, respectively, by means of a novel approach in which the ratios are explicitly calculated using mixed integer linear programming. Moreover, we show that for k ≥ 7 the performance ratio lies between 2 − 2/k and 2 − 1/(k − 1). For the case that m is given instead of k, we prove a performance ratio of precisely 2 − 1/m. The results settle the problem of determining the worst-case performance of the Differencing Method.
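For intuition, the classic two-way version of the Differencing Method of Karmarkar and Karp is easy to sketch; the paper analyzes its balanced, m-way generalization, so the heuristic below is only illustrative:

```python
import heapq

def karmarkar_karp(numbers):
    # Repeatedly replace the two largest numbers by their difference
    # (deferring the decision of which side each goes to); the final
    # value is the partition difference the heuristic achieves.
    heap = [-x for x in numbers]   # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)
        b = -heapq.heappop(heap)
        heapq.heappush(heap, -(a - b))
    return -heap[0]

# For [8, 7, 6, 5, 4] the heuristic achieves a difference of 2,
# while the optimum is 0 ({8, 7} vs. {6, 5, 4}).
print(karmarkar_karp([8, 7, 6, 5, 4]))  # 2
```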
- Published
- 2003
97. Enumerative Combinatorics and Computer Science
- Author
-
Xavier Gérard Viennot
- Subjects
Discrete mathematics ,Mathematics::Combinatorics ,Algebraic combinatorics ,GeneralLiterature_INTRODUCTORYANDSURVEY ,Short paper ,TheoryofComputation_GENERAL ,Hardware_PERFORMANCEANDRELIABILITY ,Enumerative combinatorics ,Algebra ,Mathematics::Algebraic Geometry ,TheoryofComputation_ANALYSISOFALGORITHMSANDPROBLEMCOMPLEXITY ,Mathematics::Symplectic Geometry ,MathematicsofComputing_DISCRETEMATHEMATICS ,Mathematics - Abstract
This short paper is a summary of a survey talk given on the interplay between enumerative Combinatorics and Computer Science.
- Published
- 1990
98. Development and assessment of MyAccessible Math: promoting self-learning for students with vision impairment
- Author
-
Jariwala, Abhishek, Jamshidi, Fatemeh, Marghitu, Daniela, and Chapman, Richard
- Published
- 2023
- Full Text
- View/download PDF
99. Die Beilinson-Deligne-Vermutungen
- Author
-
Günter Harder
- Subjects
law ,engineering ,engineering.material ,Pulp and paper industry ,Filtration ,Mathematics ,law.invention ,Lime - Published
- 1993
100. Rectangular point location and the dynamic closest pair problem
- Author
-
Michiel Smid
- Subjects
Combinatorics ,Iterative closest point ,Point location ,Closest pair of points problem ,Voronoi diagram ,Full paper ,Mathematics - Published
- 1991