Search Results
2. Determination of Poisson’s Ratio of Kraft Paper Using Digital Image Correlation
- Author
-
Zhongchen Bi, Xing Wei, Xiaolong Cao, and Yong Xie
- Subjects
Digital image correlation, Containerboard, Mathematical analysis, Analytical chemistry, Deformation (meteorology), Anisotropy, Poisson distribution, Measure (mathematics), Kraft paper, Poisson's ratio, Mathematics - Abstract
Kraft paper is the most popular raw material for paper-based packaging containers. Poisson's ratio is an important index of the inherent properties of a material. Measuring the Poisson's ratio of kraft paper with traditional contact methods is difficult because of the material's micro-scale deformation and anisotropy. This paper proposes a simple and efficient non-contact method based on DIC (Digital Image Correlation) to solve this problem. The relative deformation of samples is obtained from calibrated CCD images, from which Poisson's ratio can be computed. The test results indicated that the Poisson's ratios of corrugating medium were 0.275 in the MD (Machine Direction) and 0.119 in the CD (Cross-machine Direction), and those of linerboard were 0.275 and 0.119 in MD and CD, respectively. This study shows that DIC is a viable new approach to measuring the Poisson's ratio of kraft paper.
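At its core, the computation the abstract describes reduces to a ratio of two strains; a minimal Python illustration (gauge lengths and displacements below are made up for the example, not the paper's data):

```python
def poissons_ratio(d_axial, d_trans, gauge_axial, gauge_trans):
    """Poisson's ratio from relative deformations of two gauge regions,
    e.g. as measured on calibrated CCD images. All lengths share a unit;
    `d_trans` is negative for the usual transverse contraction."""
    eps_axial = d_axial / gauge_axial   # axial engineering strain
    eps_trans = d_trans / gauge_trans   # transverse engineering strain
    return -eps_trans / eps_axial

# A 100 mm axial gauge stretched 0.5 mm while a 50 mm transverse gauge
# shrinks 0.06875 mm gives a ratio of 0.275.
nu = poissons_ratio(0.5, -0.06875, 100.0, 50.0)
```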
- Published
- 2012
3. Position Paper: Pragmatics in Fuzzy Theory
- Author
-
Karl Erich Wolff
- Subjects
Fuzzy set, Formal concept analysis, Position paper, Distributed object, Artificial intelligence, Pragmatics, Type-2 fuzzy sets and systems, Fuzzy logic, Fuzzy cognitive map, Mathematics - Abstract
This position paper presents the main problems in classical and modern Fuzzy Theory and gives solutions in Formal Concept Analysis for many of them. To support cooperation between scientists from the Fuzzy Theory and Formal Concept Analysis communities, the author launches with this position paper an initiative called "Pragmatics in Fuzzy Theory".
- Published
- 2011
4. Fixed-Distortion Orthogonal Dirty Paper Coding for Perceptual Still Image Watermarking
- Author
-
Andrea Abrardo and Mauro Barni
- Subjects
Distortion, Human visual system model, Turbo code, Gold code, Dirty paper coding, Watermark, Algorithm, Digital watermarking, Decoding methods, Mathematics - Abstract
A new informed image watermarking technique is proposed that incorporates perceptual factors into dirty paper coding. Due to the equi-energetic nature of the adopted codewords and the use of a correlation-based decoder, invariance to constant value-metric scaling (the gain attack) is achieved automatically. By exploiting the simple structure of orthogonal and Gold codes, an optimal informed embedding technique is developed that maximizes watermark robustness while keeping the embedding distortion constant. The maximum admissible distortion level is computed block by block using Watson's model of the Human Visual System (HVS). The performance of the watermarking algorithm is further improved by concatenating dirty paper coding with a turbo coding (decoding) step. The validity of the assumptions underlying the theoretical analysis is evaluated by means of numerical simulations, and experimental results confirm the effectiveness of the proposed approach.
- Published
- 2004
5. Cryptographic Assumptions: A Position Paper
- Author
-
Shafi Goldwasser and Yael Tauman Kalai
- Subjects
Cryptographic primitive, Theoretical computer science, Cryptography, Cryptographic protocol, Computer security, Mathematical proof, Computational hardness assumption, Field (computer science), Random oracle, Security of cryptographic hash functions, Mathematics - Abstract
The mission of theoretical cryptography is to define and construct provably secure cryptographic protocols and schemes. Without proofs of security, cryptographic constructs offer no guarantees whatsoever and no basis for evaluation and comparison. As most security proofs necessarily come in the form of a reduction between the security claim and an intractability assumption, such proofs are ultimately only as good as the assumptions they are based on. Thus, the complexity implications of every assumption we utilize should be of significant substance, and serve as the yardstick for the value of our proposals. Lately, the field of cryptography has seen a sharp increase in the number of new assumptions that are often complex to define and difficult to interpret. At times, these assumptions are hard to untangle from the constructions which utilize them. We believe that the lack of standards for what is accepted as a reasonable cryptographic assumption can be harmful to the credibility of our field. Therefore, there is a great need for measures according to which we classify and compare assumptions, and decide which are safe and which are not. In this paper, we propose such a classification and review recently suggested assumptions in this light. This follows in the footsteps of Naor (Crypto 2003). Our governing principle is relying on hardness assumptions that are independent of the cryptographic constructions.
- Published
- 2015
6. Some Remarks on the Paper 'On the q-type Distributions'
- Author
-
A. Kattuveettil and S. S. Nair
- Subjects
Combinatorics, Pure mathematics, Distribution (mathematics), Tsallis entropy, Mathematical statistics, Linear algebra, Tsallis statistics, Probability distribution, Type (model theory), Mathematics, Superstatistics - Abstract
A claim is made in the paper of Nadarajah and Kotz [On the q-type distributions, Physica A 377:465–468 (2007)] that the many q-densities widely used in the physics literature are special cases of, or associated with, Burr distributions from the classical statistical literature. In the present paper it is pointed out that the q-densities do not come from Burr distributions or from other classical statistical distributions; rather, q-distributions are extensions of their limiting forms as q → 1. It is also shown that a statistical distribution containing all q-distributions as special cases is the pathway model of Mathai [A pathway to matrix-variate gamma and normal densities, Linear Algebra and Its Applications 396:317–328 (2005)]. Tsallis statistics and the superstatistics of Beck and Cohen [Superstatistics, Physica A 322:267–275 (2003)] are also examined in the light of these discussions.
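For orientation, the q → 1 limit referred to here is the standard one for the q-exponential underlying these densities; in the usual notation,

```latex
e_q(x) \;=\; \bigl[\, 1 + (1-q)\,x \,\bigr]_{+}^{\,1/(1-q)},
\qquad
\lim_{q \to 1} e_q(x) \;=\; e^{x},
```

so each q-density extends the classical density recovered at q = 1.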
- Published
- 2009
7. Generalized Weighted Model Counting: An Efficient Monte-Carlo meta-algorithm (Working Paper)
- Author
-
Lirong Xia
- Subjects
Discrete mathematics, Weight function, Monte Carlo method, Bayesian network, Function (mathematics), Disjunctive normal form, Time complexity, Algorithm, Oracle, Importance sampling, Mathematics - Abstract
In this paper, we focus on computing the prices of securities represented by logical formulas in combinatorial prediction markets when the price function is represented by a Bayesian network. This problem turns out to be a natural extension of the weighted model counting (WMC) problem [1], which we call the generalized weighted model counting (GWMC) problem. In GWMC, we are given a logical formula F and a polynomial-time computable weight function, and are asked to compute the total weight of the valuations that satisfy F. Based on importance sampling, we propose a Monte-Carlo meta-algorithm with a good theoretical guarantee for formulas in disjunctive normal form (DNF). The meta-algorithm queries an oracle algorithm that computes marginal probabilities in Bayesian networks, and has the following guarantee: when the weight function can be approximately represented by a Bayesian network for which the oracle algorithm runs in polynomial time, our meta-algorithm becomes a fully polynomial-time randomized approximation scheme (FPRAS).
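The DNF case builds on the classic Karp–Luby importance-sampling estimator for (unweighted) DNF model counting; a minimal Python sketch of that underlying scheme (the paper's oracle-based weighted generalization is not reproduced here):

```python
import random

def karp_luby_dnf_count(clauses, n_vars, samples=20000, rng=None):
    """Estimate the number of satisfying assignments of a DNF formula.

    `clauses` is a list of clauses, each a dict {var: bool}; a clause is
    satisfied when every listed variable takes the listed value. Classic
    Karp-Luby estimator: sample a clause proportionally to the size of its
    satisfying set, sample a uniform assignment from that set, and count
    the pair only if the clause is the FIRST one the assignment satisfies;
    the hit rate times the summed set sizes estimates the union's size.
    """
    rng = rng or random.Random(0)
    # |SAT(clause)| = 2^(number of unconstrained variables)
    sizes = [2 ** (n_vars - len(c)) for c in clauses]
    total = sum(sizes)
    hits = 0
    for _ in range(samples):
        i = rng.choices(range(len(clauses)), weights=sizes)[0]
        assignment = [rng.random() < 0.5 for _ in range(n_vars)]
        for var, val in clauses[i].items():
            assignment[var] = val
        first = next(j for j, c in enumerate(clauses)
                     if all(assignment[v] == val for v, val in c.items()))
        hits += (first == i)
    return total * hits / samples

# x0 OR (NOT x0 AND x1) has 3 of the 4 possible assignments satisfying it.
estimate = karp_luby_dnf_count([{0: True}, {0: False, 1: True}], 2)
```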
- Published
- 2012
8. Summary of the Papers in this Volume
- Author
-
Manfredo P. do Carmo
- Subjects
Pure mathematics, Curvature of Riemannian manifolds, Minimal surface, Mean curvature, Differential geometry, Hyperbolic space, Sectional curvature, Riemannian manifold, Convexity, Mathematics - Abstract
My research work has been in the following topics of Differential Geometry: (A) Relations between topology and curvature of Riemannian manifolds. (B) Convexity and rigidity of isometric immersions. (C) Minimal Submanifolds, in particular, minimal surfaces. (D) Conformal immersions. (E) Hypersurfaces of constant mean curvature and constant r-mean curvatures. (F) Hopf-type theorems. (G) None of the above, but in this volume.
- Published
- 2012
9. Maximizing Steganographic Embedding Efficiency by Combining Hamming Codes and Wet Paper Codes
- Author
-
Xinpeng Zhang, Weiming Zhang, and Shuozhong Wang
- Subjects
Block code, Theoretical computer science, Concatenated error correction code, Reed–Muller code, Luby transform code, Hamming code, Linear code, Online codes, Expander code, Mathematics - Abstract
For good security and a large payload in steganography, it is desirable to embed as many message bits as possible per change to the cover object, i.e., to achieve high embedding efficiency. Steganographic codes derived from covering codes can improve embedding efficiency. In this paper, we propose a new method for constructing stego-codes, showing that not just one but a whole family of stego-codes can be generated from a single covering code by combining Hamming codes and wet paper codes. This method enormously expands the set of embedding schemes applicable in steganography. The performance of stego-code families built from structured codes and from random codes is analyzed. Using the stego-code families of LDGM codes, we obtain families of near-optimal embedding schemes for binary steganography and ±1 steganography, respectively, which approach the upper bound on embedding efficiency for various chosen embedding rates.
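Matrix embedding with the [7,4] Hamming code, one ingredient of such constructions, hides 3 message bits in 7 cover bits while changing at most one of them; a minimal Python sketch (the wet-paper combination itself is not shown):

```python
# Matrix embedding with the [7,4] Hamming code: hide 3 message bits in 7
# cover bits while flipping at most one of them. The rows of H record, for
# each syndrome bit, which of the columns 1..7 it covers.
H = [[(col >> bit) & 1 for col in range(1, 8)] for bit in range(3)]

def syndrome(bits):
    return [sum(h * b for h, b in zip(row, bits)) % 2 for row in H]

def embed(cover, message):
    """Return stego bits whose syndrome equals the 3-bit `message`."""
    diff = [a ^ b for a, b in zip(syndrome(cover), message)]
    stego = list(cover)
    if any(diff):
        # The columns of H enumerate the numbers 1..7, so `diff`, read as
        # a binary number, names the single (1-based) position to flip.
        pos = diff[0] + 2 * diff[1] + 4 * diff[2]
        stego[pos - 1] ^= 1
    return stego

def extract(stego):
    return syndrome(stego)
```

Extraction is just the syndrome computation, so the receiver needs no side information beyond H.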
- Published
- 2008
10. How to Generalize Janken – Rock-Paper-Scissors-King-Flea
- Author
-
Hiro Ito
- Subjects
Combinatorics, Vertex (graph theory), Amusement, If and only if, Tournament, Arithmetic, Mathematics - Abstract
Janken, a very simple game commonly used like a coin toss in Japan, originated in China, and many variants are seen throughout the world. A janken variant can be represented by a tournament, where a vertex corresponds to a sign and an arc (x, y) means that sign x defeats sign y. However, not all tournaments define useful janken variants: some variants may include a useless sign, one that is strictly inferior to another sign in every case. We first show that for any positive integer n except 2 and 4, we can construct a janken variant with n signs and no useless signs. Next we introduce a measure of the amusement of a janken variant based on the variation of the differences between out-degrees and in-degrees. Under this measure, we show that a janken variant is the most amusing among those with n signs if and only if it corresponds to one of the tournaments defined by J. W. Moon in 1993. Following these results, we present a variant, "King-flea-janken," which is the most amusing among janken variants with five signs.
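For odd n, one standard way to get an n-sign variant with no useless sign is the rotational tournament, where each sign beats the next (n − 1)/2 signs cyclically; a small Python sketch (illustrative only, not the paper's construction for the remaining even values of n):

```python
def rotational_janken(n):
    """Beats-relation of an n-sign janken for odd n: sign i defeats the
    next (n - 1) // 2 signs cyclically. n = 3 is rock-paper-scissors."""
    assert n % 2 == 1 and n >= 3
    half = (n - 1) // 2
    return {i: {(i + k) % n for k in range(1, half + 1)} for i in range(n)}

def has_useless_sign(beats):
    """Sign y is useless if some x defeats y and also defeats everything
    y defeats, i.e. y's set of wins is contained in x's."""
    return any(y in beats[x] and beats[y] <= beats[x]
               for x in beats for y in beats)

five_signs = rotational_janken(5)   # no sign is strictly inferior
```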
- Published
- 2013
11. The Geometry of A4 Paper Sizes
- Author
-
Nuno Crato
- Subjects
Geometry, Round number, Mathematics - Abstract
The paper format generally used in photocopiers and printers everywhere outside North America, and which is also generally used for letters and writing pads, has the curious name of A4. Measuring 210 × 297 mm (approximately 8¼ × 11¾ in.), A4 sheets are an unusual size; it would certainly seem more logical if this measurement were a round number. Why not 200 × 300 mm, for example?
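The underlying reason: if a sheet with sides W < L is to keep its shape when folded in half, then L/W = W/(L/2), which forces L/W = √2; fixing A0's area at 1 m² then pins down every size in the series. A short Python sketch deriving the familiar dimensions:

```python
def a_series(n):
    """Dimensions in mm of ISO 216 size A_n. A0 (841 x 1189 mm) has an
    area of about 1 m^2 and an aspect ratio of about sqrt(2); each smaller
    size halves the long side (rounding down to a whole millimetre), the
    one operation a sqrt(2) aspect ratio survives unchanged."""
    width, height = 841, 1189
    for _ in range(n):
        width, height = height // 2, width
    return width, height

a4 = a_series(4)   # (210, 297); note 297 / 210 = 1.414..., close to sqrt(2)
```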
- Published
- 2010
12. Symbolic State Space of Stopwatch Petri Nets with Discrete-Time Semantics (Theory Paper)
- Author
-
Olivier Roux, Didier Lime, and Morgan Magnin
- Subjects
Discrete mathematics, Theoretical computer science, Bounded function, Stochastic Petri net, State space, Petri net, Process architecture, Combinatorial explosion, Decidability, Mathematics, Undecidable problem - Abstract
In this paper, we address the class of bounded Petri nets with stopwatches (SwPNs), an extension of T-time Petri nets (TPNs) in which time is associated with transitions. Contrary to TPNs, SwPNs encompass the notion of actions that can be reset, stopped and started. Models can be defined with either discrete-time or dense-time semantics. Unlike dense time, discrete time leads to combinatorial explosion (the state space is computed by exhaustive enumeration of states). We can, however, take advantage of discrete time, especially for SwPNs: the state and marking reachability problems, undecidable even for bounded nets, become decidable once discrete time is considered. Thus, to mitigate the combinatorial explosion, we aim to extend the well-known symbolic handling of time (using convex polyhedra) to the discrete-time setting. This is done by computing the state space of discrete-time nets as the discretization of the state space of the corresponding dense-time model. First, we prove that this technique is correct for TPNs but not for SwPNs in general: for the latter, it may add behaviors that do not belong to the evolution of the discrete-time net. To overcome this problem, we propose splitting the general polyhedron that encompasses the temporal information of the net into a union of simpler polyhedra that are safe with respect to the symbolic successor computation. We then give an algorithm that symbolically computes the state space of discrete-time SwPNs, and finally exhibit a way to perform TCTL model checking on this model.
- Published
- 2008
13. An Efficient Linear Space Algorithm for Consecutive Suffix Alignment under Edit Distance (Short Preliminary Paper)
- Author
-
Heikki Hyyrö
- Subjects
Character (mathematics), Computation, Linear space, String (computer science), Edit distance, Type (model theory), Suffix, Space (mathematics), Algorithm, Mathematics - Abstract
We discuss the following variant of incremental edit distance computation: given strings A and B with lengths m and n, respectively, the task is to compute, in n successive iterations j = n, ..., 1, an encoding of the edit distances between A and all prefixes of B[j..n]. Here B[j..n] is the suffix of B that begins at its j-th character. This type of consecutive suffix alignment [3] is powerful, e.g., in solving the cyclic string comparison problem [3]. There are two previous efficient algorithms capable of consecutive suffix alignment under edit distance: the algorithm of Landau et al. [2], which runs in O(kn) time and uses O(m + n + k^2) space, and the algorithm of Kim and Park [1], which runs in O((m + n)n) time and uses O(mn) space. Here k is a user-defined upper limit for the computed distances (0 ≤ k ≤ max{m, n}). In this paper we propose the first efficient linear-space algorithm for consecutive suffix alignment under edit distance. Our algorithm uses O((m + n)n) time and O(m + n) space.
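As a baseline for the problem statement, the textbook dynamic-programming edit distance already runs in linear space for a single alignment; the contribution above lies in maintaining such information incrementally for every suffix of B. A minimal Python sketch of the baseline DP:

```python
def edit_distance(a, b):
    """Textbook dynamic-programming (Levenshtein) edit distance in
    O(len(a) * len(b)) time and O(len(b)) space, keeping only the
    previous row of the DP table."""
    prev = list(range(len(b) + 1))      # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        curr = [i]                      # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # delete ca
                            curr[j - 1] + 1,            # insert cb
                            prev[j - 1] + (ca != cb)))  # match / substitute
        prev = curr
    return prev[-1]

d = edit_distance("kitten", "sitting")   # 3
```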
- Published
- 2008
14. Conference Paper Assignment Using a Combined Greedy/Evolutionary Algorithm
- Author
-
Pedro Castillo-Valdivieso and Juan J. Merelo-Guervós
- Subjects
Process (engineering), Genetic algorithm, Evolutionary algorithm, Artificial intelligence, Greedy algorithm, Mathematics - Abstract
This paper presents a method that combines a greedy and an evolutionary algorithm to assign papers submitted to a conference to reviewers. The evolutionary algorithm tries to maximize the match between referee expertise and paper topics, under the constraints that no referee should receive more papers than a preset maximum and no paper should receive fewer reviewers than an established minimum, while also taking incompatibilities and conflicts of interest into account. A previous version of the method presented in this paper was tested at another conference, obtaining not only a good match but also high referee satisfaction with the papers they were assigned; the current version has also been applied to that conference's data and to the conference to which this paper was submitted. Results were obtained in a short time and yielded a better match between reviewers and their assigned papers than a greedy algorithm alone. The paper finishes with some conclusions and reflections on how the whole submission and refereeing process should be conducted.
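A purely greedy baseline of the kind such an evolutionary pass improves upon can be sketched as follows (hypothetical data structures, not the authors' implementation):

```python
def greedy_assign(papers, reviewers, per_paper=2, max_load=3):
    """Greedy baseline: give each paper its best-matching reviewers by
    topic overlap, subject to a per-reviewer load cap. `papers` maps a
    paper id to its topic set; `reviewers` maps a reviewer id to an
    expertise set. (An evolutionary pass would then refine such a seed.)
    """
    load = {r: 0 for r in reviewers}
    assignment = {}
    for paper, topics in papers.items():
        # rank reviewers by how many of the paper's topics they cover
        ranked = sorted(reviewers, key=lambda r: -len(topics & reviewers[r]))
        chosen = [r for r in ranked if load[r] < max_load][:per_paper]
        for r in chosen:
            load[r] += 1
        assignment[paper] = chosen
    return assignment
```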
- Published
- 2004
15. Paper Retraction: On the Hardness of Embeddings Between Two Finite Metrics
- Author
-
Ashish Sabharwal, Matthew Cary, and Atri Rudra
- Subjects
Discrete mathematics, Regret, Mathematics, Automaton - Abstract
We regret to report that we have found an error in our paper "On the Hardness of Embeddings Between Two Finite Metrics," which appeared in the Proceedings of the 32nd International Colloquium on Automata, Languages and Programming (ICALP), Lisboa, Portugal, July 2005.
- Published
- 2007
16. Finding Compact Reliable Broadcast in Unknown Fixed-Identity Networks (Short Paper)
- Author
-
Huafei Zhu and Jianying Zhou
- Subjects
Public-key cryptography, Open problem, Node (networking), Path (graph theory), Aggregate (data warehouse), Graph (abstract data type), Communication complexity, Algorithm, Time complexity, Mathematics - Abstract
At PODC'05, Subramanian, Katz, Roth, Shenker and Stoica (SKRSS) introduced and formulated a new theoretical problem, the reliable broadcast problem in unknown fixed-identity networks [3], and proposed a feasible solution. Since the size of the signatures on a message traversing a path grows linearly with the number of hops in their implementation, an interesting research problem remains (an open problem advertised by Subramanian et al. in [3]): how to reduce the communication complexity of their reliable broadcast protocol. In this paper, we provide a novel implementation of reliable broadcast in unknown fixed-identity networks with lower communication complexity. The idea behind our improvement is to transfer the notion of path-vector signatures to that of sequential aggregate path-vector signatures, and to show that the latter is a special case of the notion of sequential aggregate signatures. As a result, the currently known results on sequential aggregate signatures can be used to solve the open problem. We then describe the work of [3] in the light of sequential aggregate signatures over independent RSA, and show that if the size of a node vi,j's public key |g(vi,j)| is ti,j and the number of hops in a path pi is di in the unknown fixed-identity graph G (with k adversaries), the reduced communication complexity is approximate to …, while the computation (time) complexity of our protocol is the same as that presented in [3].
- Published
- 2006
17. Left-to-Right Signed-Bit τ-Adic Representations of n Integers (Short Paper)
- Author
-
Billy Bob Brumley
- Subjects
Discrete mathematics, Digital signature, Generalization, Computation, Elliptic curve cryptography, Arithmetic, Signature (topology), Joint (audio engineering), Representation (mathematics), Scalar multiplication, Mathematics - Abstract
Koblitz curves are often used in digital signature schemes where signature verifications need to be computed efficiently. Simultaneous elliptic scalar multiplication is a useful method of carrying out such verifications. This paper presents an efficient alternative to the τ-adic Joint Sparse Form that moves left to right for computations involving two points. A generalization of this algorithm is then presented for generating a low-joint-weight representation of an arbitrary number of integers.
- Published
- 2006
18. Dirty-Paper Trellis-Code Watermarking with Orthogonal Arcs
- Author
-
Taesuk Oh, Seongjong Choi, Tae-Kyung Kim, and Yong Cheol Kim
- Subjects
Speedup, Theoretical computer science, Steganography, Trellis (graph), Viterbi algorithm, Viterbi decoder, Code (cryptography), Embedding, Algorithm, Digital watermarking, Mathematics - Abstract
Dirty-paper trellis-code watermarking with random-valued arcs is slow, since embedding into the cover work is performed at the path level, which entails many Viterbi decodings until random convergence. We present a fast deterministic embedding in a trellis code with orthogonal arcs. The proposed algorithm achieves a speedup factor equal to the message size, since it is based on arc-level modification of the cover work in a bit-by-bit manner. Experimental results show that the proposed embedding provides higher fidelity and a lower BER against various types of attacks, compared with conventional informed methods.
- Published
- 2006
19. Discussion of Boris Chirikov's paper
- Author
-
P. Suppes, H. P. Noyes, B. Chirikov, P. Weingartner, G. Schurz, and R. W. Batterman
- Subjects
Liouville equation, Wave packet, Correspondence principle (sociology), Mathematics, Mathematical physics - Published
- 2007
20. Addendum to the paper On Certain Probabilities Equivalent to Wiener Measure d'après Dubins, Feldman, Smorodinsky and Tsirelson
- Author
-
Walter Schachermayer
- Subjects
Discrete mathematics, Calculus, Addendum, Measure (mathematics), Mathematics - Published
- 2003
21. Recursive Derivational Length Bounds for Confluent Term Rewrite Systems Research Paper
- Author
-
Elias Tahhan-Bittar
- Subjects
Discrete mathematics, Size function, Systems research, Confluence, Bounded function, Transitive closure, Function (mathematics), Signature (topology), Mathematics, Term (time) - Abstract
Let F be a signature and \( \mathcal{R} \) a term rewrite system on ground terms of F. We define the concepts of a context-free potential redex in a term and of bounded confluent terms. We recursively bound the lengths of derivations of a bounded confluent term t by a function of the lengths of derivations of the context-free potential redexes of this term. We then define the concept of an inner redex and apply the recursive bounds obtained to prove that, whenever \( \mathcal{R} \) is a confluent overlay term rewrite system, the derivational length bound for arbitrary terms is an iteration of the derivational length bound for inner redexes.
- Published
- 2002
22. Collected Papers - Gesammelte Abhandlungen
- Author
-
Ina Kersten and Ernst Witt
- Subjects
Mathematics - Published
- 1998
23. An editorial comment on the preceding paper
- Author
-
Gideon Schechtman
- Subjects
Unit sphere, Discrete mathematics, Lebesgue measure, Mathematics - Published
- 2000
24. An Overview of the Isoperimetric Method in Coding Theory (Extended Abstract) [Invited Paper]
- Author
-
Jean-Pierre Tillich and Gilles Zémor
- Subjects
Algebra, Channel parameter, Phenomenon, Calculus, Coding theory, Isoperimetric inequality, Hamming code, Decoding methods, Coding (social sciences), Mathematics - Abstract
When decoding, a threshold phenomenon is often observed: decoding performance deteriorates very suddenly around some critical value of the channel parameter. Threshold behaviour has been studied in many situations outside coding theory, and a number of tools have been developed. One of those turns out to be particularly relevant to coding, namely the derivation of isoperimetric inequalities for product measures on Hamming spaces. We discuss this approach and derive its consequences.
- Published
- 1999
25. Œuvres Complètes Collected Papers
- Author
-
Thomas Jan Stieltjes and Gerrit van Dijk
- Subjects
Combinatorics, Pure mathematics, Number theory, Orthogonal polynomials, Bibliography, Riemann–Stieltjes integral, Function (mathematics), Mathematics - Abstract
Volume I.- Biographical Note.- The Impact of Stieltjes' Work on Continued Fractions and Orthogonal Polynomials.- Number Theory.- The Stieltjes Integral, the Concept that Transformed Analysis.- On the History of the Function M(x)/√x Since Stieltjes.- Œuvres Complètes, Tome I.- On a Uniform Function (Translation).- Bibliography of T. J. Stieltjes.- Œuvres Complètes, Tome II.- Investigations on Continued Fractions (Translation).- Bibliography of T. J. Stieltjes.
- Published
- 1993
26. The Mathematical Papers
- Author
-
George W. Mackey
- Subjects
Finite group, Character (mathematics), Unitary representation, Number theory, Irreducible representation, Hilbert space, Group theory, Group representation, Mathematics, Mathematical physics - Abstract
Eugene Wigner is above all a theoretical physicist. However he was one of the two men (Hermann Weyl was the other) who introduced a powerful new mathematical tool into quantum mechanics in its earliest years. This is the theory of group representations, invented by Frobenius in 1896, and apparently not applied outside of pure group theory until E. Artin’s startling application to number theory in 1923. Wigner’s first application of this theory to quantum mechanics was published only four years later in 1927. Weyl’s contribution was of a completely different character and was made a few months after Wigner’s.
- Published
- 1993
27. Making inconsistency respectable: A logical framework for inconsistency in reasoning, part I — A position paper
- Author
-
Anthony Hunter and Dov M. Gabbay
- Subjects
Reason maintenance, Logical framework, Consistency (database systems), Action (philosophy), Classical logic, Paraconsistent logic, Context (language use), Artificial intelligence, Epistemology, Mathematics, Argumentation theory - Abstract
We claim there is a fundamental difference between the way humans handle inconsistency and the way it is currently handled in formal logical systems: To a human, resolving inconsistencies is not necessarily done by "restoring" consistency but by supplying rules telling one how to act when the inconsistency arises. For artificial intelligence there is an urgent need to revise the view that inconsistency is a ‘bad’ thing, and instead view it as mostly a ‘good’ thing. Inconsistencies can be read as signals to take external action, such as ‘ask the user,’ or invoke a ‘truth maintenance system’, or as signals for internal actions that activate some rules and deactivate other rules. There is a need to develop a framework in which inconsistency can be viewed according to context, as a vital trigger for actions, for learning, and as an important source of direction in argumentation.
- Published
- 1991
28. A remark on the paper 'Martingale inequalities in rearrangement invariant function spaces' by W.B. Johnson and G. Schechtman
- Author
-
Pawel Hitczenko
- Subjects
Invariant function, Discrete mathematics, Inequality, Martingale (probability theory), Mathematics - Published
- 1991
29. Papers of G. Cross, Y. Kubota, J.L. Mawhin, M. Morayne, W.F. Pfeffer and W.-C. Yang, and C.A. Rogers
- Author
-
Patrick Muldowney, Jean Mawhin, Washek F. Pfeffer, Peter S. Bullen, and Peng Yee Lee
- Subjects
Combinatorics, Mathematics - Published
- 1990
30. Notes on the Papers on Geometry of Numbers and on Diophantine Approximations
- Author
-
Edmund Hlawka
- Subjects
Pure mathematics, Geometry of numbers, Diophantine equation, Mathematical analysis, Mathematics - Published
- 1990
31. List of papers on or relevant to groups of self-homotopy equivalences
- Author
-
Renzo A. Piccinini
- Subjects
Discrete mathematics, Pure mathematics, Homotopy group, Homotopy, Mathematics - Published
- 1990
32. Notes on the papers on geometry of numbers and on Diophantine approximations
- Author
-
Wolfgang M. Schmidt and Peter M. Gruber
- Subjects
Pure mathematics, Geometry of numbers, Diophantine equation, Mathematical analysis, Mathematics - Published
- 1990
33. Optimality and fairness of partisan gerrymandering
- Author
-
Tristan Tomala and Antoine Lagarde
- Subjects
Infinite number, Fairness, Optimality, Full Length Paper, General Mathematics, Gerrymandering, Stochastic game, 91B12, Bayesian persuasion, Outcome (game theory), Microeconomics, Function (engineering), Districting, Software, Legislator, Mathematics - Abstract
We consider the problem of optimal partisan gerrymandering: a legislator in charge of redrawing the boundaries of equal-sized congressional districts wants to ensure the best electoral outcome for his own party. The so-called gerrymanderer faces two issues: the number of districts is finite and there is uncertainty at the level of each district. Solutions to this problem consist in cracking favorable voters across as many districts as possible to get tight majorities, and in packing unfavorable voters into the remaining districts. The optimal payoff of the gerrymanderer tends to increase as the uncertainty decreases and the number of districts grows. With an infinite number of districts, the problem boils down to concavifying a function, similarly to the optimal Bayesian persuasion problem. We introduce a measure of fairness and show that optimal gerrymandering is accordingly closer to uniform districting (full cracking), which is most unfair, than to community districting (full packing), which is very fair.
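The cracking-versus-packing logic can be made concrete with a toy deterministic example (illustrative numbers, not from the paper):

```python
def seats_won(plan):
    """Seats won by the gerrymanderer's party: a district is carried
    when favorable voters form a strict majority of its total."""
    return sum(2 * favorable > total for favorable, total in plan)

# 55 favorable voters out of 100, split into five 20-voter districts.
cracked = [(11, 20)] * 5                                   # tight majorities everywhere
packed = [(20, 20), (20, 20), (15, 20), (0, 20), (0, 20)]  # supporters concentrated
# Cracking carries all 5 districts; packing carries only 3.
```

With district-level uncertainty those razor-thin cracked majorities become risky, which is why the optimal plan in the paper blends the two extremes.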
- Published
- 2021
34. High predictive QSAR models for predicting the SARS coronavirus main protease inhibition activity of ketone-based covalent inhibitors
- Author
-
Mohammad Kohnehpoushi, Raouf Ghavami, and Bakhtyar Sepehri
- Subjects
2019-20 coronavirus outbreak, Quantitative structure–activity relationship, Original Paper, Ketone, QSAR, SARS-CoV-2, Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), SARS coronavirus main protease, SARS-CoV-1, COVID-19, General Chemistry, 3CLpro inhibition activity, Covalent bond, Computational chemistry, Molecular descriptor, Test set, Mathematics - Abstract
In this research, a dataset of 29 ketone-based covalent inhibitors with SARS-CoV-1 3CLpro inhibition activity was used to develop highly predictive QSAR models. Twenty-two molecules were put in the training set and seven in the test set. Using a stepwise MLR method on the training-set molecules, four molecular descriptors (Mor26p, Hy, GATS7p and Mor04v) were selected to build the QSAR models. MLR and ANN methods were used to create QSAR models for predicting the activity of molecules in both sets. Both QSAR models were validated by calculating several statistical parameters. R² values for the test sets of the MLR and ANN models were 0.93 and 0.95, respectively, and the corresponding RMSE values were 0.24 and 0.17. Other calculated statistical parameters (especially the Q²_F3 parameter) show that the ANN model has more predictive power than the four-descriptor MLR model. Leverages calculated for all molecules show that the predicted pIC50 values (from both QSAR models) are acceptable for all molecules, and residual plots show that there is no systematic error in the building of either model. Finally, the molecular descriptors used in the MLR model were interpreted.
- Published
- 2021
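The descriptor-selection step can be illustrated with a toy sketch. This is not the paper's stepwise MLR (which adds and drops predictors based on the fitted multiple regression); as a simplified stand-in, it ranks candidate descriptors by absolute Pearson correlation with the activity. The descriptor names reuse those reported in the abstract, but every number below is invented for illustration:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def rank_descriptors(descriptors, y, k=4):
    """Keep the k descriptors most correlated (in absolute value)
    with the activity vector y."""
    scored = sorted(descriptors.items(),
                    key=lambda kv: abs(pearson_r(kv[1], y)),
                    reverse=True)
    return [name for name, _ in scored[:k]]

# Invented pIC50 values and descriptor columns for five molecules.
y = [1.0, 2.1, 2.9, 4.2, 5.0]
desc = {
    "Mor26p": [1.1, 2.0, 3.0, 4.1, 5.1],   # tracks y closely
    "Hy":     [5.0, 4.0, 3.1, 2.0, 1.2],   # strong negative correlation
    "GATS7p": [0.3, 0.1, 0.4, 0.2, 0.5],   # essentially noise
}
print(rank_descriptors(desc, y, k=2))  # the noisy descriptor is dropped
```

A real workflow would then fit the MLR/ANN models on the retained descriptors and validate on the held-out test set, as the abstract describes.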
35. True COVID-19 mortality rates from administrative data
- Author
-
Depalo, Domenico
- Subjects
Selection bias ,Economics and Econometrics ,Original Paper ,Coronavirus disease 2019 (COVID-19) ,I18 ,media_common.quotation_subject ,Mortality rate ,Yield (finance) ,05 social sciences ,Inference ,COVID-19 ,Identification (information) ,Bounds ,0502 economics and business ,Credibility ,Econometrics ,C81 ,C24 ,050207 economics ,Mortality ,Set (psychology) ,050205 econometrics ,Demography ,media_common ,Mathematics - Abstract
In this paper, I use administrative data to estimate the number of deaths, the number of infections, and mortality rates from COVID-19 in Lombardia, the hot spot of the disease in Italy and Europe. The information will assist policy makers in reaching correct decisions and the public in adopting appropriate behaviors. As the available data suffer from sample selection bias, I use partial identification to derive the above quantities. Partial identification combines assumptions with the data to deliver a set of admissible values or bounds. Stronger assumptions yield stronger conclusions but decrease the credibility of the inference. Therefore, I start with assumptions that are always satisfied, then I impose increasingly more restrictive assumptions. Using my preferred bounds, during March 2020 in Lombardia, there were between 10,000 and 18,500 more deaths than in previous years. The narrowest bounds of mortality rates from COVID-19 are between 0.1 and 7.5%, much smaller than the 17.5% discussed in earlier reports. This finding suggests that the case of Lombardia may not be as special as some argue.
- Published
- 2020
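The worst-case logic behind partial identification can be sketched in a few lines. The bounds below follow the general "no assumption" pattern (fewest plausible deaths over the largest plausible infected population, and vice versa); the function name and all input numbers are hypothetical, not the Lombardia figures the paper derives:

```python
def mortality_rate_bounds(confirmed_deaths, excess_deaths_max,
                          confirmed_infections, population):
    """Rate = deaths / infections.
    Lower bound: only confirmed deaths occurred, and in principle the
    whole population could have been infected.
    Upper bound: all excess deaths were COVID-19 deaths, and only
    confirmed cases were infected."""
    lower = confirmed_deaths / population
    upper = excess_deaths_max / confirmed_infections
    return lower, upper

# Illustrative inputs only.
lo, hi = mortality_rate_bounds(
    confirmed_deaths=8_000,
    excess_deaths_max=18_500,
    confirmed_infections=250_000,
    population=10_000_000,
)
print(f"mortality rate in [{lo:.4f}, {hi:.4f}]")
```

Adding stronger assumptions (e.g. a floor on the true number of infections) shrinks the interval, at the cost of credibility, which is exactly the trade-off the abstract describes.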
36. Does linear equating improve prediction in mapping? Crosswalking MacNew onto EQ-5D-5L value sets
- Author
-
Admassu Nadew Lamu
- Subjects
Adult ,Male ,Canada ,Mean squared error ,Cost-Benefit Analysis ,Economics, Econometrics and Finance (miscellaneous) ,Coronary Disease ,Heart disease ,03 medical and health sciences ,QALY ,0302 clinical medicine ,C1 ,Utility ,EQ-5D ,I1 ,Germany ,Equating ,Statistics ,Health Status Indicators ,Humans ,Generalizability theory ,030212 general & internal medicine ,Mathematics ,Parametric statistics ,Aged ,Original Paper ,Norway ,030503 health policy & services ,Health Policy ,Australia ,Function (mathematics) ,Middle Aged ,Economic evaluation ,United States ,Concordance correlation coefficient ,EQ-5D-5L ,Mapping ,England ,Linear Models ,Female ,Quality-Adjusted Life Years ,MacNew ,0305 other medical science ,Value (mathematics) ,Algorithms - Abstract
Purpose Preference-based measures are essential for producing quality-adjusted life years (QALYs), which are widely used in economic evaluations. In the absence of such measures, mapping algorithms can be applied to estimate utilities from disease-specific measures. This paper aims to develop mapping algorithms between the MacNew Heart Disease Quality of Life Questionnaire (MacNew) instrument and the English and US-based EQ-5D-5L value sets. Methods Individuals with heart disease were recruited from six countries (Australia, Canada, Germany, Norway, the UK and the US) in 2011/12. Both parametric and non-parametric statistical techniques were applied to estimate mapping algorithms that predict EQ-5D-5L utilities from MacNew scores. The optimal algorithm for each country-specific value set was selected primarily on root mean square error (RMSE), mean absolute error (MAE), concordance correlation coefficient (CCC) and r-squared. Leave-one-out cross-validation was conducted to test the generalizability of each model. Results For both the English and the US value sets, the one-inflated beta regression model consistently performed best on all criteria, and similar results were observed under cross-validation. The preferred model explained 59% and 60% of the variance for the English and US value sets, respectively. Linear equating provided predicted values that were equivalent to observed values. Conclusions The preferred mapping function makes it possible to predict utilities for MacNew data from the EQ-5D-5L value sets recently developed in England and the US with good accuracy. This allows studies that have included the MacNew to be used in cost-utility analyses, and thus allows comparison of services and interventions across the health system.
- Published
- 2020
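Linear equating itself is a simple transformation: predictions are rescaled so that their mean and standard deviation match the observed utility distribution. The sketch below shows the generic formula with made-up numbers; the paper's actual equating is applied to the fitted mapping model's predictions:

```python
def linear_equate(pred, obs_mean, obs_sd):
    """Rescale predictions so their mean and (population) SD match the
    observed distribution: x -> obs_mean + (x - m) * obs_sd / s.
    Assumes pred is not constant (s > 0)."""
    n = len(pred)
    m = sum(pred) / n
    s = (sum((p - m) ** 2 for p in pred) / n) ** 0.5
    return [obs_mean + (p - m) * obs_sd / s for p in pred]

# Hypothetical mapped utilities, equated to an observed mean/SD.
print(linear_equate([0.2, 0.4, 0.6], obs_mean=0.5, obs_sd=0.1))
```

After equating, the predicted distribution matches the observed first two moments exactly, which is why the abstract reports predicted values "equivalent to observed values".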
37. A human-like artificial intelligence for mathematics
- Author
-
Alonso-Diaz, Santiago
- Published
- 2024
- Full Text
- View/download PDF
38. Robust fitting of mixtures of GLMs by weighted likelihood
- Author
-
Luca Greco
- Subjects
0106 biological sciences ,Statistics and Probability ,Generalized linear model ,Economics and Econometrics ,Maximum likelihood ,Sample (statistics) ,MSC 62H30 ,010603 evolutionary biology ,01 natural sciences ,010104 statistics & probability ,MSC 62H25 ,Weighted likelihood ,Expectation–maximization algorithm ,Mixture ,Outliers ,0101 mathematics ,Mathematics ,Original Paper ,Applied Mathematics ,Classification ,EM ,Modeling and Simulation ,Outlier ,MSC 62G35 ,MSC 62F35 ,GLM ,Algorithm ,Social Sciences (miscellaneous) ,Analysis - Abstract
Finite mixtures of generalized linear models are commonly fitted by maximum likelihood and the EM algorithm. The estimation process and subsequent inferential and classification procedures can be badly affected by the occurrence of outliers. Actually, contamination in the sample at hand may lead to severely biased fitted components and poor classification accuracy. In order to take into account the potential presence of outliers, a robust fitting strategy is proposed that is based on the weighted likelihood methodology. The technique exhibits a satisfactory behavior in terms of both fitting and classification accuracy, as confirmed by some numerical studies and real data examples.
- Published
- 2021
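The core idea of robustifying the fit, downweighting observations that sit far from the current fitted model before the next estimation step, can be sketched in miniature. This is not the paper's weighted-likelihood estimating equations for mixtures of GLMs; as a rough one-parameter stand-in, the sketch uses Huber-type weights around a location estimate:

```python
def downweight_step(x, mu, sigma, c=2.0):
    """One reweighted location update: observations within c*sigma of mu
    get full weight; farther ones are shrunk proportionally (Huber-type
    weights, used here as a stand-in for the weighted-likelihood scheme)."""
    weights = [1.0 if abs(v - mu) <= c * sigma else c * sigma / abs(v - mu)
               for v in x]
    mu_new = sum(w * v for w, v in zip(weights, x)) / sum(weights)
    return mu_new, weights

# A gross outlier (100.0) barely moves the weighted update.
mu_new, w = downweight_step([0.0, 1.0, 2.0, 100.0], mu=1.0, sigma=1.0)
print(mu_new, w)
```

In the mixture setting, weights like these enter the E- and M-steps of the EM algorithm, which is what keeps contaminated observations from biasing the fitted components.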
39. Tracing the origin of paracetamol tablets by near-infrared, mid-infrared, and nuclear magnetic resonance spectroscopy using principal component analysis and linear discriminant analysis
- Author
-
Yulia B. Monakhova, Curd Schollmayer, Ulrike Holzgrabe, and Alexander Becht
- Subjects
Linear discriminant analysis ,Manufacturer ,Mid infrared ,02 engineering and technology ,Tracing ,01 natural sciences ,Biochemistry ,Analytical Chemistry ,Mathematics ,Acetaminophen ,Principal Component Analysis ,business.industry ,Spectrum Analysis ,010401 analytical chemistry ,Near-infrared spectroscopy ,1H NMR ,Discriminant Analysis ,Pattern recognition ,Nuclear magnetic resonance spectroscopy ,Analgesics, Non-Narcotic ,021001 nanoscience & nanotechnology ,0104 chemical sciences ,ddc:540 ,Principal component analysis ,Multivariate Analysis ,IR ,Artificial intelligence ,0210 nano-technology ,business ,Research Paper ,Tablets - Abstract
Most drugs are no longer produced by pharmaceutical companies in their home countries, but by contract manufacturers or at manufacturing sites in countries with lower production costs. This not only makes the drugs difficult to trace back but also leaves room for criminal organizations to fake them unnoticed. For these reasons, it is becoming increasingly difficult to determine the exact origin of drugs. The goal of this work was to investigate how precisely this is possible using different spectroscopic methods, namely nuclear magnetic resonance and near- and mid-infrared spectroscopy, in combination with multivariate data analysis. As an example, 56 of 64 different paracetamol preparations, collected from 19 countries around the world, were chosen to investigate whether it is possible to determine the pharmaceutical company, manufacturing site, or country of origin. With suitable pre-processing of the spectra and the different information contained in each method, principal component analysis was able to reveal manufacturing relationships between individual companies and to differentiate between production sites and formulations. Linear discriminant analysis showed different results depending on the spectral method and purpose. For all spectroscopic methods, classifying preparations by manufacturer achieved better results than classifying them by pharmaceutical company. The best results were obtained with nuclear magnetic resonance and near-infrared data, with 94.6%/99.6% and 98.7%/100% of the spectra correctly assigned to their pharmaceutical company or manufacturer, respectively. Supplementary Information The online version contains supplementary material available at 10.1007/s00216-021-03249-z.
- Published
- 2021
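The pipeline described, dimensionality reduction followed by a discriminant rule, can be sketched without any chemistry. The toy below reduces "spectra" to one principal component via power iteration and classifies by nearest class mean in that score space; the data are invented two-feature vectors, not real NMR/IR spectra, and in practice scikit-learn's PCA and LinearDiscriminantAnalysis would be the natural tools:

```python
def center(X):
    """Column-center X; return centered copy and the column means."""
    m = [sum(col) / len(X) for col in zip(*X)]
    return [[x - mi for x, mi in zip(row, m)] for row in X], m

def first_pc(X, iters=200):
    """Leading eigenvector of X^T X via power iteration."""
    d = len(X[0])
    v = [1.0] * d
    for _ in range(iters):
        Xv = [sum(a * b for a, b in zip(row, v)) for row in X]   # X v
        w = [sum(X[i][j] * Xv[i] for i in range(len(X))) for j in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return v

def fit_predict(X, labels, query):
    """Project onto the first PC, then assign query to the nearest
    class mean in the 1-D score space (a crude LDA stand-in)."""
    Xc, mean = center(X)
    pc = first_pc(Xc)
    score = lambda row: sum((a - mi) * c for a, mi, c in zip(row, mean, pc))
    centroids = {lab: sum(score(x) for x, l in zip(X, labels) if l == lab) /
                      labels.count(lab)
                 for lab in set(labels)}
    q = score(query)
    return min(centroids, key=lambda lab: abs(centroids[lab] - q))

# Two "manufacturers" whose toy spectra differ along one direction.
X = [[1.0, 0.1], [1.2, 0.0], [-1.0, 0.1], [-1.1, 0.0]]
labels = ["A", "A", "B", "B"]
print(fit_predict(X, labels, [0.9, 0.05]))
```

Real spectra have thousands of channels, so the pre-processing and the number of retained components matter far more than in this two-feature toy.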
40. Original Scientific Papers Wissenschaftliche Originalarbeiten
- Author
-
Walter Blum, Helmut Rechenberg, Werner Heisenberg, and Hans-Peter Dürr
- Subjects
symbols.namesake ,Theoretical physics ,Pauli exclusion principle ,Spinor ,Group (mathematics) ,symbols ,Unified field theory ,Mathematics - Abstract
This is the final volume of Heisenberg's Collected Works. It contains his papers on a (nonlinear) unified theory of elementary particles, as well as his contributions to superconductivity and multiparticle production. Especially interesting is the first group of papers, which is split into two sections dealing with, firstly, the formulation of the famous nonlinear spinor equation and, secondly, its applications. Among others, the reader will find a thorough discussion of Heisenberg's collaboration with W. Pauli on these matters. Illuminating annotations to the various sections in this volume have been provided by H. Koppe, R. Hagedorn and the editors.
- Published
- 1985
41. When mathematics has spirit: Aki Chike Win
- Author
-
Robinson, Loretta, West, Karen, Daoust, Melissa, Sylliboy, Simon, Lafferty, Anita, Wiseman, Dawn, Lunney Borden, Lisa, Ghostkeeper, Elmer, Glanfield, Florence, Ribbonleg, Monica, and Bernard, Kyla
- Published
- 2023
- Full Text
- View/download PDF
42. Transfer of graph constructs in Goguen’s paper to net constructs
- Author
-
Wolfgang Hinderer
- Subjects
Data flow diagram ,Discrete mathematics ,Transfer (group theory) ,Flow (mathematics) ,Computer Science::Programming Languages ,Graph (abstract data type) ,Sheaf ,Homomorphism ,State (functional analysis) ,Net (mathematics) ,Mathematics - Abstract
Goguen /1/ presents in his paper a definition of sequential programs over a given graph as flow diagrams, and a definition of program homomorphism that allows paths to be contracted and expanded. The nodes of a flow diagram correspond to global state vectors, and the paths between two nodes correspond to the global state transformations of the program.
- Published
- 1982
43. The use of remote sensing to derive maize sowing dates for large-scale crop yield simulations
- Author
-
Javier Gonzalez, Ehsan Eyshi Rezaei, Olena Dubovyk, Natalie Cornish, Gohar Ghazaryan, and Stefan Siebert
- Subjects
Atmospheric Science ,010504 meteorology & atmospheric sciences ,Health, Toxicology and Mutagenesis ,01 natural sciences ,Zea mays ,Crop ,symbols.namesake ,South Africa ,Soil ,Yield (wine) ,Precipitation ,0105 earth and related environmental sciences ,Mathematics ,Remote sensing ,Drought ,MODIS ,Maize ,Crop modeling ,Sowing date ,Original Paper ,Ecology ,Crop yield ,Sowing ,Agriculture ,04 agricultural and veterinary sciences ,Pearson product-moment correlation coefficient ,Field (geography) ,Remote Sensing Technology ,040103 agronomy & agriculture ,symbols ,0401 agriculture, forestry, and fisheries ,Scale (map) - Abstract
One of the major sources of uncertainty in large-scale crop modeling is the lack of information capturing the spatiotemporal variability of crop sowing dates. Remote sensing can contribute to reducing such uncertainties by providing essential spatial and temporal information to crop models and improving the accuracy of yield predictions. However, little is known about the impacts of the differences in crop sowing dates estimated by using remote sensing (RS) and other established methods, the uncertainties introduced by the thresholds used in these methods, and the sensitivity of simulated crop yields to these uncertainties in crop sowing dates. In the present study, we performed a systematic sensitivity analysis using various scenarios. The LINTUL-5 crop model implemented in the SIMPLACE modeling platform was applied during the period 2001–2016 to simulate maize yields across four provinces in South Africa using previously defined scenarios of sowing dates. As expected, the selected methodology and the selected threshold considerably influenced the estimated sowing dates (up to 51 days) and resulted in differences in the long-term mean maize yield reaching up to 1.7 t ha⁻¹ (48% of the mean yield) at the province level. Using RS-derived sowing date estimations resulted in a better representation of the yield variability in space and time, since the use of RS information not only relies on precipitation but also captures the impacts of socioeconomic factors on the sowing decision, particularly for smallholder farmers. The model was not able to reproduce the observed yield anomalies in Free State (Pearson correlation coefficient: 0.16 to 0.23) and Mpumalanga (Pearson correlation coefficient: 0.11 to 0.18) in South Africa when using fixed and precipitation rule-based sowing date estimations. Further research with high-resolution climate and soil data and ground-based observations is required to better understand the sources of the uncertainties in RS information and to test whether the results presented herein can be generalized among crop models with different levels of complexity and across distinct field crops.
- Published
- 2020
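One of the baselines the study compares against is a precipitation-rule sowing date. A minimal sketch of such a rule is below; the window length and rainfall threshold are assumptions for illustration, not the study's calibrated values:

```python
def sowing_day(daily_rain_mm, window=7, threshold_mm=25.0):
    """Return the first day index whose trailing `window` days accumulate
    at least `threshold_mm` of rain (a common style of sowing rule),
    or None if the threshold is never reached."""
    for day in range(window - 1, len(daily_rain_mm)):
        if sum(daily_rain_mm[day - window + 1: day + 1]) >= threshold_mm:
            return day
    return None

# Invented daily rainfall (mm): a dry spell, then an onset of rains.
rain = [0, 0, 2, 0, 1, 0, 0, 5, 12, 10, 3, 0]
print(sowing_day(rain))
```

Rules like this depend only on precipitation, which is exactly the limitation the abstract points to: RS-derived dates also reflect socioeconomic factors in the actual sowing decision.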
44. A number-theoretical note on Cornish's paper
- Author
-
Peter Leske and Jane Pitman
- Subjects
Cornish ,language ,Genealogy ,language.human_language ,Mathematics
- Published
- 1983
45. A remark on a paper of K. Ramachandra
- Author
-
Imre Kátai
- Subjects
Mathematics
- Published
- 1985
46. Addendum to my paper a brief summary of some results in the analytic theory of numbers
- Author
-
K. Ramachandra
- Subjects
Number theory ,Calculus ,Addendum ,Mathematics
- Published
- 1982
47. Appendix to the paper 'complete families of stable vector bundles over ℙ'
- Author
-
K. Hulek and S. A. Strømme
- Subjects
Pure mathematics ,Chern class ,Vector bundle ,Mathematics
- Published
- 1986
48. A J-homomorphism associated with a space of empty varieties (addenda and corrigenda to two papers on the J-homomorphism)
- Author
-
Victor Snaith and Robert A. Seymour
- Subjects
Combinatorics ,J-homomorphism ,Topology ,Space (mathematics) ,Mathematics
- Published
- 1979
49. Introduction to the paper by A.S. Bakaj
- Author
-
F. Verhulst
- Subjects
Canonical variable ,Canonical transformation ,Mathematics ,Integral manifold ,Mathematical physics ,Hamiltonian system
- Published
- 1983
50. The papers of Alfred Young 1873–1940
- Author
-
Gilbert de B. Robinson
- Subjects
Algebra ,Ladder operator ,Binary form ,Irreducible representation ,Representation theory ,Mathematics
- Published
- 1977