320 results for "Selective sampling"
Search Results
2. Scoping Review of Active Learning Strategies and Their Evaluation Environments for Entity Recognition Tasks
- Author
-
Kohl, Philipp, Krämer, Yoka, Fohry, Claudia, Kraft, Bodo, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Fred, Ana, editor, Hadjali, Allel, editor, Gusikhin, Oleg, editor, and Sansone, Carlo, editor
- Published
- 2024
- Full Text
- View/download PDF
3. Selective sampling with Gromov–Hausdorff metric: Efficient dense-shape correspondence via Confidence-based sample consensus
- Author
-
Dvir Ginzburg and Dan Raviv
- Subjects
Dense-shape correspondence, Spatial information, Neural networks, Spectral maps, Selective sampling, Computer engineering. Computer hardware, TK7885-7895
- Abstract
Background: Functional mapping, despite its proven efficiency, suffers from a "chicken or egg" scenario, in that poor spatial features lead to inadequate spectral alignment and vice versa during training, often resulting in slow convergence, high computational costs, and learning failures, particularly when small datasets are used. Methods: A novel method is presented for dense-shape correspondence, whereby the spatial information transformed by neural networks is combined with the projections onto spectral maps to overcome the "chicken or egg" challenge by selectively sampling only points with high confidence in their alignment. These points then contribute to the alignment and spectral loss terms, boosting training and accelerating convergence by a factor of five. To ensure fully unsupervised learning, the Gromov–Hausdorff distance metric was used to select the points with the maximal alignment score, i.e., those displaying the most confidence. Results: The effectiveness of the proposed approach was demonstrated on several benchmark datasets, where the reported results were superior to those of spectral and spatial-based methods. Conclusions: The proposed method provides a promising new approach to dense-shape correspondence, addressing the key challenges in the field and offering significant advantages over current methods, including faster convergence, improved accuracy, and reduced computational costs.
- Published
- 2024
- Full Text
- View/download PDF
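As a rough, hypothetical illustration of the confidence-based point selection described in the entry above (not the paper's actual Gromov–Hausdorff criterion; all names and thresholds below are invented for the sketch):

```python
# Toy illustration of confidence-based selective sampling of correspondences:
# from a soft correspondence matrix, keep only the source points whose best
# match is most confident and use those in the loss terms. The confidence
# score and the 20% cutoff are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_src, n_tgt = 100, 120
similarity = rng.random((n_src, n_tgt))            # stand-in for learned feature similarities

# Softmax over targets -> soft correspondence matrix.
P = np.exp(similarity) / np.exp(similarity).sum(axis=1, keepdims=True)

confidence = P.max(axis=1)                         # how peaked each row is
keep = np.argsort(confidence)[-int(0.2 * n_src):]  # top 20% most confident source points

matches = P[keep].argmax(axis=1)
print(f"using {len(keep)} of {n_src} points; e.g. source {keep[0]} -> target {matches[0]}")
```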
4. Active learning for data streams: a survey.
- Author
-
Cacciarelli, Davide and Kulahci, Murat
- Subjects
ACTIVE learning, SUPERVISED learning, ONLINE education, SECURE Sockets Layer (Computer network protocol), MACHINE performance, LEARNING strategies
- Abstract
Online active learning is a paradigm in machine learning that aims to select the most informative data points to label from a data stream. The problem of minimizing the cost associated with collecting labeled observations has gained a lot of attention in recent years, particularly in real-world applications where data is only available in an unlabeled form. Annotating each observation can be time-consuming and costly, making it difficult to obtain large amounts of labeled data. To overcome this issue, many active learning strategies have been proposed in the last decades, aiming to select the most informative observations for labeling in order to improve the performance of machine learning models. These approaches can be broadly divided into two categories: static pool-based and stream-based active learning. Pool-based active learning involves selecting a subset of observations from a closed pool of unlabeled data, and it has been the focus of many surveys and literature reviews. However, the growing availability of data streams has led to an increase in the number of approaches that focus on online active learning, which involves continuously selecting and labeling observations as they arrive in a stream. This work aims to provide an overview of the most recently proposed approaches for selecting the most informative observations from data streams in real time. We review the various techniques that have been proposed and discuss their strengths and limitations, as well as the challenges and opportunities that exist in this area of research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
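A minimal sketch of the stream-based (online) active learning setting surveyed above, using least-confidence uncertainty sampling under a labeling budget; the model, threshold, and simulated stream are illustrative assumptions, not taken from the survey:

```python
# Stream-based active learning sketch: query a label only when the current
# model is uncertain and the labeling budget is not exhausted.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier(loss="log_loss", random_state=0)
classes = np.array([0, 1])

# Seed the model with a few labeled points so predict_proba is usable.
X_seed = rng.normal(size=(10, 2))
y_seed = (X_seed[:, 0] + X_seed[:, 1] > 0).astype(int)
clf.partial_fit(X_seed, y_seed, classes=classes)

budget, queried = 50, 0
for t in range(2000):                       # simulated unlabeled stream
    x = rng.normal(size=(1, 2))
    p = clf.predict_proba(x)[0]
    uncertainty = 1.0 - p.max()             # least-confidence score
    if uncertainty > 0.4 and queried < budget:
        y = int(x[0, 0] + x[0, 1] > 0)      # oracle label (simulated)
        clf.partial_fit(x, [y])
        queried += 1
print(f"labels queried: {queried} / {budget}")
```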
5. An Insufficiently Substantiated Claim Based on a Confirmation Strategy: Comment on Bartels' "Indoctrination in Introduction to Psychology".
- Author
-
Ermark, Florian and Plessner, Henning
- Subjects
PSYCHOLOGY education, CRITICAL thinking, PSYCHOLOGY textbooks
- Abstract
In his target article on "Indoctrination in Introduction to Psychology," Bartels proposes that in introductory psychology textbooks, studies and their results are systematically presented in such a way that they tend to correspond to left-liberal political positions, and that the state of psychological knowledge is thereby reflected in a correspondingly distorted way. In our commentary, we clarify that the evidence Bartels presents for this claim is insufficient. First, he takes a purely hypothesis-confirming approach based on selective sampling. Second, he draws an invalid causal inference from a supposed liberal majority in the psychological community to their representation of psychological content in textbooks. And third, he assigns introductory textbooks a function that we believe they do not have. Nonetheless, we welcome the discussion of how best to teach critical reflective thinking in psychology courses. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
6. Active learning : an explicit treatment of unreliable parameters
- Author
-
Becker, Markus and Osborne, Miles
- Subjects
006.3, Informatics, Computer Science, selective sampling, machine learning, natural language processing, active learning
- Abstract
Active learning reduces annotation costs for supervised learning by concentrating labelling efforts on the most informative data. Most active learning methods assume that the model structure is fixed in advance and focus upon improving parameters within that structure. However, this is not appropriate for natural language processing where the model structure and associated parameters are determined using labelled data. Applying traditional active learning methods to natural language processing can fail to produce expected reductions in annotation cost. We show that one of the reasons for this problem is that active learning can only select examples which are already covered by the model. In this thesis, we better tailor active learning to the need of natural language processing as follows. We formulate the Unreliable Parameter Principle: Active learning should explicitly and additionally address unreliably trained model parameters in order to optimally reduce classification error. In order to do so, we should target both missing events and infrequent events. We demonstrate the effectiveness of such an approach for a range of natural language processing tasks: prepositional phrase attachment, sequence labelling, and syntactic parsing. For prepositional phrase attachment, the explicit selection of unknown prepositions significantly improves coverage and classification performance for all examined active learning methods. For sequence labelling, we introduce a novel active learning method which explicitly targets unreliable parameters by selecting sentences with many unknown words and a large number of unobserved transition probabilities. For parsing, targeting unparseable sentences significantly improves coverage and f-measure in active learning.
- Published
- 2008
7. Selective Sampling and Optimal Filtering for Subpixel-Based Image Down-Sampling
- Author
-
Sung-Ho Chae, Sung-Tae Kim, Joon-Yeon Kim, Cheol-Hwan Yoo, and Sung-Jea Ko
- Subjects
Aliasing, color-fringing, frequency domain analysis, image down-sampling, optimal filtering, selective sampling, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Subpixel-based image down-sampling has been widely used to improve the apparent resolution of down-sampled images on display. However, previous subpixel rendering methods often introduce distortions, such as aliasing and color-fringing. This study proposes a novel subpixel rendering method that uses selective sampling and optimal filtering. We first generalize the previous frequency domain analysis results indicating the relationships between various down-sampling patterns and the aliasing artifact. Based on this generalized analysis, a subpixel-based down-sampling pattern for each image is selectively determined by utilizing the edge distribution of the image. Moreover, we investigate the origin of the color-fringing artifact in the frequency domain. Optimal spatial filters that can effectively remove distortions caused by the selected down-sampling pattern are designed via frequency domain analyses of aliasing and color-fringing. The experimental results show that the proposed method is not only robust to the aliasing and color-fringing artifacts but also outperforms the existing ones in terms of information preservation.
- Published
- 2019
- Full Text
- View/download PDF
8. “Forward Genetics” as a Method to Maximize Power and Cost-Efficiency in Studies of Human Complex Traits
- Author
-
Boks, MPM, Derks, EM, Dolan, CV, Kahn, RS, and Ophoff, RA
- Subjects
Health Sciences, Genetics, Human Genome, Aetiology, 2.5 Research design and methodologies (aetiology), Computational Biology, Genetics, Population, Genome-Wide Association Study, Genotype, Humans, Models, Genetic, Phenotype, Quantitative Trait, Heritable, Power, Forward genetics, Complex traits, Selective sampling, Phenomics, Zoology, Neurosciences, Psychology, Genetics & Heredity, Biomedical and clinical sciences, Health sciences
- Abstract
There is increasing interest in methods to disentangle the relationship between genotype and (endo)phenotypes in human complex traits. We present a population-based method of increasing the power and cost-efficiency of studies by selecting random individuals with a particular genotype and then assessing the accompanying quantitative phenotypes. Using statistical derivations and power and cost graphs, we show that such a "forward genetics" approach can lead to a marked reduction in sample size and costs. This approach is particularly apt for implementation in epidemiological studies for which DNA is already available but the phenotyping costs are high.
- Published
- 2010
9. Hub-based subspace clustering.
- Author
-
Mani, Priya and Domeniconi, Carlotta
- Subjects
- *NETWORK hubs, *DATA mining, *ALGORITHMS
- Abstract
Data often exists in subspaces embedded within a high-dimensional space. Subspace clustering seeks to group data according to the dimensions relevant to each subspace. This requires the estimation of subspaces as well as the clustering of data. Subspace clustering becomes increasingly challenging in high dimensional spaces due to the curse of dimensionality which affects reliable estimations of distances and density. Recently, another aspect of high-dimensional spaces has been observed, known as the hubness phenomenon, whereby few data points appear frequently as nearest neighbors of the rest of the data. The distribution of neighbor occurrences becomes skewed with increasing intrinsic dimensionality of the data, and few points with high neighbor occurrences emerge as hubs. Hubs exhibit useful geometric properties and have been leveraged for clustering data in the full-dimensional space. In this paper, we study hubs in the context of subspace clustering. We present new characterizations of hubs in relation to subspaces, and design graph-based meta-features to identify a subset of hubs which are well fit to serve as seeds for the discovery of local latent subspaces and clusters. We propose and evaluate a hubness-driven algorithm to find subspace clusters, and show that our approach is superior to the baselines, and is competitive against state-of-the-art subspace clustering methods. We also identify the data characteristics that make hubs suitable for subspace clustering. Such characterization gives valuable guidelines to data mining practitioners. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
10. Active learning for data streams: a survey
- Author
-
Cacciarelli, Davide and Kulahci, Murat
- Abstract
Online active learning is a paradigm in machine learning that aims to select the most informative data points to label from a data stream. The problem of minimizing the cost associated with collecting labeled observations has gained a lot of attention in recent years, particularly in real-world applications where data is only available in an unlabeled form. Annotating each observation can be time-consuming and costly, making it difficult to obtain large amounts of labeled data. To overcome this issue, many active learning strategies have been proposed in the last decades, aiming to select the most informative observations for labeling in order to improve the performance of machine learning models. These approaches can be broadly divided into two categories: static pool-based and stream-based active learning. Pool-based active learning involves selecting a subset of observations from a closed pool of unlabeled data, and it has been the focus of many surveys and literature reviews. However, the growing availability of data streams has led to an increase in the number of approaches that focus on online active learning, which involves continuously selecting and labeling observations as they arrive in a stream. This work aims to provide an overview of the most recently proposed approaches for selecting the most informative observations from data streams in real time. We review the various techniques that have been proposed and discuss their strengths and limitations, as well as the challenges and opportunities that exist in this area of research.
- Published
- 2023
11. Clustering-Based Optimised Probabilistic Active Learning (COPAL)
- Author
-
Krempl, Georg, Ha, Tuan Cuong, Spiliopoulou, Myra, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Japkowicz, Nathalie, editor, and Matwin, Stan, editor
- Published
- 2015
- Full Text
- View/download PDF
12. Selective vs Complete Sampling in Hysterectomy Specimens Performed for Atypical Hyperplasia.
- Author
-
Bryant, Bronwyn H, Doughty, Elizabeth, and Kalof, Alexandra N
- Subjects
- *HYSTERECTOMY, *HYPERPLASIA, *DIAGNOSIS, *ENDOMETRIUM, *DISEASE risk factors, *COLLECTION & preservation of biological specimens, *BIOPSY, *DIAGNOSTIC errors, *UTERINE diseases, *ENDOMETRIAL tumors, *TREATMENT effectiveness, *RESEARCH bias
- Abstract
Objectives: Atypical hyperplasia of the endometrium is a significant risk factor for uterine endometrioid carcinoma (EC) and an indication for hysterectomy. Standard sampling of these specimens includes evaluation of the entire endometrium to identify possible EC. We evaluated a method of selective sampling in an effort to balance resource utilization with diagnostic accuracy in the detection of EC. Methods: Histologic diagnoses based on selective sampling (exclusion of every other block of endometrium) were compared with the original diagnosis based on complete sampling. Results: Double-blinded review of these cases using selective sampling detected EC in 92% of hysterectomies, including all high-grade/high-stage carcinomas. Selective sampling had an 82% agreement with the original diagnoses, with most discordant diagnoses attributable to interobserver variability. Adjusting for interobserver variability increased diagnostic agreement between selective and complete sampling to 96%. Conclusions: Selective sampling is a feasible method to save time and resources while maintaining diagnostic accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
13. Selective sampling for trees and forests.
- Author
-
Badarna, Murad and Shimshoni, Ilan
- Subjects
- *RANDOM forest algorithms, *DECISION trees
- Abstract
In this paper we describe selective sampling algorithms for decision trees and random forests and their contribution to classification accuracy. In our selective sampling algorithms, the instance that yields the highest expected utility is chosen to be labeled by the expert. We show that it is possible to obtain the most valuable unlabeled instance to be labeled by the expert and added to the training dataset of the decision tree simply by depicting the influence of this new instance on the class probabilities of the leaves. All the unlabeled instances that fall into the same leaf will have the same class probabilities. As a result, we can compute the expected accuracy of the decision tree according to its leaves instead of for each individual unlabeled instance. An extension for random forests is also presented. Moreover, we show that the selective sampling classifier has to belong to the same family as the classifier whose accuracy we wish to improve but need not be identical to it. For example, a random forest classifier can be used for the selective sampling process, and the results can be used to improve the classification accuracy of a decision tree. Likewise, a random forest classifier consisting of three trees can be used in the selective sampling algorithm to improve the classification accuracy of a random forest consisting of ten trees. Our experiments show that the proposed selective sampling algorithms achieve better accuracy than standard random sampling, uncertainty sampling and the active belief decision tree learning approach (ABC4.5) for several real-world datasets. We also show that our selective sampling algorithms significantly improve the classification performance of several state-of-the-art classifiers, such as the random rotation forest classifier, for real-world large-scale datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
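A simplified sketch of the per-leaf scoring idea in the entry above: because all unlabeled points in a leaf share the same class probabilities, candidates can be scored leaf by leaf. This uses plain uncertainty rather than the paper's expected-utility computation, and all names are illustrative:

```python
# Leaf-level selective sampling sketch for a decision tree.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
labeled = np.zeros(len(X), dtype=bool)
labeled[:30] = True                               # small initial training set

tree = DecisionTreeClassifier(max_depth=5, random_state=0)
tree.fit(X[labeled], y[labeled])

leaf_ids = tree.apply(X)                          # leaf index for every point
proba = tree.predict_proba(X)                     # identical within a leaf
uncertainty = 1.0 - proba.max(axis=1)

# Pick the unlabeled instance sitting in the most uncertain leaf.
candidates = np.where(~labeled)[0]
query = candidates[np.argmax(uncertainty[candidates])]
print("query instance", query, "in leaf", leaf_ids[query])
```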
14. Reasoning mechanism: An effective data reduction algorithm for on-line point cloud selective sampling of sculptured surfaces.
- Author
-
Li, Yan, Liu, Haibo, Tao, Ye, and Liao, Jianxing
- Subjects
- *POINT cloud, *DATA reduction, *PARAMETRIC modeling, *ONLINE algorithms, *SAMPLING (Process), *SAMPLING methods
- Abstract
To obtain a high-quality profile of a measured sculptured surface, scanning devices have to produce massive point cloud data at high sampling rates. Bottlenecks are created owing to inefficiencies in storing, manipulating and transferring these data, and the parametric modeling from them is quite time-consuming. The purpose of this paper is to effectively simplify point cloud data from a measured sculptured surface during the on-line point cloud data selective sampling process. The key contribution is a novel reasoning mechanism based on a predictor–corrector scheme, which is capable of eliminating data redundancy caused by spatial similarity of collected point clouds. In particular, this mechanism is embedded in our newly designed framework for on-line point cloud data selective sampling of sculptured surfaces. This framework consists of two stages: first, the initial point data flow is selectively sampled using the bi-Akima method; second, the data flow is refined based on our proposed reasoning mechanism. Moreover, our versatile framework is capable of obtaining high-quality resampling results with a smaller data reduction ratio than other existing on-line point cloud data reduction/selective sampling methods. Experiments were conducted and the results demonstrate the superior performance of the proposed method. • A versatile framework for point cloud data reduction of sculptured surfaces is designed. • A novel reasoning mechanism is proposed and embedded in this framework. • It is able to eliminate data redundancy caused by spatial similarity of point clouds. • It achieves high-quality resampling results with a smaller data reduction ratio than other methods. • 3D scans from six simulated surfaces validate the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
15. The Relationship Between Agnostic Selective Classification, Active Learning and the Disagreement Coefficient.
- Author
-
Gelbhart, Roei and El-Yaniv, Ran
- Subjects
- *STATISTICAL learning, *CLASSIFICATION, *MACHINE learning
- Abstract
A selective classifier (f, g) comprises a classification function f and a binary selection function g, which determines if the classifier abstains from prediction, or uses f to predict. The classifier is called pointwise-competitive if it classifies each point identically to the best classifier in hindsight (from the same class), whenever it does not abstain. The quality of such a classifier is quantified by its rejection mass, defined to be the probability mass of the points it rejects. A "fast" rejection rate is achieved if the rejection mass is bounded from above by Õ(1/m) where m is the number of labeled examples used to train the classifier (and Õ hides logarithmic factors). Pointwise-competitive selective (PCS) classifiers are intimately related to disagreement-based active learning and it is known that in the realizable case, a fast rejection rate of a known PCS algorithm (called Consistent Selective Strategy) is equivalent to an exponential speedup of the well-known CAL active algorithm. We focus on the agnostic setting, for which there is a known algorithm called LESS that learns a PCS classifier and achieves a fast rejection rate (depending on Hanneke's disagreement coefficient) under strong assumptions. We present an improved PCS learning algorithm called ILESS for which we show a fast rate (depending on Hanneke's disagreement coefficient) without any assumptions. Our rejection bound smoothly interpolates the realizable and agnostic settings. The main result of this paper is an equivalence between the following three entities: (i) the existence of a fast rejection rate for any PCS learning algorithm (such as ILESS); (ii) a poly-logarithmic bound for Hanneke's disagreement coefficient; and (iii) an exponential speedup for a new disagreement-based active learner called Active-ILESS. [ABSTRACT FROM AUTHOR]
- Published
- 2019
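A bare-bones selective classifier (f, g) in the sense defined above, with an illustrative confidence-threshold selection function g; this is not the paper's ILESS algorithm, only a sketch of the abstention and rejection-mass bookkeeping:

```python
# Selective classifier sketch: f predicts, g decides whether to abstain.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, flip_y=0.1, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

f = LogisticRegression(max_iter=1000).fit(Xtr, ytr)

def g(x, threshold=0.8):
    """Selection function: True = predict, False = abstain."""
    return f.predict_proba(x).max(axis=1) >= threshold

accept = g(Xte)
rejection_mass = 1.0 - accept.mean()              # empirical rejection mass
acc_on_accepted = (f.predict(Xte[accept]) == yte[accept]).mean()
print(f"rejection mass: {rejection_mass:.2f}, accuracy when not abstaining: {acc_on_accepted:.2f}")
```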
16. Weakly-supervised anomaly detection with a Sub-Max strategy.
- Author
-
Zhang, Bohua and Xue, Jianru
- Subjects
- *ANOMALY detection (Computer security), *OPTICAL flow, *INTRUSION detection systems (Computer security), *VIDEO surveillance, *BOTTLENECKS (Manufacturing)
- Abstract
We study weakly-supervised anomaly detection where only video-level "anomalous"/"normal" labels are available in training, while anomaly events should be temporally localized in testing. For this task, a commonly used framework is multiple instance learning (MIL), where clip instances are sampled from individual videos to form video-level bags. This sampling process arguably is a bottleneck of MIL. If too many instances are sampled, we not only encounter high computational overheads but also have many noisy instances in the bag. On the other hand, when too few instances are used, e. g., through enlarged grids, much background noise may be included in the anomaly instances. To resolve this dilemma, we propose a simple yet effective method named Sub-Max. In partitioned image regions, it identifies instances that are most probable candidates for anomaly events by selecting cuboids that have high optical flow magnitudes. We show that our method effectively brings down the computational cost of the baseline MIL and at the same time significantly filters out the influence of noise. Albeit simple, this strategy is shown to facilitate the learning of discriminative features and thus improve event classification and localization performance. For example, after annotating the event location ground truths of the UCF-Crime test set, we report very competitive accuracy compared with the state of the art on both frame-level and pixel-level metrics, corresponding to classification and localization, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
17. Lookahead selective sampling for incomplete data
- Author
-
Abdallah Loai and Shimshoni Ilan
- Subjects
selective sampling, missing values, ensemble clustering, Mathematics, QA1-939, Electronic computers. Computer science, QA75.5-76.95
- Abstract
Missing values in data are common in real world applications. There are several methods that deal with this problem. In this paper we present lookahead selective sampling (LSS) algorithms for datasets with missing values. We developed two versions of selective sampling. The first one integrates a distance function that can measure the similarity between pairs of incomplete points within the framework of the LSS algorithm. The second algorithm uses ensemble clustering in order to represent the data in a cluster matrix without missing values and then run the LSS algorithm based on the ensemble clustering instance space (LSS-EC). To construct the cluster matrix, we use the k-means and mean shift clustering algorithms especially modified to deal with incomplete datasets. We tested our algorithms on six standard numerical datasets from different fields. On these datasets we simulated missing values and compared the performance of the LSS and LSS-EC algorithms for incomplete data to two other basic methods. Our experiments show that the suggested selective sampling algorithms outperform the other methods.
- Published
- 2016
- Full Text
- View/download PDF
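One ingredient of the approach above, sketched under assumptions: a distance between incomplete points computed over jointly observed features, which is a common convention and may differ from the exact distance used in LSS:

```python
# Distance between possibly-incomplete points (NaN marks a missing value),
# computed over the features observed in both points and rescaled to the
# full dimensionality.
import numpy as np

def incomplete_distance(a, b):
    """Euclidean distance over jointly observed features, rescaled."""
    mask = ~(np.isnan(a) | np.isnan(b))
    if not mask.any():
        return np.inf                        # nothing to compare
    d = np.linalg.norm(a[mask] - b[mask])
    return d * np.sqrt(len(a) / mask.sum())  # compensate for dropped features

x1 = np.array([1.0, np.nan, 3.0, 4.0])
x2 = np.array([1.5, 2.0, np.nan, 4.5])
print(incomplete_distance(x1, x2))
```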
18. Computational Methods for Selective Acquisition of Depth Measurements: An Experimental Evaluation
- Author
-
Payeur, Pierre, Curtis, Phillip, Cretu, Ana-Maria, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Sudan, Madhu, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Vardi, Moshe Y., Series editor, Weikum, Gerhard, Series editor, Blanc-Talon, Jacques, editor, Kasinski, Andrzej, editor, Philips, Wilfried, editor, Popescu, Dan, editor, and Scheunders, Paul, editor
- Published
- 2013
- Full Text
- View/download PDF
19. Estimación del crecimiento de dos especies de Pinus de la Región Centro de Guerrero, México
- Author
-
Brenda Mireya Bretado Medrano, Ezequiel Márquez Bernal, Benedicto Vargas Larreta, Juan Abel Nájera Luna, and Francisco Javier Hernández
- Subjects
Coefficient of determination, Management unit, biology, Selective sampling, General Earth and Planetary Sciences, Forestry, Pinus oocarpa, biology.organism_classification, Pinus pseudostrobus, General Environmental Science, Basal area, Mathematics
- Abstract
Abstract: Applying individual-tree growth models in mixed forests makes it possible to produce estimates at the management-unit level. The objective of this study was to evaluate the Chapman-Richards, Schumacher, Hossfeld I, and Weibull growth models for diameter at breast height, basal area, total height, and stem volume of individual Pinus pseudostrobus and Pinus oocarpa trees from Guerrero, Mexico. Through selective sampling, 27 dominant and 28 codominant trees were collected to reconstruct tree profiles arranged in ten-year groups by means of stem analysis. The best models for each variable were selected on the basis of the adjusted coefficient of determination, the root mean square error, the properties of the parameters, and logical growth trends. The results indicate that the Schumacher model was the best for estimating diameter and height growth in both species, as well as the basal area of Pinus pseudostrobus and the volume of Pinus oocarpa, whereas the Chapman-Richards model was the best for estimating the basal area of Pinus oocarpa and the volume of Pinus pseudostrobus. The estimated rotation ages for volume were 62 years for Pinus oocarpa and 82 years for Pinus pseudostrobus.
- Published
- 2021
- Full Text
- View/download PDF
20. An Improved Active Learning in Unbalanced Data Classification
- Author
-
Park, Woon Jeung, Lee, Changhoon, editor, Seigneur, Jean-Marc, editor, Park, James J., editor, and Wagner, Roland R., editor
- Published
- 2011
- Full Text
- View/download PDF
21. Uncertainty Measure for Selective Sampling Based on Class Probability Output Networks
- Author
-
Kim, Ho-Gyeong, Kil, Rhee Man, Lee, Soo-Young, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Sudan, Madhu, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Vardi, Moshe Y., Series editor, Weikum, Gerhard, Series editor, Lu, Bao-Liang, editor, Zhang, Liqing, editor, and Kwok, James, editor
- Published
- 2011
- Full Text
- View/download PDF
22. The application of molecularly imprinted polymers in passive sampling for selective sampling perfluorooctanesulfonic acid and perfluorooctanoic acid in water environment.
- Author
-
Cao, Fengmei, Wang, Lei, Ren, Xinhao, Wu, Fengchang, Sun, Hongwen, and Lu, Shaoyong
- Subjects
POLYMERS, PERFLUOROOCTANE sulfonate, PERFLUOROOCTANOIC acid, SORBENTS, HYDROGEN-ion concentration
- Abstract
Modeling and prediction of a novel polar organic chemical integrative sampler (POCIS) for the sampling of perfluorooctanoic acid (PFOA) and perfluorooctanesulfonic acid (PFOS), using molecularly imprinted polymers (MIPs) as the receiving phase, are presented in this study. Laboratory microcosm experiments were conducted to investigate the uptake kinetics; the effects of flow velocity, pH, and dissolved organic matter (DOM); and the selectivity of the POCIS. The uptake of PFOA and PFOS on MIP-POCIS was investigated over 14 days. Laboratory calibrations of MIP-POCIS yielded sampling rate (Rs) values for PFOA and PFOS of 0.387 and 0.229 L/d, higher than those of POCIS using the commercial sorbent WAX as the receiving phase (0.133 and 0.141 L/d for PFOA and PFOS, respectively) under quiescent conditions. The Rs values for PFOA and PFOS sampling on MIP-POCIS increased to 0.591 and 0.281 L/d under stirred conditions (0.01 m/s), and no significant increase occurred when the flow velocity was further increased. The Rs values remained relatively high in solutions whose pH was lower than the isoelectric point (IEP) of the MIP sorbent and decreased when the solution pH exceeded the IEP value. Under the experimental conditions, DOM seemed to slightly enhance the Rs values of PFOA and PFOS in MIP-POCIS. The results showed that the interaction between the target compounds and the receiving phase combined imprinting effects with electrostatic interaction. Finally, comparing the sampling rates of WAX-POCIS and MIP-POCIS, the MIP-POCIS offers promising prospects for the selective sampling of PFOA and PFOS. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
23. Efficient Learning from Few Labeled Examples
- Author
-
Wang, Jiao, Luo, Siwei, Zhong, Jingjing, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Sudan, Madhu, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Vardi, Moshe Y., Series editor, Weikum, Gerhard, Series editor, Yu, Wen, editor, He, Haibo, editor, and Zhang, Nian, editor
- Published
- 2009
- Full Text
- View/download PDF
24. Selective Sampling Based on Dynamic Certainty Propagation for Image Retrieval
- Author
-
Zhang, Xiaoyu, Cheng, Jian, Lu, Hanqing, Ma, Songde, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Satoh, Shin’ichi, editor, Nack, Frank, editor, and Etoh, Minoru, editor
- Published
- 2008
- Full Text
- View/download PDF
25. Selective Sampling for Combined Learning from Labelled and Unlabelled Data
- Author
-
Petrakieva, Lina, Gabrys, Bogdan, Kacprzyk, Janusz, editor, Lotfi, Ahamad, editor, and Garibaldi, Jonathan M., editor
- Published
- 2004
- Full Text
- View/download PDF
26. Combining Active Learning and Boosting for Naïve Bayes Text Classifiers
- Author
-
Kim, Han-joon, Kim, Je-uk, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Dough, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Li, Qing, editor, Wang, Guoren, editor, and Feng, Ling, editor
- Published
- 2004
- Full Text
- View/download PDF
27. Selective Sampling with a Hierarchical Latent Variable Model
- Author
-
Mamitsuka, Hiroshi, Goos, Gerhard, editor, Hartmanis, Juris, editor, van Leeuwen, Jan, editor, R. Berthold, Michael, editor, Lenz, Hans-Joachim, editor, Bradley, Elizabeth, editor, Kruse, Rudolf, editor, and Borgelt, Christian, editor
- Published
- 2003
- Full Text
- View/download PDF
28. Learning Probabilistic Linear-Threshold Classifiers via Selective Sampling
- Author
-
Cesa-Bianchi, Nicolò, Conconi, Alex, Gentile, Claudio, Goos, Gerhard, editor, Hartmanis, Juris, editor, van Leeuwen, Jan, editor, Carbonell, Jaime G., editor, Siekmann, Jörg, editor, Schölkopf, Bernhard, editor, and Warmuth, Manfred K., editor
- Published
- 2003
- Full Text
- View/download PDF
29. Index Driven Selective Sampling for CBR
- Author
-
Wiratunga, Nirmalie, Craw, Susan, Massie, Stewart, Carbonell, Jaime G., editor, Siekmann, Jörg, editor, Ashley, Kevin D., editor, and Bridge, Derek G., editor
- Published
- 2003
- Full Text
- View/download PDF
30. Selective Sampling Methods in One-Class Classification Problems
- Author
-
Juszczak, Piotr, Duin, Robert P. W., Goos, Gerhard, editor, Hartmanis, Juris, editor, van Leeuwen, Jan, editor, Kaynak, Okyay, editor, Alpaydin, Ethem, editor, Oja, Erkki, editor, and Xu, Lei, editor
- Published
- 2003
- Full Text
- View/download PDF
31. Improving Naïve Bayes Text Classifier with Modified EM Algorithm
- Author
-
Kim, Han-joon, Chang, Jae-young, Goos, Gerhard, editor, Hartmanis, Juris, editor, van Leeuwen, Jan, editor, Carbonell, Jaime G., editor, Siekmann, Jörg, editor, Zhong, Ning, editor, Raś, Zbigniew W., editor, Tsumoto, Shusaku, editor, and Suzuki, Einoshin, editor
- Published
- 2003
- Full Text
- View/download PDF
32. DAGGER: Instance Selection for Combining Multiple Models Learnt from Disjoint Subsets
- Author
-
Davies, Winton, Edwards, Pete, Liu, Huan, editor, and Motoda, Hiroshi, editor
- Published
- 2001
- Full Text
- View/download PDF
33. Energy Management for Energy Harvesting Wireless Sensors With Adaptive Retransmission.
- Author
-
Yadav, Animesh, Dobre, Octavia A., Goonewardena, Mathew, Ajib, Wessam, and Elbiaze, Halima
- Subjects
- *ENERGY harvesting, *ENERGY management, *WIRELESS sensor networks, *MARKOV processes, *SAMPLING (Process)
- Abstract
This paper analyzes the communication between two energy harvesting wireless sensor nodes. The nodes use automatic repeat request and forward error correction mechanism for the error control. The random nature of available energy and arrivals of harvested energy may induce interruption to the signal sampling and decoding operations. We propose a selective sampling scheme, where the length of the transmitted packet to be sampled depends on the available energy at the receiver. The receiver performs the decoding when complete samples of the packet are available. The selective sampling information bits are piggybacked on the automatic repeat request messages for the transmitter use. This way, the receiver node manages more efficiently its energy use. Besides, we present the partially observable Markov decision process formulation, which minimizes the long-term average pairwise error probability and optimizes the transmit power. Optimal and suboptimal power assignment strategies are introduced for retransmissions, which are adapted to the selective sampling and channel state information. With finite battery size and fixed power assignment policy, an analytical expression for the average packet drop probability is derived. Numerical simulations show the performance gain of the proposed scheme with power assignment strategy over the conventional scheme. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
34. Informed Selection of Training Examples for Knowledge Refinement
- Author
-
Wiratunga, Nirmalie, Craw, Susan, Goos, G., editor, Hartmanis, J., editor, van Leeuwen, J., editor, Carbonell, Jaime G., editor, Siekmann, Jörg, editor, Dieng, Rose, editor, and Corby, Olivier, editor
- Published
- 2000
- Full Text
- View/download PDF
35. Selective-sampling Raman imaging techniques for ex vivo assessment of surgical margins in cancer surgery
- Author
-
Radu Boitor, Ioan Notingher, and Maria Giovanna Lizio
- Subjects
business.industry, Selective sampling, Raman imaging, 01 natural sciences, Biochemistry, Analytical Chemistry, Objective assessment, 010309 optics, 03 medical and health sciences, symbols.namesake, 0302 clinical medicine, Optical imaging, Tissue sections, 030220 oncology & carcinogenesis, 0103 physical sciences, Electrochemistry, symbols, Environmental Chemistry, Medicine, business, Raman spectroscopy, Spectroscopy, Cancer surgery, Ex vivo, Biomedical engineering
One of the main challenges in cancer surgery is to ensure the complete excision of the tumour while sparing as much healthy tissue as possible. Histopathology, the gold-standard technique used to assess the surgical margins on the excised tissue, is often impractical for intra-operative use because of the time-consuming tissue cryo-sectioning and staining, and the availability of histopathologists to assess stained tissue sections. Raman micro-spectroscopy is a powerful technique that can detect microscopic residual tumours on ex vivo tissue samples with accuracy, based entirely on intrinsic chemical differences. However, raster-scanning Raman micro-spectroscopy is a slow imaging technique that typically requires long data acquisition times which are impractical for intra-operative use. Selective-sampling Raman imaging overcomes these limitations by using information regarding the spatial properties of the tissue to reduce the number of Raman spectra. This paper reviews the latest advances in selective-sampling Raman techniques and applications, mainly based on multimodal optical imaging. We also highlight the latest results of the clinical integration of a prototype device for non-melanoma skin cancer. These promising results indicate the potential impact of Raman spectroscopy for providing fast and objective assessment of surgical margins, helping surgeons ensure the complete removal of tumour cells while sparing as much healthy tissue as possible.
- Published
- 2021
- Full Text
- View/download PDF
36. Mitigating the Tsunami of COVID-19 through Sustainable Traceability
- Author
-
Mohamed Buheji
- Subjects
medicine.medical_specialty, Traceability, Coronavirus disease 2019 (COVID-19), Public health, Selective sampling, 06 humanities and the arts, 0603 philosophy, ethics and religion, Sustainable community, Risk analysis (engineering), Preparedness, 060302 philosophy, Sustainability, medicine, Business, General Economics, Econometrics and Finance, Competence (human resources)
- Abstract
Countries have differed in how they responded to and managed the fierce COVID-19 outbreak. Almost all of them agreed on the need for adequate verification and traceability of suspected infected contacts, alongside strict containment and isolation measures. However, at some point life has to return to regular routines in order to meet pressing socio-economic needs, and the literature offers few methods for making that return smoothly while still ensuring that infected individuals, or those likely to spread infection, do not go unidentified. This work focuses on selective traceability as a default system that supports a sustainable community preparedness model. The paper develops a simple yet robust, implementable scale and framework that helps public health authorities and organizations decide when to quarantine a case, direct it to self-isolate, or consider it safe once life begins returning to normal. The framework sustains testing without disrupting people's lives, based on evidence-based selective sampling. The paper recommends adding the sustainable traceability framework to post-surveillance strategy as an active case-finding technique. Its main implication is that it raises the community's competence in mitigating the risks of a virus tsunami such as COVID-19 and reduces future vulnerability to new outbreaks. The paper concludes with limitations and recommendations for future research.
- Published
- 2020
- Full Text
- View/download PDF
37. Estudio etnográfico de la bebida tradicional Zende en la comunidad San Lucas, Amanalco Estado de México
- Author
-
Paul Misael Garza-López, Francisco Joaquín Villafaña-Rivera, Josefa Espitia-López, and Abel Efraín Peña-Hernández
- Subjects
Mixed approach, Geography, Ethnography, Selective sampling, Ethnic group, Food practices, Ethnology, Consumption (sociology)
Objective: to carry out an ethnographic study of the consumption of the traditional beverage zende. Methodology: using a mixed approach, a structured survey with qualitative and quantitative variables was designed and administered, through selective sampling, to two hundred and eighteen inhabitants of the San Lucas community in the municipality of Amanalco de Becerra, Estado de México. Results: they show that zende is a gastronomic element of the region because of how it is prepared and its connection with religious and civil festivities, whose traditions and cultural practices have changed, losing presence in the community to new ways of eating. Limitations: there is not enough information on traditional Mexican beverages. Conclusions: Mexican cuisine is a social expression shaped by the ethnic groups settled across the Mexican republic; ethnographic studies identify lifestyles expressed in food practices, one example being zende, a pre-Hispanic beverage made from fermented blue maize and originating in the Otomí culture.
- Published
- 2021
- Full Text
- View/download PDF
38. Perceptual image hashing with selective sampling for salient structure features.
- Author
-
Qin, Chuan, Chen, Xueqin, Dong, Jing, and Zhang, Xinpeng
- Subjects
- *HASHING, *IMAGE retrieval, *MATHEMATICAL regularization, *DIMENSION reduction (Statistics), *ROBUST control
- Abstract
In this paper, a robust and secure image hashing scheme based on salient structure features is proposed, which can be applied in image authentication and retrieval. In order to acquire a fixed-length image hash, pre-processing for image regularization is first conducted on the input image. Salient edge detection is then applied on the secondary image, and a series of non-overlapping blocks containing the richest structural information in the secondary image are selectively sampled according to the edge binary map. Dominant DCT coefficients of the sampled blocks, together with their corresponding position information, are retrieved as the robust features. After compression with dimensionality reduction of the concatenated features, the final hash can be produced. Experimental results show that the proposed scheme achieves better perceptual robustness and discrimination than some state-of-the-art schemes. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
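A rough sketch of the block-selection step described above: rank non-overlapping blocks by edge content and keep low-frequency DCT coefficients (plus block positions) of the richest blocks as hash features. The edge detector, block size, and coefficient count are illustrative assumptions, not the paper's settings:

```python
# Edge-driven block sampling followed by DCT feature extraction.
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(0)
img = rng.random((256, 256))                      # stand-in for a regularized grayscale image
B, K, N_BLOCKS = 32, 4, 8                         # block size, DCT coeffs kept per axis, blocks kept

gy, gx = np.gradient(img)
edges = np.hypot(gx, gy)                          # simple edge-strength map

blocks = []
for r in range(0, 256, B):
    for c in range(0, 256, B):
        score = edges[r:r+B, c:c+B].sum()
        blocks.append((score, r, c))

features = []
for score, r, c in sorted(blocks, reverse=True)[:N_BLOCKS]:
    coeffs = dctn(img[r:r+B, c:c+B], norm="ortho")
    features.extend(coeffs[:K, :K].ravel())       # dominant low-frequency DCT terms
    features.extend((r, c))                       # block position information

print("hash feature length:", len(features))
```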
39. Fast Convolutional Neural Network Training Using Selective Data Sampling: Application to Hemorrhage Detection in Color Fundus Images.
- Author
-
van Grinsven, Mark J. J. P., van Ginneken, Bram, Hoyng, Carel B., Theelen, Thomas, and Sanchez, Clara I.
- Subjects
- *HEMORRHAGE diagnosis, *FUNDUS oculi, *ARTIFICIAL neural networks, *EYE color, *COMPUTER vision, *DEEP learning
- Abstract
Convolutional neural networks (CNNs) are deep learning network architectures that have pushed forward the state-of-the-art in a range of computer vision applications and are increasingly popular in medical image analysis. However, training of CNNs is time-consuming and challenging. In medical image analysis tasks, the majority of training examples are easy to classify and therefore contribute little to the CNN learning process. In this paper, we propose a method to improve and speed up CNN training for medical image analysis tasks by dynamically selecting misclassified negative samples during training. Training samples are heuristically sampled based on classification by the current state of the CNN. Weights are assigned to the training samples and informative samples are more likely to be included in the next CNN training iteration. We evaluated and compared our proposed method by training a CNN with (SeS) and without (NSeS) the selective sampling method. We focus on the detection of hemorrhages in color fundus images. Training time decreased from 170 epochs to 60 epochs while performance increased to a level on par with two human experts, with areas under the receiver operating characteristic curve of 0.894 and 0.972 on two data sets. The SeS CNN statistically outperformed the NSeS CNN on an independent test set. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
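A schematic, framework-free version of the selective sampling (SeS) idea above: negatives the current model misclassifies most strongly are more likely to be drawn for the next training round. A logistic model stands in for the CNN, and all sizes and names are illustrative:

```python
# Dynamically re-weighted sampling of negatives between training rounds.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]

model = LogisticRegression(max_iter=1000)
sampled_neg = rng.choice(neg, size=len(pos), replace=False)   # start uniformly

for round_ in range(5):
    idx = np.concatenate([pos, sampled_neg])
    model.fit(X[idx], y[idx])
    # Weight every negative by how "positive" the model currently thinks it is.
    w = model.predict_proba(X[neg])[:, 1] + 1e-6
    w /= w.sum()
    sampled_neg = rng.choice(neg, size=len(pos), replace=False, p=w)
    print(f"round {round_}: mean P(positive) of chosen negatives "
          f"{model.predict_proba(X[sampled_neg])[:, 1].mean():.3f}")
```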
40. Evaluation of a mercapto-functionalized silica binding phase for the selective sampling of Se(IV) by Diffusive Gradients in Thin films
- Author
-
Stéphane Simon, Anne-Lise Pommier, Rémy Buzier, Gilles Guibaud, Univ Limoges, PEIRENE, Equipe Dev Indicateurs Previs Qual Eaux, URA IRSTEA, 123 Ave Albert Thomas, F-87060 Limoges, France, and Region Auvergne-Rhone-Alpes, Region Bourgogne-Franche-Comte, Region Hauts-de-France, Region Nouvelle-Aquitaine, EDF company
- Subjects
Inorganic chemistry, Selective sampling, chemistry.chemical_element, 02 engineering and technology, 01 natural sciences, Analytical Chemistry, chemistry.chemical_compound, [CHIM.ANAL]Chemical Sciences/Analytical chemistry, Phase (matter), chemistry.chemical_classification, Silica gel, Selenium speciation, 010401 analytical chemistry, equipment and supplies, DGT, 021001 nanoscience & nanotechnology, Diffusive gradients in thin films, 0104 chemical sciences, surgical procedures, operative, Passive sampling, chemistry, Ionic strength, cardiovascular system, Thiol, Solid phases, 0210 nano-technology, Selenium
This study evaluates binding discs based on 3-mercaptopropyl-functionalized silica gel for the selective sampling of selenite (Se(IV)) using the Diffusive Gradients in Thin films sampler (DGT). Se(IV) accumulation was quantitative and selective over Se(VI) and followed the theoretical linear accumulation with exposure time up to 0.7 pg. The sampling was not affected by ionic strength variations down to 10^-2 mol L^-1 (as NaNO3), but Se(IV) accumulation was found to decrease significantly for pH greater than 5 and was nearly zero at pH 9. Both the limited accumulation range and the pH dependence were unexpected because they had not been reported in the literature on Se(IV) trapping by thiol-based solid phases. Our experiments showed that after Se(IV) was bound to thiol functional groups, a further pH-dependent reaction occurred with free thiols, resulting in the reduction of Se(IV) to elemental selenium (Se) followed by its release and back-diffusion through the DGT sampler. Unfortunately, such reversible accumulation is incompatible with the implementation of the mercapto-functionalized silica binding phase in DGT devices for the selective sampling of Se(IV).
- Published
- 2019
- Full Text
- View/download PDF
41. Application of machine learning-based selective sampling to determine BaZrO3 grain boundary structures
- Author
-
Ole Martin Løvvik, Tarjei Bondevik, and Akihide Kuwabara
- Subjects
General Computer Science, Selective sampling, General Physics and Astronomy, 02 engineering and technology, 010402 general chemistry, Machine learning, computer.software_genre, 01 natural sciences, symbols.namesake, Dimension (vector space), Brute force, General Materials Science, Gaussian process, Bayesian optimization, Mathematics, business.industry, Sampling (statistics), Statistical model, Grain boundary structure, General Chemistry, 021001 nanoscience & nanotechnology, 0104 chemical sciences, Computational Mathematics, Mechanics of Materials, Density functional theory, symbols, Grain boundary, Artificial intelligence, Gaussian Process, 0210 nano-technology, business, computer
A selective sampling procedure is applied to reduce the number of density functional theory calculations needed to find energetically favorable grain boundary structures. The procedure is based on a machine learning algorithm involving a Gaussian process, and uses statistical modelling to map the energies of all the grain boundaries. Using the procedure, energetically favorable grain boundaries in BaZrO3 are identified at up to 85% lower computational cost than the brute-force alternative of calculating all possible structures. Furthermore, our results suggest that a grid size of 0.3 Å in each dimension is sufficient when creating grain boundary structures using such sampling procedures.
- Published
- 2019
- Full Text
- View/download PDF
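A toy version of the Gaussian-process selective sampling loop described above, in the spirit of Bayesian optimization: fit a GP to the energies computed so far and next evaluate the candidate with the lowest lower-confidence bound, instead of brute-forcing every structure. The synthetic energy function stands in for a DFT calculation, and nothing here reproduces the paper's actual setup:

```python
# GP-driven selective sampling over a 1-D grid of candidate "structures".
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
candidates = np.linspace(0, 1, 200).reshape(-1, 1)      # stand-in structure grid
energy = lambda x: np.sin(6 * x[:, 0]) + 0.1 * rng.normal(size=len(x))

evaluated = list(rng.choice(len(candidates), size=5, replace=False))
E = list(energy(candidates[evaluated]))

for step in range(15):                                   # 20 evaluations total vs 200 brute force
    gp = GaussianProcessRegressor(kernel=RBF(0.1), normalize_y=True).fit(
        candidates[evaluated], E)
    mu, sigma = gp.predict(candidates, return_std=True)
    lcb = mu - 2.0 * sigma                               # favor low-energy, uncertain regions
    lcb[evaluated] = np.inf                              # do not re-evaluate
    nxt = int(np.argmin(lcb))
    evaluated.append(nxt)
    E.append(float(energy(candidates[[nxt]])[0]))

print("best energy found:", min(E), "after", len(evaluated), "evaluations")
```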
42. Minimax Analysis of Active Learning.
- Author
-
Hanneke, Steve and Liu Yang
- Subjects
- *ACTIVE learning, *CRITICAL literacy, *INTERACTIVE learning, *FACILITATED learning, *COGNITIVE structures
- Abstract
This work establishes distribution-free upper and lower bounds on the minimax label complexity of active learning with general hypothesis classes, under various noise models. The results reveal a number of surprising facts. In particular, under the noise model of Tsybakov (2004), the minimax label complexity of active learning with a VC class is always asymptotically smaller than that of passive learning, and is typically significantly smaller than the best previously-published upper bounds in the active learning literature. In high-noise regimes, it turns out that all active learning problems of a given VC dimension have roughly the same minimax label complexity, which contrasts with well-known results for bounded noise. In low-noise regimes, we find that the label complexity is well-characterized by a simple combinatorial complexity measure we call the star number. Interestingly, we find that almost all of the complexity measures previously explored in the active learning literature have worst-case values exactly equal to the star number. We also propose new active learning strategies that nearly achieve these minimax label complexities. [ABSTRACT FROM AUTHOR]
- Published
- 2015
43. Update vs. upgrade: Modeling with indeterminate multi-class active learning.
- Author
-
Zhang, Xiao-Yu, Wang, Shupeng, Zhu, Xiaobin, Yun, Xiaochun, Wu, Guangjun, and Wang, Yipeng
- Subjects
- *ACTIVE learning, *STATISTICAL sampling, *ALGORITHMS, *CLASSIFICATION, *SET theory
- Abstract
This paper brings up a very important issue for active learning in practice. Traditional active learning mechanism is based on the assumption that the number of classes happens to be known in advance, and thus selective sampling is confined to the determinate model. However, as is the case for many applications, the model class is usually indeterminate and there is every chance that the hypothesis itself is inappropriate. To address this problem, we propose a novel indeterminate multi-class active learning algorithm, which comprehensively evaluates the instance based on both the value in refining the existing model and the potential in triggering model rectification. In this way, balance is effectively achieved between model update and model upgrade. Advantage of the proposed algorithm is demonstrated by experiments of classification tasks on both synthetic and real-world dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
44. A Compression Technique for Analyzing Disagreement-Based Active Learning.
- Author
-
Wiener, Yair, Hanneke, Steve, and El-Yaniv, Ran
- Subjects
- *DATA compression, *ACTIVE learning, *COMPUTATIONAL complexity, *GAUSSIAN processes, *MACHINE learning
- Abstract
We introduce a new and improved characterization of the label complexity of disagreement-based active learning, in which the leading quantity is the version space compression set size. This quantity is defined as the size of the smallest subset of the training data that induces the same version space. We show various applications of the new characterization, including a tight analysis of CAL and refined label complexity bounds for linear separators under mixtures of Gaussians and axis-aligned rectangles under product densities. The version space compression set size, as well as the new characterization of the label complexity, can be naturally extended to agnostic learning problems, for which we show new speedup results for two well known active learning algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2015
45. An Energy-Efficient Adaptive Sensing Framework for Gait Monitoring Using Smart Insole.
- Author
-
Yingxiao Wu, Wenyao Xu, Liu, Jason J., Ming-Chun Huang, Shuang Luan, and Yuju Lee
- Abstract
Gait analysis is an important process to gauge human motion. Recently, longitudinal gait analysis received much attention from the medical and healthcare domains. The challenge in studies over extended time periods is the battery life. Due to the continuous sensing and computing, wearable gait devices cannot fulfill a full-day work schedule. In this paper, we present an energy-efficient adaptive sensing framework to address this problem. Through presampling for content understanding, a selective sensing and sparsity-based signal reconstruction method is proposed. In particular, we develop and implement the new sensing scheme in a smart insole system to reduce the number of samples, while still preserving the information integrity of gait parameters. Experimental results show the effectiveness of our method in data point reduction. Our proposed method improves the battery life to 10.47 h, while normalized mean square error is within 10%. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
46. Selectively Sampled Subharmonic-Free Digital Current Mode Control Using Direct Duty Control.
- Author
-
Kapat, Santanu
- Abstract
Benefits of digital current mode control are often limited by the choice of a current-loop sampling rate. A higher rate requires a fast analog-to-digital converter that consumes substantial power and increases cost. A lower rate often results in subharmonic oscillations, even using a programmable ramp compensation. This brief proposes a simple technique to compute the steady-state duty ratio in real time, when the closed-loop controller is in action. A time-to-digital converter translates cycle-by-cycle duty ratio information into digital code, and a “duty ratio computation” block generates the computed duty ratio using a moving average filter. At steady state, this enforces a virtual open-loop configuration and completely disables current-loop sampling and controller computation, thereby saving substantial power and eliminating subharmonic oscillations. Considering a dc–dc buck converter as the test case, it is found that even in the presence of high periodic behavior under the closed-loop control, a near-ideal steady-state duty ratio can be reconstructed. Design-related issues along with duty ratio saturation are discussed with test cases. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
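A minimal numeric sketch of the "duty ratio computation" block described above: a moving average over recent cycle-by-cycle duty ratios approximates the steady-state duty. The window length and the simulated duty sequence are illustrative assumptions:

```python
# Moving-average estimate of the steady-state duty ratio from cycle-by-cycle
# measurements such as a time-to-digital converter might report.
import numpy as np

rng = np.random.default_rng(0)
N = 64                                           # moving-average window (cycles)
true_duty = 0.42
cycles = np.arange(1000)
# Steady-state duty plus periodic ripple and quantization-like noise.
d_measured = true_duty + 0.03 * np.sin(2 * np.pi * cycles / 7) \
             + rng.normal(scale=0.002, size=cycles.size)

kernel = np.ones(N) / N
d_computed = np.convolve(d_measured, kernel, mode="valid")
print(f"computed steady-state duty: {d_computed[-1]:.4f} (true {true_duty})")
```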
47. Improving Relevance Feedback for Image Retrieval with Asymmetric Sampling.
- Author
-
Niu, Biao, Cheng, Jian, and Lu, Hanqing
- Abstract
Relevance feedback is a quite effective approach to improving performance in image retrieval. Recently, active learning methods have attracted much attention due to their capability of alleviating the burden of labeling in relevance feedback. However, most traditional studies focus on single-sample selection in each feedback round, which incurs a heavy computational cost in practice. In this paper, we present a novel batch-mode active learning method for informative sample selection. Inspired by graph propagation, we consider the certainty of labels as asymmetric propagation information on a graph, and formulate the correlation between labeled and unlabeled samples in a unified scheme. Extensive experiments on publicly available data sets show that the proposed method is promising. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
48. On the automatic identification of difficult examples for beat tracking: Towards building new evaluation datasets.
- Author
-
Holzapfel, A., Davies, M. E. P., Zapata, J. R., Oliveira, J. L., and Gouyon, F.
- Abstract
In this paper, an approach is presented that identifies music samples which are difficult for current state-of-the-art beat trackers. In order to estimate this difficulty even for examples without ground truth, a method motivated by selective sampling is applied. This method assigns a degree of difficulty to a sample based on the mutual disagreement between the output of various beat tracking systems. On a large beat annotated dataset we show that this mutual agreement is correlated with the mean performance of the beat trackers evaluated against the ground truth, and hence can be used to identify difficult examples by predicting poor beat tracking performance. Towards the aim of advancing future beat tracking systems, we demonstrate how our method can be used to form new datasets containing a high proportion of challenging music examples. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
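A sketch of the mutual-agreement idea above: score a music excerpt by the average pairwise agreement between the beat sequences of several trackers, and flag low-agreement excerpts as difficult. The tolerance-window agreement used here is a simple stand-in for the evaluation measures in the paper:

```python
# Mean mutual agreement between the outputs of several beat trackers.
import itertools
import numpy as np

def agreement(beats_a, beats_b, tol=0.07):
    """Fraction of beats in one sequence matched within +/- tol seconds by the
    other, symmetrized (a crude F-measure-style score)."""
    def hit_rate(a, b):
        return np.mean([np.min(np.abs(b - t)) <= tol for t in a])
    return 0.5 * (hit_rate(beats_a, beats_b) + hit_rate(beats_b, beats_a))

# Beat estimates (seconds) from three hypothetical trackers on one excerpt.
outputs = [np.arange(0.0, 10.0, 0.50),
           np.arange(0.05, 10.0, 0.50),
           np.arange(0.0, 10.0, 0.75)]

scores = [agreement(a, b) for a, b in itertools.combinations(outputs, 2)]
mutual = float(np.mean(scores))
print(f"mean mutual agreement: {mutual:.2f}"
      + ("  -> likely difficult" if mutual < 0.6 else ""))
```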
49. Moving Virtual Boundary strategy for selective sampling.
- Author
-
Xiaoyu Zhang and Cheng, Jian
- Abstract
In the relevance feedback of information retrieval systems, selective sampling is often used to alleviate the burden of labeling by selecting only the most informative data to label. The traditional batch labeling model neglects the correlation among the data and thus degrades performance, while the theoretically optimal one-by-one training model is not efficient enough because of its high computational complexity. In this paper, we propose a Moving Virtual Boundary (MVB) strategy for informative data selection. We adopt a novel one-by-one labeling model, using the previously labeled data as extra guidance for the selection of the next, and achieve better experimental results. [ABSTRACT FROM PUBLISHER]
- Published
- 2011
- Full Text
- View/download PDF
50. Evaluation of Copper-1,3,5-benzenetricarboxylate Metal-organic Framework (Cu-MOF) as a Selective Sorbent for Lewis-base Analytes
- Author
-
Thallapally, Praveen
- Published
- 2011
- Full Text
- View/download PDF