19 results for "Buscaglia, A."
Search Results
2. Why do latent fingerprint examiners differ in their conclusions?
- Author
- Hicklin, R. Austin, Ulery, Bradford T., Ausdemore, Madeline, and Buscaglia, JoAnn
- Published
- 2020
- Full Text
- View/download PDF
3. Factors associated with latent fingerprint exclusion determinations
- Author
- Ulery, Bradford T., Hicklin, R. Austin, Roberts, Maria Antonia, and Buscaglia, JoAnn
- Published
- 2017
- Full Text
- View/download PDF
4. Microscopical characterization of known postmortem root bands using light and scanning electron microscopy
- Author
- Hietpas, Jack, Buscaglia, JoAnn, Richard, Adam H., Shaw, Stephen, Castillo, Hilda S., and Donfack, Joseph
- Published
- 2016
- Full Text
- View/download PDF
5. Interexaminer variation of minutia markup on latent fingerprints
- Author
- Ulery, Bradford T., Hicklin, R. Austin, Roberts, Maria Antonia, and Buscaglia, JoAnn
- Published
- 2016
- Full Text
- View/download PDF
6. Changes in latent fingerprint examiners’ markup between analysis and comparison
- Author
- Ulery, Bradford T., Hicklin, R. Austin, Roberts, Maria Antonia, and Buscaglia, JoAnn
- Published
- 2015
- Full Text
- View/download PDF
7. A novel approach for latent print identification using accurate overlays to prioritize reference prints
- Author
- Gantz, Daniel T., Gantz, Donald T., Walch, Mark A., Roberts, Maria Antonia, and Buscaglia, JoAnn
- Published
- 2014
- Full Text
- View/download PDF
8. Understanding the sufficiency of information for latent fingerprint value determinations
- Author
- Ulery, Bradford T., Hicklin, R. Austin, Kiebuzinski, George I., Roberts, Maria Antonia, and Buscaglia, JoAnn
- Published
- 2013
- Full Text
- View/download PDF
9. Assessing the clarity of friction ridge impressions
- Author
- Hicklin, R. Austin, Buscaglia, JoAnn, and Roberts, Maria Antonia
- Published
- 2013
- Full Text
- View/download PDF
10. Score-based likelihood ratios for handwriting evidence
- Author
- Hepler, Amanda B., Saunders, Christopher P., Davis, Linda J., and Buscaglia, JoAnn
- Published
- 2012
- Full Text
- View/download PDF
11. Using subsampling to estimate the strength of handwriting evidence via score-based likelihood ratios
- Author
- Davis, Linda J., Saunders, Christopher P., Hepler, Amanda, and Buscaglia, JoAnn
- Published
- 2012
- Full Text
- View/download PDF
12. Factors associated with latent fingerprint exclusion determinations
- Author
- R. Austin Hicklin, Maria Antonia Roberts, Bradford T. Ulery, and JoAnn Buscaglia
- Subjects
- Quality Control, Decision Making, Latent fingerprint, Pathology and Forensic Medicine, Terminology, Predictive Value of Tests, Humans, Medicine, Dermatoglyphics, Reliability (statistics), Statistical Models, Law, Quality assurance
- Abstract
Exclusion is the determination by a latent print examiner that two friction ridge impressions did not originate from the same source. The concept and terminology of exclusion vary among agencies. Much of the literature on latent print examination focuses on individualization, and much less attention has been paid to exclusion. This experimental study assesses the associations between a variety of factors and exclusion determinations. Although erroneous exclusions are more likely to occur on some images and for some examiners, they were widely distributed among images and examiners. Measurable factors found to be associated with exclusion rates include the quality of the latent, value determinations, analysis minutia count, comparison difficulty, and the presence of cores or deltas. An understanding of these associations will help explain the circumstances under which errors are more likely to occur and when determinations are less likely to be reproduced by other examiners; the results should also lead to improved effectiveness and efficiency of training and casework quality assurance. This research is intended to assist examiners in improving the examination process and provide information to the broader community regarding the accuracy, reliability, and implications of exclusion decisions.
- Published
- 2017
- Full Text
- View/download PDF
13. Interexaminer variation of minutia markup on latent fingerprints
- Author
- Maria Antonia Roberts, R. Austin Hicklin, Bradford T. Ulery, and JoAnn Buscaglia
- Subjects
- Observer Variation, Minutiae, Markup language, Biometrics, Test procedures, Reproducibility of Results, Pathology and Forensic Medicine, Statistics, Humans, Medicine, Dermatoglyphics, Law
- Abstract
Latent print examiners often differ in the number of minutiae they mark during analysis of a latent, and also during comparison of a latent with an exemplar. Differences in minutia counts understate interexaminer variability: examiners' markups may have similar minutia counts but differ greatly in which specific minutiae were marked. We assessed variability in minutia markup among 170 volunteer latent print examiners. Each provided detailed markup documenting their examinations of 22 latent-exemplar pairs of prints randomly assigned from a pool of 320 pairs. An average of 12 examiners marked each latent. The primary factors associated with minutia reproducibility were clarity, which regions of the prints examiners chose to mark, and agreement on value or comparison determinations. In clear areas (where the examiner was "certain of the location, presence, and absence of all minutiae"), median reproducibility was 82%; in unclear areas, median reproducibility was 46%. Differing interpretations regarding which regions should be marked (e.g., when there is ambiguity in the continuity of a print) contributed to variability in minutia markup: especially in unclear areas, marked minutiae were often far from the nearest minutia marked by a majority of examiners. Low reproducibility was also associated with differences in value or comparison determinations. Lack of standardization in minutia markup and unfamiliarity with test procedures presumably contribute to the variability we observed. We have identified factors accounting for interexaminer variability; implementing standards for detailed markup as part of documentation and focusing future training efforts on these factors may help to facilitate transparency and reduce subjectivity in the examination process.
- Published
- 2016
- Full Text
- View/download PDF
14. A novel approach for latent print identification using accurate overlays to prioritize reference prints
- Author
- Donald T. Gantz, JoAnn Buscaglia, Maria Antonia Roberts, Mark A. Walch, and Daniel Thomas Gantz
- Subjects
- Minutiae, Matching (statistics), Computer science, Orientation (computer vision), Fingerprint (computing), Pattern recognition, Automated Pattern Recognition, Mathematical Concepts, Identification (information), Ranking, Humans, NIST, Artificial intelligence, Dermatoglyphics, Law
- Abstract
A novel approach to automated fingerprint matching and scoring that produces accurate locally and nonlinearly adjusted overlays of a latent print onto each reference print in a corpus is described. The technology, which addresses challenges inherent to latent prints, provides the latent print examiner with a prioritized ranking of candidate reference prints based on the overlays of the latent onto each candidate print. In addition to supporting current latent print comparison practices, this approach can make it possible to return a greater number of AFIS candidate prints because the ranked overlays provide a substantial starting point for latent-to-reference print comparison. To provide the image information required to create an accurate overlay of a latent print onto a reference print, "Ridge-Specific Markers" (RSMs), which correspond to short continuous segments of a ridge or furrow, are introduced. RSMs are reliably associated with any specific local section of a ridge or a furrow using the geometric information available from the image. Latent prints are commonly fragmentary, with reduced clarity and limited minutiae (i.e., ridge endings and bifurcations). Even in the absence of traditional minutiae, latent prints contain very important information in their ridges that permit automated matching using RSMs. No print orientation or information beyond the RSMs is required to generate the overlays. This automated process is applied to the 88 good quality latent prints in the NIST Special Database (SD) 27. Nonlinear overlays of each latent were produced onto all of the 88 reference prints in the NIST SD27. With fully automated processing, the true mate reference prints were ranked in the first candidate position for 80.7% of the latents tested, and 89.8% of the true mate reference prints ranked in the top ten positions. 
After manual post-processing of those latents for which the true mate reference print was not ranked first, these frequencies increased to 90.9% (1st rank) and 96.6% (top ten), respectively. Because the computational process is highly parallelizable, it is feasible for this method to work with a reference corpus of several thousand prints.
- Published
- 2014
- Full Text
- View/download PDF
15. Understanding the sufficiency of information for latent fingerprint value determinations
- Author
- Bradford T. Ulery, JoAnn Buscaglia, George Ihor Kiebuzinski, R. Austin Hicklin, and Maria Antonia Roberts
- Subjects
- Reproducibility of Results, Latent fingerprint, Pathology and Forensic Medicine, Logistic Models, ROC Curve, Statistics, Clarity, Humans, Medicine, Dermatoglyphics, Law
- Abstract
A latent print examiner's assessment of the value, or suitability, of a latent impression is the process of determining whether the impression has sufficient information to make a comparison. A "no value" determination preemptively states that no individualization or exclusion determination could be made using the impression, regardless of quality of the comparison prints. Factors contributing to a value determination include clarity and the types, quantity, and relationships of features. These assessments are made subjectively by individual examiners and may vary among examiners. We modeled the relationships between value determinations and feature annotations made by 21 certified latent print examiners on 1850 latent impressions. Minutia count was strongly associated with value determinations. None of the models resulted in a stronger intraexaminer association with "value for individualization" determinations than minutia count alone. The association between examiner annotation and value determinations is greatly limited by the lack of reproducibility of both annotation and value determinations.
- Published
- 2013
- Full Text
- View/download PDF
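The abstract above reports a strong association between minutia count and "value for individualization" (VID) determinations, modeled with logistic relationships. As a purely illustrative sketch, not the paper's fitted model, a logistic curve relating minutia count to the probability of a VID determination could look like this (the coefficients `b0` and `b1` are invented for illustration only):

```python
import math

def p_vid(minutia_count, b0=-4.0, b1=0.5):
    """Illustrative logistic model: probability of a 'value for
    individualization' determination given the analysis minutia count.
    Coefficients are hypothetical, not estimated from the study's data."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * minutia_count)))

for n in (2, 8, 14):
    print(f"{n:2d} minutiae -> P(VID) = {p_vid(n):.2f}")
```

The sketch only conveys the qualitative finding: as the minutia count grows, a VID determination becomes more likely, with a steep transition region where examiners are most likely to disagree.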
16. Assessing the clarity of friction ridge impressions
- Author
- JoAnn Buscaglia, Maria Antonia Roberts, and R. Austin Hicklin
- Subjects
- Computer science, Image processing, Pathology and Forensic Medicine, Software, Fingerprint, Computer-Assisted Image Processing, Image Enhancement, Clarity, Humans, Dermatoglyphics, Law, Quality assurance, Algorithms
- Abstract
The ability of friction ridge examiners to correctly discern and make use of the ridges and associated features in finger or palm impressions is limited by clarity. The clarity of an impression relates to the examiner's confidence that the presence, absence, and attributes of features can be correctly discerned. Despite the importance of clarity in the examination process, there have not previously been standard methods for assessing clarity in friction ridge impressions. We introduce a process for annotation, analysis, and interchange of friction ridge clarity information that can be applied to latent or exemplar impressions. This paper: (1) describes a method for evaluating the clarity of friction ridge impressions by using color-coded annotations that can be used by examiners or automated systems; (2) discusses algorithms for overall clarity metrics based on manual or automated clarity annotation; and (3) defines a method of quantifying the correspondence of clarity when comparing a pair of friction ridge images, based on clarity annotation and resulting metrics. Different uses of this approach include examiner interchange of data, quality assurance, metrics, and as an aid in automated fingerprint matching.
- Published
- 2013
- Full Text
- View/download PDF
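The clarity-correspondence idea in the abstract above can be sketched as follows. This is a hypothetical toy metric, not the paper's algorithm: each impression's clarity annotation is reduced to an aligned grid of integer clarity levels (here 0 = no ridge information, 5 = fully clear, an assumed scale), and each cell pair is scored by the lower of its two levels, since features can only correspond where both impressions are clear.

```python
def clarity_correspondence(map_a, map_b, max_level=5):
    """Toy correspondence metric for two aligned clarity grids:
    mean pairwise-minimum clarity, scaled to [0, 1]."""
    cells = [min(a, b)
             for row_a, row_b in zip(map_a, map_b)
             for a, b in zip(row_a, row_b)]
    return sum(cells) / (max_level * len(cells))

# Hypothetical 2x2 clarity grids for a latent/exemplar pair.
latent   = [[5, 3], [1, 0]]   # one clear corner, one unusable corner
exemplar = [[5, 5], [4, 2]]
print(clarity_correspondence(latent, exemplar))  # min levels 5,3,1,0 -> 9/20 = 0.45
```

Taking the minimum per cell reflects the paper's motivation for comparing clarity maps: a pristine region in the exemplar cannot compensate for a smudged region in the latent.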
17. Score-based likelihood ratios for handwriting evidence
- Author
- Amanda Hepler, Christopher P. Saunders, JoAnn Buscaglia, and Linda J. Davis
- Subjects
- Handwriting, Likelihood Functions, Computer science, Mathematical Concepts, Pathology and Forensic Medicine, Forensic statistics, Databases as Topic, Statistics, Probability distribution, Humans, Law
- Abstract
Score-based approaches for computing forensic likelihood ratios are becoming more prevalent in the forensic literature. When two items of evidential value are entangled via a score function, several nuances arise when attempting to model the score behavior under the competing source-level propositions. Specific assumptions must be made in order to appropriately model the numerator and denominator probability distributions. This process is fairly straightforward for the numerator of the score-based likelihood ratio, entailing the generation of a database of scores obtained by pairing items of evidence from the same source. However, this process presents ambiguities for the denominator database generation, in particular, how best to generate a database of scores between two items of different sources. Many alternatives have appeared in the literature, three of which we will consider in detail. They differ in their approach to generating denominator databases, by pairing (1) the item of known source with randomly selected items from a relevant database; (2) the item of unknown source with randomly generated items from a relevant database; or (3) two randomly generated items. When the two items differ in type, perhaps one having higher information content, these three alternatives can produce very different denominator databases. While each of these alternatives has appeared in the literature, the decision of how to generate the denominator database is often made without calling attention to the subjective nature of this process. In this paper, we compare each of the three methods (and the resulting score-based likelihood ratios), which can be thought of as three distinct interpretations of the denominator proposition. Our goal in performing these comparisons is to illustrate the effect that subtle modifications of these propositions can have on inferences drawn from the evidence evaluation procedure.
The study was performed using a data set composed of cursive writing samples from over 400 writers. We found that, when provided with the same two items of evidence, the three methods often would lead to differing conclusions (with rates of disagreement ranging from 0.005 to 0.48). Rates of misleading evidence and Tippett plots are both used to characterize the range of behavior for the methods over questioned documents of varying size. The appendix shows that the three score-based likelihood ratios are theoretically very different not only from each other, but also from the likelihood ratio, and as a consequence each displays drastically different behavior.
- Published
- 2012
- Full Text
- View/download PDF
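The abstract above distinguishes three ways of building the denominator score database. The pairing logic of the three alternatives can be sketched as follows; the score function, the background "writer" samples, and all numbers are toy stand-ins, not the paper's features or data:

```python
import random

random.seed(0)

def score(x, y):
    """Toy similarity score: closer samples score higher (max 0)."""
    return -abs(x - y)

# One toy sample per background writer, plus the two items being compared.
background = [random.gauss(i, 1.0) for i in range(10)]
known, questioned = 4.2, 4.5

# (1) pair the KNOWN-source item with background items
denom_1 = [score(known, b) for b in background]
# (2) pair the QUESTIONED (unknown-source) item with background items
denom_2 = [score(questioned, b) for b in background]
# (3) pair background items with each other (known and questioned both excluded)
denom_3 = [score(b1, b2)
           for i, b1 in enumerate(background)
           for b2 in background[i + 1:]]

for name, d in [("known vs background", denom_1),
                ("questioned vs background", denom_2),
                ("background vs background", denom_3)]:
    print(f"{name}: n={len(d)}, mean score={sum(d) / len(d):.2f}")
```

Even this toy version shows why the choice matters: the three databases differ in size and, when the two items differ in information content, in their score distributions, so the resulting denominators (and likelihood ratios) need not agree.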
18. Using subsampling to estimate the strength of handwriting evidence via score-based likelihood ratios
- Author
- JoAnn Buscaglia, Linda J. Davis, Christopher P. Saunders, and Amanda Hepler
- Subjects
- Handwriting, Likelihood Functions, Strength of evidence, Computer science, Forensic Sciences, Statistics, Humans, Law, Algorithms, Pathology and Forensic Medicine
- Abstract
The likelihood ratio paradigm has been studied as a means for quantifying the strength of evidence for a variety of forensic evidence types. Although the concept of a likelihood ratio as a comparison of the plausibility of evidence under two propositions (or hypotheses) is straightforward, a number of issues arise when one considers how to go about estimating a likelihood ratio. In this paper, we illustrate one possible approach to estimating a likelihood ratio in comparative handwriting analysis. The novelty of our proposed approach relies on generating simulated writing samples from a collection of writing samples from a known source to form a database for estimating the distribution associated with the numerator of a likelihood ratio. We illustrate this approach using documents collected from 432 writers under controlled conditions.
- Published
- 2012
- Full Text
- View/download PDF
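The subsampling idea in the abstract above can be sketched like this: repeatedly split a known writer's collection into two disjoint subsets and score one against the other, building an empirical same-source (numerator) score distribution. The one-number-per-sample feature and the score function below are toy assumptions, not the paper's handwriting features:

```python
import random

random.seed(1)

# Toy stand-in: one scalar feature per writing sample from a known writer.
known_writer = [random.gauss(5.0, 0.5) for _ in range(20)]

def score(xs, ys):
    """Toy score: negative distance between subset means (higher = more similar)."""
    return -abs(sum(xs) / len(xs) - sum(ys) / len(ys))

def numerator_scores(samples, n_splits=200, k=5):
    """Estimate the same-source score distribution by subsampling:
    each split draws 2k samples without replacement and scores the
    two disjoint halves against each other."""
    out = []
    for _ in range(n_splits):
        pool = random.sample(samples, 2 * k)
        out.append(score(pool[:k], pool[k:]))
    return out

scores = numerator_scores(known_writer)
print(len(scores), round(sum(scores) / len(scores), 2))
```

Because both halves come from the same writer, the resulting scores approximate the numerator database without requiring additional writing samples, which is the practical appeal of the subsampling approach.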
19. Changes in latent fingerprint examiners' markup between analysis and comparison
- Author
- Bradford T. Ulery, JoAnn Buscaglia, Maria Antonia Roberts, and R. Austin Hicklin
- Subjects
- Minutiae, Quality Control, Markup language, Decision Making, Latent fingerprint, Pathology and Forensic Medicine, Confirmation bias, Humans, Medicine, Dermatoglyphics, Law, Problem Solving
- Abstract
After the initial analysis of a latent print, an examiner will sometimes revise the assessment during comparison with an exemplar. Changes between analysis and comparison may indicate that the initial analysis of the latent was inadequate, or that confirmation bias may have affected the comparison. 170 volunteer latent print examiners, each randomly assigned 22 pairs of prints from a pool of 320 total pairs, provided detailed markup documenting their interpretations of the prints and the bases for their comparison conclusions. We describe changes in value assessments and markup of features and clarity. When examiners individualized, they almost always added or deleted minutiae (90.3% of individualizations); every examiner revised at least some markups. For inconclusive and exclusion determinations, changes were less common, and features were added more frequently when the image pair was mated (same source). Even when individualizations were based on eight or fewer corresponding minutiae, in most cases some of those minutiae had been added during comparison. One erroneous individualization was observed: the markup changes were notably extreme, and almost all of the corresponding minutiae had been added during comparison. Latents assessed to be of value for exclusion only (VEO) during analysis were often individualized when compared to a mated exemplar (26%); in our previous work, where examiners were not required to provide markup of features, VEO individualizations were much less common (1.8%).
- Published
- 2014
Discovery Service for Jio Institute Digital Library