1,039 results for "perceptual similarity"
Search Results
2. Children (and Many Adults) Use Perceptual Similarity to Assess Relative Impossibility.
- Author
-
Tipper, Zoe, Kim, Terryn, and Friedman, Ori
- Subjects
- *
RESEARCH funding , *MAGIC , *VISUAL perception in children , *CHI-squared test , *DESCRIPTIVE statistics , *AGE distribution , *CHILD development , *VISUAL perception , *ATTRIBUTION (Social psychology) , *JUDGMENT (Psychology) , *CONCEPTS , *CONFIDENCE intervals - Abstract
People see some impossible events as more impossible than others. For example, walking through a solid wall seems more impossible if it is made of stone rather than wood. Across four experiments, we investigated how children and adults assess the relative impossibility of events, contrasting two kinds of information they may use: perceptual information and causal knowledge. In each experiment, participants were told about a wizard who could magically transform target objects into other things. Participants then assessed which of the two transformation spells would be easier or harder, a spell transforming a target object into a perceptual match (i.e., a similar-looking thing) or one transforming it into a causal match (e.g., an item made of similar materials). In Experiments 1–3, children aged 4–7 mainly thought that transformations into the perceptual match would be easier, though this tendency varied with age. Adults were overall split when choosing which spell would be easier. In Experiment 1, this was because of variations in their judgments across different pairs of spells; in Experiments 2 and 4, the split resulted because different subsets of adults preferred either the perceptual or causal match. Overall, these findings show that children, and many adults, use perceptual reasoning to assess relative impossibility. Public Significance Statement: Besides differentiating possible events from ones that are impossible, people see some events as more impossible than others. We investigated how children and adults make these judgments by asking them about magical transformations. One way that people might assess the relative impossibility of magic is by drawing on their causal knowledge. However, we find that young children, and even many adults, instead make these judgments using shallow perceptual information. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. ACSwinNet: A Deep Learning-Based Rigid Registration Method for Head-Neck CT-CBCT Images in Image-Guided Radiotherapy.
- Author
-
Peng, Kuankuan, Zhou, Danyu, Sun, Kaiwen, Wang, Junfeng, Deng, Jianchun, and Gong, Shihua
- Subjects
- *
CONE beam computed tomography , *TRANSFORMER models , *IMAGE-guided radiation therapy , *COMPUTED tomography , *NECK tumors - Abstract
Accurate and precise rigid registration between head-neck computed tomography (CT) and cone-beam computed tomography (CBCT) images is crucial for correcting setup errors in image-guided radiotherapy (IGRT) for head and neck tumors. However, conventional registration methods that treat the head and neck as a single entity may not achieve the necessary accuracy for the head region, which is particularly sensitive to radiation in radiotherapy. We propose ACSwinNet, a deep learning-based method for head-neck CT-CBCT rigid registration, which aims to enhance the registration precision in the head region. Our approach integrates an anatomical constraint encoder with anatomical segmentations of tissues and organs to enhance the accuracy of rigid registration in the head region. We also employ a Swin Transformer-based network for registration in cases with large initial misalignment and a perceptual similarity metric network to address intensity discrepancies and artifacts between the CT and CBCT images. We validate the proposed method using a head-neck CT-CBCT dataset acquired from clinical patients. Compared with the conventional rigid method, our method exhibits lower target registration error (TRE) for landmarks in the head region (reduced from 2.14 ± 0.45 mm to 1.82 ± 0.39 mm), higher dice similarity coefficient (DSC) (increased from 0.743 ± 0.051 to 0.755 ± 0.053), and higher structural similarity index (increased from 0.854 ± 0.044 to 0.870 ± 0.043). Our proposed method effectively addresses the challenge of low registration accuracy in the head region, which has been a limitation of conventional methods. This demonstrates significant potential in improving the accuracy of IGRT for head and neck tumors. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
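The two headline metrics in this abstract, target registration error (TRE) and the Dice similarity coefficient (DSC), can be sketched in a few lines of NumPy. The landmarks and masks below are invented toy values, not the study's clinical data:

```python
import numpy as np

def target_registration_error(landmarks_fixed, landmarks_moved):
    """Mean Euclidean distance (e.g. in mm) between corresponding landmarks."""
    d = np.linalg.norm(landmarks_fixed - landmarks_moved, axis=1)
    return d.mean()

def dice_similarity(mask_a, mask_b):
    """Dice coefficient between two binary segmentation masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

# toy example: two 3-D landmarks, each displaced by 1 mm
fixed = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
moved = np.array([[1.0, 0.0, 0.0], [10.0, 1.0, 0.0]])
print(round(target_registration_error(fixed, moved), 3))  # 1.0

# toy masks: 8 voxels each, 4 in common
a = np.zeros((4, 4), bool); a[:2] = True
b = np.zeros((4, 4), bool); b[1:3] = True
print(dice_similarity(a, b))  # 0.5
```

A lower TRE and higher DSC, as reported for ACSwinNet, both indicate tighter head-region alignment.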
4. Autoencoder-Based Unsupervised Surface Defect Detection Using Two-Stage Training.
- Author
-
Getachew Shiferaw, Tesfaye and Yao, Li
- Subjects
SURFACE defects ,RECEIVER operating characteristic curves ,MODEL railroads - Abstract
Accurately detecting defects while reconstructing a high-quality normal background in surface defect detection using unsupervised methods remains a significant challenge. This study proposes an unsupervised method that effectively addresses this challenge by achieving both accurate defect detection and a high-quality normal background reconstruction without noise. We propose an adaptive weighted structural similarity (AW-SSIM) loss for focused feature learning. AW-SSIM improves structural similarity (SSIM) loss by assigning different weights to its sub-functions of luminance, contrast, and structure based on their relative importance for a specific training sample. Moreover, it dynamically adjusts the Gaussian window's standard deviation (σ) during loss calculation to balance noise reduction and detail preservation. An artificial defect generation algorithm (ADGA) is proposed to generate an artificial defect closely resembling real ones. We use a two-stage training strategy. In the first stage, the model trains only on normal samples using AW-SSIM loss, allowing it to learn robust representations of normal features. In the second stage of training, the weights obtained from the first stage are used to train the model on both normal and artificially defective training samples. Additionally, the second stage employs a combined learned Perceptual Image Patch Similarity (LPIPS) and AW-SSIM loss. The combined loss helps the model in achieving high-quality normal background reconstruction while maintaining accurate defect detection. Extensive experimental results demonstrate that our proposed method achieves a state-of-the-art defect detection accuracy. The proposed method achieved an average area under the receiver operating characteristic curve (AuROC) of 97.69% on six samples from the MVTec anomaly detection dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
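The weighted-SSIM idea described above (separate exponents on SSIM's luminance, contrast, and structure sub-functions) can be illustrated with a minimal global-statistics sketch. This omits the adaptive per-sample weighting and sliding Gaussian window of the actual AW-SSIM loss; the fixed exponents here are placeholders:

```python
import numpy as np

# stabilizing constants from the standard SSIM definition (8-bit range)
C1, C2 = (0.01 * 255) ** 2, (0.03 * 255) ** 2
C3 = C2 / 2

def ssim_components(x, y):
    """Global luminance (l), contrast (c), and structure (s) terms."""
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    sxy = ((x - mx) * (y - my)).mean()
    l = (2 * mx * my + C1) / (mx ** 2 + my ** 2 + C1)
    c = (2 * sx * sy + C2) / (sx ** 2 + sy ** 2 + C2)
    s = (sxy + C3) / (sx * sy + C3)
    return l, c, s

def weighted_ssim(x, y, alpha=1.0, beta=1.0, gamma=1.0):
    """SSIM with a separate exponent per sub-function; AW-SSIM instead
    chooses these weights per training sample."""
    l, c, s = ssim_components(x, y)
    return l ** alpha * c ** beta * s ** gamma

x = np.arange(64, dtype=float).reshape(8, 8)
print(round(weighted_ssim(x, x), 6))        # 1.0 for identical images
print(weighted_ssim(x, x + 20.0) < 1.0)     # a brightness shift lowers it
```

Shrinking `alpha` down-weights the luminance term, so the same brightness shift is penalized less; that is the kind of re-balancing the abstract describes.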
5. Sensory translation between audition and vision.
- Author
-
Spence, Charles and Di Stefano, Nicola
- Subjects
- *
SYNESTHESIA , *JOURNALISTS , *COMPOSERS , *COLOR , *AESTHETICS , *SENSES - Abstract
Across the millennia, and across a range of disciplines, there has been a widespread desire to connect, or translate between, the senses in a manner that is meaningful, rather than arbitrary. Early examples were often inspired by the vivid, yet mostly idiosyncratic, crossmodal matches expressed by synaesthetes, often exploited for aesthetic purposes by writers, artists, and composers. A separate approach comes from those academic commentators who have attempted to translate between structurally similar dimensions of perceptual experience (such as pitch and colour). However, neither approach has succeeded in delivering consensually agreed crossmodal matches. As such, an alternative approach to sensory translation is needed. In this narrative historical review, focusing on the translation between audition and vision, we attempt to shed light on the topic by addressing the following three questions: (1) How is the topic of sensory translation related to synaesthesia, multisensory integration, and crossmodal associations? (2) Are there common processing mechanisms across the senses that can help to guarantee the success of sensory translation, or, rather, is mapping among the senses mediated by allegedly universal (e.g., amodal) stimulus dimensions? (3) Is the term 'translation' in the context of cross-sensory mappings used metaphorically or literally? Given the general mechanisms and concepts discussed throughout the review, the answers we come to regarding the nature of audio-visual translation are likely to apply to the translation between other perhaps less-frequently studied modality pairings as well. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. The Role of Perceptual Similarities in Determining the Asymmetric Mixed-Category Advantage.
- Author
-
Peled, Reut and Luria, Roy
- Abstract
Considering working memory capacity limitations, representing all relevant data simultaneously is unlikely. What remains unclear is why some items are better remembered than others when all data are equally relevant. While trying to answer this question, the literature has identified a pattern named the mixed-category benefit in which performance is enhanced when presenting stimuli from different categories as compared to presenting a similar number of items that all belong to just one category. Moreover, previous studies revealed an asymmetry in performance while mixing certain categories, suggesting that not all categories benefit equally from being mixed. In a series of three change-detection experiments, the present study investigated the role of low-level perceptual similarities between categories in determining the mixed-category asymmetric advantages. Our primary conclusion is that items' similarity at the perceptual level has a significant role in the asymmetric performance in the mixed-category phenomenon. We measured sensitivity (d′) to detect a change between sample and test displays and found that the mixed-category advantage dropped when the mixed categories shared basic features. Furthermore, we found that sensitivity to novel items was impaired when presented with another category sharing its basic features. Finally, increasing the encoding interval improved performance for the novel items, but novel items' performance was still impaired when these items were mixed with another category that shared their basic features. Our findings highlight the significant role low-level similarities play in the asymmetric mixed-category performances, for both novel and familiar categories. Public Significance Statement: Our visual environment is a diverse collection of stimuli sourced from various categories. Research has shown that memory performance is better when different categories of stimuli are combined, rather than when a single category is presented. 
However, recent studies have revealed that not all combinations of stimuli produce comparable memory enhancement, which has resulted in an unresolved asymmetry in the mixed-category advantage. Our findings, in contrast to previous interpretations that primarily rely on high-level cognitive processing, demonstrate that low-level encoding processes also impact memory enhancement. Specifically, our study provides evidence that the perceptual similarities between stimuli derived from different categories have an effect on the extent of the mixed-category advantage. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Rethinking cross-domain semantic relation for few-shot image generation.
- Author
-
Gou, Yao, Li, Min, Lv, Yilong, Zhang, Yusen, Xing, Yuhang, and He, Yujie
- Subjects
GENERATIVE adversarial networks ,SOCIAL responsibility of business ,PROBLEM solving - Abstract
Training well-performing Generative Adversarial Networks (GANs) with limited data has always been challenging. Existing methods either require sufficient data (over 100 training images) for training or generate images of low quality and low diversity. To solve this problem, we propose a new Cross-domain Semantic Relation (CSR) loss. The CSR loss improves the performance of the generative model by maintaining the relationship between instances in the source domain and generated images. At the same time, a perceptual similarity loss and a discriminative contrastive loss are designed to further enrich the diversity of generated images and stabilize the training process of models. Experiments on nine publicly available few-shot datasets and comparisons with the current nine methods show that our approach is superior to all baseline methods. Finally, we perform ablation studies on the proposed three loss functions and prove that these three loss functions are essential for few-shot image generation tasks. Code is available at https://github.com/gouayao/CSR. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
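The CSR loss preserves the relationships between instances in the source domain and the generated images. One plausible sketch of such a relation-preserving term (a guess at the general idea, not the authors' exact formulation) compares softmax-normalized cosine-similarity matrices of the two feature sets:

```python
import numpy as np

def cosine_sim_matrix(feats):
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return f @ f.T

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def relation_loss(src_feats, gen_feats):
    """Row-wise KL divergence between pairwise-similarity distributions:
    zero when generated images keep the source instances' mutual relations."""
    p = softmax(cosine_sim_matrix(src_feats))
    q = softmax(cosine_sim_matrix(gen_feats))
    return float(np.mean(np.sum(p * np.log(p / q), axis=1)))

rng = np.random.default_rng(0)
src = rng.normal(size=(8, 32))   # hypothetical source-domain features
gen = rng.normal(size=(8, 32))   # hypothetical generated-image features
print(relation_loss(src, src))   # 0.0: relations perfectly preserved
print(relation_loss(src, gen) > 0.0)
```

Minimizing such a term keeps the generated set's internal geometry aligned with the source domain's, which is how cross-domain relation losses fight mode collapse under few-shot data.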
9. Unsupervised single-shot depth estimation using perceptual reconstruction.
- Author
-
Angermann, Christoph, Schwab, Matthias, Haltmeier, Markus, Laubichler, Christian, and Jónsson, Steinbjörn
- Abstract
Real-time estimation of actual object depth is an essential module for various autonomous system tasks such as 3D reconstruction, scene understanding and condition assessment. During the last decade of machine learning, extensive deployment of deep learning methods to computer vision tasks has yielded approaches that succeed in achieving realistic depth synthesis out of a simple RGB modality. Most of these models are based on paired RGB-depth data and/or the availability of video sequences and stereo images. However, the lack of RGB-depth pairs, video sequences, or stereo images makes depth estimation a challenging task that needs to be explored in more detail. This study builds on recent advances in the field of generative neural networks in order to establish fully unsupervised single-shot depth estimation. Two generators for RGB-to-depth and depth-to-RGB transfer are implemented and simultaneously optimized using the Wasserstein-1 distance, a novel perceptual reconstruction term, and hand-crafted image filters. We comprehensively evaluate the models using a custom-generated industrial surface depth data set as well as the Texas 3D Face Recognition Database, the CelebAMask-HQ database of human portraits and the SURREAL dataset that records body depth. For each evaluation dataset, the proposed method shows a significant increase in depth accuracy compared to state-of-the-art single-image transfer methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
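The Wasserstein-1 distance that anchors this training objective has a simple closed form in one dimension: the area between the two empirical CDFs. A quick numerical check against SciPy, on synthetic samples unrelated to the depth data sets:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 2000)   # synthetic "real" values
b = rng.normal(0.5, 1.0, 2000)   # synthetic "generated" values

# W1 in 1-D is the area between the empirical CDFs
grid = np.linspace(-6.0, 6.0, 4001)
cdf_a = np.searchsorted(np.sort(a), grid, side="right") / a.size
cdf_b = np.searchsorted(np.sort(b), grid, side="right") / b.size
manual = np.abs(cdf_a - cdf_b).sum() * (grid[1] - grid[0])

# both estimates should sit near the true value of 0.5 (the mean shift)
print(round(wasserstein_distance(a, b), 2), round(manual, 2))
```

In the paper the distance is estimated between high-dimensional image distributions via a critic network; this 1-D check only illustrates the underlying quantity.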
10. Acoustic matching and phonotactic expectations in the adaptation of English /s/ into Korean.
- Author
-
Xinyi Zhang, Yun Jung Kim, and Mira Oh
- Subjects
ENGLISH language ,KOREAN language ,LOANWORDS ,EXPECTATION (Psychology) ,SPEECH perception - Abstract
This study aims to demonstrate that the adaptation of loanwords cannot be solely explained by speech perception. Specifically, it focuses on the adaptation of the English /s/ sound into Korean. In English, a singleton /s/ is loaned as tense [s*], while a preconsonantal /s/ is adapted as lenis [s] in Korean. Kim and Curtis (2002) and Kang (2008) provided an acoustic cue-based analysis for the adaptation of English /s/, considering duration and voice quality, respectively. To investigate the perception of English /s/ by Korean listeners, we conducted a perception experiment. We examined the impact of prosodic factors, such as domain-edge position and lexical stress, on the perception of /s/. Our findings revealed diverse perception patterns depending on stress and position within a word. Specifically, /s/ in a stressed syllable and /s/ in a word-edge position were more likely to be perceived as tense [s*]. Contrary to Kim and Curtis (2002), the duration of English /s/ alone was insufficient to explain perception patterns. Based on our varied perception patterns, we argue that categorical adaptation of the singleton /s/ as tense [s*] and the preconsonantal /s/ as lenis [s] does not solely reflect speech perception. Instead, we propose that loanword adaptation is a process driven by top-down/phonotactic expectations and acoustic matching, with the goal of maximizing perceptual similarity between the source word and the loanword. This is consistent with the findings of Daland et al. (2019). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
12. Image Perceptual Similarity Metrics for the Assessment of Basal Cell Carcinoma.
- Author
-
Spyridonos, Panagiota, Gaitanis, Georgios, Likas, Aristidis, Seretis, Konstantinos, Moschovos, Vasileios, Feldmeyer, Laurence, Heidemeyer, Kristine, Zampeta, Athanasia, and Bassukas, Ioannis D.
- Subjects
- *
DIGITAL image processing , *MANN Whitney U Test , *INTER-observer reliability , *PICTURE archiving & communication systems , *PHOTOGRAPHY , *BASAL cell carcinoma , *ARTIFICIAL neural networks , *PHYSICIANS , *IMAGE retrieval , *COLOR ,RESEARCH evaluation - Abstract
Simple Summary: The impact of basal cell carcinomas (BCCs) on a patient's appearance can be significant. Reliable assessments are crucial for the effective management and evaluation of therapeutic interventions. Given that color and texture are critical attributes that describe the clinical aspect of skin lesions, our focus was to devise metrics that capture the way experts perceive deviations of target BCC areas from the surrounding healthy skin. Using computerized image analysis, we explored various similarity metrics to predict perceptual similarity, including different color spaces and distances between features from image embeddings derived from a pre-trained deep convolutional neural network. The results are promising in providing a valid, reliable, and affordable modality, enabling more accurate and standardized assessments of BCC tumors and post-treatment scars. Our approach to modeling color and texture lesion similarity from the surrounding healthy skin is a promising paradigm for the further development of a valid and reliable scar assessment tool. Efficient management of basal cell carcinomas (BCC) requires reliable assessments of both tumors and post-treatment scars. We aimed to estimate image similarity metrics that account for BCC's perceptual color and texture deviation from perilesional skin. In total, 176 clinical photographs of BCC were assessed by six physicians using a visual deviation scale. Internal consistency and inter-rater agreement were estimated using Cronbach's α, weighted Gwet's AC2, and quadratic Cohen's kappa. The mean visual scores were used to validate a range of similarity metrics employing different color spaces, distances, and image embeddings from a pre-trained VGG16 neural network. The calculated similarities were transformed into discrete values using ordinal logistic regression models. 
The Bray–Curtis distance in the YIQ color model and rectified embeddings from the 'fc6' layer minimized the mean squared error and demonstrated strong performance in representing perceptual similarities. Box plot analysis and the Wilcoxon rank-sum test were used to visualize and compare the levels of agreement, conducted on a random validation round between the two groups: 'Human–System' and 'Human–Human.' The proposed metrics were comparable in terms of internal consistency and agreement with human raters. The findings suggest that the proposed metrics offer a robust and cost-effective approach to monitoring BCC treatment outcomes in clinical settings. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
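The best-performing color metric reported here, the Bray-Curtis distance in YIQ space, is straightforward to reproduce in outline. The patches below are invented placeholder colors, not clinical images:

```python
import numpy as np
from scipy.spatial.distance import braycurtis

# NTSC RGB -> YIQ: Y is luma; I and Q carry chroma
RGB2YIQ = np.array([[0.299, 0.587, 0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523, 0.312]])

def rgb_to_yiq(img):
    """img: (..., 3) float array with channels in [0, 1]."""
    return img @ RGB2YIQ.T

def yiq_braycurtis(patch_a, patch_b):
    """Bray-Curtis distance between flattened YIQ representations,
    e.g. a lesion patch vs. perilesional skin."""
    return braycurtis(rgb_to_yiq(patch_a).ravel(),
                      rgb_to_yiq(patch_b).ravel())

skin = np.zeros((4, 4, 3)) + [0.85, 0.65, 0.55]    # invented skin tone
lesion = np.zeros((4, 4, 3)) + [0.75, 0.45, 0.40]  # invented lesion tone
print(yiq_braycurtis(skin, skin))  # 0.0 for identical patches
print(round(yiq_braycurtis(skin, lesion), 3))
```

The study maps such raw distances onto the clinicians' discrete rating scale with ordinal logistic regression; that calibration step is omitted here.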
13. The P3 indexes the distance between perceived and target image.
- Author
-
de la Torre‐Ortiz, Carlos, Spapé, Michiel, and Ruotsalo, Tuukka
- Subjects
- *
GENERATIVE adversarial networks , *VISUAL evoked potentials , *VISUAL perception , *REGRESSION analysis - Abstract
Visual recognition requires inferring the similarity between a perceived object and a mental target. However, a measure of similarity is difficult to determine when it comes to complex stimuli such as faces. Indeed, people may notice someone "looks like" a familiar face, but find it hard to describe on the basis of what features such a comparison is based. Previous work shows that the number of similar visual elements between a face pictogram and a memorized target correlates with the P300 amplitude in the visual evoked potential. Here, we redefine similarity as the distance inferred from a latent space learned using a state‐of‐the‐art generative adversarial neural network (GAN). A rapid serial visual presentation experiment was conducted with oddball images generated at varying distances from the target to determine how P300 amplitude related to GAN‐derived distances. The results showed that distance‐to‐target was monotonically related to the P300, showing perceptual identification was associated with smooth, drifting image similarity. Furthermore, regression modeling indicated that while the P3a and P3b sub‐components had distinct responses in location, time, and amplitude, they were similarly related to target distance. The work demonstrates that the P300 indexes the distance between perceived and target image in smooth, natural, and complex visual stimuli and shows that GANs present a novel modeling methodology for studying the relationships between stimuli, perception, and recognition. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
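The core analysis relates P300 amplitude to Euclidean distance in a GAN latent space. A simulated sketch of that relationship, with fabricated amplitudes rather than the study's EEG recordings:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
target = rng.normal(size=128)          # latent code of the target face
steps = np.linspace(0.2, 3.0, 15)      # intended distances-to-target
dirs = rng.normal(size=(15, 128))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
oddballs = target + steps[:, None] * dirs   # oddball latents at known distances

dist = np.linalg.norm(oddballs - target, axis=1)
# simulated P300 amplitudes that shrink with distance (illustrative only)
p300 = 10.0 - 2.5 * dist + rng.normal(0.0, 0.3, 15)

rho, _ = spearmanr(dist, p300)
print(round(rho, 2))   # strongly negative rank correlation
```

A monotonic (here rank-based) association between latent distance and amplitude is the kind of relationship the study reports for the P3a and P3b sub-components.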
14. Deep perceptual similarity is adaptable to ambiguous contexts
- Author
-
Pihlgren, Gustav Grund, Sandin, Fredrik, and Liwicki, Marcus
- Abstract
This work examines the adaptability of Deep Perceptual Similarity (DPS) metrics to context beyond those that align with average human perception and contexts in which the standard metrics have been shown to perform well. Prior works have shown that DPS metrics are good at estimating human perception of similarity, so-called perceptual similarity. However, it remains unknown whether such metrics can be adapted to other contexts. In this work, DPS metrics are evaluated for their adaptability to different contradictory similarity contexts. Such contexts are created by randomly ranking six image distortions. Metrics are adapted to consider distortions more or less disruptive to similarity depending on their place in the random rankings. This is done by training pretrained CNNs to measure similarity according to given contexts. The adapted metrics are also evaluated on a perceptual similarity dataset to evaluate whether adapting to a ranking affects their prior performance. The findings show that DPS metrics can be adapted with high performance. While the adapted metrics have difficulties with the same contexts as baselines, performance is improved in 99% of cases. Finally, it is shown that the adaption is not significantly detrimental to prior performance on perceptual similarity. The implementation of this work is available online.
- Published
- 2024
15. Umami taste components in chicken-spices blends and potential effect of aroma on umami taste intensity
- Author
-
Andaleeb, Rani, Zhu, Yiwen, Zhang, Ninglong, Zhang, Danni, Hussain, Muzahir, Zhang, Yin, Lu, Yingshuang, and Liu, Yuan (Bioquímica i Biotecnologia, Universitat Rovira i Virgili)
- Abstract
In this study, umami taste intensity (UTI) and umami taste components in chicken breast (CB) and chicken-spices blends were characterized using sensory and instrumental analysis. Our main objective was to assess the aroma-umami taste interactions in different food matrices and reconcile the aroma-taste perception to assist future product development. The impact of key aroma compounds, including vegetable-note "2-pentylfuran", meaty "methional", green "hexanal", and spicy-note "estragole and caryophyllene" on UTI was evaluated in monosodium glutamate and chicken extract. We found that spices significantly decreased UTI and umami taste components in CB. Interestingly, the perceptually similar odorants and tastants exhibited the potential to enhance UTI in food matrices. Methional was able to increase the UTI, whereas spicy and green-note components could reduce the UTI significantly. This information would be valuable to food engineers and formulators in aroma selection to control the UTI perceived by consumers, thus improving the quality and acceptability of the chicken products. (c) 2024 Beijing Academy of Food Sciences. Publishing services by Tsinghua University Press. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
- Published
- 2024
16. Syntactic Generation of Similar Pictures
- Author
-
Jingili, Nuru, Ewert, Sigrid, Sanders, Ian, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Obaidat, Mohammad S., editor, Ören, Tuncer, editor, and Rango, Floriano De, editor
- Published
- 2020
- Full Text
- View/download PDF
17. Variation in loanword adaptation: A case from Mandarin Chinese.
- Author
-
Chen, Yangyu and Lu, Yu-An
- Subjects
- *
LOANWORDS , *MANDARIN dialects , *VOWELS , *NASALITY (Phonetics) , *PHONOTACTICS , *HIGHER education , *UNIVERSITIES & colleges , *ADULTS - Abstract
Mandarin speakers tend to adapt intervocalic nasals as either an onset of the following syllable (e.g. Bruno → bù.lŭ.nuò), as a nasal geminate (e.g. Daniel → dān.ní.ěr), or as one of the above forms (e.g. Tiffany → dì.fú.ní or dì.fēn.ní). Huang and Lin (2013, 2016) identified two factors that may induce the nasal gemination repair: (1) when stress falls on the pre-nasal vowel and (2) when the pre-nasal vowel is a non-high lax vowel. They hypothesized that Mandarin Chinese speakers insert a nasal coda to perceptually approximate the stronger nasalization and longer syllable duration associated with the stressed syllables, and the shorter vowel duration of a lax vowel because the vowels in closed syllables are shorter in Mandarin. The results from two forced-choice identification experiments and an open-ended transcription task showed that although Mandarin speakers' choices of different repairs were indeed biased by the different phonetic manipulations, suggesting an effect of perceptual similarity, their decisions were mainly guided by native phonotactics. The overall findings suggest that phonotactic, phonetic, as well as non-linguistic (i.e. frequency) factors interact with each other, resulting in the variable adaptation pattern. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
18. Integration of Perceptual Similarity With Faithful Mapping of Phonological Contrast in Loanword Adaptation: Mandarin Chinese Adaptation of English Stops.
- Author
-
Mosi He and Jianing He
- Subjects
MANDARIN dialects ,LOANWORDS ,SIMILARITY (Psychology) ,PHONOLOGICAL encoding ,PHONOLOGY - Abstract
In loanword phonology, perceptual similarity and faithful mapping of phonological contrast are two main factors which influence loanword adaptation. Previous studies observe that English phonological voicing contrast is mapped to Mandarin Chinese (hereafter, Mandarin) phonological aspiration contrast, indicating faithful mapping of phonological contrast. Nevertheless, the role of perceptual similarity in adaptation of English stops to Mandarin has not been fully explored. The current study investigates the influence of perceptual similarity on loanword adaptation by examining how English voiced and voiceless stops are adapted in Mandarin Chinese using a data set of 1427 novel Mandarin loanwords from English. The results show consistent adaptation of English voiced stops as Mandarin unaspirated stops and English aspirated voiceless stops as Mandarin aspirated ones, while inconsistent adaptation patterns are found for the English unaspirated voiceless stops. In particular, English post-/s/ unaspirated voiceless stops, which occupy a similar voice onset time (VOT) region to Mandarin unaspirated stops, are adapted as Mandarin unaspirated ones, whereas the rest are mapped to aspirated stops in Mandarin. The overall adaptation patterns provide partial support to faithful mapping of phonological contrast and provide robust evidence for an integration of perceptual similarity with faithful mapping of phonological contrast in loanword adaptation. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
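The VOT-based split this abstract describes can be illustrated with a simple threshold rule. This is purely a sketch: the function name, the 30 ms cut-off, and the post-/s/ flag are expository assumptions, not values or code from the paper.

```python
# Illustrative sketch: predicting the Mandarin adaptation of an English stop
# from its voice onset time (VOT). The 30 ms short-lag/long-lag boundary is a
# rough ballpark value assumed for illustration, not taken from the study.

def adapt_stop(vot_ms: float, post_s: bool = False) -> str:
    """Map an English stop to a Mandarin stop category by VOT.

    English voiced stops (negative or short-lag VOT) pattern with Mandarin
    unaspirated stops; long-lag aspirated voiceless stops pattern with
    Mandarin aspirated stops. Post-/s/ unaspirated voiceless stops fall in
    the short-lag region and map to Mandarin unaspirated stops.
    """
    if post_s or vot_ms < 30:   # short-lag / negative VOT region
        return "unaspirated"
    return "aspirated"          # long-lag region

assert adapt_stop(-90) == "unaspirated"          # English /b/
assert adapt_stop(70) == "aspirated"             # English aspirated /p/
assert adapt_stop(15, post_s=True) == "unaspirated"  # /p/ in "spy"
```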
19. Perceptual similarity effect in people with Down syndrome.
- Author
-
Barrón-Martínez, Julia B. and Arias-Trejo, Natalia
- Subjects
DIAGNOSIS of Down syndrome ,EXPERIMENTAL design ,EYE movements ,PHONOLOGICAL awareness ,ANALYSIS of variance ,CROSS-sectional method ,SENSORY perception ,T-test (Statistics) ,VISUAL perception - Abstract
Background: The perceptual similarity between two objects, specifically similarity in the shape of the referents, is a crucial element for relating words in earlier stages of development. The role of this perceptual similarity has not been systematically explored in children with Down syndrome (DS). Method: The aim was to explore the role of perceptual similarity in relationships between words in children with DS. Two groups, children with typical development (TD) and children with DS, matched by gender and mental age, participated in a priming task with a preferential looking paradigm. The task presented validated perceptually related word pairs (prime-target) and perceptually unrelated pairs. In the priming task, both groups were asked to look at a target image (e.g. ball) following a prime that was perceptually related (e.g. cookie) or unrelated (e.g. skirt) to it. Results: Participants from both groups looked more at targets without perceptual similarity to the prime than at those with similarity, suggesting an inhibition effect. Conclusions: This finding suggests a role of visual information, particularly the shape of the referents, in the construction of the lexical system. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
20. GPU-Based Integrated Security System for Minimizing Data Loss in Big Data Transmission
- Author
-
Bhattacharjee, Shiladitya, Chakkaravarthy, Midhun, Midhun Chakkaravarthy, Divya, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Balas, Valentina Emilia, editor, Sharma, Neha, editor, and Chakrabarti, Amlan, editor
- Published
- 2019
- Full Text
- View/download PDF
21. Mandarin adaptations of voiced alveolar fricative [z] in English loanwords.
- Author
-
Wei Wang
- Subjects
LOANWORDS ,CHINESE characters ,ORTHOGRAPHY & spelling ,PHONOLOGY ,SPEECH perception - Abstract
This paper explores the adaptation of the English voiced fricative [z]E into Mandarin. The principal findings are that the adaptation of [z]E depends on its position in the source word. If [z]E occupies word-initial or medial position, it tends to be borrowed as Mandarin [ts]M or [s]M. If [z]E is in final position, more variation is observed, as it may correspond to [ts]M, [s]M, [ʈʂ]M or [ʂ]M. Moreover, adapting [z]E into Mandarin is also heavily influenced by orthography. English [z]E can be spelled with ⟨z⟩ or ⟨s⟩. The letters ⟨z⟩ and ⟨s⟩ are also listed in Mandarin Pinyin, a Romanization system that transcribes the sounds of Chinese characters using the Roman alphabet. Specifically, ⟨s⟩ is pronounced as [s]M, while ⟨z⟩ is pronounced as [ts]M. It turns out that [s]M sounds different from [z]E, while [ts]M is perceptually similar to [z]E. Generally, two major adaptation patterns emerged, namely [z]E → [ts]M and [z]E → [s]M. The latter is based on spelling similarity rather than perceptual similarity. It is therefore concluded that loanword adaptation is not only determined by speech perception, phonology and the legitimacy of sound structures, but is also systematically influenced by orthography, that is, source-loan spelling similarity. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
22. The effect of categorical superiority in subsequent search misses
- Author
-
Olga Rubtsova and Elena S. Gorbunova
- Subjects
Visual attention ,Visual search ,Subsequent search misses ,Perceptual similarity ,Categorical similarity ,Psychology ,BF1-990 - Abstract
Subsequent search misses (SSM) refer to the decrease in accuracy of second-target detection in dual-target visual search. One theoretical explanation of SSM errors is similarity bias: the tendency to search for similar targets and to miss dissimilar ones. The current study focuses on both perceptual and categorical similarity and their individual roles in SSM. Five experiments investigated the role of perceptual and categorical similarity in subsequent search misses, wherein perceptual and categorical similarity were manipulated separately and task relevance was controlled. Both kinds of similarity played a role; however, categorical similarity had the greater impact on second-target detection. These findings suggest a revision of the traditional perceptual set hypothesis, which mainly focuses on perceptual target similarity in multiple-target visual search.
- Published
- 2021
- Full Text
- View/download PDF
23. Supervised Learning With Perceptual Similarity for Multimodal Gene Expression Registration of a Mouse Brain Atlas
- Author
-
Jan Krepl, Francesco Casalegno, Emilie Delattre, Csaba Erö, Huanxiang Lu, Daniel Keller, Dimitri Rodarie, Henry Markram, and Felix Schürmann
- Subjects
multimodal image registration ,perceptual similarity ,gene expression brain atlas ,Allen mouse brain atlas ,non-rigid ,machine learning ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
The acquisition of high quality maps of gene expression in the rodent brain is of fundamental importance to the neuroscience community. The generation of such datasets relies on registering individual gene expression images to a reference volume, a task encumbered by the diversity of staining techniques employed, and by deformations and artifacts in the soft tissue. Recently, deep learning models have garnered particular interest as a viable alternative to traditional intensity-based algorithms for image registration. In this work, we propose a supervised learning model for general multimodal 2D registration tasks, trained with a perceptual similarity loss on a dataset labeled by a human expert and augmented by synthetic local deformations. We demonstrate the results of our approach on the Allen Mouse Brain Atlas (AMBA), comprising whole brain Nissl and gene expression stains. We show that our framework and design of the loss function result in accurate and smooth predictions. Our model is able to generalize to unseen gene expressions and coronal sections, outperforming traditional intensity-based approaches in aligning complex brain structures.
- Published
- 2021
- Full Text
- View/download PDF
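The perceptual similarity loss used in this model compares images in the feature space of a network rather than in pixel space. A minimal self-contained sketch of that idea, with a tiny fixed random convolution bank standing in for the trained network (the filter bank, image sizes, and function names are illustrative assumptions, not the authors' setup):

```python
import numpy as np

def conv_features(img, filters):
    """Valid 2D convolution of a grayscale image with a bank of 3x3 filters."""
    h, w = img.shape
    feats = []
    for f in filters:
        out = np.zeros((h - 2, w - 2))
        for i in range(h - 2):
            for j in range(w - 2):
                out[i, j] = np.sum(img[i:i + 3, j:j + 3] * f)
        feats.append(out)
    return np.stack(feats)

def perceptual_loss(img_a, img_b, filters):
    """Mean squared distance between feature activations, not pixels."""
    fa = conv_features(img_a, filters)
    fb = conv_features(img_b, filters)
    return float(np.mean((fa - fb) ** 2))

rng = np.random.default_rng(0)
filters = rng.standard_normal((4, 3, 3))   # stand-in for learned filters
a = rng.random((16, 16))
b = a + 0.05 * rng.standard_normal((16, 16))  # mildly perturbed copy
assert perceptual_loss(a, a, filters) == 0.0
assert perceptual_loss(a, b, filters) > 0.0
```

In the paper's setting the feature extractor is a trained network and the loss drives a registration model; here the point is only that the distance is taken between feature maps rather than raw intensities.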
24. Supervised Learning With Perceptual Similarity for Multimodal Gene Expression Registration of a Mouse Brain Atlas.
- Author
-
Krepl, Jan, Casalegno, Francesco, Delattre, Emilie, Erö, Csaba, Lu, Huanxiang, Keller, Daniel, Rodarie, Dimitri, Markram, Henry, and Schürmann, Felix
- Subjects
SUPERVISED learning ,IMAGE registration ,GENE expression ,PERCEPTUAL learning ,DEEP learning ,MICE - Abstract
The acquisition of high quality maps of gene expression in the rodent brain is of fundamental importance to the neuroscience community. The generation of such datasets relies on registering individual gene expression images to a reference volume, a task encumbered by the diversity of staining techniques employed, and by deformations and artifacts in the soft tissue. Recently, deep learning models have garnered particular interest as a viable alternative to traditional intensity-based algorithms for image registration. In this work, we propose a supervised learning model for general multimodal 2D registration tasks, trained with a perceptual similarity loss on a dataset labeled by a human expert and augmented by synthetic local deformations. We demonstrate the results of our approach on the Allen Mouse Brain Atlas (AMBA), comprising whole brain Nissl and gene expression stains. We show that our framework and design of the loss function result in accurate and smooth predictions. Our model is able to generalize to unseen gene expressions and coronal sections, outperforming traditional intensity-based approaches in aligning complex brain structures. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
25. Statistical learning of syllable sequences as trajectories through a perceptual similarity space.
- Author
-
Qi, Wendy and Zevin, Jason D.
- Subjects
- *
STATISTICAL learning , *SEQUENTIAL learning , *COGNITION , *ARTIFICIAL languages , *SPEECH - Abstract
Learning from sequential statistics is a general capacity common across many cognitive domains and species. One form of statistical learning (SL) – learning to segment "words" from continuous streams of speech syllables in which the only segmentation cue is ostensibly the transitional (or conditional) probability from one syllable to the next – has been studied in great detail. Typically, this phenomenon is modeled as the calculation of probabilities over discrete, featureless units. Here we present an alternative model, in which sequences are learned as trajectories through a similarity space. A simple recurrent network coding syllables with representations that capture the similarity relations among them correctly simulated the result of a classic SL study, as did a similar model that encoded syllables as three dimensional points in a continuous similarity space. We then used the simulations to identify a sequence of "words" that produces the reverse of the typical SL effect, i.e., part-words are predicted to be more familiar than Words. Results from two experiments with human participants are consistent with simulation results. Additional analyses identified features that drive differences in what is learned from a set of artificial languages that have the same transitional probabilities among syllables. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
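The transitional (conditional) probabilities that serve as the segmentation cue in these studies can be computed directly from a syllable stream. A sketch of that standard computation, using made-up trisyllabic "words" rather than the study's stimuli:

```python
from collections import Counter

def transitional_probs(stream):
    """P(next syllable | current syllable) over adjacent pairs in the stream."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Three made-up trisyllabic "words"; the stream concatenates them in a fixed
# order so that within-word transitions are certain (TP = 1.0) while
# word-boundary transitions are not (TP = 0.5 here).
words = [["tu", "pi", "ro"], ["go", "la", "bu"], ["da", "ko", "ti"]]
order = [0, 1, 0, 2, 1, 2, 0, 1, 0, 2]
stream = [syll for i in order for syll in words[i]]

tp = transitional_probs(stream)
assert tp[("tu", "pi")] == 1.0   # within a word
assert tp[("ro", "go")] == 0.5   # across a word boundary
```

The abstract's point is that this discrete-unit calculation is not the only viable model: a recurrent network over similarity-space representations of the syllables reproduces, and in some languages reverses, the familiarity pattern the probabilities alone would predict.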
26. Target-Target Perceptual Similarity Within the Attentional Blink
- Author
-
Ivan M. Makarov and Elena S. Gorbunova
- Subjects
task relevance ,interference ,perceptual similarity ,attentional blink ,visual attention ,Psychology ,BF1-990 - Abstract
Three experiments investigated the role of target-target perceptual similarity within the attentional blink (AB). Various geometric shapes were presented in a rapid serial visual presentation task. Targets could have 2, 1, or 0 shared features; the features were shape and size. The second target was presented at one of five or six different lags after the first target. The task was to detect both targets on each trial. Second-target report accuracy increased with target-target similarity. This modulation was observed more in the mixed-trial design than in the blocked design. Results are discussed in terms of increased stability of working memory representations and reduced interference for second-target processing.
- Published
- 2020
- Full Text
- View/download PDF
27. Target-Target Perceptual Similarity Within the Attentional Blink.
- Author
-
Makarov, Ivan M. and Gorbunova, Elena S.
- Subjects
ATTENTIONAL blink ,GEOMETRIC shapes ,SHORT-term memory - Abstract
Three experiments investigated the role of target-target perceptual similarity within the attentional blink (AB). Various geometric shapes were presented in a rapid serial visual presentation task. Targets could have 2, 1, or 0 shared features; the features were shape and size. The second target was presented at one of five or six different lags after the first target. The task was to detect both targets on each trial. Second-target report accuracy increased with target-target similarity. This modulation was observed more in the mixed-trial design than in the blocked design. Results are discussed in terms of increased stability of working memory representations and reduced interference for second-target processing. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
28. A Perception-Inspired Deep Learning Framework for Predicting Perceptual Texture Similarity.
- Author
-
Gao, Ying, Gan, Yanhai, Qi, Lin, Zhou, Huiyu, Dong, Xinghui, and Dong, Junyu
- Subjects
- *
CONVOLUTIONAL neural networks , *PATTERN recognition systems , *TEXTURE mapping , *TEXTURES , *FORECASTING - Abstract
Similarity learning plays a fundamental role in the fields of multimedia retrieval and pattern recognition. Prediction of perceptual similarity is a challenging task, as in most cases we lack human-labeled ground-truth data and robust models that mimic human visual perception. Although some studies in the literature have been dedicated to similarity learning, they mainly focus on evaluating whether or not two images are similar, rather than predicting perceptual similarity consistent with human perception. Inspired by the human visual perception mechanism, we propose a novel framework to predict the perceptual similarity between two texture images. The proposed framework is built on top of Convolutional Neural Networks (CNNs) and considers both powerful features and perceptual characteristics of contours extracted from the images. The similarity value is computed by aggregating resemblances between the corresponding convolutional layer activations of the two texture maps. Experimental results show that the predicted similarity values are consistent with the human-perceived similarity data. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
29. Consistent behavioral and electrophysiological evidence for rapid perceptual discrimination among the six human basic facial expressions.
- Author
-
Luo, Qiuling and Dzhelyova, Milena
- Subjects
- *
DIFFERENTIATION (Cognition) , *FACIAL expression , *VISUAL discrimination , *FACIAL expression & emotions (Psychology) , *SADNESS , *ELECTROENCEPHALOGRAPHY , *HAPPINESS - Abstract
The extent to which the six basic human facial expressions perceptually differ from one another remains controversial. For instance, despite the importance of rapidly decoding fearful faces, this expression often is confused with other expressions, such as Surprise in explicit behavioral categorization tasks. We quantified implicit visual discrimination among rapidly presented facial expressions with an oddball periodic visual stimulation approach combined with electroencephalography (EEG), testing for the relationship with behavioral explicit measures of facial emotion discrimination. We report robust facial expression discrimination responses bilaterally over the occipito-temporal cortex for each pairwise expression change. While fearful faces presented as repeated stimuli led to the smallest deviant responses from all other basic expressions, deviant fearful faces were well discriminated overall and to a larger extent than expressions of Sadness and Anger. Expressions of Happiness did not differ quantitatively as much in EEG as for behavioral subjective judgments, suggesting that the clear dissociation between happy and other expressions, typically observed in behavioral studies, reflects higher-order processes. However, this expression differed from all others in terms of scalp topography, pointing to a qualitative rather than quantitative difference. Despite this difference, overall, we report for the first time a tight relationship of the similarity matrices across facial expressions obtained for implicit EEG responses and behavioral explicit measures collected under the same temporal constraints, paving the way for new approaches of understanding facial expression discrimination in developmental, intercultural, and clinical populations. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
30. Image Morphing With Perceptual Constraints and STN Alignment.
- Author
-
Fish, N., Zhang, R., Perry, L., Cohen‐Or, D., Shechtman, E., and Barnes, C.
- Subjects
- *
IMAGE , *LETTERS , *ANNOTATIONS , *TEXTURES - Abstract
In image morphing, a sequence of plausible frames are synthesized and composited together to form a smooth transformation between given instances. Intermediates must remain faithful to the input, stand on their own as members of the set and maintain a well‐paced visual transition from one to the next. In this paper, we propose a conditional generative adversarial network (GAN) morphing framework operating on a pair of input images. The network is trained to synthesize frames corresponding to temporal samples along the transformation, and learns a proper shape prior that enhances the plausibility of intermediate frames. While individual frame plausibility is boosted by the adversarial setup, a special training protocol producing sequences of frames, combined with a perceptual similarity loss, promote smooth transformation over time. Explicit stating of correspondences is replaced with a grid‐based freeform deformation spatial transformer that predicts the geometric warp between the inputs, instituting the smooth geometric effect by bringing the shapes into an initial alignment. We provide comparisons to classic as well as latent space morphing techniques, and demonstrate that, given a set of images for self‐supervision, our network learns to generate visually pleasing morphing effects featuring believable in‐betweens, with robustness to changes in shape and texture, requiring no correspondence annotation. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
31. Preestablished Harmony (1995)
- Author
-
Quine, W. V., Janssen-Lauret, Frederique, editor, and Kemp, Gary, editor
- Published
- 2016
- Full Text
- View/download PDF
32. Response to Gary Ebbs (1995)
- Author
-
Quine, W. V., Janssen-Lauret, Frederique, editor, and Kemp, Gary, editor
- Published
- 2016
- Full Text
- View/download PDF
33. Introduction to ‘Preestablished Harmony’ and ‘Response to Gary Ebbs’
- Author
-
Ebbs, Gary, Janssen-Lauret, Frederique, editor, and Kemp, Gary, editor
- Published
- 2016
- Full Text
- View/download PDF
34. Deep Perceptual Loss and Similarity
- Author
-
Grund Pihlgren, Gustav
- Abstract
This thesis investigates deep perceptual loss and (deep perceptual) similarity: methods for computing loss and similarity for images as the distance between deep features extracted from neural networks. The primary contributions of the thesis are (i) aggregating much of the existing research on deep perceptual loss and similarity, and (ii) presenting novel research into understanding and improving the methods. This novel research provides insight into how to implement the methods for a given task, their strengths and weaknesses, how to mitigate those weaknesses, and whether these methods can handle the inherent ambiguity of similarity. Society increasingly relies on computer vision technology, from everyday smartphone applications to legacy industries like agriculture and mining. Much of that groundbreaking computer vision technology relies on machine learning methods for its success. In turn, the most successful machine learning methods rely on the ability to compute the similarity of instances. In computer vision, computation of image similarity often strives to mimic human perception, called perceptual similarity. Deep perceptual similarity has proven effective for this purpose and achieves state-of-the-art performance. Furthermore, this method has been used for loss calculation when training machine learning models, with impressive results in various computer vision tasks. However, many open questions remain, including how best to utilize and improve the methods. Since similarity is ambiguous and context-dependent, it is also uncertain whether the methods can handle changing contexts. This thesis addresses these questions through (i) a systematic study of different implementations of deep perceptual loss and similarity, (ii) a qualitative analysis of the strengths and weaknesses of the methods, (iii) a proof-of-concept investigation of the methods' ability to adapt to new contexts, and (iv) cross-referencing the findings with already published works.
- Published
- 2023
35. A Sketch-texture Retrieval Framework using Perceptual Similarity.
- Author
-
Liu, Yan, Gao, Ying, Sadia, Nawaz Hafiza, Qi, Lin, and Dong, Junyu
- Abstract
Sketch-based image retrieval is an important research topic in the field of image processing. Hand-drawn sketches consist only of contour lines and lack detailed information such as color and textons. As a result, they differ significantly from color images in terms of image feature distribution, making sketch-based image retrieval a typical cross-domain retrieval problem. To solve this problem, we constructed a perceptual space consistent with both textures and sketches and used perceptual similarity for sketch-based texture retrieval. To implement this approach, we first conducted a set of psychological experiments to analyze the similarity of visual perception of the textures; we then created a dataset of over a thousand hand-drawn sketches based on the textures. We propose a layer-wise perceptual similarity learning method, with which we trained a similarity prediction network to learn the perceptual similarity between hand-drawn sketches and natural texture images. The trained network can be used for perceptual similarity prediction and efficient retrieval. Our experimental results demonstrate the effectiveness of sketch-based texture retrieval using perceptual similarity. • A hand-drawn natural texture image dataset was created; more than a thousand hand-drawn texture sketches were collected. • A free-grouping psychophysical experiment was completed, and the texture perceptual similarity matrix was obtained. • A natural texture retrieval framework based on hand-drawn sketches was constructed. We introduced a layer-wise perceptual similarity measurement method and trained a small regression convolutional neural network using paired textures, enabling efficient retrieval based on perceptual similarity from psychophysical experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Image Perceptual Similarity Metrics for the Assessment of Basal Cell Carcinoma
- Author
-
Panagiota Spyridonos, Georgios Gaitanis, Aristidis Likas, Konstantinos Seretis, Vasileios Moschovos, Laurence Feldmeyer, Kristine Heidemeyer, Athanasia Zampeta, and Ioannis D. Bassukas
- Subjects
basal cell carcinoma ,scar assessment ,perceptual similarity ,texture similarity ,color similarity ,convolutional neural network - Abstract
Efficient management of basal cell carcinomas (BCC) requires reliable assessments of both tumors and post-treatment scars. We aimed to estimate image similarity metrics that account for BCC’s perceptual color and texture deviation from perilesional skin. In total, 176 clinical photographs of BCC were assessed by six physicians using a visual deviation scale. Internal consistency and inter-rater agreement were estimated using Cronbach’s α, weighted Gwet’s AC2, and quadratic Cohen’s kappa. The mean visual scores were used to validate a range of similarity metrics employing different color spaces, distances, and image embeddings from a pre-trained VGG16 neural network. The calculated similarities were transformed into discrete values using ordinal logistic regression models. The Bray–Curtis distance in the YIQ color model and rectified embeddings from the ‘fc6’ layer minimized the mean squared error and demonstrated strong performance in representing perceptual similarities. Box plot analysis and the Wilcoxon rank-sum test were used to visualize and compare the levels of agreement, conducted on a random validation round between the two groups: ‘Human–System’ and ‘Human–Human.’ The proposed metrics were comparable in terms of internal consistency and agreement with human raters. The findings suggest that the proposed metrics offer a robust and cost-effective approach to monitoring BCC treatment outcomes in clinical settings.
- Published
- 2023
- Full Text
- View/download PDF
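The best-performing colour metric reported here pairs the YIQ colour model with the Bray-Curtis distance. A self-contained sketch of that combination: the RGB→YIQ matrix is the standard NTSC transform, while the uniform example patches and the flattened-vector comparison are illustrative simplifications, not the paper's pipeline.

```python
import numpy as np

# Standard NTSC RGB -> YIQ transform (rows: Y, I, Q).
RGB_TO_YIQ = np.array([
    [0.299,  0.587,  0.114],
    [0.596, -0.274, -0.322],
    [0.211, -0.523,  0.312],
])

def rgb_to_yiq(rgb):
    """Convert an (..., 3) RGB array with values in [0, 1] to YIQ."""
    return rgb @ RGB_TO_YIQ.T

def bray_curtis(u, v):
    """Bray-Curtis dissimilarity between two flattened feature vectors,
    as in scipy.spatial.distance.braycurtis: sum|u-v| / sum|u+v|."""
    u, v = np.ravel(u), np.ravel(v)
    return float(np.sum(np.abs(u - v)) / np.sum(np.abs(u + v)))

# Hypothetical uniform patches: a reddish lesion vs. lighter perilesional skin.
lesion = np.full((4, 4, 3), [0.6, 0.3, 0.3])
skin = np.full((4, 4, 3), [0.8, 0.6, 0.5])
d = bray_curtis(rgb_to_yiq(lesion), rgb_to_yiq(skin))
assert 0.0 < d < 1.0
assert bray_curtis(rgb_to_yiq(skin), rgb_to_yiq(skin)) == 0.0
```

Bray-Curtis normalises the colour deviation into [0, 1] for non-negative inputs, which makes it convenient to map onto an ordinal visual-deviation scale as the study does.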
37. Industrial Buyer ‘Choice Criteria’ Under Diverse Industrial Settings
- Author
-
Rao, C. P., Rosenberg, L. Joseph, White, Randy, Academy of Marketing Science, and Bahn, Kenneth D., editor
- Published
- 2015
- Full Text
- View/download PDF
38. The Role of Acoustic Similarity and Non-Native Categorisation in Predicting Non-Native Discrimination: Brazilian Portuguese Vowels by English vs. Spanish Listeners
- Author
-
Jaydene Elvin, Daniel Williams, Jason A. Shaw, Catherine T. Best, and Paola Escudero
- Subjects
acoustic similarity ,perceptual similarity ,non-native discrimination ,non-native categorisation ,Language and Literature - Abstract
This study tests whether Australian English (AusE) and European Spanish (ES) listeners differ in their categorisation and discrimination of Brazilian Portuguese (BP) vowels. In particular, we investigate two theoretically relevant measures of vowel category overlap (acoustic vs. perceptual categorisation) as predictors of non-native discrimination difficulty. We also investigate whether the individual listener’s own native vowel productions predict non-native vowel perception better than group averages. The results showed comparable performance for AusE and ES participants in their perception of the BP vowels. In particular, discrimination patterns were largely dependent on contrast-specific learning scenarios, which were similar across AusE and ES. We also found that acoustic similarity between individuals’ own native productions and the BP stimuli were largely consistent with the participants’ patterns of non-native categorisation. Furthermore, the results indicated that both acoustic and perceptual overlap successfully predict discrimination performance. However, accuracy in discrimination was better explained by perceptual similarity for ES listeners and by acoustic similarity for AusE listeners. Interestingly, we also found that for ES listeners, the group averages explained discrimination accuracy better than predictions based on individual production data, but that the AusE group showed no difference.
- Published
- 2021
- Full Text
- View/download PDF
39. Dispersion Theory and Phonology
- Author
-
Flemming, Edward
- Published
- 2017
- Full Text
- View/download PDF
40. Perceptual Similarity and Analogy in Creativity and Cognitive Development
- Author
-
Stojanov, Georgi, Indurkhya, Bipin, Kacprzyk, Janusz, Series editor, Prade, Henri, editor, and Richard, Gilles, editor
- Published
- 2014
- Full Text
- View/download PDF
41. Wasserstein Generative Adversarial Network Based De-Blurring Using Perceptual Similarity.
- Author
-
Hong, Minsoo and Choe, Yoonsik
- Subjects
MULTIMEDIA computer applications ,RESEMBLANCE (Philosophy) ,COST functions ,ARTIFICIAL neural networks - Abstract
The de-blurring of blurred images is one of the most important image processing methods, and it can be used as a preprocessing step in many multimedia and computer vision applications. Recently, de-blurring has been performed with neural network methods such as the generative adversarial network (GAN), a powerful generative model. Among the many types of GAN, the proposed method uses the Wasserstein generative adversarial network with gradient penalty (WGAN-GP). Since edge information is the most important factor in an image, the style loss function is applied to represent the perceptual information of the edge, in order to preserve small edge information and capture its perceptual similarity. As a result, the proposed method improves the similarity between sharp and blurred images by minimizing the Wasserstein distance, and it captures perceptual similarity well using the style loss function, considering the correlation of features in the convolutional neural network (CNN). To confirm the performance of the proposed method, three experiments are conducted using two datasets: the GOPRO Large and Köhler datasets. The optimal solutions are found by changing the parameter values experimentally. Consequently, the experiments show that the proposed method achieves 0.98 higher performance in structural similarity (SSIM) and outperforms other de-blurring methods on both datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
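The style loss mentioned here is, in its usual formulation, a distance between Gram matrices of feature-map activations, capturing feature correlations rather than their spatial locations. A minimal NumPy sketch of that computation, with random arrays standing in for CNN activations (the shapes and normalisation are illustrative assumptions, not the paper's exact configuration):

```python
import numpy as np

def gram_matrix(features):
    """Channel-by-channel correlation of a (C, H, W) activation tensor."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)   # normalise by tensor size

def style_loss(feat_a, feat_b):
    """Squared Frobenius distance between the two Gram matrices."""
    return float(np.sum((gram_matrix(feat_a) - gram_matrix(feat_b)) ** 2))

rng = np.random.default_rng(0)
sharp = rng.standard_normal((8, 16, 16))      # stand-in for sharp-image features
blurred = sharp + 0.1 * rng.standard_normal((8, 16, 16))
assert style_loss(sharp, sharp) == 0.0
assert style_loss(sharp, blurred) > 0.0
```

Because the Gram matrix discards spatial position, this term penalises differences in texture and edge statistics between the restored and sharp images, which is why it complements the adversarial Wasserstein objective.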
42. Quine and the Aufbau: The Possibility of Objective Knowledge
- Author
-
Hylton, Peter and Reck, Erich H., editor
- Published
- 2013
- Full Text
- View/download PDF
43. The Importance of Long-Range Interactions to Texture Similarity
- Author
-
Dong, Xinghui, Chantler, Mike J., Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Wilson, Richard, editor, Hancock, Edwin, editor, Bors, Adrian, editor, and Smith, William, editor
- Published
- 2013
- Full Text
- View/download PDF
44. Ontological Constraints in Children's Inductive Inferences: Evidence From a Comparison of Inferences Within Animals and Vehicles
- Author
-
Andrzej Tarlowski
- Subjects
concepts ,categories ,inferential reasoning ,naive biology ,domain knowledge ,perceptual similarity ,Psychology ,BF1-990 - Abstract
There is a lively debate concerning the role of conceptual and perceptual information in young children's inductive inferences. While most studies focus on the role of basic level categories in induction the present research contributes to the debate by asking whether children's inductions are guided by ontological constraints. Two studies use a novel inductive paradigm to test whether young children have an expectation that all animals share internal commonalities that do not extend to perceptually similar inanimates. The results show that children make category-consistent responses when asked to project an internal feature from an animal to either a dissimilar animal or a similar toy replica. However, the children do not have a universal preference for category-consistent responses in an analogous task involving vehicles and vehicle toy replicas. The results also show the role of context and individual factors in inferences. Children's early reliance on ontological commitments in induction cannot be explained by perceptual similarity or by children's sensitivity to the authenticity of objects.
- Published
- 2018
- Full Text
- View/download PDF
45. The Role of Category Label in Adults’ Inductive Reasoning
- Author
-
Wang, Xuyan, Long, Zhoujun, Fan, Sanxia, Yu, Weiyan, Zhou, Haiyan, Qin, Yulin, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Goebel, Randy, editor, Siekmann, Jörg, editor, Wahlster, Wolfgang, editor, Zanzotto, Fabio Massimo, editor, Tsumoto, Shusaku, editor, Taatgen, Niels, editor, and Yao, Yiyu, editor
- Published
- 2012
- Full Text
- View/download PDF
46. Similarity
- Author
-
Chang, Edward Y.
- Published
- 2011
- Full Text
- View/download PDF
47. Children’s use of comparison and function in novel object categorization.
- Author
-
Kimura, Katherine, Hunley, Samuel B., and Namy, Laura L.
- Subjects
- *
PERCEPTUAL learning , *NONVERBAL learning , *CHILD psychology , *CHILDREN'S psychic ability , *CATEGORIZATION (Psychology) - Abstract
Although young children often rely on salient perceptual cues, such as shape, when categorizing novel objects, children eventually shift towards deeper relational reasoning about category membership. This study investigates what information young children use to classify novel instances of familiar categories. Specifically, we investigated two sources of information that have the potential to facilitate the classification of novel exemplars: (1) comparison of familiar category instances, and (2) attention to function information that might direct children’s attention to functionally relevant perceptual features. Across two experiments, we found that comparing two perceptually similar category members—particularly when function information was also highlighted—led children to discover non-obvious relational features that supported their categorization of novel category instances. Together, these findings demonstrate that comparison may aid in novel object categorization by heightening the salience of less obvious, yet functionally relevant, relational structures that support conceptual reasoning. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
48. Visual discrimination of primate species based on faces in chimpanzees.
- Author
-
Wilson, Duncan A. and Tomonaga, Masaki
- Abstract
Many primate studies have investigated discrimination of individual faces within the same species. However, few studies have looked at discrimination between primate species faces at the categorical level. This study systematically examined the factors important for visual discrimination between primate species faces in chimpanzees, including: colour, orientation, familiarity, and perceptual similarity. Five adult female chimpanzees were tested on their ability to discriminate identical and categorical (non-identical) images of different primate species faces in a series of touchscreen matching-to-sample experiments. Discrimination performance for chimpanzee, gorilla, and orangutan faces was better in colour than in greyscale. An inversion effect was also found, with higher accuracy for upright than inverted faces. Discrimination performance for unfamiliar (baboon and capuchin monkey) and highly familiar (chimpanzee and human) but perceptually different species was equally high. After excluding effects of colour and familiarity, difficulty in discriminating between different species faces can be best explained by their perceptual similarity to each other. Categorical discrimination performance for unfamiliar, perceptually similar faces (gorilla and orangutan) was significantly worse than unfamiliar, perceptually different faces (baboon and capuchin monkey). Moreover, multidimensional scaling analysis of the image similarity data based on local feature matching revealed greater similarity between chimpanzee, gorilla and orangutan faces than between human, baboon and capuchin monkey faces. We conclude our chimpanzees appear to perceive similarity in primate faces in a similar way to humans. Information about perceptual similarity is likely prioritized over the potential influence of previous experience or a conceptual representation of species for categorical discrimination between species faces. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
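The entry above reports a multidimensional scaling (MDS) analysis of a face-similarity matrix, used to show that chimpanzee, gorilla, and orangutan faces lie closer to one another than to human, baboon, and capuchin faces. A minimal sketch of that kind of analysis is shown below; the dissimilarity values are hypothetical placeholders that only mimic the qualitative pattern described in the abstract, not the study's actual image-matching data.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical pairwise dissimilarities (0 = identical) among six primate
# face categories, loosely mirroring the reported pattern: great-ape faces
# (chimpanzee, gorilla, orangutan) resemble each other more than the
# remaining species' faces resemble anything else.
species = ["chimpanzee", "gorilla", "orangutan", "human", "baboon", "capuchin"]
D = np.array([
    [0.00, 0.20, 0.30, 0.60, 0.80, 0.90],
    [0.20, 0.00, 0.25, 0.65, 0.80, 0.90],
    [0.30, 0.25, 0.00, 0.70, 0.85, 0.90],
    [0.60, 0.65, 0.70, 0.00, 0.75, 0.80],
    [0.80, 0.80, 0.85, 0.75, 0.00, 0.70],
    [0.90, 0.90, 0.90, 0.80, 0.70, 0.00],
])

# Metric MDS on a precomputed dissimilarity matrix: find 2-D coordinates
# whose Euclidean distances approximate the entries of D.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)
```

In the resulting layout, the three great-ape points cluster together, which is the visual form of the conclusion the abstract draws from its local-feature-matching similarity data.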
49. Ontological Constraints in Children's Inductive Inferences: Evidence From a Comparison of Inferences Within Animals and Vehicles.
- Author
-
Tarlowski, Andrzej
- Subjects
INFERENCE (Logic) ,CHILD psychology ,CONCEPTS ,CATEGORIES (Philosophy) ,INDUCTION (Logic) - Abstract
There is a lively debate concerning the role of conceptual and perceptual information in young children's inductive inferences. While most studies focus on the role of basic-level categories in induction, the present research contributes to the debate by asking whether children's inductions are guided by ontological constraints. Two studies use a novel inductive paradigm to test whether young children have an expectation that all animals share internal commonalities that do not extend to perceptually similar inanimates. The results show that children make category-consistent responses when asked to project an internal feature from an animal to either a dissimilar animal or a similar toy replica. However, the children do not have a universal preference for category-consistent responses in an analogous task involving vehicles and vehicle toy replicas. The results also show the role of context and individual factors in inferences. Children's early reliance on ontological commitments in induction cannot be explained by perceptual similarity or by children's sensitivity to the authenticity of objects. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
50. A Haar wavelet-based perceptual similarity index for image quality assessment.
- Author
-
Reisenhofer, Rafael, Bosse, Sebastian, Kutyniok, Gitta, and Wiegand, Thomas
- Subjects
- *
IMAGE quality analysis , *IMAGE quality in imaging systems , *IMAGE transmission , *IMAGE storage & retrieval systems , *SHEAR waves - Abstract
In most practical situations, the compression or transmission of images and videos creates distortions that will eventually be perceived by a human observer. Vice versa, image and video restoration techniques, such as inpainting or denoising, aim to enhance the quality of experience of human viewers. Correctly assessing the similarity between an image and an undistorted reference image as subjectively experienced by a human viewer can thus lead to significant improvements in any transmission, compression, or restoration system. This paper introduces the Haar wavelet-based perceptual similarity index (HaarPSI), a novel and computationally inexpensive similarity measure for full reference image quality assessment. The HaarPSI utilizes the coefficients obtained from a Haar wavelet decomposition to assess local similarities between two images, as well as the relative importance of image areas. The consistency of the HaarPSI with the human quality of experience was validated on four large benchmark databases containing thousands of differently distorted images. On these databases, the HaarPSI achieves higher correlations with human opinion scores than state-of-the-art full reference similarity measures like the structural similarity index (SSIM), the feature similarity index (FSIM), and the visual saliency-based index (VSI). Along with the simple computational structure and the short execution time, these experimental results suggest a high applicability of the HaarPSI in real world tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
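The HaarPSI abstract above names the measure's main ingredients: Haar wavelet responses at several scales, a pointwise local-similarity map between the two images, and a weighting of image areas by response magnitude. The following is a heavily simplified, illustrative sketch of that recipe, not the authors' implementation: the filter construction, the constants `c` and `alpha`, and the final logistic pooling step are all stand-ins rather than the paper's fitted values.

```python
import numpy as np
from scipy.signal import convolve2d

def haar_filters(scale):
    """Return a pair of 2^scale x 2^scale Haar-like high-pass filters
    (one for each orientation)."""
    n = 2 ** scale
    f = np.ones((n, n)) / (n * n)
    f[: n // 2] *= -1          # sign split -> gradient-like response
    return f, f.T

def local_similarity(a, b, c):
    # SSIM/FSIM-style pointwise similarity of two response magnitudes;
    # equals 1 exactly where a == b.
    return (2 * a * b + c) / (a ** 2 + b ** 2 + c)

def haarpsi_sketch(img1, img2, c=30.0, alpha=4.2):
    """Toy full-reference similarity index loosely following the HaarPSI
    recipe: per-scale Haar responses, pointwise similarity maps, and a
    weighted logistic pooling. Constants are illustrative only."""
    sims = []
    for scale in (1, 2):
        for f in haar_filters(scale):
            r1 = np.abs(convolve2d(img1, f, mode="same"))
            r2 = np.abs(convolve2d(img2, f, mode="same"))
            sims.append(local_similarity(r1, r2, c))
    s = np.mean(sims, axis=0)
    # Weight image areas by the coarser-scale response magnitude,
    # taking the maximum over the two images.
    fh, fv = haar_filters(3)
    w = np.maximum(np.abs(convolve2d(img1, fh, mode="same")),
                   np.abs(convolve2d(img2, fh, mode="same")))
    return np.sum(1.0 / (1.0 + np.exp(-alpha * s)) * w) / np.sum(w)
```

Because the similarity map is exactly 1 wherever the two responses agree, the index attains its maximum for identical inputs and decreases under distortions such as additive noise, which is the qualitative behaviour a full-reference quality measure needs before any benchmarking against human opinion scores.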