8,278 results
Search Results
2. An Optimal Clustering Approach Applying to Asynchronous Finite-State Machine Design
- Author
-
Bychko, Volodymyr A., Yershov, Roman D., Bryukhovetsky, Vasyl V., Bychko, Kyrylo V., Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Kazymyr, Volodymyr, editor, Morozov, Anatoliy, editor, Palagin, Alexander, editor, Shkarlet, Serhiy, editor, Stoianov, Nikolai, editor, Vinnikov, Dmitri, editor, and Zheleznyak, Mark, editor
- Published
- 2024
- Full Text
- View/download PDF
3. Machine Printed Page Number Anomaly Detection Method Based on Multi-scale Self Attention Encoding Decoding
- Author
-
Shao, Xiangchao, Xiao, Xueli, Leng, Yingxiong, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Jin, Hai, editor, Pan, Yi, editor, and Lu, Jianfeng, editor
- Published
- 2024
- Full Text
- View/download PDF
4. The Random Fault Model
- Author
-
Dhooghe, Siemen, Nikova, Svetla, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Carlet, Claude, editor, Mandal, Kalikinkar, editor, and Rijmen, Vincent, editor
- Published
- 2024
- Full Text
- View/download PDF
5. Complex Contourlet Transform Domain Based Image Compression
- Author
-
Saranya, G., Shrinidhi, G. S., Bargavi, S., Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Nagar, Atulya K., editor, Jat, Dharm Singh, editor, Marín-Raventós, Gabriela, editor, and Mishra, Durgesh Kumar, editor
- Published
- 2022
- Full Text
- View/download PDF
6. Process Mining Encoding via Meta-learning for an Enhanced Anomaly Detection
- Author
-
Tavares, Gabriel Marques, Junior, Sylvio Barbon, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Bellatreche, Ladjel, editor, Dumas, Marlon, editor, Karras, Panagiotis, editor, Matulevičius, Raimundas, editor, Awad, Ahmed, editor, Weidlich, Matthias, editor, Ivanović, Mirjana, editor, and Hartig, Olaf, editor
- Published
- 2021
- Full Text
- View/download PDF
7. Neural-like Real-Time Data Protection and Transmission System
- Author
-
Tsmots, Ivan, Rabyk, Vasyl, Skorokhoda, Oleksa, Tsymbal, Yurii, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Shakhovska, Natalya, editor, and Medykovskyy, Mykola O., editor
- Published
- 2021
- Full Text
- View/download PDF
8. Early Detection of Autism Spectrum Disorder in Children Using Supervised Machine Learning
- Author
-
Vakadkar, Kaushik, Purkayastha, Diya, Krishnan, Deepa, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Kotenko, Igor, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Singh, Mayank, editor, Gupta, P. K., editor, Tyagi, Vipin, editor, Flusser, Jan, editor, Ören, Tuncer, editor, and Valentino, Gianluca, editor
- Published
- 2020
- Full Text
- View/download PDF
9. A New Encoding Method for Graph Clustering Problem
- Author
-
Tabrizi, Amir Hossein Farajpour, Izadkhah, Habib, Barbosa, Simone Diniz Junqueira, Editorial Board Member, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Kotenko, Igor, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Grandinetti, Lucio, editor, Mirtaheri, Seyedeh Leili, editor, and Shahbazian, Reza, editor
- Published
- 2019
- Full Text
- View/download PDF
10. Reference Values Based Hardening for Bloom Filters Based Privacy-Preserving Record Linkage
- Author
-
Vaiwsri, Sirintra, Ranbaduge, Thilina, Christen, Peter, Barbosa, Simone Diniz Junqueira, Editorial Board Member, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Kotenko, Igor, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Sivalingam, Krishna M., Founding Editor, Washio, Takashi, Founding Editor, Yuan, Junsong, Founding Editor, Islam, Rafiqul, editor, Koh, Yun Sing, editor, Zhao, Yanchang, editor, Warwick, Graco, editor, Stirling, David, editor, Li, Chang-Tsun, editor, and Islam, Zahidul, editor
- Published
- 2019
- Full Text
- View/download PDF
11. A Method of Encoding Coordinates on the Paper for Digitizing Handwriting.
- Author
-
Qingcheng Li, Guangming Zheng, Ye Lu, and Heng Cao
- Subjects
- PAPER, WRITING, ENCODING, DECODERS & decoding, CIPHERS
- Abstract
We have been using paper for more than a thousand years and are accustomed to it. However, digital information is easier to share and manage, so it is necessary to digitize handwriting. This article proposes a method of encoding coordinates on a two-dimensional page for digitizing handwriting, combining Anoto encoding and nCode encoding. The coding scheme is calibrated based on coordinate relations, and its feasibility is verified through experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
12. Multi-party (Leveled) Homomorphic Encryption on Identity-Based and Attribute-Based Settings
- Author
-
Kuchta, Veronika, Sharma, Gaurav, Sahu, Rajeev Anand, Markowitch, Olivier, Hutchison, David, Series Editor, Kanade, Takeo, Series Editor, Kittler, Josef, Series Editor, Kleinberg, Jon M., Series Editor, Mattern, Friedemann, Series Editor, Mitchell, John C., Series Editor, Naor, Moni, Series Editor, Pandu Rangan, C., Series Editor, Steffen, Bernhard, Series Editor, Terzopoulos, Demetri, Series Editor, Tygar, Doug, Series Editor, Weikum, Gerhard, Series Editor, Kim, Howon, editor, and Kim, Dong-Chan, editor
- Published
- 2018
- Full Text
- View/download PDF
13. (Finite) Field Work: Choosing the Best Encoding of Numbers for FHE Computation
- Author
-
Jäschke, Angela, Armknecht, Frederik, Hutchison, David, Series Editor, Kanade, Takeo, Series Editor, Kittler, Josef, Series Editor, Kleinberg, Jon M., Series Editor, Mattern, Friedemann, Series Editor, Mitchell, John C., Series Editor, Naor, Moni, Series Editor, Pandu Rangan, C., Series Editor, Steffen, Bernhard, Series Editor, Terzopoulos, Demetri, Series Editor, Tygar, Doug, Series Editor, Weikum, Gerhard, Series Editor, Capkun, Srdjan, editor, and Chow, Sherman S. M., editor
- Published
- 2018
- Full Text
- View/download PDF
14. An Entropy Based Encrypted Traffic Classifier
- Author
-
Mamun, Mohammad Saiful Islam, Ghorbani, Ali A., Stakhanova, Natalia, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Qing, Sihan, editor, Okamoto, Eiji, editor, Kim, Kwangjo, editor, and Liu, Dongmei, editor
- Published
- 2016
- Full Text
- View/download PDF
15. Torn-Paper Coding.
- Author
-
Shomorony, Ilan and Vahid, Alireza
- Subjects
- *SEQUENTIAL analysis, *DATA warehousing
- Abstract
We consider the problem of communicating over a channel that randomly “tears” the message block into small pieces of different sizes and shuffles them. For the binary torn-paper channel with block length n and pieces of length Geometric(p_n), we characterize the capacity as C = e^{-α}, where α = lim_{n→∞} p_n log n. Our results show that the case of Geometric(p_n)-length fragments and the case of deterministic length-(1/p_n) fragments are qualitatively different and, surprisingly, the capacity of the former is larger. Intuitively, this is because in the random-fragment case large fragments are sometimes observed, which boosts the capacity. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
16. Paper Bodies: Data and Embodiment in the Sisterhood of Slade's Commonplace Books.
- Author
-
Hess, Jillian M.
- Subjects
- *COMMONPLACE-books, *ROMANTICISM, *ENCODING, *PARATEXT, *ARCHIVES
- Abstract
The article introduces a small bit of data for Romanticists' consideration, a collection of seventeen commonplace books kept from 1814 to 1817. It explores how Mary and Sarah Leigh and their cousin Maria Leigh used their commonplace books as archives of shared intimacy. The strategies the Leigh sisters used to encode embodied data include linking immaterial ideas with the materiality of the notebook and paratext that teaches how to read the verse in the context of the sisters' lived experience.
- Published
- 2022
- Full Text
- View/download PDF
17. A commentary on the NIMA paper by J. Brennan et al. on the demonstration of two-dimensional time encoded imaging of fast neutrons.
- Author
-
Wehe, David
- Subjects
- *FAST neutrons, *ENCODING, *ARMS control
- Published
- 2024
- Full Text
- View/download PDF
18. On the Capacity of the Carbon Copy onto Dirty Paper Channel.
- Author
-
Rini, Stefano and Shamai Shitz, Shlomo
- Subjects
- *RADIO transmitter fading, *TRANSMITTERS (Communication), *QUASISTATIC processes, *RANDOM noise theory, *ENCODING
- Abstract
The “carbon copy onto dirty paper” (CCDP) channel is the compound “writing on dirty paper” channel in which the channel output is obtained as the sum of the channel input, white Gaussian noise, and a Gaussian state sequence randomly selected among a set of possible realizations. The transmitter has non-causal knowledge of the set of possible state sequences but does not know which sequence is selected to produce the channel output. We study the capacity of the CCDP channel for two scenarios: 1) the state sequences are independent and identically distributed; and 2) the state sequences are scaled versions of the same sequence. In the first scenario, we show that a combination of superposition coding, time-sharing, and Gel’fand-Pinsker binning is sufficient to approach the capacity to within 3 bits per channel use for any number of possible state realizations. In the second scenario, we derive the capacity to within 4 bits per channel use for the case of two possible state sequences. This result is extended to the CCDP channel with any number of possible state sequences under certain conditions on the scaling parameters, which we denote as the “strong fading” regime. We conclude by providing some remarks on the capacity of the CCDP channel in which the state sequences have any jointly Gaussian distribution. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
19. Proper multi-layer coding in fading dirty-paper channel.
- Author
-
Hoseini, Sayed Ali Khodam and Akhlaghi, Soroush
- Subjects
- *CHANNEL coding, *RADIO transmitter fading, *ADDITIVE white Gaussian noise channels, *RADIO transmitters & transmission, *ENCODING
- Abstract
This study investigates multi-layer coding over a dirty-paper channel. First, it is demonstrated that superposition coding in such a channel still achieves the capacity of the interference-free additive white Gaussian noise channel when the transmitter is non-causally aware of the interference signal. Then, the problem is extended to the dirty-paper block-fading channel, where it is shown that, in the absence of channel information at the transmitter, the so-called broadcast approach maximises the average achievable rate of such a channel. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
20. Ergodic Fading MIMO Dirty Paper and Broadcast Channels: Capacity Bounds and Lattice Strategies.
- Author
-
Hindy, Ahmed and Nosratinia, Aria
- Abstract
A multiple-input multiple-output (MIMO) version of the dirty paper channel is studied, where the channel input and the dirt experience the same fading process, and the fading channel state is known at the receiver. This represents settings where signal and interference sources are co-located, such as in the broadcast channel. First, a variant of Costa’s dirty paper coding is presented, whose achievable rates are within a constant gap to capacity for all signal and dirt powers. In addition, a lattice coding and decoding scheme is proposed, whose decision regions are independent of the channel realizations. Under Rayleigh fading, the gap to capacity of the lattice coding scheme vanishes with the number of receive antennas, even at finite Signal-to-Noise Ratio (SNR). Thus, although the capacity of the fading dirty paper channel remains unknown, this paper shows it is not far from its dirt-free counterpart. The insights from the dirty paper channel directly lead to transmission strategies for the two-user MIMO broadcast channel, where the transmitter emits a superposition of desired and undesired (dirt) signals with respect to each receiver. The performance of the lattice coding scheme is analyzed under different fading dynamics for the two users, showing that high-dimensional lattices achieve rates close to capacity. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
21. Dirty-Paper Coding Based Secure Transmission for Multiuser Downlink in Cellular Communication Systems.
- Author
-
Wang, Bo and Mu, Pengcheng
- Subjects
- *MULTIUSER channels, *LINEAR network coding, *WIRELESS communications, *BROADCAST channels, *COVARIANCE matrices, *PROBABILITY theory
- Abstract
This paper studies the secure transmission in a multiuser broadcast channel where only the statistical channel state information of the eavesdropper is available. We propose to apply secret dirty-paper coding (S-DPC) in this scenario to support the secure transmission of one user and the normal (unclassified) transmission of the other users. By adopting the S-DPC and encoding the secret message in the first place, all the information-bearing signals of the normal transmission are treated as noise by potential eavesdroppers and thus provide secrecy for the secure transmission. In this way, the proposed approach exploits the intrinsic secrecy of multiuser broadcasting and can serve as an energy-efficient alternative to the traditional artificial noise (AN) scheme. To evaluate the secrecy performance of this approach and compare it with the AN scheme, we propose two S-DPC-based secure transmission schemes for maximizing the secrecy rate under constraints on the secrecy outage probability (SOP) and the normal transmission rates. The first scheme directly optimizes the covariance matrices of the transmit signals, and a novel approximation of the intractable SOP constraint is derived to facilitate the optimization. The second scheme combines zero-forcing dirty-paper coding and AN, and the optimization involves only power allocation. We establish efficient numerical algorithms to solve the optimization problems for both schemes. Theoretical and simulation results confirm that, in addition to supporting the normal transmission, the achievable secrecy rates of the proposed schemes can be close to that of the traditional AN scheme, which supports only the secure transmission of one user. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
22. Text/Conference Paper
- Author
-
Cayoglu, Ugur, Tristram, Frank, Meyer, Jörg, Kerzenmacher, Tobias, Braesicke, Peter, and Streit, Achim
- Subjects
- Data_CODINGANDINFORMATIONTHEORY, compression algorithms, meteorology, prediction-based compression, encoding, information spaces
- Abstract
One of the scientific communities that generate the largest amounts of data today is the climate sciences. New climate models enable model integrations at unprecedented resolution, simulating timescales from decades to centuries of climate change. Nowadays, limited storage space and ever-increasing model output are a big challenge. For this reason, we look at lossless, prediction-based data compression. We show that there is a significant dependence of the compression rate on the chosen traversal method and the underlying data model. We examine the influence of this structural dependency on prediction-based compression algorithms and explore possibilities to improve compression rates. We introduce the concept of Information Spaces (IS), which help to improve the accuracy of predictions by nearly 10% and decrease the standard deviation of the compression results by 20% on average.
- Published
- 2019
- Full Text
- View/download PDF
23. Dirty Paper Coding Based on Polar Codes and Probabilistic Shaping.
- Author
-
Sener, M. Yusuf, Bohnke, Ronald, Xu, Wen, and Kramer, Gerhard
- Abstract
A precoding technique based on polar codes and probabilistic shaping is introduced for dirty paper coding. Two variants of the precoding use multi-level shaping and sign-bit shaping in one dimension. The decoder uses multi-stage successive-cancellation list decoding with list-passing across the bit levels. The approach achieves approximately the same frame error rates as polar codes with multi-level shaping over standard additive white Gaussian noise channels at a block length of 256 symbols and with different amplitude shift keying (ASK) constellations. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
24. A Review of Affective Computing Research Based on Function-Component-Representation Framework.
- Author
-
Ma, Haiwei and Yarosh, Svetlana
- Abstract
Affective computing (AC), a field that bridges the gap between human affect and computational technology, has witnessed remarkable technical advancement. However, theoretical underpinnings of affective computing are rarely discussed and reviewed. This paper provides a thorough conceptual analysis of the literature to understand theoretical questions essential to affective computing and current answers. Inspired by emotion theories, we proposed the function-component-representation (FCR) framework to organize different conceptions of affect along three dimensions that each address an important question: function of affect (why compute affect), component of affect (how to compute affect), and representation of affect (what affect to compute). We coded each paper by its underlying conception of affect and found preferences towards affect detection, behavioral component, and categorical representation. We also observed coupling of certain conceptions. For example, papers using the behavioral component tend to adopt the categorical representation, whereas papers using the physiological component tend to adopt the dimensional representation. The FCR framework is not only the first attempt to organize different theoretical perspectives in a systematic and quantitative way, but also a blueprint to help conceptualize an AC project and pinpoint new possibilities. Future work may explore how the identified frequencies of FCR framework combinations may be applied in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
25. On the Dispersions of the Gel’fand–Pinsker Channel and Dirty Paper Coding.
- Author
-
Scarlett, Jonathan
- Subjects
- *ERROR probability, *GAUSSIAN channels, *RANDOM noise theory, *DISPERSIVE channels (Telecommunication), *CHANNEL coding
- Abstract
This paper studies the second-order coding rates for memoryless channels with a state sequence known non-causally at the encoder. In the case of finite alphabets, an achievability result is obtained using constant-composition random coding, and by using a small fraction of the block to transmit the empirical distribution of the state sequence. For error probabilities less than 0.5, it is shown that the second-order rate improves on an existing one based on independent and identically distributed random coding. In the Gaussian case (dirty paper coding) with an almost-sure power constraint, an achievability result is obtained using random coding over the surface of a sphere, and using a small fraction of the block to transmit a quantized description of the state power. It is shown that the second-order asymptotics are identical to the single-user Gaussian channel of the same input power without a state. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
26. Channel Capacity Analysis for Dirty Paper Coding With the Binary Codeword and Interference.
- Author
-
Xu, Zhengguang and Xie, Yongbiao
- Abstract
Dirty paper coding is an interference pre-cancellation method for interference known at the transmitter and serves as a basic building block in digital watermarking systems. In this letter, we investigate the dirty paper model in the simplest digital communication system, where both the codeword and the interference are binary. For watermark embedding, we derive the relevant coding, the constant coding, and the symmetric relevant coding when the encoder focuses on the binary codeword and interference. The channel capacity is analyzed and the optimal parameter is discussed in this case. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
27. Enhancing LS-PIE's Optimal Latent Dimensional Identification: Latent Expansion and Latent Condensation.
- Author
-
Stevens, Jesse, Wilke, Daniel N., and Setshedi, Isaac I.
- Subjects
- SINGULAR value decomposition, COMPACT spaces (Topology), LATENT variables, PRINCIPAL components analysis, CONDENSATION
- Abstract
The Latent Space Perspicacity and Interpretation Enhancement (LS-PIE) framework enhances dimensionality reduction methods for linear latent variable models (LVMs). This paper extends LS-PIE by introducing an optimal latent discovery strategy to automate identifying optimal latent dimensions and projections based on user-defined metrics. The latent condensing (LCON) method clusters and condenses an extensive latent space into a compact form. A new approach, latent expansion (LEXP), incrementally increases latent dimensions using a linear LVM to find an optimal compact space. This study compares these methods across multiple datasets, including a simple toy problem, mixed signals, ECG data, and simulated vibrational data. LEXP can accelerate the discovery of optimal latent spaces and may yield different compact spaces from LCON, depending on the LVM. This paper highlights the LS-PIE algorithm's applications and compares LCON and LEXP in organising, ranking, and scoring latent components akin to principal component analysis or singular value decomposition. This paper shows clear improvements in the interpretability of the resulting latent representations allowing for clearer and more focused analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. The Distortions Region of Broadcasting Correlated Gaussians and Asymmetric Data Transmission Over a Gaussian BC.
- Author
-
Bross, Shraga I.
- Subjects
- DATA transmission systems, DIGITAL communications, GAUSSIAN channels, BROADCAST channels, ELECTRONIC paper, VIDEO coding, DIGITAL video broadcasting
- Abstract
A memoryless bivariate Gaussian source is transmitted to a pair of receivers over an average-power-limited, bandwidth-matched Gaussian broadcast channel. Based on their observations, Receiver 1 reconstructs the first source component while Receiver 2 reconstructs the second source component, both seeking to minimize the expected squared-error distortion. In addition to the source transmission, digital information at a specified rate should be conveyed reliably to Receiver 1, the “stronger” receiver. Given the message rate, we characterize the achievable distortions region. Specifically, there is an SNR threshold below which Dirty Paper coding of the digital information against a linear combination of the source components is optimal. The threshold is a function of the digital information rate, the source correlation, and the distortion at the “stronger” receiver. Above this threshold, a Dirty Paper coding extension of the Tian-Diggavi-Shamai hybrid scheme is shown to be optimal. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
29. Light drive reversible color switching for rewritable media and encoding.
- Author
-
Ren, Qiaoli, Aodeng, Gerile, Ga, Lu, and Ai, Jun
- Subjects
- *ELECTRONIC paper, *REDUCING agents, *CATALYSTS, *ENCODING, *INDUSTRIAL costs, *COLOR
- Abstract
[Display omitted]
• Photoreversible color switching systems have been integrated using the reducing agent triethanolamine and a β-FeOOH nanorod catalyst.
• With a high switching rate and high reversibility (>10 cycles), the new system could find broad use in rewritable paper.
• The rewritable paper is highly applicable as a self-erasing rewritable medium for printing.
• The rewritable paper can be applied in a data encoding and reading strategy.
Nowadays, photoreversible color switching systems (PCSS) are limited by requirements such as good stability, low toxicity, fast light response, long cycling performance, and low production cost; it is therefore a major challenge to develop a system that integrates all of these beneficial features. Herein, a new type of PCSS has been demonstrated, which integrates the reducing agent triethanolamine (TEOA), a catalyst of β-FeOOH nanorods, and the redox-driven color-conversion characteristics of redox dyes. The system has the advantages of a high switching rate, high reversibility (>10 cycles), wavelength-selective response, safety, and low light damage, and can be widely used in rewritable paper. The as-prepared rewritable paper has high contrast, high resolution, suitable printing time, and good reversibility, in line with the environmental-protection concept of green printing. Rewritable paper is a self-erasable rewritable medium highly suitable for printing. The environmentally friendly film has the advantages of low cost, convenient preparation, and recyclability. It is expected to replace traditional writing and printing paper and existing systems, taking a big step toward practical application. Even more surprisingly, it can also be applied in a data encoding and reading strategy. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
30. Near-Field Chipless-RFID System With Erasable/Programmable 40-bit Tags Inkjet Printed on Paper Substrates.
- Author
-
Herrojo, Cristian, Mata-Contreras, Javier, Paredes, Ferran, Nunez, Alba, Ramon, Eloi, and Martin, Ferran
- Abstract
In this letter, a chipless radio frequency identification (chipless-RFID) system with erasable/programmable 40-bit tags inkjet printed on paper substrates, where tag reading proceeds sequentially through near-field coupling, is presented for the first time. The tags consist of a linear chain of identical split ring resonators (SRRs) printed at predefined and equidistant positions on a paper substrate, and each resonant element provides a bit of information. Tag programming is achieved by cutting certain resonant elements, providing the logic state “0” to the corresponding bit. Conversely, tags can be erased (all bits set to “1”) by short circuiting those previously cut resonant elements through inkjet. An important feature of the proposed system is the fact that tag reading is possible either with the SRR chain faced up or faced down (with regard to the reader). To this end, two pairs of header bits (resonators), with different sequences, have been added at the beginning and at the end of the tag identification chain. Moreover, tag data storage capacity (number of bits) is only limited by the space occupied by the linear chain. The implementation of tags on paper substrates demonstrates the potential of the proposed chipless-RFID system in secure paper applications, where the necessary proximity between the reader and the tag, inherent to near-field reading, is not an issue. [ABSTRACT FROM PUBLISHER]
- Published
- 2018
- Full Text
- View/download PDF
31. HME-KG: A method of constructing the human motion encoding knowledge graph based on a hierarchical motion model.
- Author
-
Liu, Qi, Huang, Tianyu, and Li, Xiangchen
- Subjects
- KNOWLEDGE graphs, MOTION capture (Human mechanics), POSTURE, ENCODING, VISUALIZATION
- Abstract
The diversity, infinity, and nonuniform description of human motion make it challenging for computers to understand human activities. To explore and reuse captured human motion data, this work defines a more comprehensive hierarchical theoretical model of human motion and proposes a standard human posture encoding scheme. We construct a domain knowledge graph (DKG) named the human motion encoding knowledge graph (HME-KG) based on posture codes and action labels. Community detection, similarity analysis, and centrality analysis are used to explore the potential value of motion data. This paper conducts an evaluation and visualization of HME-KG. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Han–Kobayashi and Dirty-Paper Coding for Superchannel Optical Communications.
- Author
-
Koike-Akino, Toshiaki, Kojima, Keisuke, Millar, David S., Parsons, Kieran, Kametani, Soichiro, Sugihara, Takashi, Yoshida, Tsuyoshi, Ishida, Kazuyuki, Miyata, Yoshikuni, Matsumoto, Wataru, and Mizuochi, Takashi
- Abstract
Superchannel transmission is a candidate to realize Tb/s-class high-speed optical communications. In order to achieve higher spectrum efficiency, the channel spacing shall be as narrow as possible. However, densely allocated channels can cause non-negligible inter-channel interference (ICI) especially when the channel spacing is close to or below the Nyquist bandwidth. In this paper, we consider joint decoding to cancel the ICI in dense superchannel transmission. To further improve the spectrum efficiency, we propose the use of Han–Kobayashi superposition coding. In addition, for the case when neighboring subchannel transmitters can share data, we introduce dirty-paper coding for pre-cancelation of the ICI. We analytically evaluate the potential gains of these methods when ICI is present for sub-Nyquist channel spacing. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
33. Long Short-Term Memory-Based Non-Uniform Coding Transmission Strategy for a 360-Degree Video.
- Author
-
Guo, Jia, Li, Chengrui, Zhu, Jinqi, Li, Xiang, Gao, Qian, Chen, Yunhe, and Feng, Weijia
- Subjects
- PREDICTION models, TILES, VIDEOS, ALGORITHMS, VIDEO coding, ENCODING
- Abstract
This paper studies an LSTM-based adaptive transmission method for a 360-degree video and proposes a non-uniform encoding transmission strategy based on LSTM. Our goal is to maximize the user's video experience by dynamically dividing the 360-degree video into tiles of different numbers and sizes, and selecting different bitrates for each tile. This aims to reduce buffering events and video jitter. To determine the optimal number and size of tiles at the current moment, we constructed a dual-layer stacked LSTM network model. This model predicts, in real-time, the number, size, and bitrate of the tiles needed for the next moment of the 360-degree video based on the distance between the user's eyes and the screen. In our experiments, we used an exhaustive algorithm to calculate the optimal tile division and bitrate selection scheme for a 360-degree video under different network conditions, and used this dataset to train our prediction model. Finally, by comparing with other advanced algorithms, we demonstrated the superiority of our proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. Fd-CasBGRel: A Joint Entity–Relationship Extraction Model for Aquatic Disease Domains.
- Author
-
Ye, Hongbao, Lv, Lijian, Zhou, Chengquan, and Sun, Dawei
- Subjects
KNOWLEDGE graphs, CORPORA, WEBSITES, GENERALIZATION, ENCODING - Abstract
Featured Application: The model is primarily utilized for the task of entity relationship extraction during the construction process of an aquatic disease knowledge graph. Entity–relationship extraction plays a pivotal role in the construction of domain knowledge graphs. For the aquatic disease domain, however, this relationship extraction is a formidable task because of overlapping relationships, data specialization, limited feature fusion, and imbalanced data samples, which significantly weaken the extraction's performance. To tackle these challenges, this study leverages published books and aquatic disease websites as data sources to compile a text corpus, establish datasets, and then propose the Fd-CasBGRel model specifically tailored to the aquatic disease domain. The model uses the Casrel cascading binary tagging framework to address relationship overlap; utilizes task fine-tuning for better performance on aquatic disease data; trains on specialized aquatic disease corpora to improve adaptability; and integrates the BRC feature fusion module—which incorporates self-attention mechanisms, BiLSTM, relative position encoding, and conditional layer normalization—to leverage entity position and context for enhanced fusion. Further, it replaces the traditional cross-entropy loss function with the GHM loss function to mitigate category imbalance issues. The experimental results indicate that the F1 score of the Fd-CasBGRel on the aquatic disease dataset reached 84.71%, significantly outperforming several benchmark models. This model effectively addresses the challenges of ternary extraction's low performance caused by high data specialization, insufficient feature integration, and data imbalances. The model achieved the highest F1 score of 86.52% on the overlapping relationship category dataset, demonstrating its robust capability in extracting overlapping data. 
Furthermore, we conducted comparative experiments on the publicly available WebNLG dataset, where the model obtained the best performance metrics among all compared models, indicating good generalization ability. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. Binding in Najdi Arabic: Types of Reflexives, the Argument Structure of Reflexive Constructions and Possessive Reflexives.
- Author
-
Alowayed, Asma I. and Albaty, Yasser A.
- Subjects
ARGUMENT, REFLEXIVITY, ENCODING, SYNTAX (Grammar) - Abstract
The present paper investigates reflexives in Najdi Arabic (NA). We start by examining how the encoding of reflexivity in NA can be attained lexically, morphologically, and syntactically. We also investigate the argument structure of reflexive constructions in NA in accordance with Reinhart and Siloni’s (2005) bundling approach. Finally, possessive reflexives and their cross-linguistic distribution with definiteness marking are examined, providing empirical coverage to this area in NA. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. 48‐1: Invited Paper: Holographic Display Based on Complex‐Amplitude Encoding with Phase‐Only SLMs.
- Author
-
Sui, Xiaomeng, Cao, Liangcai, and Jin, Guofan
- Subjects
HOLOGRAPHIC displays, HOLOGRAPHY, LIGHT filters, IMAGE reconstruction, DIGITAL holographic microscopy, ENCODING - Abstract
Double-phase holograms enable holographic reconstructions with improved image quality but still suffer from spatial-shifting noise generated by the complex-amplitude wavefront encoding. The band-limited double-phase method suppresses this spatial-shifting noise through band limitation. A multi-plane complex-amplitude holographic display is implemented based on a band-limited double-phase hologram. High-sharpness reconstructions free of spatial-shifting noise are realized with numerical band limitation and optical filtering. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
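The double-phase decomposition behind this abstract can be sketched in a few lines: any complex sample A·e^{iφ} with A in [0, 1] equals 0.5·(e^{iθ1} + e^{iθ2}) with θ1,2 = φ ± arccos(A), so a phase-only SLM can represent a full complex field. A minimal NumPy sketch; the checkerboard interleaving is an illustrative assumption, not the authors' exact scheme, which also applies band limitation:

```python
import numpy as np

def double_phase(amplitude, phase):
    """Decompose a complex field A*exp(i*phi), A in [0, 1], into two
    phase-only components so that 0.5*(e^{i th1} + e^{i th2}) == A e^{i phi}."""
    delta = np.arccos(np.clip(amplitude, 0.0, 1.0))
    theta1 = phase + delta
    theta2 = phase - delta
    # Interleave the two phase maps in a checkerboard pattern so a
    # single phase-only SLM can display both components at once.
    yy, xx = np.indices(amplitude.shape)
    hologram = np.where((xx + yy) % 2 == 0, theta1, theta2)
    return theta1, theta2, hologram

# sanity check on a small random field
rng = np.random.default_rng(0)
A = rng.uniform(0, 1, (4, 4))
phi = rng.uniform(-np.pi, np.pi, (4, 4))
t1, t2, holo = double_phase(A, phi)
recon = 0.5 * (np.exp(1j * t1) + np.exp(1j * t2))
print(np.allclose(recon, A * np.exp(1j * phi)))  # True
```

The spatial-shifting noise the abstract mentions arises precisely from this checkerboard interleaving, which the band-limited variant suppresses.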
37. A multi-scale residual encoding network for concrete crack segmentation.
- Author
-
Liu, Die, Xu, MengDie, Li, ZhiTing, He, Yingying, Zheng, Long, Xue, Pengpeng, and Wu, Xiaodong
- Subjects
CRACKING of concrete, LINEAR network coding, SURFACE cracks, ENCODING - Abstract
Concrete surface crack detection plays a crucial role in ensuring concrete safety. However, manual crack detection is time-consuming, necessitating the development of an automatic method to streamline the process. Nonetheless, detecting concrete cracks automatically remains challenging due to the heterogeneous strength of cracks and the complex background. To address this issue, we propose a multi-scale residual encoding network for concrete crack segmentation. This network leverages the U-NET basic network structure to merge feature maps from different levels into low-level features, thus enhancing the utilization of predicted feature maps. The primary contribution of this research is the enhancement of the U-NET coding network through the incorporation of a residual structure. This modification improves the coding network's ability to extract features related to small cracks. Furthermore, an attention mechanism is utilized within the network to enhance the perceptual field information of the crack feature map. The integration of this mechanism enhances the accuracy of crack detection across various scales. Furthermore, we introduce a specially designed loss function tailored to crack datasets to tackle the problem of imbalanced positive and negative samples in concrete crack images caused by data imbalance. This loss function helps improve the prediction accuracy of crack pixels. To demonstrate the superiority and universality of our proposed method, we conducted a comparative evaluation against state-of-the-art edge detection and semantic segmentation methods using a standardized evaluation approach. Experimental results on the SDNET2018 dataset demonstrate the effectiveness of our method, achieving mIOU, F1-score, Precision, and Recall scores of 0.862, 0.941, 0.945, and 0.9394, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Tutorial - Collaborative approaches to discourse: Music scholarship using performance recordings and Linked Data annotations
- Author
-
Lewis, David, Page, Kevin, VanderHart, Chanda, Weigl, David M., Scholger, Walter, Vogeler, Georg, Tasovac, Toma, Baillot, Anne, Raunig, Elisabeth, Scholger, Martina, Steiner, Elisabeth, Centre for Information Modelling, and Helling, Patrick
- Subjects
Paper, and methods, Digital Musicology, Annotation, Musicology, Media studies, Data Modelling, annotation structures, Music Performance, encoding, Computer science, Pre-Conference Workshop and Tutorial, Multimedia, Humanities computing, systems, and analysis, data modeling, linked (open) data, music and sound digitization - Abstract
Participants are invited to explore modelling and annotation through exercises demonstrating music research conducted in Oxford and Vienna. After hands-on ontology design exercises with pen and paper, they are introduced to cutting-edge digital tooling and led through research processes of an ongoing project investigating the Vienna Philharmonic's New Year's Concerts.
- Published
- 2023
- Full Text
- View/download PDF
39. Coding With Noiseless Feedback Over the Z-Channel.
- Author
-
Deppe, Christian, Lebedev, Vladimir, Maringer, Georg, and Polyanskii, Nikita
- Subjects
ERROR-correcting codes, BOUND states, PARALLEL algorithms - Abstract
In this paper, we consider encoding strategies for the Z-channel with noiseless feedback. We analyze the combinatorial setting where the maximum number of errors inflicted by an adversary is proportional to the number of transmissions, which goes to infinity. Without feedback, it is known that the rate of optimal asymmetric-error-correcting codes for the error fraction $\tau \ge 1/4$ vanishes as the blocklength grows. In this paper, we give an efficient feedback encoding scheme with $n$ transmissions that achieves a positive rate for any fraction of errors $\tau < 1$ and $n\to \infty $. Additionally, we state an upper bound on the rate of asymptotically long feedback asymmetric error-correcting codes. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
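For readers unfamiliar with the channel model: the Z-channel is the binary asymmetric channel in which only 1 → 0 errors occur, while 0s always arrive intact. A minimal Python sketch of the channel itself (purely illustrative; the paper's contribution is the feedback encoding strategy, not this simulation):

```python
import random

def z_channel(bits, p, rng=None):
    """Z-channel model: a transmitted 1 is received as 0 with
    probability p; a transmitted 0 is always received correctly."""
    rng = rng or random.Random(42)
    return [0 if b == 1 and rng.random() < p else b for b in bits]

received = z_channel([1, 0, 1, 1, 0], p=1.0)
print(received)  # [0, 0, 0, 0, 0] -- every 1 decays, no 0 is ever flipped
```

With noiseless feedback, the encoder observes each received symbol and can adapt subsequent transmissions, which is what allows a positive rate for any error fraction below 1.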
40. Rhythmic, Melodic and Vertical N-Gram Features as a Means of Studying Symbolic Music Computationally
- Author
-
McKay, Cory, Cumming, Julie, Fujinaga, Ichiro, Scholger, Walter, Vogeler, Georg, Tasovac, Toma, Baillot, Anne, Raunig, Elisabeth, Scholger, Martina, Steiner, Elisabeth, Centre for Information Modelling, and Helling, Patrick
- Subjects
Paper, attribution studies and stylometric analysis, Music theory, representation, Library & information science, Musicology, Statistics, Automated analysis, encoding, N-grams, Music classification, jSymbolic, Computer science, manuscripts description, Short Presentation, Machine learning, FOS: Mathematics, and analysis, artificial intelligence and machine learning, Features, music and sound digitization - Abstract
This presentation explores how n-grams can be used to automatically classify and learn about music. An overall discussion is provided of various ways in which n-grams can be adapted for use with digital scores, and of how musically meaningful features can be extracted from them. The jSymbolic 3.0 alpha prototype feature extractor is then used in three sets of music classification experiments investigating how n-gram features perform relative to and combined with other types of features extracted from symbolic music files. Funded by the FRQSC and SSHRC.
- Published
- 2023
- Full Text
- View/download PDF
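The melodic n-gram idea in this abstract is easy to sketch: derive intervals from consecutive pitches, slide a window of length n over them, and count the resulting tuples as features. A toy Python example (the pitch sequence and bigram choice are illustrative, not jSymbolic's actual feature definitions):

```python
from collections import Counter

def ngrams(seq, n):
    """All contiguous length-n windows of a sequence, as tuples."""
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

# melodic intervals: semitone differences between consecutive MIDI pitches
pitches = [60, 62, 64, 62, 60, 62, 64, 65, 64]
intervals = [b - a for a, b in zip(pitches, pitches[1:])]
counts = Counter(ngrams(intervals, 2))
print(counts[(2, 2)])  # 2 -- the rising whole-step bigram occurs twice
```

Rhythmic and vertical (simultaneous-interval) n-grams follow the same pattern, just over duration sequences or chord stacks instead of melodic intervals.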
41. DNA encoding schemes herald a new age in cybersecurity for safeguarding digital assets.
- Author
-
Aqeel, Sehrish, Khan, Sajid Ullah, Khan, Adnan Shahid, Alharbi, Meshal, Shah, Sajid, Affendi, Mohammed EL, and Ahmad, Naveed
- Subjects
ARTIFICIAL chromosomes, DNA, INTERNET security, ENCODING, ASSETS (Accounting) - Abstract
The growing need to secure and protect digital assets demands immediate measures for robust security. Advanced methods such as conventional encryption schemes remain vulnerable to attack. DNA encoding schemes offer a promising alternative: they encode digital data as synthetic DNA sequences, exploiting DNA's unique properties such as stability and durability. This study explores DNA's potential for encoding in evolving cyber security. Based on a systematic literature review, the paper discusses challenges, advantages, and directions for future work, analyzing current trends and innovations in methodology, security attacks, tool implementations, and evaluation metrics. Various tools, such as Mathematica, MATLAB, the NIST test suite, and CloudSim, were employed to evaluate the performance of the proposed methods and obtain results. By identifying the strengths and limitations of the proposed methods, the study highlights research challenges and offers future scope for investigation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
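The basic building block of most DNA encoding schemes is a fixed mapping from bit pairs to the four nucleotides (A, C, G, T), so each byte becomes four bases. A minimal round-trip sketch (the particular 2-bit mapping is an illustrative assumption; real schemes add keying, permutation, or biological constraints on the sequences):

```python
ENC = {"00": "A", "01": "C", "10": "G", "11": "T"}
DEC = {v: k for k, v in ENC.items()}

def to_dna(data: bytes) -> str:
    """Encode bytes as a nucleotide string, two bits per base."""
    bits = "".join(f"{b:08b}" for b in data)
    return "".join(ENC[bits[i:i + 2]] for i in range(0, len(bits), 2))

def from_dna(dna: str) -> bytes:
    """Invert to_dna: four bases back into one byte."""
    bits = "".join(DEC[n] for n in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

print(to_dna(b"Hi"))  # CAGACGGC
```

Security then rests not on this mapping itself but on the secrecy of the chosen mapping and any transformations layered on top of it.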
42. A clinical trial termination prediction model based on denoising autoencoder and deep survival regression.
- Author
-
Qi, Huamei, Yang, Wenhui, Zou, Wenqin, and Hu, Yuxuan
- Subjects
SIGNAL denoising, PREDICTION models, REGRESSION analysis, ENCODING, PREGNANT women - Abstract
Effective clinical trials are necessary for understanding medical advances but early termination of trials can result in unnecessary waste of resources. Survival models can be used to predict survival probabilities in such trials. However, survival data from clinical trials are sparse, and DeepSurv cannot accurately capture their effective features, making the models weak in generalization and decreasing their prediction accuracy. In this paper, we propose a survival prediction model for clinical trial completion based on the combination of denoising autoencoder (DAE) and DeepSurv models. The DAE is used to obtain a robust representation of features by breaking the loop of raw features after autoencoder training, and then the robust features are provided to DeepSurv as input for training. The clinical trial dataset for training the model was obtained from the ClinicalTrials.gov dataset. A study of clinical trial completion in pregnant women was conducted in response to the fact that many current clinical trials exclude pregnant women. The experimental results showed that the denoising autoencoder and deep survival regression (DAE‐DSR) model was able to extract meaningful and robust features for survival analysis; the C‐index of the training and test datasets were 0.74 and 0.75 respectively. Compared with the Cox proportional hazards model and DeepSurv model, the survival analysis curves obtained by using DAE‐DSR model had more prominent features, and the model was more robust and performed better in actual prediction. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
43. Full-Process Adaptive Encoding and Decoding Framework for Remote Sensing Images Based on Compression Sensing.
- Author
-
Hu, Huiling, Liu, Chunyu, Liu, Shuai, Ying, Shipeng, Wang, Chen, and Ding, Yi
- Subjects
IMAGE compression, REMOTE sensing, COMPRESSED sensing, IMAGE reconstruction, ENCODING, FEATURE extraction, IMAGE segmentation - Abstract
Faced with the problem of incompatibility between traditional information acquisition mode and spaceborne earth observation tasks, starting from the general mathematical model of compressed sensing, a theoretical model of block compressed sensing was established, and a full-process adaptive coding and decoding compressed sensing framework for remote sensing images was proposed, which includes five parts: mode selection, feature factor extraction, adaptive shape segmentation, adaptive sampling rate allocation and image reconstruction. Unlike previous semi-adaptive or local adaptive methods, the advantages of the adaptive encoding and decoding method proposed in this paper are mainly reflected in four aspects: (1) Ability to select encoding modes based on image content, and maximizing the use of the richness of the image to select appropriate sampling methods; (2) Capable of utilizing image texture details for adaptive segmentation, effectively separating complex and smooth regions; (3) Being able to detect the sparsity of encoding blocks and adaptively allocate sampling rates to fully explore the compressibility of images; (4) The reconstruction matrix can be adaptively selected based on the size of the encoding block to alleviate block artifacts caused by non-stationary characteristics of the image. Experimental results show that the method proposed in this article has good stability for remote sensing images with complex edge textures, with the peak signal-to-noise ratio and structural similarity remaining above 35 dB and 0.8. Moreover, especially for ocean images with relatively simple image content, when the sampling rate is 0.26, the peak signal-to-noise ratio reaches 50.8 dB, and the structural similarity is 0.99. In addition, the recovered images have the smallest BRISQUE value, with better clarity and less distortion. 
Subjectively, the reconstructed images have clear edge details and good reconstruction quality, while blocking artifacts are effectively suppressed. The framework designed in this paper is superior to similar algorithms in both subjective visual and objective evaluation indexes, which is of great significance for alleviating the incompatibility between traditional information acquisition methods and satellite-borne earth observation missions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
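Point (3) rests on the core compressed-sensing step: each image block x is flattened and measured as y = Φx, where the row count of the random measurement matrix Φ sets the sampling rate. A toy NumPy sketch with a uniform rate (the paper's framework allocates the rate adaptively per block from its sparsity and adds reconstruction, both omitted here):

```python
import numpy as np

def block_cs_sample(image, block=8, rate=0.25, rng=None):
    """Blockwise compressed sensing: each block x block patch x is
    flattened and measured as y = phi @ x, where phi is a Gaussian
    matrix with rate * block**2 rows (uniform rate in this sketch)."""
    rng = rng or np.random.default_rng(0)
    n = block * block
    m = max(1, int(rate * n))
    phi = rng.standard_normal((m, n)) / np.sqrt(m)
    h, w = image.shape  # assumes h and w are multiples of block
    measurements = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            x = image[i:i + block, j:j + block].reshape(-1)
            measurements.append(phi @ x)
    return phi, measurements

img = np.arange(256, dtype=float).reshape(16, 16)
phi, ys = block_cs_sample(img)
print(len(ys), ys[0].shape)  # 4 blocks, each compressed to 16 measurements
```

An adaptive allocator would simply give textured blocks a larger m and smooth blocks a smaller one, which is the compressibility the abstract refers to.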
44. FS-GDI Based Area Efficient Hamming (11, 7) Encoding.
- Author
-
El-Bendary, Mohsen A. M. and El-Badry, O.
- Subjects
HAMMING codes, TELECOMMUNICATION systems, ENCODING, DELAY lines, VIDEO coding, DIGITAL signal processing, TRANSISTORS - Abstract
This paper proposes an efficient design of a Hamming (11, 7) encoder utilising the Full-Swing Gate Diffusion Input (FS-GDI) approach at the 65 nm technology node. The proposed Hamming-code design aims to improve power and area efficiency by reducing transistor count through a power-efficient logic style. Encoding circuits for Hamming codes (11, 7) and (7, 4) are designed using both traditional and proposed approaches. Power consumption, delay time, Power Delay Product (PDP) and hardware simplicity are employed as metrics for evaluating the efficiency of the proposed encoding circuits. The simulation experiments are executed utilising the Cadence Virtuoso simulator package. These experiments revealed that the proposed Hamming encoding circuits reduce delay time by 50.91% and 20% for Hamming codes (7, 4) and (11, 7), respectively. Hardware (H/W) simplicity and area efficiency are also improved by 50% compared with CMOS-based circuits. The results analysis shows that the proposed FS-GDI-based Hamming encoding circuits achieve efficient power and delay optimisation. Hence, the power consumption, delay and area overhead that encoding introduces in communication systems and DSP circuits are reduced, and the overall DSP circuitry can be more power- and area-efficient. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
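For context, the (11, 7) code itself is the textbook shortened Hamming construction: parity bits at positions 1, 2, 4 and 8 enforce even parity over the positions whose binary index contains them, and the syndrome locates any single-bit error. A bit-level Python sketch (the paper's contribution is the FS-GDI transistor-level circuit, not this algorithm):

```python
from functools import reduce

def hamming_11_7_encode(data):
    """Systematic Hamming (11, 7): 7 data bits -> 11-bit codeword.
    Parity bits occupy positions 1, 2, 4, 8 (1-indexed); parity bit p
    gives even parity over all positions whose index has bit p set."""
    assert len(data) == 7
    code = [0] * 12  # 1-indexed; code[0] unused
    data_positions = [p for p in range(1, 12) if p not in (1, 2, 4, 8)]
    for p, bit in zip(data_positions, data):
        code[p] = bit
    for parity in (1, 2, 4, 8):
        code[parity] = sum(code[p] for p in range(1, 12) if p & parity) % 2
    return code[1:]

def syndrome(codeword):
    """XOR of the 1-indexed positions of all set bits: 0 for a valid
    codeword, otherwise the position of a single-bit error."""
    return reduce(lambda a, b: a ^ b,
                  (p for p, bit in enumerate(codeword, 1) if bit), 0)

cw = hamming_11_7_encode([1, 0, 1, 1, 0, 0, 1])
corrupted = cw[:]
corrupted[4] ^= 1  # flip the bit at position 5
print(syndrome(cw), syndrome(corrupted))  # 0 5
```

Each parity bit is a small XOR tree in hardware, which is exactly where the FS-GDI logic style saves transistors relative to static CMOS.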
45. Infrared and Visible Image Fusion Based on Res2Net-Transformer Automatic Encoding and Decoding.
- Author
-
Chunming Wu, Wukai Liu, and Xin Ma
- Subjects
IMAGE fusion, INFRARED imaging, FEATURE extraction, TRANSFORMER models, ENCODING - Abstract
A novel image fusion network framework with an autonomous encoder and decoder is suggested to increase the visual impression of fused images by improving the quality of infrared and visible light picture fusion. The network comprises an encoder module, fusion layer, decoder module, and edge improvement module. The encoder module utilizes an enhanced Inception module for shallow feature extraction, then combines Res2Net and Transformer to achieve deep-level co-extraction of local and global features from the original picture. An edge enhancement module (EEM) is created to extract significant edge features. A modal maximum difference fusion strategy is introduced to enhance the adaptive representation of information in various regions of the source image, thereby enhancing the contrast of the fused image. The encoder and the EEM module extract features, which are then combined in the fusion layer to create a fused picture using the decoder. Three datasets were chosen to test the algorithm proposed in this paper. The results of the experiments demonstrate that the network effectively preserves background and detail information in both infrared and visible images, yielding superior outcomes in subjective and objective evaluations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. Performance Analysis of New 2D Spatial OCDMA Encoding based on HG Modes in Multicore Fiber.
- Author
-
Sahraoui, Walid, Amphawan, Angela, Jasser, Muhammed Basheer, and Tse-Kian Neo
- Subjects
CODE division multiple access, CROSS correlation, ENCODING, VIDEO coding - Abstract
This paper presents a pioneering 2D spatial Optical Code-Division Multiple Access (OCDMA) encoding system that exploits Mode Division Multiplexing (MDM) and Multicore Fiber (MCF) technologies. This innovative approach utilizes two spatial dimensions to enhance the performance and security of OCDMA systems. In the first dimension, we employ Hermite-Gaussian modes (HG00, HG01, HG11) to modulate each user's signal individually. This unique approach offers a robust means of data transmission while ensuring minimal interference among users. The second dimension leverages MCF encoding, introducing two incoherent OCDMA codes: the Zero Cross Correlation (ZCC) code (λc=0) and the ZFD code (λc=1). These codes are thoughtfully designed and simulated, taking into account their cross-correlation properties to guarantee minimal interference and heightened data security. To assess the efficiency of this novel OCDMA encoding system, we implemented simulations with three active users using the OptiSystem software. At the transmitter end, each user's signal is modulated individually by their designated HG mode (HG00, HG01, HG11), resulting in separate channels. Subsequently, at the multicore fiber, each user's data is encoded with a unique codeword, and the signals are directed through specific core groups, ensuring data isolation and integrity. In this paper, the BER and eye pattern are examined with respect to different parameters such as data rate and distance. At a distance of 5 km and a data rate of 10 Gbit/s, a BER on the order of 10^-70 is achieved. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
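The λc=0 property of a ZCC code means no two users' codewords share a chip position, so their in-phase cross-correlation is zero and multiple-access interference vanishes. A toy Python check with hypothetical codewords (illustrative only, not the codewords designed in the paper):

```python
def cross_correlation(a, b):
    """In-phase cross-correlation of two binary (0/1) code sequences."""
    return sum(x & y for x, y in zip(a, b))

# A toy zero-cross-correlation (ZCC) family of weight 2: each chip
# position is used by at most one user, so lambda_c = 0 for any pair.
zcc = [
    [1, 1, 0, 0, 0, 0],  # user 1
    [0, 0, 1, 1, 0, 0],  # user 2
    [0, 0, 0, 0, 1, 1],  # user 3
]
pairs = [(i, j) for i in range(3) for j in range(3) if i != j]
print(all(cross_correlation(zcc[i], zcc[j]) == 0 for i, j in pairs))  # True
```

A ZFD-style code with λc=1 would instead allow any two codewords to overlap in at most one chip, trading a little interference for shorter code lengths.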
47. A Multi-Modal Entity Alignment Method with Inter-Modal Enhancement.
- Author
-
Yuan, Song, Lu, Zexin, Li, Qiyuan, and Gu, Jinguang
- Subjects
ENCODING - Abstract
Due to inter-modal effects hidden in multi-modalities and the impact of weak modalities on multi-modal entity alignment, a Multi-modal Entity Alignment Method with Inter-modal Enhancement (MEAIE) is proposed. This method introduces a unique modality called numerical modality in the modal aspect and applies a numerical feature encoder to encode it. In the feature embedding stage, this paper utilizes visual features to enhance entity relation representation and influence entity attribute weight distribution. Then, this paper introduces attention layers and contrastive learning to strengthen inter-modal effects and mitigate the impact of weak modalities. In order to evaluate the performance of the proposed method, experiments are conducted on three public datasets: FB15K, DB15K, and YG15K. By combining the datasets in pairs, compared with the current state-of-the-art multi-modal entity alignment models, the proposed model achieves a 2% and 3% improvement in Top-1 Hit Rate (Hit@1) and Mean Reciprocal Rank (MRR), respectively, demonstrating its feasibility and effectiveness. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
48. Hill Matrix and Radix-64 Bit Algorithm to Preserve Data Confidentiality.
- Author
-
Arshad, Ali, Nadeem, Muhammad, Riaz, Saman, Zahra, Syeda Wajiha, Dutta, Ashit Kumar, Alzaid, Zaid, Alabdan, Rana, Almutairi, Badr, and Almotairi, Sultan
- Subjects
DATA encryption, DATA security, DATA protection, ALGORITHMS, CONFIDENTIAL communications - Abstract
There are many cloud data security techniques and algorithms available that can be used to detect attacks on cloud data, but these techniques and algorithms cannot be used to protect data from an attacker. Cloud cryptography is the best way to transmit data in a secure and reliable format. Various researchers have developed various mechanisms to transfer data securely, which can convert data from readable to unreadable, but these algorithms are not sufficient to provide complete data security. Each algorithm has some data security issues. If some effective data protection techniques are used, the attacker will not be able to decipher the encrypted data, and even if the attacker tries to tamper with the data, the attacker will not have access to the original data. In this paper, various data security techniques are developed, which can be used to protect the data from attackers completely. First, a customized American Standard Code for Information Interchange (ASCII) table is developed. The value of each Index is defined in a customized ASCII table. When an attacker tries to decrypt the data, the attacker always tries to apply the predefined ASCII table on the Ciphertext, which in a way, can be helpful for the attacker to decrypt the data. After that, a radix 64-bit encryption mechanism is used, with the help of which the number of cipher data is doubled from the original data. When the number of cipher values is double the original data, the attacker tries to decrypt each value. Instead of getting the original data, the attacker gets such data that has no relation to the original data. After that, a Hill Matrix algorithm is created, with the help of which a key is generated that is used in the exact plain text for which it is created, and this Key cannot be used in any other plain text. The boundaries of each Hill text work up to that text. 
The techniques used in this paper are compared with those used in various papers, showing how far the current algorithm improves on the alternatives. Then, the Kasiski test is used to verify the validity of the proposed algorithm, finding that if the proposed algorithm is used for data encryption, an attacker cannot break its security using any technique or algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
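The pipeline described (Hill matrix encryption followed by a radix-64 pass that doubles the representation) can be sketched with a toy 2x2 key over Z26, using Base64 as the radix-64 stage. The key, the X-padding, and the Base64 choice are illustrative assumptions, not the paper's customized ASCII table or key-generation method:

```python
import base64
import numpy as np

# Toy 2x2 Hill key over Z_26; det = 3*5 - 3*2 = 9 and gcd(9, 26) = 1,
# so the matrix is invertible mod 26. (Illustrative, not the paper's key.)
KEY = np.array([[3, 3], [2, 5]])
KEY_INV = np.array([[15, 17], [20, 9]])  # KEY^(-1) mod 26

def hill_encrypt(text: str) -> str:
    """Hill-encrypt A-Z text, then apply a radix-64 (Base64) pass."""
    nums = [ord(c) - 65 for c in text.upper() if c.isalpha()]
    if len(nums) % 2:
        nums.append(ord("X") - 65)  # pad to an even number of letters
    out = []
    for i in range(0, len(nums), 2):
        out.extend(int(v) for v in (KEY @ nums[i:i + 2]) % 26)
    cipher = "".join(chr(65 + v) for v in out)
    return base64.b64encode(cipher.encode()).decode()

def hill_decrypt(token: str) -> str:
    """Undo the Base64 pass, then invert the Hill transform."""
    cipher = base64.b64decode(token).decode()
    nums = [ord(c) - 65 for c in cipher]
    out = []
    for i in range(0, len(nums), 2):
        out.extend(int(v) for v in (KEY_INV @ nums[i:i + 2]) % 26)
    return "".join(chr(65 + v) for v in out)

token = hill_encrypt("HELP")
print(token, "->", hill_decrypt(token))
```

As the abstract notes, because each Hill key is tied to the plaintext block structure it was generated for, an attacker applying a standard table to the radix-64 output recovers values unrelated to the original data.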
49. Nineteenth-century adaptations of concert music for domestic use as seen in contemporary periodicals: digital scholarship built on the foundations of IIIF, MEI and Linked Data
- Author
-
Lewis, David, Page, Kevin R, Scholger, Walter, Vogeler, Georg, Tasovac, Toma, Baillot, Anne, Raunig, Elisabeth, Scholger, Martina, Steiner, Elisabeth, Centre for Information Modelling, and Helling, Patrick
- Subjects
Paper, and methods, analysis, Musicology, annotation structures, scholarly editing and editions development, MEI, encoding, IIIF, domestic music, Linked Data, systems, Poster, and analysis, linked (open) data, music and sound digitization - Abstract
We present a study of musical arrangements of concert music for domestic performance through the lens of an English monthly music journal (The Harmonicon). The study is supported by digital annotation tooling built on IIIF, MEI and Linked Data.
- Published
- 2023
- Full Text
- View/download PDF
50. Distortion: Authority, Authenticity, and Agency in Zora Neale Hurston's Black Folk Recordings
- Author
-
Clement, Tanya, Scholger, Walter, Vogeler, Georg, Tasovac, Toma, Baillot, Anne, Raunig, Elisabeth, Scholger, Martina, Steiner, Elisabeth, Centre for Information Modelling, and Helling, Patrick
- Subjects
Paper, Sound Studies, Long Presentation, Archives, Media studies, encoding, digital libraries creation, mixed-media analysis, digital research infrastructures development and analysis, American Studies, and analysis, management, music and sound digitization, African and African American Studies - Abstract
In Digital Humanities "Distant Listening" scholarship with sound, the expectation is that the data set will be clean and audible and that its metadata will be descriptive and informative.[1] In contrast to this notion, this talk demonstrates the importance of distortions in approximately seventy-five brief "tracks" or recorded songs, stories, and explanations from a folklore recording trip Zora Neale Hurston took in 1935 to Florida and Georgia with Alan Lomax and Mary Elizabeth Barnicle for the Library of Congress. Close listening to her 1935 recordings reveals that social and technical distortions are in line with how Hurston expresses the complexities of authority, authenticity, and subjectivity in her writings, amplifying black epistemologies of self-making and creating resonant possibilities for imagining new transgressive formulations of cultural identity. This talk will consider how distortions play an important role in large-scale digital projects with sound in the humanities.
- Published
- 2023
- Full Text
- View/download PDF