271 results
Search Results
2. An Optimal Clustering Approach Applying to Asynchronous Finite-State Machine Design
- Author
Bychko, Volodymyr A., Yershov, Roman D., Bryukhovetsky, Vasyl V., Bychko, Kyrylo V., Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Kazymyr, Volodymyr, editor, Morozov, Anatoliy, editor, Palagin, Alexander, editor, Shkarlet, Serhiy, editor, Stoianov, Nikolai, editor, Vinnikov, Dmitri, editor, and Zheleznyak, Mark, editor
- Published
- 2024
- Full Text
- View/download PDF
3. Machine Printed Page Number Anomaly Detection Method Based on Multi-scale Self Attention Encoding Decoding
- Author
Shao, Xiangchao, Xiao, Xueli, Leng, Yingxiong, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Jin, Hai, editor, Pan, Yi, editor, and Lu, Jianfeng, editor
- Published
- 2024
- Full Text
- View/download PDF
4. The Random Fault Model
- Author
Dhooghe, Siemen, Nikova, Svetla, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Carlet, Claude, editor, Mandal, Kalikinkar, editor, and Rijmen, Vincent, editor
- Published
- 2024
- Full Text
- View/download PDF
5. A commentary on the NIMA paper by J. Brennan et al. on the demonstration of two-dimensional time encoded imaging of fast neutrons.
- Author
Wehe, David
- Subjects
FAST neutrons, ENCODING, ARMS control
- Published
- 2024
- Full Text
- View/download PDF
6. Enhancing LS-PIE's Optimal Latent Dimensional Identification: Latent Expansion and Latent Condensation.
- Author
Stevens, Jesse, Wilke, Daniel N., and Setshedi, Isaac I.
- Subjects
SINGULAR value decomposition, COMPACT spaces (Topology), LATENT variables, PRINCIPAL components analysis, CONDENSATION
- Abstract
The Latent Space Perspicacity and Interpretation Enhancement (LS-PIE) framework enhances dimensionality reduction methods for linear latent variable models (LVMs). This paper extends LS-PIE by introducing an optimal latent discovery strategy to automate the identification of optimal latent dimensions and projections based on user-defined metrics. The latent condensing (LCON) method clusters and condenses an extensive latent space into a compact form. A new approach, latent expansion (LEXP), incrementally increases latent dimensions using a linear LVM to find an optimal compact space. This study compares these methods across multiple datasets, including a simple toy problem, mixed signals, ECG data, and simulated vibrational data. LEXP can accelerate the discovery of optimal latent spaces and may yield different compact spaces from LCON, depending on the LVM. This paper highlights the LS-PIE algorithm's applications and compares LCON and LEXP in organising, ranking, and scoring latent components, akin to principal component analysis or singular value decomposition. This paper shows clear improvements in the interpretability of the resulting latent representations, allowing for clearer and more focused analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
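The ranking-and-scoring idea the abstract compares to PCA/SVD can be illustrated with a generic sketch. This is not the LS-PIE implementation: the metric here is plain explained variance, and all names and data are illustrative.

```python
import numpy as np

def rank_latent_components(X, n_components=None):
    """Rank latent directions of a linear model by explained variance,
    in the spirit of the PCA/SVD-style scoring the abstract describes.
    (Generic sketch, not the LS-PIE implementation.)"""
    Xc = X - X.mean(axis=0)                  # centre the data
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s**2 / (X.shape[0] - 1)            # variance captured per direction
    ratio = var / var.sum()                  # normalised score in [0, 1]
    order = np.argsort(ratio)[::-1]          # components ranked high to low
    if n_components is not None:
        order = order[:n_components]
    return order, ratio[order]

# Toy data: five latent directions with decreasing signal strength.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) @ np.diag([5.0, 2.0, 1.0, 0.5, 0.1])
order, scores = rank_latent_components(X)
```

A user-defined metric, as in LS-PIE, would replace the explained-variance `ratio` with any scoring function over the latent directions.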
7. HME-KG: A method of constructing the human motion encoding knowledge graph based on a hierarchical motion model.
- Author
Liu, Qi, Huang, Tianyu, and Li, Xiangchen
- Subjects
KNOWLEDGE graphs, MOTION capture (Human mechanics), POSTURE, ENCODING, VISUALIZATION
- Abstract
The diversity, infinity, and nonuniform description of human motion make it challenging for computers to understand human activities. To explore and reuse captured human motion data, this work defines a more comprehensive hierarchical theoretical model of human motion and proposes a standard human posture encoding scheme. We construct a domain knowledge graph (DKG) named the human motion encoding knowledge graph (HME-KG) based on posture codes and action labels. Community detection, similarity analysis, and centrality analysis are used to explore the potential value of motion data. This paper conducts an evaluation and visualization of HME-KG. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Long Short-Term Memory-Based Non-Uniform Coding Transmission Strategy for a 360-Degree Video.
- Author
Guo, Jia, Li, Chengrui, Zhu, Jinqi, Li, Xiang, Gao, Qian, Chen, Yunhe, and Feng, Weijia
- Subjects
PREDICTION models, TILES, VIDEOS, ALGORITHMS, VIDEO coding, ENCODING
- Abstract
This paper studies an LSTM-based adaptive transmission method for 360-degree video and proposes a non-uniform encoding transmission strategy based on LSTM. Our goal is to maximize the user's video experience by dynamically dividing the 360-degree video into tiles of different numbers and sizes and selecting a different bitrate for each tile, with the aim of reducing buffering events and video jitter. To determine the optimal number and size of tiles at the current moment, we constructed a dual-layer stacked LSTM network model. This model predicts, in real time, the number, size, and bitrate of the tiles needed for the next moment of the 360-degree video based on the distance between the user's eyes and the screen. In our experiments, we used an exhaustive algorithm to calculate the optimal tile division and bitrate selection scheme for a 360-degree video under different network conditions, and used this dataset to train our prediction model. Finally, by comparing with other advanced algorithms, we demonstrated the superiority of our proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Fd-CasBGRel: A Joint Entity–Relationship Extraction Model for Aquatic Disease Domains.
- Author
Ye, Hongbao, Lv, Lijian, Zhou, Chengquan, and Sun, Dawei
- Subjects
KNOWLEDGE graphs, CORPORA, WEBSITES, GENERALIZATION, ENCODING
- Abstract
Featured Application: The model is primarily utilized for the task of entity relationship extraction during the construction of an aquatic disease knowledge graph. Entity–relationship extraction plays a pivotal role in the construction of domain knowledge graphs. For the aquatic disease domain, however, relationship extraction is a formidable task because of overlapping relationships, data specialization, limited feature fusion, and imbalanced data samples, which significantly weaken extraction performance. To tackle these challenges, this study leverages published books and aquatic disease websites as data sources to compile a text corpus, establish datasets, and then propose the Fd-CasBGRel model specifically tailored to the aquatic disease domain. The model uses the Casrel cascading binary tagging framework to address relationship overlap; utilizes task fine-tuning for better performance on aquatic disease data; trains on specialized aquatic disease corpora to improve adaptability; and integrates the BRC feature fusion module, which incorporates self-attention mechanisms, BiLSTM, relative position encoding, and conditional layer normalization, to leverage entity position and context for enhanced fusion. Further, it replaces the traditional cross-entropy loss function with the GHM loss function to mitigate category imbalance. The experimental results indicate that the F1 score of Fd-CasBGRel on the aquatic disease dataset reached 84.71%, significantly outperforming several benchmark models. The model effectively addresses the low performance of triple extraction caused by high data specialization, insufficient feature integration, and data imbalance, and it achieved the highest F1 score, 86.52%, on the overlapping-relationship category dataset, demonstrating its robust capability in extracting overlapping data. We also conducted comparative experiments on the publicly available WebNLG dataset, where our model obtained the best performance metrics among the compared models, indicating good generalization ability. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Binding in Najdi Arabic: Types of Reflexives, the Argument Structure of Reflexive Constructions and Possessive Reflexives.
- Author
Alowayed, Asma I. and Albaty, Yasser A.
- Subjects
ARGUMENT, REFLEXIVITY, ENCODING, SYNTAX (Grammar)
- Abstract
The present paper investigates reflexives in Najdi Arabic (NA). We start by examining how the encoding of reflexivity in NA can be attained lexically, morphologically, and syntactically. We also investigate the argument structure of reflexive constructions in NA in accordance with Reinhart and Siloni’s (2005) bundling approach. Finally, possessive reflexives and their cross-linguistic distribution with definiteness marking are examined, providing empirical coverage to this area in NA. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. A multi-scale residual encoding network for concrete crack segmentation.
- Author
Liu, Die, Xu, MengDie, Li, ZhiTing, He, Yingying, Zheng, Long, Xue, Pengpeng, and Wu, Xiaodong
- Subjects
CRACKING of concrete, LINEAR network coding, SURFACE cracks, ENCODING
- Abstract
Concrete surface crack detection plays a crucial role in ensuring concrete safety. However, manual crack detection is time-consuming, necessitating the development of an automatic method to streamline the process. Nonetheless, detecting concrete cracks automatically remains challenging due to the heterogeneous strength of cracks and the complex background. To address this issue, we propose a multi-scale residual encoding network for concrete crack segmentation. This network leverages the U-NET basic network structure to merge feature maps from different levels into low-level features, thus enhancing the utilization of predicted feature maps. The primary contribution of this research is the enhancement of the U-NET coding network through the incorporation of a residual structure. This modification improves the coding network's ability to extract features related to small cracks. Furthermore, an attention mechanism is utilized within the network to enhance the perceptual field information of the crack feature map. The integration of this mechanism enhances the accuracy of crack detection across various scales. Furthermore, we introduce a specially designed loss function tailored to crack datasets to tackle the problem of imbalanced positive and negative samples in concrete crack images caused by data imbalance. This loss function helps improve the prediction accuracy of crack pixels. To demonstrate the superiority and universality of our proposed method, we conducted a comparative evaluation against state-of-the-art edge detection and semantic segmentation methods using a standardized evaluation approach. Experimental results on the SDNET2018 dataset demonstrate the effectiveness of our method, achieving mIOU, F1-score, Precision, and Recall scores of 0.862, 0.941, 0.945, and 0.9394, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. DNA encoding schemes herald a new age in cybersecurity for safeguarding digital assets.
- Author
Aqeel, Sehrish, Khan, Sajid Ullah, Khan, Adnan Shahid, Alharbi, Meshal, Shah, Sajid, Affendi, Mohammed EL, and Ahmad, Naveed
- Subjects
ARTIFICIAL chromosomes, DNA, INTERNET security, ENCODING, ASSETS (Accounting)
- Abstract
As the need to secure and protect digital assets grows, there is a corresponding urgency to take measures that ensure robust cyber security. Advanced methods such as conventional encryption schemes remain vulnerable to attack. DNA encoding schemes offer synthetic DNA sequences as a promising alternative for encoding digital data, exploiting DNA's unique properties, such as stability and durability. This study explores DNA's potential for encoding in evolving cyber security. Based on a systematic literature review, this paper discusses the challenges, advantages, and directions for future work. We analyzed current trends and methodological innovations, security attacks, tool implementations, and evaluation metrics. Various tools, such as Mathematica, MATLAB, the NIST test suite, and CloudSim, were employed to evaluate the performance of the proposed methods and obtain results. By identifying the strengths and limitations of proposed methods, the study highlights research challenges and offers scope for future investigation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. A clinical trial termination prediction model based on denoising autoencoder and deep survival regression.
- Author
Qi, Huamei, Yang, Wenhui, Zou, Wenqin, and Hu, Yuxuan
- Subjects
SIGNAL denoising, PREDICTION models, REGRESSION analysis, ENCODING, PREGNANT women
- Abstract
Effective clinical trials are necessary for understanding medical advances, but early termination of trials can result in unnecessary waste of resources. Survival models can be used to predict survival probabilities in such trials. However, survival data from clinical trials are sparse, and DeepSurv cannot accurately capture their effective features, making the models weak in generalization and decreasing their prediction accuracy. In this paper, we propose a survival prediction model for clinical trial completion based on the combination of denoising autoencoder (DAE) and DeepSurv models. The DAE is used to obtain a robust representation of features by breaking the loop of raw features after autoencoder training, and the robust features are then provided to DeepSurv as input for training. The clinical trial dataset for training the model was obtained from ClinicalTrials.gov. A study of clinical trial completion in pregnant women was conducted in response to the fact that many current clinical trials exclude pregnant women. The experimental results showed that the denoising autoencoder and deep survival regression (DAE-DSR) model was able to extract meaningful and robust features for survival analysis; the C-indices of the training and test datasets were 0.74 and 0.75, respectively. Compared with the Cox proportional hazards model and the DeepSurv model, the survival analysis curves obtained using the DAE-DSR model had more prominent features, and the model was more robust and performed better in actual prediction. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Full-Process Adaptive Encoding and Decoding Framework for Remote Sensing Images Based on Compression Sensing.
- Author
Hu, Huiling, Liu, Chunyu, Liu, Shuai, Ying, Shipeng, Wang, Chen, and Ding, Yi
- Subjects
IMAGE compression, REMOTE sensing, COMPRESSED sensing, IMAGE reconstruction, ENCODING, FEATURE extraction, IMAGE segmentation
- Abstract
Faced with the problem of incompatibility between traditional information acquisition mode and spaceborne earth observation tasks, starting from the general mathematical model of compressed sensing, a theoretical model of block compressed sensing was established, and a full-process adaptive coding and decoding compressed sensing framework for remote sensing images was proposed, which includes five parts: mode selection, feature factor extraction, adaptive shape segmentation, adaptive sampling rate allocation and image reconstruction. Unlike previous semi-adaptive or local adaptive methods, the advantages of the adaptive encoding and decoding method proposed in this paper are mainly reflected in four aspects: (1) Ability to select encoding modes based on image content, and maximizing the use of the richness of the image to select appropriate sampling methods; (2) Capable of utilizing image texture details for adaptive segmentation, effectively separating complex and smooth regions; (3) Being able to detect the sparsity of encoding blocks and adaptively allocate sampling rates to fully explore the compressibility of images; (4) The reconstruction matrix can be adaptively selected based on the size of the encoding block to alleviate block artifacts caused by non-stationary characteristics of the image. Experimental results show that the method proposed in this article has good stability for remote sensing images with complex edge textures, with the peak signal-to-noise ratio and structural similarity remaining above 35 dB and 0.8. Moreover, especially for ocean images with relatively simple image content, when the sampling rate is 0.26, the peak signal-to-noise ratio reaches 50.8 dB, and the structural similarity is 0.99. In addition, the recovered images have the smallest BRISQUE value, with better clarity and less distortion. 
Subjectively, the reconstructed images have clear edge details and good reconstruction quality, while blocking artifacts are effectively suppressed. The framework designed in this paper is superior to similar algorithms in both subjective visual quality and objective evaluation indexes, which is of great significance for alleviating the incompatibility between traditional information acquisition methods and satellite-borne earth observation missions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
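As a rough illustration of the block compressed sensing pipeline described above (mode selection and reconstruction omitted): the signal is split into blocks, each block gets a sampling rate driven by its content, and each block is measured with a random matrix. The variance-based rate allocation below is a crude stand-in for the paper's sparsity-driven adaptive allocation, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def block_cs_measure(x, block_size=16, base_rate=0.3):
    """Block compressed sensing sketch: per-block adaptive sampling rates
    (proportional to block variance, clipped to [0.1, 1.0]) and random
    Gaussian measurement y = Phi @ x for each block."""
    blocks = x.reshape(-1, block_size)
    var = blocks.var(axis=1) + 1e-12
    rates = np.clip(base_rate * var / var.mean(), 0.1, 1.0)
    measurements = []
    for blk, r in zip(blocks, rates):
        m = max(1, int(round(r * block_size)))   # samples for this block
        phi = rng.normal(size=(m, block_size))   # random measurement matrix
        measurements.append(phi @ blk)
    return measurements, rates

# Smooth (zero) region followed by a textured (sinusoidal) region:
x = np.concatenate([np.zeros(32), np.sin(np.linspace(0, 8, 32))])
meas, rates = block_cs_measure(x)
```

Smooth blocks receive the floor rate while textured blocks are sampled more densely, which is the intuition behind adaptive rate allocation.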
15. FS-GDI Based Area Efficient Hamming (11, 7) Encoding.
- Author
El-Bendary, Mohsen A. M. and El-Badry, O.
- Subjects
HAMMING codes, TELECOMMUNICATION systems, ENCODING, DELAY lines, VIDEO coding, DIGITAL signal processing, TRANSISTORS
- Abstract
This paper proposes an efficient design of a Hamming (11, 7) encoder utilising the Full Swing Gate Diffusion Input (FS-GDI) approach at a 65 nm technology node. The proposed Hamming code design aims to improve power and area efficiency by reducing the transistor count through a power-efficient logic style. Encoding circuits for the Hamming (11, 7) and (7, 4) codes are designed using both traditional and proposed approaches. Power consumption, delay time, Power Delay Product (PDP), and hardware simplicity are employed as metrics for evaluating the efficiency of the proposed encoding circuits. The simulation experiments are executed using the Cadence Virtuoso simulator package. These experiments revealed that the proposed Hamming encoding circuits reduce delay time by 50.91% and 20% for the Hamming (7, 4) and (11, 7) codes, respectively. Hardware (H/W) simplicity and area efficiency are also improved by 50% compared to CMOS-based circuits. The results analysis shows that the proposed FS-GDI-based Hamming encoding circuits achieve efficient power and delay optimisation. Hence, the power consumption, delay, and area attributable to encoding in communication systems and DSP circuits are reduced, and the overall performance of DSP circuits can be made more power- and area-efficient. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
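The paper's contribution is the transistor-level FS-GDI circuit; the logic such a circuit implements is the standard Hamming construction. A bit-level sketch of the smaller Hamming (7, 4) code named in the abstract (not the authors' circuit design):

```python
def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_syndrome(c):
    """Non-zero syndrome gives the 1-based position of a single bit error."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    return s3 * 4 + s2 * 2 + s1

codeword = hamming74_encode([1, 0, 1, 1])
assert hamming74_syndrome(codeword) == 0   # clean codeword: zero syndrome
corrupted = codeword[:]
corrupted[4] ^= 1                          # flip the bit at position 5
assert hamming74_syndrome(corrupted) == 5  # syndrome locates the flip
```

The (11, 7) code of the paper follows the same pattern with four parity bits covering seven data bits; the FS-GDI work optimises how these XOR trees are realised in silicon.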
16. Infrared and Visible Image Fusion Based on Res2Net-Transformer Automatic Encoding and Decoding.
- Author
Chunming Wu, Wukai Liu, and Xin Ma
- Subjects
IMAGE fusion, INFRARED imaging, FEATURE extraction, TRANSFORMER models, ENCODING
- Abstract
A novel image fusion network framework with an autonomous encoder and decoder is proposed to improve the visual impression of fused images by improving the quality of infrared and visible light image fusion. The network comprises an encoder module, fusion layer, decoder module, and edge improvement module. The encoder module utilizes an enhanced Inception module for shallow feature extraction, then combines Res2Net and Transformer to achieve deep-level co-extraction of local and global features from the original image. An edge enhancement module (EEM) is created to extract significant edge features. A modal maximum difference fusion strategy is introduced to enhance the adaptive representation of information in various regions of the source image, thereby enhancing the contrast of the fused image. The encoder and the EEM extract features, which are then combined in the fusion layer to create a fused image using the decoder. Three datasets were chosen to test the algorithm proposed in this paper. The experimental results demonstrate that the network effectively preserves background and detail information in both infrared and visible images, yielding superior outcomes in subjective and objective evaluations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. The Relationship between Notetaking, Revision, and Learning in Tertiary Education: A Review of Literature, 1970–2023.
- Author
Carroll, Kathleen
- Subjects
LITERATURE reviews, NOTETAKING, EDUCATIONAL literature, POSTSECONDARY education, COGNITIVE ability, READING comprehension
- Abstract
The aim of this paper is to highlight the complexity and the central importance to academic achievement of taking and reviewing notes at third level. It is based on a review of international literature on the notetaking process between 1970 and 2023. The paper describes notetaking and reviewing as the method of encoding and externally storing new material, for the purpose of advancement in learning and attainment in assessment. It outlines research on the benefits of typed versus handwritten methods of notetaking. The overriding outcome demonstrates that taking notes, either by longhand or by typing, produces better results than not taking and reviewing notes. The remainder of the review focuses on the status of notetaking instruction in third-level colleges and universities. It is observed that despite the centrality of notetaking to educational success, and the positive impact of instruction on taking notes, skills training and modelling are generally not taught or embedded in the curricula in tertiary education. Furthermore, the paper describes teaching strategies alongside linear and non-linear notetaking methods that have been shown to encourage students to take and revise notes, which has, in turn, led to the enhancement of learning. The conclusion reviews the main points of the article and its limitations. A further review of literature examining cognitive and metacognitive functions in notetaking would contribute to the understanding of how notetaking and revision operate to increase students' capacity for recall, comprehension, and knowledge. [ABSTRACT FROM AUTHOR]
- Published
- 2024
18. Data encoding for healthcare data democratization and information leakage prevention.
- Author
Thakur, Anshul, Zhu, Tingting, Abrol, Vinayak, Armstrong, Jacob, Wang, Yujiang, and Clifton, David A.
- Subjects
DEEP learning, DEMOCRATIZATION, ENCODING, LEAKAGE, MEDICAL care
- Abstract
The lack of data democratization and information leakage from trained models hinder the development and acceptance of robust deep learning-based healthcare solutions. This paper argues that irreversible data encoding can provide an effective solution to achieve data democratization without violating the privacy constraints imposed on healthcare data and clinical models. An ideal encoding framework transforms the data into a new space where it is imperceptible to manual or computational inspection. However, encoded data should preserve the semantics of the original data such that deep learning models can be trained effectively. This paper hypothesizes the characteristics of the desired encoding framework and then exploits random projections and random quantum encoding to realize this framework for dense and longitudinal or time-series data. Experimental evaluation highlights that models trained on encoded time-series data effectively uphold the information bottleneck principle and hence exhibit less information leakage from trained models. Healthcare data democratization is often hampered by privacy constraints governing the sensitive healthcare data. Here, the authors show that encoding healthcare data could be a potential solution for achieving healthcare democratization within the context of deep learning. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
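The random-projection half of the encoding framework described above can be sketched as follows (the quantum encoding is omitted; the dimensions are illustrative, and this is a generic sketch, not the authors' implementation). A fixed random matrix maps records into a new space: without the matrix the original features are not directly readable, yet pairwise geometry is approximately preserved, so models can still train on the encoded data.

```python
import numpy as np

rng = np.random.default_rng(7)
d_in, d_out = 64, 48
# Fixed "secret" projection; scaling keeps distances roughly unchanged.
R = rng.normal(size=(d_out, d_in)) / np.sqrt(d_out)

def encode(x):
    # Irreversible when R is withheld and d_out < d_in (rank-deficient map).
    return R @ x

a = rng.normal(size=d_in)
b = rng.normal(size=d_in)
# Johnson-Lindenstrauss-style distance preservation (approximate):
orig = np.linalg.norm(a - b)
enc = np.linalg.norm(encode(a) - encode(b))
```

Distance between encoded records stays close to the original distance, which is the property that lets downstream models learn from encoded data.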
19. Self-Bilinear Map from One Way Encoding System and iO.
- Author
Zhang, Huang, Huang, Ting, Zhang, Fangguo, Wei, Baodian, and Du, Yusong
- Subjects
CYCLIC groups, CONCRETE construction, KEY agreement protocols (Computer network protocols), ENCODING
- Abstract
A bilinear map whose domain and target sets are identical is called a self-bilinear map. Original self-bilinear maps are defined over cyclic groups. Since the map itself reveals information about the underlying cyclic group, the decisional Diffie–Hellman (DDH) problem and the computational Diffie–Hellman (CDH) problem may be solved easily in some specific groups. This imposes many limitations on constructing secure self-bilinear schemes. As a compromise, a self-bilinear map with auxiliary information was proposed at CRYPTO 2014. In this paper, we construct this weak variant of a self-bilinear map from generic sets and indistinguishability obfuscation. These sets must satisfy several properties, which are summarized by a new notion, the One Way Encoding System (OWES). A new Encoding Division Problem (EDP) is defined to complete the security proof. An OWES can be built using one level of a graded encoding system (GES). To construct a concrete self-bilinear map scheme, the Garg–Gentry–Halevi (GGH13) GES is adopted in our work. Even though the security of GGH13 was recently broken by Hu et al., their algorithm does not threaten our applications. At the end of this paper, some further considerations on the EDP for the concrete construction are given to improve the confidence that the EDP is indeed hard. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Asymmetric solid burst correcting integer codes.
- Author
Das, Pankaj Kumar and Pokhrel, Nabin Kumar
- Subjects
PROBABILITY theory, MEMORY, NOISE, ENCODING, MOTIVATION (Psychology)
- Abstract
With the development of technology, communication channels increasingly experience burst faults of various forms caused by noise. To get around this, an appropriate encoding and decoding mechanism should be designed while taking into account factors such as redundancy, memory usage, and efficiency. Motivated by these facts, in this paper we present a class of integer codes capable of correcting asymmetric solid burst errors. In addition to the theoretical foundations, the paper also derives expressions for the probabilities of incorrect and correct decoding for the proposed codes. Lastly, we compare the proposed codes with other similar codes in terms of code rate and memory consumption. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Efficient DNA-based data storage using shortmer combinatorial encoding.
- Author
Preuss, Inbal, Rosenberg, Michael, Yakhini, Zohar, and Anavy, Leon
- Subjects
DATA warehousing, COMBINATORIAL chemistry, DIGITAL technology, ENCODING, RESEARCH questions, DNA synthesis
- Abstract
Data storage in DNA has recently emerged as a promising archival solution, offering space-efficient and long-lasting digital storage solutions. Recent studies suggest leveraging the inherent redundancy of synthesis and sequencing technologies by using composite DNA alphabets. A major challenge of this approach involves the noisy inference process, obstructing large composite alphabets. This paper introduces a novel approach for DNA-based data storage, offering, in some implementations, a 6.5-fold increase in logical density over standard DNA-based storage systems, with near-zero reconstruction error. Combinatorial DNA encoding uses a set of clearly distinguishable DNA shortmers to construct large combinatorial alphabets, where each letter consists of a subset of shortmers. We formally define various combinatorial encoding schemes and investigate their theoretical properties. These include information density and reconstruction probabilities, as well as required synthesis and sequencing multiplicities. We then propose an end-to-end design for a combinatorial DNA-based data storage system, including encoding schemes, two-dimensional (2D) error correction codes, and reconstruction algorithms, under different error regimes. We performed simulations and show, for example, that the use of 2D Reed-Solomon error correction has significantly improved reconstruction rates. We validated our approach by constructing two combinatorial sequences using Gibson assembly, imitating a 4-cycle combinatorial synthesis process. We confirmed the successful reconstruction, and established the robustness of our approach for different error types. Subsampling experiments supported the important role of sampling rate and its effect on the overall performance. Our work demonstrates the potential of combinatorial shortmer encoding for DNA-based data storage and describes some theoretical research questions and technical challenges. 
Combining combinatorial principles with error-correcting strategies, and investing in the development of DNA synthesis technologies that efficiently support combinatorial synthesis, can pave the way to efficient, error-resilient DNA-based storage solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
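The counting argument behind combinatorial shortmer encoding can be made concrete: if each combinatorial letter is a k-subset of n distinguishable shortmers, the alphabet has C(n, k) letters and each letter carries log2(C(n, k)) bits. A short sketch (the palette sizes are illustrative, not the paper's design parameters):

```python
from math import comb, log2

def combinatorial_bits(n, k):
    """Bits of information carried by one k-of-n combinatorial letter."""
    return log2(comb(n, k))

# A plain 4-letter nucleotide alphabet carries 2 bits per position;
# choosing 3 shortmers out of a palette of 16 already carries far more
# information per combinatorial letter.
plain = log2(4)                          # 2 bits per nucleotide
combinatorial = combinatorial_bits(16, 3)  # log2(560) ~ 9.13 bits per letter
```

The paper's reported density gain additionally accounts for shortmer length and the synthesis/sequencing multiplicities needed for reliable inference, which this sketch ignores.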
22. Knowledge and separating soft verbalizer based prompt-tuning for multi-label short text classification.
- Author
Chen, Zhanwang, Li, Peipei, and Hu, Xuegang
- Subjects
LANGUAGE models, FAILURE (Psychology), LEARNING ability, CLASSIFICATION, ENCODING
- Abstract
Multi-label Short Text Classification (MSTC) is a challenging subtask of Multi-Label Text Classification (MLTC) that tags a short text with the most relevant subset of labels from a given label set. Recent studies have attempted to address the MSTC task using MLTC methods and fine-tuning approaches based on Pre-trained Language Models (PLMs), but they suffer from low performance for three reasons: 1) failure to address the data sparsity of short texts; 2) lack of adaptation to the long-tail distribution of labels in multi-label scenarios; and 3) an implicit limitation on encoding length in PLMs, which restricts the prompt learning paradigm. Therefore, in this paper we propose KSSVPT, a Knowledge and Separating Soft Verbalizer based Prompt Tuning method for MSTC, to address the above challenges. Firstly, to mitigate the sparsity issue in short texts, we propose a novel approach that enhances the semantic information of short texts by integrating external knowledge into the soft prompt template. Secondly, we construct a new soft prompt verbalizer for MSTC, called the separating soft prompt verbalizer, to adapt to the long-tail distribution issue aggravated by multiple labels. Thirdly, we propose a label cluster grouping mechanism for building the prompt template to directly alleviate the limited encoding length and capture label correlations. Extensive experiments on six benchmark datasets demonstrate the superiority of our model over all competing MLTC and MSTC models on the MSTC task. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. CGJO: a novel complex-valued encoding golden jackal optimization.
- Author
Zhang, Jinzhong, Zhang, Gang, Kong, Min, Zhang, Tan, and Wang, Duansong
- Subjects
OPTIMIZATION algorithms, ENCODING, ENGINEERING design, INFORMATION sharing
- Abstract
Golden jackal optimization (GJO) is inspired by the everyday characteristics and collaborative hunting behaviour of golden jackals, mimicking foraging, trespassing and encompassing, and capturing prey to refresh a jackal's position. However, GJO has several limitations, such as a slow convergence rate, low computational accuracy, premature convergence, poor solution efficiency, and weak exploration and exploitation. To enhance the global detection ability and solution accuracy, this paper proposes a novel complex-valued encoding golden jackal optimization (CGJO) for function optimization and engineering design. The complex-valued encoding strategy deploys a dual-diploid organization to encode the real and imaginary portions of the golden jackal and converts the dual-dimensional encoding region to the single-dimensional manifestation region, which increases population diversity, restricts search stagnation, expands the exploration area, promotes information exchange, fosters collaboration efficiency, and improves convergence accuracy. CGJO not only exhibits strong adaptability and robustness, achieving complementary advantages and enhanced optimization efficiency, but also balances global exploration and local exploitation to promote computational precision and determine the best solution. The CEC 2022 test suite and six real-world engineering designs are utilized to evaluate the effectiveness and feasibility of CGJO. CGJO is compared with three categories of existing optimization algorithms: (1) WO, HO, NRBO and BKA, which are recently published algorithms; (2) SCSO, GJO, RGJO and SGJO, which are highly cited algorithms; and (3) L-SHADE, LSHADE-EpsSin and CMA-ES, which are high-performing algorithms. The experimental results reveal that CGJO is more effective and feasible than the other algorithms, achieving a quicker convergence rate, greater computational precision, and greater stability and robustness. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
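The dual-part (real/imaginary) encoding described above can be sketched as follows. The modulus-based decoding rule and the bounds mapping below are illustrative assumptions for exposition, not the paper's exact CGJO update equations:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_complex_population(n, dim, rho_max=1.0):
    """Encode each individual as separate real and imaginary parts (sketch)."""
    real = rng.uniform(-rho_max, rho_max, (n, dim))
    imag = rng.uniform(-rho_max, rho_max, (n, dim))
    return real, imag

def decode(real, imag, lo, hi, rho_max=1.0):
    """Convert the two-part encoding to one real value per variable via the
    modulus, scaled into the variable bounds [lo, hi] (assumed mapping)."""
    rho = np.clip(np.hypot(real, imag), 0.0, rho_max)  # modulus of each gene
    return lo + (hi - lo) * rho / rho_max
```

Evolving the two parts independently while evaluating fitness on the decoded real vector is what lets the scheme enlarge the effective search region.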
24. Real-Time Dense Visual SLAM with Neural Factor Representation.
- Author
-
Wei, Weifeng, Wang, Jie, Xie, Xiaolong, Liu, Jie, and Su, Pengxiang
- Subjects
COMPUTER vision ,VISUAL fields ,SPEED ,ENCODING ,GEOMETRY - Abstract
Developing a high-quality, real-time, dense visual SLAM system poses a significant challenge in the field of computer vision. NeRF introduces neural implicit representation, marking a notable advancement in visual SLAM research. However, existing neural implicit SLAM methods suffer from long runtimes and face challenges when modeling complex structures in scenes. In this paper, we propose a neural implicit dense visual SLAM method that enables high-quality real-time reconstruction even on a desktop PC. Firstly, we propose a novel neural scene representation, encoding the geometry and appearance information of the scene as a combination of the basis and coefficient factors. This representation allows for efficient memory usage and the accurate modeling of high-frequency detail regions. Secondly, we introduce feature integration rendering to significantly improve rendering speed while maintaining the quality of color rendering. Extensive experiments on synthetic and real-world datasets demonstrate that our method achieves an average improvement of more than 60% for Depth L1 and ATE RMSE compared to existing state-of-the-art methods when running at 9.8 Hz on a desktop PC with a 3.20 GHz Intel Core i9-12900K CPU and a single NVIDIA RTX 3090 GPU. This remarkable advancement highlights the crucial importance of our approach in the field of dense visual SLAM. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Stage-Aware Interaction Network for Point Cloud Completion.
- Author
-
Wu, Hang and Miao, Yubin
- Subjects
POINT cloud ,DEEP learning ,NETWORK performance ,SEMANTICS ,ENCODING - Abstract
Point cloud completion aims to restore full shapes of objects from partial scans, and a typical network pipeline is AutoEncoder, which has coarse-to-fine refinement modules. Although existing approaches using this kind of architecture achieve promising results, they usually neglect the usage of shallow geometry features in partial inputs and the fusion of multi-stage features in the upsampling process, which prevents network performances from further improving. Therefore, in this paper, we propose a new method with dense interactions between different encoding and decoding steps. First, we introduce the Decoupled Multi-head Transformer (DMT), which implements and integrates semantic prediction and resolution upsampling in a unified network module, which serves as a primary ingredient in our pipeline. Second, we propose an Encoding-aware Coarse Decoder (ECD) that compactly makes the top–down shape-decoding process interact with the bottom–up feature-encoding process to utilize both shallow and deep features of partial inputs for coarse point cloud generation. Third, we design a Stage-aware Refinement Group (SRG), which comprehensively understands local semantics from densely connected features across different decoding stages and gradually upsamples point clouds based on them. In general, the key contributions of our method are the DMT for joint semantic-resolution generation, the ECD for multi-scale feature fusion-based shape decoding, and the SRG for stage-aware shape refinement. Evaluations on two synthetic and three real-world datasets illustrate that our method achieves competitive performances compared with existing approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. A Multi-Hop Reasoning Knowledge Selection Module for Dialogue Generation.
- Author
-
Ma, Zhiqiang, Liu, Jia, Xu, Biqi, Lv, Kai, and Guo, Siyuan
- Subjects
KNOWLEDGE representation (Information theory) ,SUBGRAPHS ,ENCODING - Abstract
Knowledge selection plays a crucial role in knowledge-driven dialogue generation methods, directly influencing the accuracy, relevance, and coherence of generated responses. Existing research often overlooks the handling of disparities between dialogue statements and external knowledge, leading to inappropriate knowledge representation in dialogue generation. To overcome this limitation, this paper proposes an innovative Multi-hop Reasoning Knowledge Selection Module (KMRKSM). Initially, multi-relational graphs containing rich composite operations are encoded to capture graph-aware representations of concepts and relationships. Subsequently, the multi-hop reasoning module dynamically infers along multiple relational paths, aggregating triple evidence to generate knowledge subgraphs closely related to dialogue history. Finally, these generated knowledge subgraphs are combined with dialogue history features and synthesized into comprehensive knowledge features by a decoder. Through automated and manual evaluations, the exceptional performance of KMRKSM in selecting appropriate knowledge is validated. This module efficiently selects knowledge matching the dialogue context through multi-hop reasoning, significantly enhancing the appropriateness of knowledge representation and providing technical support for achieving more natural and human-like dialogue systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Impact of Video Motion Content on HEVC Coding Efficiency.
- Author
-
Salih, Khalid A. M., Ali, Ismail Amin, and Mstafa, Ramadhan J.
- Subjects
DIGITAL video ,RANGE of motion of joints ,VIDEO compression ,VIDEO coding ,VIDEOS ,ENCODING - Abstract
Digital video coding aims to reduce the bitrate and keep the integrity of visual presentation. High-Efficiency Video Coding (HEVC) can effectively compress video content to be suitable for delivery over various networks and platforms. Finding the optimal coding configuration is challenging as the compression performance highly depends on the complexity of the encoded video sequence. This paper evaluates the effects of motion content on coding performance and suggests an adaptive encoding scheme based on the motion content of encoded video. To evaluate the effects of motion content on the compression performance of HEVC, we tested three coding configurations with different Group of Pictures (GOP) structures and intra refresh mechanisms. Namely, open GOP IPPP, open GOP Periodic-I, and closed GOP periodic-IDR coding structures were tested using several test sequences with a range of resolutions and motion activity. All sequences were first tested to check their motion activity. The rate–distortion curves were produced for all the test sequences and coding configurations. Our results show that the performance of IPPP coding configuration is significantly better (up to 4 dB) than periodic-I and periodic-IDR configurations for sequences with low motion activity. For test sequences with intermediate motion activity, IPPP configuration can still achieve a reasonable quality improvement over periodic-I and periodic-IDR configurations. However, for test sequences with high motion activity, IPPP configuration has a very small performance advantage over periodic-I and periodic-IDR configurations. Our results indicate the importance of selecting the appropriate coding structure according to the motion activity of the video being encoded. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
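A minimal sketch of the adaptive encoding scheme the abstract suggests, with hypothetical motion-activity thresholds (the record quotes no numeric cut-offs, so the values here are assumptions):

```python
def pick_gop_structure(motion_activity):
    """Choose an HEVC coding structure from measured motion activity.

    Per the paper's finding: IPPP gains up to ~4 dB on low-motion content,
    a reasonable amount on intermediate motion, and almost nothing on high
    motion. The 0.7 threshold below is purely illustrative.
    """
    if motion_activity < 0.7:           # low/intermediate motion
        return "open-GOP IPPP"
    return "closed-GOP periodic-IDR"    # high motion: intra refresh costs little
```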
28. FPO++: efficient encoding and rendering of dynamic neural radiance fields by analyzing and enhancing Fourier PlenOctrees.
- Author
-
Rabich, Saskia, Stotko, Patrick, and Klein, Reinhard
- Subjects
RADIANCE ,ARCHAEOLOGY methodology ,ENCODING ,TRANSFER functions ,CHARACTERISTIC functions - Abstract
Fourier PlenOctrees have been shown to be an efficient representation for real-time rendering of dynamic neural radiance fields (NeRF). Despite its many advantages, this method suffers from artifacts introduced by the involved compression when combining it with recent state-of-the-art techniques for training the static per-frame NeRF models. In this paper, we perform an in-depth analysis of these artifacts and leverage the resulting insights to propose an improved representation. In particular, we present a novel density encoding that adapts the Fourier-based compression to the characteristics of the transfer function used by the underlying volume rendering procedure and leads to a substantial reduction of artifacts in the dynamic model. We demonstrate the effectiveness of our enhanced Fourier PlenOctrees in the scope of quantitative and qualitative evaluations on synthetic and real-world scenes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Encoding, Consolidation, and Retrieval of Conditioned Fear Extinction Memory and Their Intervention.
- Author
-
黄益霞, 王金霞, and 雷 怡
- Subjects
RECOLLECTION (Psychology) ,ORAL drug administration ,NEURAL circuitry ,PREFRONTAL cortex ,EXTINCTION (Psychology) ,ANXIETY disorders - Abstract
Copyright of Psychological Science is the property of Psychological Science Editorial Office and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
30. Reaction mining for reaction systems.
- Author
-
Męski, Artur, Koutny, Maciej, Mikulski, Łukasz, and Penczek, Wojciech
- Subjects
- *
DISCRETE systems , *PROBLEM solving , *LOGIC , *ENCODING , *POSSIBILITY - Abstract
Reaction systems are a formal model for computational processing in which reactions operate on sets of entities (molecules), providing a framework for dealing with qualitative aspects of biochemical systems. This paper is concerned with reaction systems in which entities can have discrete concentrations, and so reactions operate on multisets rather than sets of entities. The resulting framework allows one to deal with quantitative aspects of reaction systems, and a bespoke linear-time temporal logic allows one to express and verify a wide range of key behavioural system properties. In practical applications, a reaction system with discrete concentrations may only be partially specified, and the possibility of an effective automated calculation of the missing details provides an attractive design approach. With this idea in mind, the current paper discusses parametric reaction systems with parameters representing unknown parts of hypothetical reactions. The main result is a method aimed at replacing the parameters in such a way that the resulting reaction system operating in a specified external environment satisfies a given temporal logic formula. This paper provides an encoding of parametric reaction systems in SMT, and outlines a synthesis procedure based on bounded model checking for solving the synthesis problem. It also reports on initial experimental results demonstrating the feasibility of the novel synthesis method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
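The multiset semantics can be illustrated with a toy simulator. The (reactants, inhibitors, products) triple format and the threshold reading of inhibitors follow standard reaction-system conventions; this sketch is not the paper's SMT encoding:

```python
from collections import Counter

def enabled(reaction, state):
    """A reaction fires if every reactant is present in sufficient
    concentration and every inhibitor stays below its threshold."""
    reactants, inhibitors, _ = reaction
    return (all(state.get(e, 0) >= n for e, n in reactants.items())
            and all(state.get(e, 0) < n for e, n in inhibitors.items()))

def step(reactions, state):
    """Result state = multiset sum of the products of all enabled reactions
    (reaction-system semantics: entities not produced do not persist)."""
    result = Counter()
    for rx in reactions:
        if enabled(rx, state):
            result.update(rx[2])   # rx[2] is the product multiset
    return result
```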
31. Accelerating Legislation Processes through Semantic Similarity Analysis with BERT-based Deep Learning.
- Author
-
Naseri, J., Hasanpour, H., and Sorkhi, A. Ghanbari
- Subjects
DEEP learning ,SEMANTICS ,LANGUAGE models ,ENCODING ,VECTOR spaces - Abstract
Countries are managed on the basis of accurate and precise laws, and enacting appropriate, timely laws can drive national progress. Each law is a textual provision added to the body of existing laws once it passes the assembly's approval process. When a new law is reviewed, the laws relevant to it must be extracted from the existing set and analyzed. This paper presents a new solution for extracting the rules relevant to a given term from an existing set of rules using semantic similarity and deep learning techniques based on the BERT model. The proposed method encodes sentences or paragraphs of text as fixed-length vectors (a dense vector space). These vectors are then used to evaluate and score the semantic similarity of sentences with the cosine distance measure. Because the BERT encoding considers the position of entities within sentences, the machine can capture the meaning and concept of each sentence. The method then computes the semantic similarity between documents, scores each document's similarity to a given subject, and detects semantically related texts. Results on a test dataset of legal documents from the Islamic Consultative Assembly of Iran indicate precision and accuracy above 90%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
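The scoring step can be sketched as below. A real system would obtain the fixed-length vectors from a BERT encoder; here they are stand-in NumPy arrays, so only the ranking logic is shown:

```python
import numpy as np

def cosine_similarity(u, v):
    """Score two fixed-length sentence embeddings (stand-ins for BERT vectors)."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def most_similar(query_vec, law_vecs):
    """Return indices of existing laws ranked by similarity to a new term."""
    scores = [cosine_similarity(query_vec, v) for v in law_vecs]
    return sorted(range(len(scores)), key=lambda i: -scores[i])
```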
32. On-Orbit Real-Time Image Stabilization of Luojia-3 01 Satellite Video Data Based on Object-Space Consistency.
- Author
-
张致齐, 王 密, 曹金山, 刘 闯, and 廖敦波
- Subjects
PARALLEL algorithms ,REMOTE-sensing images ,PIXELS ,VIDEOS ,ENCODING - Abstract
Copyright of Geomatics & Information Science of Wuhan University is the property of Geomatics & Information Science of Wuhan University and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
33. Enacting Algorithms Through Encoding and Decoding Practices.
- Author
-
Pronzato, Riccardo
- Subjects
CULTURAL studies ,SOCIAL role ,ALGORITHMS ,ENCODING ,SOCIOLOGY - Abstract
In the field of digital sociology, debates continue about the best strategies to analyse the social role of algorithms, their design and uses, as well as their implications. To contribute to this conversation, this paper bridges a practical approach to culture - which considers culture as an outcome of social activities - with the tradition of cultural studies - which frames culture as a set of practices in the construction and interpretation of media messages and technological artifacts. Specifically, I focus on how Nick Seaver's "algorithms as culture" approach intersects with Stuart Hall's "Encoding/Decoding" model and the following applications to algorithmic media of different authors. Through this analysis, I argue that algorithms are culturally enacted by the encoding and decoding practices of their producers and end users. Thus, algorithms are considered as brought into being by the activities underlying their design, as well as by their uses, analyses, and interpretations. Furthermore, I propose different methodological strategies to analyse how encoding/decoding activities culturally enact algorithms within the social realm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. New constructions of constant dimension subspace codes with large sizes.
- Author
-
Li, Yun, Liu, Hongwei, and Mesnager, Sihem
- Subjects
LINEAR network coding ,VIDEO coding ,ENCODING - Abstract
Subspace codes have important applications in random network coding. It is a classical problem to construct subspace codes where both the size and the minimum distance are as large as possible. In particular, cyclic constant dimension subspace codes have additional properties that can make encoding and decoding more efficient. In this paper, we construct large cyclic constant dimension subspace codes with minimum distances 2k - 2 and 2k. These codes are contained in G_q(n, k), where G_q(n, k) denotes the set of all k-dimensional subspaces of the finite field F_{q^n} of q^n elements (q a prime power). Consequently, some results in [7, 15], and [23] are extended. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
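The subspace distance underlying these minimum distances, d(U, V) = dim U + dim V - 2 dim(U ∩ V), can be computed as a small illustration. The paper works over general F_q; restricting to q = 2 here is an assumption made for brevity:

```python
import numpy as np

def rank_gf2(M):
    """Row rank of a binary matrix over GF(2) by Gaussian elimination."""
    M = np.array(M, dtype=np.uint8) % 2
    rows, cols = M.shape
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i, c]), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]   # move pivot row into place
        for i in range(rows):
            if i != r and M[i, c]:
                M[i] ^= M[r]            # eliminate column c elsewhere
        r += 1
        if r == rows:
            break
    return r

def subspace_distance(U, V):
    """d(U, V) = dim U + dim V - 2 dim(U ∩ V); for two k-dimensional
    subspaces this lies in {0, 2, ..., 2k}."""
    dU, dV = rank_gf2(U), rank_gf2(V)
    d_sum = rank_gf2(np.vstack([U, V]))   # dim(U + V)
    d_int = dU + dV - d_sum               # dimension formula
    return dU + dV - 2 * d_int
```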
35. An Objective Space Constraint-Based Evolutionary Method for High-Dimensional Feature Selection [Research Frontier].
- Author
-
Cheng, Fan, Zhang, Rui, Huang, Zhengfeng, Qiu, Jianfeng, Xia, Mingming, and Zhang, Lei
- Abstract
Evolutionary algorithms (EAs) have shown their competitiveness in solving the problem of feature selection. However, limited by their encoding scheme, most of them face the challenge of "curse of dimensionality". To address the issue, in this paper, an objective space constraint-based evolutionary algorithm, named OSC-EA, is proposed for high-dimensional feature selection (HDFS). Although the decision space of EAs for HDFS is very huge, its objective space is the same as that of the low-dimensional feature selection. Based on this fact, in the proposed OSC-EA, the HDFS is firstly modeled as a constrained problem, where a constraint of the objective space is introduced and used to partition the whole objective space into the "feasible region" and the "infeasible region". To handle the constrained problem, a two-stage ε-constraint-based evolutionary scheme is designed. In the first stage, the value of ε is set to be very small, which ensures that the search concentrates on the "feasible region", and the latent high-quality feature subsets can be found quickly. Then, in the second stage, the value of ε increases gradually, so that more solutions in the "infeasible region" are considered. By the end of the scheme, ε → ∞ and all the solutions in the objective space are considered. By using the search in the second stage, the quality of the obtained feature subsets is further improved. The empirical results on different high-dimensional datasets demonstrate the effectiveness and efficiency of the proposed OSC-EA. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
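The two-stage ε schedule can be sketched as follows. The switch point and growth factor below are illustrative assumptions, not values from the paper:

```python
def eps_schedule(gen, max_gen, eps0=1e-3, growth=1.2):
    """Stage 1 (first half): eps stays tiny, confining search to the
    'feasible region' of the objective space. Stage 2: eps grows
    geometrically, gradually admitting 'infeasible region' solutions."""
    half = max_gen // 2
    if gen < half:
        return eps0
    return eps0 * growth ** (gen - half)

def survives(violation, eps):
    """Constraint-handling test on the objective-space constraint value."""
    return violation <= eps
```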
36. MULTIPLICITY AND LATTICE COHOMOLOGY OF PLANE CURVE SINGULARITIES.
- Author
-
KUBASCH, ALEXANDER A., NÉMETHI, ANDRÁS, and SCHEFLER, GERGŐ
- Subjects
MULTIPLICITY (Mathematics) ,MICROORGANISMS ,CONCRETE ,HOPE ,ENCODING - Abstract
The lattice cohomology and the graded root of an isolated curve singularity were recently introduced in [3]. The lattice cohomology is a categorification of the d-invariant. The hope is that for plane curve singularities it encodes subtle information about the analytic structure and concrete analytic invariants. The present paper is a positive result in this direction: we prove that the multiplicity of an irreducible plane curve singularity can be recovered from its lattice cohomology (or, from its graded root). In fact, we give four distinct proofs of this statement, each of them emphasizing a rather different aspect of the theory of plane curve germs. With these proofs, we also create new bridges between the abstract analytic type and the embedded topological type of the germ. In particular, we provide a new characterization of the Apéry set of the semigroup of the germ in terms of embedded data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Spatial-temporal episodic memory modeling for ADLs: encoding, retrieval, and prediction.
- Author
-
Song, Xinjing, Wang, Di, Quek, Chai, Tan, Ah-Hwee, and Wang, Yanjiang
- Subjects
EPISODIC memory ,ARTIFICIAL neural networks ,ENCODING ,ACTIVITIES of daily living - Abstract
Activities of daily living (ADLs) relate to people's daily self-care activities, which reflect their living habits and lifestyle. A prior study presented a neural network model called STADLART for ADL routine learning. In this paper, we propose a cognitive model named Spatial-Temporal Episodic Memory for ADL (STEM-ADL), which extends STADLART to encode event sequences in the form of distributed episodic memory patterns. Specifically, STEM-ADL encodes each ADL and its associated contextual information as an event pattern and encodes all events in a day as an episode pattern. By explicitly encoding the temporal characteristics of events as activity gradient patterns, STEM-ADL can be suitably employed for activity prediction tasks. In addition, STEM-ADL can predict both the ADL type and starting time of the subsequent event in one shot. A series of experiments are carried out on two real-world ADL data sets: Orange4Home and OrdonezB, to estimate the efficacy of STEM-ADL. The experimental results indicate that STEM-ADL is remarkably robust in event retrieval using incomplete or noisy retrieval cues. Moreover, STEM-ADL outperforms STADLART and other state-of-the-art models in ADL retrieval and subsequent event prediction tasks. STEM-ADL thus offers a vast potential to be deployed in real-life healthcare applications for ADL monitoring and lifestyle recommendation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. CNF Encodings of Symmetric Functions
- Author
-
Emdin, Gregory, Kulikov, Alexander S., Mihajlin, Ivan, and Slezkin, Nikita
- Published
- 2024
- Full Text
- View/download PDF
39. Cloud media video encoding: review and challenges
- Author
-
Moina-Rivera, Wilmer, Garcia-Pineda, Miguel, Gutiérrez-Aguado, Juan, and Alcaraz-Calero, Jose M.
- Published
- 2024
- Full Text
- View/download PDF
40. An attention mechanism model based on positional encoding for the prediction of ship maneuvering motion in real sea state.
- Author
-
Dong, Lei, Wang, Hongdong, and Lou, Jiankun
- Subjects
- *
COSINE function , *SINE function , *ENCODING , *AUTONOMOUS vehicles , *ATTENTION , *MOTION - Abstract
This paper proposes a positional encoding-based attention mechanism model that quantifies the temporal correlation of ship maneuvering motion to predict future ship motion in a real sea state. To represent the temporal information of the sequential motion status, a positional encoding consisting of sine and cosine functions of different frequencies is chosen as the input of the model. First, the reasonableness of the improved architecture is validated on standard turning test datasets of an unmanned surface vehicle. Then, the absolute positional encoding-based scaled dot-product attention mechanism model is compared with two other attention mechanism models using different positional encoding and attention calculation methods, and its superiority is verified. As demonstrated by exhaustive experiments, the model achieves its highest prediction accuracy when the input sequence length equals the output sequence length, and the accuracy as defined in this paper drops below 90% when the predicted length exceeds 45. Finally, the attention mechanism model is compared with an LSTM model over different input sequence lengths to demonstrate that it trains faster on long sequences. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
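The sine/cosine positional encoding the model takes as input follows the standard Transformer formulation, sketched here (the 10000 frequency base is the conventional choice, assumed rather than quoted from the paper):

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sine on even dimensions, cosine on odd dimensions, with frequencies
    decreasing geometrically across the embedding dimension."""
    pos = np.arange(seq_len)[:, None]          # (seq_len, 1)
    i = np.arange(d_model)[None, :]            # (1, d_model)
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))
```

Each row gives a unique, smoothly varying timestamp for one motion-status sample in the input sequence.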
41. Transforming Time-Series Data for Improved LLM-based Forecasting through Adaptive Encoding.
- Author
-
Ceperic, Vladimir and Markovic, Tomislav
- Subjects
LANGUAGE models ,PROCESS capability ,FORECASTING ,ENCODING - Abstract
The advent of Large Language Models (LLMs) has sparked significant interest in their application across various domains, including time-series forecasting. This paper introduces an encoding strategy designed to bridge the gap between the inherently quantitative nature of time-series data and the primarily textual processing capabilities of LLMs. By leveraging an innovative combination of adaptive segmentation and tokenization, inspired by the fast Brownian bridge-based aggregation (fABBA) algorithm, our method transforms time series data into a format conducive to LLM analysis. Through evaluation on diverse datasets (DARTS series), we demonstrate that our approach, on average, improves time-series forecasting accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
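A drastically simplified stand-in for the adaptive segmentation-and-tokenization idea: quantize consecutive increments into a small alphabet so the numeric series becomes a string an LLM can consume. fABBA's adaptive, tolerance-driven segmentation is considerably more sophisticated than this sketch:

```python
import numpy as np

def symbolize(series, n_bins=4, alphabet="abcd"):
    """Map each consecutive increment of the series to a token by
    uniform quantization over the observed increment range."""
    inc = np.diff(np.asarray(series, dtype=float))
    lo, hi = inc.min(), inc.max()
    if hi == lo:                      # constant slope: a single symbol suffices
        return alphabet[0] * len(inc)
    bins = np.minimum(((inc - lo) / (hi - lo) * n_bins).astype(int), n_bins - 1)
    return "".join(alphabet[b] for b in bins)
```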
42. Method for Generating Indoor 3D Scene Graphs Based on Instance Features and Relationship Encoding.
- Author
-
Du, Han, Cai, Benhe, Li, Xiaoming, Wang, Weixi, and Tang, Shengjun
- Subjects
POINT cloud ,GRAPH algorithms ,ENCODING - Abstract
A 3D scene graph is a compact and explicit representation in scene analysis. In current 3D scene graph prediction methods, the feature encoding of nodes and edges is relatively simple, which hinders the network from fully learning 3D point cloud features. In this paper, we propose a 3D scene graph task framework that fully expresses node and edge features, aiming to exploit point cloud features thoroughly for high-precision prediction. Experimental results show that, with the help of the new representation method, the prediction performance of 3D scene graphs is significantly improved. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
43. A secure fingerprint hiding technique based on DNA sequence and mathematical function.
- Author
-
Al-Ahmadi, Wala'a Essa, Aljahdali, Asia Othman, Thabit, Fursan, and Munshi, Asmaa
- Subjects
NUCLEOTIDE sequence ,MATHEMATICAL functions ,DNA sequencing ,MATHEMATICAL sequences ,CRYPTOGRAPHY - Abstract
DNA steganography is a technique for securely transmitting important data using DNA sequences. It involves encrypting and hiding messages within DNA sequences to prevent unauthorized access and decoding of sensitive information. Biometric systems, such as fingerprinting and iris scanning, are used for individual recognition. Since biometric information cannot be changed if compromised, it is essential to ensure its security. This research aims to develop a secure technique that combines steganography and cryptography to protect fingerprint images during communication while maintaining confidentiality. The technique converts fingerprint images into binary data, encrypts them, and embeds them into the DNA sequence. It utilizes the Feistel network encryption process, along with a mathematical function and an insertion technique for hiding the data. The proposed method offers a low probability of being cracked, a high number of hiding positions, and efficient execution times. Four randomly chosen keys are used for hiding and decoding, providing a large key space and enhanced key sensitivity. The technique undergoes evaluation using the NIST statistical test suite and is compared with other research papers. It demonstrates resilience against various attacks, including known-plaintext and chosen-plaintext attacks. To enhance security, random ambiguous bits are introduced at random locations in the fingerprint image, increasing noise. However, it is important to note that this technique is limited to hiding small images within DNA sequences and cannot handle video, audio, or large images. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
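The binary-to-DNA step can be sketched with the usual two-bits-per-nucleotide table. The exact mapping, like the rest of the Feistel/insertion pipeline, is an assumption here rather than the paper's scheme:

```python
# Hypothetical 2-bit-to-nucleotide table; any bijection works.
ENC = {"00": "A", "01": "C", "10": "G", "11": "T"}
DEC = {v: k for k, v in ENC.items()}

def bits_to_dna(bits):
    """Encode a binary string (e.g. a fingerprint bitstream) as DNA bases."""
    assert len(bits) % 2 == 0, "bitstream must pair into nucleotides"
    return "".join(ENC[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_bits(dna):
    """Recover the binary string from its DNA representation."""
    return "".join(DEC[b] for b in dna)
```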
44. Lossless Encoding of Time-Aggregated Neuromorphic Vision Sensor Data Based on Point-Cloud Compression.
- Author
-
Adhuran, Jayasingam, Khan, Nabeel, and Martini, Maria G.
- Subjects
IMAGE sensors ,VIDEO coding ,ENCODING ,VIDEO compression ,CELL aggregation ,ACTION potentials - Abstract
Neuromorphic Vision Sensors (NVSs) are emerging sensors that acquire visual information asynchronously when changes occur in the scene. Their advantages versus synchronous capturing (frame-based video) include a low power consumption, a high dynamic range, an extremely high temporal resolution, and lower data rates. Although the acquisition strategy already results in much lower data rates than conventional video, NVS data can be further compressed. For this purpose, we recently proposed Time Aggregation-based Lossless Video Encoding for Neuromorphic Vision Sensor Data (TALVEN), consisting in the time aggregation of NVS events in the form of pixel-based event histograms, arrangement of the data in a specific format, and lossless compression inspired by video encoding. In this paper, we still leverage time aggregation but, rather than performing encoding inspired by frame-based video coding, we encode an appropriate representation of the time-aggregated data via point-cloud compression (similar to another one of our previous works, where time aggregation was not used). The proposed strategy, Time-Aggregated Lossless Encoding of Events based on Point-Cloud Compression (TALEN-PCC), outperforms the originally proposed TALVEN encoding strategy for the content in the considered dataset. The gain in terms of the compression ratio is the highest for low-event rate and low-complexity scenes, whereas the improvement is minimal for high-complexity and high-event rate scenes. According to experiments on outdoor and indoor spike event data, TALEN-PCC achieves higher compression gains for time aggregation intervals of more than 5 ms. However, the compression gains are lower when compared to state-of-the-art approaches for time aggregation intervals of less than 5 ms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
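Time aggregation into per-pixel event histograms can be sketched as below. The (x, y, t, polarity) tuple format and the two-channel (one per polarity) layout are assumptions about the representation, not the paper's exact data format:

```python
import numpy as np

def aggregate_events(events, width, height, t0, dt):
    """Count the (x, y, t, polarity) events falling in [t0, t0 + dt)
    into per-pixel histograms, one channel per polarity."""
    hist = np.zeros((2, height, width), dtype=np.uint16)
    for x, y, t, p in events:
        if t0 <= t < t0 + dt:
            hist[1 if p > 0 else 0, y, x] += 1
    return hist
```

The resulting dense arrays are what a point-cloud (or video) encoder can then compress losslessly.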
45. English to Tamil Multi-Modal Neural Machine Translation for Image Captioning.
- Author
-
A., Manoranjani and H. O., Lekshmy
- Subjects
MACHINE translating ,SHORT-term memory ,LONG-term memory - Abstract
Neural machine translation has made significant progress in automating language translation tasks. However, traditional approaches solely rely on textual data, neglecting the potential benefits of integrating visual information. In this paper, we propose a multi-modal neural machine translation system from English to Tamil that leverages both textual and visual modalities. By incorporating images alongside textual input, our system captures rich contextual information conveyed through visual cues, enhancing translation accuracy. We present the architecture and components of our multi-modal translation system, including data preprocessing, alignment, and tokenization techniques. Extensive experimentation demonstrates the superiority of our approach compared to text-only models, offering improved translation quality. Our thorough analysis evaluates the impact of image integration on translation performance, shedding light on system strengths and limitations. By combining textual and visual information, our multi-modal neural machine translation system effectively addresses syntactic and morphological differences in English-to-Tamil translation. It provides time and effort savings for both native and non-native Tamil speakers, fostering cross-cultural communication. Our research contributes to advancing multi-modal neural machine translation by leveraging the synergy between text and images, paving the way for more accurate and context-aware translation systems. The implications of automated English-to-Tamil translation extend to diverse linguistic backgrounds, facilitating effective communication and bridging language barriers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
46. Dictionary Encoding Based on Tagged Sentential Decision Diagrams.
- Author
-
Zhong, Deyuan, Fang, Liangda, and Guan, Quanlong
- Subjects
ENCYCLOPEDIAS & dictionaries ,BOOLEAN functions ,ENCODING ,DECODING algorithms ,VIDEO coding - Abstract
Encoding a dictionary into another representation stores all of its words more efficiently, so that common dictionary operations, such as (1) searching for a word, (2) adding words, and (3) removing words, complete in less time. Binary decision diagrams (BDDs) are among the best-known such representations and are widely popular due to their excellent properties. Recently, researchers have proposed encoding dictionaries into BDDs and several BDD variants and have shown that this is feasible. Hence, we further investigate the topic of encoding dictionaries into decision diagrams. Tagged sentential decision diagrams (TSDDs), one of these variants based on structured decomposition, exploit both the standard and zero-suppressed trimming rules. In this paper, we first introduce how to use Boolean functions to represent dictionary files, and then design an algorithm that encodes dictionaries into TSDDs with the help of tries, together with a decoding algorithm that restores TSDDs to dictionaries. The use of tries greatly accelerates the encoding process. Since TSDDs integrate two trimming rules, we expect them to represent dictionaries more effectively, and the experiments confirm this. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
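The trie that the abstract above uses as an intermediate staging structure can be sketched as follows. This is an illustrative sketch only: it supports the three operations the abstract lists (search, add, remove), but the paper's actual contribution is the further encoding of such a structure into a TSDD, which is not reproduced here.

```python
# Minimal trie over strings, a stand-in for the intermediate structure
# the paper uses before encoding the dictionary into a TSDD.

class Trie:
    def __init__(self):
        self.children = {}   # char -> Trie
        self.is_word = False

    def add(self, word):
        # walk/create one node per character, then mark the end node
        node = self
        for ch in word:
            node = node.children.setdefault(ch, Trie())
        node.is_word = True

    def search(self, word):
        # a word is present only if the walk succeeds AND ends on a marked node
        node = self
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return False
        return node.is_word

    def remove(self, word):
        # mark-only removal; dangling nodes are left in place for brevity
        node = self
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return False
        node.is_word = False
        return True
```

All three operations run in time proportional to the word length, independent of dictionary size, which is what makes the trie a useful accelerator for the encoding pass.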
47. Bridge the gap between fixed-length and variable-length evolutionary neural architecture search algorithms.
- Author
-
Gong, Yunhong, Sun, Yanan, Peng, Dezhong, and Chen, Xiangru
- Subjects
NEURAL circuitry ,GENETIC algorithms ,CHROMOSOMES ,EVOLUTIONARY algorithms ,ENCODING - Abstract
Evolutionary neural architecture search (ENAS) aims to automate the architecture design of deep neural networks (DNNs). In recent years, various ENAS algorithms have been proposed, and their effectiveness has been demonstrated. In practice, most ENAS methods based on genetic algorithms (GAs) use fixed-length encoding strategies because the generated chromosomes can be directly processed by the standard genetic operators (especially the crossover operator). However, the performance of existing ENAS methods with fixed-length encoding strategies can still be improved, because the optimal depth is treated as known a priori. Although variable-length encoding strategies may alleviate this issue, the standard genetic operators must then be replaced by specially developed operators. In this paper, we propose a framework to bridge this gap and improve the performance of existing GA-based ENAS methods. First, fixed-length chromosomes are transformed into variable-length chromosomes following the encoding rules of the original ENAS methods. Second, an encoder is proposed to encode variable-length chromosomes into fixed-length representations that can be efficiently processed by standard genetic operators. Third, a decoder co-trained with the encoder decodes those processed high-dimensional representations, which cannot directly describe architectures, back into their original chromosomal forms. Overall, the proposed framework improves the performance of existing ENAS methods with both fixed-length and variable-length encoding strategies, and its effectiveness is justified through experimental results. Moreover, ablation experiments show that the proposed framework does not negatively affect the original ENAS methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
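The core obstacle the abstract above describes, that standard crossover needs equal-length chromosomes, can be illustrated with a naive padding transform. This is a simplistic stand-in: the paper uses a learned, co-trained encoder/decoder pair, not padding; `MAX_LEN` and the `PAD` token are assumptions of this sketch.

```python
# Naive fixed-length encoding of variable-length chromosomes.
# Illustrates WHY a fixed-length form enables standard operators;
# the paper replaces this padding with a learned encoder/decoder.

MAX_LEN = 8   # assumed maximum architecture depth (sketch parameter)
PAD = -1      # padding token marking "no layer"

def encode(chromosome):
    # map a variable-length chromosome to a fixed-length vector
    return chromosome + [PAD] * (MAX_LEN - len(chromosome))

def decode(vector):
    # recover a variable-length chromosome by stripping padding
    return [gene for gene in vector if gene != PAD]

def one_point_crossover(a, b, point):
    # standard fixed-length crossover, applicable only because
    # both parents now have the same length
    return a[:point] + b[point:], b[:point] + a[point:]
```

Note the weakness that motivates the learned encoder: after crossover, padding tokens can land mid-vector, so the decoded children may not respect the original encoding rules, which the co-trained decoder in the paper is designed to avoid.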
48. Reversible data hiding in encrypted images with multi-prediction and adaptive huffman encoding.
- Author
-
Ren, Hua, Bai, Guang-rong, Chen, Tong-tong, Yue, Zhen, and Ren, Ru-yong
- Subjects
REVERSIBLE data hiding (Computer science) ,HUFFMAN codes ,IMAGE encryption ,PIXELS ,DATA mining ,ENCODING - Abstract
With the rapid development of multimedia technology and the massive accumulation of user data, a huge amount of data is rapidly generated and shared over the network, while the problems of inappropriate data access and abuse persist. Reversible data hiding in encrypted images (RDHEI) is a privacy-preserving method that embeds protected data in an encrypted domain and accurately extracts the embedded data without affecting the original content. However, the amount of embeddable data has been one of the major limitations on the performance and application of RDHEI. Currently, the main approaches to improving the capacity of RDHEI are either to increase the overall capacity or to reduce the length of the auxiliary information. In this paper, we propose a novel RDHEI scheme based on multi-prediction and adaptive Huffman encoding. To increase the overall capacity, we propose a multi-prediction scheme, called the MED+GAP predictor, to generate the label map of non-reference pixels prior to image encryption. Then, an adaptive Huffman coding scheme is designed to compress the generated labels, reducing the embedding length of the auxiliary information used for extraction and recovery. Experiments show that the proposed method with the MED+GAP predictor and adaptive Huffman coding improves the embedding rate by 0.052 bpp, 0.023 bpp, and 0.047 bpp on average over the other state-of-the-art methods on the BOSSBase, BOWS-2, and UCID datasets, respectively, while maintaining security and reversibility. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
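The two standard building blocks named in the abstract above can be sketched briefly. The MED (Median Edge Detector) predictor shown here is the well-known JPEG-LS/LOCO-I formula; the GAP half of the paper's MED+GAP combination and its adaptive code construction are omitted, so this is only a sketch of the ingredients, not the proposed scheme.

```python
import heapq
from collections import Counter

def med_predict(left, above, upper_left):
    # Median Edge Detector (MED) predictor, as in JPEG-LS/LOCO-I:
    # picks min/max of the causal neighbours near an edge,
    # and a planar estimate otherwise
    if upper_left >= max(left, above):
        return min(left, above)
    if upper_left <= min(left, above):
        return max(left, above)
    return left + above - upper_left

def huffman_codes(labels):
    # build a prefix-free code table from label frequencies,
    # merging the two lightest subtrees until one tree remains
    heap = [[freq, [sym, ""]] for sym, freq in Counter(labels).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {sym: code for sym, code in heap[0][1:]}
```

The point of compressing the label map this way is exactly the one the abstract makes: frequent prediction-error labels get short codewords, shrinking the auxiliary information that must be embedded alongside the payload.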
49. Comparing the Expressiveness of the π-calculus and CCS.
- Author
-
van Glabbeek, Rob
- Subjects
ENCODING - Abstract
This paper shows that the π-calculus with implicit matching is no more expressive than CCSγ, a variant of CCS in which the result of a synchronisation of two actions is itself an action subject to relabelling or restriction, rather than the silent action τ. This is done by exhibiting a compositional translation from the π-calculus with implicit matching to CCSγ that is valid up to strong barbed bisimilarity. The full π-calculus can be similarly expressed in CCSγ enriched with the triggering operation of Meije. I also show that these results cannot be recreated with CCS in the rôle of CCSγ, not even up to reduction equivalence, and not even for the asynchronous π-calculus without restriction or replication. Finally, I observe that CCS cannot be encoded in the π-calculus. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
50. Inter-Frame Compression for Dynamic Point Cloud Geometry Coding.
- Author
-
Akhtar, Anique, Li, Zhu, and Van der Auwera, Geert
- Subjects
POINT cloud ,DEEP learning ,MIXED reality ,GEOMETRY ,VIRTUAL reality ,AUTONOMOUS vehicles ,LATENT variables - Abstract
Efficient point cloud compression is essential for applications like virtual and mixed reality, autonomous driving, and cultural heritage. This paper proposes a deep learning-based inter-frame encoding scheme for dynamic point cloud geometry compression. We propose a lossy geometry compression scheme that predicts the latent representation of the current frame using the previous frame by employing a novel feature space inter-prediction network. The proposed network utilizes sparse convolutions with hierarchical multiscale 3D feature learning to encode the current frame using the previous frame. The proposed method introduces a novel predictor network for motion compensation in the feature domain to map the latent representation of the previous frame to the coordinates of the current frame to predict the current frame’s feature embedding. The framework transmits the residual of the predicted features and the actual features by compressing them using a learned probabilistic factorized entropy model. At the receiver, the decoder hierarchically reconstructs the current frame by progressively rescaling the feature embedding. The proposed framework is compared to the state-of-the-art Video-based Point Cloud Compression (V-PCC) and Geometry-based Point Cloud Compression (G-PCC) schemes standardized by the Moving Picture Experts Group (MPEG). The proposed method achieves more than 88% BD-Rate (Bjøntegaard Delta Rate) reduction against G-PCCv20 Octree, more than 56% BD-Rate savings against G-PCCv20 Trisoup, more than 62% BD-Rate reduction against V-PCC intra-frame encoding mode, and more than 52% BD-Rate savings against V-PCC P-frame-based inter-frame encoding mode using HEVC. These significant performance gains are cross-checked and verified in the MPEG working group. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
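The predict-and-send-residual loop at the heart of the inter-frame scheme above can be reduced to a few lines. This sketch is only the control flow: the fixed scalar "motion model" in `predict_features` stands in for the paper's learned sparse-convolutional inter-prediction network, and the quantisation and learned entropy coding of the residual are omitted.

```python
def predict_features(prev_features):
    # stand-in for the learned feature-space inter-prediction network;
    # a fixed scalar decay replaces the neural predictor (assumption)
    return [0.9 * f for f in prev_features]

def encode_frame(current, prev):
    # transmit only the residual between actual and predicted features
    # (the paper additionally entropy-codes this residual)
    predicted = predict_features(prev)
    return [c - p for c, p in zip(current, predicted)]

def decode_frame(residual, prev):
    # the decoder repeats the same prediction from the previously
    # reconstructed frame and adds the residual back
    predicted = predict_features(prev)
    return [p + r for p, r in zip(predicted, residual)]
```

Because encoder and decoder run the identical predictor, only the (typically small) residual needs to be transmitted, which is where the BD-Rate savings reported above come from.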