149 results
Search Results
2. AVI WIGDERSON RECEIVES ACM A.M. TURING AWARD FOR GROUNDBREAKING INSIGHTS ON RANDOMNESS
- Subjects
Defined contribution plans; Scientists -- Achievements and awards; Computer science; Algorithms; Business; News, opinion and commentary
- Abstract
Leading Theoretical Computer Scientist Cited for Field-Defining Contributions NEW YORK, April 10, 2024 /PRNewswire/ -- ACM, the Association for Computing Machinery, today named Avi Wigderson as recipient of the 2023 [...]
- Published
- 2024
3. Genius Group Appoints Alan Turing AI as Chief AI Officer
- Subjects
Computer science; Banking, finance and accounting industries; Business
- Abstract
SINGAPORE, April 24, 2024 (GLOBE NEWSWIRE) -- Genius Group Limited (NYSE American: GNS) ('Genius Group' or the 'Company'), a leading AI-powered education group, today announced the appointment of a purpose built and [...]
- Published
- 2024
4. NTT Research Funds New Program with Harvard Center for Brain Science
- Subjects
Brain; Computer science; Business; Business, international; Harvard University
- Abstract
CBS-NTT Fellowship Program Supports Research in Emerging Field of Physics of Intelligence News Highlights: * NTT Research Foundation funds Fellowship Program in new field of Physics of Intelligence. * Field [...]
- Published
- 2024
5. Xiao-I Welcomes World-Renowned Scientist Professor Fangzhen Lin as Chief Scientific Advisor
- Subjects
Artificial intelligence; Natural language interfaces; Computational linguistics; Language processing; Scientists; Computer science; Business; News, opinion and commentary
- Abstract
SHANGHAI, Oct. 25, 2023 /PRNewswire/ -- Xiao-I Corporation (Nasdaq: AIXI) ('Xiao-I' or the 'Company'), a leading cognitive artificial intelligence ('AI') enterprise in China, is pleased to announce the appointment of [...]
- Published
- 2023
6. Building AI Better: Software Engineering Institute Introduces Three Pillars of AI Engineering
- Subjects
Artificial intelligence; Software engineering; Computer science; Software development/engineering; Business; News, opinion and commentary; Carnegie Mellon University. Software Engineering Institute
- Abstract
PITTSBURGH, June 30, 2021 /PRNewswire/ -- The SEI today announced the release of white papers outlining the challenges and opportunities of three initial pillars of artificial intelligence (AI) engineering: human [...]
- Published
- 2021
7. NTT Research Appoints Sanjam Garg as Senior Scientist in Its CIS Lab
- Subjects
Electrical engineering; Scientists -- Appointments, resignations and dismissals; Computer science; Business; Business, international; Telecommunications industry; University of California
- Abstract
NTT Research, a division of NTT, has named Dr. Sanjam Garg a Senior Scientist in its Cryptography & Information Security (CIS) Lab. According to a media release, Dr. Garg is [...]
- Published
- 2021
8. Reinforcement learning based joint trajectory design and resource allocation for RIS-aided UAV multicast networks
- Author
- Ji, Pengshuo; Jia, Jie; Chen, Jian; Guo, Liang; Du, An; Wang, Xingwei
- Subjects
Drone aircraft; Computer science; Algorithms; Business; Computers; Telecommunications industry
- Abstract
Keywords Reconfigurable intelligent surface; UAV network; Multicast communication; Resource allocation; MP-DQN Abstract This paper investigates an unmanned aerial vehicle (UAV)-enabled multicast network, where the UAV serves as a mobile transmitter to send typical contents to its corresponding ground receivers. A reconfigurable intelligent surface (RIS) is deployed to enhance the service quality with a limited power supply in the UAV-enabled multicast network. It can reconfigure the signal propagation environment and improve the received power of ground receivers by adjusting the reflection coefficients. The sum rate maximization problem is formulated by jointly designing the UAV movement, RIS reflection matrix, and beamforming design from the UAV to users. This paper proposes a Beamforming control and Trajectory design algorithm based on a Multi-Pass Deep Q-Network (BT-MP-DQN). In the proposed algorithm, the UAV acts as an agent that periodically observes the state of the UAV multicast network and takes actions to adapt to the dynamic environment. Specifically, the movement of the UAV is a discrete action, and the beamforming design is a continuous action. The simulation results show that the proposed algorithm can effectively improve the achievable rate and satisfy the minimum rate of multicast group users. The deployment of the RIS is beneficial to network performance enhancement. In addition, the multicast network with a UAV also outperforms the conventional multicast channel with a fixed-location transmitter. Author Affiliation: (a) School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, 110819, China (b) Engineering Research Center of Security Technology of Complex Network System, Ministry of Education, China * Corresponding author at: School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, 110819, China. Article History: Received 8 October 2022; Revised 22 February 2023; Accepted 9 March 2023 Byline: Pengshuo Ji (a,b), Jie Jia [jiajie@mail.neu.edu.cn] (a,b,*), Jian Chen (a,b), Liang Guo (a), An Du (a), Xingwei Wang (a)
- Published
- 2023
- Full Text
- View/download PDF
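For reference alongside the abstract above, the following is a minimal sketch of one way a multi-pass parameterized Q-network can mix a discrete move with a continuous beamforming parameter: an actor proposes one parameter vector per discrete move, and the Q-network scores each (state, move, parameter) triple in a separate pass. All dimensions, the stand-in "networks", and names are invented placeholders, not the paper's BT-MP-DQN implementation.

```python
# Hedged sketch of multi-pass parameterized-Q action selection (assumptions:
# toy sizes, random untrained weights standing in for learned networks).
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, N_MOVES, PARAM_DIM = 8, 5, 4        # invented sizes

W_actor = rng.normal(size=(N_MOVES, PARAM_DIM, STATE_DIM))
w_q = rng.normal(size=STATE_DIM + N_MOVES + PARAM_DIM)

def actor(state):
    # One continuous beamforming parameter vector per discrete UAV move.
    return np.tanh(W_actor @ state)            # shape (N_MOVES, PARAM_DIM)

def q_value(state, move, param):
    one_hot = np.eye(N_MOVES)[move]
    return float(w_q @ np.concatenate([state, one_hot, param]))

def select_action(state):
    params = actor(state)
    # "Multi-pass": score each discrete move only with its own parameter,
    # rather than feeding all parameters to the Q-network at once.
    q = [q_value(state, m, params[m]) for m in range(N_MOVES)]
    best = int(np.argmax(q))
    return best, params[best]

move, beam_param = select_action(rng.normal(size=STATE_DIM))
print("UAV move:", move, "beamforming parameters:", np.round(beam_param, 3))
```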
9. New Progress in Scientific Research by Tsinghua Shenzhen International Graduate School (SIGS)
- Subjects
Machine learning; Machine vision; Computer science; Business; News, opinion and commentary
- Abstract
SHENZHEN, China, Nov. 17, 2022 /PRNewswire/ -- Professors Qionghai Dai (from Tsinghua University) and Haoqian Wang's (from the Division of Information Science and Technology at Tsinghua Shenzhen International Graduate School) [...]
- Published
- 2022
10. New COVID-19 Research from Department of Computer Science Described (IoT-Based Technological Framework for Inhibiting the Spread of COVID-19: A Pandemic Using Machine Learning and Fuzzy-Based Processes)
- Subjects
Epidemics; Machine learning; Computer science; Business; Health; Health care industry
- Abstract
2022 JUL 31 (NewsRx) -- By a News Reporter-Staff News Editor at Medical Letter on the CDC & FDA -- Researchers detail new data in COVID-19. According to news originating [...]
- Published
- 2022
11. Department of Computer Science and Electronics Researchers Update Current Data on COVID-19 (Machine Learning Algorithms Application in COVID-19 Disease: A Systematic Literature Review and Future Directions)
- Subjects
Machine learning; Severe acute respiratory syndrome; Data mining; Computer science; Coronaviruses; Algorithms; Business; Health; Health care industry; Data warehousing/data mining
- Abstract
2023 JAN 1 (NewsRx) -- By a News Reporter-Staff News Editor at Medical Letter on the CDC & FDA -- Research findings on COVID-19 are discussed in a new report. [...]
- Published
- 2023
12. NTT Research Names Brent Waters as CIS Lab Director
- Subjects
Cryptography; Computer science; Business; Business, international
- Abstract
Distinguished Scientist to Lead Influential Cryptography Lab SUNNYVALE, Calif. -- NTT Research, Inc., a division of NTT (TYO:9432), today announced that it has named Dr. Brent Waters as Director of [...]
- Published
- 2022
13. DroidRL: Feature selection for android malware detection with reinforcement learning
- Author
- Wu, Yinwei; Li, Meijin; Zeng, Qi; Yang, Tao; Wang, Junfeng; Fang, Zhiyang; Cheng, Luyu
- Subjects
Natural language interfaces -- Rankings; Spyware; Computational linguistics -- Rankings; Language processing -- Rankings; Machine learning; Data mining; Computer science; Neural networks -- Rankings; Algorithms; Data warehousing/data mining; Business; Computers and office automation industries
- Abstract
Keywords Reinforcement learning; Android malware detection; Feature selection; RNN; Sequence processing Highlights * This paper applies reinforcement learning algorithms to the feature selection phase of Android malware detection, reducing the burden of feature selection tasks. * This paper adopts Natural Language Processing methods to tackle feature selection. * This paper presents a modifiable framework that can be easily ported to other feature selection tasks for malware detection. Abstract Due to the completely open-source nature of Android, the vulnerabilities exploitable by malware attacks are increasing. Machine learning, which has led to a great evolution in Android malware detection in recent years, is typically applied in the classification phase. Since the correlation between features is ignored in some traditional ranking-based feature selection algorithms, applying wrapper-based feature selection models is a topic worth investigating. Though they consider the correlation between features, wrapper-based approaches are time-consuming for exploring all possible valid feature subsets when processing a large number of Android features. To reduce the computational expense of wrapper-based feature selection, a framework named DroidRL is proposed. The framework deploys the DDQN algorithm to obtain a subset of features which can be used for effective malware classification. To select a valid subset of features over a larger range, the exploration-exploitation policy is applied in the model training phase. The recurrent neural network (RNN) is used as the decision network of DDQN to give the framework the ability to sequentially select features. Word embedding is applied for feature representation to enhance the framework's ability to find the semantic relevance of features. The framework's feature selection exhibits high performance without any human intervention and can be ported to other feature selection tasks with minor changes. The experimental results show a significant effect when using the Random Forest as DroidRL's classifier, which reaches 95.6% accuracy with only 24 features selected. Author Affiliation: (a) College of Software Engineering, Sichuan University, Chengdu, China (b) School of Cyber Science and Engineering, Sichuan University, Chengdu, China (c) College of Computer Science, Sichuan University, Chengdu, China (d) School of Business, Sichuan University, Chengdu, China * Corresponding author. Article History: Received 21 September 2022; Revised 21 December 2022; Accepted 26 January 2023 Byline: Yinwei Wu (a), Meijin Li (a), Qi Zeng (c), Tao Yang (c), Junfeng Wang (c), Zhiyang Fang [fangzhiyang@scu.edu.cn] (*,b), Luyu Cheng (d)
- Published
- 2023
- Full Text
- View/download PDF
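The wrapper-based idea in the abstract above (an agent picks features one by one and is rewarded by classifier accuracy on the chosen subset) can be illustrated with a toy exploration-exploitation loop. The paper's agent is a DDQN with an RNN decision network and word embeddings; this stand-in uses synthetic data, a greedy one-step lookahead, and Random Forest cross-validated accuracy as the reward.

```python
# Hedged sketch of wrapper-style feature selection with an
# exploration-exploitation policy; data and hyperparameters are invented.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=200, n_features=30, n_informative=6,
                           random_state=1)

def reward(subset):
    # Wrapper evaluation: classifier accuracy on the candidate feature subset.
    clf = RandomForestClassifier(n_estimators=25, random_state=1)
    return cross_val_score(clf, X[:, sorted(subset)], y, cv=3).mean()

selected, best = set(), 0.0
for step in range(8):                        # select at most 8 features
    candidates = [f for f in range(X.shape[1]) if f not in selected]
    if rng.random() < 0.2:                   # exploration
        f = int(rng.choice(candidates))
    else:                                    # exploitation: greedy lookahead
        f = max(candidates, key=lambda c: reward(selected | {c}))
    r = reward(selected | {f})
    if r <= best:
        break                                # no improvement: stop selecting
    selected.add(f)
    best = r
print("selected features:", sorted(selected), "accuracy: %.3f" % best)
```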
14. AIDTF: Adversarial training framework for network intrusion detection
- Author
- Xiong, Wen Ding; Luo, Kai Lun; Li, Rui
- Subjects
Detectors; Machine learning; Computer science; Business; Computers and office automation industries
- Abstract
Keywords Cyberspace security; Intrusion detection; Adversarial training; Machine learning; Generative adversarial networks Abstract Network Intrusion Detection Systems (IDS) have achieved high accuracy by widely applying Machine Learning (ML) models. However, most current ML-based IDSs cannot cope with targeted attacks from adversaries because they are commonly trained and tested using fixed data sets. In this paper, we propose an Adversarial Intrusion Detection Training Framework (AIDTF) to improve the robustness of IDSs, which consists of an attacker model (a-model), a defender model (d-model), and a black-box trainer (t-module). Both the a-model and d-model are multilayer perceptrons, and the t-module is the module used to train IDSs. AIDTF improves the accuracy of IDS by using an adversarial training method, which is different from traditional training methods. Taking the distribution of normal samples in the dataset as the distribution that the a-model and d-model need to learn, the goal of the a-model is to generate samples that deceive the d-model, while the goal of the d-model is to determine whether the input samples are real samples, so there is an adversarial relationship between the a-model and d-model. Different types of IDSs can be trained by the t-module using the samples generated from the confrontation between the a-model and the d-model, and we call this kind of IDS the Adversarial Training Intrusion Detection System (ATIDS). The main contribution of this paper is to propose a training method that yields an IDS with high accuracy not only on known test sets but also in identifying unknown disguised attack samples. We tested different types of ATIDSs using the current mainstream attack methods, which include the Fast Gradient Method, Fast Gradient Sign Method, Projected Gradient Descent, and the Jacobian Saliency Map Algorithm. The experimental results prove that AIDTF outperforms other adversarial training methods with not only higher accuracy for the test set but also up to a 99% recognition rate for the attack samples. Author Affiliation: (a) School of Computer Science and Technology, Dongguan University Of Technology, GuangDong, China (b) School of Cyberspace Security, Dongguan University Of Technology, GuangDong, China * Corresponding author. Article History: Received 11 October 2022; Revised 2 February 2023; Accepted 13 February 2023 Byline: Wen Ding Xiong [xiongwending@hotmail.com] (a), Kai Lun Luo [luokl@dgut.edu.cn] (b), Rui Li [ruili@dgut.edu.cn] (*,b)
- Published
- 2023
- Full Text
- View/download PDF
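The a-model/d-model interplay described above matches the standard generative-adversarial minimax objective; as a reference point (the paper's exact loss function is not given in this snippet), it can be written as:

```latex
\min_{a}\;\max_{d}\;
\mathbb{E}_{x \sim p_{\text{normal}}}\bigl[\log d(x)\bigr]
+ \mathbb{E}_{z \sim p_z}\bigl[\log\bigl(1 - d(a(z))\bigr)\bigr]
```

Here the a-model plays the generator, trying to produce samples that look like normal traffic, while the d-model plays the discriminator; the samples produced along the way are then fed to the t-module to train the IDS.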
15. A forward-secure and efficient authentication protocol through lattice-based group signature in VANETs scenarios
- Author
- Cao, Yibo; Xu, Shiyuan; Chen, Xue; He, Yunhua; Jiang, Shuo
- Subjects
Hardness; Computer science; Algorithms; Business; Computers; Telecommunications industry
- Abstract
Keywords Forward security; Group signature; Lattice; Message authentication; Vehicular ad hoc networks (VANETs); Traceability; Applied cryptography; Information security Abstract Message authentication has been a research hotspot in current vehicular ad hoc networks (VANETs). Many researchers adopt group signatures based on number-theoretic assumptions to authenticate the vehicular users' identities. Nevertheless, the classical group signature is vulnerable to quantum computing attacks and does not consider the negative consequences of secret key disclosure. In this paper, to address these problems, we propose a novel group signature protocol for authentication in VANETs, which is based on lattice cryptography to achieve quantum resistance and on a Bonsai-tree signature architecture to achieve forward security. Our scheme is proven secure in terms of traceability, anonymity, and forward security under the Short Integer Solution (SIS) and Learning With Errors (LWE) hardness assumptions. Through comprehensive performance evaluation, we demonstrate that the storage overhead of our scheme is relatively diminutive and the computation costs of the sign and verify algorithms are efficient and practical compared with other existing schemes. Author Affiliation: (a) School of Information Engineering, North China University of Technology, China (b) School of Cyberspace Security, Beijing University of Posts and Telecommunications, China (c) Department of Computer Science, The University of Hong Kong, China (d) Department of Computing, The Hong Kong Polytechnic University, China (e) Institute of Information Engineering, Chinese Academy of Sciences, China * Corresponding authors. Article History: Received 29 March 2022; Revised 6 June 2022; Accepted 30 June 2022 (footnote)1 Yibo Cao and Shiyuan Xu contributed equally to this paper. Byline: Yibo Cao (a,b,1), Shiyuan Xu [13501199447@163.com] (a,c,1,*), Xue Chen (a,d), Yunhua He (a,e,*), Shuo Jiang (a)
- Published
- 2022
- Full Text
- View/download PDF
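For reference, the two hardness assumptions cited in the abstract above are typically stated as follows (parameters n, m, q, β and error distribution χ in their usual roles; the paper's concrete parameter choices are not given in this snippet):

```latex
\text{SIS}_{n,m,q,\beta}:\ \text{given uniform } A \in \mathbb{Z}_q^{n \times m},
\ \text{find } z \in \mathbb{Z}^m \setminus \{0\}
\ \text{with } Az \equiv 0 \ (\mathrm{mod}\ q)\ \text{and}\ \lVert z \rVert \le \beta . \\[4pt]
\text{LWE}_{n,q,\chi}:\ \text{given } (A,\ b = As + e \bmod q)\ \text{with } e \leftarrow \chi^m,
\ \text{distinguish } b \ \text{from uniform (equivalently, recover } s \in \mathbb{Z}_q^n) .
```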
16. Utility Maximization for Splittable Task Offloading in IoT Edge Network
- Author
- Wang, Jiacheng; Zhang, Jianhui; Liu, Liming; Zheng, Xuzhao; Wang, Hanxiang; Gao, Zhigang
- Subjects
Electric power production; Computer science; Algorithms; Business; Computers; Telecommunications industry
- Abstract
Keywords Internet of Things; Edge Networks; Time-Expanded Graph; Utility Maximization; Task Offloading Abstract This paper comprehensively investigates spatio-temporal dynamics for task offloading in the Internet of Things (IoT) Edge Network (iTEN) in order to maximize utility. Different from previous works in the literature that consider only some dynamic factors, this paper takes into account the time-varying wireless link quality, communication power, wireless interference on task offloading, and the spatio-temporal dynamics of energy harvested by terminals and their charging efficiency. Our goal is to maximize utility during the task offloading by considering the above-mentioned factors, which are relatively complex but closer to reality. This paper designs the Time-Expanded Graph (TEG) to transfer network dynamics and wireless interference into static weights in the graph so as to devise the algorithm easily. This paper first devises the Single Terminal (ST) utility maximization algorithm on the basis of TEG when there is only one terminal. In the case of multiple terminals, it is very complicated to directly solve the utility maximization of the task offloading. This paper adopts the framework of Garg and Könemann and devises a multi-terminal algorithm (MT) to maximize the total utility of all terminals. MT is a fast approximation algorithm and its approximation ratio is 1-3ς, where 0 [...] Author Affiliation: (a) School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, 310018, China (b) Hangzhou Hikvision Digital Technology Co., Ltd., Hangzhou, 310052, China * Corresponding author. Article History: Received 22 December 2021; Revised 27 June 2022; Accepted 4 July 2022 (footnote)1 Equal contribution and shared co-first authorship. Byline: Jiacheng Wang [jcwang@hdu.edu.cn] (a,1), Jianhui Zhang [jh_zhang@hdu.edu.cn] (a,1), Liming Liu [limingliu@hdu.edu.cn] (a), Xuzhao Zheng [zhengxuzhao66@163.com] (b), Hanxiang Wang [hx_wang@hdu.edu.cn] (a), Zhigang Gao [gaozhigang@hdu.edu.cn] (a,*)
- Published
- 2022
- Full Text
- View/download PDF
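The time-expanded graph named in the abstract above is a general trick: make one copy of every node per time slot, so time-varying link quality becomes a static edge weight. A minimal illustrative construction follows; the node names, slot count, and random qualities are invented, not the paper's model.

```python
# Hedged sketch of a time-expanded graph (TEG): (node, slot) vertices with
# per-slot link-quality weights, plus "waiting" arcs that hold a task in place.
import itertools, random

random.seed(0)
nodes = ["terminal", "relay", "edge_server"]
T = 4                                          # number of time slots (assumed)

teg = {}                                       # (u, t) -> [((v, t+1), weight)]
for t, (u, v) in itertools.product(range(T - 1),
                                   itertools.combinations(nodes, 2)):
    w = round(random.uniform(0.1, 1.0), 2)     # stand-in for slot-t link quality
    teg.setdefault((u, t), []).append(((v, t + 1), w))
    teg.setdefault((v, t), []).append(((u, t + 1), w))
for t, u in itertools.product(range(T - 1), nodes):
    # Waiting arc: a node may keep its task across a slot.
    teg.setdefault((u, t), []).append(((u, t + 1), 1.0))

for (u, t), arcs in sorted(teg.items()):
    print(f"({u}, slot {t}) ->", arcs)
```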
17. A goal-driven ruin and recreate heuristic for the 2D variable-sized bin packing problem with guillotine constraints
- Author
- Gardeyn, Jeroen; Wauters, Tony
- Subjects
Computer science; Algorithms; Business, general; Business; Business, international
- Abstract
Keywords Packing; 2D bin packing; Guillotine; Heuristic; Variable-sized bins Highlights * Heuristic for solving the 2D bin packing problem with guillotine constraints. * Supports variants with variable-sized bins and 90-degree rotation of items. * Combines the ruin and recreate paradigm with a goal-driven approach. * Experiments demonstrate the effectiveness of the introduced heuristic. Abstract This paper addresses the two-dimensional bin packing problem with guillotine constraints. The problem requires a set of rectangular items to be cut from larger rectangles, known as bins, while only making use of edge-to-edge (guillotine) cuts. The goal is to minimize the total bin area needed to cut all required items. This paper also addresses variants of the problem which permit 90° rotation of items and/or a heterogeneous set of bins. A novel heuristic is introduced which is based on the ruin and recreate paradigm combined with a goal-driven approach. When applying the proposed heuristic to benchmark instances from the literature, it outperforms the current state-of-the-art algorithms in terms of solution quality for all variants of the problem considered. Author Affiliation: KU Leuven, Department of Computer Science, NUMA, Belgium * Corresponding author. Article History: Received 13 January 2021; Accepted 17 November 2021 Byline: Jeroen Gardeyn [jeroen.gardeyn@kuleuven.be] (*), Tony Wauters [tony.wauters@kuleuven.be]
- Published
- 2022
- Full Text
- View/download PDF
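The ruin-and-recreate control loop named in the abstract above is easy to show in skeleton form. The sketch below uses a 1D bin-packing stand-in (the paper's 2D guillotine placement is far more involved): "ruin" empties one bin, "recreate" reinserts its items first-fit-decreasing, and a goal-driven acceptance rule never allows more bins.

```python
# Hedged skeleton of ruin-and-recreate on a 1D bin-packing stand-in;
# item sizes, capacity, and iteration count are invented.
import random

random.seed(2)
ITEMS = [random.randint(5, 40) for _ in range(30)]
CAP = 100

def recreate(items, bins):
    # First-fit-decreasing reinsertion as the "recreate" step.
    for it in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + it <= CAP:
                b.append(it)
                break
        else:
            bins.append([it])
    return bins

best = recreate(ITEMS, [])
for _ in range(500):
    bins = [b[:] for b in best]
    victim = bins.pop(random.randrange(len(bins)))   # "ruin": empty one bin
    cand = recreate(victim, bins)
    if len(cand) <= len(best):                        # goal-driven acceptance
        best = cand
print("bins used:", len(best))
```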
18. Efficient persistent memory file systems using virtual superpages with multi-level allocator
- Author
- Yang, Chaoshu; Yu, Zhiwang; Zhang, Runyu; Nie, Shun; Li, Hui; Chen, Xianzhang; Long, Linbo; Liu, Duo
- Subjects
Strategic planning (Business); Memory; Computer science; Business; Computers and office automation industries
- Abstract
Keywords Persistent memory; Persistent memory file system; Virtual superpage; Space management strategy Abstract Emerging persistent memory file systems can significantly improve performance by utilizing the advantages of Persistent Memories (PMs). In particular, they can employ superpages of PMs to alleviate the overhead of locating file data and space management and to reduce the TLB miss rate. Unfortunately, file systems that organize file data using superpages also induce two critical problems. First, ensuring data consistency can cause severe write amplification during overwrites. Second, existing superpage management may lead to large space waste. In this paper, we propose a Virtual Superpage Mechanism (VSM) to solve these problems by taking advantage of the virtual address space. In detail, VSM adopts a multi-grained copy-on-write mechanism to reduce the write amplification and a zero-copy file data migration mechanism to improve the space utilization efficiency. We implement the proposed VSM mechanism in the Linux kernel. Compared with PMFS and NOVA, the state-of-the-art persistent memory file systems, the experimental results show that VSM improves the performance by 36% and 14% on average, respectively. Meanwhile, VSM can achieve the same space utilization efficiency as file systems using 4 KB pages. Furthermore, we also propose a multi-level allocator, called VSMA, to further improve the performance of VSM. Experimental results show that the proposed VSMA outperforms VSM by 14% on average. Author Affiliation: (a) State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, China (b) College of Computer Science, Chongqing University, Chongqing, China (c) College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China * Corresponding author. Article History: Received 21 February 2022; Revised 19 May 2022; Accepted 18 June 2022 (footnote)☆ A preliminary version of this paper was presented at the 2020 Design, Automation and Test in Europe Conference (DATE 2020). Byline: Chaoshu Yang (a), Zhiwang Yu (a), Runyu Zhang (b), Shun Nie (b), Hui Li (a), Xianzhang Chen (b), Linbo Long (c), Duo Liu [liuduo@cqu.edu.cn] (b,*)
- Published
- 2022
- Full Text
- View/download PDF
19. Microsoft's CEO shares his fear about Artificial Intelligence (AI)
- Subjects
Microsoft Corp.; Computer software industry; Artificial intelligence; Computer science; Business
- Abstract
To access, purchase, authenticate, or subscribe to the full-text of this article, please visit this link: https://www.thestreet.com/technology/microsofts-ceo-nadella-shares-fear-about-artificial-intelligence-ai Microsoft has been a leader in artificial intelligence (AI) taking it from something [...]
- Published
- 2023
20. Packet-in request redirection: A load-balancing mechanism for minimizing control plane response time in SDNs
- Author
- Xia, Rui; Dai, Haipeng; Zheng, Jiaqi; Xu, Hong; Li, Meng; Chen, Guihai
- Subjects
Computer science; Algorithms; Business; Computers and office automation industries
- Abstract
Keywords Software defined networking (SDN); Distributed control plane; Load balancing; Lyapunov optimization; Approximation algorithm Abstract A distributed control plane is more scalable and robust in software defined networking. This paper focuses on controller load balancing using packet-in request redirection, that is, given the instantaneous state of the system, determining whether to redirect packet-in requests for each switch, such that the overall control plane response time (CPRT) is minimized. To address the above problem, we propose a framework based on Lyapunov optimization. First, we use the drift-plus-penalty algorithm to combine the CPRT minimization problem with controller capacity constraints, and further derive a non-linear program, whose optimal solution is obtained with brute force using standard linearization techniques. Second, we present a greedy strategy to efficiently obtain a solution with a bounded approximation ratio. Third, we reformulate the program as a problem of maximizing a non-monotone submodular function subject to matroid constraints. We implement a controller prototype for packet-in request redirection, and conduct trace-driven simulations to validate our theoretical results. The results show that our algorithms can reduce the average CPRT by 81.6% compared to static assignment, and achieve a 3x improvement in maximum controller capacity violation ratio. Author Affiliation: (a) State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, 210023, China (b) Department of Computer Science and Engineering, Chinese University of Hong Kong, Shatin, Hong Kong Special Administrative Region of China * Corresponding authors. Article History: Received 16 February 2022; Revised 13 May 2022; Accepted 25 May 2022 (footnote)☆ This paper is an extended version of work published in [...]. This work was supported in part by the National Natural Science Foundation of China under Grant No. 61872178, 61832005, 61672276, 61872173, 61802172, in part by the Natural Science Foundation of Jiangsu Province, China under Grant No. BK20181251, in part by the Fundamental Research Funds for the Central Universities, China under Grant 021014380079, in part by funding from the Research Grants Council of Hong Kong (GRF 11209520) and from CUHK, Hong Kong Special Administrative Region of China (4937007, 4937008, 5501329, 5501517). Byline: Rui Xia [xiarui@smail.nju.edu.cn] (a), Haipeng Dai [haipengdai@nju.edu.cn] (a,*), Jiaqi Zheng [jzheng@nju.edu.cn] (a,*), Hong Xu [hongxu@cuhk.edu.hk] (b), Meng Li [menson@smail.nju.edu.cn] (a), Guihai Chen [gchen@nju.edu.cn] (a,*)
- Published
- 2022
- Full Text
- View/download PDF
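The drift-plus-penalty construction named in the abstract above has a standard form. With Θ(t) the vector of virtual queues (here tracking controller capacity violations), L a quadratic Lyapunov function, and V the knob trading queue stability against response time, each slot greedily minimizes (the paper's exact queue definitions are not given in this snippet):

```latex
\min\;
\underbrace{\mathbb{E}\bigl[L(\Theta(t+1)) - L(\Theta(t)) \mid \Theta(t)\bigr]}_{\text{Lyapunov drift (capacity constraints)}}
\;+\; V \cdot \underbrace{\mathbb{E}\bigl[\mathrm{CPRT}(t) \mid \Theta(t)\bigr]}_{\text{penalty (response time)}},
\qquad L(\Theta) = \tfrac{1}{2}\textstyle\sum_{i} \Theta_i^2 .
```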
21. Computer Science Department Researchers Further Understanding of COVID-19 (Intelligent Tutoring System: Learning Math for 6th-Grade Primary School Students)
- Subjects
Intelligent tutoring systems; Elementary school students; Computer science; Coronaviruses; Business; Health; Health care industry
- Abstract
2021 JUL 4 (NewsRx) -- By a News Reporter-Staff News Editor at Medical Letter on the CDC & FDA -- Fresh data on coronavirus are presented in a new report. [...]
- Published
- 2021
22. NTT Research Names Sanjam Garg Senior Scientist in its CIS Lab
- Subjects
Electrical engineering; Scientists; Cryptography; Computer science; Business; Business, international; University of California
- Abstract
Award-winning computer scientist with interests in theory and new constructions joins like-minded team SUNNYVALE, Calif. -- NTT Research, Inc., a division of NTT (TYO:9432), today announced that it has named [...]
- Published
- 2021
23. Deep Learning (DL)-based adaptive transport layer control in UAV Swarm Networks
- Author
- Mao, Qian; Zhang, Lin; Hu, Fei; Bentley, Elizabeth Serena; Kumar, Sunil
- Subjects
Drone aircraft; Computer science; Algorithms; Business; Computers; Telecommunications industry
- Abstract
Keywords Congestion control; Deep Learning (DL); Network coding; Transport Layer; UAV Networks Abstract This paper focuses on the congestion control issues in Unmanned Aerial Vehicle (UAV) Swarm Networks (USNs). In a USN, many network factors can cause segment loss, including dynamic swarming, high mobility, and link fading loss. With traditional transport layer protocols such as the Transmission Control Protocol (TCP), these losses are interpreted as congestion events and cause the data sending rate to be decreased dramatically, thereby impacting throughput. In this paper, a learning-based adaptive network coding scheme is proposed to handle segment loss. In this scheme, a certain amount of redundancy is attached to the original data. If the segment loss is caused by random factors (such as radio interference), the lost segments are retrieved by decoding. However, if the loss is caused by congestion, the sender will retransmit the lost segments and decrease the sending rate. The coding rate is a critical factor, which should guarantee that random loss can be retrieved by decoding while congestion loss triggers retransmission and sending rate reduction. To achieve this goal, a Deep Learning (DL) algorithm is proposed, which comprehensively considers the wireless network conditions and dynamically optimizes the coding rate. Our experimental results show that the DL-based network coding scheme provides improved throughput and end-to-end delay compared to the TCP and general network coding schemes. Author Affiliation: (a) Mathematics & Computer Science Department, Whitworth University, Spokane, USA (b) Electrical and Computer Engineering, University of Alabama, Tuscaloosa, AL, USA (c) Air Force Research Laboratory, Rome, NY, USA (d) Electrical and Computer Engineering, San Diego State University, San Diego, CA, USA * Corresponding author. Article History: Received 10 February 2021; Revised 25 August 2021; Accepted 25 September 2021 (footnote)☆ This work was supported by the U.S. Air Force Research Laboratory, under agreement No. FA8750-18-1-0023. DISTRIBUTION STATEMENT A: Approved for Public Release; distribution unlimited 88ABW-2020-3119 on 08 Oct 2020. Byline: Qian Mao (a), Lin Zhang (b), Fei Hu [fei@eng.ua.edu] (b,*), Elizabeth Serena Bentley (c), Sunil Kumar (d)
- Published
- 2021
- Full Text
- View/download PDF
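The rate-adaptation idea in the abstract above can be made concrete with a toy calculation: attach just enough redundancy that the expected random losses are decodable, so that only congestion losses trigger retransmission and rate reduction. The binomial loss model and safety margin below are invented; the paper derives the rate with a deep-learning model instead.

```python
# Hedged toy model of choosing a network-coding redundancy level from an
# estimated random-loss probability (all constants are assumptions).
import math

def repair_segments(k, p_random, margin=2.0):
    """Redundancy for k data segments under random-loss probability p_random."""
    expected_losses = k * p_random
    sigma = math.sqrt(k * p_random * (1 - p_random))   # binomial std-dev
    return math.ceil(expected_losses + margin * sigma)

for p in (0.01, 0.05, 0.15):
    r = repair_segments(k=32, p_random=p)
    print(f"loss rate {p:.2f}: send 32 data + {r} repair segments "
          f"(coding rate {32 / (32 + r):.2f})")
```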
24. Introducing Cotinus.org, and Their Newly-Launched Free to Publish Open Access Computer Science Journals
- Subjects
Publishing industry; Computer science; Open access journals; Business; Business, international
- Abstract
M2 PRESSWIRE-March 22, 2022-: Introducing Cotinus.org, and Their Newly-Launched Free to Publish Open Access Computer Science Journals (C)1994-2022 M2 COMMUNICATIONS RDATE:21032022 Cotinus is a non-profit association of academic and industry [...]
- Published
- 2022
25. Situational Crime Prevention (SCP) techniques to prevent and control cybercrimes: A focused systematic review
- Author
- Ho, Heemeng; Ko, Ryan; Mazerolle, Lorraine
- Subjects
Electrical engineering; Cyberterrorism; Scientists; Crime prevention; Computer science; Developmental biology; Business; Computers and office automation industries
- Abstract
Keywords Cybercrime; Situational crime prevention; SCP; Criminology; Cyber-focused crime; Cyber-enabled crime Highlights * Evolution and developments of cybercrimes into cyber-focused and cyber-enabled crimes. * Survey of research articles on the application of Situational Crime Prevention (SCP) techniques in preventing cybercrimes. * Research gaps and future areas of research on SCP in cybercrimes. Abstract Situational Crime Prevention (SCP) is a criminological approach that is shown to reduce crime opportunities drawing from five different strategies comprising 25 techniques. With the global increase in cybercrime, practitioners and researchers are increasingly investigating opportunities for applying SCP strategies and techniques to prevent cyber-focused and cyber-enabled crimes. Recent research proposes ways that SCP can be applied to cybercrime. Yet most of this research utilizes only a few of the SCP techniques and the linkages between the SCP techniques and opportunities for reducing cybercrimes are rarely made explicit. In this paper we evaluate the relevance of the full spectrum of SCP techniques to cybercrime and explicate how computer scientists, cybercrime and cybersecurity researchers and practitioners apply SCP principles to prevent and control cyber-enabled crime. Through a focused systematic review of 352 articles across computer science, criminal justice and criminology literature using the PRISMA method, this paper clarifies terminologies, explores the rise of cybercrimes, and explains the value of SCP for responding to cybercrimes. We provide a review of the current research undertaken to apply SCP to cybercrimes and conclude with a discussion on research gaps and potential future areas of research. Author Affiliation: (a) Faculty of Engineering, Cyber Security, School of Information Technology and Electrical Engineering, Architecture and Information Technology, The University of Queensland, Brisbane, QLD 4072, Australia (b) Faculty of Humanities and Social Sciences, School of Social Science, The University of Queensland, Brisbane, QLD 4072, Australia * Corresponding author. Article History: Received 8 April 2021; Revised 5 December 2021; Accepted 10 January 2022 Byline: Heemeng Ho [heemeng.ho@uq.net.au] (a,*), Ryan Ko [ryan.ko@uq.edu.au] (a), Lorraine Mazerolle [l.mazerolle@uq.edu.au] (b)
- Published
- 2022
- Full Text
- View/download PDF
26. Task offloading optimization of cruising UAV with fixed trajectory
- Author
- Liu, Peng; He, Han; Fu, Tingting; Lu, Huijuan; Alelaiwi, Abdulhameed; Wasi, Md Wasif Islam
- Subjects
Electromagnetic waves; Electromagnetic radiation; Machine learning; Data mining; Electromagnetism; Computer science; Fire prevention; Algorithms; Energy consumption; Electric waves; Data warehousing/data mining; Business; Computers; Telecommunications industry
- Abstract
Keywords Unmanned aerial vehicles; Task offloading; Edge computing; Q-learning Abstract Unmanned aerial vehicles (UAVs) have been deployed in many applications, such as power grid inspection, forest fire prevention, and pollution surveillance. They often cruise along a fixed route above the target area. Due to the cost of remote communication and local computationally intensive tasks, resource-constrained drones tend to offload tasks to edge servers. In most cases, drones have no prior knowledge of user nodes and edge servers, and must reduce their altitude to provide services. Therefore, it is necessary to carefully decide when and where to collect and offload tasks to avoid unnecessary energy consumption and time delays. In this paper, we formulate the benefit maximization problem under constraints such as time sensitivity, and propose an optimized task offloading strategy based on a reinforcement learning algorithm. We strive to directly address the deficiencies in the profit maximization problem with a modified Q-learning algorithm. We test the performance under practical application scenarios with different environmental parameters. The experimental results prove that the solution proposed in this paper has better convergence and performance, as well as better reusability in similar application scenarios. Author Affiliation: (a) School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China (b) Key Laboratory of Electromagnetic Wave Information Technology and Metrology of Zhejiang Province, College of Information Engineering, China Jiliang University, Hangzhou 310018, China (c) College of Computer and Information Sciences and Research Chair of Smart Technologies, King Saud University, Riyadh 11543, Saudi Arabia * Corresponding author. Article History: Received 30 March 2021; Revised 5 July 2021; Accepted 2 August 2021 Byline: Peng Liu (a,b), Han He [15306507997@163.com] (a,*), Tingting Fu (a), Huijuan Lu (b), Abdulhameed Alelaiwi (c), Md Wasif Islam Wasi (a)
- Published
- 2021
- Full Text
- View/download PDF
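For reference, the tabular Q-learning update the abstract above builds on looks like the following minimal sketch. The states (waypoint index on a fixed cruise route), actions, and invented reward are placeholders for the paper's modified algorithm.

```python
# Hedged sketch of tabular Q-learning for offload/skip decisions along a
# fixed trajectory; all states, rewards, and hyperparameters are invented.
import random

random.seed(3)
N_WAYPOINTS, ACTIONS = 6, ("offload", "skip")
Q = {(s, a): 0.0 for s in range(N_WAYPOINTS) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1

def reward(s, a):
    # Invented profit: offloading pays off only near an edge server (even slots).
    return (3.0 if s % 2 == 0 else -1.0) if a == "offload" else 0.0

for _ in range(5000):
    for s in range(N_WAYPOINTS):
        a = random.choice(ACTIONS) if random.random() < eps else \
            max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = (s + 1) % N_WAYPOINTS                     # next waypoint on route
        target = reward(s, a) + gamma * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])      # Q-learning update

policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_WAYPOINTS)}
print(policy)
```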
27. Content caching for shared medium networks under heterogeneous users' behaviors
- Author
- Ghaffari Sheshjavani, Abdollah; Khonsari, Ahmad; Shariatpanahi, Seyed Pooya; Moradian, Masoumeh
- Subjects
Computer science; Business; Computers; Telecommunications industry
- Abstract
Keywords Cache-aided communication; Small cell networks; Coded caching; Heterogeneous user preference Abstract Content caching is a widely studied technique aimed at reducing the network load imposed by data transmission during peak time while ensuring users' quality of experience. It has been shown that when there is a common link between caches and the server, delivering contents via the coded caching scheme can significantly improve performance over conventional caching. However, finding the optimal content placement is a challenge in the case of heterogeneous users' behaviors. In this paper we consider heterogeneous numbers of demands and a non-uniform content popularity distribution in the case of homogeneous and heterogeneous user preferences. We propose a hybrid coded--uncoded caching scheme to trade off between popularity and diversity. We derive explicit closed-form expressions of the server load for the proposed hybrid scheme and formulate the corresponding optimization problem. Results show that the proposed hybrid caching scheme can reduce the server load significantly and outperforms the baseline pure coded and pure uncoded schemes and previous works in the literature for both homogeneous and heterogeneous user preferences. Author Affiliation: (a) School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Iran (b) School of Computer Science, Institute for Research in Fundamental Sciences (IPM), Iran * Corresponding author at: School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Iran. Article History: Received 25 March 2021; Revised 21 June 2021; Accepted 2 September 2021 (footnote)☆ Parts of this paper are an extended version of the problem presented at the WCNC 2020 conference (Sheshjavani et al., 2020). Byline: Abdollah Ghaffari Sheshjavani [abdollah.ghaffari@ut.ac.ir] (a), Ahmad Khonsari [a_khonsari@ut.ac.ir] (a,b,*), Seyed Pooya Shariatpanahi [p.shariatpanahi@ut.ac.ir] (a), Masoumeh Moradian [mmoradian@ipm.ir] (b)
- Published
- 2021
- Full Text
- View/download PDF
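For context on the pure-coded baseline the abstract above compares against, the classic centralized coded-caching delivery rate (Maddah-Ali and Niesen) for K users, per-user cache size M, and a library of N equally popular files is:

```latex
R(M) \;=\; \underbrace{K\Bigl(1 - \frac{M}{N}\Bigr)}_{\text{local caching gain}}
\cdot \underbrace{\frac{1}{1 + KM/N}}_{\text{coded multicasting gain}},
\qquad M \in \frac{N}{K}\,\{0, 1, \dots, K\} .
```

The hybrid scheme in this record trades this multicast gain off against caching the most popular contents uncoded when popularity and demand counts are non-uniform.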
28. A machine learning assisted data placement mechanism for hybrid storage systems
- Author
- Ren, Jinting; Chen, Xianzhang; Liu, Duo; Tan, Yujuan; Duan, Moming; Li, Ruolan; Liang, Liang
- Subjects
Machine learning; Computer storage device industry; Data mining; Computer science; Tracers (Biology); Algorithms; Data warehousing/data mining; Business; Computers and office automation industries
- Abstract
Keywords Data placement; Hybrid storage; Machine learning Abstract Emerging applications produce massive files that show different properties in file size, lifetime, and read/write frequency. Existing hybrid storage systems place these files onto different storage mediums assuming that the access patterns of files are fixed. However, we find that the access patterns of files are changeable during their lifetime. The key to improving file access performance is to adaptively place files on the hybrid storage system using the run-time status and the properties of both the files and the storage systems. In this paper, we propose a machine learning assisted data placement mechanism that adaptively places files onto the proper storage medium by predicting access patterns of files. We design a PMFS-based tracer to collect file access features for prediction and show how this approach adapts to changeable access patterns. Based on data access prediction results, we present a linear data placement algorithm to optimize the data access performance on the hybrid storage mediums. Extensive experimental results show that the proposed learning algorithm can achieve over 90% accuracy for predicting file access patterns. Meanwhile, the proposed mechanism achieves over 17% improvement in system performance for file accesses compared with the state-of-the-art linear-time data placement methods. Author Affiliation: (a) College of Computer Science, Chongqing University, Chongqing, China (b) School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, China * Corresponding author. Article History: Received 22 March 2021; Revised 12 September 2021; Accepted 25 September 2021 Byline: Jinting Ren (a), Xianzhang Chen [xzchen109@gmail.com] (a,*), Duo Liu (a), Yujuan Tan (a), Moming Duan (a), Ruolan Li (a), Liang Liang (b)
- Published
- 2021
- Full Text
- View/download PDF
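The placement idea in the abstract above, predict a file's future access pattern from run-time features and put it on the matching tier, can be sketched in a few lines. The features, labels, training samples, and the decision-tree model below are all assumptions, not the paper's exact design.

```python
# Hedged sketch of ML-assisted tiering: classify a file's access pattern,
# then place it on the predicted-best storage medium.
from sklearn.tree import DecisionTreeClassifier

# (size_KB, writes/hour, reads/hour, age_hours) -> tier observed to be best
train_X = [[4, 120, 300, 1], [8, 90, 250, 2], [4096, 0, 2, 400],
           [2048, 1, 1, 300], [16, 60, 80, 5], [8192, 0, 0, 500]]
train_y = ["pm", "pm", "ssd", "ssd", "pm", "ssd"]   # pm = persistent memory

model = DecisionTreeClassifier(random_state=0).fit(train_X, train_y)

def place(file_features):
    tier = model.predict([file_features])[0]
    return f"place on {tier}"

print(place([8, 100, 200, 1]))    # hot small file -> persistent memory
print(place([4096, 0, 1, 350]))   # cold large file -> SSD
```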
29. Light: A Compatible, high-performance and scalable user-level network stack
- Author
- Li, Junfeng; Li, Dan; Jiang, Huiyou; Lin, Du; Geng, Jinkun; Huang, Yukai; Ramakrishnan, K.K.; Zheng, Kai
- Subjects
Bees; Computer science; Business; Computers; Telecommunications industry
- Abstract
Keywords User-level network stack; Compatibility; High performance; Scalability; Packet processing Abstract As the number of CPU cores and the speed of Ethernet NICs keep increasing on server machines, the network stack in the kernel has become the bottleneck for applications demanding very high throughput and ultra-low latency. Recently there is a trend towards moving the network stack out of the kernel. However, most kernel-bypass network stacks discard the POSIX APIs that legacy applications have been built on, and the intricate work of transplanting applications creates a barrier to real-world deployment of kernel-bypass stacks. In this work, we propose Light, a novel user-level network stack, which not only gains highly scalable performance on multi-core servers, but also achieves compatibility with legacy applications. For compatibility, Light realizes efficient blocking APIs in the user space, intercepts network-related APIs in a non-intrusive manner, and uses the FD space separation technique for proper API redirection. For high performance and scalability, Light adopts lock-free shared-queue based inter-process communication and full connection affinity to reduce the overheads of system calls, cache misses, etc. Experiments demonstrate that many types of legacy applications can run on Light without modifying their source code. Compared with the latest kernel stack, Nginx on Light achieves up to 2.86x throughput and 78.2% lower tail latency (99.9th percentile) with 14 CPU cores. Author Affiliation: (a) Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China (b) Computer Science Department, Stanford University, CA 94305, USA (c) Department of Computer Science and Engineering, University of California, Riverside, CA 92521, USA (d) Huawei Technologies, Shenzhen 518129, China * Corresponding author. Article History: Received 29 July 2021; Revised 3 December 2022; Accepted 1 April 2023 (footnote)☆ This paper significantly extends a previous version (Huang et al., 2017) published at APNet'17 with the following work: (1) We elaborate the motivation and position of this work more clearly in §1 and §2, including a categorization and comparison of our work with related work, and a summary of design challenges; (2) We describe a number of mechanisms to retain the compatibility and reduce overheads in more detail in §3, including more figure illustrations; (3) We evaluate our solution with more experiments in §5, which covers more types of real-world applications; we also add detailed analysis about the cause of the performance difference between two pipeline models. No conflict of interest exists in the submission of this manuscript. None of the funding sources played a role in the design, collection, analysis or interpretation of the data, in the writing of the report, or in the decision to submit the manuscript for publication. Byline: Junfeng Li (a), Dan Li [tolidan@tsinghua.edu.cn] (a,*), Huiyou Jiang (a), Du Lin (a), Jinkun Geng (b), Yukai Huang (a), K.K. Ramakrishnan (c), Kai Zheng (d)
- Published
- 2023
- Full Text
- View/download PDF
30. Optimal algorithm for min-max line barrier coverage with mobile sensors on 2-dimensional plane
- Author
- Yao, Pei; Guo, Longkun; Li, Peng; Lin, Jiawei
- Subjects
Sensors; Computer science; Algorithms; Energy consumption; Business; Computers; Telecommunications industry
- Abstract
Keywords Barrier coverage; Mobile sensor; Exact algorithm; Optimal solution; Approximation algorithm Abstract Emerging IoT applications impose line barrier coverage (LBC) tasks with a min--max movement objective due to requirements of energy balance, fairness, etc. In LBC, we are given a line barrier and a set of n sensors distributed on the plane. The aim is to move the sensors to fully cover the given barrier, such that the maximum movement of the mobile sensors is minimized and hence the energy consumption of the sensors is balanced. This paper proposes an exact algorithm to optimally solve LBC, which achieves a runtime of O(n²), comparing favorably to the previous state-of-the-art runtime of O(n² log n). The key idea of the improvement is acceleration-via-approximation: devise a novel approximation algorithm and then use it to accelerate the calculation of optimum solutions. Extensive numerical experiments were carried out to evaluate the practical performance of our algorithm against other baselines, demonstrating its performance gain over the previous state-of-the-art algorithms. Author Affiliation: (a) College of Mathematics and Statistics, Anhui Normal University, Wuhu 241002, PR China (b) School of Computer Science, Qilu University of Technology, Jinan 250301, PR China (c) School of Mathematics and Statistics, Fuzhou University, Fuzhou 360116, PR China (d) Google LLC, Kirkland, WA, United States (e) College of Computer and Data Science/ College of Software, Fuzhou University, Fuzhou 360116, PR China * Corresponding author. Article History: Received 4 August 2022; Revised 15 February 2023; Accepted 14 March 2023 Byline: Pei Yao [pei.yao@foxmail.com] (a,c), Longkun Guo [longkun.guo@fzu.edu.cn] (b,c,*), Peng Li [penl@google.com] (d), Jiawei Lin [jiawei.lin_1931@foxmail.com] (e)
- Published
- 2023
- Full Text
- View/download PDF
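For orientation, the standard decide-and-search baseline for min-max line barrier coverage is easy to sketch: binary-search the movement budget r and greedily test whether sensors already on the line can cover the barrier when none moves more than r. The paper's contribution is a faster exact O(n²) algorithm; the sensors, radius, and 1D simplification below are invented.

```python
# Hedged sketch: feasibility test + binary search for min-max barrier
# coverage with equal-radius sensors moving along the line (assumed setup).
def feasible(r, xs, radius, L):
    left = 0.0                                   # leftmost uncovered point
    for x in sorted(xs):
        if left >= L:
            break
        if x - r > left + radius:                # gap at `left` is unreachable
            return False
        center = min(x + r, left + radius)       # cover `left`, push far right
        left = max(left, center + radius)
    return left >= L

xs, radius, L = [1.0, 2.0, 6.0, 9.5], 1.5, 10.0  # invented instance
lo, hi = 0.0, L                                  # feasible(L) always holds here
for _ in range(50):                              # binary search on the budget
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if not feasible(mid, xs, radius, L) else (lo, mid)
print(f"min-max movement ~ {hi:.4f}")
```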
31. Traffic-aware efficient consistency update in NFV-enabled software defined networking
- Author
- Li, Pan; Liu, Guiyan; Guo, Songtao; Zeng, Yue
- Subjects
Traffic congestion; Virtualization; Computer science; Algorithms; Business; Computers; Telecommunications industry
- Abstract
Keywords Consistency update; Policy consistency; Transient congestion; Software defined networking; Network function virtualization Abstract In network function virtualization (NFV)-enabled software defined networks, the controller needs to frequently update the flow forwarding rules in the data plane to adapt to dynamic changes in network topologies or service requests. However, inconsistent rule updates may lead to blackholes, loops, transient congestion or policy violations (e.g., packets do not traverse designated network functions in a specific order), resulting in service interruption and throughput degradation. Therefore, this paper proposes an effective consistent rule update mechanism to avoid the above four problems simultaneously, while improving network throughput and satisfying user requests. Specifically, we first build three effective models to avoid blackholes, loops, and policy violations. Then, considering that network function nodes may change the sizes of their processed flows, we build a congestion avoidance model based on traffic changes to avoid congestion, which can reduce unnecessary rule update delays and packet loss. Subsequently, we prove that the consistent update problem constructed above is NP-hard, and then design an effective heuristic consistent rule update algorithm to obtain the rule update sequence that can simultaneously avoid blackholes, loops, congestion, and policy violations. Extensive trace-driven simulation results show that compared with the existing update methods, our proposed method can improve the success rate by up to 20.6% and reduce the maximum link utilization by up to 7.5%. Author Affiliation: (a) College of Electronic and Information Engineering, Southwest University, Chongqing, 400715, China (b) College of Computer Science, Chongqing University, Chongqing, 400044, China (c) Department of Computer Science and Technology, Nanjing University, Nanjing, 210023, China * Corresponding authors. Article History: Received 19 October 2022; Revised 25 March 2023; Accepted 31 March 2023 Byline: Pan Li (a), Guiyan Liu [gyliu@cqu.edu.cn] (b,*), Songtao Guo [guosongtao@cqu.edu.cn] (b,*), Yue Zeng (c)
- Published
- 2023
- Full Text
- View/download PDF
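A core ingredient behind blackhole- and loop-free updates like those in the abstract above is ordering: install a switch's new rule only after the downstream rules it forwards to are in place, which is a topological sort of the update dependencies. The dependency graph below is invented, and the paper's mechanism additionally models congestion and NFV policy constraints.

```python
# Hedged toy of dependency-ordered rule updates using the standard library.
from graphlib import TopologicalSorter

# deps[u] lists the next-hop switches whose new rules must be installed
# before switch u is updated (so no packet ever meets a missing rule).
deps = {"s1": {"s2"}, "s2": {"s3"}, "s4": {"s3"}}   # s3 depends on nothing
order = list(TopologicalSorter(deps).static_order())
print("safe update order:", order)                  # downstream switches first
```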
32. An efficient highly parallelized ReRAM-based architecture for motion estimation of HEVC
- Author
- Zhang, Yuhao; Liu, Bing; Jia, Zhiping; Chen, Renhai; Shen, Zhaoyan
- Subjects
Energy conservation; Multiprocessing; Image coding; Computer science; Business; Computers and office automation industries
- Abstract
Keywords Resistive random access memory (ReRAM); Processing-in-memory (PIM); Motion estimation; High efficiency video coding (HEVC) Abstract Motion estimation (ME) is a high efficiency video coding (HEVC) process for determining motion vectors that describe the transformation direction of blocks from one adjacent frame to a future frame in a video sequence. ME is a memory- and computation-consuming process which accounts for more than 50% of the total running time of HEVC. To conquer the memory and computation challenges, this paper presents ReME, a highly parallel processing-in-memory (PIM) architecture for the ME process based on resistive random access memory (ReRAM). In ReME, the space of ReRAM is mainly separated into a storage engine and an ME processing engine. The storage engine is used as conventional memory to store video frames and intermediate data, while the computation operations of ME are performed in ME processing engines. Each ME processing engine in ReME consists of Sum of Absolute Differences (SAD) modules, interpolation modules, and Sum of Absolute Transformed Differences (SATD) modules that transfer ME functions into ReRAM-based analog logic computation units. ReME further coordinates these basic computation units to perform ME processes in a highly parallel manner. Simulation results show that the proposed ReME accelerator significantly outperforms other implementations in time consumption and energy saving. Author Affiliation: (a) School of Computer Science and Technology, Shandong University, China (b) College of Intelligence and Computing, Shenzhen Research Institute of Tianjin University, Tianjin University, China * Corresponding authors. Article History: Received 19 November 2020; Revised 19 February 2021; Accepted 31 March 2021 (footnote)☆ A preliminary version of this paper was presented at the International Conference on Wireless Algorithms, Systems, and Applications (WASA 2020). Byline: Yuhao Zhang (a), Bing Liu (a), Zhiping Jia [jzp@sdu.edu.cn] (a,*), Renhai Chen [renhai.chen@tju.edu.cn] (b,*), Zhaoyan Shen (a)
- Published
- 2021
- Full Text
- View/download PDF
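For reference, the SAD kernel that the ReME processing engines implement in-memory looks like the following plain-NumPy version: for each candidate motion vector in a search window, sum the absolute pixel differences between the current block and the shifted reference block, and keep the minimizer. Frame size, block size, and search range are invented.

```python
# Hedged software reference for the SAD step of motion estimation.
import numpy as np

rng = np.random.default_rng(4)
ref = rng.integers(0, 256, size=(64, 64))                  # toy reference frame
cur_block = ref[20:28, 20:28] + rng.integers(-3, 4, size=(8, 8))  # noisy copy

best, best_mv = None, None
for dy in range(-8, 9):                                    # +/-8 px search window
    for dx in range(-8, 9):
        y, x = 20 + dy, 20 + dx
        if 0 <= y <= 56 and 0 <= x <= 56:                  # stay inside frame
            sad = np.abs(ref[y:y+8, x:x+8] - cur_block).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
print("motion vector:", best_mv, "SAD:", best)             # expect near (0, 0)
```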
33. Balancing memory-accessing and computing over sparse DNN accelerator via efficient data packaging
- Author
- Wang, Miao; Fan, Xiaoya; Zhang, Wei; Zhu, Ting; Yao, Tengteng; Ding, Hui; Wang, Danghui
- Subjects
Energy efficiency; Neural networks; Computer science; Packaging; Business; Computers and office automation industries
- Abstract
Keywords DNN; Sparse accelerator architecture; Data packaging; Parallel optimization; Data compression Abstract Embedded devices are common carriers for deploying inference networks, which leverage customized accelerators to achieve the promised performance under strict resource constraints. In the inference of Deep Neural Networks (DNNs), the sparsity existing in the activations and weights of every layer contributes massive ineffective memory accesses and computing operations. Data compression is adopted as a data pruning method in accelerator design, eliminating zero-valued data with a specific data packaging method. However, data compression breaks, to varying degrees, the data regularity that the processing arrays of DNN accelerators calculate with. The data-access complexity caused by irregular data organization adds extra control logic and decoding logic to compensate. An accelerator architecture that supports sparsity can use sophisticated memory-access scheduling and a parallel on-chip decoder structure, via an efficient data packaging method, to balance memory accessing and computing for acceleration. In this paper, we propose a flexible and highly parallel accelerator architecture that uses a quantitative data packaging method, efficient and stable for different degrees of sparsity, together with parallel optimization to exploit the sparsity in DNNs and achieve high performance with low energy consumption. The total DRAM accesses, performance, and energy consumption of the proposed sparse architecture are evaluated with different inference networks. Experiments show that the DRAM accesses of the proposed efficient data packaging method are significantly lower than those of other commonly used sparse data compression storage methods; the improved performance and saved energy of the sparse accelerator architecture after adopting the optimization method proposed in this paper are up to 1.2x and 1.6x, respectively, over a comparably provisioned accelerator without sparsity support. In addition, the proposed accelerator architecture achieves energy efficiency and performance improvements of up to 1.70x and 1.56x, compared with the state-of-the-art architectures. Author Affiliation: (a) School of Computer Science, Northwestern Polytechnical University, Xi'an, 710129, China (b) National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, China (c) Engineering and Research Center of Embedded Systems Integration (Ministry of Education), China * Corresponding author at: School of Computer Science, Northwestern Polytechnical University, Xi'an, 710129, China. Article History: Received 20 November 2020; Revised 11 January 2021; Accepted 9 March 2021 Byline: Miao Wang (a,b), Xiaoya Fan (a,b), Wei Zhang (a,c), Ting Zhu (a,c), Tengteng Yao (a,c), Hui Ding (a,c), Danghui Wang [wangdh@nwpu.edu.cn] (a,c,*)
- Published
- 2021
- Full Text
- View/download PDF
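Zero-eliminating data packaging of the kind discussed above can be illustrated with a CSR-style layout: store only non-zero values plus their positions, and reconstruct rows losslessly on demand. The fixed grouping details of the paper's "quantitative" packaging are not given in this snippet, so the sketch below shows only the generic compress-and-recover idea on invented data.

```python
# Hedged illustration of CSR-style packaging for a sparse activation matrix.
import numpy as np

rng = np.random.default_rng(5)
act = rng.integers(0, 8, size=(4, 10)) * (rng.random((4, 10)) < 0.3)

values, cols, row_ptr = [], [], [0]
for row in act:
    nz = np.flatnonzero(row)          # positions of non-zero activations
    values.extend(row[nz]); cols.extend(nz)
    row_ptr.append(len(values))

print("dense elements :", act.size)
print("packaged       :", len(values), "values +", len(cols), "indices")

# Reconstruct one row to show the package is lossless:
r = 2
dense = np.zeros(10, dtype=int)
dense[cols[row_ptr[r]:row_ptr[r+1]]] = values[row_ptr[r]:row_ptr[r+1]]
assert (dense == act[r]).all()
```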
34. Microscopic printing analysis and application for classification of source printer
- Author
- Nguyen, Quoc-Thông; Mai, An; Chagas, Lionel; Reverdy-Bruas, Nadège
- Subjects
Forensic sciences; Computer science; Algorithms; Business; Computers and office automation industries
- Abstract
Keywords Microscopic printing; Source printer identification; Printer forensics; Document authentication; Support vector machine Abstract Identifying a forged printed document with scanned evidence can be a challenge. Microscopic printing shows random shapes that depend on the printing source as well as the printing material. This paper presents a statistical analysis of the printing patterns under a microscopic scale and analyses the effect of printing direction, printing substrate (uncoated and coated paper), and printing technology (conventional offset, waterless offset, and electrophotography). The analysis shows a negligible effect of printing direction; yet, using the shape descriptor indexes, the printing materials and technologies are distinguishable under a microscopic scale. As a result, algorithms based on the Support Vector Machine and Random Forest are developed, with shape descriptor indexes as features, for printing source identification. Both proposed algorithms achieve a high classification accuracy rate: over 92% accuracy with complex geometric-shape patterns. Thanks to the light weight and efficiency of the Support Vector Machine, the study shows promising real-world applications and potential implementation in Internet of Things devices. Author Affiliation: (a) Université de Lille, ENSAIT, GEMTEX, F-59000 Lille, France (b) School of Computer Science & Engineering, International University, and Vietnam National University, Ho Chi Minh City, Vietnam (c) University Grenoble Alpes, CNRS, Grenoble INP (Institute of Engineering, Univ. Grenoble Alpes), LGP2, F-38000 Grenoble, France * Corresponding author. Article History: Received 9 January 2021; Revised 2 April 2021; Accepted 4 May 2021 Byline: Quoc-Thông Nguyen [nguyenquocthong1111@gmail.com] (*,a), An Mai (b), Lionel Chagas (c), Nadège Reverdy-Bruas (c)
- Published
- 2021
- Full Text
- View/download PDF
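The classification stage described above, shape-descriptor features per printed pattern fed to an SVM, has a compact reference form. The synthetic descriptor statistics and the two-class setup below are assumptions standing in for the paper's measured features.

```python
# Hedged sketch of SVM classification on (invented) shape-descriptor features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
# Two synthetic "printing technologies" with different descriptor statistics.
offset = rng.normal([0.90, 0.85, 1.10], 0.05, size=(100, 3))
electro = rng.normal([0.70, 0.95, 1.40], 0.05, size=(100, 3))
X = np.vstack([offset, electro])
y = np.array(["offset"] * 100 + ["electrophotography"] * 100)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(Xtr, ytr)
print("accuracy: %.2f" % clf.score(Xte, yte))
```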
35. Local universities prep for burgeoning AI landscape
- Subjects
Educational technology; Artificial intelligence; Natural language interfaces; Computational linguistics; Language processing; Machine learning; Universities and colleges; Computer science; Technology in education; Business; Business, regional; Tulane University
- Abstract
Byline: admin Artificial Intelligence has made its way into college catalogs across the country, and New Orleans-area universities are making their own investments in AI or [...]
- Published
- 2023
36. Evolving malice scoring models for ransomware detection: An automated approach by utilising genetic programming and cooperative coevolution
- Author
-
John, Taran Cyriac, Abbasi, Muhammad Shabbir, Al-Sahaf, Harith, Welch, Ian, and Jang-Jaccard, Julian
- Subjects
Computer science ,Business ,Computers and office automation industries - Abstract
Keywords Ransomware detection; Evolutionary computation; Symbolic regression; Genetic programming; Malice score; Cooperative coevolution Abstract Malice scoring is a technique, present throughout the literature, that quantifies a software sample's malignance through the assignment of a malice score. However, the majority of existing malice scoring models are synthesised using manually selected features and weights, for which a domain specialist is needed. Hence, this paper aims at utilising Genetic Programming and cooperative coevolution to automatically evolve an ensemble of symbolic regression functions that assign a malice score to an instance of software data. Using a publicly available dataset, the effectiveness of the proposed method is assessed and compared to that of the state-of-the-art malice scoring method. The experimental results show that the proposed method significantly outperforms the benchmark method; its best-performing model produces an overall balanced accuracy of 95.80%, correctly classifying 94.21% and 97.39% of unseen malicious and benign instances, respectively. Furthermore, various aspects of the proposed method and experimental results are analysed in depth to provide insight into the evolutionary process and some of the automatically evolved models. Author Affiliation: (a) School of Engineering and Computer Science, Victoria University of Wellington, Wellington, New Zealand (b) Department of Computer Science, University of Agriculture Faisalabad, Punjab, Pakistan (c) Cybersecurity Lab, Massey University, Auckland, New Zealand * Corresponding author. Article History: Received 23 October 2022; Revised 24 February 2023; Accepted 27 March 2023 Byline: Taran Cyriac John [johntara@myvuw.ac.nz] (a), Muhammad Shabbir Abbasi [Shabbir.Abbasi@ecs.vuw.ac.nz] (a,b), Harith Al-Sahaf [Harith.Al-Sahaf@ecs.vuw.ac.nz] (*,a), Ian Welch [Ian.Welch@ecs.vuw.ac.nz] (a), Julian Jang-Jaccard [J.Jang-jaccard@massey.ac.nz] (c)
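A toy sketch of the ensemble-of-symbolic-functions idea follows; the functions here are hand-written stand-ins, whereas the paper evolves them with genetic programming, and the feature names and threshold are invented for illustration:

    # Hypothetical sketch: an ensemble of symbolic functions voting a malice score.
    # In the paper these functions are evolved by GP; these are hand-written stand-ins.
    ensemble = [
        lambda f: 0.8 * f["api_calls"] - 0.2 * f["benign_strings"],
        lambda f: f["entropy"] * f["write_ops"],
        lambda f: 1.5 * f["crypto_imports"] + 0.1 * f["api_calls"],
    ]

    def malice_score(features):
        """Average the outputs of the cooperating functions into one score."""
        return sum(g(features) for g in ensemble) / len(ensemble)

    sample = {"api_calls": 12, "benign_strings": 3, "entropy": 7.2,
              "write_ops": 40, "crypto_imports": 2}
    score = malice_score(sample)
    print("ransomware" if score > 10.0 else "benign", score)   # threshold is illustrative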
- Published
- 2023
- Full Text
- View/download PDF
37. Lightweight blockchain consensus mechanism and storage optimization for resource-constrained IoT devices
- Author
-
Li, Chunlin, Zhang, Jing, Yang, Xianmin, and Luo, Youlong
- Subjects
Information science ,Computer science ,Business ,Computers and office automation industries - Abstract
Keywords Resource-constrained devices; Lightweight blockchain; Consensus mechanism; Block storage; RS erasure code Highlights * A lightweight blockchain consensus mechanism and storage optimization for resource-constrained IoT devices are proposed. * To achieve a lightweight blockchain, an improved PBFT consensus mechanism based on a reward and punishment strategy is proposed. * To reduce storage consumption, a blockchain storage optimization scheme based on RS erasure codes is proposed. * Experimental results show that the strategies can reduce consensus delay, lower communication resource usage, and reduce the cost of blockchain storage. Abstract Blockchain is a distributed digital ledger with features such as tamper resistance and privacy protection, which can provide reliable security solutions for the Internet of Things, smart homes, and other scenarios. However, most devices in these scenarios have only limited computing, storage, bandwidth, and other resources, which makes it difficult for them to bear the burden of the blockchain consensus process and the subsequent storage of the blockchain ledger. Therefore, a lightweight blockchain needs to be designed to suit resource-constrained device scenarios. To achieve a lightweight blockchain, an improved PBFT consensus mechanism based on a reward and punishment strategy is proposed in this paper. Moreover, to reduce the storage overhead while ensuring the recoverability of the blockchain, a blockchain storage optimization scheme based on RS erasure codes is proposed. Experimental results show that the proposed strategies can reduce the delay of consensus, the communication resources required by consensus, and the cost of blockchain storage. Author Affiliation: (a) School of Computer Science and Technology, Wuhan University of Technology, Wuhan 430063, China (b) Data Recovery Key Laboratory of Sichuan Province, College of Mathematics and Information Science, Neijiang Normal University, Neijiang 641100, PR China * Corresponding authors. Article History: Received 19 September 2020; Revised 21 March 2021; Accepted 28 March 2021 Byline: Chunlin Li [chunlinli74@163.com] (a,b,*), Jing Zhang (a), Xianmin Yang [391217726@qq.com] (b,*), Luo Youlong (a)
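To illustrate the erasure-coding ingredient, the sketch below uses the reedsolo Python package (pip install reedsolo) to encode a block with parity bytes and recover it after corruption; the paper's shard placement across IoT nodes is more elaborate and is not reproduced here:

    # Hypothetical sketch: Reed-Solomon protection of a block with 'reedsolo'.
    from reedsolo import RSCodec

    rsc = RSCodec(10)                   # 10 parity bytes: corrects up to 5 byte errors
    block = b"block #42: tx data ..."
    stored = rsc.encode(block)          # what would be spread across storage nodes

    damaged = bytearray(stored)
    damaged[0] ^= 0xFF                  # simulate two corrupted fragments
    damaged[7] ^= 0xFF
    # reedsolo >= 1.4 returns (message, message+ecc, errata positions)
    recovered = rsc.decode(bytes(damaged))[0]
    assert bytes(recovered) == block    # block recoverable despite the damage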
- Published
- 2021
- Full Text
- View/download PDF
38. Delay and energy aware task scheduling mechanism for fog-enabled IoT applications: A reinforcement learning approach
- Author
-
Raju, Mekala Ratna and Mothku, Sai Krishna
- Subjects
Computer science ,Algorithms ,Algorithm ,Business ,Computers ,Telecommunications industry - Abstract
Keywords Task scheduling; Fog computing; Fuzzy inference system; Reinforcement learning; Service delay and energy consumption Abstract With the expansion of internet of things (IoT) devices and their applications, the demand for executing complex and deadline-aware tasks is growing rapidly. Fog-enabled IoT architecture has evolved to accomplish these tasks at the fog layer. However, fog computing devices have limited power supply and computation resources compared to cloud devices. In delay-sensitive applications of fog-enabled IoT architecture, executing tasks with stringent deadlines while reducing the service latency and energy usage of fog resources is a difficult challenge. This paper presents an effective task scheduling strategy to allocate fog computing resources to IoT requests while meeting the requests' deadlines and respecting resource availability. The scheduling problem is first formulated as a mixed-integer nonlinear program (MINLP) that reduces the energy consumption of the fog resources and the service time of the tasks subject to deadline and resource availability constraints. To address the high dimensionality of the tasks in a dynamic environment, a fuzzy-based reinforcement learning (FRL) mechanism is employed to reduce the service delay of the tasks and the energy usage of the fog nodes. The tasks are first prioritized using fuzzy logic; the prioritized tasks are then scheduled using an on-policy reinforcement learning technique, which enhances the long-term reward compared to the Q-learning approach. Further, the evaluation outcomes show that the proposed task scheduling technique outperforms the existing algorithms with improvements of up to 23% and 18% in service latency and energy consumption, respectively. Author Affiliation: Computer Science and Engineering, National Institute of Technology, Tiruchirappalli, 620015, India * Corresponding author. Article History: Received 28 September 2022; Revised 9 January 2023; Accepted 31 January 2023 Byline: Mekala Ratna Raju [ratnarajumekala@gmail.com] (*), Sai Krishna Mothku [saikrishna@nitt.edu]
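The on-policy learning step can be pictured with a minimal SARSA sketch, with a stub standing in for the fuzzy prioritization; node names, reward shape, and parameters are illustrative, not the paper's:

    # Hypothetical sketch: SARSA (on-policy) update for picking a fog node per task.
    import random

    nodes = ["fog-1", "fog-2", "cloud"]
    Q = {}                                           # (task_priority, node) -> value
    alpha, gamma, eps = 0.1, 0.9, 0.1

    def fuzzy_priority(deadline_ms, size_kb):
        """Stand-in for the fuzzy inference step: bucket tasks into LOW/HIGH."""
        return "HIGH" if deadline_ms < 100 or size_kb > 500 else "LOW"

    def choose(state):
        if random.random() < eps:                    # epsilon-greedy behavior policy
            return random.choice(nodes)
        return max(nodes, key=lambda a: Q.get((state, a), 0.0))

    def sarsa_step(s, a, reward, s_next):
        a_next = choose(s_next)                      # the action actually taken next
        td = reward + gamma * Q.get((s_next, a_next), 0.0) - Q.get((s, a), 0.0)
        Q[(s, a)] = Q.get((s, a), 0.0) + alpha * td
        return a_next

    s = fuzzy_priority(deadline_ms=80, size_kb=120)  # -> "HIGH"
    a = choose(s)
    # after executing the task, reward could be -(latency + w * energy):
    a = sarsa_step(s, a, reward=-42.0, s_next="LOW")

Because the update bootstraps from the action the policy actually takes next (a_next), this is on-policy, in contrast to Q-learning's max over actions.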
- Published
- 2023
- Full Text
- View/download PDF
39. PPTA: A location privacy-preserving and flexible task assignment service for spatial crowdsourcing
- Author
-
Zhou, Menglun, Zheng, Yifeng, Wang, Songlei, Hua, Zhongyun, Huang, Hejiao, Gao, Yansong, and Jia, Xiaohua
- Subjects
Mobile devices ,Crowdsourcing ,Privacy ,Sensors ,Cryptography ,Computer science ,Privacy issue ,Business ,Computers ,Telecommunications industry - Abstract
Keywords Spatial crowdsourcing; Task assignment; Location privacy Abstract With the rapid growth of sensor-rich mobile devices, spatial crowdsourcing (SC) has emerged as a new crowdsourcing paradigm harnessing the crowd to perform location-dependent tasks. To appropriately select workers that are near the tasks, SC systems need to perform location-based task assignment, which requires collecting worker locations and task locations. Such practice, however, may easily compromise the location privacy of workers. In light of this, in this paper, we design, implement, and evaluate PPTA, a new system framework for location privacy-preserving task assignment in SC with strong security guarantees. PPTA takes advantage of only lightweight cryptography (such as additive secret sharing, function secret sharing, and secure shuffle), and provides a suite of tailored secure components required by practical location-based task assignment processes. Specifically, aiming for practical usability, PPTA is designed to flexibly support two realistic task assignment settings: (i) the online setting where tasks arrive and get processed at the SC platform one by one, and (ii) the batch-based setting where tasks arrive and get processed in a batch. Extensive experiments over a real-world dataset demonstrate that while providing strong security guarantees, PPTA supports task assignment with efficacy comparable to plaintext baselines and with promising performance. Author Affiliation: (a) School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, China (b) School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China (c) Department of Computer Science, City University of Hong Kong, Hong Kong, China * Corresponding author. Article History: Received 26 September 2022; Revised 2 January 2023; Accepted 27 January 2023 Byline: Menglun Zhou [menglun.zhou@outlook.com] (a), Yifeng Zheng [yifeng.zheng@hit.edu.cn] (a,*), Songlei Wang [songlei.wang@outlook.com] (a), Zhongyun Hua [huazhongyun@hit.edu.cn] (a), Hejiao Huang [huanghejiao@hit.edu.cn] (a), Yansong Gao [yansong.gao@njust.edu.cn] (b), Xiaohua Jia [csjia@cityu.edu.hk] (a,c)
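Of the lightweight primitives the abstract names, additive secret sharing is the simplest to picture; a two-server toy sketch follows (function secret sharing and secure shuffle, also used by PPTA, are not shown):

    # Hypothetical sketch: additive secret sharing of a worker coordinate between
    # two non-colluding servers.
    import secrets

    MOD = 2**32

    def share(x):
        r = secrets.randbelow(MOD)       # share for server 1
        return r, (x - r) % MOD          # share for server 2

    def reconstruct(s1, s2):
        return (s1 + s2) % MOD

    loc_x = 51234                        # worker's x-coordinate (toy value)
    s1, s2 = share(loc_x)
    # Each server alone sees a uniformly random value; together the servers can,
    # e.g., add shared coordinates: shares of x+y are (s1x+s1y, s2x+s2y) mod MOD.
    assert reconstruct(s1, s2) == loc_x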
- Published
- 2023
- Full Text
- View/download PDF
40. An efficient heterogeneous authenticated key agreement scheme for unmanned aerial vehicles
- Author
-
Pan, Xiangyu, Jin, Yuqiao, and Li, Fagen
- Subjects
Investment analysis ,Mobile devices ,Drone aircraft ,Computer science ,Security management ,Research institutes ,Business ,Computers and office automation industries - Abstract
Keywords Authenticated key agreement; Heterogeneous cryptosystem; Unmanned aerial vehicle; IBC; PKI Abstract Unmanned aerial vehicle (UAV) technology has recently become increasingly popular due to the rapid development of the Internet of Things (IoT) and network technology. It has gradually expanded from the military field to the civil field because of the convenience UAVs bring. However, UAV communication relies on open wireless networks, which makes it vulnerable to a variety of attacks. Besides, UAVs are generally considered mobile devices with limited resources. It is necessary to ensure the security of UAV communication while reducing the computation overhead and communication cost on the UAV's side as much as possible. An authenticated key agreement (AKA) scheme is a proper way to meet these requirements. It enables a UAV and a ground station (GS) to share a session key. They can then use the session key as a symmetric key and communicate securely through symmetric encryption, which is much less expensive than asymmetric encryption. In this paper, we propose a heterogeneous authenticated key agreement (HAKA) scheme for a UAV to communicate with a GS, in which the UAV belongs to an identity-based cryptosystem (IBC) and the GS belongs to a public key infrastructure (PKI). Through rigorous security analysis, we show that the proposed scheme is provably secure. Moreover, comparative experimental results show that our scheme is the most efficient and is suitable for UAVs with limited resources. Author Affiliation: (a) School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China (b) The Second Research Institute of Civil Aviation Administration of China (CAAC), Chengdu 610041, China * Corresponding author. Article History: Received 11 May 2022; Revised 7 December 2022; Accepted 30 December 2022 Byline: Xiangyu Pan (a), Yuqiao Jin (b), Fagen Li [fagenli@uestc.edu.cn] (a,*)
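As a generic picture of the AKA outcome, the sketch below derives a shared session key with X25519 and HKDF via the cryptography package; this is a plain Diffie-Hellman-style sketch, not the paper's heterogeneous IBC/PKI construction:

    # Hypothetical sketch: shared session key via X25519 + HKDF (generic AKA
    # outcome; NOT the paper's HAKA scheme, which bridges IBC and PKI).
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes

    uav_priv = X25519PrivateKey.generate()           # UAV side
    gs_priv = X25519PrivateKey.generate()            # ground station side

    uav_shared = uav_priv.exchange(gs_priv.public_key())
    gs_shared = gs_priv.exchange(uav_priv.public_key())

    def session_key(shared):
        return HKDF(algorithm=hashes.SHA256(), length=32,
                    salt=None, info=b"uav-gs session").derive(shared)

    assert session_key(uav_shared) == session_key(gs_shared)
    # Both sides now hold the same 32-byte key for cheap symmetric encryption.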
- Published
- 2023
- Full Text
- View/download PDF
41. Performance and reliability optimization for high-density flash-based hybrid SSDs
- Author
-
Luo, Longfei, Li, Shicheng, Lv, Yina, and Shi, Liang
- Subjects
Flash memory ,Computer science ,Flash memory ,Business ,Computers and office automation industries - Abstract
Keywords Hybrid storage; Garbage collection; Flash memory; Read disturb Abstract Hybrid SSDs that integrate both large-capacity flash and high-performance flash have become the mainstream of existing SSD architectures. Two or more flash modes are supported to satisfy complex and changeable application requirements, such as single-level-cell (SLC) mode and quad-level-cell (QLC) mode, which can be switched adaptively. However, our empirical studies show that the combination is not well designed in existing hybrid architectures. We conduct experiments on different scenarios to study the performance and reliability of real hybrid SSDs, and obtain three interesting findings about performance collapse, performance fluctuation, and read disturb in the QLC region. To solve these problems, this paper proposes HyFlex, which includes three novel schemes. First, a velocity-based I/O scheduling (VIS) scheme places data so as to avoid performance collapse. Second, a garbage-collection-aware capacity tuning (GCT) scheme manages flash modes to avoid bursts of performance degradation. Third, a read-disturb-aware data migration (DAM) scheme migrates data to alleviate read disturb in the QLC region. The experiments are conducted on an existing simulator with hybrid SSD extensions. Experimental results show that HyFlex achieves encouraging performance and reliability optimization. Author Affiliation: (a) Software/Hardware Co-design Engineering Research Center, Ministry of Education, Shanghai, China (b) School of Computer Science and Technology, East China Normal University, Shanghai, China (c) Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei, China * Corresponding author. Article History: Received 30 September 2022; Revised 21 December 2022; Accepted 15 January 2023 Byline: Longfei Luo [luffeyluo.22@gmail.com] (a,b), Shicheng Li [shicheng9779@gmail.com] (a,b), Yina Lv [elainelv95@gmail.com] (a,b), Liang Shi [shi.liang.hk@gmail.com] (a,b,c,*)
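A toy version of the read-disturb migration trigger might look as follows, with an invented per-block read budget; the actual DAM policy is defined in the paper:

    # Hypothetical sketch: read-counter-triggered migration out of the QLC region.
    READ_DISTURB_LIMIT = 10_000           # illustrative per-block read budget

    read_count = {}                       # block_id -> reads since last refresh

    def migrate_to_slc(block_id):
        print(f"migrating block {block_id}: QLC -> SLC (read-disturb relief)")

    def on_read(block_id, region):
        read_count[block_id] = read_count.get(block_id, 0) + 1
        if region == "QLC" and read_count[block_id] >= READ_DISTURB_LIMIT:
            migrate_to_slc(block_id)      # move hot-read data to the sturdier region
            read_count[block_id] = 0

    for _ in range(10_000):
        on_read("blk-7", "QLC")           # crosses the budget and triggers migration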
- Published
- 2023
- Full Text
- View/download PDF
42. Service function chain migration with the long-term budget in dynamic networks
- Author
-
Qin, Yudong, Guo, Deke, Luo, Lailong, Zhang, Jingyu, and Xu, Ming
- Subjects
Virtualization ,Computer science ,Budget ,Military electronics industry ,Business ,Computers ,Telecommunications industry - Abstract
Keywords Service function chain; SFC migration; Long-term budget; Dynamic networks Abstract Mobile edge computing emerges as a new paradigm to provide low-latency network services in close proximity to users. Based on network function virtualization (NFV) technology, network services can be flexibly provisioned as service function chains (SFCs) deployed at edge servers. In some scenarios, such as vehicular or UAV-assisted edge computing, the network topology varies rapidly due to mobile edge servers, which changes the routing path between adjacent VNFs in an SFC. Migrating an SFC to adapt to frequent topology changes can reduce SFC latency and improve the quality of users' experience. However, frequent SFC migration will unavoidably increase the operating cost. In this paper, to optimize the system performance in a cost-efficient manner, we study the SFC migration problem in dynamic networks under a long-term cost budget constraint. We propose the Topology-aware Min-latency SFC Migration (TMSM) method to strike a desirable balance between SFC latency and migration cost. Specifically, we first apply Lyapunov optimization to decompose the long-term optimization problem into a series of real-time optimization sub-problems. Since each decomposed problem is still NP-hard, a Markov approximation based heuristic is proposed to seek a near-optimal solution for each sub-problem. Compared with the rerouting-only strategy, which does not migrate any VNF, TMSM reduces the latency by at least 21% on average in each time slot. Extensive evaluations show that the proposed algorithm achieves a better tradeoff between SFC latency and migration cost than the baselines. Author Affiliation: (a) College of Computer Science and Technology, National University of Defense Technology, Changsha, China (b) Science and Technology Laboratory on Information Systems Engineering, National University of Defense Technology, Changsha, China (c) State Key Laboratory of High Performance Computing, National University of Defense Technology, Changsha, China (d) School of Computer and Communication Engineering, Changsha University of Science & Technology, Changsha, China * Corresponding author at: Science and Technology Laboratory on Information Systems Engineering, National University of Defense Technology, Changsha, China. Article History: Received 19 November 2022; Revised 3 January 2023; Accepted 4 January 2023 Byline: Yudong Qin (a), Deke Guo [guodeke@gmail.com] (b,c,*), Lailong Luo (a,b), Jingyu Zhang (b,d), Ming Xu (a)
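The Lyapunov decomposition the abstract mentions typically takes a drift-plus-penalty form; the sketch below keeps a virtual budget queue and picks, each slot, the option minimizing latency plus queue-weighted cost (the weight, costs, and candidate options are invented for illustration):

    # Hypothetical sketch of the drift-plus-penalty pattern: a virtual queue Q
    # tracks budget overruns; each slot's decision minimizes V*latency + Q*cost.
    V = 10.0                 # latency/cost trade-off weight
    Q = 0.0                  # virtual budget-queue backlog
    budget_per_slot = 5.0

    def pick_decision(candidates):
        """candidates: list of (latency, migration_cost) options for this slot."""
        return min(candidates, key=lambda c: V * c[0] + Q * c[1])

    for slot, candidates in enumerate([[(8.0, 1.0), (3.0, 6.0)],
                                       [(7.0, 0.0), (2.0, 9.0)]]):
        latency, cost = pick_decision(candidates)
        Q = max(Q + cost - budget_per_slot, 0.0)    # Lyapunov virtual-queue update
        print(f"slot {slot}: latency={latency}, cost={cost}, Q={Q}")

As the backlog Q grows, costly migrations are penalized more heavily, which is how the long-term budget constraint is enforced without solving the whole horizon at once.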
- Published
- 2023
- Full Text
- View/download PDF
43. A general delta-based in-band network telemetry framework with extremely low bandwidth overhead
- Author
-
Sheng, Siyuan, Huang, Qun, and Lee, Patrick P.C.
- Subjects
Computer science ,Computer network protocols ,Protocol ,Business ,Computers ,Telecommunications industry - Abstract
Keywords In-band network telemetry; Network measurement Abstract In-band network telemetry (INT) enriches network management at scale through the embedding of complete device-internal states into each packet along its forwarding path, yet such embedding of INT information also incurs significant bandwidth overhead in the data plane. We propose DeltaINT, a general INT framework that achieves extremely low bandwidth overhead and supports various packet-level and flow-level applications in network management. DeltaINT builds on the insight that state changes are often negligible most of the time, so it embeds the complete state information into a packet only when the state change is deemed significant. We propose two variants of DeltaINT that trade between bandwidth usage and measurement accuracy, while both variants achieve significantly lower bandwidth overhead than the original INT framework. We theoretically derive the time/space complexities and the bandwidth mitigation guarantees of DeltaINT. We implement DeltaINT in both software and P4. Our evaluation shows that DeltaINT significantly mitigates the bandwidth overhead, and its deployment in a Tofino switch incurs limited hardware resource usage. Author Affiliation: (a) Department of Computer Science and Engineering, The Chinese University of Hong Kong, China (b) Department of Computer Science and Technology, Peking University, China * Corresponding author. Article History: Received 10 August 2022; Revised 23 November 2022; Accepted 8 January 2023 (footnote)[white star] Note: An earlier version of this paper appeared at the 29th Annual IEEE International Conference on Network Protocols (ICNP'21) Sheng et al. (2021). In this extended version, we extend DeltaINT to support monitoring with full accuracy, while still significantly reducing the bandwidth overhead of the original INT framework. We now propose two variants of DeltaINT, namely DeltaINT-O (proposed in our conference version) and DeltaINT-E (proposed in this extended version), that trade between bandwidth usage and measurement accuracy. We also add new evaluation results for both software and hardware implementations. Byline: Siyuan Sheng [sysheng21@cse.cuhk.edu.hk] (a), Qun Huang [huangqun@pku.edu.cn] (b,*), Patrick P.C. Lee [pclee@cse.cuhk.edu.hk] (a)
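The core insight reduces to a few lines: embed full state only when the change since the last report is significant. A toy sketch, with an assumed threshold and field names, follows:

    # Hypothetical sketch of delta-based embedding: a switch appends its internal
    # state to a packet only when the change since the last report exceeds a
    # threshold. Field names and the threshold are illustrative.
    THRESHOLD = 0.05                       # 5% relative change counts as significant

    last_reported = {}                     # metric -> last value embedded

    def embed_if_significant(packet, metric, value):
        prev = last_reported.get(metric)
        if prev is None or abs(value - prev) > THRESHOLD * max(abs(prev), 1e-9):
            packet.setdefault("int_stack", []).append((metric, value))
            last_reported[metric] = value  # full state embedded; remember it

    pkt = {"payload": b"..."}
    embed_if_significant(pkt, "queue_depth", 100.0)   # first sight: embed
    embed_if_significant(pkt, "queue_depth", 101.0)   # ~1% change: skipped
    embed_if_significant(pkt, "queue_depth", 140.0)   # ~39% change: embed
    print(pkt["int_stack"])     # [('queue_depth', 100.0), ('queue_depth', 140.0)]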
- Published
- 2023
- Full Text
- View/download PDF
44. Data from Toronto Update Knowledge in COVID-19 (Clinical Application of Detecting COVID-19 Risks: A Natural Language Processing Approach)
- Subjects
Medical research ,Medicine, Experimental ,Natural language interfaces ,Computational linguistics ,Language processing ,Computer science ,Business ,Health ,Health care industry - Abstract
2023 JAN 15 (NewsRx) -- By a News Reporter-Staff News Editor at Medical Letter on the CDC & FDA -- Researchers detail new data in COVID-19. According to news originating [...]
- Published
- 2023
45. Ant colony optimization for path planning in search and rescue operations
- Author
-
Morin, Michael, Abi-Zeid, Irène, and Quimper, Claude-Guy
- Subjects
Convergence (Social sciences) ,Search and rescue operations ,Mathematical optimization ,Computer science ,Algorithms ,Algorithm ,Business, general ,Business ,Business, international - Abstract
Keywords Evolutionary computations; Search and rescue; Optimal search path planning; Ant colony optimization; Humanitarian operations Highlights * We develop and evaluate algorithms for optimal search path planning with visibility. * Ant colony algorithms efficiently optimize search plans (path and effort allocations). * Problem-based pheromone initialization and update benefit search plan optimization. * Luby and Geometric restart policies help convergence and diversification. * Extensive experiments show that an efficient metaheuristic can lead to operational plans. Abstract In search and rescue operations, an efficient search path, colloquially understood as a path maximizing the probability of finding survivors, is more than a path planning problem. Maximizing the objective adequately, i.e., quickly enough and with sufficient realism, can have a substantial positive impact in terms of human lives saved. In this paper, we address the problem of efficiently optimizing search paths in the context of the NP-hard optimal search path problem with visibility, based on search theory. To that end, we develop and evaluate ant colony optimization algorithm variants where the goal is to maximize the probability of finding a moving search object with Markovian motion, given a finite time horizon and finite resources (scans) to allocate to visible regions. Our empirical results, based on evaluating 96 variants of the metaheuristic with standard components tailored to the problem and using realistic-size search environments, provide valuable insights regarding the best algorithm configurations. Furthermore, our best variants compare favorably, especially on the larger and more realistic instances, with a standard greedy heuristic and a state-of-the-art mixed-integer linear program solver. With this research, we add to the empirical body of evidence on ant colony optimization algorithm configurations and applications, and pave the way to the implementation of search path optimization in operational decision support systems for search and rescue. Author Affiliation: (a) Department of Operations and Decision Systems, Université Laval, Québec, Canada (b) Department of Computer Science and Software Engineering, Université Laval, Québec, Canada * Corresponding author. Article History: Received 2 August 2021; Accepted 8 June 2022 Byline: Michael Morin [Michael.Morin@osd.ulaval.ca] (*,a), Irène Abi-Zeid (a), Claude-Guy Quimper (b)
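The generic ant colony loop underlying such methods can be sketched compactly; the toy model below uses pheromone-biased path construction with evaporation and deposit, and omits the paper's visibility handling, Markovian target motion, and restart policies:

    # Hypothetical sketch: the generic ACO loop (pheromone-biased construction,
    # then evaporation + deposit) on a toy search-region model.
    import random

    regions = ["A", "B", "C", "D"]
    pher = {r: 1.0 for r in regions}                         # pheromone per region
    detect_prob = {"A": 0.1, "B": 0.4, "C": 0.3, "D": 0.2}   # toy detection model
    rho, n_ants, horizon = 0.1, 20, 3

    def build_path():
        """Sample a search path, step by step, proportionally to pheromone."""
        return random.choices(regions, weights=[pher[r] for r in regions], k=horizon)

    def path_quality(path):
        """P(detect at least once) under the toy independent-scan model."""
        miss = 1.0
        for r in path:
            miss *= 1.0 - detect_prob[r]
        return 1.0 - miss

    for _ in range(100):                        # colony iterations
        paths = [build_path() for _ in range(n_ants)]
        best = max(paths, key=path_quality)
        for r in pher:
            pher[r] *= 1.0 - rho                # evaporation
        for r in best:
            pher[r] += path_quality(best)       # deposit along the best path

    print(max(pher, key=pher.get))              # region the colony converged on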
- Published
- 2023
- Full Text
- View/download PDF
46. Contention alleviation in WiFi networks by using light-weight machine learning model
- Author
-
Bak, Charn-doh and Han, Seung-Jae
- Subjects
Wi-Fi ,Machine learning ,Neural networks ,Wireless local area networks (Computer networks) ,Computer science ,Wireless network ,Wireless LAN/WAN system ,Neural network ,Business ,Computers ,Telecommunications industry - Abstract
Keywords IEEE 802.11; Light-weight deep learning; Contention alleviation; Reinforcement learning Abstract The random-access MAC (e.g., CSMA/CA) is known to be vulnerable to heavy collisions when the transmission contention level is high. Heavy collisions have typically been mitigated by adjusting the length of the waiting interval imposed before individual transmission attempts, called the 'backoff time'. For example, there exists a large body of studies on enhancing the efficiency of the WiFi backoff mechanism. In this paper, we propose a novel scheme that inserts a configurable non-random duration, called the offset, before the beginning of the conventional CSMA/CA backoff. Since the existing backoff mechanism is untouched, the proposed scheme is compatible with all legacy and future WiFi standards. The size of the offset is dynamically determined at run time by a lightweight machine learning model. While machine learning models have been applied to the problem of contention alleviation in WiFi networks before, the proposed scheme is unique in its flexibility and low overhead for model training and inference. Our scheme employs a simple lightweight DNN (Deep Neural Network) model which does not require a large amount of training data, and the complexity of our machine learning model does not increase even as the number of devices in the WiFi network grows. Furthermore, the proposed scheme is executed in a distributed fashion, so that (near) real-time adaptation to dynamic changes in traffic conditions is possible at each end device not equipped with high computing resources (i.e., on-device DNN model execution). Author Affiliation: Department of Computer Science, Yonsei University, Seoul, Republic of Korea * Corresponding author. Article History: Received 21 March 2022; Revised 31 October 2022; Accepted 19 December 2022 Byline: Charn-doh Bak [breakcd77@yonsei.ac.kr], Seung-Jae Han [seungjaehan@yonsei.ac.kr] (*)
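The offset idea itself is simple to sketch: a learned, non-random delay is prepended to the untouched legacy backoff. Below, a stub lookup stands in for the paper's lightweight DNN, and the constants are illustrative:

    # Hypothetical sketch: prepending a learned non-random offset to the standard
    # CSMA/CA backoff. A stub stands in for the paper's on-device DNN.
    import random

    SLOT_US = 9                                 # 802.11 slot time (microseconds)
    CW_MIN = 15                                 # initial contention window

    def predicted_offset_slots(observed_collision_rate):
        """Stub for the learned model: more collisions -> larger offset."""
        return int(observed_collision_rate * 32)

    def wait_before_tx(observed_collision_rate):
        offset = predicted_offset_slots(observed_collision_rate)
        backoff = random.randint(0, CW_MIN)     # legacy backoff stays untouched
        return (offset + backoff) * SLOT_US     # total wait in microseconds

    print(wait_before_tx(0.25), "us")           # 8 offset slots + a random backoff

Because the random backoff is left intact and only a deterministic delay is added in front of it, a device running this logic still interoperates with stations that know nothing about offsets, which is the compatibility argument in the abstract.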
- Published
- 2023
- Full Text
- View/download PDF
47. Scale-CIM: Precision-scalable computing-in-memory for energy-efficient quantized neural networks
- Author
-
Lee, Young Seo, Gong, Young-Ho, and Chung, Sung Woo
- Subjects
Energy conservation ,Computer-integrated manufacturing -- Energy use ,Memory ,Amplifiers (Electronics) -- Energy use ,Energy efficiency ,Computer science ,Neural networks -- Energy use ,Energy consumption ,Energy management systems -- Energy use ,Neural network ,Business ,Computers and office automation industries - Abstract
Keywords Digital-based computing-in-memory; Quantized neural networks; Precision-scalable computation Highlights * Quantized neural networks (QNNs) have been widely exploited to reduce memory usage and computational complexity. * Though analog-based CIM architectures enable energy-efficient in-memory MAC (multiply-accumulate) operations for QNNs, they necessitate extremely large and power-consuming analog-to-digital converters. * Scale-CIM performs precision-scalable MAC operations with high parallelism by executing MAC operations using CIM (computing-in-memory) arrays and the existing sense amplifiers, without ADCs. Abstract Quantized neural networks (QNNs), which perform multiply-accumulate (MAC) operations with low-precision weights or activations, have been widely exploited to reduce energy consumption. QNNs usually trade off energy consumption against accuracy depending on the quantized precision, so it is necessary to select an appropriate precision for energy efficiency. Nevertheless, conventional hardware accelerators such as the Google TPU are typically designed and optimized for a specific precision (e.g., 8-bit), which may degrade energy efficiency at other precisions. Though an analog-based computing-in-memory (CIM) technology supporting variable precision has been proposed to improve energy efficiency, its implementation requires extremely large and power-consuming analog-to-digital converters (ADCs). In this paper, we propose Scale-CIM, a precision-scalable CIM architecture that supports MAC operations based on digital computations (not analog computations). Scale-CIM performs binary MAC operations with high parallelism by executing digital multiplication operations in the CIM array and accumulation operations in the peripheral logic. In addition, Scale-CIM supports multi-bit MAC operations without ADCs, based on binary MAC operations and shift operations that depend on the precision. Since Scale-CIM fully utilizes the CIM array for various quantized precisions (not just a specific one), it achieves high compute throughput. Consequently, Scale-CIM enables precision-scalable CIM-based MAC operations with high parallelism. Our simulation results show that Scale-CIM achieves 1.5x~15.8x speedup and reduces system energy consumption by 53.7%~95.7% across different quantized precisions, compared to the state-of-the-art precision-scalable accelerator. Author Affiliation: (a) Department of Computer Science, Korea University, Seoul 02841, Republic of Korea (b) School of Computer and Information Engineering, Kwangwoon University, Seoul 01897, Republic of Korea (c) Samsung Electronics, Hwaseong, Gyeonggi-do 18448, Republic of Korea * Corresponding authors. Article History: Received 22 June 2022; Revised 26 September 2022; Accepted 14 November 2022 (footnote)1 This work was done when Young Seo Lee worked at Korea University. Byline: Young Seo Lee [leeyoungseo@korea.ac.kr] (a,c,1), Young-Ho Gong [yhgong@kw.ac.kr] (b,*), Sung Woo Chung [swchung@korea.ac.kr] (a,*)
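The ADC-free multi-bit MAC rests on standard bit-serial arithmetic: a multi-bit dot product decomposes into binary AND-and-popcount operations combined with shifts. The sketch below verifies that decomposition numerically (circuit-level details are the paper's):

    # Sketch of the standard bit-serial decomposition behind ADC-free multi-bit
    # MACs: a W-bit x A-bit dot product becomes W*A binary (AND + popcount) MACs,
    # each weighted by a shift.
    def binary_mac(w_bits, a_bits):
        """One binary MAC: bitwise AND across the array, then popcount."""
        return sum(wb & ab for wb, ab in zip(w_bits, a_bits))

    def multibit_mac(weights, acts, w_prec=4, a_prec=4):
        total = 0
        for i in range(w_prec):                      # weight bit-planes
            w_plane = [(w >> i) & 1 for w in weights]
            for j in range(a_prec):                  # activation bit-planes
                a_plane = [(a >> j) & 1 for a in acts]
                total += binary_mac(w_plane, a_plane) << (i + j)  # shift = bit weight
        return total

    ws, as_ = [3, 5, 7, 2], [1, 4, 6, 9]
    assert multibit_mac(ws, as_) == sum(w * a for w, a in zip(ws, as_))  # 3+20+42+18 = 83

Scaling precision then only changes how many bit-plane passes are taken, not the array itself, which is why one array can serve many quantized precisions.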
- Published
- 2023
- Full Text
- View/download PDF
48. Privacy-preserving certificateless public auditing supporting different auditing frequencies
- Author
-
Huang, Yinghui, Shen, Wenting, Qin, Jing, and Hou, Huiying
- Subjects
Privacy ,Computer science ,Privacy issue ,Business ,Computers and office automation industries - Abstract
Keywords Cloud storage; Cloud security; Public auditing; Privacy-preserving; Data security Abstract Public auditing enables a verifier, say a third-party auditor (TPA), to verify whether the cloud correctly stores a user's data. To date, many public auditing schemes have been proposed. However, none of them consider the problem of auditing frequency. In practice, to reduce the cost of the auditing service and avoid wasting resources, users prefer to verify the integrity of high-value data frequently and check the integrity of low-value data with low frequency. In this paper, we propose a privacy-preserving certificateless public auditing scheme supporting different auditing frequencies. In this scheme, a novel auditing strategy is provided, in which the TPA is allowed to complete auditing tasks on high-value and low-value files at different auditing frequencies. User privacy is achieved by utilizing permutation technology to obscure the real file indices; hence, the TPA cannot distinguish the files with high value. We also use random masking technology to mask the auditing proof, which guarantees the privacy of user data. The proposed scheme avoids complicated certificate management and key escrow since it is designed under certificateless cryptography. We prove that the proposed scheme is secure, and experimental results validate its efficiency. Author Affiliation: (a) College of Computer Science and Technology, Qingdao University, Qingdao 266071, China (b) School of Mathematics, Shandong University, Jinan 250100, China (c) School of Artificial Intelligence and Big Data, Henan University of Technology, Zhengzhou 450001, China * Corresponding author. Article History: Received 1 November 2022; Revised 18 January 2023; Accepted 10 March 2023 Byline: Yinghui Huang (a), Wenting Shen [shenwentingmath@163.com] (*,a), Jing Qin (b), Huiying Hou (c)
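The two privacy ingredients named here, index permutation and random masking, can be pictured in a toy form; the sketch omits the certificateless signatures and pairing-based proofs that do the real cryptographic work:

    # Hypothetical sketch of the two privacy ingredients:
    # (1) permute file indices so the TPA cannot tell high-value files apart,
    # (2) blind a numeric proof with a random mask.
    import secrets

    files = ["contract.pdf", "notes.txt", "design.dwg", "logo.png"]
    perm = list(range(len(files)))
    secrets.SystemRandom().shuffle(perm)          # user-held secret permutation
    outsourced_index = {perm[i]: f for i, f in enumerate(files)}
    # The TPA audits by permuted index only, learning nothing about file identity.

    MOD = 2**61 - 1
    proof = 123456789                             # stand-in for an integrity proof value
    r = secrets.randbelow(MOD)
    masked = (proof + r) % MOD                    # cloud sends the masked proof
    assert (masked - r) % MOD == proof            # a holder of r can still verify it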
- Published
- 2023
- Full Text
- View/download PDF
49. 2DF-IDS: Decentralized and differentially private federated learning-based intrusion detection system for industrial IoT
- Author
-
Friha, Othmane, Ferrag, Mohamed Amine, Benbouzid, Mohamed, Berghout, Tarek, Kantarci, Burak, and Choo, Kim-Kwang Raymond
- Subjects
Artificial intelligence ,Security software -- Innovations ,Detectors -- Innovations ,Cryptography ,Computer science ,Network security software ,Artificial intelligence ,Business ,Computers and office automation industries - Abstract
Keywords Cybersecurity; Privacy; Intrusion detection; Industry 4.0; Decentralized federated learning; Differential privacy; IoT/IIoT security; Post-Quantum cryptography Abstract Advanced technologies, such as the Internet of Things (IoT) and Artificial Intelligence (AI), underpin many of the innovations in Industry 4.0. However, the interconnectivity and open nature of such systems in smart industrial facilities can also be targeted and abused by malicious actors, which reinforces the importance of cyber security. In this paper, we present a secure, decentralized, and Differentially Private (DP) Federated Learning (FL)-based IDS (2DF-IDS) for securing smart industrial facilities. The proposed 2DF-IDS comprises three building blocks, namely: a key exchange protocol (for securing the weights communicated among all peers in the system), a differentially private gradient exchange scheme (to improve the privacy of the FL approach), and a decentralized FL approach (which mitigates the single point of failure/attack risk associated with the aggregation server in conventional FL). We evaluate our proposed system through detailed experiments using a real-world IoT/IIoT dataset, and the results show that the proposed 2DF-IDS can identify different types of cyber attacks in an Industrial IoT system with high performance. For instance, the proposed system achieves accuracy (94.37%) comparable to the centralized learning approach (94.37%) and outperforms the FL-based approach (93.91%). The proposed system also improves overall performance by 12%, 13%, and 9% in terms of F1-score, recall, and precision, respectively, under strict privacy settings when compared to other competing FL-based IDS solutions. Author Affiliation: (a) Networks and Systems Laboratory (LRS), Badji Mokhtar-Annaba University, B.P.12, Annaba 23000, Algeria (b) Artificial Intelligence & Digital Science Research Center, Technology Innovation Institute, United Arab Emirates (c) UMR CNRS 6027 IRDL, University of Brest, Brest, France (d) Laboratory of Automation and Manufacturing Engineering, University of Batna 2, Batna, Algeria (e) School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, ON, Canada (f) Department of Information Systems and Cyber Security, University of Texas at San Antonio, San Antonio, TX 78249, USA * Corresponding author. Article History: Received 1 September 2022; Revised 30 November 2022; Accepted 9 January 2023 Byline: Othmane Friha [othmane.friha@univ-annaba.org] (*,a), Mohamed Amine Ferrag [mohamed.ferrag@tii.ae] (b), Mohamed Benbouzid [Mohamed.Benbouzid@univ-brest.fr] (c), Tarek Berghout [t.berghout@univ-batna2.dz] (d), Burak Kantarci [Burak.Kantarci@uottawa.ca] (e), Kim-Kwang Raymond Choo [raymond.choo@fulbrightmail.org] (f)
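The differentially private gradient exchange step commonly amounts to clipping each local gradient and adding Gaussian noise before sharing; a sketch under an assumed clip bound and noise scale follows (the paper's exact mechanism and parameters may differ):

    # Hypothetical sketch: clip-and-noise privatization of a local gradient
    # before exchanging it with peers. CLIP and SIGMA are illustrative.
    import numpy as np

    CLIP = 1.0          # L2 norm bound
    SIGMA = 0.8         # noise multiplier

    def privatize(grad, rng):
        norm = np.linalg.norm(grad)
        clipped = grad * min(1.0, CLIP / max(norm, 1e-12))    # norm clipping
        noise = rng.normal(0.0, SIGMA * CLIP, size=grad.shape)
        return clipped + noise                                # DP-noised gradient

    rng = np.random.default_rng(0)
    local_grad = rng.normal(size=8)
    shared = privatize(local_grad, rng)       # what a peer would actually see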
- Published
- 2023
- Full Text
- View/download PDF
50. The MicroRV32 framework: An accessible and configurable open source RISC-V cross-level platform for education and research
- Author
-
Ahmadi-Pour, Sallar, Herdt, Vladimir, and Drechsler, Rolf
- Subjects
Executives ,Digital integrated circuits ,Computer science ,Education ,Programmable logic array ,Business ,Computers and office automation industries - Abstract
Keywords RISC-V; RTL; FPGA; Virtual prototype; Open source Abstract In this paper we propose µRV32 (MicroRV32), an open source RISC-V platform for education and research. µRV32 integrates several peripherals alongside a configurable 32-bit RISC-V core interconnected with a generic bus system. It supports bare-metal applications as well as the FreeRTOS operating system. Besides an RTL implementation in the modern SpinalHDL language (µRV32 RTL), we also provide a corresponding binary-compatible Virtual Prototype (VP) implemented in standard-compliant SystemC TLM (µRV32 VP). In combination, the VP and RTL descriptions pave the way for advanced cross-level methodologies in the RISC-V context. Moreover, based on a readily available open source tool flow, µRV32 RTL can be exported into a Verilog description and simulated with the Verilator tool or synthesized onto an FPGA. The tool flow is very accessible and fully supported under Linux. As part of our experiments we provide a set of ready-to-use application benchmarks and report execution performance results of µRV32 at the RTL, VP, and FPGA levels, together with a proof-of-concept FPGA synthesis statistic for different processor configurations. We believe that our µRV32 platform is a suitable foundation for further research and education purposes due to its open source nature, accessible toolchain working in Linux, and support for small low-priced FPGAs, in combination with a solid feature set. Author Affiliation: (a) Institute of Computer Science, University of Bremen, Bremen, Germany (b) Cyber-Physical Systems, DFKI GmbH, Bremen, Germany * Corresponding author. Article History: Received 11 July 2022; Revised 5 September 2022; Accepted 9 October 2022 Byline: Sallar Ahmadi-Pour [sallar@uni-bremen.de] (a,*), Vladimir Herdt [vherdt@uni-bremen.de] (a,b), Rolf Drechsler [drechsler@uni-bremen.de] (a,b)
- Published
- 2022
- Full Text
- View/download PDF