149 results
Search Results
2. ENGINEERING AND COMPUTER SCIENCE PAPER ABSTRACTS
- Subjects
Computer science, Science and technology
- Abstract
EMOTIONAL INTELLIGENCE AND THE SOUL: LIMITATIONS TO STRONG AI. ETHAN WIDEN, FAULKNER UNIVERSITY. Limitations in the reach of programming beyond strict logic as well as weaknesses in the current tests [...]
- Published
- 2018
3. Prolog to modular design principles for protocols with an application to the transport layer; a tutorial introduction to the paper by Shankar
- Author
-
Braham, Robert
- Subjects
Network Architecture, Protocol, Transport Layer Control, Computer Science, Network Models, Analysis
- Published
- 1991
4. Speed Proportional Integrative Derivative Controller: Optimization Functions in Metaheuristic Algorithms
- Author
-
López, Luis Fernando de Mingo, García, Francisco Serradilla, Naranjo Hernández, José Eugenio, and Blas, Nuria Gómez
- Subjects
Algorithm, Mathematical optimization, Computer science, Algorithms
- Abstract
Recent advancements in computer science include some optimization models that have been developed and used in real applications. Some metaheuristic search/optimization algorithms have been tested to obtain optimal solutions to speed controller applications in self-driving cars. Some metaheuristic algorithms are based on social behaviour, resulting in several search models, functions, and parameters, and thus algorithm-specific strengths and weaknesses. The present paper proposes a fitness function on the basis of the mathematical description of proportional integrative derivative controllers, showing that mean square error is not always the best measure when looking for a solution to the problem. The fitness function developed in this paper contains features and equations from the mathematical background of proportional integrative derivative controllers to calculate the best performance of the system. Such results are applied to quantitatively evaluate the performance of twenty-one optimization algorithms. Furthermore, improved versions of the fitness function are considered, in order to investigate which aspects are enhanced by applying the optimization algorithms. Results show that the right fitness function is key to obtaining good performance, regardless of the chosen algorithm. The aim of this paper is to present a novel objective function to carry out optimizations of the gains of a PID controller, using several computational intelligence techniques to perform the optimizations. The result of these optimizations will demonstrate the improved efficiency of the selected control schema., Author(s): Luis Fernando de Mingo López [1]; Francisco Serradilla García [1,2]; José Eugenio Naranjo Hernández (corresponding author) [1,2]; Nuria Gómez Blas [1] 1. Introduction Many optimization problems are nondeterministic polynomial-time [...]
- Published
- 2021
- Full Text
- View/download PDF
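The abstract in result 4 argues that mean square error is not always the right fitness measure for tuning PID speed controllers. As a hedged illustration only (not the authors' objective function), the sketch below scores a PID loop on a simulated first-order plant with an ITAE-style time-weighted error alongside MSE; the plant model, gains and weighting are assumptions made for this example.

```python
# Hedged sketch: an ITAE-style fitness for PID speed-controller tuning on an
# assumed first-order plant. Plant, gains and weighting are illustrative only;
# this is not the objective function proposed in the paper.

def simulate_pid(kp, ki, kd, setpoint=1.0, tau=0.5, dt=0.01, t_end=5.0):
    """Closed-loop step response of a first-order plant under PID control."""
    y, integral, prev_err = 0.0, 0.0, setpoint
    times, errors = [], []
    for k in range(int(t_end / dt)):
        t = k * dt
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv      # PID control law
        y += dt * (-y + u) / tau                       # plant: tau*dy/dt = -y + u
        prev_err = err
        times.append(t)
        errors.append(err)
    return times, errors

def fitness_mse(errors):
    return sum(e * e for e in errors) / len(errors)

def fitness_itae(times, errors):
    """Integral of time-weighted absolute error: penalises slow settling."""
    dt = times[1] - times[0]
    return sum(t * abs(e) * dt for t, e in zip(times, errors))

if __name__ == "__main__":
    for gains in [(2.0, 1.0, 0.0), (4.0, 2.0, 0.1)]:   # two candidate gain sets
        t, e = simulate_pid(*gains)
        print(gains, "MSE=%.4f  ITAE=%.4f" % (fitness_mse(e), fitness_itae(t, e)))
```

Time-weighting penalises slow settling and sustained offsets that a plain MSE can underweight, which is the kind of difference the paper's comparison of fitness functions is about.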
5. Computer Science and Philosophy: Did Plato Foresee Object-Oriented Programming?
- Author
-
Tylman, Wojciech
- Subjects
Object oriented programming, Greek philosophy, Computer science, Philosophy of science, Pythagoreanism, Object-oriented programming, Reusable code, Distributed object technology, Science and technology
- Abstract
This paper contains a discussion of striking similarities between influential philosophical concepts of the past and the approaches currently employed in selected areas of computer science. In particular, works of the Pythagoreans, Plato, Abelard, Ash'arites, Malebranche and Berkeley are presented and contrasted with such computer science ideas as digital computers, object-oriented programming, the modelling of an object's actions and causality in virtual environments, and 3D graphics rendering. The intention of this paper is to provoke the computer science community to go off the beaten path in order to find inspiration for the development of new approaches in software engineering., Author(s): Wojciech Tylman [sup.1] Author Affiliations: (Aff1) 0000 0004 0620 0652, grid.412284.9, Department of Microelectronics and Computer Science, Łódź University of Technology, Łódź, Poland Introduction Today's computer science is [...]
- Published
- 2018
- Full Text
- View/download PDF
6. From Coding To Curing. Functions, Implementations, and Correctness in Deep Learning
- Author
-
Angius, Nicola and Plebe, Alessio
- Subjects
Driverless cars, Computer science, Library and information science, Science and technology, Social sciences
- Abstract
This paper sheds light on the shift that is taking place from the practice of 'coding', namely developing programs as conventional in the software community, to the practice of 'curing', an activity that has emerged in the last few years in Deep Learning (DL) and that amounts to curing the data regime to which a DL model is exposed during training. Initially, the curing paradigm is illustrated by means of a case study on autonomous vehicles. Subsequently, the shift from coding to curing is analysed taking into consideration the epistemological notions, central in the philosophy of computer science, of function, implementation, and correctness. First, it is illustrated how, in the curing paradigm, the functions performed by the trained model depend much more on dataset curation rather than on the model algorithms which, in contrast with the coding paradigm, do not comply with requested specifications. Second, it is highlighted how DL models cannot be considered implementations according to any of the available definitions of implementation that follow an intentional theory of functions. Finally, it is argued that DL models cannot be evaluated in terms of their correctness but rather in terms of their experimental computational validity., Author(s): Nicola Angius [sup.1], Alessio Plebe [sup.1] Author Affiliations: (1) grid.10438.3e, 0000 0001 2178 8421, Department of Cognitive Science, University of Messina, Messina, Italy Introduction Let us loosely use [...]
- Published
- 2023
- Full Text
- View/download PDF
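To make the coding-versus-curing contrast in result 6 concrete, here is a deliberately tiny sketch of my own (not the authors' autonomous-vehicle case study): the 'coded' behaviour is written down as an explicit rule, while the 'cured' behaviour comes from a trivial nearest-neighbour model whose function is fixed entirely by the curated training examples.

```python
# Hedged illustration of "coding" vs "curing"; the rule, data and labels below
# are invented for the example and are not from the paper's case study.

def coded_is_obstacle(distance_m, speed_mps):
    """Coding: the behaviour is written down explicitly as a rule."""
    return distance_m < 5.0 and speed_mps > 1.0

class CuredClassifier:
    """Curing: the behaviour emerges from whatever data we curate for training.
    A 1-nearest-neighbour rule keeps the example dependency-free."""
    def __init__(self, curated_examples):
        # curated_examples: list of ((distance_m, speed_mps), label) pairs
        self.examples = curated_examples

    def __call__(self, distance_m, speed_mps):
        def sq_dist(point):
            return (point[0] - distance_m) ** 2 + (point[1] - speed_mps) ** 2
        nearest = min(self.examples, key=lambda ex: sq_dist(ex[0]))
        return nearest[1]

if __name__ == "__main__":
    curated = [((2.0, 3.0), True), ((20.0, 0.5), False),
               ((4.0, 2.0), True), ((30.0, 2.0), False)]
    cured_is_obstacle = CuredClassifier(curated)
    print(coded_is_obstacle(3.0, 2.0), cured_is_obstacle(3.0, 2.0))
    # Changing the curated dataset (not any code) changes what the model does.
```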
8. A Critical Analysis of Scientific Productivity of the 'Robotics' Research in India during 2009-2018
- Author
-
Gaud, Nutan
- Subjects
Robotics industry -- Analysis, Robotics -- Analysis, Time, Databases, Computer science, Robotics industry, Library and information science
- Abstract
This paper describes the results of a scientometric study of 'Robotics' research publications over a period of 10 years (2009-2018). The raw data were collected from a leading citation database, Scopus. The study examines and analyses various scientometric parameters, and the analysis shows that, out of a total of 4325 research papers, the largest number of documents was published in 2018, i.e. 791 (18.29%), while the minimum, 134 (3.1%), was recorded in 2009, and the annual growth rate of publications fluctuated. The maximum relative growth rate (1.23) was found in 2010; the highest doubling time (3.43) was recorded in 2018; most papers, 1657, were written by more than three authors. The average degree of author collaboration was 0.93. Krishna, K.M. was the most productive author, contributing 51 research papers. Out of a total of 16670 citations, 2718 were recorded in 2010, and of the 4325 publications, 2704 (62.52%) were conference papers. The largest share of publications came from the computer science subject area and appeared in the ACM International Conference Proceeding Series, while 'Robotics' was the keyword most frequently used by the authors during the period of study (2508 occurrences). Keywords: Annual Growth Rate, Relative Growth Rate and Doubling Time, Degree of Authors Collaboration, Collaboration Coefficient, Collaborative Index., 1. INTRODUCTION In the development and improvement of new technology, robotics plays a vital role. In the modern era robotics techniques are used in every field and the future era is totally dependent [...]
- Published
- 2019
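The indicators quoted in result 8 (relative growth rate, doubling time, degree of author collaboration) have standard textbook definitions; the snippet below computes them from illustrative numbers, assuming the study uses the usual formulas (RGR = (ln W2 - ln W1) / (T2 - T1), Dt = ln 2 / RGR, C = Nm / (Nm + Ns)).

```python
import math

# Hedged sketch of the standard scientometric indicators quoted in result 8;
# the input numbers are illustrative, not the study's year-by-year data.

def relative_growth_rate(w1, w2, t1, t2):
    """RGR = (ln W2 - ln W1) / (T2 - T1), with cumulative counts W at times T."""
    return (math.log(w2) - math.log(w1)) / (t2 - t1)

def doubling_time(rgr):
    """Dt = ln 2 / RGR (approximately 0.693 / RGR)."""
    return math.log(2) / rgr

def degree_of_collaboration(multi_authored, single_authored):
    """C = Nm / (Nm + Ns), the share of multi-authored papers."""
    return multi_authored / (multi_authored + single_authored)

if __name__ == "__main__":
    rgr = relative_growth_rate(w1=400, w2=800, t1=2012, t2=2013)
    print(round(rgr, 2), round(doubling_time(rgr), 2))   # 0.69, 1.0
    print(round(degree_of_collaboration(multi_authored=93, single_authored=7), 2))
```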
9. Why There is no General Solution to the Problem of Software Verification
- Author
-
Symons, John and Horner, Jack K.
- Subjects
Computer science, Science and technology
- Abstract
How can we be certain that software is reliable? Is there any method that can verify the correctness of software for all cases of interest? Computer scientists and software engineers have informally assumed that there is no fully general solution to the verification problem. In this paper, we survey approaches to the problem of software verification and offer a new proof for why there can be no general solution., Author(s): John Symons [sup.1], Jack K. Horner [sup.1] [sup.2] Author Affiliations: (1) grid.266515.3, 0000 0001 2106 0692, Department of Philosophy, University of Kansas, , Wescoe Hall 1445 Jayhawk Blvd, 66045-7590, [...]
- Published
- 2020
- Full Text
- View/download PDF
10. On the Mutual Dependence Between Formal Methods and Empirical Testing in Program Verification
- Author
-
Angius, Nicola
- Subjects
Computer science, Library and information science, Science and technology, Social sciences
- Abstract
This paper provides a review of Raymond Turner's book Computational Artefacts. Towards a Philosophy of Computer Science. The focus is on the definition of program correctness as the twofold problem of evaluating whether both the symbolic program and the physical implementation satisfy a set of specifications. The review stresses how these are not two separate problems. First, it is highlighted how formal proofs of correctness need to rely on the analysis of physical computational processes. Second, it is underlined how software testing requires considering the formal relations holding between the specifications and the symbolic program. Such a mutual dependency between formal and empirical program verification methods is finally shown to influence the debate on the epistemological status of computer science., Author(s): Nicola Angius [sup.1] Author Affiliations: (1) grid.11450.31, 0000 0001 2097 9138, Department of History, Human Sciences, and Education, University of Sassari, Sassari, Italy Raymond Turner's book Computational Artefacts. [...]
- Published
- 2020
- Full Text
- View/download PDF
11. RETRACTED: Specification and verification of dynamic evolution of software architectures
- Author
-
Hongzhen, Xu and Guosun, Zeng
- Subjects
Software quality, Software, Computer science
- Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.sysarc.2010.08.005 Byline: Xu Hongzhen, Zeng Guosun Abstract: This article has been retracted: please see Elsevier Policy on Article Withdrawal (http://www.elsevier.com/locate/withdrawalpolicy). This article has been retracted at the request of the Editor-in-Chief. This paper has been withdrawn as it became apparent that a second, virtually identical article was published at about the same time in Journal of Software. The two papers had very similar content, addressing the same problem with similar techniques. Following a detailed investigation, with the support of the authors, the differences in the text and the techniques used were considered too minor. Author Affiliation: (a) Department of Computer Science and Technology, Tongji University, Shanghai 201804, China (b) Department of Computer Science and Technology, East China Institute of Technology, Fuzhou, Jiangxi Province 344000, China (c) Application of Nuclear Technology Engineering Center of Ministry of Education, Nanchang, Jiangxi Province 330013, China (d) Embedded System and Service Computing Key Lab of Ministry of Education, Shanghai 201804, China
- Published
- 2010
12. Quorum-based power-saving multicast protocols in the asynchronous ad hoc network
- Author
-
Kuo, Yu-Chen
- Subjects
ATM, Energy consumption, Computer science, Asynchronous communications
- Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.comnet.2010.03.001 Byline: Yu-Chen Kuo Keywords: Ad hoc network; Chinese Remainder Theorem; Multicast protocol; Power management; Quorum systems Abstract: The asynchronous PS (Power-Saving) unicast protocol was designed for two PS wireless hosts to transmit the unicast message in the ad hoc network even if their clocks are asynchronous. However, when a multicast message is to be transmitted among more than two PS hosts, the protocol cannot guarantee that all PS hosts can wake up at the same time. Some PS hosts may be in the PS mode when the multicast message is transmitted. Thus, the multicast message should be retransmitted again and again until all PS hosts receive the message. This increases energy consumption and bandwidth usage. In this paper, we propose quorum-based PS multicast protocols for PS hosts to transmit multicast messages in the asynchronous ad hoc network. In those protocols, PS hosts use quorums to indicate their wakeup patterns. We introduce the rotation m-closure property to guarantee that m different quorums still intersect even when quorums are rotated due to asynchronous clocks. Thus, m PS hosts adopting m quorums satisfying the rotation m-closure property could wake up simultaneously and receive the multicast message even if their clocks are asynchronous. We propose two quorum systems named the uniform k-arbiter and the CRT (Chinese Remainder Theorem) quorum system, which satisfy the rotation m-closure property. As shown in our analysis results, our quorum-based PS multicast protocols adopting those quorum systems can save more energy to transmit multicast messages. Author Affiliation: Department of Computer Science and Information Management, Soochow University, 56, Kueiyang St. Sec. 1, Taipei 100, Taiwan Article History: Received 19 July 2009; Revised 20 January 2010; Accepted 1 March 2010 Article Note: (footnote) [star] The preliminary version of this paper was presented in the Proceedings of the IEEE International Conference on Sensor Networks, Ubiquitous, and Trustworthy Computing (SUTC), 2008, pp. 332-337.
- Published
- 2010
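Result 12 hinges on the rotation m-closure property: m quorums must keep a common wake-up slot no matter how each host's clock (and hence its quorum) is rotated. The brute-force checker below tests that property for small cyclic quorums; the slot patterns are made up, and the paper's uniform k-arbiter and CRT constructions are not reproduced.

```python
from itertools import combinations, product

# Hedged brute-force check of the rotation m-closure property from result 12.
# Slot patterns are invented; the paper's uniform k-arbiter and CRT quorum
# constructions are not reproduced here.

def rotate(quorum, shift, n):
    """Rotate a quorum (set of beacon-slot indices in Z_n) by a clock offset."""
    return {(slot + shift) % n for slot in quorum}

def has_rotation_m_closure(quorums, m, n):
    """True iff every choice of m quorums still shares at least one slot under
    every combination of independent rotations (i.e. asynchronous clocks)."""
    for group in combinations(quorums, m):
        for shifts in product(range(n), repeat=m):
            rotated = [rotate(q, s, n) for q, s in zip(group, shifts)]
            if not set.intersection(*rotated):
                return False
    return True

if __name__ == "__main__":
    n = 7
    quorums = [{0, 1, 2, 3}, {0, 2, 4, 6}, {1, 3, 5, 6}]   # toy wake-up patterns
    print(has_rotation_m_closure(quorums, m=2, n=n))   # True: any two 4-sets in Z_7 meet
    print(has_rotation_m_closure(quorums, m=3, n=n))   # False: three can be rotated apart
```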
13. Two versions of architectures for dynamic implied addressing mode
- Author
-
Youn, Jonghee M., Ahn, Minwook, Paek, Yunheung, Kim, Jongwung, and Cho, Jeonghun
- Subjects
Algorithm, Electrical engineering, Mathematical optimization, Algorithms, Computer science
- Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.sysarc.2010.05.014 Byline: Jonghee M. Youn (a), Minwook Ahn (a), Yunheung Paek (a), Jongwung Kim (b), Jeonghun Cho (b) Keywords: Embedded processor; Compiler; Optimization; Addressing mode Abstract: The complexity of today's embedded applications increases with various requirements such as execution time, code size or power consumption. To satisfy these requirements for performance, efficient instruction set design is one of the important issues, because an instruction customized for specific applications can deliver better performance than multiple instructions in terms of fast execution time, reduced code size, and low power consumption. Limited encoding space, however, does not allow adding application specific and complex instructions freely to the instruction set architecture. To resolve this problem, conventional architectures increase free space for encoding by trimming excessive bits required beyond the fixed word length. This approach, however, shows severe weaknesses in terms of compiler complexity, code size and execution time. In this paper, we propose a new instruction encoding scheme based on the dynamic implied addressing mode (DIAM) to resolve the limited encoding space and the side effects of trimming. We report our two versions of architectures to support our DIAM-based approach. In the first version, we use a special on-chip memory to store extra encoding information. In the second version, we replace the memory by a small on-chip buffer along with a special instruction. We also suggest a code generation algorithm to fully utilize DIAM. In our experiment, the architecture augmented with DIAM shows about 8% code size reduction and 18% speed up on average, as compared to the basic architecture without DIAM. Author Affiliation: (a) School of Electrical Engineering and Computer Science, Seoul National University, Republic of Korea (b) School of Electrical Engineering and Computer Science, Kyungpook National University, Republic of Korea Article History: Received 1 October 2009; Revised 14 May 2010; Accepted 31 May 2010 Article Note: (footnote) [star] This is a revised and expanded version of two papers published in the 7th IEEE Symposium on Application Specific Processors, July 2009 and the 11th IEEE International Conference on High Performance Computing and Communications, June 2009.
- Published
- 2010
14. An intelligent backbone formation algorithm for wireless ad hoc networks based on distributed learning automata
- Author
-
Torkestani, Javad Akbari and Meybodi, Mohammad Reza
- Subjects
Robot, Algorithm, Robots, Algorithms, Computer science
- Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.comnet.2009.10.007 Byline: Javad Akbari Torkestani (a), Mohammad Reza Meybodi (b)(c) Keywords: Wireless ad hoc networks; Backbone formation; Broadcast storm problem; Connected dominating set; Distributed learning automata Abstract: In wireless ad hoc networks, due to the dynamic topology changes, multi-hop communications and strict resource limitations, routing becomes the most challenging issue, and broadcasting is a common approach which is used to alleviate the routing problem. Global flooding is a straightforward broadcasting method which is used in almost all existing topology-based routing protocols and suffers from the notorious broadcast storm problem. The connected dominating set (CDS) formation is a promising approach for reducing the broadcast routing overhead in which the messages are forwarded along the virtual backbone induced by the CDS. In this paper, we propose an intelligent backbone formation algorithm based on distributed learning automata (DLA) in which a near optimal solution to the minimum CDS problem is found. Sending along this virtual backbone alleviates the broadcast storm problem as the number of hosts responsible for broadcast routing is reduced to the number of hosts in the backbone. The proposed algorithm can also be used in multicast routing protocols, where only the multicast group members need to be dominated by the CDS. In this paper, the worst case running time and message complexity of the proposed backbone formation algorithm to find a 1/(1-ε) optimal size backbone are computed. It is shown that by a proper choice of the learning rate of the proposed algorithm, a trade-off can be made between the running time and message complexity of the algorithm and the backbone size. The simulation results show that the proposed algorithm significantly outperforms the existing CDS-based backbone formation algorithms in terms of the network backbone size, and its message overhead is only slightly more than the least cost algorithm. Author Affiliation: (a) Department of Computer Engineering, Islamic Azad University, Science and Research Branch, Tehran, Iran (b) Computer Engineering and IT Department, Amirkabir University of Technology, Tehran, Iran (c) Institute for Studies in Theoretical Physics and Mathematics (IPM), School of Computer Science, Tehran, Iran Article History: Received 1 September 2008; Revised 1 October 2009; Accepted 8 October 2009 Article Note: (miscellaneous) Responsible Editor: Edwin Chong
- Published
- 2010
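For reference against result 14, a trivial centralized baseline for a connected dominating set is sketched below: the internal (non-leaf) nodes of any spanning tree of a connected graph form a CDS. This is only a baseline built on that standard observation; the paper's distributed learning-automata algorithm and its size and overhead guarantees are not shown.

```python
from collections import deque

# Hedged centralized baseline for result 14: the internal nodes of any spanning
# tree of a connected graph form a connected dominating set (CDS). The paper's
# distributed learning-automata algorithm and its guarantees are not shown.

def spanning_tree_cds(adj):
    """adj: dict node -> set of neighbours of a connected undirected graph with
    more than two nodes. Returns the internal nodes of a BFS spanning tree."""
    root = next(iter(adj))
    parent = {root: None}
    queue = deque([root])
    internal = set()
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                internal.add(u)      # u gained a tree child, so it is internal
                queue.append(v)
    return internal

if __name__ == "__main__":
    adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 5}, 4: {2, 5}, 5: {3, 4, 6}, 6: {5}}
    backbone = spanning_tree_cds(adj)
    print(sorted(backbone))   # every node outside the backbone has a neighbour in it
```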
15. A novel low-cost, limited-resource approach to autonomous multi-robot exploration and mapping
- Author
-
Gifford, Christopher M., Webb, Russell, Bley, James, Leung, Daniel, Calnon, Mark, Makarewicz, Joseph, Banz, Bryan, and Agah, Arvin
- Subjects
Human beings -- Influence on nature, Robots, Computer science, Robotics, Solar system, Robot, Computers
- Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.robot.2009.09.014 Byline: Christopher M. Gifford, Russell Webb, James Bley, Daniel Leung, Mark Calnon, Joseph Makarewicz, Bryan Banz, Arvin Agah Abstract: Mobile robots are becoming more heavily used in environments where human involvement is limited, impossible, or dangerous. These robots perform some of the more laborious human tasks on Earth and throughout the solar system, simultaneously saving resources and offering automation. Higher levels of autonomy are also being sought in these applications, such as distributed exploration and mapping of unknown areas. Smaller, less expensive mobile robots are becoming more prevalent, which introduces unique challenges in terms of limited sensing accuracy and onboard computing resources. This paper presents a novel low-cost, limited-resource approach to autonomous multi-robot mapping and exploration in unstructured environments. Design and implementation details are presented, along with results from two planetary style environments. Results demonstrate that low-cost ($ 1250) mobile robots capable of simultaneous localization and mapping can be successfully constructed. The multi-robot system presented in this paper participated in the 2008 International Conference on Robotics and Automation (ICRA) Space Robotics Challenge, receiving two awards for successfully completing the 'Onto the Surface' and 'Map the Environment' events in a simulated planetary environment. This work demonstrates not only that such systems are possible, but also that this direction of research is important and needs attention. Author Affiliation: Electrical Engineering and Computer Science Department, University of Kansas, Lawrence, KS 66045, USA Article History: Received 5 September 2008; Revised 2 September 2009; Accepted 21 September 2009
- Published
- 2010
16. Fault localization through evaluation sequences
- Author
-
Zhang, Zhenyu, Jiang, Bo, Chan, W.K., Tse, T.H., and Wang, Xinming
- Subjects
Computer science
- Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.jss.2009.09.041 Byline: Zhenyu Zhang (a), Bo Jiang (a), W.K. Chan (b), T.H. Tse (a), Xinming Wang (c) Keywords: Fault localization; Boolean expression; Predicate; Evaluation sequence Abstract: Predicate-based statistical fault-localization techniques find fault-relevant predicates in a program by contrasting the statistics of the evaluation results of individual predicates between failed runs and successful runs. While short-circuit evaluations may occur in program executions, treating predicates as atomic units ignores this fact, masking out various types of useful statistics on dynamic program behavior. In this paper, we differentiate the short-circuit evaluations of individual predicates on individual program statements, producing one set of evaluation sequences per predicate. We then investigate experimentally the effectiveness of using these sequences to locate faults by comparing existing predicate-based techniques with and without such differentiation. We use both the Siemens program suite and four real-life UNIX utility programs as our subjects. The experimental results show that the proposed use of short-circuit evaluations can, on average, improve predicate-based statistical fault-localization techniques while incurring relatively small performance overhead. Author Affiliation: (a) Department of Computer Science, The University of Hong Kong, Pokfulam, Hong Kong (b) Department of Computer Science, City University of Hong Kong, Tat Chee Avenue, Hong Kong (c) Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Kowloon, Hong Kong Article History: Received 2 March 2009; Revised 12 August 2009; Accepted 22 September 2009 Article Note: (footnote) [star] A preliminary version of this paper () has been presented at the 32nd Annual International Computer Software and Applications Conference (COMPSAC 2008).
- Published
- 2010
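Result 16 contrasts predicate statistics gathered per short-circuit evaluation sequence with statistics gathered per whole predicate. The sketch below ranks evaluation sequences by how much more failure-prone the runs covering them are than the average run; the metric and the run data are illustrative stand-ins, not the techniques compared in the paper.

```python
from collections import defaultdict

# Hedged sketch of predicate-based statistical fault localization scored per
# short-circuit evaluation sequence (result 16). The metric (failure rate of
# runs covering a sequence minus the overall failure rate) and the run data are
# generic stand-ins, not the exact techniques compared in the paper.

def rank_evaluation_sequences(runs):
    """runs: list of (covered_sequences, failed) pairs, one per program run."""
    overall_rate = sum(1 for _, failed in runs if failed) / len(runs)
    cover = defaultdict(lambda: [0, 0])               # seq id -> [failed, passed]
    for sequences, failed in runs:
        for seq in set(sequences):
            cover[seq][0 if failed else 1] += 1
    scores = {seq: f / (f + p) - overall_rate for seq, (f, p) in cover.items()}
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    runs = [
        (["P1:A_true,B_true", "P2:C_false"], True),
        (["P1:A_true,B_skipped"], False),
        (["P1:A_true,B_true"], True),
        (["P2:C_false", "P1:A_false,B_skipped"], False),
    ]
    for seq, score in rank_evaluation_sequences(runs):
        print("%+.2f  %s" % (score, seq))
```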
17. FPGA and ASIC implementations of the ηT pairing in characteristic three
- Subjects
Digital integrated circuits, Cryptography, Universities and colleges, Computer science, Application-specific integrated circuits, Custom integrated circuits, Programmable logic array, Application-specific integrated circuit, Custom IC, Computers, Electronics, Engineering and manufacturing industries
- Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.compeleceng.2009.05.001 Byline: Jean-Luc Beuchat (a), Hiroshi Doi (b), Kaoru Fujita (c), Atsuo Inomata (d), Piseth Ith (a), Akira Kanaoka (a), Masayoshi Katouno (c), Masahiro Mambo (a), Eiji Okamoto (a), Takeshi Okamoto (e), Takaaki Shiga (f), Masaaki Shirase (g), Ryuji Soga (c), Tsuyoshi Takagi (g), Ananda Vithanage (f), Hiroyasu Yamamoto (c) Keywords: Tate pairing; ηT pairing; Elliptic curve cryptography; Finite field arithmetic; Hardware accelerator Abstract: Since their introduction in constructive cryptographic applications, pairings over (hyper)elliptic curves are at the heart of an ever increasing number of protocols. As they rely critically on efficient implementations of pairing primitives, the study of hardware accelerators has become an active research area. In this paper, we propose two coprocessors for the reduced ηT pairing introduced by Barreto et al. as an alternative means of computing the Tate pairing on supersingular elliptic curves. We prototyped our architectures on FPGAs. According to our place-and-route results, our coprocessors compare favorably with other solutions described in the open literature. We eventually present the first ASIC implementation of the reduced ηT pairing. Author Affiliation: (a) Graduate School of Systems and Information Engineering, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8573, Japan (b) Graduate School of Information Security, Institute of Information Security, 2-14-1 Tsuruya-cho Kanagawa-ku, Yokohama 221-0835, Japan (c) FDK Module System Technology Corporation, 1 Kamanomae, Kamiyunagaya-machi, Jyoban, Iwaki-shi, Japan (d) Graduate School of Information Science, Nara Institute of Science and Technology, 8916-5 Takayama, Ikoma, Nara 630-0192, Japan (e) Department of Computer Science, Tsukuba University of Technology, 4-12-7 Kasuga, Tsukuba, Ibaraki 305-8521, Japan (f) FDK Corporation, 1 Kamanomae, Kamiyunagaya-machi, Jyoban, Iwaki-shi, Japan (g) School of Systems Information Science, Future University-Hakodate, 116-2 Kamedanakano-cho, Hakodate, Hokkaido 041-8655, Japan Article History: Received 24 June 2008; Revised 2 February 2009; Accepted 18 May 2009 Article Note: (footnote) [star] This paper is an extended version of .
- Published
- 2010
18. A high capacity reversible data hiding scheme with edge prediction and difference expansion
- Author
-
Wu, Hsien-Chu, Lee, Chih-Chiang, Tsai, Chwei-Shyong, Chu, Yen-Ping, and Chen, Hung-Ruei
- Subjects
Image processor, Image processing -- Equipment and supplies, Image processing -- Statistics, Universities and colleges, Computer science
- Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.jss.2009.06.056 Byline: Hsien-Chu Wu (a), Chih-Chiang Lee (b), Chwei-Shyong Tsai (c), Yen-Ping Chu (d), Hung-Ruei Chen (c) Keywords: Difference expansion; Digital watermarking; Edge prediction; Reversible data hiding; Steganography Abstract: To enhance the embedding capacity of a reversible data hiding system, in this paper, a novel multiple-base lossless scheme based on JPEG-LS pixel value prediction and reversible difference expansion is presented. The proposed scheme employs a pixel value prediction mechanism to decrease the distortion caused by the hiding of the secret data. In general, the prediction error value tends to be much smaller in smooth areas than in edge areas, so embedding more secret data in smooth areas still preserves better stego-image quality. The multiple-base notational system, on the other hand, is applied to increase the payload of the image. With the system, the payload of each pixel, determined by the complexity of its neighboring pixels, can be very different. In addition, the cover image processed by the proposed scheme can be fully recovered without any distortion. Experimental results, as shown in this paper, have demonstrated that the proposed method is capable of hiding more secret data while keeping the stego-image quality degradation imperceptible. Author Affiliation: (a) Department of Computer Science and Information Engineering, National Taichung Institute of Technology, 129, Section 3, San Min Road, Taichung City, Taiwan, ROC (b) Department of Computer Science and Engineering, National Chung Hsing University, 250, Kuo Kuang Road, Taichung City, Taiwan, ROC (c) Department of Management Information Systems, National Chung Hsing University, 250, Kuo Kuang Road, 402 Taichung, Taiwan, ROC (d) Department of Computer Science and Information Engineering, Tunghai University, 181, Section 3, Taichung Port Road, Situn District, Taichung City, Taiwan, ROC Article History: Received 8 August 2007; Revised 24 April 2009; Accepted 23 June 2009
- Published
- 2009
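The scheme in result 18 builds on reversible difference expansion. The sketch below shows the classic Tian-style embed/extract step for a single pixel pair, which is the textbook core of that idea; the paper's JPEG-LS edge prediction, multiple-base payload control and overflow handling are not reproduced.

```python
# Hedged sketch of classic (Tian-style) difference expansion on one pixel pair,
# the reversible core that result 18 builds on. The paper's JPEG-LS edge
# prediction, multiple-base payload control and overflow checks are omitted.

def embed_bit(x, y, bit):
    """Hide one bit in the expanded difference of a pixel pair (x, y)."""
    l = (x + y) // 2          # integer average, kept invariant
    h = x - y                 # difference
    h2 = 2 * h + bit          # expand the difference and append the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def extract_bit(x2, y2):
    """Recover the hidden bit and the original pixel pair."""
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit = h2 - 2 * (h2 // 2)  # h2 mod 2, valid for negative differences too
    h = h2 // 2
    return bit, l + (h + 1) // 2, l - h // 2

if __name__ == "__main__":
    for (x, y), b in [((104, 101), 1), ((57, 60), 0)]:
        x2, y2 = embed_bit(x, y, b)
        assert extract_bit(x2, y2) == (b, x, y)
        print((x, y), "->", (x2, y2), "carries bit", b)
```

Because the integer average is preserved and the difference is expanded losslessly, extraction recovers both the payload bit and the exact original pair, which is the reversibility property the abstract refers to.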
19. On the detection of signaling DoS attacks on 3G/WiMax wireless networks
- Author
-
Lee, Patrick P.C., Bu, Tian, and Woo, Thomas
- Subjects
Algorithm, Network security software, Algorithms, Denial of service attacks, Computer science, Wi-Max, Security software
- Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.comnet.2009.05.008 Byline: Patrick P.C. Lee (a), Tian Bu (b), Thomas Woo (b) Keywords: 3G wireless; Security; DoS attacks Abstract: Third generation (3G) wireless networks based on the CDMA2000 and UMTS standards are now increasingly being deployed throughout the world. Because of their complex signaling and relatively limited bandwidth, these 3G networks are generally more vulnerable than their wireline counterparts, thus making them fertile ground for new attacks. In this paper, we identify and study a novel denial of service (DoS) attack, called signaling attack, that exploits the unique vulnerabilities of the signaling/control plane in 3G wireless networks. Using simulations driven by real traces, we are able to demonstrate the impact of a signaling attack. Specifically, we show how a well-timed low-volume signaling attack can potentially overload the control plane and detrimentally affect the key elements in a 3G wireless infrastructure. The low-volume nature of the signaling attack allows it to avoid detection by existing intrusion detection algorithms, which are often signature or volume-based. As a counter-measure, we present and evaluate an online early detection algorithm based on the statistical CUSUM method. Through the use of extensive trace-driven simulations, we demonstrate that the algorithm is robust and can identify an attack in its inception, before significant damage is done. Apart from 3G networks, we also show that many emerging wide-area networks such as 802.16/WiMax share the same vulnerability and our solution can also apply. Author Affiliation: (a) Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong (b) Bell Labs, Alcatel-Lucent, 600-700 Mountain Avenue, Murray Hill, NJ 07974, USA Article History: Received 15 September 2008; Revised 27 April 2009; Accepted 20 May 2009 Article Note: (footnote) [star] An earlier and shorter conference version of this paper appeared in IEEE INFOCOM'07 . This paper additionally includes a discussion of the signaling attack in WiMax/802.16, provides evaluation on different dimensions of attacks, and presents more rigorous arguments in some of our analysis.
- Published
- 2009
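The counter-measure in result 19 is built on the statistical CUSUM method. A minimal one-sided CUSUM detector over per-interval signaling-request counts is sketched below; the baseline, slack and threshold values are invented for the example, and the paper's exact statistic is not reproduced.

```python
# Hedged sketch of one-sided CUSUM change detection on per-second counts of
# signaling requests; baseline, slack and threshold values are illustrative only.

def cusum_alarm(samples, baseline, slack, threshold):
    """Return the index at which the accumulated upward drift exceeds the
    threshold, or None if no alarm is raised."""
    g = 0.0
    for i, x in enumerate(samples):
        g = max(0.0, g + (x - baseline - slack))   # accumulate only upward drift
        if g > threshold:
            return i
    return None

if __name__ == "__main__":
    normal = [4, 5, 3, 6, 5, 4, 5, 6, 4, 5]
    attack = [9, 10, 11, 9, 12, 10]                # low-volume but sustained burst
    alarm_at = cusum_alarm(normal + attack, baseline=5.0, slack=1.0, threshold=10.0)
    print("alarm at sample", alarm_at)
```

The point of the accumulation is exactly what the abstract emphasises: a low-volume attack that a per-sample volume threshold would miss still produces a steady positive drift that crosses the CUSUM threshold early.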
20. Generic operations and capabilities in the JR concurrent programming language
- Author
-
Chan, Hiu Ning, Angela, Gallagher, Andrew J., Goundan, Appu S., Yeung, Yi Lin William Au, Keen, Aaron W., and Olsson, Ronald A.
- Subjects
Computer programming, Computer science, Generic drugs, Computers
- Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.cl.2008.05.002 Byline: Hiu Ning (Angela) Chan (a), Andrew J. Gallagher (a), Appu S. Goundan (a), Yi Lin William Au Yeung (a), Aaron W. Keen (b), Ronald A. Olsson (a) Abstract: The JR concurrent programming language extends Java with additional concurrency mechanisms, which are built upon JR's operations and capabilities. JR operations generalize methods in how they can be invoked and serviced. JR capabilities act as reference to operations. Recent changes to the Java language and implementation, especially generics, necessitated corresponding changes to the JR language and implementation. This paper describes the new JR language features (known as JR2) of generic operations and generic capabilities. These new features posed some interesting implementation challenges. The paper describes our initial implementation (JR21) of generic operations and capabilities, which works in many, but not all, cases. It then describes the approach our improved implementation (JR24) uses to fully implement generic operations and capabilities. The paper also describes the benchmarks used to assess the compilation and execution time performances of JR21 and JR24. The JR24 implementation reduces compilation times, mainly due to reducing the number of files generated during JR program translation, without noticeably impacting execution times. Author Affiliation: (a) Department of Computer Science, University of California, Davis, CA 95616, USA (b) Computer Science Department, California Polytechnic State University, San Luis Obispo, CA 93407, USA Article History: Received 24 January 2008; Accepted 16 May 2008
- Published
- 2009
- Full Text
- View/download PDF
21. On the reliability of large-scale distributed systems - A topological view
- Author
-
He, Yuan, Ren, Hao, Liu, Yunhao, and Yang, Baijian
- Subjects
Algorithm, Yuan (China), Peer to peer computing, Algorithms, Computer science
- Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.comnet.2009.03.012 Byline: Yuan He (a), Hao Ren (b), Yunhao Liu (a), Baijian Yang (c) Keywords: Cut vertex; Peer-to-peer; Reliability; Detection; Distributed method Abstract: In large-scale, self-organized distributed systems, such as peer-to-peer (P2P) overlays and wireless sensor networks (WSN), a small proportion of the nodes are likely to be more critical to the system's reliability than others. This paper focuses on detecting cut vertices so that we can either neutralize or protect these critical nodes. Detection of cut vertices is trivial if the global knowledge of the whole system is known but it is very challenging when the global knowledge is not available. In this paper, we propose a completely distributed scheme where every single node can determine whether it is a cut vertex or not. In addition, our design can also confine the detection overhead to a constant instead of being proportional to the size of a network. The correctness of this algorithm is theoretically proved and the key performance gains are measured and verified through trace-driven simulations. Author Affiliation: (a) Hong Kong University of Science and Technology, Department of Computer Science and Engineering, Hong Kong (b) College of Computer, National University of Defense Technology, Changsha, Hunan 410073, China (c) Department of Technology, Ball State University, Muncie, IN 47306, United States Article History: Received 12 August 2008; Revised 25 March 2009; Accepted 25 March 2009 Article Note: (miscellaneous) Responsible Editor: A. Capone
- Published
- 2009
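Result 21 is about detecting cut vertices without global knowledge. For orientation, the sketch below is the textbook centralized alternative (DFS-based articulation-point detection with full topology knowledge); the paper's distributed, constant-overhead scheme is precisely what this baseline is not.

```python
# Hedged sketch: textbook centralized cut-vertex (articulation point) detection
# via DFS, i.e. the global-knowledge baseline that result 21 improves on with a
# distributed, constant-overhead scheme (not reproduced here).

def cut_vertices(adj):
    """adj: dict node -> iterable of neighbours (undirected graph)."""
    disc, low, cuts, timer = {}, {}, set(), [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                      # back edge to an ancestor
                low[u] = min(low[u], disc[v])
            else:                              # tree edge
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if parent is not None and low[v] >= disc[u]:
                    cuts.add(u)
        if parent is None and children > 1:    # root rule
            cuts.add(u)

    for node in adj:
        if node not in disc:
            dfs(node, None)
    return cuts

if __name__ == "__main__":
    # Two triangles joined by the edge 2-3: removing node 2 or 3 disconnects them.
    adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
    print(sorted(cut_vertices(adj)))   # [2, 3]
```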
22. An assessment of systems and software engineering scholars and institutions (2002-2006)
- Author
-
Wong, W. Eric, Tse, T.H., Glass, Robert L., Basili, Victor R., and Chen, T.Y.
- Subjects
Software development/engineering, Software quality, Software engineering, Software, Computer science, Universities and colleges
- Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.jss.2009.06.018 Byline: W. Eric Wong (a), T.H. Tse (b), Robert L. Glass (c), Victor R. Basili (d), T.Y. Chen (e) Keywords: Top scholars; Top institutions; Systems and software engineering; Research publications Abstract: This paper summarizes a survey of publications in the field of systems and software engineering from 2002 to 2006. The survey is an ongoing, annual event that identifies the top 15 scholars and institutions over a 5-year period. The rankings are calculated based on the number of papers published in TSE, TOSEM, JSS, SPE, EMSE, IST, and Software. The top-ranked institution is Korea Advanced Institute of Science and Technology, Korea, and the top-ranked scholar is Magne Jørgensen of Simula Research Laboratory, Norway. Author Affiliation: (a) Department of Computer Science, The University of Texas at Dallas, Richardson, TX 75083, USA (b) Department of Computer Science, The University of Hong Kong, Pokfulam, Hong Kong (c) Computing Trends, 18 View Street, Paddington, QLD 4064, Australia (d) Department of Computer Science, University of Maryland, College Park, MD 20742, USA (e) Faculty of Information and Communication Technologies, Swinburne University of Technology, John Street, Melbourne 3122, Australia Article History: Received 5 June 2009; Accepted 5 June 2009
- Published
- 2009
23. An interdisciplinary approach to coalition formation
- Author
-
Berghammer, Rudolf, Rusinowska, Agnieszka, and De Swart, Harrie
- Subjects
Algebra; Computer science; Algorithms; Algorithm; Business; Business, general; Business, international
- Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.ejor.2008.02.011 Byline: Rudolf Berghammer (a), Agnieszka Rusinowska (b), Harrie de Swart (c) Keywords: Graph theory; RelView; Relational algebra; Dominance; Stable government Abstract: A stable government is by definition not dominated by any other government. However, it may happen that all governments are dominated. In graph-theoretic terms this means that the dominance graph does not possess a source. In this paper we are able to deal with this case by a clever combination of notions from different fields, such as relational algebra, graph theory and social choice theory, and by using the computer support system RelView for computing solutions and visualizing the results. Using relational algorithms, in such a case we break all cycles in each initial strongly connected component by removing the vertices in an appropriate minimum feedback vertex set. In this way we can choose a government that is as close as possible to being un-dominated. To achieve unique solutions, we additionally apply the majority ranking recently introduced by Balinski and Laraki. The main parts of our procedure can be executed using the RelView tool. Its sophisticated implementation of relations allows it to deal with graph sizes that are sufficient for practical applications of coalition formation. Author Affiliation: (a) Institute of Computer Science, University of Kiel, Olshausenstraße 40, 24098 Kiel, Germany (b) GATE, CNRS, Université Lumière Lyon 2, 93 Chemin des Mouilles, B.P.167, 69131 Ecully Cedex, France (c) Department of Philosophy, Tilburg University, P.O. Box 90153, 5000 LE Tilburg, The Netherlands Article History: Received 4 December 2006; Accepted 12 February 2008 Article Note: (footnote) [star] Co-operation for this paper was supported by European COST Action 274 'Theory and Applications of Relational Structures as Knowledge Instruments' (TARSKI).
- Published
- 2009
- Full Text
- View/download PDF
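The graph-theoretic core of result 23 is easy to state: a government is stable iff it is un-dominated, i.e. a source of the dominance graph. The sketch below only checks for sources on a made-up dominance relation; the RelView relational-algebra machinery and the feedback-vertex-set repair step are not reproduced.

```python
# Hedged sketch of the reformulation in result 23: a government is stable iff
# it is un-dominated, i.e. a source of the dominance graph. The RelView tooling
# and the feedback-vertex-set repair step are not reproduced; the dominance
# relation below is invented.

def sources(dominates):
    """dominates: dict g -> set of governments that g dominates.
    A source is a government dominated by nobody."""
    dominated = set()
    for losers in dominates.values():
        dominated |= losers
    return [g for g in dominates if g not in dominated]

if __name__ == "__main__":
    # Toy dominance relation with a 3-cycle: no government is un-dominated.
    dominance = {"ABC": {"ABD"}, "ABD": {"ACD"}, "ACD": {"ABC"}}
    stable = sources(dominance)
    print(stable or "every government is dominated; break the cycles instead")
```

The empty-source case is exactly the situation the paper handles by breaking cycles in each strongly connected component.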
24. Real-time FFT with pre-calculation
- Author
-
Yen, Wen-Fang, You, Shingchern D., and Chang, Yung-Chao
- Subjects
Algorithms, Computer science, Algorithm, Computers, Electronics, Engineering and manufacturing industries
- Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.compeleceng.2008.10.002 Byline: Wen-Fang Yen (a), Shingchern D. You (b), Yung-Chao Chang (c) Keywords: Real-time FFT; Real-time signal processing; Completion delay Abstract: A pre-calculation technique for real-time FFT is presented in this paper. The real-time FFT algorithm simultaneously constructs and computes the butterfly modules while the incoming data is collected. Thus, the time to complete the FFT calculation is shorter when compared to the conventional FFT. The proposed pre-calculation process is verified to further reduce this time. Furthermore, depending on the computing capability of the processor, different numbers of pre-calculation stages are also suggested in the paper for better performance. For a critical mission requiring a shorter time to complete the FFT calculation, the proposed approach is a better choice. Author Affiliation: (a) Department of Electronic Engineering, Hwa Hsia Institute of Technology, 111, Gong Jhuan Rd., Chung Ho, Taipei, Taiwan, ROC (b) Department of Computer Science and Information Engineering, National Taipei University of Technology, 1, Sec. 3, Chung-Hsiao East Rd., Taipei, Taiwan, ROC (c) Delhum Technology and Service Corp., 9F, 332, Sec. 1, Tun-Hwa South Rd., Taipei, Taiwan, ROC Article History: Received 15 April 2008; Accepted 22 October 2008
- Published
- 2009
25. Dynamic Web Service discovery architecture based on a novel peer based overlay network
- Author
-
Sioutas, S., Sakkopoulos, E., Makris, Ch., Vassiliadis, B., Tsakalidis, A., and Triantafillou, P.
- Subjects
Web services, Computer science, Business, Computers and office automation industries
- Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.jss.2008.11.845 Byline: S. Sioutas (a), E. Sakkopoulos (b), Ch. Makris (b), B. Vassiliadis (c), A. Tsakalidis (b), P. Triantafillou (b) Keywords: Web Services Discovery; Peer to Peer overlay networks; Databases Abstract: Service Oriented Computing and its most famous implementation technology Web Services (WS) are becoming an important enabler of networked business models. Discovery mechanisms are a critical factor to the overall utility of Web Services. So far, discovery mechanisms based on the UDDI standard rely on many centralized and area-specific directories, which pose information stress problems such as performance bottlenecks and limited fault tolerance. In this context, decentralized approaches based on Peer to Peer overlay networks have been proposed by many researchers as a solution. In this paper, we propose a new structured P2P overlay network infrastructure designed for Web Services Discovery. We present theoretical analysis backed up by experimental results, showing that the proposed solution outperforms popular decentralized infrastructures for web discovery, Chord (and some of its successors), BATON (and its successor) and Skip-Graphs. Author Affiliation: (a) Department of Informatics, Ionian University, 7 Tsirigoti Square, 49100 Corfu, Greece (b) Computer Engineering and Informatics Department, University of Patras, Greece (c) Computer Science, Hellenic Open University, Patras, Greece Article History: Received 18 June 2008; Revised 12 November 2008; Accepted 13 November 2008 Article Note: (footnote) [star] A preliminary version of this paper was presented in the Proceedings of the IEEE International Conference on Information Technology: Coding and Computing, Next - Generation Web and Grid Systems (IEEE/ITCC 2005), pp. 193-198.
- Published
- 2009
- Full Text
- View/download PDF
26. An anomaly prevention approach for real-time task scheduling
- Author
-
Chen, Ya-Shu, Chang, Li-Pin, Kuo, Tei-Wei, and Mok, Aloysius K.
- Subjects
Computer science, Business, Computers and office automation industries
- Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.jss.2008.07.038 Byline: Ya-Shu Chen (a), Li-Pin Chang (b), Tei-Wei Kuo (c), Aloysius K. Mok (d) Keywords: Scheduling anomaly; Real-time task scheduling; Process synchronization; Scheduler stability Abstract: This research responds to practical requirements in the porting of embedded software over platforms and the well-known multiprocessor anomaly. In particular, we consider the task scheduling problem when the system configuration changes. With mutual-exclusive resource accessing, we show that new violations of the timing constraints of tasks might occur even when a more powerful processor or device is adopted. The concept of scheduler stability and rules are then proposed to prevent scheduling anomaly from occurring in task executions that might be involved with task synchronization or I/O access. Finally, we explore policies for bounding the duration of scheduling anomalies. Author Affiliation: (a) Department of Electronic Engineering, National Taiwan University of Science and Technology, Taipei 106, Taiwan (b) Department of Computer Science, National Chiao-Tung University, Hsin-Chu 300, Taiwan (c) Department of Computer Science and Information Engineering, National Taiwan University, Taipei 106, Taiwan (d) Department of Computer Sciences, The University of Texas at Austin, Austin, TX 78712, USA Article History: Received 8 June 2006; Revised 2 July 2008; Accepted 21 July 2008 Article Note: (footnote) [star] This paper is an extended version of the paper that appeared in .
- Published
- 2009
- Full Text
- View/download PDF
27. Exploiting an abstract-machine-based framework in the design of a Java ILP processor
- Author
-
Wang, H.C. and Yuen, C.K.
- Subjects
Computer science ,Business ,Computers and office automation industries - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.sysarc.2008.07.006 Byline: H.C. Wang, C.K. Yuen Keywords: Abstract machine; Java processor; Embedded processor; ILP; VLIW Abstract: Abstract machines bridge the gap between the high level of programming languages and the low-level mechanisms of a real machine. The paper proposes a general abstract-machine-based framework (AMBF) to build instruction level parallelism processors using the instruction tagging technique. The constructed processor may accept code written in any (abstract or real) machine instruction set, and produce tagged machine code after data conflicts are resolved. This requires the construction of a tagging unit which emulates the sequential execution of the program using tags rather than actual values. The paper presents a Java ILP processor by using the proposed framework. The Java processor takes advantage of the tagging unit to dynamically translate Java bytecode instructions to RISC-like tag-based instructions to facilitate the use of a general-purpose RISC core and enable the exploitation of instruction level parallelism. We detail the Java ILP processor architecture and the design issues. Benchmarking of the Java processor using SpecJVM98 and Linpack has shown overall ILP speedup improvements of between 78% and 173%. Author Affiliation: Department of Computer Science, School of Computing, National University of Singapore, 3 Science Drive 2, Singapore 117543, Singapore Article History: Received 16 June 2007; Revised 29 April 2008; Accepted 30 July 2008
- Published
- 2009
- Full Text
- View/download PDF
28. The Hamburg Metaphor Database project: issues in resource creation
- Author
-
Lönneker-Rodman, Birte
- Subjects
Computer science, Algorithms, Databases, CD-ROM catalog, CD-ROM database, Database, Algorithm, Computers, Humanities
- Abstract
Byline: Birte Lönneker-Rodman (1) Keywords: Agreement; Annotation; Conceptual information; Evaluation; Lexical information; Mapping; Metaphor; Resource creation Abstract: This paper concerns metaphor resource creation. It provides an account of methods used, problems discovered, and insights gained at the Hamburg Metaphor Database project, intended to inform similar resource creation initiatives, as well as future metaphor processing algorithms. After introducing the project, the theoretical underpinnings that motivate the subdivision of represented information into a conceptual and a lexical level are laid out. The acquisition of metaphor attestations from electronic corpora is explained, and annotation practices as well as database contents are evaluated. The paper concludes with an overview of related projects and an outline of possible future work. Author Affiliation: (1) International Computer Science Institute, 1947 Center Street, Suite 600, Berkeley, CA, 94704, USA Article History: Registration Date: 02/09/2008 Received Date: 12/09/2007 Accepted Date: 02/09/2008 Online Date: 01/10/2008
- Published
- 2008
29. A hierarchical key management scheme for secure group communications in mobile ad hoc networks
- Author
-
Wang, Nen-Chung and Fang, Shian-Zhang
- Subjects
Protocol, Tunnels, Computer science, Computer network protocols
- Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.jss.2006.12.564 Byline: Nen-Chung Wang (a), Shian-Zhang Fang (b) Keywords: Group communication; Group key; Key management; Mobile ad hoc networks; Network security Abstract: A mobile ad hoc network (MANET) is a kind of wireless communication infrastructure that does not have base stations or routers. Each node acts as a router and is responsible for dynamically discovering other nodes it can directly communicate with. However, when a message without encryption is sent out through a general tunnel, it may be maliciously attacked. In this paper, we propose a hierarchical key management scheme (HKMS) for secure group communications in MANETs. For the sake of security, we encrypt a packet twice. Due to the frequent changes of the topology of a MANET, we also discuss group maintenance in this paper. Finally, we conducted the security and performance analysis to compare the proposed scheme with Tseng et al.'s [Tseng, Y.-M., Yang, C.-C., Liao, D.-R., 2007. A secure group communication protocol for ad hoc wireless networks. In: Advances in Wireless Ad Hoc and Sensor Networks and Mobile Computing. Book Series Signal and Communication Technology. Springer] and Steiner et al.'s [Steiner, M., Tsudik, G., Waidner, M., 1998. CLIQUES: a new approach to group key agreement. In: Proceedings of the 18th IEEE International Conference on Distributed Computing System. Amsterdam, Netherlands, pp. 380-387] schemes. Author Affiliation: (a) Department of Computer Science and Information Engineering, National United University, Miao-Li 360, Taiwan, ROC (b) Department of Computer Science and Information Engineering, Chaoyang University of Technology, Taichung 413, Taiwan, ROC Article History: Received 11 March 2006; Revised 24 November 2006; Accepted 22 December 2006
- Published
- 2007
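Result 29 states that, for security, each packet is encrypted twice. The sketch below shows only that layering idea with two symmetric keys (an assumed inner subgroup key and an outer group key) using the cryptography package's Fernet API; it is not the paper's HKMS key hierarchy, group maintenance or rekeying protocol.

```python
# Hedged sketch of the "encrypt a packet twice" idea from result 29, using two
# symmetric keys (an assumed inner subgroup key, then an outer group key).
# This is not the paper's HKMS hierarchy or rekeying protocol, just layering.
# Requires the third-party 'cryptography' package.

from cryptography.fernet import Fernet

subgroup_key = Fernet.generate_key()   # shared within a subgroup (assumption)
group_key = Fernet.generate_key()      # shared by the whole group (assumption)

def double_encrypt(plaintext: bytes) -> bytes:
    inner = Fernet(subgroup_key).encrypt(plaintext)
    return Fernet(group_key).encrypt(inner)

def double_decrypt(ciphertext: bytes) -> bytes:
    inner = Fernet(group_key).decrypt(ciphertext)
    return Fernet(subgroup_key).decrypt(inner)

if __name__ == "__main__":
    packet = b"multicast payload"
    assert double_decrypt(double_encrypt(packet)) == packet
    print("round trip ok")
```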
30. Distributed mechanism in detecting and defending against the low-rate TCP attack
- Author
-
Sun, Haibin, Lui, John C.S., and Yau, David K.Y.
- Subjects
TCP/IP, Transmission Control Protocol/Internet Protocol (Computer network protocol), Computer science
- Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.comnet.2005.09.016 Byline: Haibin Sun (a), John C.S. Lui (a), David K.Y. Yau (b) Keywords: Network security; DoS attack; Low-rate attack Abstract: In this paper, we consider a distributed mechanism to detect and to defend against the low-rate TCP attack. The low-rate TCP attack is a recently discovered attack. In essence, it is a periodic short burst that exploits the homogeneity of the minimum retransmission timeout (RTO) of TCP flows and forces all affected TCP flows to back off and enter the retransmission timeout state. When these affected TCP flows time out and retransmit their packets, the low-rate attack will again send a short burst to force these affected TCP flows to enter RTO again. Therefore these affected TCP flows may be left with zero or very low transmission bandwidth. This sort of attack is difficult to identify due to a large family of attack patterns. We propose a distributed detection mechanism to identify the low-rate attack. In particular, we use the dynamic time warping approach to robustly and accurately identify the existence of the low-rate attack. Once the attack is detected, we use a fair resource allocation mechanism to schedule all packets so that (1) the number of affected TCP flows is minimized and (2) sufficient resource protection is provided to those affected TCP flows. Experiments are carried out to quantify the robustness and accuracy of the proposed distributed detection mechanism. In particular, one can achieve a very low false positive/negative rate when compared to legitimate Internet traffic. Our experiments also illustrate the efficiency of the defense mechanism across different attack patterns and network topologies. Author Affiliation: (a) Department of Computer Science and Engineering, The Chinese University of Hong Kong, Shatin, NT, Hong Kong (b) Department of Computer Science, Purdue University, USA Article History: Received 5 January 2005; Revised 9 July 2005; Accepted 8 September 2005 Article Note: (footnote) [star] The preliminary version of this paper appeared in the International Conference of Network Protocols (ICNP) 2004, Berlin, Germany.
- Published
- 2006
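The detection mechanism in result 30 relies on dynamic time warping to match sampled traffic against low-rate attack patterns. Below is a plain O(nm) DTW distance with made-up traffic and template values; the paper's actual templates, thresholds and defence pipeline are not shown.

```python
# Hedged sketch of plain dynamic time warping (DTW), the matching technique the
# detection mechanism in result 30 builds on. Traffic values and the attack
# template below are invented; thresholds and the full defence are not shown.

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW with absolute-difference local cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

if __name__ == "__main__":
    template = [9, 0, 0, 0, 9, 0, 0, 0, 9]      # periodic short bursts
    sampled = [8, 1, 0, 0, 0, 9, 0, 1, 0, 10]   # same shape, slightly warped
    benign = [3, 4, 3, 2, 4, 3, 3, 2, 4, 3]
    print(dtw_distance(sampled, template), dtw_distance(benign, template))
```

The warping is what makes the match robust to the timing jitter that lets low-rate bursts evade fixed-window, volume-based detectors.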
31. Automatic generation of test cases from Boolean specifications using the MUMCUT strategy
- Author
-
Yu, Yuen Tak, Lau, Man Fai, and Chen, Tsong Yueh
- Subjects
Universities and colleges, Computer science
- Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.jss.2005.08.016 Byline: Yuen Tak Yu (a), Man Fai Lau (b), Tsong Yueh Chen (b) Keywords: Black-box testing; Boolean specification; Fault-based testing; Specification-based testing; Test case generation Abstract: A recent theoretical study has proved that the MUMCUT testing strategy (1) guarantees to detect seven types of fault in Boolean specifications in irredundant disjunctive normal form, and (2) requires only a subset of the test sets that satisfy the previously proposed MAX-A and MAX-B strategies, which can detect the same types of fault. This paper complements previous work by investigating various methods for the automatic generation of test cases to satisfy the MUMCUT strategy. We evaluate these methods by using several sets of Boolean expressions, including those derived from real airborne software systems. Our results indicate that the greedy CUN and UCN methods are clearly better than others in consistently producing significantly smaller test sets, whose sizes exhibit linear correlation with the length of the Boolean expressions in irredundant disjunctive normal form. This study provides empirical evidence that the MUMCUT strategy is indeed cost-effective for detecting the faults considered in this paper. Author Affiliation: (a) Department of Computer Science, City University of Hong Kong, Kowloon Tong, Hong Kong (b) Faculty of Information and Communication Technologies, Swinburne University of Technology, Hawthorn 3122, Australia Article History: Received 27 February 2003; Revised 17 August 2005; Accepted 19 August 2005 Article Note: (footnote) [star] The work is supported in part by a grant from City University of Hong Kong (project no. 7001233), a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (RGC ref. no. CityU 1083/00E), and Australian Research Council Discovery Grants (project IDs: DP0345147 and DP0558597).
- Published
- 2006
32. An assessment of systems and software engineering scholars and institutions (2000-2004)
- Author
-
Tse, T.H., Chen, T.Y., and Glass, Robert L.
- Subjects
Software development/engineering ,Software quality ,Software engineering ,Software ,Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.jss.2005.08.018 Byline: T.H. Tse (a), T.Y. Chen (b), Robert L. Glass (c) Keywords: Top scholars; Top institutions; Research publications; Systems and software engineering Abstract: This paper presents the findings of a five-year study of the top scholars and institutions in the Systems and Software Engineering field, as measured by the quantity of papers published in the journals of the field in 2000-2004. The top scholar is Hai Zhuge of the Chinese Academy of Sciences, and the top institution is Korea Advanced Institute of Science and Technology. This paper is part of an ongoing study, conducted annually, that identifies the top 15 scholars and institutions in the most recent five-year period. Author Affiliation: (a) Department of Computer Science, The University of Hong Kong, Pokfulam, Hong Kong (b) Faculty of Information and Communication Technologies, Swinburne University of Technology, John Street, Melbourne 3122, Australia (c) Computing Trends, 18 View St., Brisbane, QLD 4064, Australia Article History: Received 29 July 2005; Revised 2 August 2005; Accepted 13 August 2005
- Published
- 2006
33. Adaptive schemes for location update generation in execution location-dependent continuous queries
- Author
-
Lam, Kam-Yiu and Ulusoy, Özgür
- Subjects
Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.jss.2005.07.015 Byline: Kam-Yiu Lam (a), Özgür Ulusoy (b) Keywords: Location-dependent continuous queries; Location update; Moving object database; Location management Abstract: An important feature expected of today's mobile computing systems is the ability to process location-dependent continuous queries on moving objects. The result of a location-dependent query depends on the current location of the mobile client that has generated the query as well as the locations of the moving objects on which the query has been issued. When a location-dependent query is specified to be continuous, the result of the query can continuously change. In order to provide accurate and timely query results to a client, the location of the client as well as the locations of moving objects in the system have to be closely monitored. Most of the location update generation methods proposed in the literature aim to optimize utilization of the limited wireless bandwidth. The issues of correctness and timeliness of query results reported to clients have been largely ignored. In this paper, we propose an adaptive monitoring method (AMM) and a deadline-driven method (DDM) for managing the locations of moving objects. The aim of our methods is to generate location updates while maintaining the correctness of query evaluation results without increasing the location update workload. Extensive simulation experiments have been conducted to investigate the performance of the proposed methods as compared to a well-known location update generation method, plain dead-reckoning (pdr). Author Affiliation: (a) Department of Computer Science, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon, Hong Kong (b) Department of Computer Engineering, Bilkent University, Bilkent, Ankara 06800, Turkey Article History: Received 27 January 2004; Revised 1 July 2005; Accepted 15 July 2005 Article Note: (footnote) [star] The work described in this paper was supported by a research grant from the Research Grant Council of the Hong Kong Special Administrative Region, China [Project No. CityU 1076/02E].
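The abstract compares the proposed AMM and DDM methods against plain dead-reckoning (pdr), under which a moving client reports its location only when the deviation between its actual position and the position the server would predict from the last report exceeds a threshold. A minimal sketch of that baseline policy is given below; the threshold value and the linear motion model are assumptions for illustration.

```python
import math

# Sketch of a plain dead-reckoning (pdr) style update policy: the client
# sends a location update only when its true position drifts more than a
# threshold away from the server's linear prediction based on the last
# report. Threshold and motion model are illustrative assumptions.

THRESHOLD = 50.0   # metres, assumed

class DeadReckoningClient:
    def __init__(self, x, y, vx, vy):
        self.last_report = (x, y, vx, vy, 0.0)  # position, velocity, report time

    def predicted(self, t):
        x, y, vx, vy, t0 = self.last_report
        return (x + vx * (t - t0), y + vy * (t - t0))

    def maybe_update(self, t, actual_x, actual_y, vx, vy):
        px, py = self.predicted(t)
        if math.hypot(actual_x - px, actual_y - py) > THRESHOLD:
            self.last_report = (actual_x, actual_y, vx, vy, t)
            return True    # an update message would be sent to the server
        return False

client = DeadReckoningClient(0.0, 0.0, 10.0, 0.0)
print(client.maybe_update(5.0, 48.0, 20.0, 10.0, 4.0))    # ~20 m off -> no update
print(client.maybe_update(10.0, 100.0, 80.0, 10.0, 4.0))  # ~80 m off -> update sent
```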
- Published
- 2006
34. Memory latency consideration for load sharing on heterogeneous network of workstations
- Author
-
Wang, Yi-Min
- Subjects
Computer industry ,Microcomputer industry ,Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.sysarc.2005.03.002 Byline: Yi-Min Wang Keywords: Heterogeneous network of workstations; Load sharing; Process migration; Memory latency Abstract: With the development of cheap personal computers and high-speed networks, the heterogeneous network of workstations has become the trend in high-performance computing. This paper focuses on the load sharing problem for heterogeneous networks of workstations. Load sharing means evenly distributing the workload across all coordinated computers in the heterogeneous system without leaving any computer idle. When some nodes suffer from heavy loading, it is necessary to migrate some processes to the nodes with light loading. However, most load sharing policies focus only on different CPU speeds and/or memory capacities without taking the effect of memory access latencies into consideration. In this paper, we propose a new load sharing policy, the CPU-Memory-Power-based policy, to improve on the CPU-Memory-based policy. In addition to CPU speed and memory capacity, this policy also puts emphasis on memory access latency. Experimental results show that this method performs better than the other policies, and that memory access latency is actually an important consideration in the design of load sharing policies on heterogeneous networks of workstations. Author Affiliation: Department of Computer Science and Information Management, Providence University, 200 Chung-chi Road, Shalu, Taichung Hsien, 433 Taiwan, ROC Article History: Received 22 October 2002; Revised 23 January 2004; Accepted 15 March 2005 Article Note: (footnote) [star] This research work was partially supported by the National Science Council of the Republic of China under grant NSC91-2213-E-126-003.
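As a hedged illustration of the idea behind a CPU-Memory-Power style policy (not the paper's actual formula), the sketch below scores candidate target nodes by combining CPU speed, free memory against the migrating process's demand, and memory access latency, then picks the best-scoring lightly loaded node.

```python
# Illustrative node-selection sketch for load sharing on a heterogeneous
# network of workstations. The scoring formula and numbers are assumptions;
# the paper's CPU-Memory-Power-based policy is not reproduced here.

NODES = [
    # name, relative CPU speed, free memory (MB), memory latency (ns), load
    ("ws1", 1.0, 256, 120, 0.9),
    ("ws2", 1.6,  64, 180, 0.2),
    ("ws3", 1.2, 512,  90, 0.3),
]

def score(node, mem_demand_mb):
    name, cpu, free_mem, latency, load = node
    if free_mem < mem_demand_mb:
        return float("-inf")          # the process would page heavily; rule it out
    # Higher CPU speed and lower latency/load are better (assumed weighting).
    return cpu / ((1.0 + load) * latency)

def pick_target(mem_demand_mb):
    return max(NODES, key=lambda n: score(n, mem_demand_mb))[0]

print(pick_target(128))   # -> "ws3" under these illustrative numbers
```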
- Published
- 2006
35. Two controlled experiments concerning the comparison of pair programming to peer review
- Author
-
Müller, Matthias M.
- Subjects
Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.jss.2004.12.019 Byline: Matthias M. Müller Keywords: Pair programming; Peer reviews; Empirical software engineering; Controlled experiment Abstract: This paper reports on two controlled experiments comparing pair programming with single developers who are assisted by an additional anonymous peer code review phase. The experiments were conducted in the summer semesters of 2002 and 2003 at the University of Karlsruhe with 38 computer science students. Instead of comparing pair programming to solo programming, this study aims at finding a technique by which a single developer produces program quality similar to that of programmer pairs but at moderate cost. The study has one major finding concerning the cost of the two development methods. Single developers are as costly as programmer pairs if both programmer pairs and single developers with an additional review phase are forced to produce programs of a similar level of correctness. In conclusion, programmer pairs and single developers become interchangeable in terms of development cost. As this paper reports on the results of small development tasks, the comparison could not take into account the long-term benefits of either technique. Author Affiliation: Department of Computer Science, Fakultät für Informatik, Universität Karlsruhe, Am Fasanengarten 5, 76131 Karlsruhe, Germany Article History: Received 3 June 2004; Revised 23 December 2004; Accepted 24 December 2004
- Published
- 2005
36. ConSUS: a light-weight program conditioner
- Subjects
Algorithm ,Algorithms ,Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.jss.2004.03.034 Byline: Sebastian Danicic (a), Mohammed Daoudi (a), Chris Fox (b), Mark Harman (c), Robert M. Hierons (c), John R. Howroyd (a), Lahcen Ourabya (a), Martin Ward (d) Abstract: Program conditioning consists of identifying and removing a set of statements which cannot be executed when a condition of interest holds at some point in a program. It has been applied to problems in maintenance, testing, re-use and re-engineering. All current approaches to program conditioning rely upon both symbolic execution and reasoning about symbolic predicates. The reasoning can be performed by a 'heavy-duty' theorem prover, but this may impose unrealistic performance constraints. This paper reports on a lightweight approach to theorem proving using the FermaT Simplify decision procedure. This is used as a component of ConSUS, a program conditioning system for the Wide Spectrum Language WSL. The paper describes the symbolic execution algorithm used by ConSUS, which prunes as it conditions. The paper also provides empirical evidence that conditioning produces a significant reduction in program size and, although exponential in the worst case, the conditioning system has low-degree polynomial behaviour in many cases, thereby making it scalable to unit-level applications of program conditioning. Author Affiliation: (a) Department of Mathematics and Computer Science, Goldsmiths College, University of London, New Cross, London SE14 6NW, United Kingdom (b) Department of Computer Science, University of Essex, Wivenhoe Park, Colchester, CO4 3SQ, United Kingdom (c) Department of Information Systems and Computing, Brunel University, Uxbridge, Middlesex, UB8 3PH, United Kingdom (d) Software Technology Research Laboratory, De Montfort University, The Gateway, Leicester LE1 9BH, United Kingdom
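The sketch below illustrates the general idea of program conditioning in a toy setting: statements guarded by a predicate that cannot hold when the condition of interest holds are removed. It is a drastic simplification that does not use symbolic execution or the FermaT Simplify decision procedure mentioned in the abstract; the tiny program representation and interval reasoning are assumptions.

```python
# Toy conditioning sketch: drop statements whose guard cannot hold when a
# condition of interest holds. Guards and the condition are simple interval
# constraints on a single integer variable x, decided by interval reasoning
# rather than by the FermaT Simplify decision procedure used by ConSUS.

CONDITION = (0, 10)   # condition of interest: x is known to lie in [0, 10]

# Program: list of (guard, statement); each guard is (op, constant) over x.
PROGRAM = [
    ((">", 20), "y = expensive_path()"),   # unreachable when 0 <= x <= 10
    (("<=", 10), "y = cheap_path()"),
    ((">=", 5), "log('x is largish')"),
]

def guard_satisfiable(guard, lo, hi):
    op, c = guard
    if op == ">":
        return hi > c
    if op == ">=":
        return hi >= c
    if op == "<":
        return lo < c
    if op == "<=":
        return lo <= c
    raise ValueError(op)

def condition_program(program, condition):
    lo, hi = condition
    return [(g, s) for g, s in program if guard_satisfiable(g, lo, hi)]

for guard, stmt in condition_program(PROGRAM, CONDITION):
    print(guard, "->", stmt)   # the (">", 20) branch is pruned away
```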
- Published
- 2005
37. Placement of proxy-based multicast overlays
- Author
-
Wu, Min-You, Zhu, Yan, and Shu, Wei
- Subjects
Algorithm ,Algorithms ,Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.comnet.2004.11.019 Byline: Min-You Wu (a), Yan Zhu (b), Wei Shu (b) Abstract: Proxy-based multicast is an approach to implementing the multicast function with proxy servers, which has many advantages compared to IP multicast. When proxy locations are predetermined, many routing algorithms can be used to build a multicast overlay network. On the other hand, the problem of where to place proxies has not yet been studied extensively. A study on the placement problem for proxy-based multicast is presented in this paper. The objectives of proxy placement can be minimization of delay, minimization of bandwidth consumption, or both. These performance metrics are studied in this paper and a number of algorithms are proposed for multicast proxy placement. A performance study is presented and practical issues of proxy placement are also discussed. Author Affiliation: (a) Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China (b) Department of Electrical and Computer Engineering, The University of New Mexico, MSC01 1100, Albuquerque, NM 87131-0001, USA Article Note: (miscellaneous) Responsible Editor: I. Matta
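A common baseline for this kind of placement problem is a greedy heuristic that repeatedly adds the candidate site whose addition most reduces total client-to-nearest-proxy delay (essentially greedy k-median). The sketch below shows that baseline; it is not claimed to be one of the paper's proposed algorithms, and the delay matrix is made up.

```python
# Greedy placement sketch (k-median style): repeatedly pick the candidate
# proxy site whose addition most reduces total client-to-nearest-proxy delay.
# Generic baseline only, not necessarily an algorithm from the paper.

# DELAY[c][s]: delay from client c to candidate site s (illustrative values).
DELAY = [
    [10, 40, 70],
    [50, 15, 60],
    [80, 45, 20],
    [12, 42, 65],
]
K = 2   # number of proxies to place (assumed)

def total_delay(chosen):
    return sum(min(row[s] for s in chosen) for row in DELAY)

def greedy_place(k):
    chosen = []
    sites = range(len(DELAY[0]))
    for _ in range(k):
        best = min((s for s in sites if s not in chosen),
                   key=lambda s: total_delay(chosen + [s]))
        chosen.append(best)
    return chosen

placed = greedy_place(K)
print("place proxies at sites", placed, "total delay", total_delay(placed))
```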
- Published
- 2005
38. A generalized fault-tolerant sorting algorithm on a product network
- Author
-
Chen, Yuh-Shyan, Chang, Chih-Yung, Lin, Tsung-Hung, and Kuo, Chun-Bo
- Subjects
Algorithm ,Algorithms ,Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.sysarc.2004.11.005 Byline: Yuh-Shyan Chen (a), Chih-Yung Chang (b), Tsung-Hung Lin (a), Chun-Bo Kuo (a) Keywords: Fault-tolerant; Product networks; Snake order; Odd-even sorting; Bitonic sorting Abstract: A product network defines a class of frequently used topologies such as meshes, tori, and hypercubes. This paper proposes a generalized algorithm for fault-tolerant parallel sorting in product networks. To tolerate r - 1 faulty nodes, an r-dimensional product network containing faulty nodes is partitioned into a number of subgraphs such that each subgraph contains at most one fault. Our generalized sorting algorithm is divided into two steps. First, a single-fault sorting operation is presented that performs correctly on each subgraph containing one fault. Second, each subgraph is considered a supernode, and a fault-tolerant multiway merging operation is presented to recursively merge two sorted subsequences into one sorted sequence. Our generalized sorting algorithm can be applied to any product network only if the factor graph of the product graph can be embedded in a ring. Further, we also show the time complexity of our sorting operations on a grid, hypercube, and Petersen cube. Performance analysis illustrates that our generalized sorting scheme is a truly efficient fault-tolerant algorithm. Author Affiliation: (a) Department of Computer Science and Information Engineering, National Chung Cheng University, Chiayi, Taiwan, ROC (b) Department of Computer Science and Information Engineering, Tamkang University, Taipei, Taiwan, ROC Article History: Received 30 September 2000; Revised 6 March 2002; Accepted 15 November 2004 Article Note: (footnote) [star] A preliminary version of this paper was presented in Proceedings of HPC 2000: 6th International Conference on Applications of High-Performance Computers in Engineering, Maui, Hawaii, January 26-28, 2000, and was supported by the National Science Council, ROC, under contract no. NSC89-2213-E-216-010.
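The abstract's keywords mention odd-even and bitonic sorting as underlying primitives. Below is a small sequential sketch of odd-even transposition sort, one of the classic building blocks for sorting on mesh-like product networks; it illustrates only that primitive and says nothing about the paper's fault-tolerance or multiway-merging machinery.

```python
# Sequential sketch of odd-even transposition sort, a classic primitive for
# sorting on mesh-like networks. The paper's fault-tolerant machinery on
# product networks is not reproduced here.

def odd_even_transposition_sort(a):
    a = list(a)
    n = len(a)
    for phase in range(n):                 # n phases suffice for n elements
        start = phase % 2                  # even phases compare (0,1),(2,3)...; odd phases (1,2),(3,4)...
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_transposition_sort([7, 3, 9, 1, 4, 8, 2]))  # -> [1, 2, 3, 4, 7, 8, 9]
```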
- Published
- 2005
39. A lower bound for multicast key distribution
- Author
-
Snoeyink, Jack, Suri, Subhash, and Varghese, George
- Subjects
Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.comnet.2004.09.001 Byline: Jack Snoeyink (a), Subhash Suri (b), George Varghese (c) Keywords: Multicast; Security; Protocol analysis Abstract: With the rapidly growing importance of multicast in the Internet, several schemes for scalable key distribution have been proposed. These schemes require the broadcast of O(log n) encrypted messages to update the group key when the nth user joins or leaves the group. In this paper, we establish a matching lower bound (Independently, and concurrently, Richard Yang and Simon Lam discovered a similar bound with slightly different properties and proofs. An earlier version of our paper appeared in Infocom 2001 while their result appears in [R. Yang, S. Lam, A secure group key management communication lower bound, Technical Report TR-00-24, Department of Computer Sciences, UT Austin, July 2000, revised September 2000].), thus showing that Ω(log n) encrypted messages are necessary for a general class of key distribution schemes and under different assumptions on user capabilities. While key distribution schemes can exercise some tradeoff between the costs of adding or deleting a user, our main result shows that for any scheme there is a sequence of 2n insertions and deletions whose total cost is Ω(n log n). Thus, any key distribution scheme has a worst-case cost of Ω(log n) either for adding or for deleting a user. Author Affiliation: (a) Department of Computer Science, UNC-Chapel Hill, Chapel Hill, NC 27599-3175, USA (b) Department of Computer Science, University of California, Santa Barbara, CA 93106, USA (c) Computer Science and Engineering Department, University of California, 9500 Gilman Drive, La Jolla, CA 92093-0114, USA Article History: Received 2 September 2002; Revised 3 March 2004; Accepted 7 September 2004 Article Note: (miscellaneous) Responsible Editor: S. Lam
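For context on the O(log n) and Ω(log n) figures, the sketch below counts the encrypted messages needed to rekey after a member leaves in a simple full binary logical key hierarchy (LKH), the kind of tree-based scheme the lower bound is matched against. The accounting is a textbook illustration under assumed structure, not anything taken from the paper.

```python
import math

# Sketch: in a full binary logical key hierarchy over n members, rekeying
# after one member leaves replaces every key on the leaf-to-root path. The
# lowest replaced key is sent to the departing member's sibling (1 message);
# each higher replaced key is encrypted under the unchanged child's key and
# under the freshly issued child key (2 messages per level), giving roughly
# 2*log2(n) - 1 messages. Shown only to ground the O(log n) figures.

def lkh_rekey_messages(n_members):
    depth = int(math.log2(n_members))   # assume n_members is a power of two
    return 1 + 2 * (depth - 1)

for n in (8, 64, 1024):
    print(n, "members ->", lkh_rekey_messages(n), "encrypted messages")
```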
- Published
- 2005
40. Required skills and Competences of Librarians for Effective Software Application and use in Contemporary Libraries in Nigeria
- Author
-
Inyang, Elizabeth and Mngutayo, James
- Subjects
Librarians -- Technology application ,Libraries -- Technology application ,Software -- Usage -- Technology application ,Developing countries -- Technology application ,Computer science ,Workshops (Educational programs) ,Vendor relations ,Software quality ,Technology application ,Library and information science - Abstract
Libraries today are in the process of transitioning from traditional/manual operations to digital/electronic ones, especially in developing countries such as Nigeria. To address this effectively, librarians need software application and use skills and competences in order to migrate fully to ICT (automation). The paper therefore identifies the software applications and the user skills and competences needed. To arrive at these, the concepts of the librarian, software application and use, and the required skills and competences are examined. These have been identified as computer and ICT skills, knowledge of the types of software in existence generally and in the library, knowledge of how to select appropriate software for the library, knowledge of how to use it, avoidance of software of inadequate quality and service, and so on. The paper asserts that these skills and competences can be acquired via software suppliers, computer science programmes, workshops and conferences, short courses, etc., and concludes that they are a must for librarians given their information provision role on the Internet. Keywords: Software application and use, librarians, required skills and competences, Introduction Librarians are managers of library and information systems. Library and information science is a dynamic field and is today in the information age, managed by ICT. The [...]
- Published
- 2018
41. Enabling user-oriented management for ubiquitous computing: The meta-design approach
- Author
-
Guo, Bin, Zhang, Daqing, and Imai, Michita
- Subjects
Company business management ,End user ,User need ,End users ,Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.comnet.2010.07.016 Byline: Bin Guo (a)(b), Daqing Zhang (a), Michita Imai (b) Keywords: Ubiquitous computing management; Meta-design; Cooperation; Semantic Web; End user development; Smart object; Wireless sensor network Abstract: This paper presents iPlumber, a user-oriented management system for ubiquitous computing environments. Different from previous low-benefit "zero-configuration" systems or high cognitive-cost "end user programming" tools, our work attempts to attain a better balance between user benefits and cost by exploring the meta-design approach. A set of typical management activities in ubicomp environments is supported, from basic-level software sharing, foraging, and low-cost software configuration to advanced-level cooperative software co-design and error handling. These activities are elaborated through a smart home control scenario. The usability of our system is validated through an initial user study with a total of 33 subjects, which tested the management activities in an open exhibition environment and a controlled university environment. Author Affiliation: (a) Telecommunication Network and Services Department, Institut TELECOM SudParis, 9, rue Charles Fourier, 91011 Evry Cedex, France (b) Department of Information & Computer Science, Keio University, Japan Article History: Received 28 December 2009; Revised 20 June 2010; Accepted 9 July 2010
- Published
- 2010
42. Small target detection using cross product based on temporal profile in infrared image sequences
- Author
-
Bae, Tae-Wuk, Kim, Byoung-Ik, Kim, Young-Choon, and Sohng, Kyu-Ik
- Subjects
Algorithms ,Computer science ,Electrical engineering ,Universities and colleges ,Algorithm ,Computers ,Electronics ,Engineering and manufacturing industries - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.compeleceng.2010.05.004 Byline: Tae-Wuk Bae (a), Byoung-Ik Kim (a), Young-Choon Kim (b), Kyu-Ik Sohng (a) Keywords: Infrared sequences; Small target detection; Cross product; Temporal profile; Hypothesis testing Abstract: This paper presents a new small target detection method using the cross product of temporal pixels based on temporal profiles in infrared (IR) image sequences. The temporal characteristics of small targets and of various backgrounds are different. A new algorithm classifies target pixels and background pixels through hypothesis testing using the cross product of pixels on the temporal profile and predicts the temporal backgrounds based on the results. Small target pixels are detected by subtracting the predicted temporal background profile from the original temporal profile. For performance comparison between the proposed method and the conventional methods, receiver operating characteristic (ROC) curves were computed experimentally. Experimental results show that the proposed algorithm has better discrimination of target and clutter pixels and lower false alarm rates than conventional methods. Author Affiliation: (a) School of Electrical Engineering and Computer Science, Kyungpook National University, South Korea (b) Dept. of Information and Communication Engineering, Youngdong University, South Korea Article History: Received 6 December 2009; Accepted 17 May 2010 Article Note: (footnote) [star] Reviews processed and proposed for publication to the Editor-in-Chief by Associate Editor Dr. F. Sahin.
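As a generic illustration of the predict-and-subtract pipeline the abstract describes (predict each pixel's temporal background, subtract it from the temporal profile, then threshold the residual), the sketch below uses a simple running-mean background predictor on a single pixel's temporal profile. The paper's cross-product hypothesis test is not reproduced; the predictor and threshold here are assumptions.

```python
# Single-pixel sketch of the predict-and-subtract idea for small-target
# detection on a temporal profile: predict the pixel's background brightness
# over time, subtract it, and threshold the residual. The running-mean
# predictor and the threshold are illustrative; the paper's cross-product
# hypothesis test is not implemented here.

def detect_on_temporal_profile(profile, window=5, k=3.0):
    hits = []
    for t in range(window, len(profile)):
        history = profile[t - window:t]
        background = sum(history) / window                   # predicted background
        spread = max(1e-6, max(history) - min(history))      # crude noise-scale estimate
        if profile[t] - background > k * spread:
            hits.append(t)                                    # candidate target frame
    return hits

# Flat background with a brief bright intrusion starting at frame 12.
profile = [10, 11, 10, 10, 11, 10, 10, 11, 10, 10, 11, 10, 40, 42, 10, 11]
print(detect_on_temporal_profile(profile))   # -> [12]: frame where a target crosses the pixel
```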
- Published
- 2010
43. Embedding multi-dimensional meshes into twisted cubes
- Author
-
Dong, Qiang, Yang, Xiaofan, and Wang, Dajin
- Subjects
Multiprocessing ,Universities and colleges ,Computer science ,Computers ,Electronics ,Engineering and manufacturing industries - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.compeleceng.2010.03.003 Byline: Qiang Dong (a)(b), Xiaofan Yang (a), Dajin Wang (b) Keywords: Interconnection networks; Graph embedding; Twisted cube; Mesh; Parallel processing Abstract: The twisted cube is an important variant of the most popular hypercube network for parallel processing. In this paper, we consider the problem of embedding multi-dimensional meshes into twisted cubes in a systematic way. We present a recursive method for embedding a family of disjoint multi-dimensional meshes into a twisted cube with dilation 1 and expansion 1. We also prove that a single multi-dimensional mesh can be embedded into a twisted cube with dilation 2 and expansion 1. Our work extends some previously known results. Author Affiliation: (a) College of Computer Science, Chongqing University, Chongqing 400044, China (b) Department of Computer Science, Montclair State University, Montclair, NJ 07043, USA Article History: Received 19 June 2009; Revised 12 December 2009; Accepted 18 March 2010 Article Note: (footnote) [star] Reviews processed and proposed for publication to the Editor-in-Chief by Associate Editor Dr. M. Malek.
- Published
- 2010
44. Efficient many-to-one authentication with certificateless aggregate signatures
- Author
-
Zhang, Lei, Qin, Bo, Wu, Qianhong, and Zhang, Futai
- Subjects
Algorithm ,Algorithms ,Cryptography ,Computer science ,Universities and colleges - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.comnet.2010.04.008 Byline: Lei Zhang (a), Bo Qin (a)(b), Qianhong Wu (a)(c), Futai Zhang (d)(e) Keywords: Information security; Message authentication; Digital signature; Certificateless cryptography Abstract: Aggregate signatures allow an efficient algorithm to aggregate n signatures of n distinct messages from n different users into one single signature. The resulting aggregate signature can convince a verifier that the n users did indeed sign the n messages. This feature is very attractive for authentication in bandwidth-limited applications such as reverse multicasts and sensor networks. Certificateless public key cryptography enables functionality similar to that of public key infrastructure (PKI) and identity (ID) based cryptography without suffering from the complicated certificate management of PKI or the secret key escrow problem of ID-based cryptography. In this paper, we present a new efficient certificateless aggregate signature scheme that has the advantages of both aggregate signatures and certificateless cryptography. The scheme is proven existentially unforgeable against adaptive chosen-message attacks under the standard computational Diffie-Hellman assumption. Our scheme is also very efficient in both communication and computation and the proposal is practical for many-to-one authentication. Author Affiliation: (a) Department of Computer Engineering and Mathematics, Universitat Rovira i Virgili, Av. Països Catalans 26, E-43007 Tarragona, Catalonia, Spain (b) Department of Maths, School of Science, Xi'an University of Technology, China (c) School of Computer, Key Lab. of Aerospace Information Security and Trusted Computing, Ministry of Education, Wuhan University, China (d) School of Computer Science and Technology, Nanjing Normal University, Nanjing, China (e) Jiangsu Engineering Research Center on Information Security and Privacy Protection Technology, Nanjing, China Article History: Received 1 March 2009; Revised 20 November 2009; Accepted 6 April 2010 Article Note: (miscellaneous) Responsible Editor: R. Molva
- Published
- 2010
45. Specification and verification of dynamic evolution of software architectures
- Author
-
Hongzhen, Xu and Guosun, Zeng
- Subjects
Modularity ,Software architecture ,Algorithm ,Software quality ,Algorithms ,Computer science ,Software - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.sysarc.2010.08.005 Byline: Xu Hongzhen (a)(b)(c), Zeng Guosun (a)(d) Keywords: Software architecture; Dynamic evolution; Evolution rule; Verification; Liveness Abstract: As software systems become more and more open and complex, a major challenge is for those systems to evolve gradually, especially at runtime, and software architectures can provide a foundation for such dynamic software evolution. In this paper, we propose a method for specifying and verifying the dynamic evolution of software architectures using hypergraph grammars. We propose two general atomic evolution rules and three general composite evolution rules for software architectures based on hypergraphs, and specify the dynamic evolution of software architectures according to those rules and a predefined architectural style through a case study. Finally, we verify the liveness property of the dynamic evolution of software architectures using model checking, and give the corresponding verification algorithms. Our approach provides both a user-friendly graphical representation and formal models based on grammars. Author Affiliation: (a) Department of Computer Science and Technology, Tongji University, Shanghai 201804, China (b) Department of Computer Science and Technology, East China Institute of Technology, Fuzhou, Jiangxi Province 344000, China (c) Application of Nuclear Technology Engineering Center of Ministry of Education, Nanchang, Jiangxi Province 330013, China (d) Embedded System and Service Computing Key Lab of Ministry of Education, Shanghai 201804, China Article History: Received 15 October 2009; Revised 25 May 2010; Accepted 17 August 2010
- Published
- 2010
46. LESSON: A system for lecture notes searching and sharing over Internet
- Author
-
Zhou, Shuigeng, Xu, Ming, and Guan, Jihong
- Subjects
Teachers ,Computer science ,Peer to peer computing - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.jss.2010.04.069 Byline: Shuigeng Zhou (a)(b), Ming Xu (a)(b), Jihong Guan (c) Keywords: Lecture notes; Metasearch engines; Peer-to-peer networks; Content sharing; Query routing Abstract: In this paper, we present LESSON, a system for lecture notes searching and sharing, which is dedicated to both instructors and students for effectively supporting their Web-based teaching and learning activities. The LESSON system employs a metasearch engine for searching lecture notes from the Web and a peer-to-peer (P2P) overlay network for sharing lecture notes among users. A metasearch engine provides unified access to multiple existing component search engines and has better performance than general-purpose search engines. With the help of a P2P overlay network, all computers used by instructors and students can be connected into a virtual society over the Internet and communicate directly with each other for lecture notes sharing, without any centralized server or manipulation. In order to merge results from multiple component search engines into a single ranked list, we design the RSF strategy, which takes the rank, similarity and features of lecture notes into account. To implement query routing decisions for effectively supporting lecture notes sharing, we propose a novel query routing mechanism. Experimental results indicate that the LESSON system has better performance in searching lecture notes from the Web than some popular general-purpose search engines and some existing metasearch schemes; when processing queries within the system, it outperforms some typical routing methods. Concretely, it can achieve a relatively high query hit rate with low bandwidth consumption in different types of network topologies. Author Affiliation: (a) School of Computer Science, Fudan University, Shanghai 200433, China (b) Shanghai Key Lab of Intelligent Information Processing, Shanghai 200433, China (c) Department of Computer Science & Technology, Tongji University, Shanghai 201804, China Article History: Received 17 June 2009; Revised 28 April 2010; Accepted 30 April 2010
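The abstract's RSF strategy combines rank, similarity, and lecture-note features when merging results from component search engines. The sketch below shows a generic weighted merge of that flavour; the scoring formula, weights, and feature values are assumptions, not the system's actual RSF definition.

```python
# Generic result-merging sketch in the spirit of combining rank, similarity
# and document features. Weights and the exact formula are assumptions; this
# is not the LESSON system's actual RSF strategy.

RESULTS = {
    # engine -> ranked list of (doc_id, similarity in [0,1], feature score in [0,1])
    "engineA": [("doc1", 0.9, 0.8), ("doc2", 0.7, 0.4)],
    "engineB": [("doc2", 0.8, 0.4), ("doc3", 0.6, 0.9)],
}
W_RANK, W_SIM, W_FEAT = 0.3, 0.5, 0.2   # assumed weights

def merged_ranking(results):
    scores = {}
    for engine, docs in results.items():
        for rank, (doc, sim, feat) in enumerate(docs, start=1):
            contribution = W_RANK * (1.0 / rank) + W_SIM * sim + W_FEAT * feat
            scores[doc] = scores.get(doc, 0.0) + contribution
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for doc, score in merged_ranking(RESULTS):
    print(doc, round(score, 3))   # doc2 wins because two engines return it
```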
- Published
- 2010
47. A Superscalar software architecture model for Multi-Core Processors (MCPs)
- Author
-
Choi, Gyu Sang and Das, Chita R.
- Subjects
Multiprocessing ,Software quality ,Computer science ,Software - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.jss.2010.04.068 Byline: Gyu Sang Choi (a), Chita R. Das (b) Keywords: Multi-Core; SuperScalar; Software architecture model; Multi-thread Abstract: The design of high-performance servers has become a research thrust to meet the increasing demand of network-based applications. One approach to designing such architectures is to exploit the enormous computing power of Multi-Core Processors (MCPs), which are envisioned to become the state-of-the-art in processor architecture. In this paper, we propose a new software architecture model, called SuperScalar, suitable for MCP machines. The proposed SuperScalar model consists of multiple pipelined thread pools, where each pipelined thread pool consists of multiple threads, and each thread takes a different role. The main advantages of the proposed model are global information sharing by the threads and minimal memory requirements due to fewer threads. We have conducted in-depth performance analyses of the proposed scheme along with three prior software architecture schemes (Multi-Process (MP), Multi-Thread (MT) and Event-Driven (ED)) via an analytical model. The performance results indicate that the proposed SuperScalar model shows the best performance across all system and workload parameters compared to the MP, MT and ED models. Although the MT model shows competitive performance with a smaller number of processing cores and a smaller data cache, the advantage of the SuperScalar model becomes obvious as the number of processing cores increases. Author Affiliation: (a) Department of Information and Communication Engineering, Yeungnam University, Sojae Building #202-1, 214-1 Dae-dong, Gyeongsan-si, Gyeongsangbuk-do 712-749, Republic of Korea (b) Department of Computer Science and Engineering, Pennsylvania State University, University Park, PA 16802, United States Article History: Received 8 June 2009; Revised 29 April 2010; Accepted 29 April 2010 Article Note: (footnote) [star] This research was supported by the Yeungnam University research grants in 2010. Moreover, this research was supported in part by NSF grants EIA-0202007, CCR-0208734, CCF-0429631 and CNS-0509251.
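To make the "pipelined thread pools, where each thread takes a different role" idea concrete, the sketch below wires two stages together with queues, each stage served by its own worker thread (a pool of one per stage, for brevity). The stage functions are assumptions; this is a generic pipeline illustration, not the paper's SuperScalar model.

```python
import queue
import threading

# Generic two-stage pipelined thread pool: a parse stage feeds a handle
# stage through a queue, each stage running in its own worker thread.
# Stage functions are illustrative, not the paper's exact architecture.

requests = queue.Queue()
parsed = queue.Queue()
STOP = object()   # sentinel that shuts the pipeline down

def parse_stage():
    while True:
        item = requests.get()
        if item is STOP:
            parsed.put(STOP)           # propagate shutdown downstream
            break
        parsed.put(item.upper())       # pretend "parsing" the request

def handle_stage(results):
    while True:
        item = parsed.get()
        if item is STOP:
            break
        results.append(f"handled:{item}")

results = []
workers = [threading.Thread(target=parse_stage),
           threading.Thread(target=handle_stage, args=(results,))]
for w in workers:
    w.start()
for req in ("get /a", "get /b", "get /c"):
    requests.put(req)
requests.put(STOP)                     # one sentinel for the single parse worker
for w in workers:
    w.join()
print(results)
```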
- Published
- 2010
48. Varied PVD+LSB evading detection programs to spatial domain in data embedding systems
- Author
-
Yang, Cheng-Hsing, Weng, Chi-Yao, Wang, Shiuh-Jeng, and Sun, Hung-Min
- Subjects
Embedded system ,System on a chip ,Embedded systems ,Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.jss.2010.03.081 Byline: Cheng-Hsing Yang (a), Chi-Yao Weng (b), Shiuh-Jeng Wang (c), Hung-Min Sun (b) Keywords: Pixel-value differencing; LSB; Steganography; Detection analysis Abstract: Image steganographic schemes based on the least-significant-bit (LSB) replacement method offer high capacity and good quality, but they are easily detected by some programs. Pixel-value differencing (PVD) approaches are one kind of method that can pass such detection, but they usually provide lower capacity and larger distortion. Accordingly, the combined method of PVD and LSB replacement was proposed in the past literature to raise the capacity and quality of PVD approaches. In this paper, we not only contribute a new spatial-domain exploration that benefits both capacity and fidelity on the basis of varied LSB+PVD approaches, but also evade the risk of detection by the RS-steganalysis program. Furthermore, proofs are given to establish the correctness of the general LSB+PVD method, and the varied LSB+PVD approach is then applied in our scheme. Author Affiliation: (a) Department of Computer Science, National Pingtung University of Education, Pingtung 900, Taiwan (b) Department of Computer Science, National Tsing-Hua University, Hsinchu 300, Taiwan (c) Department of Information Management, Central Police University, Taoyuan 333, Taiwan Article History: Received 23 June 2008; Revised 4 February 2010; Accepted 27 March 2010
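To illustrate the combined PVD+LSB idea in the simplest possible form, the sketch below embeds a variable number of message bits into each pixel pair by LSB replacement, where the number of bits per pixel depends on the pair's difference (a crude two-range table). The range table and bit counts are assumptions, and there is no attempt at the paper's readjustment or RS-evasion steps.

```python
# Minimal PVD+LSB flavoured embedding sketch: for each pixel pair, the pair
# difference selects how many LSBs of each pixel carry message bits (a crude
# two-range table). The range table, bit counts and the absence of any
# readjustment step are simplifications; this is not the paper's scheme.

def embed(pixels, bits):
    out = list(pixels)
    idx = 0
    for p in range(0, len(out) - 1, 2):
        diff = abs(out[p] - out[p + 1])
        k = 2 if diff < 16 else 3          # smooth area: 2 bits/pixel, edge area: 3
        for off in (0, 1):
            if idx >= len(bits):
                return out                  # message exhausted
            chunk = bits[idx:idx + k].ljust(k, "0")
            idx += k
            out[p + off] = (out[p + off] & ~((1 << k) - 1)) | int(chunk, 2)
    return out

cover = [100, 104, 120, 200, 50, 52]        # toy grayscale pixels
message = "1011001110"                      # bit string to hide
print(embed(cover, message))                # stego pixels differ only in low bits
```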
- Published
- 2010
49. Cooperative user-network interactions in next generation communication networks
- Author
-
Antoniou, Josephina, Papadopoulou, Vicky, Vassiliou, Vasos, and Pitsillides, Andreas
- Subjects
Computer science ,Fighter planes - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.comnet.2010.03.013 Byline: Josephina Antoniou (a), Vicky Papadopoulou (b), Vasos Vassiliou (a), Andreas Pitsillides (a) Keywords: Next generation networks; Access network selection; Game theory; Cooperative games; Present value Abstract: Next Generation Communication Networks employ the idea of convergence, where heterogeneous access technologies may coexist, and a user may be served by any one of the participating access networks, motivating the emergence of a Network Selection mechanism. The triggering and execution of the Network Selection mechanism becomes a challenging task due to the heterogeneity of the entities involved, i.e., the users and the access networks. This heterogeneity results in different and often conflicting interests for these entities, motivating the question of how they should behave in order to remain satisfied with their interactions. This paper studies cooperative user-network interactions and seeks appropriate modes of behaviour for these entities such that they achieve their own satisfaction while overcoming their conflicting interests. Author Affiliation: (a) University of Cyprus, Dept. of Computer Science, 75 Kallipoleos Street, P.O. Box 20537, CY-1678 Nicosia, Cyprus (b) European University Cyprus, Dept. of Computer Science, 6, Diogenes Street, P.O. Box 22006, CY-1516 Nicosia, Cyprus Article History: Received 19 May 2009; Revised 28 February 2010; Accepted 25 March 2010 Article Note: (miscellaneous) Responsible Editor: A. Capone
- Published
- 2010
50. Redesigning multi-channel P2P live video systems with View-Upload Decoupling
- Author
-
Wu, Di, Liang, Chao, Liu, Yong, and Ross, Keith W.
- Subjects
Algorithm ,Peer to peer computing ,Algorithms ,Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.comnet.2010.03.024 Byline: Di Wu (a), Chao Liang (b), Yong Liu (b), Keith W. Ross (c) Keywords: P2P applications; Video streaming Abstract: In current multi-channel P2P live video systems, there are several fundamental performance problems including exceedingly large channel switching delays, long playback lags, and poor performance for less popular channels. These performance problems primarily stem from two intrinsic characteristics of multi-channel P2P video systems: channel churn and channel-resource imbalance. In this paper, we propose a radically different cross-channel P2P streaming framework, called View-Upload Decoupling (VUD). VUD strictly decouples peer downloading from uploading, bringing stability to multi-channel systems and enabling cross-channel resource sharing. We propose a set of peer assignment and bandwidth allocation algorithms to properly provision bandwidth among channels, and introduce substream-swarming to reduce the bandwidth overhead. We evaluate the performance of VUD via extensive simulations as well as with a PlanetLab implementation. Our simulation and PlanetLab results show that VUD is resilient to channel churn, and achieves lower switching delay and better streaming quality. In particular, the streaming quality of small channels is greatly improved. Author Affiliation: (a) Department of Computer Science, Sun Yat-Sen University, Guangzhou 510006, China (b) Department of Electrical and Computer Engineering, Polytechnic Institute of NYU, Brooklyn, NY 11201, United States (c) Department of Computer Science and Engineering, Polytechnic Institute of NYU, Brooklyn, NY 11201, United States
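To give a feel for the cross-channel provisioning problem VUD addresses, the sketch below divides a pool of decoupled upload bandwidth among channels in proportion to each channel's streaming demand (viewers times streaming rate). The proportional rule, channel figures, and pool size are a generic baseline and assumptions, not the paper's peer-assignment or allocation algorithms.

```python
# Baseline provisioning sketch: split a pool of decoupled upload bandwidth
# among channels in proportion to demand (viewers * streaming rate). This is
# a generic proportional rule, not VUD's actual allocation algorithms.

CHANNELS = {
    # channel -> (number of viewers, streaming rate in kbps)
    "news":   (500, 400),
    "sports": (200, 800),
    "indie":  ( 20, 400),
}
POOL_KBPS = 300_000   # total decoupled upload bandwidth (assumed)

def proportional_allocation(channels, pool):
    demand = {c: viewers * rate for c, (viewers, rate) in channels.items()}
    total = sum(demand.values())
    return {c: pool * d / total for c, d in demand.items()}

for channel, kbps in proportional_allocation(CHANNELS, POOL_KBPS).items():
    print(channel, round(kbps), "kbps")
```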
- Published
- 2010