2,934 results
Search Results
2. Paper gold
- Author
-
Howard, Marjoria
- Subjects
International Data Group Inc. -- Management ,Computer science ,Periodical publishing -- Management ,Travel, recreation and leisure - Published
- 1988
3. Computer science education in the People's Republic of China in the late 1980's
- Author
-
Wilson, Judith D., Adams, Elizabeth S., Baouendi, Helene P., Marion, William A., and Yaverbaum, Gayle J.
- Subjects
Education ,Computer Science ,China ,Analysis - Abstract
COMPUTER SCIENCE EDUCATION IN THE PEOPLE'S REPUBLIC OF CHINA IN THE LATE 1980'S In May 1987, a delegation of computer professionals from around the world with interests in computer education […]
- Published
- 1988
4. Let babble commence
- Author
-
Kelly-Bootle, Stan
- Subjects
Computer Science ,Teaching ,Curriculum ,Computer Education ,Trends - Abstract
Parallels exist between postmodern literary criticism and the current introspections of computer science. Both academic domains generate what has been dubbed "the theory carnival" (although "paradigm carnival" might be more […]
- Published
- 1990
5. Teaching calculation and discrimination: a more effective curriculum
- Author
-
Gries, David
- Subjects
Software engineering -- Education ,Computer science ,Education ,Computers ,Software development/engineering ,Computer education ,Education - Abstract
Teaching Calculation and Discrimination: A MORE EFFECTIVE CURRICULUM There is real concern, and not only on the part of computer scientists, with the lack of rigor and accountability in software [...]
- Published
- 1991
6. Prolog to modular design principles for protocols with an application to the transport layer; a tutorial introduction to the paper by Shankar
- Author
-
Braham, Robert
- Subjects
Network Architecture ,Protocol ,Transport Layer Control ,Computer Science ,Network Models ,Analysis - Published
- 1991
7. Building blocks of silver
- Author
-
Ochs, Tom
- Subjects
Software Engineering ,Methods ,Computer Science ,Theory ,Program Development Techniques - Abstract
Of all the monsters that fill the nightmares of our folklore, none terrify more than werewolves, because they transform unexpectedly from the familiar into horrors. For these, one seeks bullets […]
- Published
- 1992
8. A new direction for ACM publications
- Author
-
Bell, Gwen, Denning, Peter J., and Cohen, Jacques
- Subjects
Association for Computing Machinery ,Trade and professional associations ,Publishing industry ,Computer industry ,Computer science ,Industrial research ,Computers ,Industry association information ,Research and development ,Publishing industry ,Microcomputer industry ,Computer industry - Abstract
I am pleased to announce a package of changes in the Publications area that will continue to build our momentum and improve the quality of our services to you and [...]
- Published
- 1992
9. An interview with Robin Milner
- Author
-
Frenkel, Karen
- Subjects
Computer science ,Industrial research ,Computers ,Research and development - Abstract
RM: Yes, from 1960 to 1963, I was at Ferranti, and from 1963 to 1968, I was at City University. Then I got a chance to do a full-time research [...]
- Published
- 1993
10. Organizational impact on CASE technology
- Author
-
Kwok, Lawrence K.L. and Arnett, Kirk P.
- Subjects
Organization Structure ,MIS ,Computer Science ,Computer-aided software engineering ,Management information systems -- Management ,Computer-aided software engineering -- Usage - Abstract
Computer-Aided Software Engineering (CASE) technology, a fairly recent development for programmers, analysts, software engineers, and even corporate executives, is composed of automated tools which represent many years of research on […]
- Published
- 1993
11. Computer chess: the Drosophila of AI
- Author
-
Coles, L. Stephen
- Subjects
Artificial Intelligence ,Chess ,Computer Game ,Research and Development ,Human Factors ,Competition ,History of Computing ,Future of Computing ,Computer Science ,Cognitive Science - Abstract
The game of chess traditionally has been considered the epitome of intellectual skill and accomplishment. Will computer chess be the crowning achievement? The domain of computer chess playing is suggested […]
- Published
- 1994
12. Random access AI
- Author
-
Newquist, Harvey P., III
- Subjects
Artificial Intelligence ,Trends ,Outlook ,Scientific Research ,Computer Science - Abstract
Over the past six months, I've had the opportunity to immerse myself in the history of AI as part of a research project for a book called The Brain Makers. […]
- Published
- 1994
13. Dependency sequences and hierarchical clocks: Efficient alternatives to vector clocks for mobile computing systems
- Author
-
Prakash, Ravi and Singhal, Mukesh
- Subjects
Information science ,Computer science ,Computers and office automation industries - Abstract
Keywords: Mobile Computing; Communication Overhead; Mobile Host; Internal Event; Dependency Sequence Abstract Vector clocks have been used to capture causal dependencies between processes in distributed computing systems. Vector clocks are not suitable for mobile computing systems due to (i) lack of scalability: its size is equal to the number of nodes, and (ii) its inability to cope with fluctuations in the number of nodes. This paper presents two efficient alternatives to vector clock, namely, sets of dependency sequences, and hierarchical clock. Both the alternatives are scalable and are immune to fluctuations in the number of nodes in the system. Author Affiliation: (1) Computer Science Program, The University of Texas at Dallas, 75083-0688, Richardson, TX, USA (2) Department of Computer and Information Science, The Ohio State University, 2015 Neil Avenue, 43210, Columbus, OH, USA Article History: Registration Date: 09/28/2004 Byline: Ravi Prakash (1), Mukesh Singhal (2)
- Published
- 1997
- Full Text
- View/download PDF
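The abstract above contrasts its schemes with standard vector clocks, whose size grows with the number of nodes. A minimal sketch of that baseline mechanism (not the paper's dependency sequences or hierarchical clocks) illustrates why: one counter per node, merged entrywise on message receipt.

```python
class VectorClock:
    """Standard vector clock: one integer counter per node (O(n) size)."""

    def __init__(self, n, node_id):
        self.clock = [0] * n          # size equals the number of nodes
        self.id = node_id

    def tick(self):                   # local (internal) event
        self.clock[self.id] += 1

    def send(self):                   # timestamp attached to a message
        self.tick()
        return list(self.clock)

    def receive(self, ts):            # entrywise max, then a local tick
        self.clock = [max(a, b) for a, b in zip(self.clock, ts)]
        self.tick()


def happened_before(a, b):
    """True if timestamp a causally precedes timestamp b."""
    return all(x <= y for x, y in zip(a, b)) and a != b
```

With three nodes, if `p` sends a message that `q` receives, `happened_before(p_timestamp, q.clock)` holds; the per-node entries are exactly the scalability cost the paper's alternatives avoid.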
14. Low-loss TCP/IP header compression for wireless networks
- Author
-
Degermark, Mikael, Engan, Mathias, Nordgren, Björn, and Pink, Stephen
- Subjects
Telecommunication systems ,Computer science ,Transmission Control Protocol/Internet Protocol (Computer network protocol) ,TCP/IP ,Computers and office automation industries - Abstract
Keywords: Packet Loss; Mobile Host; Packet Loss Rate; Compression State; Mobile IPv6 Abstract Wireless is becoming a popular way to connect mobile computers to the Internet and other networks. The bandwidth of wireless links will probably always be limited due to properties of the physical medium and regulatory limits on the use of frequencies for radio communication. Therefore, it is necessary for network protocols to utilize the available bandwidth efficiently. Headers of IP packets are growing and the bandwidth required for transmitting headers is increasing. With the coming of IPv6 the address size increases from 4 to 16 bytes and the basic IP header increases from 20 to 40 bytes. Moreover, most mobility schemes tunnel packets addressed to mobile hosts by adding an extra IP header or extra routing information, typically increasing the size of TCP/IPv4 headers to 60 bytes and TCP/IPv6 headers to 100 bytes. In this paper, we provide new header compression schemes for UDP/IP and TCP/IP protocols. We show how to reduce the size of UDP/IP headers by an order of magnitude, down to four to five bytes. Our method works over simplex links, lossy links, multi-access links, and supports multicast communication. We also show how to generalize the most commonly used method for header compression for TCP/IPv4, developed by Jacobson, to IPv6 and multiple IP headers. The resulting scheme unfortunately reduces TCP throughput over lossy links due to unfavorable interaction with TCP's congestion control mechanisms. However, by adding two simple mechanisms the potential gain from header compression can be realized over lossy wireless networks as well as point-to-point modem links.
Author Affiliation: (1) CDT/Dept. of Computer Science, Luleå University of Technology, S-971 87, Luleå, Sweden (2) Telia Research AB, Aurorum 6, 977 75, Luleå, Sweden (3) Swedish Institute of Computer Science, PO Box 1263, 164 28, Kista, Sweden Article History: Registration Date: 09/28/2004 Byline: Mikael Degermark (1), Mathias Engan (1), Björn Nordgren (1, 2), Stephen Pink (1, 3)
- Published
- 1997
- Full Text
- View/download PDF
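The core idea behind header compression schemes like those in the abstract above is delta encoding against shared state: both ends remember the last full header, so only fields that changed need to cross the link. The sketch below is purely illustrative of that general principle; the field names and dictionary encoding are hypothetical and do not reflect the paper's actual packet formats.

```python
# Hypothetical header fields for illustration only.
FIELDS = ["src", "dst", "seq", "ack", "window"]


def compress(header, state):
    """Emit only the fields that differ from the shared compression state."""
    delta = {f: header[f] for f in FIELDS if header[f] != state.get(f)}
    state.update(header)              # both ends track the last full header
    return delta


def decompress(delta, state):
    """Rebuild the full header from the delta plus the stored state."""
    state.update(delta)
    return {f: state[f] for f in FIELDS}
```

After the first packet establishes the state, constant fields such as addresses drop out entirely, so steady-state deltas carry only the few changing fields; this is also why lost packets desynchronize the state, the failure mode the abstract's "two simple mechanisms" address.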
15. ACM SIGMOBILE Continues to Promote Research and Discussion
- Subjects
Industry association information ,Computer science ,Computer industry - Abstract
SIGMOBILE, ACM's Special Interest Group on the Mobility of Systems, Users, Data and Computing, has experienced a period of growth and acceptance among both the academic and industrial communities. It […]
- Published
- 1999
16. Scheduling of real-time traffic in IEEE 802.11 wireless LANs
- Author
-
Coutras, Constantine, Gupta, Sanjay, and Shroff, Ness B.
- Subjects
Computer networks ,Wi-Fi ,Information networks ,Freedom of the press ,Computer science ,Computer network protocols ,Protocol ,Computers and office automation industries - Abstract
Keywords: Medium Access Control; Wireless Local Area Network; Medium Access Control Protocol; Data Frame; Distribute Coordination Function Abstract The desire to provide universal connectivity for mobile computers and communication devices is fueling a growing interest in wireless packet networks. To satisfy the needs of wireless data networking, study group 802.11 was formed under IEEE project 802 to recommend an international standard for Wireless Local Area Networks (WLANs). A key part of the standard is the Medium Access Control (MAC) protocols. Given the growing popularity of real-time services and multimedia based applications it is critical that the 802.11 MAC protocols be tailored to meet their requirements. The 802.11 MAC layer protocol provides asynchronous, time-bounded, and contention free access control on a variety of physical layers. In this paper we examine the ability of the point coordination function to support time bounded services. We present our proposal to support real-time services within the framework of the point coordination function and discuss the specifics of the connection establishment procedure. We conduct performance evaluation and present numerical results to help understand the issues involved. Author Affiliation: (1) Department of Computer Science, Illinois Institute of Technology, 60616, Chicago, IL, USA (2) GSM Products Division, Motorola, 1501 West Shore Drive (IL27-3223), 60004, Arlington Heights, IL, USA (3) School of Electrical and Computer Engineering, Purdue University, 47907, West Lafayette, IN, USA Article History: Registration Date: 09/24/2004 Byline: Constantine Coutras (1), Sanjay Gupta (2), Ness B. Shroff (3)
- Published
- 2000
- Full Text
- View/download PDF
17. Digital storage: computer age gets a Valhalla
- Author
-
Hafner, Katie
- Subjects
Computer Museum History Center -- Exhibitions ,Computer science ,Computer industry ,Science museums -- Innovations - Abstract
TWELVE years ago, Keith Uncapher, a computer scientist at the RAND Corporation, was shocked to discover that the Johnniac, a behemoth of a computer he had helped develop in the [...]
- Published
- 2001
18. Improved server assisted signatures
- Author
-
Bicakci, Kemal and Baykal, Nazife
- Subjects
Cryptography ,File servers ,Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.comnet.2004.08.008 Byline: Kemal Bicakci (a), Nazife Baykal (b) Keywords: Server-assisted signature; One-time signature; Digital signature; Nonrepudiation; Network security Abstract: It is well known that excessive computational demands of public key cryptography have made its use limited especially when constrained devices are of concern. To reduce the costs of generating public key signatures one viable method is to employ a third party; the server. In open networks, getting help from a verifiable-server has an advantage over proxy-based solutions since as opposed to proxy-server, verifiable-server's cheating can be proven. Verifiable-server assisted signatures were proposed in the past but they could not totally eliminate public key operations for the signer. In this paper, we propose a new alternative called SAOTS (server assisted one-time signatures) where just like proxy signatures generating a public key signature is possible without performing any public key operations at all. This feature results in both computational efficiency and implementation simplicity (e.g. a reduction in the code size) of the proposed protocol. In addition, SAOTS is a more promising approach since the signature is indistinguishable from a standard signature, no storage is necessary for the signer to prove the server's cheating and the protocol works in fewer rounds (two instead of three). On the other hand, the drawback of SAOTS is the increased bandwidth requirement between the sender and server. Author Affiliation: (a) Department of Computer Science, Vrije Universiteit, De Boelelaan 1083, 1081 HV Amsterdam, Netherlands (b) Informatics Institute, Middle East Technical University, 06531 Ankara, Turkey Article History: Received 6 April 2004; Revised 9 August 2004; Accepted 9 August 2004 Article Note: (miscellaneous) Responsible Editor: D. Frincke
- Published
- 2005
19. A lower bound for multicast key distribution
- Author
-
Snoeyink, Jack, Suri, Subhash, and Varghese, George
- Subjects
Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.comnet.2004.09.001 Byline: Jack Snoeyink (a), Subhash Suri (b), George Varghese (c) Keywords: Multicast; Security; Protocol analysis Abstract: With the rapidly growing importance of multicast in the Internet, several schemes for scalable key distribution have been proposed. These schemes require the broadcast of O(log n) encrypted messages to update the group key when the nth user joins or leaves the group. In this paper, we establish a matching lower bound (Independently, and concurrently, Richard Yang and Simon Lam discovered a similar bound with slightly different properties and proofs. An earlier version of our paper appeared in Infocom 2001 while their result appears in [R. Yang, S. Lam, A secure group key management communication lower bound, Technical Report TR-00-24, Department of Computer Sciences, UT Austin, July 2000, revised September 2000].), thus showing that Ω(log n) encrypted messages are necessary for a general class of key distribution schemes and under different assumptions on user capabilities. While key distribution schemes can exercise some tradeoff between the costs of adding or deleting a user, our main result shows that for any scheme there is a sequence of 2n insertions and deletions whose total cost is Ω(n log n). Thus, any key distribution scheme has a worst-case cost of Ω(log n) either for adding or for deleting a user. Author Affiliation: (a) Department of Computer Science, UNC-Chapel Hill, Chapel Hill, NC 27599-3175, USA (b) Department of Computer Science, University of California, Santa Barbara, CA 93106, USA (c) Computer Science and Engineering Department, University of California, 9500 Gilman Drive, La Jolla, CA 92093-0114, USA Article History: Received 2 September 2002; Revised 3 March 2004; Accepted 7 September 2004 Article Note: (miscellaneous) Responsible Editor: S. Lam
- Published
- 2005
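The O(log n) upper bound the abstract above refers to comes from key-tree schemes, where a departure forces a re-key of every key on the leaving member's root path. A back-of-the-envelope sketch for a balanced binary key tree (an assumption for illustration, not the paper's model) shows where the logarithm comes from:

```python
import math


def rekey_messages(n):
    """Approximate encrypted messages to re-key a balanced binary key
    tree of n members after one departure: each of the ~log2(n) keys on
    the departing leaf's root path is re-sent to its two child subtrees."""
    depth = math.ceil(math.log2(n))
    return 2 * depth
```

For n = 1024 members this gives about 20 encrypted messages per membership change, which is the scale the matching Ω(log n) lower bound shows cannot be improved asymptotically.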
20. An adaptive call admission algorithm for cellular networks
- Author
-
Beigy, Hamid and Meybodi, M.R.
- Subjects
Robot ,Algorithm ,Robots ,Algorithms ,Computer science - Abstract
In this paper, we first propose a new continuous action-set learning automaton and theoretically study its convergence properties and show that it converges to the optimal action. Then we give an adaptive and autonomous call admission algorithm for cellular mobile networks, which uses the proposed learning automaton to minimize the blocking probability of the new calls subject to the constraint on the dropping probability of the handoff calls. The simulation results show that the performance of the proposed algorithm is close to the performance of the limited fractional guard channel algorithm for which we need to know all the traffic parameters in advance.
- Published
- 2005
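The guard channel policy the abstract above uses as a baseline can be stated in a few lines: handoff calls may use any free channel, while new calls are blocked once only the reserved guard channels remain. This is a sketch of that classic baseline, not of the paper's learning-automaton scheme.

```python
def admit(busy, capacity, guards, is_handoff):
    """Classic guard-channel admission: handoff calls get any free
    channel; new calls are admitted only while more than `guards`
    channels remain free, reserving the rest for handoffs."""
    if is_handoff:
        return busy < capacity
    return busy < capacity - guards
```

Choosing `guards` trades new-call blocking against handoff dropping; the paper's contribution is adapting that trade-off online without knowing the traffic parameters in advance.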
21. A generalized fault-tolerant sorting algorithm on a product network
- Author
-
Chen, Yuh-Shyan, Chang, Chih-Yung, Lin, Tsung-Hung, and Kuo, Chun-Bo
- Subjects
Algorithm ,Algorithms ,Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.sysarc.2004.11.005 Byline: Yuh-Shyan Chen (a), Chih-Yung Chang (b), Tsung-Hung Lin (a), Chun-Bo Kuo (a) Keywords: Fault-tolerant; Product networks; Snake order; Odd-even sorting; Bitonic sorting Abstract: A product network defines a class of topologies that are very often used such as meshes, tori, and hypercubes, etc. This paper proposes a generalized algorithm for fault-tolerant parallel sorting in product networks. To tolerate r - 1 faulty nodes, an r-dimensional product network containing faulty nodes is partitioned into a number of subgraphs such that each subgraph contains at most one fault. Our generalized sorting algorithm is divided into two steps. First, a single-fault sorting operation is presented that is correctly performed on each faulty subgraph containing one fault. Second, each subgraph is considered a supernode, and a fault-tolerant multiway merging operation is presented to recursively merge two sorted subsequences into one sorted sequence. Our generalized sorting algorithm can be applied to any product network only if the factor graph of the product graph can be embedded in a ring. Further, we also show the time complexity of our sorting operations on a grid, hypercube, and Petersen cube. Performance analysis illustrates that our generalized sorting scheme is a truly efficient fault-tolerant algorithm.
Author Affiliation: (a) Department of Computer Science and Information Engineering, National Chung Cheng University, Chiayi, Taiwan, ROC (b) Department of Computer Science and Information Engineering, Tamkang University, Taipei, Taiwan, ROC Article History: Received 30 September 2000; Revised 6 March 2002; Accepted 15 November 2004 Article Note: (footnote) [star] A preliminary version of this paper was presented in Proceedings of HPC 2000: 6th International Conference on Applications of High-Performance Computers in Engineering, Maui, Hawaii, January 26-28, 2000, and was supported by the National Science Council, ROC, under contract no. NSC89-2213-E-216-010.
- Published
- 2005
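One of the building blocks named in the abstract's keywords, odd-even sorting, alternates compare-exchange phases on odd and even pairs; n rounds suffice to sort n elements. A sequential sketch of that pattern (emulating the parallel exchange rounds, not the paper's fault-tolerant product-network algorithm):

```python
def odd_even_sort(a):
    """Odd-even transposition sort: alternate compare-exchange passes
    over even-indexed and odd-indexed adjacent pairs; n rounds suffice."""
    a = list(a)
    n = len(a)
    for rnd in range(n):
        start = rnd % 2               # alternate the pair phase each round
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a
```

In a parallel network each phase runs all of its disjoint compare-exchanges simultaneously, which is what makes the pattern attractive on meshes, tori, and hypercubes.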
22. Evolutionary algorithms for scheduling a flowshop manufacturing cell with sequence dependent family setups
- Author
-
Franca, Paulo M., Gupta, Jatinder N.D., Mendes, Alexandre S., Moscato, Pablo, and Veltink, Klaas J.
- Subjects
Algorithm ,Algorithms ,Universities and colleges ,School construction ,Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.cie.2003.11.004 Byline: Paulo M. Franca (a), Jatinder N.D. Gupta (b), Alexandre S. Mendes (d), Pablo Moscato (d), Klaas J. Veltink (c) Abstract: This paper considers the problem of scheduling part families and jobs within each part family in a flowshop manufacturing cell with sequence dependent family setup times where it is desired to minimize the makespan while processing parts (jobs) in each family together. Two evolutionary algorithms -- a Genetic Algorithm and a Memetic Algorithm with local search -- are proposed and empirically evaluated as to their effectiveness in finding optimal permutation schedules. The proposed algorithms use a compact representation for the solution and a hierarchically structured population where the number of possible neighborhoods is limited by dividing the population into clusters. In comparison to a Multi-Start procedure, solutions obtained by the proposed evolutionary algorithms were very close to the lower bounds for all problem instances. Moreover, the comparison against the previous best algorithm, a heuristic named CMD, indicated a considerable performance improvement. Author Affiliation: (a) Departamento de Engenharia de Sistemas, DENSIS Universidade Estadual de Campinas, UNICAMP C. P. 6101, 13081-970 Campinas, SP, Brazil (b) Department of Accounting and Information Systems, College of Administrative Science, The University of Alabama in Huntsville, Huntsville, AL 35899, USA (c) Department of Econometrics, University of Groningen Postbus 800, 9700 AV Groningen, The Netherlands (d) Department of Computer Science, School of Electrical Engineering and Computer Science, Faculty of Engineering and Built Environment, University of Newcastle Callaghan, 2308, NSW, Australia Article History: Received 1 April 2001; Revised 1 November 2002; Accepted 1 November 2003
- Published
- 2005
23. Multi-disk scheduling for time-constrained requests in RAID-0 devices
- Author
-
Lo, Shi-Wu, Kuo, Tei-Wei, and Lam, Kam-Yiu
- Subjects
Algorithm ,Algorithms ,Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.jss.2004.05.029 Byline: Shi-Wu Lo (a), Tei-Wei Kuo (a), Kam-Yiu Lam (b) Keywords: Real-time disk scheduling; RAID-0; Multi-disk scheduling Abstract: In this paper, we study the scheduling problem of real-time disk requests in multi-disk systems, such as RAID-0. We first propose a multi-disk scheduling algorithm, called Least-Remaining-Request-Size-First (LRSF), to improve soft real-time performance of I/O systems. LRSF may be integrated with different real-time/non-real-time single-disk scheduling algorithms, such as SATF and SSEDV, adopted by the disks in a multi-disk system. We then extend LRSF by serving requests on-the-way (OTW) to the target request, to minimize the starvation problem for requests that need to retrieve a large amount of data. The pre-fetching issue in RAID-0 is also studied to further improve the I/O performance. The performance of the proposed algorithm and schemes is investigated and compared with other disk scheduling algorithms through a series of experiments using both randomly generated workload and realistic workload. Author Affiliation: (a) Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan, ROC (b) Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong Article History: Received 21 March 2003; Revised 23 May 2004; Accepted 23 May 2004
- Published
- 2005
24. Application layer reachability monitoring for IP multicast
- Author
-
Sarac, Kamil and Almeroth, Kevin C.
- Subjects
Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.comnet.2004.11.004 Byline: Kamil Sarac (a), Kevin C. Almeroth (b) Abstract: The successful deployment of multicast in the Internet requires the availability of good network management solutions. One of the first management tasks for multicast is to verify its availability in the network. This task is usually referred to as reachability monitoring. Reachability ensures that sources can reach all existing and potential group members. Reachability also implies that receivers have multicast connectivity and can reach all sources. As a result, verifying reachability becomes very important to maintain availability and robustness of the multicast service between sources and receivers. In this paper, we present an application layer mechanism to monitor multicast reachability. First, we justify the need for reachability monitoring systems. Then, we present our monitoring system called sdr-monitor. Sdr-monitor leverages an existing application and provides close to real-time reachability monitoring for the multicast infrastructure. It is the first system that is developed and deployed for monitoring multicast reachability. We present the architecture of the system and then discuss its operation. Finally, we include our evaluations on a data set that we collected using this system. With this analysis, we present long term reachability characteristics of multicast infrastructure during a four year monitoring period between 1999 and 2003 and discuss potential causes for reachability problems. Author Affiliation: (a) Department of Computer Science, University of Texas at Dallas, 2601 N. Floyd Rd., Richardson, TX 75083, USA (b) Department of Computer Science, University of California Santa Barbara, Santa Barbara, CA 93106, USA Article Note: (miscellaneous) Responsible Editor: I. Nikolaidis
- Published
- 2005
25. Placement of proxy-based multicast overlays
- Author
-
Wu, Min-You, Zhu, Yan, and Shu, Wei
- Subjects
Algorithm ,Algorithms ,Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.comnet.2004.11.019 Byline: Min-You Wu (a), Yan Zhu (b), Wei Shu (b) Abstract: Proxy-based multicast is an approach to implementing the multicast function with proxy servers, which has many advantages compared to the IP multicast. When proxy locations are predetermined, many routing algorithms can be used to build a multicast overlay network. On the other hand, the problem of where to place proxies has not been extensively studied yet. A study on the placement problem for proxy-based multicast is presented in this paper. The objectives of proxy placement can be minimization of delay time, minimization of bandwidth consumption, or minimization of both. These performance metrics are studied in this paper and a number of algorithms are proposed for multicast proxy placement. Performance study is presented and practical issues of proxy placement are also discussed. Author Affiliation: (a) Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China (b) Department of Electrical and Computer Engineering, The University of New Mexico, MSC01 1100, Albuquerque, NM 87131-0001, USA Article Note: (miscellaneous) Responsible Editor: I. Matta
- Published
- 2005
26. Hypermedia maintenance support applications: Benefits and development costs
- Author
-
Crowder, Richard, Wills, Gary, and Hall, Wendy
- Subjects
Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.compind.2005.03.004 Byline: Richard Crowder, Gary Wills, Wendy Hall Keywords: Open Hypermedia; Manufacturing; Effort Analysis Abstract: The most common questions asked by industrialists regarding any electronic based systems to support maintenance activities are: what are the benefits, and how much will it cost? In this paper we will consider these questions by examining two hypermedia based maintenance support systems. In order to assess the tangible and intangible benefits of hypermedia, the identification of benefits and cost is required at an early stage of the development process. This paper discusses some of the benefits to be gained by industry from using hypermedia applications to support their information provision. The cost is directly related to the effort required to produce a hypermedia application, with the greatest effort in authoring. Two methods of costing are presented: a detailed engineering approach and an approach using heuristics. In addition we will comment on how the presented approaches can be applied to Web and Semantic Web applications. Author Affiliation: Intelligence Agent Multimedia Group, School of Electronics and Computer Science, Department of Electronics and Computer Science, University of Southampton, UK Article History: Received 15 April 2004; Accepted 21 March 2005
- Published
- 2005
27. ConSUS: a light-weight program conditioner
- Subjects
Algorithm ,Algorithms ,Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.jss.2004.03.034 Byline: Sebastian Danicic (a), Mohammed Daoudi (a), Chris Fox (b), Mark Harman (c), Robert M. Hierons (c), John R. Howroyd (a), Lahcen Ourabya (a), Martin Ward (d) Abstract: Program conditioning consists of identifying and removing a set of statements which cannot be executed when a condition of interest holds at some point in a program. It has been applied to problems in maintenance, testing, re-use and re-engineering. All current approaches to program conditioning rely upon both symbolic execution and reasoning about symbolic predicates. The reasoning can be performed by a 'heavy duty' theorem prover but this may impose unrealistic performance constraints. This paper reports on a lightweight approach to theorem proving using the FermaT Simplify decision procedure. This is used as a component to ConSUS, a program conditioning system for the Wide Spectrum Language WSL. The paper describes the symbolic execution algorithm used by ConSUS, which prunes as it conditions. The paper also provides empirical evidence that conditioning produces a significant reduction in program size and, although exponential in the worst case, the conditioning system has low degree polynomial behaviour in many cases, thereby making it scalable to unit level applications of program conditioning. Author Affiliation: (a) Department of Mathematics and Computer Science, Goldsmiths College, University of London, New Cross, London SE14 6NW, United Kingdom (b) Department of Computer Science, University of Essex, Wivenhoe Park, Colchester, CO4 3SQ, United Kingdom (c) Department of Information Systems and Computing, Brunel University, Uxbridge, Middlesex, UB8 3PH, United Kingdom (d) Software Technology Research Laboratory, De Montfort University, The Gateway, Leicester LE1 9BH, United Kingdom
- Published
- 2005
28. Design and simulation of a supplemental protocol for BGP
- Author
-
Yeh, Jyh-Haw, Zhang, Wei, Hu, Wen-Chen, and Lee, Chung-Wei
- Subjects
Network security software ,Universities and colleges ,Computer science ,Security software - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.comnet.2005.01.007 Byline: Jyh-haw Yeh (a), Wei Zhang (a), Wen-chen Hu (b), Chung-wei Lee (c) Keywords: Internet policy routing; Border gateway protocol; Internet topology; Routing simulation Abstract: Internet policy routing has attracted a lot of attention in the last decade and it is believed that this topic will become even more important in the foreseeable future. The growing diversity of the Internet brings in many organizations under different authorities with conflicting interests. Each such organization forms an Autonomous System (AS), with its own policy regulating network traffic across its boundaries to protect valuable network resources. As a result, a policy violation at any intermediate AS may cause a packet to be silently dropped before reaching its destination. The Border Gateway Protocol (BGP) was introduced to solve this packet-dropping problem in the mid-1990s, followed by a series of revisions. Currently, BGP is the dominant protocol in this field. However, BGP is a distance-vector and hop-by-hop protocol, resulting in a loss of reachability information for some destinations, even though feasible routes to those destinations may physically exist. Unreachable destinations under BGP are not necessarily truly unreachable. To overcome this deficiency, this paper presents a source policy route discovery protocol to supplement BGP. Simulation results show that almost all the false negative unreachable destinations can be resolved by the proposed protocol. 
Author Affiliation: (a) Department of Computer Science, Boise State University, 1910 University Drive, Boise, ID 83725, United States (b) Department of Computer Science, University of North Dakota, Grand Forks, ND 58202, United States (c) Department of Computer Science and Software Engineering, Auburn University, Auburn, AL 36849, United States Article History: Received 22 October 2003; Revised 10 December 2004; Accepted 5 January 2005 Article Note: (miscellaneous) Responsible Editor: Y. Shavitt
- Published
- 2005
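The false-negative problem described in the abstract above (a feasible route physically exists, but hop-by-hop BGP never learns it) lends itself to a small illustration. The sketch below is not the paper's protocol; it is a minimal source-side search over an assumed AS graph, in which each AS's policy is modeled as a set of forbidden (previous-hop, next-hop) transit pairs:

```python
def find_policy_route(graph, policy, src, dst):
    """Depth-first search for a path from src to dst in which every
    intermediate AS permits forwarding from its previous hop to its next hop.
    `policy[as_]` is a set of forbidden (prev, next) neighbour pairs --
    a hypothetical encoding of per-AS transit policy, for illustration only."""
    stack = [(src, [src])]
    while stack:
        node, path = stack.pop()
        if node == dst:
            return path
        prev = path[-2] if len(path) > 1 else None
        for nxt in graph.get(node, ()):
            if nxt in path:
                continue  # avoid loops
            if prev is not None and (prev, nxt) in policy.get(node, set()):
                continue  # this AS's policy would silently drop the packet
            stack.append((nxt, path + [nxt]))
    return None  # genuinely unreachable under every policy-compliant route
```

A destination that hop-by-hop routing reports unreachable via one neighbour may still be found via another, which is the gap the source route discovery protocol is meant to close.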
29. Two controlled experiments concerning the comparison of pair programming to peer review
- Author
-
Muller, Matthias M.
- Subjects
Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.jss.2004.12.019 Byline: Matthias M. Muller Keywords: Pair programming; Peer reviews; Empirical software engineering; Controlled experiment Abstract: This paper reports on two controlled experiments comparing pair programming with single developers who are assisted by an additional anonymous peer code review phase. The experiments were conducted in the summer semesters of 2002 and 2003 at the University of Karlsruhe with 38 computer science students. Instead of comparing pair programming to solo programming, this study aims at finding a technique by which a single developer produces program quality similar to that of programmer pairs, but at moderate cost. The study has one major finding concerning the cost of the two development methods. Single developers are as costly as programmer pairs, if both programmer pairs and single developers with an additional review phase are forced to produce programs of a similar level of correctness. In conclusion, programmer pairs and single developers become interchangeable in terms of development cost. As this paper reports on the results of small development tasks, the comparison could not take into account the long-term benefits of either technique. Author Affiliation: Department of Computer Science, Fakultät für Informatik, Universität Karlsruhe, Am Fasanengarten 5, 76131 Karlsruhe, Germany Article History: Received 3 June 2004; Revised 23 December 2004; Accepted 24 December 2004
- Published
- 2005
30. Grooming of multicast sessions in metropolitan WDM ring networks
- Author
-
Madhyastha, Harsha V., Chowdhary, Girish V., Srinivas, N., and Murthy, C. Siva Ram
- Subjects
Algorithm ,Fiber optics -- Equipment and supplies ,Algorithms ,Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.comnet.2004.11.027 Byline: Harsha V. Madhyastha (a), Girish V. Chowdhary (c), N. Srinivas (b), C. Siva Ram Murthy (c) Keywords: Optical WDM ring networks; Routing and wavelength assignment; Multicast sessions; Optical splitter; Traffic grooming; ILP formulation; Circle construction Abstract: With the introduction of WDM into the metro environment, the need to cost-effectively handle finer "sub-wavelength" capacities has become paramount. In this paper, we address the problem of routing and wavelength assignment of multicast sessions with sub-wavelength traffic demands, in the scenario of metropolitan WDM ring networks. In order to support multicasting, individual nodes need to have the capability to duplicate traffic. We consider two different node architectures which perform the duplication in the optical and electronic domains, respectively. As traffic duplication at the electronic level is much more expensive than the optical alternative [X.-H. Jia, D.-Z. Du, X.-D. Hu, M.-K. Lee, J. Gu, Optimization of wavelength assignment for QoS multicast in WDM networks, IEEE Trans. Commun. 49 (2) (2001) 341-350], we study the problem of assigning routes and wavelengths to the multicast sessions so as to minimize electronic copying. We present an ILP formulation of this problem. The solution to this problem can be divided into three phases -- 1. routing of multicast sessions, 2. construction of circles by grouping non-overlapping arcs and 3. grouping these circles onto wavelengths. We propose a heuristic algorithm which implements the routing as well as circle construction phases simultaneously and then groups the circles. We present extensive simulation results to show that our approach leads to much lower equipment cost than that obtained by routing each multicast session along its minimum spanning tree and then using the best known heuristic for circle construction [X. Zhang, C. 
Qiao, An effective and comprehensive approach to traffic grooming and wavelength assignment in SONET/WDM rings, IEEE/ACM Trans. Networking 8 (5) (2000) 608-617]. Author Affiliation: (a) Department of Computer Science and Engineering, University of Washington, Seattle, WA 98195, USA (b) Microsoft India (R&D) Pvt. Ltd., 9th Floor, Room # 9008, Cyber Towers, HiTech City, Hyderabad 500 081, India (c) Department of Computer Science and Engineering, Indian Institute of Technology, Madras, Chennai 600 036, India Article History: Received 2 April 2004; Revised 15 September 2004; Accepted 29 November 2004 Article Note: (footnote) [star] This work was supported by the Department of Science and Technology, New Delhi, India.
- Published
- 2005
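Phase 2 of the solution above, constructing circles by grouping non-overlapping arcs, can be illustrated with a first-fit sketch. The paper's actual grouping rule and its joint routing/circle-construction heuristic are not reproduced here; the arc and link encoding below is a hypothetical one for an n-node ring:

```python
def arc_segments(start, end, n):
    """Links covered by a clockwise arc on an n-node ring
    (link i joins node i to node (i+1) mod n -- assumed encoding)."""
    segs, cur = set(), start
    while cur != end:
        segs.add(cur)
        cur = (cur + 1) % n
    return segs

def build_circles(arcs, n):
    """First-fit grouping of mutually non-overlapping arcs into circles:
    an arc joins the first circle with which it shares no link."""
    circles = []                      # each entry: (links used, member arcs)
    for arc in arcs:
        segs = arc_segments(arc[0], arc[1], n)
        for used, members in circles:
            if not (used & segs):     # arc fits: no link conflict
                used.update(segs)
                members.append(arc)
                break
        else:
            circles.append((segs, [arc]))
    return [members for _, members in circles]
```

Each resulting circle can then be assigned to a wavelength in phase 3; fewer circles means fewer wavelengths consumed.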
31. Adaptive row major order: a new space filling curve for efficient spatial join processing in the transform space
- Author
-
Lee, Min-Jae, Whang, Kyu-Young, Han, Wook-Shin, and Song, Il-Yeol
- Subjects
Algorithm ,Algorithms ,Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.jss.2004.10.012 Byline: Min-Jae Lee (a), Kyu-Young Whang (a), Wook-Shin Han (b), Il-Yeol Song (c) Keywords: Spatial join; Corner transformation; Databases Abstract: A transform-space index indexes spatial objects represented as points in the transform space. An advantage of a transform-space index is that optimization of spatial join algorithms using these indexes can be more formal. The authors earlier proposed the Transform-Based Spatial Join algorithm that joins two transform-space indexes. It renders global optimization easy with little overhead by utilizing the characteristics of the transform space. In particular, it allows us to globally determine the order of accessing disk pages, which makes a significant impact on the performance of joins. For this purpose, we use various space filling curves. In this paper, we propose a new space filling curve called the adaptive row major order (ARM order). The ARM order adaptively controls the order of accessing pages and significantly reduces the one-pass buffer size (the minimum buffer size required for guaranteeing one disk access per page) and the number of disk accesses for a given buffer size. Through analysis and experiments, we verify the excellence of the ARM order when used with the Transform-Based Spatial Join. The Transform-Based Spatial Join with the ARM order always outperforms those with other conventional space filling curves in terms of both measures used: the one-pass buffer size and the number of disk accesses. Specifically, it reduces the one-pass buffer size by up to 25.9 times and the number of disk accesses by up to 2.11 times. We conclude that we achieve these results mainly due to global optimization of the order of accessing disk pages using an adaptive space filling curve.
Author Affiliation: (a) Department of Computer Science and Advanced Information Technology Research Center (AITrc), Korea Advanced Institute of Science and Technology (KAIST), 373-1, Koo-Sung Dong, Yoo-Sung Ku, Daejeon 305-701, Republic of Korea (b) Department of Computer Engineering, Kyungpook National University, Sankyuk-Dong, Buk-Gu, Daegu 702-701, Republic of Korea (c) College of Information Science and Technology, Drexel University, Philadelphia, PA, USA Article History: Received 4 December 2003; Revised 18 October 2004; Accepted 18 October 2004
- Published
- 2005
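The ARM order itself is adaptive and cannot be reconstructed from the abstract alone, but the conventional curves it is compared against are easy to sketch. The serpentine (boustrophedon) variant below keeps consecutive grid cells adjacent, which is the property that reduces page re-fetches for a fixed buffer; plain row major order gives that up at every row boundary:

```python
def row_major(n):
    """Visit order of an n x n grid under plain row major order."""
    return [(r, c) for r in range(n) for c in range(n)]

def serpentine(n):
    """A boustrophedon variant: alternate row direction so that every
    pair of consecutive cells is grid-adjacent (one conventional curve
    a new curve like the ARM order would be benchmarked against)."""
    order = []
    for r in range(n):
        cols = range(n) if r % 2 == 0 else range(n - 1, -1, -1)
        order.extend((r, c) for c in cols)
    return order
```

Under row major order, the jump from the end of one row to the start of the next touches a far-away region of the grid; a curve that preserves locality lets more joins complete within the one-pass buffer.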
32. A risk minimization framework for information retrieval
- Author
-
Zhai, ChengXiang and Lafferty, John
- Subjects
Computer science ,Information storage and retrieval - Abstract
This paper presents a probabilistic information retrieval framework in which the retrieval problem is formally treated as a statistical decision problem. In this framework, queries and documents are modeled using statistical language models, user preferences are modeled through loss functions, and retrieval is cast as a risk minimization problem. We discuss how this framework can unify existing retrieval models and accommodate systematic development of new retrieval models. As an example of using the framework to model non-traditional retrieval problems, we derive retrieval models for subtopic retrieval, which is concerned with retrieving documents to cover many different subtopics of a general query topic. These new models differ from traditional retrieval models in that they relax the traditional assumption of independent relevance of documents.
- Published
- 2006
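One concrete instance of the framework: with unigram language models and a KL-divergence loss function, risk minimization reduces to ranking documents by KL(query model || document model). The sketch below uses a generic additive smoothing scheme, which is an assumption for illustration, not the estimator used in the paper:

```python
import math

def lm(text, vocab, mu=1.0):
    """Unigram language model with additive smoothing over `vocab`.
    `mu` is a hypothetical smoothing weight, not a value from the paper."""
    words = text.split()
    return {w: (words.count(w) + mu) / (len(words) + mu * len(vocab))
            for w in vocab}

def risk(query_lm, doc_lm):
    """Expected loss under a KL-divergence loss: KL(query || doc).
    Lower risk means the document model better explains the query."""
    return sum(p * math.log(p / doc_lm[w]) for w, p in query_lm.items())

def rank(query, docs):
    """Retrieval as risk minimization: return docs in increasing risk."""
    vocab = set(query.split()) | {w for d in docs for w in d.split()}
    q = lm(query, vocab)
    return sorted(docs, key=lambda d: risk(q, lm(d, vocab)))
```

Swapping the loss function (e.g. one that penalizes redundancy across returned documents) is how the framework accommodates non-traditional problems such as subtopic retrieval.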
33. Best entry points for structured document retrieval -- Part I: Characteristics
- Author
-
Reid, Jane, Lalmas, Mounia, Finesilver, Karen, and Hertzum, Morten
- Subjects
Computer science ,Distance education - Abstract
Structured document retrieval makes use of document components as the basis of the retrieval process, rather than complete documents. The inherent relationships between these components make it vital to support users' natural browsing behaviour in order to offer effective and efficient access to structured documents. This paper examines the concept of best entry points, which are document components from which the user can browse to obtain optimal access to relevant document components. In particular this paper investigates the basic characteristics of best entry points.
- Published
- 2006
34. Memory latency consideration for load sharing on heterogeneous network of workstations
- Author
-
Wang, Yi-Min
- Subjects
Computer industry ,Microcomputer industry ,Computer industry ,Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.sysarc.2005.03.002 Byline: Yi-Min Wang Keywords: Heterogeneous network of workstations; Load sharing; Process migration; Memory latency Abstract: With the development of cheap personal computers and high-speed networks, heterogeneous networks of workstations have become the trend in high performance computing. This paper focuses on the load sharing problem for heterogeneous networks of workstations. Load sharing means evening out the workloads of all coordinated computers in the heterogeneous system so that no computer is left idle. When some nodes suffer from heavy loading, it is necessary to migrate some processes to the nodes with light loading. However, most load sharing policies focus only on different CPU speed and/or memory capacity without taking the effect of memory access latencies into consideration. In this paper, we propose a new load sharing policy, the CPU-Memory-Power-based policy, to improve on the CPU-Memory-based policy. In addition to CPU speed and memory capacity, this policy also puts emphasis on memory access latency. Experimental results show that this method performs better than the other policies, and that memory access latency is actually an important consideration in the design of load sharing policies on heterogeneous networks of workstations. Author Affiliation: Department of Computer Science and Information Management, Providence University, 200 Chung-chi Road, Shalu, Taichung Hsien, 433 Taiwan, ROC Article History: Received 22 October 2002; Revised 23 January 2004; Accepted 15 March 2005 Article Note: (footnote) [star] This research work was partially supported by the National Science Council of Republic of China under grant NSC91-2213-E-126-003.
- Published
- 2006
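A toy version of the idea, under assumptions not taken from the paper: score each candidate node by CPU speed, discount it by memory access latency, and apply a (hypothetical) paging penalty when the migrating job does not fit in the node's free memory:

```python
def node_score(cpu_speed, free_mem, mem_latency, job_mem):
    """Hypothetical scoring rule in the spirit of a CPU-Memory-Power policy:
    prefer fast CPUs, penalise slow memory, and heavily penalise nodes
    whose free memory cannot hold the job (which would force paging)."""
    penalty = 10.0 if free_mem < job_mem else 1.0   # assumed paging penalty
    return cpu_speed / (mem_latency * penalty)

def pick_node(nodes, job_mem):
    """Pick the migration target with the best score.
    nodes: {name: (cpu_speed_GHz, free_mem_MB, mem_latency_ns)} -- assumed units."""
    return max(nodes, key=lambda n: node_score(*nodes[n], job_mem))
```

The point the abstract makes is visible in the trade-off: a policy that looks only at CPU speed would always pick the fastest node, even when migrating there triggers expensive paging or slow memory accesses.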
35. Routing and wavelength assignment in multifiber WDM networks with non-uniform fiber cost
- Author
-
Nomikos, Christos, Pagourtzis, Aris, Potika, Katerina, and Zachos, Stathis
- Subjects
Algorithm ,Fiber optics -- Equipment and supplies ,Algorithms ,Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.comnet.2004.11.028 Byline: Christos Nomikos (a), Aris Pagourtzis (b), Katerina Potika (b), Stathis Zachos (b)(c) Keywords: Graph theory; Wavelength assignment; Multifiber optical networks Abstract: Motivated by the increasing importance of multifiber WDM networks we study a routing and wavelength assignment problem in such networks. In this problem the number of wavelengths per fiber is given and the goal is to minimize the cost of fiber links that need to be reserved in order to satisfy a set of communication requests; we introduce a generalized setting where network pricing is non-uniform, that is the cost of hiring a fiber may differ from link to link. We consider two variations: undirected, which corresponds to full-duplex communication, and directed, which corresponds to one-way communication. Moreover, for rings we also study the problem in the case of pre-determined routing. We present exact or constant-ratio approximation algorithms for all the above variations in chain, ring and spider networks. Author Affiliation: (a) Department of Computer Science, University of Ioannina, P.O. Box 1186, 45 110 Ioannina, Greece (b) School of Electrical and Computer Engineering, National Technical University of Athens, 157 80 Athens, Greece (c) CIS Department, Brooklyn College and Graduate Center, City University of New York, NY 11210, USA Article History: Received 29 July 2004; Accepted 22 November 2004 Article Note: (footnote) [star] Research supported in part by the European Social Fund (75%) and the Greek Ministry of Education (25%) through grants "Pythagoras" and "Heraclitus", of the Operational Programme on Education and Initial Vocational Training. Some of the results presented in this paper have appeared in preliminary form in 'Fiber Cost Reduction and Wavelength Minimization in Multifiber WDM Networks', Proc. Networking 2004, LNCS 3042, pp. 150-161.
- Published
- 2006
36. AQoSM: Scalable QoS multicast provisioning in Diff-Serv networks
- Author
-
Cui, Jun-Hong, Lao, Li, Faloutsos, Michalis, and Gerla, Mario
- Subjects
Architecture ,Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.comnet.2005.03.003 Byline: Jun-Hong Cui (a), Li Lao (b), Michalis Faloutsos (c), Mario Gerla (b) Keywords: Multicast; QoS; State scalability; Diff-Serv; MPLS Abstract: The deployment of IP multicast support is impeded by several factors among which are the state scalability problem, the cumbersome management and routing, and the difficulty of supporting QoS. In this paper, we propose an architecture, called Aggregated QoS Multicast (AQoSM), to provide scalable and efficient QoS multicast in Diff-Serv networks. The key idea of AQoSM is to separate the concept of groups from the concept of distribution tree by "mapping" many groups to one distribution tree. In this way, multicast groups can now be routed and rerouted very quickly by assigning different labels (e.g., tree IDs) to the packets. Therefore, we can have load-balancing and dynamic rerouting to meet QoS requirements. In addition, the aggregation of groups on fewer trees leads to routing state reduction and less tree management overhead. Thus, AQoSM enables multicast to be seamlessly integrated into Diff-Serv without violating the design principle of Diff-Serv of keeping network core "QoS stateless" and without sacrificing the efficiency of multicast. Finally, efficient resource utilization and strong QoS support can be achieved through statistical multiplexing at the level of aggregated trees. We design a detailed MPLS-based AQoSM protocol with efficient admission control and MPLS multicast tree management. By simulation studies, we show that our protocol achieves significant multicast state reduction (up to 82%) and tree maintenance overhead reduction (up to 86%) with modest (12%) bandwidth overhead. It also reduces the blocking ratio of user requests with strong QoS requirements due to its load balancing and statistical multiplexing capabilities. 
Author Affiliation: (a) Computer Science & Engineering Department, University of Connecticut, Storrs, CT, United States (b) Computer Science Department, University of California, Los Angeles, CA, United States (c) Computer Science & Engineering Department, University of California, Riverside, CA, United States Article History: Received 8 November 2004; Revised 17 February 2005; Accepted 29 March 2005 Article Note: (footnote) [star] This material is based upon work supported by the National Science Foundation under Grant No. CNS-0435515, CNS-0435230, ANIR-9985195 (CAREER award), and IDM-0208950.
- Published
- 2006
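The key idea above, mapping many groups to one distribution tree, can be sketched as follows. The covering test and the smallest-covering-tree tie-break are assumptions for illustration, not the paper's admission-control or tree-management logic:

```python
def aggregate(groups, trees):
    """Map each multicast group to the smallest existing tree that covers
    all its members (a larger tree wastes bandwidth on non-member leaves);
    groups that fit no existing tree get a dedicated tree.
    groups: {group_id: set of member nodes}; trees: {tree_id: set of leaves}."""
    mapping = {}
    for gid, members in groups.items():
        candidates = [t for t, leaves in trees.items() if members <= leaves]
        if candidates:
            mapping[gid] = min(candidates, key=lambda t: len(trees[t]))
        else:
            tid = f"tree-{gid}"       # hypothetical naming for a new tree
            trees[tid] = set(members)
            mapping[gid] = tid
    return mapping
```

Routing state in the core is then proportional to the number of distinct trees rather than the number of groups, which is where the reported state reduction comes from.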
37. Longevity and detection of persistent foraging trails in Pharaoh's ants, Monomorium pharaonis (L.)
- Author
-
Jackson, Duncan E., Martin, Stephen J., Holcombe, Mike, and Ratnieks, Francis L.W.
- Subjects
Computer science - Abstract
Pheromone trails provide a positive feedback mechanism for many animal species, and facilitate the sharing of information about food, nest or mate location. How long pheromone trails persist determines how long these environmental memories are accessible to conspecifics. We determined the time frame over which Pharaoh's ant colonies can re-establish a long-lived trail and how the behaviour of individual workers contributes to trail re-establishment. We used artificially constrained pheromone trails on paper to investigate trail longevity and individual responses. Trails formed by traffic of 1000-2000 ant passages could be re-established after 24h, and after 48h for 4000-8000 ant passages. Only 27.5% of individual foragers were highly successful in a bioassay testing the ability to detect trails established 24h earlier. Trail-finding ability was significantly correlated with a low antennal position. Long-lived trail detection scores increased significantly in 57% of foragers after 5h of food deprivation and isolation, but declined again after feeding. In a control study, only 9% of foragers changed their scores, when isolated with food present. A high degree of conservatism was found such that, regardless of treatment, 21% always failed the bioassay and 17% always succeeded. Our demonstration of long-lived components in Pharaoh's ant trails and of a behavioural specialization in 'pathfinding' shows that pheromone trails are more complex at the individual level than is generally recognized.
- Published
- 2006
38. FPGA-based tool path computation
- Author
-
Jimeno, A., Sanchez, J.L., Mora, H., Mora, J., and Garcia-Chamizo, J.M.
- Subjects
Programmable logic array ,Algorithm ,Digital integrated circuits ,Algorithms ,Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.compind.2005.05.004 Byline: A. Jimeno, J.L. Sanchez, H. Mora, J. Mora, J.M. Garcia-Chamizo Keywords: Tool path computing; Virtual digitizing; Specific hardware architectures; Reconfigurable computing Abstract: Tool path generation is one of the most complex problems in computer aided manufacturing. Although some efficient strategies have been developed, most of them are only useful for standard machining. The algorithm called Virtual Digitizing computes the tool path by means of a virtually digitized model of the surface and a geometry specification of the tool and its motion, so it can even be used in non-standard machining. This algorithm is simple and robust and avoids the problem of tool-surface collision by its own definition. A Virtual Digitizing optimisation that makes the most of specific hardware in order to improve the algorithm efficiency is presented in this paper. A comparative study will show the gain achieved in terms of total computing time. We also present an FPGA-based architecture that can be used to produce rotations with more precision and speed than other well-known classic implementations. Author Affiliation: Computer Science Technology and Computation Department, University of Alicante, Apdo. Correos 99, 03080 Alicante, Spain Article History: Received 11 November 2004; Accepted 13 May 2005
- Published
- 2006
39. A new protocol to counter online dictionary attacks
- Author
-
Goyal, Vipul, Kumar, Virendra, Singh, Mayank, Abraham, Ajith, and Sanyal, Sugata
- Subjects
Protocol ,Cryptography ,Denial of service attacks ,Executions and executioners ,Computer science ,Computer network protocols - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.cose.2005.09.003 Byline: Vipul Goyal (a), Virendra Kumar (a), Mayank Singh (a), Ajith Abraham (b), Sugata Sanyal (c) Abstract: The most popular method of authenticating users is through passwords. Though passwords are the most convenient means of authentication, they bring with them the threat of dictionary attacks. While offline dictionary attacks are possible only if the adversary is able to collect data for a successful protocol execution by eavesdropping on the communication channel and can be successfully countered by using public key cryptography, online dictionary attacks can be performed by anyone and there is no satisfactory solution to counter them. In this paper, we propose an authentication protocol which is easy to implement without any infrastructural changes and yet prevents online dictionary attacks. Our protocol uses only one way hash functions and eliminates online dictionary attacks by implementing a challenge-response system. This challenge-response system is designed in such a fashion that it hardly poses any difficulty to a genuine user but is extremely burdensome, time consuming and computationally intensive for an adversary trying to launch as many as hundreds of thousands of authentication requests, as in the case of an online dictionary attack. The protocol is perfectly stateless and thus less vulnerable to denial of service (DoS) attacks. Author Affiliation: (a) Crypto Group, Institute of Technology, Banaras Hindu University, India (b) IITA Professorship Program, School of Computer Science and Engineering, Chung-Ang University, South Korea (c) School of Technology and Computer Science, Tata Institute of Fundamental Research, India Article History: Revised 25 August 2005; Accepted 21 September 2005
- Published
- 2006
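A generic client-puzzle flavour of such a challenge-response system (not the paper's exact construction) can be built from a one-way hash alone: the server issues a challenge, and the client must find a counter whose hash carries a required number of leading zero bits. One genuine login is cheap; hundreds of thousands of password guesses are not:

```python
import hashlib
import itertools

def solve_puzzle(challenge: bytes, bits: int) -> int:
    """Find a counter such that SHA-256(challenge || counter) has `bits`
    leading zero bits -- roughly 2**bits hash evaluations on average."""
    shift = 256 - bits
    for counter in itertools.count():
        digest = hashlib.sha256(challenge + counter.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> shift == 0:
            return counter

def verify(challenge: bytes, counter: int, bits: int) -> bool:
    """One cheap hash suffices for the server to check the response,
    which keeps the scheme compatible with a stateless design."""
    digest = hashlib.sha256(challenge + counter.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - bits) == 0
```

Raising `bits` per failed attempt (a plausible extension, not claimed by the paper) makes an automated dictionary attack progressively more expensive while leaving a genuine user's occasional retry almost free.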
40. An empirical study of process-related attributes in segmented software cost-estimation relationships
- Author
-
Cuadrado-Gallego, Juan J., Sicilia, Miguel-Ángel, Garre, Miguel, and Rodriguez, Daniel
- Subjects
Algorithm ,Software quality ,Algorithms ,Software ,Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.jss.2005.04.040 Byline: Juan J. Cuadrado-Gallego (a), Miguel-Angel Sicilia (a), Miguel Garre (a), Daniel Rodriguez (b) Keywords: Parametric software effort estimation; Clustering algorithms; Software cost drivers; EM algorithm Abstract: Parametric software effort estimation models consisting of a single mathematical relationship suffer from poor adjustment and predictive characteristics in cases in which the historical database considered contains data coming from projects of a heterogeneous nature. The segmentation of the input domain according to clusters obtained from the database of historical projects serves as a tool for more realistic models that use several local estimation relationships. Nonetheless, it may be hypothesized that using clustering algorithms without previous consideration of the influence of well-known project attributes misses the opportunity to obtain more realistic segments. In this paper, we describe the results of an empirical study using the ISBSG-8 database and the EM clustering algorithm that studies the influence of the consideration of two process-related attributes as drivers of the clustering process: the use of engineering methodologies and the use of CASE tools. The results provide evidence that such consideration significantly conditions the final model obtained, even though the resulting predictive quality is of a similar magnitude. Author Affiliation: (a) Computer Science Department, Polytechnic School, University of Alcala. Ctra. Barcelona km. 33.6, 28871 Alcala de Henares, Madrid, Spain (b) Computer Science Department, University of Reading, Reading RG6 6AY, UK Article History: Received 15 February 2005; Revised 23 April 2005; Accepted 23 April 2005
- Published
- 2006
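The segmented-estimation idea can be shown with a crude stand-in for the EM-based clustering: split projects into segments by size quantile (instead of fitting a mixture model) and fit a one-parameter local effort relationship per segment by least squares through the origin. Everything here is a simplification for illustration:

```python
def segment_and_fit(projects, k=2):
    """Split (size, effort) pairs into k segments by size and fit
    effort = a * size within each segment (least squares through origin).
    A toy substitute for the paper's EM clustering, not its method."""
    projects = sorted(projects)
    step = len(projects) // k
    models = []
    for i in range(k):
        chunk = projects[i * step:(i + 1) * step] if i < k - 1 else projects[i * step:]
        a = sum(s * e for s, e in chunk) / sum(s * s for s, _ in chunk)
        models.append((chunk[-1][0], a))          # (segment size bound, slope)
    return models

def estimate(models, size):
    """Route a new project to its segment's local model."""
    for bound, a in models[:-1]:
        if size <= bound:
            return a * size
    return models[-1][1] * size
```

Even this toy version shows why a single global relationship fails on heterogeneous data: a lone slope fitted across both regimes would misestimate projects in each.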
41. Adaptive schemes for location update generation in execution location-dependent continuous queries
- Author
-
Lam, Kam-Yiu and Ulusoy, Özgür
- Subjects
Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.jss.2005.07.015 Byline: Kam-Yiu Lam (a), Özgür Ulusoy (b) Keywords: Location-dependent continuous queries; Location update; Moving object database; Location management Abstract: An important feature that is expected to be owned by today's mobile computing systems is the ability of processing location-dependent continuous queries on moving objects. The result of a location-dependent query depends on the current location of the mobile client which has generated the query as well as the locations of the moving objects on which the query has been issued. When a location-dependent query is specified to be continuous, the result of the query can continuously change. In order to provide accurate and timely query results to a client, the location of the client as well as the locations of moving objects in the system have to be closely monitored. Most of the location generation methods proposed in the literature aim to optimize utilization of the limited wireless bandwidth. The issues of correctness and timeliness of query results reported to clients have been largely ignored. In this paper, we propose an adaptive monitoring method (AMM) and a deadline-driven method (DDM) for managing the locations of moving objects. The aim of our methods is to generate location updates with the consideration of maintaining the correctness of query evaluation results without increasing location update workload. Extensive simulation experiments have been conducted to investigate the performance of the proposed methods as compared to a well-known location update generation method, the plain dead-reckoning (pdr). 
Author Affiliation: (a) Department of Computer Science, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon, Hong Kong (b) Department of Computer Engineering, Bilkent University, Bilkent, Ankara 06800, Turkey Article History: Received 27 January 2004; Revised 1 July 2005; Accepted 15 July 2005 Article Note: (footnote) [star] The work described in this paper was supported by a research grant from the Research Grant Council of Hong Kong Special Administration Region, China [Project No. CityU 1076/02E].
- Published
- 2006
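The baseline the authors compare against, plain dead-reckoning (pdr), is easy to sketch: the server extrapolates the last reported position and velocity, and the mobile generates an update only when its true position drifts past a threshold. The 1-D version below is an illustration of pdr only, not of the paper's AMM or DDM methods:

```python
def dead_reckoning(samples, threshold):
    """Plain dead-reckoning: report a new (time, position, velocity) only
    when the true position deviates from the server's linear prediction
    by more than `threshold`. `samples` is a list of (t, x) for 1-D motion."""
    t0, x0 = samples[0]
    v = 0.0
    updates = [(t0, x0, v)]           # initial report
    for i in range(1, len(samples)):
        t, x = samples[i]
        predicted = x0 + v * (t - t0)
        if abs(x - predicted) > threshold:
            tp, xp = samples[i - 1]
            v = (x - xp) / (t - tp) if t > tp else 0.0  # re-estimate velocity
            t0, x0 = t, x
            updates.append((t0, x0, v))
    return updates
```

The tension the paper addresses is visible here: a large threshold saves wireless bandwidth but lets continuous-query answers go stale between updates.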
42. Declarative programming of integrated peer-to-peer and Web based systems: the case of Prolog
- Author
-
Loke, Seng W.
- Subjects
Peer to peer computing ,Fighter planes ,Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.jss.2005.04.005 Byline: Seng W. Loke Keywords: Prolog; Integration; Logic programming; Web computing; Peer-to-peer computing; Declarative programming Abstract: Web and peer-to-peer systems have emerged as popular areas in distributed computing, and their integrated usage permits the benefits of both to be exploited. While much work in these areas has utilized the imperative programming paradigm, the need for declarative programming paradigms is increasingly being recognized, not only for the often cited advantages such as a higher level of abstraction and specialized language features, but also to tackle the querying and manipulation of knowledge and reasoning with semantics that will be the mainstay of the proposed next generation of the Web and peer-to-peer computing. This paper presents an approach towards integrative use of the Web and peer-to-peer systems within a declarative programming paradigm. We contend that logic programming can be useful in peer-to-peer computing, especially for querying and representing knowledge shared over peer networks, and for scripting applications that involve sophisticated search behaviour over peer networks. As an example of peer-to-peer querying expressed in a logic programming language, we propose a simple extension of Prolog, which we call LogicPeer, to enable goal evaluation over peers in a peer network. Using LogicPeer, we outline how a peer-to-peer version of a Yahoo-like system can be built and queried, and several other applications that involve decentralized knowledge sharing. We then show how LogicPeer can be used with LogicWeb, a Prolog extension to access Web pages, thereby integrating peer-to-peer querying and Web querying in a common declarative framework. Author Affiliation: School of Computer Science and Software Engineering, Monash University, 900 Dandenong Road, Caulfield East, Melbourne, Vic. 
3145, Australia Article History: Received 20 August 2003; Revised 6 April 2005; Accepted 6 April 2005
- Published
- 2006
43. Universal IP multicast delivery
- Author
-
Zhang, Beichuan, Wang, Wenjie, Jamin, Sugih, Massey, Daniel, and Zhang, Lixia
- Subjects
Tunnels ,Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.comnet.2005.07.016 Byline: Beichuan Zhang (a), Wenjie Wang (b), Sugih Jamin (b), Daniel Massey (c), Lixia Zhang (d) Keywords: IP multicast; End-host multicast; Overlay multicast Abstract: A ubiquitous and efficient multicast data delivery service is essential to the success of large-scale group communication applications. The original IP multicast design is to enhance network routers with multicast capability [S. Deering, D. Cheriton, Multicast routing in datagram internetworks and extended LANs, ACM Transactions on Computer Systems 8(2) (1990) 85-110]. This approach can achieve great transmission efficiency and performance but also poses a critical dependency on universal deployment. A different approach, overlay multicast, moves multicast functionality to end hosts, thereby removing the dependency on router deployment, albeit at the cost of noticeable performance penalty compared to IP multicast. In this paper we present the Universal Multicast (UM) framework, along with a set of mechanisms and protocols, to provide ubiquitous multicast delivery service on the Internet. Our design can fully utilize native IP multicast wherever it is available, and automatically build unicast tunnels to connect IP Multicast "islands" to form an overall multicast overlay. The UM design consists of three major components: an overlay multicast protocol (HMTP) for inter-island routing, an intra-island multicast management protocol (HGMP) to glue overlay multicast and native IP multicast together, and a daemon program to implement the functionality at hosts. In addition to performance evaluation through simulations, we have also implemented parts of the UM framework. Our prototype implementation has been used to broadcast several workshops and the ACM SIGCOMM 2004 conference live on the Internet. 
We present some statistics collected during the live broadcast and describe mechanisms we adopted to support end hosts behind Network Address Translation (NAT) gateways and firewalls. Author Affiliation: (a) Computer Science Department, University of Arizona, Tucson, AZ 85721-0077, USA (b) EECS Department, University of Michigan, Ann Arbor, MI 48109-2122, USA (c) Computer Science Department, Colorado State University, Fort Collins, CO 80523-1873, USA (d) Computer Science Department, UCLA, Los Angeles, CA 90095-1596, USA
- Published
- 2006
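Joining IP-multicast "islands" with unicast tunnels is, at its core, a connectivity problem. The minimum-spanning-tree sketch below (cheapest tunnel first, skipping tunnels between already-connected islands) conveys that idea; note that HMTP builds its overlay tree with its own distributed protocol, which this centralized illustration does not reproduce:

```python
def connect_islands(islands, candidate_tunnels):
    """Pick unicast tunnels that join all multicast islands into one overlay,
    Kruskal-style, using a union-find over island identifiers.
    candidate_tunnels: list of (cost, island_a, island_b) -- assumed encoding."""
    parent = {i: i for i in islands}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    chosen = []
    for cost, a, b in sorted(candidate_tunnels):
        ra, rb = find(a), find(b)
        if ra != rb:                        # tunnel actually merges two islands
            parent[ra] = rb
            chosen.append((a, b))
    return chosen
```

Inside each island, delivery stays native IP multicast; only the chosen tunnels carry duplicated unicast traffic, which is what keeps the overlay's performance penalty bounded.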
44. Consistency maintenance in dynamic peer-to-peer overlay networks
- Author
-
Liu, Xiaotao, Lan, Jiang, Shenoy, Prashant, and Ramamritham, Krithi
- Subjects
Peer to peer computing ,Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.comnet.2005.07.010 Byline: Xiaotao Liu (a), Jiang Lan (b), Prashant Shenoy (a), Krithi Ramamritham (c) Keywords: P2P; Consistency; Overlay network Abstract: In this paper, we present techniques to maintain temporal consistency of replicated objects in data-centric peer-to-peer overlay applications. We consider both structured and unstructured overlay networks, represented by Chord and Gnutella, respectively, and present techniques for maintaining consistency of replicated data objects in the presence of dynamic joins and leaves. We present extensions to the Chord and Gnutella protocol to incorporate our consistency techniques and implement our extensions to Gnutella into a Gtk-Gnutella prototype. An experimental evaluation of our techniques shows that: (i) a push-based approach achieves near-perfect fidelity in a stable overlay network, (ii) a hybrid approach based on push and pull achieves high fidelity in highly dynamic overlay networks and (iii) the run-time overheads of our techniques are small, making them a practical choice for overlay networks. Author Affiliation: (a) Department of Computer Science, University of Massachusetts, Amherst, MA 01003, USA (b) Verizon Corp, GTE Labs, Verizon Data Services, 40 Sylvan Road, Waltham, MA 02451, USA (c) Department of Computer Science and Engineering, Indian Institute of Technology, Bombay, Powai, Mumbai 400 076, India Article Note: (footnote) [star] This research was supported by NSF grants CCR-9984030, CCR-0098060, CCR-0219520 and EIA-0080119.
- Published
- 2006
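The push- and pull-based consistency approaches contrasted in the abstract above can be illustrated with a minimal sketch. The `Origin` and `Replica` names below are invented for the example, not taken from the authors' protocol extensions; the point is only the fidelity trade-off: pushed replicas are fresh immediately, pulling replicas are stale until they poll.

```python
# Illustrative sketch of push- vs pull-based consistency for replicated
# objects in an overlay. All names here are hypothetical, not the paper's API.

class Replica:
    def __init__(self):
        self.value = None
        self.version = -1

    def receive_push(self, value, version):
        # Push: the origin notifies the replica on every update.
        if version > self.version:
            self.value, self.version = value, version

    def pull(self, origin):
        # Pull: the replica polls the origin, possibly observing
        # stale data between polls.
        self.value, self.version = origin.value, origin.version

class Origin:
    def __init__(self):
        self.value = None
        self.version = -1
        self.subscribers = []

    def update(self, value):
        self.version += 1
        self.value = value
        for r in self.subscribers:  # push keeps subscriber fidelity high
            r.receive_push(value, self.version)

origin = Origin()
pushed, polled = Replica(), Replica()
origin.subscribers.append(pushed)
origin.update("v1")
print(pushed.value)   # "v1" -- pushed replica is already fresh
print(polled.value)   # None -- pulling replica is stale until it polls
polled.pull(origin)
print(polled.value)   # "v1"
```

A hybrid scheme, as the abstract suggests, would push while the overlay is stable and fall back to pulls when churn makes push delivery unreliable.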
45. Block approach -- tabu search algorithm for single machine total weighted tardiness problem
- Author
-
Bożejko, Wojciech, Grabowski, Józef, and Wodecki, Mieczysław
- Subjects
Algorithm ,Algorithms ,Management science ,Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.cie.2005.12.001 Byline: Wojciech Bożejko (a), Józef Grabowski (a), Mieczysław Wodecki (b) Abstract: The problem of scheduling on a single machine to minimize the total weighted tardiness of jobs can be described as follows: there are n jobs to be processed, and each job has an integer processing time, a weight and a due date. The objective is to minimize the total weighted tardiness of jobs. The problem belongs to the class of NP-hard problems. Some new properties of the problem associated with the blocks are presented and discussed. These properties allow us to propose a new fast local search procedure based on a tabu search approach with a specific neighborhood which employs blocks of jobs and a compound-moves technique. A compound move consists of performing several moves simultaneously in a single iteration of the algorithm and allows us to accelerate convergence to good solutions. In the algorithm, we use an idea which decreases the complexity of the neighborhood search from O(n^3) to O(n^2). Additionally, the neighborhood is reduced by using some elimination criteria. The method presented in this paper is deterministic and has no random element, as distinct from other effective but non-deterministic methods proposed for this problem, such as the tabu search of Crauwels, H. A. J., Potts, C. N., & Van Wassenhove, L. N. (1998). Local search heuristics for the single machine total weighted tardiness scheduling problem. INFORMS Journal on Computing, 10(3), 341-350, the iterated dynasearch of Congram, R. K., Potts, C. N., & Van de Velde, S. L. (2002). An iterated dynasearch algorithm for the single-machine total weighted tardiness scheduling problem. INFORMS Journal on Computing, 14(1), 52-67, and the enhanced dynasearch of Grosso, A., Della Croce, F., & Tadei, R. (2004). An enhanced dynasearch neighborhood for single-machine total weighted tardiness scheduling problem. Operations Research Letters, 32, 68-72. Computational experiments on the benchmark instances from OR-Library (http://people.brunel.ac.uk/mastjjb/jeb/info.html) are presented and compared with the results yielded by the best algorithms discussed in the literature. These results show that the proposed algorithm obtains the best known results for the benchmarks in a short time. The presented properties and ideas can be applied in any local search procedure. Author Affiliation: (a) Institute of Engineering Cybernetics, Wrocław University of Technology, Janiszewskiego 11-17, 50-372 Wrocław, Poland (b) Institute of Computer Science, University of Wrocław, Przesmyckiego 20, 51-151 Wrocław, Poland Article History: Received 25 May 2005; Revised 3 November 2005; Accepted 9 December 2005 Article Note: (footnote) [star] This paper was processed by Area Editor Maged M. Dessouky.
- Published
- 2006
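The tabu-search idea in the abstract above can be sketched in miniature. This toy version uses plain adjacent swaps and a simple position-based tabu list; the paper's block moves, compound moves and elimination criteria are not reproduced, and the instance data are invented for the example.

```python
# Minimal tabu-search sketch for single-machine total weighted tardiness.
# Adjacent-swap neighborhood only; illustrative, not the paper's algorithm.

def total_weighted_tardiness(order, p, w, d):
    """Cost of processing the jobs in the given order."""
    t = cost = 0
    for j in order:
        t += p[j]                        # completion time of job j
        cost += w[j] * max(0, t - d[j])  # weighted tardiness of job j
    return cost

def tabu_search(p, w, d, iters=50, tenure=5):
    order = list(range(len(p)))          # start from the identity sequence
    best = order[:]
    tabu = []                            # recently used swap positions
    for _ in range(iters):
        candidates = []
        for i in range(len(order) - 1):
            if i in tabu:                # forbid recently used swaps
                continue
            nb = order[:]
            nb[i], nb[i + 1] = nb[i + 1], nb[i]
            candidates.append((total_weighted_tardiness(nb, p, w, d), i, nb))
        if not candidates:
            break
        cost, i, order = min(candidates)  # best non-tabu neighbour
        tabu = (tabu + [i])[-tenure:]     # bounded tabu tenure
        if cost < total_weighted_tardiness(best, p, w, d):
            best = order[:]
    return best

# Three jobs: processing times, weights, due dates (made-up instance).
p, w, d = [3, 2, 4], [1, 5, 2], [4, 2, 7]
best = tabu_search(p, w, d)
print(best, total_weighted_tardiness(best, p, w, d))  # minimum cost is 5
```

Accepting the best non-tabu neighbour even when it worsens the cost is what lets tabu search escape local optima; the paper's block properties serve to shrink this neighborhood without losing the improving moves.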
46. An optimal resource utilization scheme with end-to-end congestion control for continuous media stream transmission
- Author
-
Luo, Hongli, Shyu, Mei-Ling, and Chen, Shu-Ching
- Subjects
Streaming media technology ,Computer science ,Streaming media - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.comnet.2005.06.005 Byline: Hongli Luo (a), Mei-Ling Shyu (a), Shu-Ching Chen (b) Keywords: Adaptive transmission rate; Quality-of-service (QoS); End-to-end congestion control; Network delay; TCP-friendly; Network resource optimization Abstract: In this paper, we propose an optimal resource utilization scheme with end-to-end congestion control for continuous media stream transmission. This scheme can achieve minimal allocation of bandwidth for each client and maximal utilization of the client buffers. By adjusting the server transmission rate in response to client buffer occupancy, playback requirements in the client, variations in network delays and the packet loss rate, an acceptable quality-of-service (QoS) level can be maintained at the end systems. Our proposed scheme also achieves end-to-end TCP-friendly congestion control, in which the multimedia flows share bandwidth fairly with the TCP flows instead of starving them, and the packet loss rate is reduced. We conduct simulations that compare different approaches under different network congestion levels and show how our approach achieves end-to-end congestion control and inter-protocol fairness. The simulation results show that our proposed scheme can generally achieve high network utilization, low losses and end-to-end congestion control. Author Affiliation: (a) Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL 33124, USA (b) Distributed Multimedia Information System Laboratory, School of Computer Science, Florida International University, Miami, FL 33199, USA Article History: Received 5 November 2004; Revised 24 February 2005; Accepted 29 June 2005 Article Note: (miscellaneous) Responsible Editor: G. Morabito
- Published
- 2006
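The core feedback loop described above, adjusting the server's send rate from client buffer occupancy, can be sketched as follows. The watermarks and gain are invented for illustration; the paper derives its rates from an optimization, not from this simple threshold rule.

```python
# Hypothetical buffer-driven rate adaptation sketch, in the spirit of the
# scheme above. Thresholds and gain are made up for the example.

def adjust_rate(rate, occupancy, playback_rate, low=0.25, high=0.75, gain=0.1):
    """Return the next send rate given the client buffer occupancy
    (a fraction of buffer capacity between 0 and 1)."""
    if occupancy < low:              # buffer draining: risk of underflow
        return rate * (1 + gain)     # send faster to refill the buffer
    if occupancy > high:             # buffer filling: risk of overflow
        return rate * (1 - gain)     # back off to let the buffer drain
    return playback_rate             # steady state: track the playback rate

print(adjust_rate(1000.0, 0.10, 800.0))  # speeds up (above 1000.0)
print(adjust_rate(1000.0, 0.90, 800.0))  # backs off (below 1000.0)
print(adjust_rate(1000.0, 0.50, 800.0))  # matches playback: 800.0
```

A TCP-friendly variant would additionally cap the returned rate by a TCP-equivalent throughput estimate so the stream never claims more than a competing TCP flow would.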
47. An assessment of systems and software engineering scholars and institutions (2000-2004)
- Author
-
Tse, T.H., Chen, T.Y., and Glass, Robert L.
- Subjects
Software development/engineering ,Software quality ,Software engineering ,Software ,Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.jss.2005.08.018 Byline: T.H. Tse (a), T.Y. Chen (b), Robert L. Glass (c) Keywords: Top scholars; Top institutions; Research publications; Systems and software engineering Abstract: This paper presents the findings of a five-year study of the top scholars and institutions in the Systems and Software Engineering field, as measured by the quantity of papers published in the journals of the field in 2000-2004. The top scholar is Hai Zhuge of the Chinese Academy of Sciences, and the top institution is Korea Advanced Institute of Science and Technology. This paper is part of an ongoing study, conducted annually, that identifies the top 15 scholars and institutions in the most recent five-year period. Author Affiliation: (a) Department of Computer Science, The University of Hong Kong, Pokfulam, Hong Kong (b) Faculty of Information and Communication Technologies, Swinburne University of Technology, John Street, Melbourne 3122, Australia (c) Computing Trends, 18 View St., Brisbane, QLD 4064, Australia Article History: Received 29 July 2005; Revised 2 August 2005; Accepted 13 August 2005
- Published
- 2006
48. Automatic generation of test cases from Boolean specifications using the MUMCUT strategy
- Author
-
Yu, Yuen Tak, Lau, Man Fai, and Chen, Tsong Yueh
- Subjects
Universities and colleges ,Computer science - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.jss.2005.08.016 Byline: Yuen Tak Yu (a), Man Fai Lau (b), Tsong Yueh Chen (b) Keywords: Black-box testing; Boolean specification; Fault-based testing; Specification-based testing; Test case generation Abstract: A recent theoretical study has proved that the MUMCUT testing strategy (1) guarantees to detect seven types of fault in Boolean specifications in irredundant disjunctive normal form, and (2) requires only a subset of the test sets that satisfy the previously proposed MAX-A and MAX-B strategies, which can detect the same types of fault. This paper complements previous work by investigating various methods for the automatic generation of test cases to satisfy the MUMCUT strategy. We evaluate these methods by using several sets of Boolean expressions, including those derived from real airborne software systems. Our results indicate that the greedy CUN and UCN methods are clearly better than others in consistently producing significantly smaller test sets, whose sizes exhibit linear correlation with the length of the Boolean expressions in irredundant disjunctive normal form. This study provides empirical evidence that the MUMCUT strategy is indeed cost-effective for detecting the faults considered in this paper. Author Affiliation: (a) Department of Computer Science, City University of Hong Kong, Kowloon Tong, Hong Kong (b) Faculty of Information and Communication Technologies, Swinburne University of Technology, Hawthorn 3122, Australia Article History: Received 27 February 2003; Revised 17 August 2005; Accepted 19 August 2005 Article Note: (footnote) [star] The work is supported in part by a grant from City University of Hong Kong (project no. 7001233), a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (RGC ref. no. CityU 1083/00E), and Australian Research Council Discovery Grants (project IDs: DP0345147 and DP0558597).
- Published
- 2006
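The fault-based testing idea underlying the MUMCUT work can be shown with a brute-force sketch: a test case is useful if the specification and a faulty mutant disagree on it. This is far cruder than the MUMCUT strategy itself, which guarantees detection of seven fault classes without enumerating mutants; the expressions below are invented for the example.

```python
# Brute-force fault-based test selection for a small Boolean specification.
# Illustrative only; not the MUMCUT strategy from the paper.
from itertools import product

def detecting_tests(spec, mutant, n_vars):
    """Return every assignment on which spec and mutant disagree."""
    return [bits for bits in product([False, True], repeat=n_vars)
            if spec(*bits) != mutant(*bits)]

# Specification in DNF: a.b + c; mutant with a literal-omission fault: b + c.
spec   = lambda a, b, c: (a and b) or c
mutant = lambda a, b, c: b or c

print(detecting_tests(spec, mutant, 3))  # [(False, True, False)]
```

Only one of the eight assignments reveals this fault, which is why strategies like MUMCUT that deliberately include such discriminating points are more effective than random selection.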
49. MuSeQoR: Multi-path failure-tolerant security-aware QoS routing in Ad hoc wireless networks
- Author
-
Reddy, T. Bheemarjuna, Sriram, S., Manoj, B.S., and Murthy, C. Siva Ram
- Subjects
Protocol ,Universities and colleges ,Security management ,Computer science ,Computer network protocols - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.comnet.2005.05.035 Byline: T. Bheemarjuna Reddy (a), S. Sriram (b), B.S. Manoj (c), C. Siva Ram Murthy (a) Keywords: Multi-path routing; QoS; Security; Dispersity routing; Diversity coding; Erasure channel; Corruption channel; Ad hoc wireless networks Abstract: In this paper, we present MuSeQoR: a new multi-path routing protocol that tackles the twin issues of reliability (protection against failures of multiple paths) and security, while ensuring minimum data redundancy. Unlike in all the previous studies, reliability is addressed in the context of both erasure and corruption channels. We also quantify the security of the protocol in terms of the number of eavesdropping nodes. The reliability and security requirements of a session are specified by a user and are related to the parameters of the protocol adaptively. This relationship is of central importance and shows how the protocol attempts to simultaneously achieve reliability and security. In addition, by using optimal coding schemes and by dispersing the original data, we minimize the redundancy. Finally, extensive simulations were performed to assess the performance of the protocol under varying network conditions. The simulation studies clearly indicate the gains in using such a protocol and also highlight the enormous flexibility of the protocol. 
Author Affiliation: (a) Department of Computer Science and Engineering, HPCN Lab, BSB 358, Indian Institute of Technology Madras, Chennai 600 036, Tamil Nadu, India (b) Department of Computer Science, University of California, Berkeley, CA 94720, USA (c) CalIT2, Jacobs School of Engineering, University of California, San Diego, CA 92093, USA Article History: Received 2 June 2004; Revised 5 May 2005; Accepted 23 May 2005 Article Note: (footnote) [star] This work was supported by the iNautix Technologies India Private Limited, Chennai, India and the Department of Science and Technology, New Delhi, India.
- Published
- 2006
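The dispersity-routing-with-coding idea in the abstract above can be illustrated with the simplest member of the coding family: split the data into k fragments plus one XOR parity fragment, send each over a separate path, and recover if any single path fails. The paper uses more general optimal coding schemes; this single-parity sketch, with made-up helper names, only shows the principle.

```python
# Toy diversity coding over multiple paths: k data fragments + 1 XOR parity.
# Survives the loss of any one fragment. Illustrative, not the paper's scheme.

def xor_bytes(blocks):
    """Byte-wise XOR of equal-length byte strings."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

def encode(data, k):
    """Split data into k equal fragments (zero-padded) plus an XOR parity."""
    size = -(-len(data) // k)                       # ceiling division
    padded = data.ljust(size * k, b"\0")
    frags = [padded[i * size:(i + 1) * size] for i in range(k)]
    return frags + [xor_bytes(frags)]

def recover(frags):
    """Rebuild the one missing fragment (marked None), then join the data.
    Any zero padding added by encode() is left in place for simplicity."""
    missing = frags.index(missing_marker := None)
    rebuilt = xor_bytes([f for f in frags if f is not None])
    frags = frags[:missing] + [rebuilt] + frags[missing + 1:]
    return b"".join(frags[:-1])                     # drop parity, keep data

shares = encode(b"secret data!", 3)   # 3 data paths + 1 parity path
shares[1] = None                      # one path fails or is eavesdropped-out
print(recover(shares))                # b'secret data!'
```

The security angle in the paper follows from the same dispersal: an eavesdropper on a single path sees only one fragment, never the whole message.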
50. Projecting registration error for accurate registration of overlapping range images
- Author
-
Liu, Yonghuai, Wei, Baogang, Li, Longzhuang, and Zhou, Hong
- Subjects
Robot ,Algorithm ,Robots ,Image processing ,Algorithms ,Computer science - Abstract
In this paper, we propose a novel algorithm for the automatic registration of two overlapping range images. Since it is relatively difficult to compare the registration errors of different point matches, we project them onto a virtual image plane for more accurate comparison using the classical pin-hole perspective projection camera model. While the traditional ICP algorithm is more interested in the points in the second image close to the sphere centred at the transformed point, the novel algorithm is more interested in the points in the second image that are as collinear as possible with the transformed point. The novel algorithm then extracts useful information from both the registration error and projected error histograms for the elimination of false matches without any feature extraction, image segmentation or requirement of motion estimation from outlier-corrupted data and, thus, has the advantage of easy implementation. A comparative study based on real images captured under typical imaging conditions has shown that the novel algorithm produces good registration results.
- Published
- 2006
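The registration step that ICP-style methods repeat, estimating a rigid motion from matched point pairs, can be sketched in closed form in 2D. This is a generic Procrustes step with invented data, not the projected-error comparison the paper contributes; a full ICP loop would re-match points and repeat until convergence.

```python
# One rigid-alignment step from matched 2D point pairs (Procrustes/Kabsch in
# 2D). Generic illustration of the machinery ICP-style registration builds on.
import math

def estimate_rigid_2d(src, dst):
    """Closed-form rotation angle and translation aligning src to dst."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        sx, sy, dx, dy = sx - csx, sy - csy, dx - cdx, dy - cdy
        num += sx * dy - sy * dx        # cross terms give sin(theta)
        den += sx * dx + sy * dy        # dot terms give cos(theta)
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)      # translation = dst centroid - R * src centroid
    ty = cdy - (s * csx + c * csy)
    return theta, (tx, ty)

# Second "image": the first rotated by 30 degrees and shifted by (0.5, -1).
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
a = math.radians(30)
dst = [(math.cos(a) * x - math.sin(a) * y + 0.5,
        math.sin(a) * x + math.cos(a) * y - 1.0) for x, y in src]
theta, t = estimate_rigid_2d(src, dst)
print(round(math.degrees(theta), 2), [round(v, 2) for v in t])  # 30.0 [0.5, -1.0]
```

With noisy or mismatched pairs this estimate degrades, which is exactly why the paper's histogram-based elimination of false matches before motion estimation matters.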