3,147 results
Search Results
2. Comp Con Spring 89 (digest of papers)
- Published
- 1989
3. The HSF Conditions Database Reference Implementation.
- Author
Mashinistov, Ruslan, Gerlach, Lino, Laycock, Paul, Formica, Andrea, Govi, Giacomo, and Pinkenburg, Chris
- Subjects
DATABASES, COMPUTING platforms, COMPUTER architecture, METADATA, REDUNDANCY in engineering
- Abstract
Conditions data is the subset of non-event data that is necessary to process event data. It poses a unique set of challenges, namely a heterogeneous structure and high access rates by distributed computing. The HSF Conditions Databases activity is a forum for cross-experiment discussions inviting as broad a participation as possible. It grew out of the HSF Community White Paper work to study conditions data access, where experts from ATLAS, Belle II, and CMS converged on a common language and proposed a schema that represents best practice. Following discussions with a broader community, including NP as well as HEP experiments, a core set of use cases, functionality and behaviour was defined with the aim to describe a core conditions database API. This paper will describe the reference implementation of both the conditions database service and the client which together encapsulate HSF best practice conditions data handling. Django was chosen for the service implementation, which uses an ORM instead of the direct use of SQL for all but one method. The simple relational database schema to organise conditions data is implemented in PostgreSQL. The task of storing conditions data payloads themselves is outsourced to any POSIX-compliant filesystem, allowing for transparent relocation and redundancy. Crucially this design provides a clear separation between retrieving the metadata describing which conditions data are needed for a data processing job, and retrieving the actual payloads from storage. The service deployment using Helm on OKD will be described together with scaling tests and operations experience from the sPHENIX experiment running more than 25k cores at BNL. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
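The design described in entry 3, a relational metadata lookup kept strictly separate from payload retrieval on a POSIX filesystem, can be sketched in a few lines of Python. All names below (`PayloadRef`, `ConditionsStore`, the tag/IOV fields) are hypothetical stand-ins for illustration only, not the HSF reference implementation's Django/PostgreSQL API:

```python
import hashlib
import tempfile
from dataclasses import dataclass
from pathlib import Path

@dataclass
class PayloadRef:
    tag: str        # conditions tag (hypothetical field name)
    iov_start: int  # interval-of-validity start, e.g. a run number
    path: str       # payload location on a POSIX filesystem

class ConditionsStore:
    """Metadata in an in-memory index (standing in for PostgreSQL);
    payloads outsourced to plain files, so they can be relocated freely."""

    def __init__(self, root: Path):
        self.root = root
        self.index = []  # list of PayloadRef records

    def write(self, tag: str, iov_start: int, payload: bytes) -> PayloadRef:
        name = hashlib.sha256(payload).hexdigest()[:16]
        path = self.root / name
        path.write_bytes(payload)                  # payload store: POSIX file
        ref = PayloadRef(tag, iov_start, str(path))
        self.index.append(ref)                     # metadata store: separate
        return ref

    def lookup(self, tag: str, iov: int) -> PayloadRef:
        """Resolve metadata only; the caller fetches the payload itself."""
        hits = [r for r in self.index if r.tag == tag and r.iov_start <= iov]
        return max(hits, key=lambda r: r.iov_start)

# A job first resolves which payload it needs, then reads it from storage.
store = ConditionsStore(Path(tempfile.mkdtemp()))
store.write("calib-v1", 100, b"old constants")
store.write("calib-v1", 200, b"new constants")
ref = store.lookup("calib-v1", iov=150)  # falls inside the first interval
```

The point of the split is visible in `lookup`: it returns only a reference, so payloads can live on any POSIX-compliant storage and be relocated without touching the metadata.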
4. Web Crawler System for Distinct Author Identification in Bibliographic Databases.
- Author
Dau, Nancy, Russo, Marcial, Bouwsema, Eric, Özyer, Tansel, and Alhajj, Reda
- Subjects
INTERNET domain names, AMBIGUITY, SCHOLARLY electronic publishing, DIGITAL library access control, SOCIAL networks, COMPUTER architecture, DOCUMENT clustering
- Abstract
A person's name is regularly used to uniquely identify himself/herself from others; unfortunately, names are in no way unique, and this leads to serious problems. For instance, when trying to retrieve papers from academic database repositories, it can be difficult to distinguish one author from another if the individuals in question have the exact same name. An author can also assume another name, for instance by using the full name. Thus, being able to differentiate which person a specific name is referring to can be tricky. In this paper, we propose a method to solve this ambiguity problem by gathering information from bibliographic databases and using this information to create a social network tree. Based on the relationships created among co-authors, it is possible to disambiguate authors with a high level of accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
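The co-author-network idea in entry 4 can be illustrated with a minimal sketch: treat each paper bearing the ambiguous name as a node and merge papers transitively whenever they share a co-author. This is an illustrative simplification (real systems crawl the data and weight the relationships), not the authors' algorithm:

```python
def disambiguate(papers):
    """papers: list of (paper_id, set_of_coauthors), all bearing one
    ambiguous author name. Papers are merged into one identity when their
    co-author sets overlap, directly or transitively."""
    clusters = []  # list of (paper_ids, combined_coauthors) pairs
    for pid, coauthors in papers:
        merged_ids, merged_co = {pid}, set(coauthors)
        remaining = []
        for ids, co in clusters:
            if co & coauthors:          # a shared co-author links the papers
                merged_ids |= ids
                merged_co |= co
            else:
                remaining.append((ids, co))
        remaining.append((merged_ids, merged_co))
        clusters = remaining
    return [ids for ids, _ in clusters]

# Two distinct "J. Smith" identities fall out of the co-author overlap.
papers = [("p1", {"Alice"}), ("p2", {"Alice", "Bob"}), ("p3", {"Carol"})]
clusters = disambiguate(papers)
```

Papers p1 and p2 share the co-author "Alice" and so collapse into one identity, while p3 remains a separate author.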
5. Service-Oriented Architecture for Smart Environments (Short Paper).
- Author
Degeler, Viktoriya, Gonzalez, Luis I. Lopera, Leva, Mariano, Shrubsole, Paul, Bonomi, Silvia, Amft, Oliver, and Lazovik, Alexander
- Abstract
The advances of pervasive technology offer new standards for user comfort by adding intelligence to ubiquitous home and office appliances. With intelligence being the core of some newly constructed buildings, it is important to design a scalable, robust, context-aware architecture, which not only has enough longevity and evolving capabilities to sustain itself over the building's lifetime, but also provides enough potential for additional features to be added to the core Building Management Systems (BMS). Such features may include energy preservation systems or activity-recognition techniques. Service-Oriented Architecture (SOA) principles provide great tools that can be applied to smart building design; however, certain specifics of pervasive systems should be taken into account, such as the high heterogeneity of available devices and capabilities. In this paper we propose an architecture for smart pervasive applications, which is based on SOA principles and is specifically designed for long-term applicability, scalability, and evolution capabilities of a BMS. We validate our proposal by implementing a smart office on the premises of the Technical University of Eindhoven and showing that it complies with the requirements of scalability and robustness, at the same time being a viable BMS. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
6. Architecture Definition and Evaluation Technical Evaluation Report.
- Author
Vant, Malcolm R.
- Subjects
CONFERENCES & conventions, COMPUTER architecture, ARCHITECTURAL models, COST control, CLOUD computing
- Abstract
The CSO-IST-115 symposium on Architecture Definition and Evaluation was held in Toulouse, France, May 13-14, 2013. The symposium addressed several key areas in the use and development of architectural frameworks such as the NAF (NATO Architecture Framework) and its associated standards DoDAF (Department of Defense Architecture Framework), MODAF (Ministry of Defence Architecture Framework) and others. Standard architectural frameworks were first introduced by the United States in an effort to cut the costs involved in specifying and then building complex military systems. Other nations, and NATO, quickly saw the benefits and followed suit. Each of these frameworks provides a common set of viewpoints and a way of describing systems of systems. Although they are similar, the frameworks are not the same, and in some cases their underpinning meta-models differ. Differences among them cause difficulties when assembling multinational Command Support Systems, such as the Afghanistan Mission Network. Furthermore, there is no specified methodology associated with the frameworks, so there can be a steep learning curve when adopting them, since each developer tends to develop their own methodology and adopt their own toolsets. This diversity of approach and lack of specified methods leads to a lack of interoperability among developers and a reduction in possible productivity. Other issues exist, such as the difficulty of dealing with real-time or dynamic situations in some of the frameworks. The symposium covered various aspects of the use of architecture frameworks, such as lessons learned, model-based approaches to development, methodologies for executable architecture, dealing with dynamics, re-engineering legacy systems, and cloud architectures. During the symposium, a very strong message came through that a common methodology is sorely needed and that a true single unified architecture framework would be very useful to all the nations.
Many other positive lessons learned and successful methods were also discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2013
7. Multi-Layer Faults in the Architectures of Mobile, Context-Aware Adaptive Applications: A Position Paper.
- Author
Sama, Michele, Rosenblum, David S., Zhimin Wang, and Elbaum, Sebastian
- Subjects
CELL phones, COMPUTER software, COMPUTER architecture, DETECTORS, TELEPHONES
- Abstract
Five cellphones are sold every second, and there are four times more cellphones than computers, meaning there are some billions of mobile handheld devices in existence. Modern cellphones are equipped with multiple context sensors used by increasingly sophisticated software applications that exploit the sensors, allowing the applications to adapt automatically to changes in the surrounding environment, such as by responding to the location and speed of the user. The architecture of such applications is typically layered and incorporates a context-awareness middleware to support processing of context values. While this layered architecture is very natural for the design and implementation of applications, it gives rise to new kinds of faults and faulty behavior modes, which are difficult to detect using existing validation techniques. In this paper we provide scenarios illustrating such faults and exploring how they manifest in context-aware adaptive applications. [ABSTRACT FROM AUTHOR]
- Published
- 2008
8. Unifying Compliance Management in Adaptive Environments through Variability Descriptors (Short Paper).
- Author
Koetter, Falko, Kochanowski, Monika, Renner, Thomas, Fehling, Christoph, and Leymann, Frank
- Abstract
When managing IT environments and designing business processes, compliance regulations add challenges. Especially considering adaptive environments in the context of a service-oriented architecture in combination with exploiting the advantages of cloud technologies, maintaining compliance is cumbersome. Measures have to be taken on many application levels - including business processes, IT architecture, and business management. Although a lot of work has been done on various approaches covering compliance on one or more of these levels, in large companies more than one approach is likely to be employed. However, a unified approach for supporting the compliance tasks - like introduction, maintenance, and especially adaptation - on different levels of business and IT is missing. This work introduces this unifying approach, which links compliance requirements to implementing technology using variable compliance descriptors in order to comprehensively support compliance tasks. The advantage of this approach is that the impact of compliance on these different levels is tracked, thus enabling change propagation from changes in compliance requirements to infrastructure and business process reconfiguration. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
9. Name-Centric Service Architecture for Cyber-Physical Systems (Short Paper).
- Author
Hellbruck, Horst, Teubler, Torsten, and Fischer, Stefan
- Abstract
The goal of Service-Oriented Architectures (SOA) is to enable easy cooperation of a large number of computers and orchestration of services that are connected via a network. However, SOA for wireless sensor networks (WSN) and cyber-physical systems (CPS) is still a challenging task. Consequently, for the design and development of large CPS like WSNs connected to clouds, SOA has not yet evolved as an integral technology. One of the limiting issues is service registration and discovery. In large CPS, discovery of services is tedious, mostly because services are often semantically bound to a region or an application function, while SOA forces service endpoints to be based on addresses of nodes. Also, today, SOA technologies are not used for service composition within and between sensor nodes, and, even worse, different methods exist for service access in a WSN and in the backend. Therefore, service development differs largely between WSN and cloud. To overcome this limitation, we suggest a name-centric service architecture for cyber-physical systems. Our architecture is based on (a) using URNs instead of URLs to provide a service-centric architecture instead of service- or location-centric networking, (b) using the well-known CCNx protocol as a basis for our architecture, which supports location and access transparency, and (c) employing CCN-WSN as the resource-efficient lightweight implementation for WSNs to build a name-based service bus for CPS. We evaluate the architecture by implementing an example application for facility management. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
10. MDA-based network management information model transformation from UML to Web Services.
- Author
Bo Wang, Zhili Wang, and Qiu Xue-song
- Abstract
The definition of a network management interface is generally divided into three phases: requirements, analysis and design. In the analysis phase, UML is used as the modeling language to present technology-independent models, and these models can be mapped into multiple technology-specific models. With the development of Web Services applied in the network management domain, it is required to define Web Services-based information models in the design phase. This paper applies the MDA approach to realize model transformation. As the main work, this paper proposes detailed mapping rules describing how to map existing source UML models into target Web Services-based models. An automatic transformation approach using XSLT is proposed to implement the mapping rules, and experiments were made for verification. The proposed mapping rules remedy defects in related work. Furthermore, the work of this paper could help resolve some issues in current model transformation work, which is still being accomplished manually by standards developers. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
11. FlexRAM: Toward an advanced Intelligent Memory system: A retrospective paper.
- Author
Torrellas, Josep
- Abstract
The work that led to our ICCD-1999 FlexRAM paper [4] started in 1996. At that time, there was great interest in the potential of integrating compute capabilities in large DRAM memories — an architecture called Processing-In-Memory (PIM) or Intelligent Memory. Prof. Kogge at the University of Notre Dame had been an early and persistent proponent of the technology since his EXECUBE work [6]. Prof. Patterson at UC Berkeley had been leading the Berkeley IRAM project [11], and co-organized a workshop on these architectures in June 1997 [12]. In addition, Dr. Lucas from DARPA was outlining plans for an effort in this area. Finally, some chip manufacturers were investing in a DRAM technology that could be compatible with high-speed logic — e.g., IBM's CMOS 7LD and Mitsubishi's ERAM. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
12. Utilizing DMAIC Process to Identify Successful Completion of SRAD Phases of Waterfall Development.
- Author
Hossain, Niamat Ullah Ibne, Sokolov, Alexandr M., Petersen, Tim, and Merrill, Brian
- Subjects
REQUIREMENTS engineering, COMPUTER architecture, PROJECT management, WATERFALLS, MANUFACTURING processes
- Abstract
The Systems Requirements Analysis (SRA) and System Architecture Design (SAD) phases (often combined into one acronym, SRAD) of projects in the waterfall development cycle often pass through design gates without proper pass/fail criteria. In addition, completion of project designs is often put off to later design phases (Preliminary Design and Critical Design) in favor of meeting schedule/budget early in the project lifecycle. Currently in industry, schedule and budget dictate project phase completion over proper metric tracking/utilization. This is normally due to the fluidity of the design in early phases of project development. This thinking can be dangerous for organizations/industries, as it consistently leads to defects late in the development cycle, where fixes are costly; it is cheaper to change designs and fix defects as early as possible. This paper will outline the DMAIC (Define, Measure, Analyze, Improve, Control) process to help track completion of the SRAD phases for proper completion of design review phase gates. By using DMAIC, projects will also be able to reduce latent defects in designs that can become costly in later phases such as testing and production. These phase gate completion metrics can be implemented and refined, as the DMAIC process is an ongoing methodology. [ABSTRACT FROM AUTHOR]
- Published
- 2023
13. IS ARCHITECTURE COMPLEXITY DYNAMICS IN M&A: DOES CONSOLIDATION REDUCE COMPLEXITY?
- Author
Onderdelinden, Eric, van den Hooff, Bart, and van Vliet, Mario
- Subjects
INFORMATION storage & retrieval systems, DATA integration, COMPUTER architecture, MERGERS & acquisitions, TECHNOLOGICAL complexity
- Abstract
In this paper we aim to improve our understanding of the dynamics of IS architecture complexity (i.e., the change in this complexity over time) during the execution of a consolidation IS integration strategy (IIS). Based on two case studies, we find that unexpected levels of complexity emerge during IIS execution because of an underestimation of requisite complexity and an overestimation of the potential to reduce complexity. Our analysis shows that increased complexity is due to the fact that the intended consolidation IIS is only partially executed, and to increasingly emergent IIS execution. Additionally, we find that while complexity was reduced at the portfolio level, at more detailed levels of observation complexity actually increased. Our paper contributes to knowledge in the field by providing a deeper insight into IS architecture complexity dynamics during the execution of a consolidation IIS, and into the concept of IS architecture complexity in general. [ABSTRACT FROM AUTHOR]
- Published
- 2023
14. AN INNOVATIVE APPLICATION ARCHITECTURE TO REDUCE CONTACT IN WORK ENVIRONMENTS.
- Author
Vargün, Aycan, Polat, Emin Tolgahan, Macit, Özgür, and Taşkın, Uğur Serkan
- Subjects
COVID-19 pandemic, WORK environment, INFORMATION & communication technologies, MOBILE apps, TELEMEDICINE, COMPUTER architecture
- Abstract
COVID-19 has caused changes in working conditions and environments as well as in living conditions all over the world. The e-Health approach, on the other hand, can help employees to work in healthier environments with the appropriate use of information and communication technologies. In this paper, a solution architecture is presented to reduce the number of physical contacts that cause risk in work environments and to monitor the health status of employees. This architecture has been realized as a mobile application that employees can download on their phones. Usage results among employees have been included in the paper. [ABSTRACT FROM AUTHOR]
- Published
- 2021
15. Full custom datapath of 16-bit CORDIC.
- Author
Bi, Zhuo and Dai, Yijun
- Abstract
A radix-2 16-bit CORDIC (COordinate Rotation DIgital Computer) architecture, which includes pipelining and parallelism, is presented in this paper. A full-custom technology for the CORDIC datapath used in the proposed architecture for 16-bit precision can improve the throughput and decrease the area. As a result, the silicon area of the datapath is 11,699.877 μm² in the 45 nm CMOS technology library, and the critical path delay is 875 ps at the SS (Slow-Slow) corner, with a voltage of 1.1 V and a temperature of 75 °C. At the layout level, the simulation results show that the design has the characteristics of high speed and small area in full-custom technology. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
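For readers unfamiliar with the algorithm behind entry 15: CORDIC computes rotations using only shifts, adds, and a small angle table, which is why it maps well to compact datapaths. A floating-point Python sketch of the 16-iteration rotation mode (the hardware uses fixed-point pipelined stages, which this deliberately ignores):

```python
import math

def cordic_sincos(theta, iterations=16):
    """Rotation-mode CORDIC: rotate (1, 0) by theta via shift-add steps.
    Converges for |theta| up to about 1.74 rad; ~2^-16 precision at 16 steps."""
    # Precomputed arctan(2^-i) angle table and the aggregate CORDIC gain.
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    gain = 1.0
    for i in range(iterations):
        gain *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0        # rotate toward residual angle z
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * gain, y * gain               # approx. (cos(theta), sin(theta))
```

In hardware, each loop iteration becomes one pipeline stage, the `2.0 ** -i` multiplications become wire shifts, and the gain correction is folded into the initial value, which is what makes a 16-stage datapath like the one in entry 15 feasible.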
16. Equipment Support Command Simulation Based on HLA.
- Author
QU Changzheng, LIU Jimin, SUN Fu, HE JianQian, and CHEN Jiansi
- Subjects
COMPUTER simulation, COMPUTER architecture, WORKFLOW software, SIMULATION methods & models, METHODS engineering
- Abstract
This paper analyzes the operational process of Equipment Support Command (ESC) and its information exchange with the external environment. The ESC federate is developed according to High Level Architecture (HLA) standards. This paper develops a graphical modeling tool for workflow and creates a workflow model for ESC based on timed colored Petri nets. By setting up the simulation environment, this tool can automatically create simulation models that can be executed in the executable specification tool ExSpect. Simulation models are integrated into the Run-Time Infrastructure (RTI) through ExSpect's Component Object Model (COM) interface. By deploying resources dynamically during the simulation process, this paper displays the transformation of system state in an animated interface, and analyzes the bottleneck and operational efficiency of the ESC business flow. Furthermore, it proposes improvement measures for the ESC workflow. [ABSTRACT FROM AUTHOR]
- Published
- 2009
17. DeePattern: Layout Pattern Generation with Transforming Convolutional Auto-Encoder.
- Author
Haoyu Yang, Pathak, Piyush, Gennari, Frank, Ya-Chieh Lai, and Bei Yu
- Subjects
MACHINE learning, LINEAR systems, LITHOGRAPHY, GENERATIVE adversarial networks, COMPUTER architecture
- Abstract
VLSI layout patterns provide critical resources in various design-for-manufacturability research areas, from early technology node development to back-end design and sign-off flows. However, a diverse layout pattern library is not always available due to the long logic-to-chip design cycle, which slows down the technology node development procedure. To address this issue, in this paper we explore the capability of generative machine learning models to synthesize layout patterns. A transforming convolutional auto-encoder is developed to learn vector-based instantiations of squish pattern topologies. We show that our framework can capture simple design rules and contributes to enlarging the existing squish topology space under certain transformations. Geometry information for each squish topology is obtained from an associated linear system derived from design rule constraints. Experiments on 7 nm EUV designs show that our framework can more effectively generate diverse pattern libraries with DRC-clean patterns compared to a state-of-the-art industrial layout pattern generator. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
18. Computerized management of time, tasks, and priorities
- Author
Miller, P [ed.]
- Published
- 1988
19. Design and implementation of an online self-training system for the Computer System Platform course.
- Author
Li, Yujun, Zhu, Limiao, and Wang, Xiaoying
- Abstract
As a newly designed major course in the "Information Technology" direction, the "Computer System Platform" course covers comprehensive content including computer hardware platforms, software platforms, operating platforms and application platforms. Thus, it is difficult for students to learn, since it covers a wide range of knowledge. Hence, in this paper we design and implement an online learning and self-training system based on the J2EE architecture to address these issues. The system can import training questions into the database in batches, so teachers can easily import classified training questions in a variety of formats, reducing the burden of importing questions one by one. Using the system, students can review the knowledge learned in class, practice each chapter specifically, and consolidate it in time. Moreover, students can also take a simulated examination through the automatic exam generation subsystem, which improves the effect of self-learning. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
20. Introducing parallel programming to traditional undergraduate courses.
- Author
de Freitas, Henrique Cota
- Abstract
Parallel programming is an important issue for current multi-core processors and necessary for new generations of many-core architectures. This includes processors, computers, and clusters. However, the introduction of parallel programming in undergraduate courses demands new efforts to prepare students for this new reality. This paper describes an experiment in a traditional Computer Science course over a two-year period. The main focus is the question of when to introduce parallel programming models in order to improve the quality of learning. The goal is to propose a method of introducing parallel programming based on OpenMP (a shared-variable model) and MPI (a message-passing model). Results show that the best outcomes are achieved when the OpenMP model is introduced before the MPI model. The main contribution of this paper is the proposed method, which correlates several concepts such as concurrency, parallelism, speedup, and scalability to improve student motivation and learning. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
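The speedup and scalability concepts that entry 20's method correlates are commonly introduced through Amdahl's law; a one-function sketch (an illustrative teaching aid, not material from the paper itself):

```python
def amdahl_speedup(parallel_fraction: float, n_cores: int) -> float:
    """Amdahl's law: if a fraction p of the runtime parallelizes perfectly
    over n cores, overall speedup is bounded by 1 / ((1 - p) + p / n)."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_cores)

# Even with 90% parallel code, 8 cores give well under an 8x speedup,
# and the serial 10% caps speedup at 10x no matter how many cores.
s8 = amdahl_speedup(0.9, 8)
```

This single formula ties together the course concepts the abstract lists: the serial fraction captures concurrency limits, `n_cores` captures parallelism, and the asymptote as cores grow is exactly the scalability ceiling students are asked to reason about.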
21. Training time optimization for balanced accuracy/complexity neural network models.
- Author
Ugalde, Hector M. Romero, Carmona, Jean-Claude, and Alvarado, Victor M.
- Abstract
Accuracy, complexity and computational cost are very important characteristics of a model. In this paper, a dedicated neural network design and a computational cost reduction approach are proposed in order to improve the balance between the quality and computational cost of black-box nonlinear system identification models. The proposed architecture helps to reduce the number of parameters of the model after the training phase while preserving the estimation accuracy of the non-reduced model. Here, we focus on the fact that this particular design helps to reduce the computational cost required for the training phase. To validate the proposed approach, we identified the Wiener-Hammerstein benchmark nonlinear system proposed in SYSID2009 [1]. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
22. Possible schemes on baseband signal joint detection concept for multimode terminal.
- Author
Setiawan, Hendra and Firdaus
- Abstract
This paper presents an integrated architecture concept for detecting various wireless access networks. The process is done in the physical layer after carrier sensing, to collect as many of the available services around a multimode terminal as possible. The main idea is recognition of the unique signals transmitted by different standards within a constant period. The research found that these unique signals, either synchronization or preamble signals, have to be employed as a representation of service availability. Furthermore, possible correlation schemes for detecting those signals are also presented. From a complexity point of view, the paper finally proposes the lowest-complexity joint detection architecture for a multimode terminal handling GSM, WiMAX OFDM, and Wireless LAN services. The result shows that the proposed architecture requires 60% of the computational resources needed when employing cross-correlation to detect all services. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
23. A Decentralized Approach for Implementing Identity Management in Cloud Computing.
- Author
Chen, Jun, Wu, Xing, Zhang, Shilin, Zhang, Wu, and Niu, Yanping
- Abstract
Cloud computing is the next generation of computing paradigms. Along with cloud computing, many related problems come up, and these problems in turn slow down the development of cloud computing. Among these problems, e.g. interoperability and privacy, identity management and security are of particular concern. Many researchers and enterprises have already done a lot to optimize identity management and strengthen security in cloud computing. Most of these studies focus on the usability of identity management and on various methods to help improve security. In this paper, however, we approach the problem from a new angle. Because the federated solution to identity management helps relieve many problems, it has been adopted by many platforms and enterprises. The general approach for deploying identity management is a centralized component processing authentication and authorization requests, but with the cloud growing in scale and the increasing number of users, this centralized solution will become the bottleneck of the cloud. In this paper, we propose a decentralized approach for implementing identity management in a service-oriented architecture in cloud computing, together with a grouping algorithm as the deployment strategy. Security is another problem addressed in this paper; since many researchers have already carried out detailed and fruitful studies on security, the security solution illustrated here is specific to the proposed architecture. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
24. An Architecture of Cyber Physical System Based on Service.
- Author
Yu, Chengyuan, Jing, Song, and Li, Xuan
- Abstract
A CPS is defined as an integration of computation and physical processes. In CPS, downsized and embedded devices execute physical processes by monitoring and controlling entities in the physical world. Service-oriented architecture (SOA) provides the concept of packaging available functionality as interoperable services, and services can be used to construct applications. But in the context of cyber-physical systems, there are special characteristics and requirements. For example, physical entities are exclusive, but services can be used by several consumers at the same time through multi-threading, and many applications in cyber-physical systems are safety-critical. Taking all these characteristics and requirements into consideration, this paper introduces an application rebuild framework and a lease protocol into cyber-physical systems. The atomicity property of the lease protocol guarantees that an application either is granted the leases for all requested services or gets no lease at all. Applications which aren't granted the leases for all requested services will be rebuilt by the application rebuild framework to ensure that applications start in time. The architecture of cyber-physical systems proposed in this paper can improve the availability and adaptability of applications. A prototype system is implemented to show the practicability of the architecture. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
25. In-pixel generation of gaussian pyramid images by block reusing in 3D-CMOS.
- Author
Suarez, M., Brea, V.M., Cabello, D., Carmona-Galan, R., and Rodriguez-Vazquez, A.
- Abstract
This paper introduces the architecture of a switched-capacitor network for Gaussian pyramid generation. Gaussian pyramids are used in modern scale- and rotation-invariant feature detectors and in visual attention. Our switched-capacitor architecture is conceived within the framework of a CMOS-3D-based vision system; as such, it is also used during the acquisition phase to perform analog storage and Correlated Double Sampling (CDS). The paper addresses mismatch and switching errors such as feedthrough and charge injection. The paper also gives an estimate of the area occupied by each pixel in the 130 nm CMOS-3D technology by Tezzaron. The validity of our proposal is assessed through object detection in a scale- and rotation-invariant feature detector. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
26. Invulnerability Studies of Substation LAN Architecture.
- Author
Xiong, X. P. and Tan, J. C.
- Abstract
This paper pioneers the application of deterministic and stochastic index studies in substation LAN architecture evaluation. The paper studies star, ring and hybrid star-ring configurations and their invulnerability, and investigates their suitability for mission-critical, time-stringent protection and control function applications in substation automation systems. The proposed invulnerability analysis method is simple, efficient and practical. Studies on the commonly used star, ring and hybrid star-ring configurations indicate that the hybrid star-ring architecture offers the highest invulnerability. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
27. Design and modeling of highly power efficient node-level DC UPS.
- Author
Kwon, Wonok
- Abstract
This paper presents design and modeling of a highly efficient node-level DC uninterruptible power supply (UPS) in rack-level DC power architecture. In the previous research, we proposed the architecture of the highly efficient rack-level DC system combined with node-level DC UPS. This paper deals with the design and modeling of proposed node-level DC UPS. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
28. Traffic Optimization at the Application Layer - A Cooperative Approach: The tussle between applications and the physical network infrastructure, and the Application-Layer Traffic Optimization (ALTO) system as solution.
- Author
-
Caldas, Paulo and Sousa, Pedro
- Subjects
INFORMATION retrieval ,TRAFFIC engineering ,SOCIAL networks ,PEER-to-peer file sharing ,COMPUTER architecture - Abstract
This paper aims to tackle the frictions between user applications and the physical infrastructure where they reside. In the face of rising network traffic and stricter application demands, a better understanding is needed of how ISPs should manage their resources, and likewise how applications need to act, so that the Internet of the future can run according to user and provider needs. As a solution, the main focus of this work is the Application-Layer Traffic Optimization working group, which was formed by the Internet Engineering Task Force to explore standardizations for network information retrieval. The paper begins with an introduction to the historical tussle between applications and network providers. It then presents the Application-Layer Traffic Optimization project as a viable solution in the form of an implemented system, inspired by and extending the original working group's specification, and validates its usefulness in a simulated scenario compared to classical alternatives. [ABSTRACT FROM AUTHOR]
- Published
- 2021
29. FREE AND OPEN SOURCE SOFTWARE FOR GEOPORTALS CREATION.
- Author
-
Basista, Izabela
- Subjects
OPEN source software ,COMPUTER software ,COMMUNICATION ,COMPUTER programming ,COMPUTER architecture - Abstract
Geoportals have become a more significant and more common communication platform between institutions, companies, and clients. The most important feature of mapping portals is sharing spatial data with a wide audience quickly and easily. However, despite all the benefits accrued through their use, the popularity of geoportals is still limited. Likely reasons are the conviction that implementation costs are high, or a lack of knowledge about the possibilities of presenting spatial data in this form. Free and open source (FOS) software is developing rapidly, particularly for the construction of mapping portals, and can provide a feature-complete alternative to proprietary software in most system designs. FOS software is adaptable to users' needs, without restrictions and fees, but customization can be difficult for users lacking programming knowledge, or costly if a programmer must be hired. Therefore, the aim of this paper is to analyze FOS software for the construction of geoportals and to check whether programming knowledge is necessary to build a functional geoportal. The author focuses on three applications: deegree3, GeoServer, and the MapGuide Open Source Platform. Each was tested and described in terms of ease of installation, commissioning, and configuration of the individual system components. The availability and clarity of user manuals and tutorials were also checked. The first part of the paper presents the aim, theory, and detailed assumptions of the study. The typical architecture of geoportals is then described. The next section presents the selected applications in terms of their practical use for creating geoportals. The final part presents the conclusions of the research carried out. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
30. The Overall Architecture Design and Implementation of CET-4 Diagnostic Practice System.
- Author
-
Liu, D. Y., Zhang, H., and Wu, M.
- Subjects
COMPUTER architecture ,ENGLISH language education ,APPLICATION software ,TEACHING ,LEARNING ,TECHNOLOGICAL innovations - Published
- 2015
31. Evolution of the LHAASO Distributed Computing System based Cloud.
- Author
-
Doglioni, C., Kim, D., Stewart, G.A., Silvestris, L., Jackson, P., Kamleh, W., Huang, Qiulan, Li, Haibo, Cheng, Yaodong, Shi, Jingyan, Zheng, Wei, and Hu, Qingbao
- Subjects
DISTRIBUTED computing ,COMPUTER scheduling ,DATA integration ,COMPUTER architecture ,CLOUD computing ,MAINTENANCE costs - Abstract
In this paper we describe the LHAASO distributed computing system based on virtualization and cloud computing technologies. In particular, we discuss the key points of integrating distributed resources. A solution for integrating cross-domain resources is proposed, which adopts OpenStack+HTCondor to make the distributed resources work as a single resource pool. A flexible resource scheduling strategy and a job scheduling policy are presented to realize resource expansion on demand and efficient, transparent job scheduling to remote sites, so as to improve overall resource utilization. We also introduce the deployment of the computing system located in Daocheng, the LHAASO observation base, using a cloud-based architecture, which greatly helps to reduce operation and maintenance costs and to ensure system availability and stability. Finally, we show the running status of the system. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
32. Winventory: microservices architecture case study.
- Author
-
Doglioni, C., Kim, D., Stewart, G.A., Silvestris, L., Jackson, P., Kamleh, W., Bukowiec, Sebastian, and Gomulak, Pawel Tadeusz
- Subjects
WEB-based user interfaces ,PYTHON programming language ,COMPUTER architecture ,DATA integration - Abstract
In the CERN laboratory, users have access to a large number of different licensed software assets. The landscape of such assets is very heterogeneous including Windows operating systems, office tools and specialized technical and engineering software. In order to improve management of the licensed software and to better understand the needs of the users, it was decided to develop a Winventory application. The Winventory is a tool that gathers and presents statistics of software assets on CERN Windows machines and facilitates interaction with their individual users. The system was built based on microservices architecture pattern, an increasingly popular approach to web application development. The microservices architecture pattern separates the application into multiple independently deployable units that can be individually developed, tested and deployed. This paper presents the microservices architecture and design choices made in order to achieve a modern, maintainable and extensible system for managing licensed software at CERN. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
33. Optimization of Software on High Performance Computing Platforms for the LUX-ZEPLIN Dark Matter Experiment.
- Author
-
Doglioni, C., Kim, D., Stewart, G.A., Silvestris, L., Jackson, P., Kamleh, W., Ayyar, Venkitesh, Bhimji, Wahid, Monzani, Maria Elena, Naylor, Andrew, Patton, Simon, and Tull, Craig E.
- Subjects
PARTICLE physics ,HIGH performance computing ,COMPUTER software ,COMPUTER simulation ,COMPUTER architecture - Abstract
High Energy Physics experiments like the LUX-ZEPLIN dark matter experiment face unique challenges when running their computation on High Performance Computing resources. In this paper, we describe some strategies to optimize memory usage of simulation codes with the help of profiling tools. We employed this approach and achieved memory reduction of 10-30%. While this has been performed in the context of the LZ experiment, it has wider applicability to other HEP experimental codes that face these challenges on modern computer architectures. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
34. Modelling in a Central Architecture Repository -- Lessons Learned.
- Author
-
Stavnstrup, Jens and Møller, Alfred
- Subjects
MILITARY architecture ,DEFENSE industries ,COMPUTER architecture ,COMPUTER engineering - Abstract
The paper covers lessons learned from architecture development in the Danish Defence over the past decade. It provides an overview of the challenges of creating a comprehensive view of the architecture for the defence enterprise, focusing on an approach of modelling in a central architecture repository. The paper discusses the benefits and drawbacks of the approach and provides some of the major lessons learned during the actual work. The emphasis is on the Danish Defence's need to cover architecture in a broad perspective, including relations to NATO, international missions, and national government bodies outside the defence. The paper also touches on the need for cooperative efforts to advance architecture work, and provides conclusions and plans to meet current and future challenges. [ABSTRACT FROM AUTHOR]
- Published
- 2013
35. Machine Learning Methodology for Enhancing Automated Process in IT Incident Management.
- Author
-
Li, Haochen and Zhan, Zhiqiang
- Abstract
Operating systems have experienced a rise in the number of incidents in recent years. Analyzing and reusing past solutions can therefore help reduce service interruption time and minimize business losses. The training and retention of human resources is another primary source of expense for enterprises, so it is of great significance for enterprises to find reasonable solutions automatically. Combining keyword tokenization, data mining, numerical optimization, and neural networks, this paper presents a system that finds the most similar past incident solution based on the description provided by customers in natural language. We try to improve the automated process by increasing efficiency and accuracy through machine learning methodology, and also present a practical decision support method. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
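The incident-matching idea in entry 35 (comparing a customer's natural-language description against past incident descriptions to retrieve the most similar solution) can be sketched as a plain TF-IDF cosine-similarity matcher. This is a minimal stand-in; the paper's actual tokenization, numerical optimization, and neural-network components are not reproduced here:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build sparse TF-IDF vectors (dicts) for a list of token lists."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))  # document frequency
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: tf[t] * idf[t] for t in tf})
    return vecs, idf

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def most_similar_incident(query, past_incidents):
    """Return the index of the past incident most similar to the query."""
    docs = [p.lower().split() for p in past_incidents]
    vecs, idf = tfidf_vectors(docs)
    qtf = Counter(query.lower().split())
    qvec = {t: qtf[t] * idf.get(t, 0.0) for t in qtf}
    scores = [cosine(qvec, v) for v in vecs]
    return max(range(len(scores)), key=scores.__getitem__)
```

A query such as "mail server not responding after the update" would then be routed to a past incident like "email server down after update" on the strength of the shared high-IDF terms.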
36. Aggregating distributed geo-processing workflows and web services as processing model web.
- Author
-
Yang, Chao, Shao, Yuanzheng, Chen, Nengcheng, and Di, Liping
- Abstract
With earth observation data booming, it is important to develop intelligent and automatic processing methods for geospatial data. Currently, the OGC web service standards offer uniform interfaces for geospatial data planning, access, processing, and publishing. This paper focuses on building and implementing a Web processing model: how to organize distributed Web computing resources, such as workflows and OGC web services, into a processing model. It presents a RESTful architecture for a loosely coupled processing model and for interoperation among distributed Web resources. Finally, we give a use case of a GOES-West image simulation process to verify the feasibility of the proposed processing model web. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
37. Study on the Cooperation Model between Traffic Guidance and Traffic Control Based on Intelligent and Information Technology.
- Author
-
Shumin, Song, Zhaosheng, Yang, and Yao, Yu
- Abstract
This paper analyzes the significance of cooperation between traffic guidance and traffic control and summarizes related research. It then puts forward aims addressing several problems of cooperation. Guided by these aims, the paper constructs a cooperation architecture on the basis of intelligent and cooperative information technology and establishes a series of cooperation models with different objective functions. It then describes the test road network and the simulation environment. Simulation results are presented to prove the feasibility of the whole cooperation architecture and to compare the different models. Finally, a best cooperation model is recommended for application in practice. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
38. Towards Halos Networks ubiquitous networking and computing at the edge.
- Author
-
Manzalini, Antonio, Crespi, Noel, Goncalves, Vania, and Minerva, Roberto
- Abstract
This paper presents Halos Networks as an architectural paradigm for developing ubiquitous networking and computing services at the edge of the network. A Halos Network is like a wireless network spontaneously emerging through the interactions of distributed resources embedding wireless communication capabilities. Halos Networks are capable of delivering services and data virally through multiple devices, machines, and objects interconnected with one another. The paper discusses business roles for stakeholders in Halos Networks scenarios, concluding with some remarks on the operators' role. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
39. HDCRAM Proof-of-Concept for Opportunistic Spectrum Access.
- Author
-
Lazrak, Oussama, Leray, Pierre, and Moy, Christophe
- Abstract
This paper presents a proof-of-concept for opportunistic spectrum access. It particularly focuses on management requirements and how they impact the design of cognitive radio equipment for secondary users in order to support this scenario. The proposed management architecture is called HDCRAM (Hierarchical and Distributed Cognitive Radio Architecture Management). The paper shows that HDCRAM is deployed differently depending on the behavior expected of different cognitive radio equipment, illustrating that our approach can match any cognitive radio context. The demonstration is made with USRP platforms for the radio interface. The processing chain and the management are programmed in C++ using event programming, so that the management is only activated when the cognitive radio environment changes; otherwise, there is no overhead for normal radio operation. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
40. A Virtual Platform for Performance Estimation of Many-core Implementations.
- Author
-
Marugan, Pablo Gonzalez de Aledo, Gonzalez-Bayon, Javier, and Espeso, Pablo Sanchez
- Abstract
This paper presents a prototype for a virtual platform to estimate performance of OpenMP parallelized programs in shared-memory many-core platforms at early stages of the design flow. This is a challenging problem because, at these stages, the particular details of the final platform are unknown, but early performance estimations are needed to choose between different parallel implementations. The tool presented enables fast modelling of the SW and HW components in a complete platform model and also has the advantage of enabling configurable models of any many-core platform. This can be achieved because of two novel ideas that are explained in this paper: a native simulation framework that enables the modelling of concurrent threads described in OpenMP and a novel use of "shared" and "private" clauses that models the data transfers. The advantages of using the proposed tool are explained with a specific example. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
41. Research trends on ICT convergence from the CaON cluster.
- Author
-
Figuerola, Sergi, Simeonidou, Dimitra, Palacios, Juan F., Di Giglio, Andrea, Ciulli, N., Garcia, J. A., Nejabati, R., Masip, X., Munoz, R., Landi, G., Yannuzzi, M., and Casellas, R.
- Abstract
This is a positioning paper that presents some of the trends in optical networks considered within the CaON (Converged and Optical Networks) cluster. The trends presented focus on the convergence of optical networks and IT infrastructures, optical virtualisation, and control and management in support of emerging cloud computing applications for the Future Internet. The paper introduces the CaON reference model as a key enabler of the Future Internet and proposes a high-level, multilayer architecture that spans from the physical domain to the applications. The CaON reference model is the main outcome of the joint effort between the projects belonging to the CaON FP7 EC cluster, and reflects the level of agreement between all of them. The purpose of this reference model is to present the architecture that the cluster foresees for the Future Internet. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
42. From GMPLS to PCE/GMPLS to OpenFlow: How much benefit can we get from the technical evolution of control plane in optical networks?
- Author
-
Liu, Lei, Tsuritani, Takehiro, and Morita, Itsuro
- Abstract
Control plane techniques are very important for optical networks since they enable dynamic lightpath provisioning and restoration, improve network intelligence, and greatly reduce processing latency and operational expenditure. In recent years there has been great progress in this area, ranging from traditional generalized multi-protocol label switching (GMPLS) to a path computation element (PCE)/GMPLS-based architecture. The latest studies have focused on an OpenFlow-based control plane for optical networks, also known as software-defined networking. In this paper, we review our recent research activities on GMPLS-based, PCE/GMPLS-based, and OpenFlow-based control planes for a translucent wavelength switched optical network (WSON). We present enabling techniques for each control plane and summarize their advantages and disadvantages. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
43. A load balancing algorithm with QoS support over heterogeneous wireless networks.
- Author
-
Alam, Md. Golam Rabiul, Hong, Choong Seon, Seung Il Moon, and Eung Jun Cho
- Abstract
The coexistence of different wireless networks is a common phenomenon in today's smart communication infrastructure, and the big issue is how to benefit from this heterogeneity. Load balancing among heterogeneous wireless networks is the primary goal of this paper. Load balancing without considering Quality of Service (QoS) is inadequate for converging resource utilization and grade of service, so this paper proposes a load balancing algorithm with QoS provisioning, based on a semi-distributed load balancing architecture. Firstly, an IP-flow dividing ratio based soft load balancing approach is discussed for the high-speed features of next generation wireless networks. Secondly, an admission control function for QoS requirements is developed. Thirdly, a joint optimization function is derived and a load balancing algorithm is proposed using the cost function. Finally, simulation results are presented for performance appraisal. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
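The flow-splitting step described in entry 43 can be illustrated with a toy IP-flow dividing ratio: one flow is split across overlapping networks in proportion to their spare capacity, with a crude per-network admission check. This is a hypothetical simplification for illustration, not the paper's joint optimization or cost function:

```python
def divide_flow(flow_demand, capacities, loads, qos_min=0.0):
    """Split one IP flow across overlapping networks in proportion to
    their spare capacity. Networks whose spare capacity is below the
    per-network QoS minimum are excluded by admission control; the flow
    is rejected outright if total admitted capacity cannot carry it."""
    spare = [max(c - l, 0.0) for c, l in zip(capacities, loads)]
    admitted = [s if s >= qos_min else 0.0 for s in spare]
    total = sum(admitted)
    if total < flow_demand:
        return None  # admission control rejects the flow
    # dividing ratios: each network carries a share proportional to spare
    return [flow_demand * s / total for s in admitted]
```

For two networks with spare capacities 40 and 20, a 10-unit flow splits 2:1 between them; a flow larger than the total spare capacity is rejected.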
44. Intercloud Architecture for interoperability and integration.
- Author
-
Demchenko, Yuri, Makkes, Marc X., Strijkers, Rudolf, and de Laat, Cees
- Abstract
This paper presents ongoing research to develop the Intercloud Architecture Framework (ICAF), which addresses problems of integration and interoperability in multi-provider, multi-domain heterogeneous cloud-based infrastructure services and applications. The paper refers to existing standards in cloud computing, in particular the recently published NIST Cloud Computing Reference Architecture (CCRA). The proposed ICAF defines four complementary components addressing Intercloud integration and interoperability: a multilayer Cloud Services Model that combines commonly adopted cloud service models, such as IaaS, PaaS, and SaaS, in one multilayer model with corresponding inter-layer interfaces; an Intercloud Control and Management Plane that supports interaction between cloud-based applications; an Intercloud Federation Framework; and an Intercloud Operation Framework. The paper briefly describes the architectural framework for on-demand cloud-based infrastructure services being developed in the GEYSERS project, which is used as a basis for building a multilayer cloud services integration framework that allows optimized provisioning of computing, storage, and networking resources. The proposed architecture is intended to provide an architectural model for developing Intercloud middleware and will thereby facilitate cloud interoperability and integration. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
45. Pool vs. Island Based Evolutionary Algorithms: An Initial Exploration.
- Author
-
Merelo, J.J., Mora, A.M., Fernandes, C.M., Esparcia-Alcazar, Anna I., and Laredo, Juan L.J.
- Abstract
This paper explores the scalability and performance of pool- and island-based evolutionary algorithms, both of which use an object store as their means of interaction; we call this family of algorithms SofEA. The object store allows the different clients to interact asynchronously; the point of creating this framework is to build a system for spontaneous and voluntary distributed evolutionary computation. The fact that each client is autonomous leads to complex behavior that is examined in this work, so that the design can be validated, rules of thumb extracted, and the limits of scalability found. In this paper we advance the design of an asynchronous, fault-tolerant, and scalable distributed evolutionary algorithm based on the object store CouchDB. We test the different options experimentally and show the trade-offs that pool- and island-based solutions offer. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
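The pool-based variant described in entry 45 can be sketched with a shared dictionary standing in for the CouchDB object store, where each call to `pool_ea_step` plays the role of one autonomous client visit. This is an illustrative OneMax toy under those assumptions, not the SofEA implementation:

```python
import random

def pool_ea_step(pool, fitness, rng):
    """One asynchronous client step against a shared pool (a dict keyed
    by id, standing in for the object store): sample two parents,
    recombine, mutate, and conditionally replace the worse parent."""
    ids = rng.sample(list(pool), 2)
    a, b = pool[ids[0]], pool[ids[1]]
    cut = rng.randrange(1, len(a))
    child = a[:cut] + b[cut:]                          # one-point crossover
    i = rng.randrange(len(child))
    child = child[:i] + [1 - child[i]] + child[i + 1:]  # bit-flip mutation
    worst = min(ids, key=lambda k: fitness(pool[k]))
    if fitness(child) >= fitness(pool[worst]):
        pool[worst] = child                             # write back to the store

def run_pool_ea(bits=16, pop=20, steps=3000, seed=1):
    """Maximize OneMax via independent client steps sharing one pool."""
    rng = random.Random(seed)
    fitness = sum  # OneMax: number of 1-bits
    pool = {k: [rng.randint(0, 1) for _ in range(bits)] for k in range(pop)}
    for _ in range(steps):
        pool_ea_step(pool, fitness, rng)
    return max(fitness(ind) for ind in pool.values())
```

Because a child only overwrites the worse of its two parents when it is at least as fit, the best fitness in the pool never decreases, which is one reason asynchronous clients can share the store without coordination.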
46. Mobile agent based elastic executor service.
- Author
-
Bhattacharya, Anirban
- Abstract
The use of mobile agents [6, 11] is a topic of great discussion in the cloud computing space. Mobile agents in clouds [12] and grid-based systems can work in such a way that elasticity is managed by the mobile agents themselves. This paper proposes the concept of an executor service which can grow in size and nodes and shrink by itself, without any need for a specific node manager. The JADE [5] platform is chosen to provide the reference architecture. Similar work on this topic includes Cloud Agency [1], Runtime Efficiency of Adaptive Mobile Software Agents in Pervasive Computing Environments [4], and Agent Teamwork: Coordinating Grid-Computing Jobs with Mobile Agents [2], which give a framework perspective on mobile agents for grid and cloud based systems. This paper, however, provides a solution for an elastic [13] distributed executor service which can be deployed on a grid, cloud, or any cluster, and a reference architecture to develop such a service with mobile agents. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
47. Documenting Early Architectural Assumptions in Scenario-Based Requirements.
- Author
-
Van Landuyt, Dimitri, Truyen, Eddy, and Joosen, Wouter
- Abstract
In scenario-based requirement elicitation techniques such as quality attribute scenario elicitation and use case engineering, the requirements engineer is typically forced to make some implicit early architectural assumptions. These architectural assumptions represent initial architectural elements such as supposed building blocks of the envisaged system. Such implicitly specified assumptions are prone to ambiguity, vagueness, duplication, and contradiction. Furthermore, they are typically scattered across and tangled within the different scenario-based requirements. This lack of modularity hinders navigability of the requirement body as a whole. This paper discusses the need to explicitly document otherwise implicit architectural assumptions. Such an explicit intermediary between quality attribute scenarios and use cases enables the derivation and exploration of interrelations between these different requirements. This is essential to lower the mental effort required to navigate these models and facilitates a number of essential activities in the early development phases such as the selection of candidate drivers in attribute-driven design, architectural trade-off analysis and architectural change impact analysis. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
48. A sparse sampling algorithm for self-optimisation of coverage in LTE networks.
- Author
-
Thampi, Ajay, Kaleshi, Dritan, Randall, Peter, Featherstone, Walter, and Armour, Simon
- Abstract
Coverage optimisation is an important self-organising capability that operators would like to have in LTE networks. This paper applies a Reinforcement Learning (RL) based Sparse Sampling algorithm for the self-optimisation of coverage through antenna tilting. This algorithm is better than supervised learning and Q-learning based algorithms as it has the ability to adapt to network environments without prior knowledge, handle large state spaces, perform self-healing and potentially focus on multiple coverage problems. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
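The planner named in entry 48, sparse sampling (the Kearns-Mansour-Ng lookahead algorithm), can be sketched against a hypothetical antenna-tilt generative model. The model, its reward shape, and every parameter below are illustrative assumptions, not taken from the paper:

```python
import random

def sparse_sampling(state, model, actions, depth, width, gamma, rng):
    """Sparse sampling: estimate each action's value by drawing `width`
    sampled transitions from a generative model and recursing to `depth`,
    without ever enumerating the state space. Returns (value, action)."""
    if depth == 0:
        return 0.0, None
    best_val, best_act = float("-inf"), None
    for a in actions:
        total = 0.0
        for _ in range(width):
            next_state, reward = model(state, a, rng)
            future, _ = sparse_sampling(next_state, model, actions,
                                        depth - 1, width, gamma, rng)
            total += reward + gamma * future
        val = total / width
        if val > best_val:
            best_val, best_act = val, a
    return best_val, best_act

def tilt_model(tilt, action, rng):
    """Hypothetical generative model: coverage reward peaks at a tilt of
    6 degrees, with Gaussian noise standing in for the LTE environment."""
    new_tilt = min(max(tilt + action, 0), 12)
    reward = -abs(new_tilt - 6) + rng.gauss(0.0, 0.1)
    return new_tilt, reward
```

From a tilt of 2 degrees with actions {-1, 0, +1}, the sampled lookahead should recommend tilting toward the (assumed) optimum at 6, despite the noisy rewards, which mirrors the abstract's point that no prior knowledge of the environment is required.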
49. Content adaptation of IPTV services in Interactive DVB-T systems.
- Author
-
Sideris, A., Markakis, E., Anapliotis, P., Pallis, E., and Skianis, C.
- Abstract
This paper discusses how content adaptation of IPTV services may enable decentralized Interactive DVB-T systems (IDVB-T) to optimize the utilization of their network resources while offering end users the highest possible QoE. The paper describes the design and overall architecture of a regenerative IDVB-T infrastructure, where content adaptation processes are performed following either a centralized or a distributed approach, setting the basis for real-time accommodation of IPTV services to the available network resources (i.e. bandwidth) and the capabilities of end-user terminals (i.e. processing power, screen resolution, codec support). The validity of both content adaptation approaches is experimentally verified, with initial test results indicating similar performance. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
50. Research and Implementation of the High-Availability Spatial Database Based on Oracle.
- Author
-
Wu, Xiaochun, Wang, Kai, Su, Zixuan, and Liu, Yanjun
- Abstract
As geographic information system (GIS) technology develops and enterprise-level GIS applications become widespread, it is necessary to build high-availability spatial databases, yet no commercial spatial database software currently realizes high availability. This paper proposes an architecture for a high-availability spatial database based on the popular object-relational database software Oracle and verifies its feasibility by experiment, providing a new approach to realizing high-availability spatial databases. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF