502 results for "macro-task"
Search Results
2. Towards Artificial Intelligence Augmenting Facilitation: AI Affordances in Macro-Task Crowdsourcing
- Author
- Gimpel, Henner, Graf-Seyfried, Vanessa, Laubacher, Robert, and Meindl, Oliver
- Published
- 2023
3. Unleashing the Potential of Crowd Work: The Need for a Post-Taylorism Crowdsourcing Model
- Author
- Lykourentzou, Ioanna, Robert, Lionel P., and Barlatier, Pierre-Jean
- Subjects
crowd work, post-taylorism, macro-task, distributed collaboration, open innovation, Management. Industrial management, HD28-70, Business, HF5001-6182
- Abstract
Paid crowdsourcing connects task requesters to a globalized, skilled workforce that is available 24/7. In doing so, this new labor model promises not only to complete work faster and more efficiently than any previous approach but also to harness the best of our collective capacities. Nevertheless, for almost a decade now, crowdsourcing has been limited to addressing rather straightforward and simple tasks. Large-scale innovation, creativity, and wicked problem-solving are still largely out of the crowd’s reach. In this opinion paper, we argue that existing crowdsourcing practices bear significant resemblance to the management paradigm of Taylorism. Although criticized and often abandoned by modern organizations, Taylorism principles are prevalent in many crowdsourcing platforms, which employ practices such as forcefully decomposing all tasks regardless of their knowledge nature and disallowing worker interactions, both of which diminish worker motivation and performance. We argue that a shift toward post-Taylorism is necessary to enable the crowd to address, at scale, the complex problems that form the backbone of today’s knowledge economy. Drawing from recent literature, we highlight four design rules that can help make this shift, namely, endorsing social crowd networks, encouraging teamwork, scaffolding ownership of one’s work within the crowd, and leveraging algorithm-guided worker self-coordination.
- Published
- 2021
4. Socio-economic research on fusion SERF 1997-98. Macro task E2: External costs and benefits. Task 2: Comparison of external costs. Report R2.2
- Author
- Schleisner, Lotte and Korhonen, Riitta
- Subjects
Systemanalyse (systems analysis)
- Published
- 1998
6. Self-Organizing Teams in Online Work Settings
- Subjects
Collaborative and social computing, Human-centered computing, distributed work, complex work, macro-task, online teams
- Abstract
As the volume and complexity of distributed online work increase, the collaboration among people who have never worked together in the past is becoming increasingly necessary. Recent research has proposed algorithms to maximize the performance of such teams by grouping workers according to a set of predefined decision criteria. This approach micro-manages workers, who have no say in the team formation process. Depriving users of control over who they will work with stifles creativity, causes psychological discomfort and results in less-than-optimal collaboration results. In this work, we propose an alternative model, called Self-Organizing Teams (SOTs), which relies on the crowd of online workers itself to organize into effective teams. Supported but not guided by an algorithm, SOTs are a new human-centered computational structure, which enables participants to control, correct and guide the output of their collaboration as a collective. Experimental results, comparing SOTs to two benchmarks that do not offer user agency over the collaboration, reveal that participants in the SOTs condition produce results of higher quality and report higher teamwork satisfaction. We also find that, similarly to machine learning-based self-organization, human SOTs exhibit emergent collective properties, including the presence of an objective function and the tendency to form more distinct clusters of compatible teammates.
- Published
- 2021
7. Hint: harnessing the wisdom of crowds for handling multi-phase tasks.
- Author
- Fang, Yili, Chen, Pengpeng, and Han, Tao
- Subjects
SWARM intelligence, COMPUTER software development, ORDER picking systems, TRAVEL planning, CROWDSOURCING, QUALITY standards
- Abstract
The resourcefulness of crowdsourcing can be used to handle a wide range of complex macro-tasks, such as travel planning, translation, and software development. Multi-phase tasks are a type of macro-task that consists of several subtasks distributed across multiple sequential phases. Because previous work disregards the sequential correlation between phases, it handles multi-phase tasks poorly. This work bridges that gap with a novel approach called Hint, which incorporates task design, pre hoc worker coordination, and post hoc coordination of crowd work. Starting with the task interface design, Hint makes workers aware of the relationship between phases in order to improve their processing abilities. Second, pre hoc coordination organizes workers so as to lower the monetary cost of meeting a specific quality standard. Third, post hoc coordination of crowd work follows a decision-tree-based coordination strategy. Extensive tests on real-world datasets validate the desirable qualities of the proposed mechanism.
- Published
- 2023
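The abstract describes Hint only at a high level, so the following is not a reimplementation of the paper's mechanism. As a minimal sketch of the multi-phase structure it names (Phase, MacroTask, and the ask_crowd callback are our inventions), sequential phases can be chained so that workers in each phase see the output of the previous one, the cross-phase relationship Hint surfaces in its task interface:

```python
from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    subtasks: list  # subtask descriptions posted to crowd workers

@dataclass
class MacroTask:
    phases: list  # phases executed strictly in order

def run_macro_task(task, ask_crowd):
    """Run phases sequentially, handing each phase the results of the
    previous one so workers see the cross-phase correlation."""
    context = []
    for phase in task.phases:
        results = []
        for sub in phase.subtasks:
            # A real system would route this to a crowd platform; here
            # ask_crowd is any callable taking (subtask, prior results).
            results.append(ask_crowd(sub, context))
        context = results
    return context

# Toy usage: a travel-planning macro-task with two sequential phases.
plan = MacroTask(phases=[
    Phase("outline", ["pick cities", "set budget"]),
    Phase("detail", ["book transport", "book hotels"]),
])
print(run_macro_task(plan, lambda sub, ctx: f"{sub} (given {ctx})"))
```

Hint's interface design, pre hoc worker organization, and decision-tree-based post hoc coordination would all sit behind the ask_crowd callback in a real system.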
8. Multigrain automatic parallelization in Japanese Millennium Project IT21. Advanced Parallelizing Compiler.
- Author
- Kasahara, H., Obata, M., Ishizaka, K., Kimura, K., Kaminaga, H., Nakano, H., Nagasawa, K., Murai, A., Itagaki, H., and Shirako, J.
- Published
- 2002
9. Fuzzy uncertainty modelling for project planning: application to helicopter maintenance.
- Author
- Masmoudi, Malek and Haït, Alain
- Subjects
PROJECT management, HELICOPTER maintenance & repair, MATHEMATICAL models of uncertainty, FUZZY logic, ROUGH-cut capacity planning, TASK analysis, SCHEDULING, MATHEMATICAL models
- Abstract
Maintenance is an activity of growing interest, especially for critical systems. In particular, aircraft maintenance costs are becoming an important issue in the aeronautical industry. Managing an aircraft maintenance centre is a complex activity. One of the difficulties comes from the numerous uncertainties that affect the activity and disturb the plans in the short and medium term. Based on a helicopter maintenance planning and scheduling problem, we study in this paper the integration of uncertainties into tactical and operational multi-resource, multi-project planning (respectively Rough Cut Capacity Planning and the Resource Constraint Project Scheduling Problem). Our main contributions are in modelling the periodic workload on a tactical level considering uncertainties in macro-task work content, and modelling the continuous workload on the operational level considering uncertainties in task duration. We model uncertainties using a fuzzy/possibilistic approach instead of a stochastic approach since very limited data are available. We refer to the problems as the Fuzzy Rough Cut Capacity Problem (FRCCP) and the Fuzzy Resource Constraint Project Scheduling Problem (FRCPSP). We apply our models to helicopter maintenance activity within the frame of the Helimaintenance project, an industrial project approved by the French Aerospace Valley cluster that aims at building a centre for civil helicopter maintenance.
- Published
- 2012
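The FRCCP and FRCPSP formulations are not reproduced in this record. As a minimal sketch of the possibilistic idea, uncertain macro-task work content can be represented by trapezoidal fuzzy numbers, which add component-wise when workloads falling in the same period are aggregated; the numbers below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class TrapezoidalFuzzyNumber:
    """Possibility distribution (a, b, c, d): support [a, d], core [b, c]."""
    a: float
    b: float
    c: float
    d: float

    def __add__(self, other):
        # Fuzzy addition of trapezoidal numbers is component-wise.
        return TrapezoidalFuzzyNumber(self.a + other.a, self.b + other.b,
                                      self.c + other.c, self.d + other.d)

# Uncertain work content (hours) of two macro-tasks planned in the same period.
inspection = TrapezoidalFuzzyNumber(10, 12, 14, 18)
overhaul = TrapezoidalFuzzyNumber(30, 35, 40, 50)
print(inspection + overhaul)  # aggregated periodic workload: (40, 47, 54, 68)
```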
10. A review of digital assistants in production and logistics: applications, benefits, and challenges.
- Author
- Zheng, Ting, Grosse, Eric H., Morana, Stefan, and Glock, Christoph H.
- Subjects
EYE tracking, ARTIFICIAL intelligence, WORKERS' compensation, AUGMENTED reality, INCLUSION (Disability rights)
- Abstract
This study presents a systematic literature review to understand the applications, benefits, and challenges of digital assistants (DAs) in production and logistics tasks. Our conceptual framework covers three dimensions: information management, collaborative operations, and knowledge transfer. We evaluate human-DA collaborative tasks in the areas of product design, production, maintenance, quality management, and logistics. This allows us to expand upon different types of DAs and reveal how they improve the speed and ease of production and logistics work, an aspect ignored in previous studies. Our results demonstrate that DAs improve the speed and ease of workers' interaction with machines/information systems in searching, processing, and demonstrating. Extant studies describe DAs with different levels of autonomy in decision-making; however, most DAs perform tasks as instructed or with workers' consent. Additionally, we observe that workers find it more intuitive to perform tasks and acquire knowledge when they receive multiple sensorial cues (e.g. auditory and visual cues). Consequently, future research can explore how DAs can be integrated with other technologies, such as eye tracking and augmented reality, for robust multi-modal assistance. This can provide customised DA support to workers with disabilities or conditions, facilitating more inclusive production and logistics.
- Published
- 2024
11. Static Coarse Grain Task Scheduling with Cache Optimization Using OpenMP.
- Author
- Nakano, Hirofumi, Ishizaka, Kazuhisa, Obata, Motoki, Kimura, Keiji, and Kasahara, Hironori
- Subjects
CACHE memory, PARALLEL programming, COMPUTER storage devices, COMPUTER input-output equipment, COMPUTER programming
- Abstract
Effective use of cache memory is getting more important with the increasing gap between processor speed and memory access speed. Also, use of multigrain parallelism is getting more important to improve effective performance beyond the limitation of loop iteration level parallelism. Considering these factors, this paper proposes a coarse grain task static scheduling scheme considering cache optimization. The proposed scheme schedules coarse grain tasks to threads so that shared data among coarse grain tasks can be passed via cache, after task and data decomposition considering cache size at compile time. It is implemented on the OSCAR Fortran multigrain parallelizing compiler and evaluated on a Sun Ultra80 four-processor SMP workstation using Swim and Tomcatv from the SPEC fp 95 suite. As the results show, the proposed scheme yields speedups on 4 processors of 4.56 times for Swim and 2.37 times for Tomcatv against the Sun Forte HPC Ver. 6 update 1 loop parallelizing compiler.
- Published
- 2003
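The OSCAR compiler's actual scheme involves compile-time task and data decomposition against the cache size, and this record does not give enough detail to reproduce it. As a hedged sketch of the core idea, that shared data between coarse-grain tasks should be passed via cache, a greedy list scheduler can prefer the thread that produced a task's inputs (all identifiers below are ours):

```python
def cache_aware_schedule(tasks, inputs, n_threads):
    """Greedy list scheduling: place each coarse-grain task on the thread
    that already produced most of its input data, so shared data can be
    passed via that thread's cache.

    tasks:  task names in a valid topological order
    inputs: dict task -> set of producer tasks whose output it reads
    """
    placement = {}              # task -> thread index
    load = [0] * n_threads      # crude load balance as a tie-breaker
    for t in tasks:
        affinity = [0] * n_threads
        for p in inputs.get(t, ()):          # predecessors ran earlier
            affinity[placement[p]] += 1
        # Prefer the highest cache affinity; break ties by lightest load.
        best = max(range(n_threads), key=lambda i: (affinity[i], -load[i]))
        placement[t] = best
        load[best] += 1
    return placement

# Toy task graph: t3 reads data produced by t1 and t2.
print(cache_aware_schedule(["t1", "t2", "t3"], {"t3": {"t1", "t2"}}, 4))
```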
12. Coarse Grain Task Parallelization of Earthquake Simulator GMS Using OSCAR Compiler on Various Cc-NUMA Servers.
- Author
- Shimaoka, Mamoru, Wada, Yasutaka, Kimura, Keiji, and Kasahara, Hironori
- Published
- 2016
14. A new task scheduling method for distributed programs that require memory management.
- Author
- Koide, Hiroshi and Oie, Yuji
- Subjects
PRODUCTION scheduling, MANUFACTURING execution systems, GANTT charts, PROJECT management, OPERATIONS research, LINEAR programming, PRODUCTION control, PRODUCTION management (Manufacturing)
- Abstract
In parallel and distributed applications, it is very likely that object-oriented languages, such as Java and Ruby, and large-scale semistructured data written in XML will be employed. However, because of their inherent dynamic memory management, parallel and distributed applications must sometimes suspend the execution of all tasks running on the processors. This adversely affects their execution on the parallel and distributed platform. In this paper, we propose a new task scheduling method called CP/MM (Critical Path/Memory Management), which can efficiently schedule tasks for applications requiring memory management. The underlying concept is to consider the cost due to memory management when the task scheduling system allocates ready (executable) coarse-grain tasks, or macro-tasks, to processors. We have developed three task scheduling modules, including CP/MM, for a task scheduling system which is implemented on a Java RMI (Remote Method Invocation) communication infrastructure. Our experimental results show that CP/MM can successfully prevent high-priority macro-tasks from being affected by the garbage collection arising from memory management, so that CP/MM can efficiently schedule distributed programs whose critical paths are relatively long.
- Published
- 2006
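The published CP/MM method is only summarized in the abstract above; the sketch below is our simplified reading of it: allocate ready macro-tasks in order of critical-path length, and charge each processor's pending memory-management (garbage collection) cost when estimating finish times, so that tasks on long critical paths steer away from processors about to collect. The cost model is our assumption:

```python
def cp_mm_schedule(ready_tasks, cp_length, exec_time, procs):
    """Allocate ready macro-tasks in critical-path order, charging each
    processor's pending memory-management (GC) cost.

    procs: dict proc -> {"free_at": float, "gc_cost": float}
    """
    schedule = []
    for task in sorted(ready_tasks, key=lambda t: -cp_length[t]):
        def finish(p):
            info = procs[p]
            return info["free_at"] + info["gc_cost"] + exec_time[task]
        best = min(procs, key=finish)
        start = procs[best]["free_at"] + procs[best]["gc_cost"]
        procs[best]["free_at"] = start + exec_time[task]
        procs[best]["gc_cost"] = 0.0  # assume the collection has now run
        schedule.append((task, best, start))
    return schedule

procs = {"p0": {"free_at": 0.0, "gc_cost": 5.0},
         "p1": {"free_at": 2.0, "gc_cost": 0.0}}
print(cp_mm_schedule(["a", "b"], {"a": 10, "b": 3}, {"a": 4.0, "b": 1.0}, procs))
```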
15. Affordances and Agency: Toward the Clarification and Integration of Fractured Concepts.
- Author
- Leonardi, Paul
- Published
- 2023
16. Long-Term Container Allocation via Optimized Task Scheduling Through Deep Learning (OTS-DL) And High-Level Security.
- Author
- Muthakshi, S. and Mahesh, K.
- Subjects
DEEP learning, OPTIMIZATION algorithms, MACHINE learning, TECHNOLOGICAL innovations, RESOURCE allocation, FAULT tolerance (Engineering)
- Abstract
Cloud computing is a new technology that has changed the traditional way of providing services. Service providers are responsible for managing the allocation of resources, and selecting suitable containers and bandwidth for job scheduling has been a challenging task for them. Several existing systems have introduced algorithms for resource allocation. To overcome these challenges, the proposed system introduces an Optimized Task Scheduling Algorithm with Deep Learning (OTS-DL). When a job is assigned to a Cloud Service Provider (CSP), the containers are allocated automatically. The article segregates the containers into 'Long-Term Container (LTC)' and 'Short-Term Container (STC)' for resource allocation. The system leverages the Optimized Task Scheduling Algorithm to maximize resource utilisation, first inquiring into micro-task and macro-task dependencies; the bottleneck task is chosen and acted upon accordingly. Further, the system employs deep learning (DL) to implement all the progressive steps of job scheduling in the cloud. To overcome container attacks and errors, the system formulates a Container Convergence (fault tolerance) theory with high-level security. The results demonstrate that the optimization algorithm used is effective for implementing complete resource allocation and solving the large-scale optimization problem of resource allocation and security.
- Published
- 2023
17. Examining the Impact of Entrepreneurial Orientation, Self-Efficacy, and Perceived Business Performance on Managers' Attitudes Towards AI and Its Adoption in Hospitality SMEs.
- Author
- Kukanja, Marko
- Subjects
ATTITUDES toward technology, EXECUTIVES' attitudes, STRUCTURAL equation modeling, TECHNOLOGY Acceptance Model, SMALL business
- Abstract
In the competitive hospitality sector, the adoption of Artificial Intelligence (AI) is essential for enhancing operational efficiency and improving customer experiences. This study explores how key entrepreneurial traits—Entrepreneurial Orientation (EO), Entrepreneurial Self-Efficacy (ESE), and Perceived Business Performance (PBP)—influence managers' attitudes toward adopting AI in small- and medium-sized enterprises (SMEs). This research utilizes data from 287 respondents, gathered through field research with a survey designed to measure the relationships among constructs, employing structural equation modeling (SEM) for analysis. Results reveal that PBP and certain ESE dimensions, such as Initiating Investor Relationships and Developing New Products, have only a modest positive effect on AI adoption. In contrast, EO—specifically Proactiveness and Innovativeness—exhibits a weak negative impact. Importantly, none of these factors directly affect managers' attitudes toward AI. Instead, this study highlights that managers' positive attitudes are the strongest predictors of AI adoption, aligning with the Technology Acceptance Model (TAM). The findings offer new insights into key entrepreneurial factors driving AI adoption and emphasize the need for targeted education and supportive policies to facilitate AI integration in hospitality SMEs. Fostering a positive perspective on AI is more important for adoption than overcoming skepticism, as negative attitudes do not influence AI adoption.
- Published
- 2024
18. Development and Application of an Integrated Index for Occupational Safety Evaluation.
- Author
- Silva, Paulo, Carneiro, Mariana, Costa, Nélson, Loureiro, Isabel, Carneiro, Paula, Pires, Abel, and Ferreira, Cátia
- Subjects
INDUSTRIAL safety, FOOD industry, ORGANIZATIONAL structure, OPERATIONAL risk, SYSTEMIC risk (Finance)
- Abstract
Occupational safety, reflecting the likelihood of work-related accidents, is crucial in work systems. A risk management model identifies, analyzes, and prioritizes risks, followed by the strategic application of resources to mitigate, monitor, and control the probability and impact of future events. Models integrating safety, ergonomics, and operational efficiency in risk management are non-existent, especially in the food retail sector. The proposed risk management model assigns the risk level to Safety using the Hazard Identification and Risk Assessment index (HIRA), an integral part of the Global Safety Index (GSI), both indices with five risk levels: 1 to 5 (acceptable to very critical). The organizational hierarchy of the evaluated company includes levels from microtask to insignia. The research aims to apply the HIRA index from the microtask to the area level. The HIRA application was conducted in a food retail company, starting with the identification and characterization of tasks in the "food" section and "fresh products" area (butchery, fishmonger, bakery, charcuterie/takeaway, and fruits and vegetables sections). The risk level of each microtask was assessed, then aggregated to higher organizational levels. Results showed that two new solutions reduced the safety risk in the mentioned sections, proving the value of HIRA as a decision-making tool.
- Published
- 2024
19. An Interdisciplinary Double-Diamond Design Thinking Model for Urban Transport Product Innovation: A Design Framework for Innovation Combining Mixed Methods for Developing the Electric Microvehicle "Leonardo Project".
- Author
- Viviani, Sara, Gulino, Michelangelo-Santo, Rinaldi, Alessandra, and Vangi, Dario
- Abstract
The increase in greenhouse gas emissions prompts the transport sector towards new technological perspectives on personal mobility. Addressing sustainable mobility through electric micromobility requires interdisciplinary design research methods and approaches. In the context of the LEONARDO project, funded under the Horizon 2020 framework, this paper addresses a critical literature review on the design thinking, design research models, tools, and mixed methods to be undertaken for driving product mobility innovation in a cross-disciplinary context. Following the "research through design" research strategy, the authors applied the Double-Diamond design thinking model to frame the design research process in four phases, aligning with three overarching objectives, four specific research objectives, and 24 research tasks, supported by a total of 71 mixed methods and tools. As a result, the transdisciplinary process provides a co-designed energy-efficient stand-alone microvehicle and a scalable interdisciplinary design model for urban transport product innovation. In conclusion, this case study suggests the value of the Double-Diamond design thinking model as a design research instrument capable of addressing sustainable mobility and guiding interdisciplinary design research, design practice, and education in the industrial engineering and design disciplinary sectors.
- Published
- 2024
20. Multi-skill aware task assignment in real-time spatial crowdsourcing.
- Author
- Song, Tianshu, Xu, Ke, Li, Jiangneng, Li, Yiming, and Tong, Yongxin
- Subjects
CROWDSOURCING, WIRELESS Internet, ASSIGNMENT problems (Programming), SHARING economy, TASKS
- Abstract
With the development of the mobile Internet and the prevalence of the sharing economy, spatial crowdsourcing (SC) is becoming more and more popular and attracts attention from both academia and industry. A fundamental issue in SC is assigning tasks to suitable workers to obtain different global objectives. Existing works often assume that the tasks in SC are micro and can be completed by any single worker. However, there also exist macro tasks that need a group of workers with different kinds of skills to complete collaboratively. Although there have been a few works on macro task assignment, they neglect the dynamics of SC and assume that the information of the tasks and workers can be known in advance. This is not practical, as in reality tasks and workers appear dynamically and task assignment should be performed in real time according to partial information. In this paper, we study the multi-skill aware task assignment problem in real-time SC, whose offline version is proven to be NP-hard. To solve the problem effectively, we first propose the Online-Exact algorithm, which always computes the optimal assignment for the newly appearing tasks or workers. Because of Online-Exact's high time complexity, which may limit its feasibility in real time, we propose the Online-Greedy algorithm, which iteratively tries to assign workers who can cover more skills with less cost to a task until the task can be completed. We finally demonstrate the effectiveness and efficiency of our solutions via experiments conducted on both synthetic and real datasets.
- Published
- 2020
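Of the two proposed algorithms, Online-Greedy is described concretely enough to sketch: repeatedly add the worker who covers the most still-missing skills per unit cost until the task's skill set is covered. The additive per-worker cost model below is our assumption, not necessarily the paper's exact formulation:

```python
def online_greedy_assign(task_skills, workers):
    """Greedy multi-skill assignment in the spirit of Online-Greedy.

    workers: dict name -> (skills: set, cost: float).  Returns a team and
    its total cost, or None if the skill set cannot be covered."""
    missing, team, total = set(task_skills), [], 0.0
    while missing:
        def value(w):
            skills, cost = workers[w]
            return len(skills & missing) / cost  # skills gained per unit cost
        best = max((w for w in workers if w not in team), key=value, default=None)
        if best is None or not (workers[best][0] & missing):
            return None  # no remaining worker adds a missing skill
        team.append(best)
        total += workers[best][1]
        missing -= workers[best][0]
    return team, total

workers = {"w1": ({"java", "sql"}, 3.0),
           "w2": ({"design"}, 1.0),
           "w3": ({"java", "design"}, 2.5)}
print(online_greedy_assign({"java", "sql", "design"}, workers))
```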
21. Performance Evaluation of Compiler Controlled Power Saving Scheme.
- Author
- Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Labarta, Jesús, Joe, Kazuki, Sato, Toshinori, Shirako, Jun, and Yoshida, Munehiro
- Abstract
Multicore processors, or chip multiprocessors, which allow us to realize low power consumption, high effective performance, good cost performance and short hardware/software development periods, are attracting much attention. In order to achieve the full potential of multicore processors, cooperation with a parallelizing compiler is very important. The latest compiler extracts multilevel parallelism, such as coarse grain task parallelism, loop parallelism and near fine grain parallelism, to keep parallel execution efficiency high. It also controls the voltage and clock frequency of processors carefully to reduce energy consumption during execution of an application program. This paper evaluates the performance of a compiler-controlled power saving scheme implemented in the OSCAR multigrain parallelizing compiler. The developed power saving scheme realizes voltage/frequency control and power shutdown of each processor core during coarse grain task parallel processing. In the performance evaluation, when static power is assumed to be one-tenth of dynamic power, the OSCAR compiler with the power saving scheme achieved 61.2 percent energy reduction for SPEC CFP95 applu without performance degradation on 4 processors, and 87.4 percent energy reduction for mpeg2encode, 88.1 percent for SPEC CFP95 tomcatv and 84.6 percent for applu with real-time deadline constraints on 4 processors.
- Published
- 2008
22. Compiler Control Power Saving Scheme for Multi Core Processors.
- Author
- Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Rangan, C. Pandu, Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Ayguadé, Eduard, Baumgartner, Gerald, Ramanujam, J., Sadayappan, Ponnuswamy, and Shirako, Jun
- Abstract
With the increase in the number of transistors integrated onto a chip, multi-core processor architectures have attracted much attention as a way to achieve high effective performance, shorten development periods and reduce power consumption. To this end, the compiler for a multi-core processor is expected not only to parallelize programs effectively, but also to carefully control the voltage and clock frequency of processors and storage within an application program. This paper proposes a compilation scheme that reduces power consumption in a multigrain parallel processing environment by controlling the voltage/frequency and power supply of each processor core on a chip. In the evaluation, the OSCAR compiler with the proposed scheme achieves 60.7 percent energy savings for SPEC CFP95 applu without performance degradation on 4 processors, 45.4 percent energy savings for SPEC CFP95 tomcatv with a real-time deadline constraint on 4 processors, and 46.5 percent energy savings for SPEC CFP95 swim with the deadline constraint on 4 processors.
- Published
- 2006
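Entries 21 and 22 describe the same OSCAR power-saving scheme at a high level: the compiler statically assigns voltage/frequency levels and power shutdown to cores during coarse grain task parallel processing. The actual scheme is compiler-internal; the sketch below only illustrates the basic static DVFS decision of running a task at the lowest frequency that still meets its deadline (frequencies and cycle counts are invented):

```python
def pick_frequency(cycles, deadline, freqs):
    """Static DVFS choice: run a coarse-grain task at the lowest available
    clock frequency that still meets its deadline (dynamic power scales
    roughly with V^2 * f, so slower is cheaper).

    cycles: worst-case cycle count; deadline: seconds; freqs: Hz values."""
    for f in sorted(freqs):
        if cycles / f <= deadline:
            return f
    return max(freqs)  # no feasible choice: run flat out

# A 2e9-cycle task with 1.5 s of slack can run at 1.6 GHz instead of 2.4 GHz.
print(pick_frequency(2e9, 1.5, [0.8e9, 1.6e9, 2.4e9]))
```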
23. Memory Management for Data Localization on OSCAR Chip Multiprocessor.
- Author
- Nakano, H., Kodaka, T., Kimura, K., and Kasahara, H.
- Published
- 2004
24. A spatiotemporal optimization method for connected and autonomous vehicle operations in long tunnel constructions.
- Author
- Jiang, Yangsheng, Xia, Kui, Jiang, Haoran, Chen, Fei, and Yao, Zhihong
- Subjects
CONSTRUCTION projects, HEURISTIC algorithms, LINEAR programming, GENETIC algorithms, INTEGER programming, RAILROAD tunnels, TUNNELS
- Abstract
With the advancement of technology, connected and autonomous vehicles (CAVs) can be applied to complex tunnel networks in long tunnel construction to enhance vehicle operation safety and efficiency. This paper proposes an optimization method for CAVs' operation in long tunnel constructions. Firstly, a spatiotemporal coordinated optimization model with decentralized time and hierarchical networks is proposed to minimize the total working time for completing transportation services. The model integrates macro task allocation and micro node control and optimizes the vehicle-space-time relationships of CAVs to prevent conflicts and collisions. Secondly, a heuristic algorithm named Search-Adjustment Genetic Algorithm (SAGA) is developed to solve the problem considering the model's complexity and engineering characteristics. Thirdly, numerical experiments are designed to validate the feasibility and efficiency of the proposed model and algorithm. The results indicate that (1) the proposed model can effectively deconflict CAVs in the road network to ensure safety and obtain a low total working time to fulfill the transportation demand. (2) Compared to the commercial solver Gurobi, the proposed algorithm demonstrates significantly superior solution accuracy and efficiency within an acceptable time limit. (3) The solution ensures the safety and efficiency of CAVs and increases their utilization compared with engineering-oriented methods, resulting in a 50 % reduction in CAV acquisition costs, a 29 % and 85 % reduction in running time and delay respectively, and a reduction in fuel consumption. (4) As the number of transportation services and the complexity of the road network increases, the efficiency gains become more prominent and better adapted to the needs of the actual long tunnel construction project. To sum up, the proposed model and algorithm can ensure the safety and efficiency of providing transportation services in future long tunnel construction. Moreover, it can be adapted for controlling CAVs in road networks such as other construction scenarios and urban road networks.
- Published
- 2024
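SAGA's search and adjustment operators are not detailed in this record, so the following is only a generic genetic-algorithm skeleton for task-to-vehicle allocation; it shows the shape of such a solver, not the published algorithm, and the toy fitness function (makespan of unit-length tasks) is ours:

```python
import random

def genetic_allocate(n_tasks, n_vehicles, working_time, generations=200,
                     pop_size=30, mut_rate=0.2, seed=0):
    """Generic GA skeleton (not the paper's SAGA): evolve task-to-vehicle
    assignments to minimize total working time (lower fitness is better)."""
    rng = random.Random(seed)
    pop = [[rng.randrange(n_vehicles) for _ in range(n_tasks)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=working_time)
        survivors = pop[: pop_size // 2]        # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_tasks)
            child = a[:cut] + b[cut:]           # one-point crossover
            if rng.random() < mut_rate:         # point mutation
                child[rng.randrange(n_tasks)] = rng.randrange(n_vehicles)
            children.append(child)
        pop = survivors + children
    return min(pop, key=working_time)

# Toy fitness: balance 8 unit-length tasks over 3 vehicles (makespan).
fitness = lambda assignment: max(assignment.count(v) for v in range(3))
print(genetic_allocate(8, 3, fitness))
```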
25. Multimodal music datasets? Challenges and future goals in music processing.
- Author
- Christodoulou, Anna-Maria, Lartillot, Olivier, and Jensenius, Alexander Refsum
- Published
- 2024
26. Dataflow-based automatic parallelization of MATLAB/Simulink models for fitting modern multicore architectures.
- Author
- Gasmi, Kaouther and Hasnaoui, Salam
- Subjects
MODERN architecture, ARCHITECTURAL design, TELECOMMUNICATION, ALGORITHMS, SEMANTICS
- Abstract
In many fields, including aerospace, automotive, and telecommunications, MathWorks' MATLAB/Simulink is the contemporary standard for model-based design. The strengths of Simulink are rapid design and algorithm exploration. Models created with Simulink are purely functional; therefore, designers cannot easily reason about a model's architecture. As current architectures are optimized to run on multicore processors, software running on these processors needs to be parallelized in order to benefit from their performance. For instance, designers need to understand how a Simulink model could be parallelized and how an adequate multicore architecture is selected. This paper focuses on the dataflow-based parallelization of Simulink models and proposes a method based on dataflow to measure the performance of parallelized Simulink models running on multicore architectures. Throughout the parallelization process, the model is converted into a Hierarchical Synchronous DataFlow Graph (HSDFG) keeping its original semantics, and each composite node in the graph is flattened. Then, the graph is mapped and scheduled onto a multicore architecture with the ultimate objective of minimizing end-to-end latency. In an experiment applying the proposed approach to a real Simulink model, the latency of the parallelized model was successfully reduced on various multi-core architectures.
- Published
- 2024
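The pipeline above hinges on converting the model into a Hierarchical Synchronous DataFlow Graph and flattening each composite node. A minimal sketch of that flattening step, using a nested dict as a stand-in for the hierarchical graph (our own toy representation), is:

```python
def flatten(node, prefix=""):
    """Recursively flatten composite nodes of a hierarchical dataflow
    graph into a flat actor map with qualified names.  A node is either
    a leaf (here a string naming its computation) or a dict of
    child name -> node (a composite)."""
    if isinstance(node, dict):          # composite node: recurse
        flat = {}
        for name, child in node.items():
            flat.update(flatten(child, f"{prefix}{name}/"))
        return flat
    return {prefix.rstrip("/"): node}   # leaf actor

model = {"controller": {"pid": "pid_step", "filter": "lowpass"},
         "plant": "integrator"}
print(flatten(model))
# {'controller/pid': 'pid_step', 'controller/filter': 'lowpass', 'plant': 'integrator'}
```

A real flow would then map and list-schedule the flat actors onto cores to minimize end-to-end latency; that step is omitted here.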
27. A new task scheduling method for distributed programs which require memory management in grids.
- Author
- Koide, H. and Oie, Y.
- Published
- 2004
28. Ambient and focal attention during complex problem-solving: preliminary evidence from real-world eye movement data.
- Author
- Guo, Yuxuan, Pannasch, Sebastian, Helmert, Jens R., and Kaszowska, Aleksandra
- Subjects
PROTOCOL analysis (Cognition), EYE movements, SPATIAL orientation, VISUAL perception, PROBLEM solving, INFORMATION-seeking behavior
- Abstract
Time course analysis of eye movements during free exploration of real-world scenes often reveals an increase in fixation durations together with a decrease in saccade amplitudes, which has been explained within the two visual systems approach, i.e., a transition from ambient to focal. Short fixations and long saccades during early viewing periods are classified as ambient mode of vision, which is concerned with spatial orientation and is related to simple visual properties such as motion, contrast, and location. Longer fixations and shorter saccades during later viewing periods are classified as focal mode of vision, which is concentrated in the foveal projection and is capable of object identification and its semantic categorization. While these findings are mainly obtained in the context of image exploration, the present study endeavors to investigate whether the same pattern of interplay between ambient and focal visual attention is deployed when people work on complex real-world tasks--and if so, when? Based on a re-analysis of existing data that integrates concurrent think-aloud and eye tracking protocols, the present study correlated participants' internal thinking models to the parameters of their eye movements when they planned solutions to an open-ended design problem in a real-world setting. We hypothesize that switching between ambient and focal attentional processing is useful when solvers encounter difficulty compelling them to shift their conceptual direction to adjust the solution path. Individuals may prefer different attentional strategies for information-seeking behavior, such as ambient-to-focal or focal-to-ambient. The observed increase in fixation durations and decrease in saccade amplitudes during the periods around shifts in conceptual direction lends support to the postulation of ambient-to-focal processing; however, focal-to-ambient processing is not evident. Furthermore, our data demonstrate that the beginning of a shift in conceptual direction is observable in eye movement behavior with a significant prolongation of fixation. Our findings add to the conclusions drawn from laboratory settings by providing preliminary evidence for ambient and focal processing characteristics in real-world problem-solving.
- Published
- 2024
29. Leveraging artificial intelligence to advance implementation science: potential opportunities and cautions.
- Author
- Trinkley, Katy E., An, Ruopeng, Maw, Anna M., Glasgow, Russell E., and Brownson, Ross C.
- Subjects
ARTIFICIAL intelligence, RESEARCH personnel, WORLD health
- Abstract
Background: The field of implementation science was developed to address the significant time delay between establishing an evidence-based practice and its widespread use. Although implementation science has contributed much toward bridging this gap, the evidence-to-practice chasm remains a challenge. There are some key aspects of implementation science in which advances are needed, including speed and assessing causality and mechanisms. The increasing availability of artificial intelligence applications offers opportunities to help address specific issues faced by the field of implementation science and expand its methods. Main text: This paper discusses the many ways artificial intelligence can address key challenges in applying implementation science methods while also considering potential pitfalls to the use of artificial intelligence. We answer the questions of "why" the field of implementation science should consider artificial intelligence, for "what" (the purpose and methods), and the "what" (consequences and challenges). We describe specific ways artificial intelligence can address implementation science challenges related to (1) speed, (2) sustainability, (3) equity, (4) generalizability, (5) assessing context and context-outcome relationships, and (6) assessing causality and mechanisms. Examples are provided from global health systems, public health, and precision health that illustrate both potential advantages and hazards of integrating artificial intelligence applications into implementation science methods. We conclude by providing recommendations and resources for implementation researchers and practitioners to leverage artificial intelligence in their work responsibly. Conclusions: Artificial intelligence holds promise to advance implementation science methods ("why") and accelerate its goals of closing the evidence-to-practice gap ("purpose"). However, evaluation of artificial intelligence's potential unintended consequences must be considered and proactively monitored. Given the technical nature of artificial intelligence applications as well as their potential impact on the field, transdisciplinary collaboration is needed and may suggest the need for a subset of implementation scientists cross-trained in both fields to ensure artificial intelligence is used optimally and ethically.
- Published
- 2024
30. Digital Facilitation of Group Work to Gain Predictable Performance.
- Author
- Gimpel, Henner, Lahmer, Stefanie, Wöhl, Moritz, and Graf-Drasch, Valerie
- Subjects
SWARM intelligence, CHATBOTS, GROUPOIDS, TASK performance
- Abstract
Group work is a commonly used method of working, and the performance of a group can vary depending on the type and structure of the task at hand. Research suggests that groups can exhibit "collective intelligence"—the ability to perform well across tasks—under certain conditions, making group performance somewhat predictable. However, predictability of task performance becomes difficult when a task relies heavily on coordination among group members or is ill-defined. To address this issue, we propose a technical solution in the form of a chatbot providing advice to facilitate group work for more predictable performance. Specifically, we target well-defined, high-coordination tasks. Through experiments with 64 virtual groups performing various tasks and communicating via text-based chat, we found a relationship between the average intelligence of group members and their group performance in such tasks, making performance more predictable. The practical implications of this research are significant, as the assembly of consistently performing groups is an important organizational activity.
- Published
- 2024
31. Learning Macro Actions from Instructional Videos Through Integration of Multiple Modalities.
- Author
- Johnson, David and Agah, Arvin
- Subjects
HUMAN-robot interaction, MACHINE learning, INSTRUCTIONAL films, TASK performance, ROBOTS, NATURAL language processing
- Abstract
We propose an architecture for a system that will 'watch and listen to' an instructional video of a human performing a task and translate the audio and video information into a task for a robot to perform. This enables the use of readily available instructional videos from the Internet to train robots to perform tasks instead of programming them. We implemented an operational prototype based on the architecture and showed it could 'watch and listen to' two instructional videos on how to clean golf clubs and translate the audio and video information from the instructional video into tasks for a robot to perform. The key contributions of this architecture are: integration of multiple modalities using trees and pruning with filters; task decomposition into macro-tasks composed of parameterized task-primitives and other macro-tasks, where the task-primitive parameters are an action (e.g., dip, clean, dry) taken on an object (e.g., golf club) using a tool (e.g., pail of water, brush, towel); and context, for determining missing and implied task-primitive parameter values, as a set of canonical task-primitive parameter values with a confidence score based on the number of times the parameter value was detected in the video and audio information and how long ago it was detected.
- Published
- 2013
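The parameterized task-primitives and confidence-scored context described above map naturally onto a small data structure. The sketch below is our reading of the abstract: it scores a context value's confidence by detection counts alone and omits the recency weighting the authors also mention:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class TaskPrimitive:
    action: str  # e.g. "dip", "clean", "dry"
    obj: str     # e.g. "golf club"
    tool: str    # e.g. "pail of water"

@dataclass
class Context:
    """Canonical parameter values with confidence scores derived from how
    often each value was detected in the video/audio streams."""
    counts: Counter = field(default_factory=Counter)

    def observe(self, value):
        self.counts[value] += 1

    def best(self):
        value, hits = self.counts.most_common(1)[0]
        return value, hits / sum(self.counts.values())  # (value, confidence)

ctx = Context()
for detected_tool in ["brush", "brush", "towel"]:
    ctx.observe(detected_tool)

# Fill a missing 'tool' parameter from context:
tool, conf = ctx.best()
step = TaskPrimitive("clean", "golf club", tool)
print(step, f"confidence={conf:.2f}")
```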
32. A data model for generalized scheduling for virtual enterprise.
- Author
- Lecompte, T., Deschamps, J. C., and Bourrieres, J. P.
- Subjects
PRODUCTION scheduling, VIRTUAL corporations
- Abstract
This work aims to generalize and formalize the problem of macro task allocation to aggregated production resources in the virtual enterprise context. Firstly, the product/process constraints are identified without any consideration of the resources. Then, the current availability of the aggregated resources within the virtual enterprise is taken into account. This paper does not address the allocation problem as such but provides a generic formalization of the feasibility domain as a support for human decision-makers in a dynamic production network.
- Published
- 2000
33. An osmotic approach-based dynamic deadline-aware task offloading in edge–fog–cloud computing environment.
- Author
- Reddy, Posham Bhargava and Sudhakar, Chapram
- Subjects
EDGE computing, COMPUTER systems, DEADLINES
- Abstract
An edge–fog–cloud computing system can be divided into an edge or IoT layer (tier 1), a fog layer (tier 2) and a cloud layer (tier 3). The devices at the edge layer generate different types of tasks, which may be computation-intensive, communication-intensive, or a combination of the two. Depending on their characteristics, tasks may be scheduled to run at the edge, fog or cloud layer. Offloading some of the computationally intensive workloads has many advantages, including improved response times, satisfaction of the deadlines of delay-sensitive tasks and an overall reduced makespan. In this context, there is a need for a scheduling algorithm that minimizes overall execution time while satisfying task deadlines and maximizing resource utilization at the fog layer. In this paper, we propose a task offloading and scheduling algorithm based on the osmotic approach, in which devices and tasks are classified and tasks are assigned to the most suitable devices based on their dynamically available capacity. The proposed algorithm is compared with traditional random and round-robin task offloading algorithms on synthetic data sets and is found to perform significantly better.
- Published
- 2023
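The paper's osmotic classification rules are not given in this record. As a hedged sketch of deadline-aware offloading across the three tiers, a task can be placed at the lowest tier whose dynamically available capacity still meets its deadline; the capacities and latencies below are invented:

```python
def offload(task, tiers):
    """Deadline-aware placement: send the task to the lowest tier
    (edge < fog < cloud) that can finish it before its deadline.

    task:  (cpu_cycles, deadline_s)
    tiers: list of dicts ordered edge -> fog -> cloud, each with the
           tier's available capacity (cycles/s) and one-way latency (s)."""
    cycles, deadline = task
    for tier in tiers:
        # Round-trip network time plus compute time at this tier.
        finish = 2 * tier["latency"] + cycles / tier["capacity"]
        if finish <= deadline:
            return tier["name"]
    return tiers[-1]["name"]  # no tier meets the deadline: fall back to cloud

tiers = [{"name": "edge",  "capacity": 1e8,  "latency": 0.001},
         {"name": "fog",   "capacity": 1e9,  "latency": 0.010},
         {"name": "cloud", "capacity": 1e10, "latency": 0.080}]
print(offload((5e8, 0.7), tiers))  # too slow at the edge -> "fog"
```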
34. Essential elements, conceptual foundations and workflow design in crowd-powered projects.
- Author
- Santos, Celso A S, Baldi, Alessandro M, de Assis Neto, Fábio R, and Barcellos, Monalessa P
- Subjects
CROWDSOURCING, CONCEPTUAL models, WORKFLOW, STRUCTURAL models, PROBLEM solving
- Abstract
Crowdsourcing arose as a problem-solving strategy that uses a large number of workers to accomplish tasks and solve specific problems. Although there are many studies that explore crowdsourcing platforms and systems, little attention has been paid to defining what a crowd-powered project is. To address this issue, this article introduces a general-purpose conceptual model that represents the essential elements involved in this kind of project and how they relate to each other. We consider that the workflow in crowdsourcing projects is context-oriented and should represent the crowdsourcer's planning and coordination of the project, instead of only facilitating the decomposition of a complex task into sets of subtasks. Since structural models cannot properly represent the execution flow, we also introduce the use of behavioural conceptual models, specifically Unified Modeling Language (UML) activity diagrams, to represent the users, tasks, assets, control activities and products involved in a specific project.
- Published
- 2023
35. Parallelization of automotive engine control software on embedded multi-core processor using OSCAR compiler.
- Author
- Kanehagi, Yohei, Umeda, Dan, Hayashi, Akihiro, Kimura, Keiji, and Kasahara, Hironori
- Published
- 2013
36. Family conversations about species change as support for children's developing understandings of evolution.
- Author
- Hohenstein, Jill and Tenenbaum, Harriet R.
- Subjects
PARENT-child relationships, CHILD support, BIOLOGICAL evolution, FAMILIES, CONVERSATION, REASONING in children
- Abstract
To examine the ways that 6‐ to 11‐year‐old children's conversation with their parents support their developing understandings of evolution, 49 parent–child dyads participated in a study with two elicited discussion tasks: origins of species and potential species change. Conversational data were transcribed, coded, and qualitatively and quantitatively analyzed to compare the appearance of reasoning in each type of task. Quantitative analyses revealed correlations between tasks in informed naturalistic reasoning as well as differences in the way reasoning was expressed in each task. In addition, parent–child dyads with older children were more likely to use informed naturalistic reasoning than parent–child dyads with younger children. A subset of the data was analyzed qualitatively and showed that irrespective of how much evolution reference was present in the conversation, parents supported their children's learning through scaffolding. However, greater amounts of nonscientific reasoning appeared in the groups with less evolution talk. This study demonstrates that family talk about evolution varies with context both within and between families.
- Published
- 2023
37. Multigrain Parallelization for Model-Based Design Applications Using the OSCAR Compiler.
- Author
- Umeda, Dan, Suzuki, Takahiro, Mikami, Hiroki, Kimura, Keiji, and Kasahara, Hironori
- Published
- 2016
38. Designing for Hybrid Intelligence: A Taxonomy and Survey of Crowd-Machine Interaction.
- Author
- Correia, António, Grover, Andrea, Schneider, Daniel, Pimentel, Ana Paula, Chaves, Ramon, de Almeida, Marcos Antonio, and Fonseca, Benjamim
- Subjects
ARTIFICIAL intelligence, TAXONOMY, SOCIAL interaction, CROSS-sectional method, CROWDSOURCING, SWARM intelligence
- Abstract
With the widespread availability and pervasiveness of artificial intelligence (AI) in many application areas across the globe, the role of crowdsourcing has seen an upsurge in terms of importance for scaling up data-driven algorithms in rapid cycles through a relatively low-cost distributed workforce or even on a volunteer basis. However, there is a lack of systematic and empirical examination of the interplay among the processes and activities combining crowd-machine hybrid interaction. To uncover the enduring aspects characterizing the human-centered AI design space when involving ensembles of crowds and algorithms and their symbiotic relations and requirements, a Computer-Supported Cooperative Work (CSCW) lens strongly rooted in the taxonomic tradition of conceptual scheme development is taken with the aim of aggregating and characterizing some of the main component entities in the burgeoning domain of hybrid crowd-AI centered systems. The goal of this article is thus to propose a theoretically grounded and empirically validated analytical framework for the study of crowd-machine interaction and its environment. Based on a scoping review and several cross-sectional analyses of research studies comprising hybrid forms of human interaction with AI systems and applications at a crowd scale, the available literature was distilled and incorporated into a unifying framework comprised of taxonomic units distributed across integration dimensions that range from the original time and space axes in which every collaborative activity takes place to the main attributes that constitute a hybrid intelligence architecture. The upshot is that when turning to the challenges that are inherent in tasks requiring massive participation, novel properties can be obtained for a set of potential scenarios that go beyond the single experience of a human interacting with the technology to comprise a vast set of massive machine-crowd interactions.
- Published
- 2023
39. CoVEffect: interactive system for mining the effects of SARS-CoV-2 mutations and variants based on deep learning.
- Author
- Serna García, Giuseppe, Al Khalaf, Ruba, Invernici, Francesco, Ceri, Stefano, and Bernasconi, Anna
- Subjects
SARS-CoV-2, DEEP learning, WEB-based user interfaces, DATA integration, LANGUAGE models, PREDICTION models
- Abstract
Background: Literature about SARS-CoV-2 widely discusses the effects of variations that have spread in the past 3 years. Such information is dispersed in the texts of several research articles, hindering the possibility of practically integrating it with related datasets (e.g. millions of SARS-CoV-2 sequences available to the community). We aim to fill this gap, by mining literature abstracts to extract—for each variant/mutation—its related effects (in epidemiological, immunological, clinical, or viral kinetics terms) with labeled higher/lower levels in relation to the nonmutated virus. Results: The proposed framework comprises (i) the provisioning of abstracts from a COVID-19–related big data corpus (CORD-19) and (ii) the identification of mutation/variant effects in abstracts using a GPT2-based prediction model. The above techniques enable the prediction of mutations/variants with their effects and levels in 2 distinct scenarios: (i) the batch annotation of the most relevant CORD-19 abstracts and (ii) the on-demand annotation of any user-selected CORD-19 abstract through the CoVEffect web application (http://gmql.eu/coveffect), which assists expert users with semiautomated data labeling. On the interface, users can inspect the predictions and correct them; user inputs can then extend the training dataset used by the prediction model. Our prototype model was trained through a carefully designed process, using a minimal and highly diversified pool of samples. Conclusions: The CoVEffect interface serves for the assisted annotation of abstracts, allowing the download of curated datasets for further use in data integration or analysis pipelines. The overall framework can be adapted to resolve similar unstructured-to-structured text translation tasks, which are typical of biomedical domains.
- Published
- 2023
40. Conceptual Architectural Design at Scale: A Case Study of Community Participation Using Crowdsourcing.
- Author
- Dortheimer, Jonathan, Yang, Stephen, Yang, Qian, and Sprecher, Aaron
- Subjects
ARCHITECTURAL design, CONCEPTUAL design, COMMUNITY involvement, PUBLIC architecture, CROWDSOURCING, COMMUNITIES
- Abstract
Architectural design decisions are primarily made through an interaction between an architect and a client during the conceptual design phase. However, in larger-scale public architecture projects, the client is frequently represented by a community that embraces numerous stakeholders. The scale, social diversity, and political layers of such collective clients make their interaction with architects challenging. A solution to address this challenge is using new information technologies that automate design interactions on an urban scale through crowdsourcing and artificial intelligence technologies. However, since such technologies have not yet been applied and tested in field conditions, it remains unknown how communities interact with such systems and whether useful concept designs can be produced in this way. To fill this gap in the literature, this paper reports the results of a case study architecture project where a novel crowdsourcing system was used to automate interactions with a community. The results of both quantitative and qualitative analyses revealed the effectiveness of our approach, which resulted in high-level stakeholder satisfaction and yielded conceptual designs that better reflect stakeholders' preferences. Along with identifying opportunities for using advanced technologies to automate design interactions in the concept design phase, we also highlight the challenges of such technologies, thus warranting future research.
- Published
- 2023
41. Model of megalopolises in the tool path optimisation for CNC plate cutting machines.
- Author
- Chentsov, Alexander G., Chentsov, Pavel A., Petunin, Alexander A., and Sesekin, Alexander N.
- Subjects
CUTTING machines, CUTTING machines manufacturing, PRECEDENCE, DYNAMIC programming, PRODUCTION management (Manufacturing), PRODUCTION planning, SUSTAINABLE design
- Abstract
We consider the issues of tool path optimisation under constraints and formulate a mathematical problem of visiting megalopolises. The megalopolises model is the result of the discretisation of the tool path problem for CNC plate cutting machines. The order of visits is subject to precedence constraints. In addition, the cost functions depend on the set of pending tasks. The quality criterion is a variant of the additive criterion. The problem is posed within the dynamic programming framework; however, a heuristic is proposed and implemented to solve practical problems of large dimensionality.
- Published
- 2018
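In its simplest form, the visiting problem described above reduces to a Held-Karp style dynamic program over subsets with precedence-feasibility checks. The sketch below visits single points rather than megalopolises and uses plain additive costs, omitting the paper's cost functions that depend on the set of pending tasks:

```python
from functools import lru_cache

def plan_tool_path(cost, prec):
    """Cheapest order to visit all points, where point j is allowed only
    after every point in prec[j] (e.g. inner contours that must be cut
    before outer ones).  cost[i][j]: travel cost; point 0 is the start
    and counts as already visited."""
    n = len(cost)
    full = (1 << n) - 1

    @lru_cache(maxsize=None)
    def best(mask, last):              # mask: bit set of visited points
        if mask == full:
            return 0.0
        res = float("inf")
        for j in range(n):
            if mask >> j & 1:
                continue               # already visited
            if any(not (mask >> p & 1) for p in prec.get(j, ())):
                continue               # a predecessor is still pending
            res = min(res, cost[last][j] + best(mask | 1 << j, j))
        return res

    return best(1, 0)

cost = [[0, 2, 9], [2, 0, 4], [9, 4, 0]]
print(plan_tool_path(cost, {2: {1}}))  # visit 1 before 2 -> path 0-1-2, cost 6.0
```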
42. A Scientometric Exploration of Crowdsourcing: Research Clusters and Applications.
- Author
- Ozcan, Sercan, Boye, David, Arsenyan, Jbid, and Trott, Paul
- Subjects
CROWDSOURCING, OPERATIONS research, INDUSTRIAL research, CITIZEN science, QUALITATIVE research
- Abstract
Crowdsourcing is a multidisciplinary research area that represents a rapidly expanding field where new applications are constantly emerging. Research in this area has investigated its use for citizen science in data gathering for research and crowdsourcing for industrial innovation. Previous studies have reviewed and categorized crowdsourcing research using qualitative methods. This has led to the limited coverage of the entire field, using smaller discrete parts of the literature and mostly reviewing the industrial aspects of crowdsourcing. This study uses a scientometric analysis of 7059 publications over the period 2006–2019 to map crowdsourcing research to identify clusters and applications. Our results are the first in the literature to map crowdsourcing research holistically. In this article, we classify its usage in the three domains of innovation, engineering, and science, where 11 categories and 26 subcategories are further developed. The results of this article reveal that the most active scientific clusters where crowdsourcing is used are environmental sciences and ecology. For the engineering domain, it is computer science, telecommunication, and operations research. In innovation, idea crowdsourcing, crowdfunding, and crowd creation are the most frequent areas. The findings of this study map crowdsourcing usage across different fields and illustrate emerging crowdsourcing applications.
- Published
- 2022
- Full Text
- View/download PDF
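Cluster maps like those in record 42 rest on standard scientometric machinery; one common pipeline (not necessarily the authors' exact one) builds a keyword co-occurrence network and detects communities in it. A minimal sketch of that pipeline, with a few invented keyword lists standing in for the 7059-publication corpus:

    from itertools import combinations
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # Invented keyword lists standing in for real publication records.
    papers = [
        ["crowdsourcing", "citizen science", "ecology"],
        ["crowdsourcing", "crowdfunding", "innovation"],
        ["citizen science", "ecology", "data gathering"],
        ["crowdsourcing", "innovation", "idea generation"],
    ]

    # Co-occurrence network: keywords are nodes; keywords appearing in the
    # same paper are linked, with edge weight counting shared papers.
    G = nx.Graph()
    for kws in papers:
        for a, b in combinations(sorted(set(kws)), 2):
            if G.has_edge(a, b):
                G[a][b]["weight"] += 1
            else:
                G.add_edge(a, b, weight=1)

    # Community detection yields the research "clusters" of the map.
    for i, cluster in enumerate(greedy_modularity_communities(G, weight="weight")):
        print(i, sorted(cluster))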
43. Eye movement patterns in complex tasks: Characteristics of ambient and focal processing.
- Author
-
Guo, Yuxuan, Helmert, Jens R., Graupner, Sven-Thomas, and Pannasch, Sebastian
- Subjects
EYE movements, SPATIAL orientation, VISUAL perception, TASK performance, CUBES - Abstract
Analyzing the time course of eye movements during scene viewing often indicates that people progress through two distinct modes of visual processing: an ambient mode, associated with overall spatial orientation in a scene, followed by a focal mode, which requires central vision of an object. However, shifts between ambient and focal processing modes have mainly been identified relative to changes in the environment, such as the onset of visual stimuli, scene cuts, or subjective event boundaries in dynamic stimuli. The results so far do not allow conclusions about the nature of the two processing mechanisms beyond the influence of externally triggered events. It remains unclear whether people also shift between ambient and focal processing based on internal triggers, such as switching between different tasks while no external event occurs. The present study therefore investigated ambient-to-focal processing shifts in an active task-solving paradigm. The Rubik's Cube task introduced here is a multi-step task that can be broken down into smaller sub-tasks performed serially. The time course of eye movements was analyzed at multiple levels of this Rubik's Cube task, including periods with no external changes to the stimuli but during which internal representations of the task were hypothesized to change (i.e., switching between sub-tasks). Results suggest that initial ambient exploration is followed by a switch to more focal viewing across various levels of task processing, with and without external changes to the stimuli. More importantly, the present findings suggest that ambient and focal eye movement characteristics might serve as a probe for the attentional state in task processing, which does not seem to be influenced by changes in task performance. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
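The ambient/focal distinction in record 43 is conventionally operationalized through two measures: fixation duration and the amplitude of the following saccade (short fixations with large saccades read as ambient; long fixations with small saccades as focal). A minimal classifier sketch along those lines; the thresholds, data layout, and values are illustrative assumptions, not the authors' criteria:

    # Each fixation: (duration in ms, amplitude of the following saccade in deg).
    # Thresholds are illustrative; studies typically derive them per dataset.
    FIX_MS = 180     # boundary between short and long fixations
    SACC_DEG = 5.0   # boundary between small and large subsequent saccades

    def processing_mode(duration_ms, amplitude_deg):
        """Label a fixation as 'ambient', 'focal', or 'mixed'."""
        if duration_ms < FIX_MS and amplitude_deg > SACC_DEG:
            return "ambient"   # quick glances, wide jumps: spatial orientation
        if duration_ms >= FIX_MS and amplitude_deg <= SACC_DEG:
            return "focal"     # long dwells, short jumps: object scrutiny
        return "mixed"

    scanpath = [(120, 8.2), (140, 6.5), (260, 1.4), (310, 0.9), (150, 7.1)]
    print([processing_mode(d, a) for d, a in scanpath])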
44. Design and implementation of a modular supervisory control system of a batch process.
- Author
-
Ferrarini, L. and Piroddi, L.
- Published
- 2001
- Full Text
- View/download PDF
45. Heuristic techniques for allocating and scheduling communicating periodic tasks in distributed real-time systems.
- Author
-
Faucou, S., Deplanche, A.-M., and Beauvais, J.-P.
- Published
- 2000
- Full Text
- View/download PDF
46. The Claremont serial killer and the production of class-based suburbia in serial killer mythology.
- Author
-
Glitsos, Laura and Taylor, Jessica
- Subjects
SERIAL murderers on television, GENDER, CULTURE - Abstract
This is an investigation into the ways in which serial killer mythology and notions of place are often co-created. In this study, we focus on the mythos of the serial killer and its relationship to the construct of Australian suburbia, particularly the ways in which the tension between working-class and upper-middle-class suburbia plays out through the serial killer narrative. Politically, the serial killer narrative is also intertwined with the production of race-based, class-based, and gendered definitions of space. We show how culture is deeply invested in 'making sense' of serial killing through several political manoeuvres, including the privileging of certain victims over others, such as the way in which women of colour are rendered invisible in these mythologies. To argue these assertions, we draw on a case study located in Perth, Western Australia, dubbed by the media the Claremont serial killings. By tracing several sub-narratives, we perform qualitative discourse analysis on diverse media texts. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
47. COMPOSING MULTI-RELATIONS ASSOCIATION RULES FROM CROWDSOURCING REMUNERATION DATA.
- Author
-
SALLEH, SITI SALWA, ZAKARIA, NURHAYATI, JANOM, NORJANSALIKA, SYED ARIS, SYARIPAH RUZAINI, and ARSHAD, NOOR HABIBAH
- Subjects
CROWDSOURCING, WAGES, INFORMATION & communication technologies, FOCUS groups, DIGITAL technology - Abstract
In crowdsourcing, requesters are companies that require external workers to execute specific tasks, whereas a platform acts as a mediator that matches and allocates the tasks to digital workers. To assign a task to a worker, the platform must first identify the type of task and match it to appropriate workers based on their level of competency. Each worker has different ICT competencies, which affect work quality and remuneration. However, general practice frequently assumes a single level of worker capability for all tasks, so the categorisation of task difficulty is unclear and inconsistent. Apart from causing dissatisfaction among workers, this also implies an absence of incentive standardisation. This study therefore explores this matter, aiming to identify and visualise the parameters that affect remuneration determination. To gather the data, focus group discussions and interviews with crowdsourcing players were conducted. Because the data contain many redundancies, an apriori algorithm is used to normalise them by removing redundancies and then extracting significant patterns. Next, association rules are used to uncover correlations between parameters. To gain more understandable insight, the data relationships are visualised using an alluvial chart that illustrates the flow. Findings show that task type, outcome variation, and competency requirements demonstrate a degree of interdependence. A significant pattern suggests that the remuneration scheme is determined by five levels of digital worker (DW): expert, advanced, intermediate, novice, and basic. Advanced workers are the most likely to participate in crowdsourcing, and their remuneration scale is suggested to be wider than the others. The study's findings provide input for remuneration strategy in future work. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
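The mining pipeline in record 47 (apriori for frequent patterns, then association rules between parameters) can be sketched with off-the-shelf tooling. The example below uses mlxtend, with invented one-hot task/worker records standing in for the study's focus-group data; exact mlxtend signatures may vary slightly between versions:

    import pandas as pd
    from mlxtend.frequent_patterns import apriori, association_rules

    # Invented one-hot records standing in for the study's interview data:
    # each row encodes one task's type, outcome variation, and worker level.
    df = pd.DataFrame(
        [
            {"type=creative": 1, "outcome=varied": 1, "level=expert": 1},
            {"type=creative": 1, "outcome=varied": 1, "level=advanced": 1},
            {"type=routine": 1, "outcome=fixed": 1, "level=novice": 1},
            {"type=routine": 1, "outcome=fixed": 1, "level=basic": 1},
            {"type=creative": 1, "outcome=varied": 1, "level=advanced": 1},
        ]
    ).fillna(0).astype(bool)

    # Frequent itemsets, then rules linking parameters.
    frequent = apriori(df, min_support=0.4, use_colnames=True)
    rules = association_rules(frequent, metric="confidence", min_threshold=0.8)
    print(rules[["antecedents", "consequents", "support", "confidence"]])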
48. Availability evaluation of system service hosted in private cloud computing through hierarchical modeling process.
- Author
-
Clemente, Danilo, Pereira, Paulo, Dantas, Jamilson, and Maciel, Paulo
- Subjects
SYSTEMS availability, COMPUTER systems, STOCHASTIC models, SENSITIVITY analysis, CLOUD computing, CAPACITY requirements planning - Abstract
Cloud computing provides an abstraction of the physical tiers, giving a sense of infinite resources. However, the physical resources are not unlimited and need to be used more assertively. The challenge of cloud computing is to improve the use of resources without jeopardizing the availability of environments. Stochastic models can efficiently evaluate cloud computing systems, which is needed for proper capacity planning. This paper proposes an availability evaluation of a system hosted on a private cloud. To achieve this goal, we created hierarchical models to represent the studied environment. A sensitivity analysis is performed to identify the most influential parameters and components, i.e., those that must be addressed to improve system availability. A case study demonstrates the accuracy and utility of our methodology. We propose structural changes to the environment, using different redundancies in the components, to obtain satisfactory results. Finally, we analyze scenarios regarding data center (DC) redundancy. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
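At the bottom of hierarchical availability models like the one in record 48 typically sits a two-state Markov chain per component (up/down, with failure rate lambda and repair rate mu), whose steady-state availability is mu/(lambda+mu) = MTTF/(MTTF+MTTR); component availabilities are then composed into series and parallel (redundant) blocks. A minimal sketch with illustrative rates, not the paper's measured parameters:

    def availability(mttf_h, mttr_h):
        """Steady-state availability of a two-state up/down Markov chain:
        A = mu / (lambda + mu) = MTTF / (MTTF + MTTR)."""
        return mttf_h / (mttf_h + mttr_h)

    def series(*blocks):     # every block must be up
        out = 1.0
        for a in blocks:
            out *= a
        return out

    def parallel(*blocks):   # at least one redundant block must be up
        down = 1.0
        for a in blocks:
            down *= (1.0 - a)
        return 1.0 - down

    # Illustrative private-cloud stack: hardware, hypervisor, service VM.
    hw = availability(8760.0, 8.0)
    hyp = availability(2880.0, 1.0)
    vm = availability(1440.0, 0.5)

    baseline = series(hw, hyp, vm)
    redundant = series(hw, hyp, parallel(vm, vm))   # duplicate the weakest tier
    print(f"baseline {baseline:.5f} -> with VM redundancy {redundant:.5f}")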
49. Evolutionary research of optimal strategies for exclusive positioned clustering in simulated environment of collective robotics
- Author
-
Abdessemed, Mohamed Rida and Bilami, Azeddine
- Subjects
CLUSTER analysis (Statistics), SIMULATED environment (Teaching method), ROBOTICS, GENETIC algorithms, INSECT societies, EVOLUTIONARY theories, SIMULATION methods & models - Abstract
One of the captivating characteristics of social insects, in spite of their rudimentary individual constitution, is their ability to solve complicated problems in an elastic and robust way: elasticity ensures the adaptation of the insect system to the unpredictable changes of its environment, and robustness guarantees the continued functioning of the system despite the possible failure of a certain number of its elements in the achievement of their individual missions. From this point of view, fields of research have emerged over the past decades that aim to reveal the secret behind the relationship between individual and society, so perfectly designed in nature. Collective robotics is one of those fields, in which we try to find microscopic rules allowing a group of autonomous, mobile robots with limited capacities to carry out a specific macro-task, such as exclusive positioned heap formation. The idea behind this is to use an agent-oriented reactive simulation model to seek the relations that can link the local perceptions of the simulated robots with their basic actions, in order to make the above-mentioned gathering task a success. An evolutionary approach is used for this purpose, making it possible to discover the functional control relations of these simulated robots. An analogy with the precepts specific to the ant community is established, and simulation results indicating the effectiveness of the detected rules are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
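The evolutionary approach in record 49 can be compressed into a few lines: a bit-string genome maps local perception states to basic actions, and a genetic algorithm (selection, crossover, mutation) searches for rule tables that score well on the gathering task. The genome encoding, the stand-in fitness function, and all parameters below are illustrative assumptions; the paper evaluates genomes inside a robot simulation:

    import random

    random.seed(1)
    GENOME = 16          # one action bit per local-perception state (assumption)
    POP, GENS, MUT = 30, 40, 0.05

    def fitness(genome):
        """Stand-in for the robot simulation: reward genomes that map
        even-indexed perception states to action bit 1, odd ones to 0."""
        return sum(bit == (i % 2 == 0) for i, bit in enumerate(genome))

    def crossover(a, b):
        cut = random.randrange(1, GENOME)   # one-point crossover
        return a[:cut] + b[cut:]

    def mutate(g):
        return [bit ^ (random.random() < MUT) for bit in g]   # random bit flips

    pop = [[random.randint(0, 1) for _ in range(GENOME)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: POP // 2]                    # truncation selection
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(POP - len(parents))]
        pop = parents + children

    best = max(pop, key=fitness)
    print(fitness(best), best)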
50. Availability model for edge-fog-cloud continuum: an evaluation of an end-to-end infrastructure of intelligent traffic management service.
- Author
-
Pereira, Paulo, Melo, Carlos, Araujo, Jean, Dantas, Jamilson, Santos, Vinícius, and Maciel, Paulo
- Subjects
REAL-time computing, SYSTEMS availability, DISTRIBUTED computing, FAULT trees (Reliability engineering), EDGE computing, CLOUD computing, SMARTPHONES - Abstract
Connectivity and technology are transforming our world, requiring continuous improvement of quality-of-service (QoS) levels in systems. Many emerging technologies demand latency-aware networks for real-time data processing, and we depend on those technologies more each day. Cloud computing environments provide high availability, reliability, and performance; however, cloud computing may not be suitable for latency-sensitive applications such as disaster risk minimization, intelligent traffic management, and crime prevention. Two complementary paradigms, edge and fog computing, have been proposed to overcome the latency issues and increase the computing power between the cloud and edge devices (e.g., controllers, sensors, and smartphones). However, evaluating availability remains a significant concern in those distributed computing environments, since many challenges must be faced to guarantee the required QoS. This study therefore addresses the availability of the edge-fog-cloud continuum, proposing a hierarchical availability model that combines a fault tree with Markov chains. We also propose analytical availability models for the components in our environment, which may be used to support scalability and capacity planning of edge, fog, and cloud computing environments. Using the proposed hierarchical model, we investigated several scenarios to improve the system's availability; in one case study, we improved the availability of a baseline intelligent traffic management infrastructure from 98.47% to 99.91%. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
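The top level of a hierarchical model like the one in record 50 is a fault tree over edge, fog, and cloud subsystems, and its AND/OR gates translate directly into probability arithmetic over subsystem unavailabilities. A minimal sketch with invented figures (the 98.47% to 99.91% improvement comes from the paper's far richer model, not from these numbers):

    def and_gate(*unavail):   # fails only if ALL inputs fail (redundancy)
        p = 1.0
        for q in unavail:
            p *= q
        return p

    def or_gate(*unavail):    # fails if ANY input fails (series chain)
        up = 1.0
        for q in unavail:
            up *= (1.0 - q)
        return 1.0 - up

    # Invented subsystem unavailabilities for an edge-fog-cloud chain.
    edge, fog, cloud = 0.010, 0.005, 0.001

    baseline = 1.0 - or_gate(edge, fog, cloud)
    improved = 1.0 - or_gate(and_gate(edge, edge),   # duplicated edge node
                             and_gate(fog, fog),     # duplicated fog node
                             cloud)
    print(f"availability: baseline {baseline:.4f}, with redundancy {improved:.4f}")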