68,706 results
Search Results
202. Development of topological optimization schemes controlling the trajectories of multiple particles in fluid
- Author
-
Gil Ho Yoon and Hongyun So
- Subjects
Control and Optimization, SHAKE algorithm, Particle-fluid interaction, Transient adjoint sensitivity analysis, Topology optimization, Sensitivity (control systems), Physics, Laminar flow, Mechanics, Computer Graphics and Computer-Aided Design, Computer Science Applications, Control and Systems Engineering, Drag, Particle, Transient (oscillation), Particle separation, Current (fluid), Material properties, Software, Research Paper - Abstract
This paper describes the development of a new topology optimization framework that controls, captures, isolates, switches, or separates particles depending on their material properties and initial locations. Controlling the trajectories of particles in laminar fluid has several potential applications. The fluid drag force, which depends on the fluid and particle velocities and the material properties of the particles, acts on the particle surfaces, thereby affecting the trajectories of particles whose deformability can be neglected. By changing the drag or inertia force, particles can be controlled and sorted depending on their properties and initial locations. In several engineering applications, the transient motion of particles can be controlled and optimized by changing the velocity of the fluid. This paper presents topology optimization schemes to determine optimal pseudo-rigid domains in fluid that control the motion of particles depending on their properties, locations, and geometric constraints. The transient sensitivity of the particle positions is derived with respect to the spatially distributed design variables of the topology optimization. The optimization formulations are evaluated for effectiveness under different conditions. The experimental results indicate that the formulations can determine optimal fluid layouts to control the trajectories of multiple particles.
- Published
- 2021
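The drag-driven particle dynamics of entry 202 can be illustrated with a minimal sketch. The abstract does not give the paper's exact drag model; the snippet below assumes simple Stokes drag on a small spherical particle in a prescribed laminar field and integrates its trajectory with explicit Euler steps. All parameter values and the velocity field are illustrative, not taken from the paper.

```python
# Minimal sketch of drag-driven particle motion (NOT the paper's model):
# assumes Stokes drag F = 6*pi*mu*r*(u_fluid - v_particle) on a small sphere
# and explicit Euler time stepping; all values are illustrative.
import math

MU = 1.0e-3      # fluid dynamic viscosity [Pa*s] (roughly water)
R = 5.0e-6       # particle radius [m]
RHO_P = 2000.0   # particle density [kg/m^3]
MASS = RHO_P * (4.0 / 3.0) * math.pi * R**3
C_DRAG = 6.0 * math.pi * MU * R   # Stokes drag coefficient

def fluid_velocity(x, y):
    """Prescribed laminar field, standing in for an optimized fluid layout."""
    return 1.0e-3, 1.0e-4 * x

def step(pos, vel, dt):
    """Advance the particle one explicit Euler step under Stokes drag."""
    ux, uy = fluid_velocity(*pos)
    ax = C_DRAG * (ux - vel[0]) / MASS
    ay = C_DRAG * (uy - vel[1]) / MASS
    vel = (vel[0] + ax * dt, vel[1] + ay * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos, vel

pos, vel = (0.0, 0.0), (0.0, 0.0)
for _ in range(1000):
    pos, vel = step(pos, vel, dt=1.0e-5)  # dt below the drag relaxation time
```

Because the particle's velocity relaxes toward the local fluid velocity at a rate set by the drag coefficient and the particle mass, reshaping the fluid layout (the paper's design variable) reshapes the particle trajectories.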
203. Preface to the special issue of selected papers from FCS/VERIFY 2002
- Author
-
Serge Autexier, Heiko Mantel, and Iliano Cervesato
- Subjects
Computer Networks and Communications, Computer science, Cryptography, Safety, Risk, Reliability and Quality, Software engineering, Formal methods, Computer communication networks, Software, Information Systems - Published
- 2005
204. Special issue - selected papers from the ICDAR'01 conference
- Author
-
Karl Tombre and A. Lawrence Spitz
- Subjects
Computer science, Pattern recognition (psychology), Computer Vision and Pattern Recognition, Artificial intelligence, Software, Natural language processing, Computer Science Applications - Published
- 2003
205. Runtime translation of OCL-like statements on Simulink models: Expanding domains and optimising queries
- Author
-
Jason Hampson, Justin C. Cooper, Horacio Hoyos Rodriguez, Richard F. Paige, Athanasios Zolotas, Beatriz A. Sanchez, and Dimitris Kolovos
- Subjects
QA75, Measure (data warehouse), Programming language, Process (engineering), Computer science, Special Section Paper, Epsilon, Eclipse Modelling Framework, Interoperability, Model driven engineering, Domain (software engineering), Set (abstract data type), Modeling and Simulation, Query optimisation, Leverage (statistics), MATLAB, Representation (mathematics), MATLAB Simulink, Software, De facto standard - Abstract
Open-source model management frameworks such as OCL and ATL tend to focus on manipulating models built atop the Eclipse Modelling Framework (EMF), a de facto standard for domain-specific modelling. MATLAB Simulink is a widely used proprietary modelling framework for dynamic systems that is built atop an entirely different technical stack from EMF. To leverage the facilities of open-source model management frameworks with Simulink models, the latter can be transformed into an EMF-compatible representation. Downsides of this approach include the need to synchronise the native Simulink model and its EMF representation as they evolve, the completeness of the EMF representation, and the transformation cost, which can be crippling for large Simulink models. We propose an alternative approach to bridging Simulink models with open-source model management frameworks that uses an “on-the-fly” translation of model management constructs into MATLAB statements. Our approach does not require an EMF representation and can mitigate the cost of the upfront transformation on large models. To evaluate both approaches, we measure the performance of a model validation process with Epsilon (a model management framework) on a sample of large Simulink models available on GitHub. Our previous results suggest that, with our approach, the total validation time can be reduced by up to 80%. In this paper, we expand our approach to support the management of Simulink requirements and dictionaries, and we improve it to perform queries on collections of model elements more efficiently. We demonstrate the use of Simulink requirements and dictionaries with a case study and evaluate the optimisations on collection queries with an experiment that compares the performance of a set of queries on models of different sizes. Our results suggest an improvement of up to 99% on some queries.
- Published
- 2021
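The on-the-fly idea in entry 205, translating model-management constructs into MATLAB statements instead of materialising an EMF model, can be sketched as a tiny query translator. Everything below is a hypothetical illustration: the real Epsilon-to-Simulink bridge is far richer, and the mapping shown is invented; only `find_system` is an actual MATLAB function.

```python
# Hypothetical sketch: translate an OCL-like filter such as
#   Block.all.select(b | b.BlockType = 'Gain')
# into a MATLAB find_system statement, instead of querying an EMF copy.
# The mapping is invented for illustration; it is not the paper's bridge.
def translate_select(model_var, block_type, prop=None, value=None):
    """Build the MATLAB statement string for a block-type query."""
    args = [model_var, "'BlockType'", f"'{block_type}'"]
    if prop is not None:
        args += [f"'{prop}'", f"'{value}'"]
    return f"find_system({', '.join(args)})"

stmt = translate_select("mdl", "Gain", prop="Gain", value="2")
print(stmt)  # find_system(mdl, 'BlockType', 'Gain', 'Gain', '2')
```

The generated statement would then be executed in a live MATLAB session, so queries run against the native model and no EMF copy needs to be kept in sync.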
206. A method for transforming knowledge discovery metamodel to ArchiMate models
- Author
-
Andrea Delgado, Mario Piattini, Francisco Ruiz, Virginia Bacigalupe, and Ricardo Pérez-Castillo
- Subjects
MDE, Source code, Computer science, Enterprise architecture, Model transformation, Digital transformation, ArchiMate, ATL, Knowledge Discovery Metamodel, Modeling and Simulation, Common knowledge, Regular Paper, Information system, Software engineering, Software - Abstract
Enterprise architecture has become an important driver of digital transformation in companies, since it makes it possible to manage IT and business in a holistic, integrated manner by establishing connections between technological concerns and strategic/motivational ones. Enterprise architecture modelling is critical to accurately represent business and IT assets in combination. This modelling is important when companies start to manage their enterprise architecture, but also when it is remodelled so that the enterprise architecture is realigned in a changing world. Enterprise architecture is commonly modelled manually by a few experts, which is error-prone and time-consuming and makes continuous realignment difficult. In contrast, other enterprise architecture modelling proposals automatically analyse artefacts such as source code, databases, and services. Previous automated modelling proposals focus on the analysis of individual artefacts with isolated transformations toward ArchiMate or other enterprise architecture notations and/or frameworks. We propose using the Knowledge Discovery Metamodel (KDM) to represent all the intermediate information retrieved from information systems' artefacts, which is then transformed into ArchiMate models. Thus, the core contribution of this paper is the model transformation between the KDM and ArchiMate metamodels. The main implication of this proposal is that ArchiMate models are automatically generated from a common knowledge repository. Thereby, the relationships between artefacts of different natures can be exploited to obtain more complete and accurate enterprise architecture representations.
- Published
- 2021
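At its core, the KDM-to-ArchiMate transformation of entry 206 is a mapping between metamodel element types. A toy version under an invented type mapping (the paper's actual ATL rules are not reproduced in the abstract):

```python
# Toy model-to-model transformation: KDM-like elements to ArchiMate-like ones.
# The type mapping below is invented for illustration, not the paper's rules.
KDM_TO_ARCHIMATE = {
    "CodeElement": "ApplicationComponent",
    "DataElement": "DataObject",
    "ActionElement": "ApplicationFunction",
}

def transform(kdm_elements):
    """Rewrite each (type, name) record into the target metamodel's type."""
    return [
        {"type": KDM_TO_ARCHIMATE[e["type"]], "name": e["name"]}
        for e in kdm_elements
        if e["type"] in KDM_TO_ARCHIMATE  # unmapped elements are dropped
    ]

source = [{"type": "CodeElement", "name": "BillingService"},
          {"type": "DataElement", "name": "Invoice"}]
print(transform(source))
```

The point of routing everything through a common intermediate metamodel (KDM) is that one such mapping serves every artefact kind, instead of one ad hoc transformation per artefact.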
207. On the Use of Virtual Reality for Medical Imaging Visualization
- Author
-
Filipi Pires, Carlos Costa, and Paulo Dias
- Subjects
Diagnostic Imaging, Original Paper, Augmented Reality, Radiological and Ultrasound Technology, Workstation, Computer science, Virtual Reality, Field (computer science), Mixed reality, Computer Science Applications, Visualization, Radiography, DICOM, Software, Human–computer interaction, Medical imaging, Humans, Radiology, Nuclear Medicine and imaging - Abstract
Advanced visualization of medical imaging has been a motive for research due to its value for disease analysis, surgical planning, and academic training. More recently, attention has been turning toward mixed reality as a means to deliver more interactive and realistic medical experiences. However, there are still many limitations to the use of virtual reality in specific scenarios. Our intent is to study the current usage of this technology and assess the potential of related development tools for clinical contexts. This paper focuses on virtual reality as an alternative to today's predominantly slice-based medical analysis workstations, bringing more immersive three-dimensional experiences that could help in cross-slice analysis. We determine the key features a virtual reality software tool should support and present today's software tools and frameworks for researchers who intend to work on immersive medical imaging visualization. These solutions are assessed to understand their ability to address existing challenges in the field. We found that most development frameworks rely on well-established toolkits specialized for healthcare and on standard data formats such as DICOM. Game engines also prove to be adequate means of combining software modules for improved results. Virtual reality remains a promising technology for medical analysis but has not yet achieved its true potential. Our results suggest that prerequisites such as real-time performance and minimum latency pose the greatest limitations to clinical adoption and need to be addressed. There is also a need for further research comparing mixed realities with currently used technologies.
- Published
- 2021
208. A CNN-based scheme for COVID-19 detection with emergency services provisions using an optimal path planning
- Author
-
Ahmed Barnawi, Neeraj Kumar, Mehrez Boulares, Prateek Chhikara, and Rajkumar Tekchandani
- Subjects
Scheme (programming language), Coronavirus disease 2019 (COVID-19), Computer Networks and Communications, Computer science, Real-time computing, COVID-19, Cryptography, Unmanned Aerial Vehicle, Field (computer science), Transfer learning, Computer graphics, Hardware and Architecture, Regular Paper, Media Technology, Computer vision, Motion planning, Architecture, Path planning, Software, Information Systems - Abstract
Unmanned aerial vehicles (UAVs) are becoming popular in real-world scenarios due to current advances in sensor technology and hardware platform development. The applications of UAVs in the medical field are broad and may be shared worldwide. With the recent outbreak of COVID-19, fast diagnostic testing has become one of the challenges due to the lack of test kits. UAVs can help in tackling COVID-19 by delivering medication to hospitals on time. In this paper, to detect the number of COVID-19 cases in a hospital, we propose a deep convolutional neural architecture using transfer learning that classifies patients into three categories, COVID-19 (positive), normal (negative), and pneumonia, based on given X-ray images. The proposed deep-learning architecture is compared with state-of-the-art models. The results show that the proposed model provides an accuracy of 94.92%. Further, to offer time-bounded services to COVID-19 patients, we propose a scheme for delivering emergency kits to hospitals in need using an optimal path planning approach for UAVs in the network.
- Published
- 2021
209. Performance and availability evaluation of a smart hospital architecture
- Author
-
Igor Gonçalves, Patricia Takako Endo, Francisco Airton Silva, Iure Fe, and Laécio Rodrigues
- Subjects
Service (systems architecture), Computer science, Performance, Internet of Things, Theoretical Computer Science, Stochastic Petri net, Regular Paper, Redundancy (engineering), Quality (business), Latency (engineering), Parametric statistics, Numerical Analysis, Availability, Computer Science Applications, Reliability engineering, Computational Mathematics, Computational Theory and Mathematics, High availability, Smart hospital, Wireless sensor network, Software - Abstract
Low latency and high availability of resources are essential characteristics to guarantee the quality of services in health systems. Hospital systems must be efficient to prevent loss of human life. Smart hospitals promise a health revolution by capturing and transmitting patient data to doctors in real time via a wireless sensor network. However, assessing the performance and availability of such systems in real contexts is difficult, because failures cannot be tolerated and implementation costs are high. This paper adopts analytical models to assess the performance and availability of smart hospital systems without having to invest in real equipment beforehand. Two Stochastic Petri Net models are proposed to represent smart hospital architectures: one to assess performance and another to assess availability. The models are highly parametric, making it possible to calibrate the resources, service times, times between failures, and times between repairs. The availability model, for example, exposes 48 parameters, allowing a large number of scenarios to be evaluated. The analysis showed that the arrival rate is an impactful parameter of the performance model. It was possible to observe the close relationship between mean response time (MRT), resource utilization, and discard rate in different scenarios, especially for high arrival rates. Three scenarios were explored with the second model. The highest availability was observed in scenario A, which includes server redundancy (local and remote). This redundant scenario presented an availability of 99.9199%, that is, 7.01 h/year of downtime. In addition, this work presents a sensitivity analysis that identifies the most critical components of the architecture. Therefore, this work can help hospital system administrators plan architectures optimized for their needs.
- Published
- 2021
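The availability figures quoted in entry 209 follow from standard steady-state arithmetic: availability A = MTBF / (MTBF + MTTR), and yearly downtime = (1 - A) * 8760 h. The MTBF/MTTR values below are illustrative, not taken from the paper; the downtime check uses the abstract's own availability figure.

```python
# Steady-state availability arithmetic, as used to interpret entry 209.
def availability(mtbf_h, mttr_h):
    """Steady-state availability from mean time between failures/repairs."""
    return mtbf_h / (mtbf_h + mttr_h)

def downtime_hours_per_year(a):
    """Expected hours of downtime per year at availability a."""
    return (1.0 - a) * 8760.0

# The abstract's redundant scenario reports A = 99.9199%:
print(downtime_hours_per_year(0.999199))  # ~7.02 h/year, within rounding of the quoted 7.01 h
```

The same two functions let a planner invert the question: given a downtime budget, what MTBF/MTTR combination (and hence how much redundancy) is required.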
210. Call for papers
- Author
-
Witold Pedrycz, Thanos Vasilakos, and Massimo Feroldi
- Subjects
Soft computing ,Computer engineering ,Computer science ,Geometry and Topology ,Software ,Theoretical Computer Science - Published
- 2002
211. Interactive freehand sketching as the means for online communication of design intent in conceptual design conducted by Brainwriting
- Author
-
Sergio Rizzuti and Luigi De Napoli
- Subjects
Sketching, Computer science, Short Original Paper, Communicate design intent, Industrial and Manufacturing Engineering, Interactive design, Semiology, Software, Conceptual design, Human–computer interaction, Industrial design, Modelling and Simulation, Curriculum, Brainwriting, 6-3-5 Brainwriting, Product design, Engineering studies, Design process, Engineering design process - Abstract
Sketching is becoming a neglected activity in engineering studies. The availability of software that aids designers in all phases of design, synthetic as well as analytic, pushes technicians and designers to rely on such tools, giving up the simple pencil and eraser on a sheet of paper. Although the productivity of software tools is needed to speed up and manage the whole design process, freehand sketching remains the fundamental means to communicate first ideas immediately. During Brainwriting sessions, the ability to explain by sketches the first elaborations of a possible solution, which must be understood by co-designers, is the first step towards more fruitful discussion and immediate adjustment of valid proposals into a quick embodiment. The paper describes how this technique has been introduced into the mechanical engineering curriculum. The case study reports the experience of running Brainwriting online, tested during the lockdown caused by the COVID-19 pandemic. The paper further suggests a new interpretation of de Saussure's general linguistics, in terms of a communication associated with a drawing.
- Published
- 2020
212. Design and Development of a Medical Image Databank for Assisting Studies in Radiomics
- Author
-
Surajit Kundu, Santam Chakraborty, Jayanta Mukhopadhyay, Syamantak Das, Sanjoy Chatterjee, Rimpa Basu Achari, Indranil Mallick, Partha Pratim Das, Moses Arunsingh, Tapesh Bhattacharyyaa, and Soumendranath Ray
- Subjects
Original Paper ,Radiology Information Systems ,Databases, Factual ,Radiological and Ultrasound Technology ,Neoplasms ,Humans ,Pilot Projects ,Radiology, Nuclear Medicine and imaging ,Radiology ,Software ,Computer Science Applications - Abstract
CompreHensive Digital ArchiVe of Cancer Imaging - Radiation Oncology (CHAVI-RO) is a multi-tier web-based medical image databank. It supports archiving de-identified radiological and clinical datasets in a relational database. A semantic relational database model is designed to accommodate the imaging and treatment data of cancer patients. It aims to provide key datasets to investigate and model the use of radiological imaging data in response to radiation. This research domain addresses the modeling and analysis of the complete treatment data of oncology patients. A DICOM viewer is integrated for reviewing the uploaded de-identified DICOM datasets. In a prototype system, we carried out a pilot study with cancer data from four disease sites, namely breast, head and neck, brain, and lung cancers. The representative dataset is used to estimate the data size per patient. A role-based access control module is integrated with the image databank to restrict user access. We also performed different types of load tests to analyze and quantify the performance of the CHAVI databank.
- Published
- 2022
213. BDCNet: multi-classification convolutional neural network model for classification of COVID-19, pneumonia, and lung cancer from chest radiographs
- Author
-
Malik, Hassaan, Anees, Tayyaba, and Mui-zzud-din
- Subjects
Coronavirus ,Chest radiographs ,Computer Networks and Communications ,Hardware and Architecture ,Regular Paper ,Media Technology ,COVID-19 ,Deep learning ,Software ,Information Systems - Abstract
Globally, coronavirus disease (COVID-19) has badly affected the medical system and economy. The deadly COVID-19 sometimes shows the same symptoms as other chest diseases, such as pneumonia and lung cancer, and can mislead doctors in diagnosing coronavirus. Frontline doctors and researchers are working assiduously to find a rapid and automatic process for the detection of COVID-19 at the initial stage, to save human lives. However, the clinical diagnosis of COVID-19 is highly subjective and variable. The objective of this study is to implement a multi-classification algorithm based on a deep learning (DL) model for identifying COVID-19, pneumonia, and lung cancer from chest radiographs. In the present study, we propose a model combining Vgg-19 and convolutional neural networks (CNN), named BDCNet, and apply it to different publicly available benchmark databases to diagnose COVID-19 and other chest tract diseases. To the best of our knowledge, this is the first study to diagnose these three chest diseases in a single deep learning model. We also computed and compared the classification accuracy of our proposed model with four well-known pre-trained models: ResNet-50, Vgg-16, Vgg-19, and Inception-v3. Our proposed model achieved an AUC of 0.9833 (with an accuracy of 99.10%, a recall of 98.31%, a precision of 99.9%, and an f1-score of 99.09%) in classifying the different chest diseases. Moreover, the CNN-based pre-trained models VGG-16, VGG-19, ResNet-50, and Inception-v3 achieved multi-disease classification accuracies of 97.35%, 97.14%, 97.15%, and 95.10%, respectively. The results revealed that our proposed model produces remarkable performance compared to its competitor approaches, thus providing significant assistance to diagnostic radiographers and health experts.
- Published
- 2022
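The headline metrics in entry 213 are internally consistent: the reported F1-score follows from the reported precision and recall, since F1 is their harmonic mean. A quick check using the values quoted in the abstract:

```python
# Consistency check of entry 213's reported metrics:
# F1 is the harmonic mean of precision and recall.
def f1_score(precision, recall):
    return 2.0 * precision * recall / (precision + recall)

# Values quoted in the abstract: precision 99.9%, recall 98.31%.
f1 = f1_score(0.999, 0.9831)
print(f"{f1:.2%}")  # 99.10%, matching the reported 99.09% f1-score to rounding
```

This kind of sanity check is worth running on any classification paper: if the quoted F1 cannot be reproduced from the quoted precision and recall, one of the three numbers is wrong.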
214. Usability of a telehealth solution based on TV interaction for the elderly: the VITASENIOR-MT case study
- Author
-
Gabriel Pires, Ana Lopes, Pedro Correia, Luis Almeida, Luis Oliveira, Renato Panda, Dario Jorge, Diogo Mendes, Pedro Dias, Nelson Gomes, and Telmo Pereira
- Subjects
Remote patient monitoring, User-centred design, Computer Networks and Communications, TV interaction, Usability, Heart rate, Weight, Human-Computer Interaction, Telehealth, Elderly, Blood pressure, Long Paper, Oximetry, Glycaemia, Biometric and environmental data, Software, Information Systems - Abstract
Remote monitoring of biometric data in the elderly population is an important asset for improving the quality of life and level of independence of elderly people living alone. However, the design and implementation of health technological solutions often disregard the elderly physiological and psychological abilities, leading to low adoption of these technologies. We evaluate the usability of a remote patient monitoring solution, VITASENIOR-MT, which is based on the interaction with a television set. Twenty senior participants (over 64 years) and a control group of 20 participants underwent systematic tests with the health platform and assessed its usability through several questionnaires. Elderly participants scored high on the usability of the platform, very close to the evaluation of the control group. Sensory, motor and cognitive limitations were the issues that most contributed to the difference in usability assessment between the elderly group and the control group. The solution showed high usability and acceptance regardless of age, digital literacy, education and impairments (sensory, motor and cognitive), which shows its effective viability for use and implementation as a consumer product in the senior market. This work has been financially supported by the Portuguese foundation for science and technology (FCT) and European funds through Project VITASENIOR-MT with grant CENTRO-01-0145-FEDER-023659.
- Published
- 2022
215. The assault on paper
- Author
-
Peter Horsman, Theo Thomassen, and Eric Ketelaar
- Subjects
Cultural heritage ,History ,Anthropology ,Building and Construction ,Sociology ,Library and Information Sciences ,Software - Published
- 2001
216. Running online experiments using web-conferencing software
- Author
-
Jiawei Li, Damian R. Beil, Izak Duenyas, and Stephen Leider
- Subjects
Computer science, Web conferencing, Experimental methodology, Subject performance, Online experiment, Software, C90, C91, D91, Noisy data, Web-conferencing software, Selection bias, Protocol (science), Original Paper, Multimedia, ZTREE unleashed, Subject (documents), Subject Characteristics, Webcam, Sample size determination - Abstract
We report the results of a novel protocol for running online experiments using a combination of an online experimental platform in parallel with web-conferencing software in two formats, with and without subject webcams, to improve subjects' attention and engagement. We compare the results between our online sessions and the offline (lab) sessions of the same experiment. We find that both online formats lead to subject characteristics and performance comparable to those of the offline (lab) experiment. However, the webcam-on protocol yields less noisy data, and hence better statistical power, than the protocol without a webcam. The webcam-on protocol can detect reasonable effect sizes with a sample size comparable to that of the offline (lab) protocol. Supplementary Information: The online version contains supplementary material available at 10.1007/s40881-021-00112-w.
- Published
- 2021
217. Seg-CapNet: A Capsule-Based Neural Network for the Segmentation of Left Ventricle from Cardiac Magnetic Resonance Imaging
- Author
-
Jie Li, Nan Lin, Cong Yang, Chang Liu, Yangjie Cao, Shuang Wu, and Yuan Wang
- Subjects
capsule neural network, Computer science, Feature vector, cardiac magnetic resonance imaging, Grayscale, left ventricle segmentation, Theoretical Computer Science, Sørensen–Dice coefficient, Regular Paper, Segmentation, image segmentation, Artificial neural network, Pattern recognition, Computer Science Applications, Computational Theory and Mathematics, Hardware and Architecture, Feature (computer vision), Artificial intelligence, Encoder, Software - Abstract
Deep neural networks (DNNs) have been extensively studied in medical image segmentation. However, existing DNNs often need to train shape models for each object to be segmented, which may yield results that violate cardiac anatomical structure when segmenting cardiac magnetic resonance imaging (MRI). In this paper, we propose a capsule-based neural network, named Seg-CapNet, to model multiple regions simultaneously within a single training process. The Seg-CapNet model consists of the encoder and the decoder. The encoder transforms the input image into feature vectors that represent objects to be segmented by convolutional layers, capsule layers, and fully-connected layers. And the decoder transforms the feature vectors into segmentation masks by up-sampling. Feature maps of each down-sampling layer in the encoder are connected to the corresponding up-sampling layers, which are conducive to the backpropagation of the model. The output vectors of Seg-CapNet contain low-level image features such as grayscale and texture, as well as semantic features including the position and size of the objects, which is beneficial for improving the segmentation accuracy. The proposed model is validated on the open dataset of the Automated Cardiac Diagnosis Challenge 2017 (ACDC 2017) and the Sunnybrook Cardiac Magnetic Resonance Imaging (MRI) segmentation challenge. Experimental results show that the mean Dice coefficient of Seg-CapNet is increased by 4.7% and the average Hausdorff distance is reduced by 22%. The proposed model also reduces the model parameters and improves the training speed while obtaining the accurate segmentation of multiple regions. Supplementary Information: The online version contains supplementary material available at 10.1007/s11390-021-0782-5.
- Published
- 2021
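Entry 217 reports its improvement in terms of the Dice coefficient, the standard overlap metric for segmentation masks: Dice = 2|A ∩ B| / (|A| + |B|), where 1.0 means perfect overlap. A minimal computation on toy binary masks (not the paper's data):

```python
# Dice coefficient between two flattened binary segmentation masks.
def dice(mask_a, mask_b):
    """2 * |intersection| / (|A| + |B|); 1.0 means perfect overlap."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    return 2.0 * inter / (sum(mask_a) + sum(mask_b))

prediction   = [1, 1, 1, 0, 0, 0]
ground_truth = [0, 1, 1, 1, 0, 0]
print(dice(prediction, ground_truth))  # 2*2 / (3+3) ≈ 0.667
```

The Hausdorff distance, the paper's second metric, instead measures the worst-case boundary disagreement, so the two metrics together capture both bulk overlap and contour accuracy.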
218. State of the art methods for combined water and energy systems optimisation in Kraft pulp mills
- Author
-
Ignacio E. Grossmann, Zdravko Kravanja, Nidret Ibrić, Luciana Savulescu, and Elvis Ahmetović
- Subjects
Control and Optimization, Process (engineering), Mechanical Engineering, Fossil fuel, Aerospace Engineering, Biomass, Energy consumption, Kraft process, Biofuel, Greenhouse gas, Environmental science, Electrical and Electronic Engineering, Process engineering, Software, Kraft paper, Civil and Structural Engineering - Abstract
This paper presents a state-of-the-art overview of water and energy optimisation methods with applications to Kraft pulp mills. The main conclusions are highlighted, and several research gaps are identified and proposed for future research. Kraft processes have the potential to be adapted into biorefineries producing biofuels and other high-value products from wood biomass. Biorefineries provide opportunities to increase process revenue and to reduce fossil fuel usage and greenhouse gas emissions. However, to ensure an effective Kraft process transformation, the existing mill infrastructure needs to be consolidated. In this sense, the water system, the heat exchanger network, and the utility system should all be optimised together. A series of systematic methods (conceptual process integration and mathematical programming) have been identified in the literature, along with the results of several case studies that reduce water and energy consumption in Kraft processes. Initial studies in this field considered and solved water and energy integration as separate problems, but recent works have focused on developing methods for combined water and energy integration and on their application to various processes. Typical savings are decreases in freshwater consumption of between 20 and 80% and reductions in energy consumption of between 15 and 40%.
- Published
- 2021
219. UAS Safety Operation – Legal Issues on Reporting UAS Incidents
- Author
-
Piotr Jan Kasprzyk and Anna Konert
- Subjects
Computer science, Aviation, Process (engineering), Mechanical Engineering, Doctrine, Context (language use), Invited Paper, Industrial and Manufacturing Engineering, Drone, UAS regulations, Action (philosophy), Aeronautics, Artificial Intelligence, Control and Systems Engineering, Content analysis, UAS incidents, UAS, Electrical and Electronic Engineering, Software, Drones, UAS safety - Abstract
Introduction: This paper examines the regulations which govern procedures for reporting incidents, other than accidents or serious incidents, related to unmanned aircraft system (UAS) operations. The regulations are discussed in the context of available data, and the paper includes an analysis of them from both European and national perspectives. The goal of the paper is to provide a series of recommendations regarding the procedures for reporting and analyzing UAS incidents in order to improve the safe integration of unmanned and manned aviation. This article also explores the legal consequences that arise from a midair collision between a UAS and a manned aircraft. Material and methods: The study comprises a content analysis of existing legislation. The current doctrine was confronted with existing regulations, documents, and materials. Results: The results of the study show that there is a practical problem in objectively identifying the operators of a UAS, as well as in defining what exactly constitutes an "incident". It can reasonably be concluded that reporting and analyzing UAS-related incidents allows for the assessment and development of strategies for integrating manned and unmanned aviation. It is worth mentioning that drone and UAS technology requires refinement, especially in technological terms. It is reasonable to take action aimed at raising awareness amongst UAS users of the need to report incidents, as well as at engaging UAS users in the investigative process which follows such occurrences.
- Published
- 2021
220. Challenges and opportunities to improve the accessibility of YouTube for people with visual impairments as content creators
- Author
-
Hyunggu Jung and Woosuk Seo
- Subjects
Computer Networks and Communications ,business.industry ,05 social sciences ,Perspective (graphical) ,Internet privacy ,050301 education ,Human-Computer Interaction ,Position paper ,0501 psychology and cognitive sciences ,Social media ,business ,Psychology ,Content (Freudian dream analysis) ,0503 education ,Computer communication networks ,050107 human factors ,Software ,Information Systems - Abstract
This position paper proposes technology opportunities for supporting people with visual impairments (PVI) as content creators, rather than just content consumers. While previous studies examined accessibility issues with functions and visual content on various social media platforms, little is known about the experiences of PVI beyond watching visual content. Our recent study revealed that there is a community of video bloggers (vloggers) with visual impairments who actively produce multiple categories of video content on YouTube. From the perspective of those vloggers with visual impairments, we would like to discuss challenges and opportunities to improve the accessibility of video-based social media platforms to support video creators with visual impairments.
- Published
- 2021
221. Optimization of feedback bits using firefly algorithm for interference reduction in LTE femtocell networks
- Author
-
S. Fouziya Sulthana, A. Rajesh, S. Hariharan, T. Shankar, and Kartiki Chikte
- Subjects
0209 industrial biotechnology ,Computer science ,business.industry ,02 engineering and technology ,Interference (wave propagation) ,Theoretical Computer Science ,Base station ,020901 industrial engineering & automation ,Channel state information ,0202 electrical engineering, electronic engineering, information engineering ,Femtocell ,Bandwidth (computing) ,Microcell ,020201 artificial intelligence & image processing ,Dirty paper coding ,Firefly algorithm ,Geometry and Topology ,business ,Software ,Computer network - Abstract
Femtocells are a feasible solution for extending network coverage to indoor users and enhancing network capacity in long-term evolution advanced (LTE-A)-based 5G networks. However, a femtocell base station shares the frequency spectrum of the microcell base station in an unplanned manner. Hence, interference mitigation is a crucial problem in densely deployed femtocell environments, and it is more severe with the deployment of femtocells in an LTE-A network. In this paper, a modified dirty paper coding is proposed for interference mitigation, along with the optimization of feedback bits using the nature-inspired meta-heuristic firefly algorithm. The proposed meta-heuristic algorithm reduces interference by periodically unicasting the channel state information. Since the bandwidth of the feedback system is limited, it is optimized in such a way that it does not affect the performance of the system. Compared with conventional zero-forcing pre-coding, the proposed modified dirty paper coding combined with the firefly algorithm offers an improved sum rate of 70% and 64% with an increase in the number of feedback bits and the number of users, respectively.
- Published
- 2020
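The firefly search used in the feedback-bit entry above rests on a simple attraction rule: each candidate moves toward brighter (better) candidates with an attractiveness that decays with distance. A minimal one-dimensional sketch, with a made-up objective standing in for the quantization-error vs. feedback-overhead trade-off (not the paper's actual cost function):

```python
import random
import math

def firefly_optimize(objective, n_fireflies=15, n_iter=100,
                     lb=1.0, ub=16.0, beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    """Minimal 1-D firefly algorithm: each firefly moves toward 'brighter'
    (lower-objective) neighbours with distance-damped attractiveness, plus
    a small random walk. Parameters follow the usual textbook defaults."""
    rng = random.Random(seed)
    xs = [rng.uniform(lb, ub) for _ in range(n_fireflies)]
    for _ in range(n_iter):
        vals = [objective(x) for x in xs]
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if vals[j] < vals[i]:  # firefly j is brighter: move i toward it
                    r = abs(xs[i] - xs[j])
                    beta = beta0 * math.exp(-gamma * r * r)
                    xs[i] += beta * (xs[j] - xs[i]) + alpha * (rng.random() - 0.5)
                    xs[i] = min(max(xs[i], lb), ub)  # clamp to feasible bits
                    vals[i] = objective(xs[i])
    best_b = min(xs, key=objective)
    return best_b, objective(best_b)

# Toy objective: quantization error shrinks with more feedback bits while
# overhead grows linearly -- purely illustrative numbers.
obj = lambda b: 100.0 * 2.0 ** (-b) + 0.05 * b
best_b, best_val = firefly_optimize(obj)
```

In practice the candidate would be rounded to an integer bit count; the continuous relaxation keeps the sketch short.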
222. A toolkit for quantification of biological age from blood chemistry and organ function test data: BioAge
- Author
-
Dayoon Kwon and Daniel W. Belsky
- Subjects
Aging ,CALERIE ,Computer science ,Longevity ,Methods Paper ,Population ,Machine learning ,computer.software_genre ,law.invention ,Software ,Randomized controlled trial ,law ,Humans ,education ,Caloric Restriction ,education.field_of_study ,Geroscience ,business.industry ,Clinical trial ,Blood chemistry ,Artificial intelligence ,Geriatrics and Gerontology ,business ,computer ,Biomarkers ,Test data - Abstract
Methods to quantify biological aging are emerging as new measurement tools for epidemiology and population science and have been proposed as surrogate measures for healthy lifespan extension in geroscience clinical trials. Publicly available software packages to compute biological aging measurements from DNA methylation data have accelerated dissemination of these measures and generated rapid gains in knowledge about how different measures perform in a range of datasets. Biological age measures derived from blood chemistry data were introduced at the same time as the DNA methylation measures and, in multiple studies, demonstrate superior performance to these measures in prediction of healthy lifespan. However, their dissemination has been slow by comparison, resulting in a significant gap in knowledge. We developed a software package to help address this knowledge gap. The BioAge R package, available for download at GitHub (http://github.com/dayoonkwon/BioAge), implements three published methods to quantify biological aging based on analysis of chronological age and mortality risk: Klemera-Doubal biological age, PhenoAge, and homeostatic dysregulation. The package allows users to parametrize measurement algorithms using custom sets of biomarkers, to compare the resulting measurements to published versions of the Klemera-Doubal method and PhenoAge algorithms, and to score the measurements in new datasets. We applied BioAge to safety lab data from the CALERIE™ randomized controlled trial, the first-ever human trial of long-term calorie restriction in healthy, non-obese adults, to test effects of intervention on biological aging. Results contribute evidence that CALERIE intervention slowed biological aging. BioAge is a toolkit to facilitate measurement of biological age for geroscience. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s11357-021-00480-5.
- Published
- 2021
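Of the three measures the BioAge abstract lists, homeostatic dysregulation is the simplest to illustrate: it is the Mahalanobis distance of an individual's biomarker profile from a healthy reference sample. A hedged re-implementation sketch (not the BioAge package's code; the biomarker data are synthetic):

```python
import numpy as np

def homeostatic_dysregulation(biomarkers, reference):
    """Mahalanobis distance of each row of `biomarkers` from the centroid of
    a healthy `reference` sample, using the reference covariance."""
    mu = reference.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))
    diff = biomarkers - mu
    # quadratic form diff @ cov_inv @ diff.T, one value per individual
    return np.sqrt(np.einsum('ij,jk,ik->i', diff, cov_inv, diff))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=(500, 4))       # healthy reference sample
subjects = np.vstack([np.zeros(4), np.full(4, 3.0)])  # typical vs. dysregulated
hd = homeostatic_dysregulation(subjects, reference)
```

The second subject, far from the reference centroid on every biomarker, scores a much larger distance than the first.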
223. Simultaneous serotonin and dopamine monitoring across timescales by rapid pulse voltammetry with partial least squares regression
- Author
-
Katie A Perrotta, Anne M. Andrews, Miguel Alcañiz Fillol, Hongyan Yang, Rahul Iyer, Xinyi Cheng, Merel Dagher, and Cameron S Movassaghi
- Subjects
Serotonin ,Analyte ,Dopamine ,Capacitive sensing ,Biochemistry ,Analytical Chemistry ,TECNOLOGIA ELECTRONICA ,Machine Learning ,Escitalopram ,Carbon Fiber ,In vivo ,Partial least squares regression ,Electrochemistry ,medicine ,Animals ,Least-Squares Analysis ,Voltammetry ,Neurotransmitter Agents ,Background subtraction ,Chemistry ,Pulse (signal processing) ,Brain ,Signal Processing, Computer-Assisted ,Electrochemical Techniques ,Neurotransmitters ,Mice, Inbred C57BL ,Calibration ,Female ,Biological system ,Microelectrodes ,Selective Serotonin Reuptake Inhibitors ,Software ,Research Paper ,medicine.drug - Abstract
[EN] Many voltammetry methods have been developed to monitor brain extracellular dopamine levels. Fewer approaches have been successful in detecting serotonin in vivo. No voltammetric techniques are currently available to monitor both neurotransmitters simultaneously across timescales, even though they play integrated roles in modulating behavior. We provide proof-of-concept for rapid pulse voltammetry coupled with partial least squares regression (RPV-PLSR), an approach adapted from multi-electrode systems (i.e., electronic tongues) used to identify multiple components in complex environments. We exploited small differences in analyte redox profiles to select pulse steps for RPV waveforms. Using an intentionally designed pulse strategy combined with custom instrumentation and analysis software, we monitored basal and stimulated levels of dopamine and serotonin. In addition to faradaic currents, capacitive currents were important factors in analyte identification arguing against background subtraction. Compared to fast-scan cyclic voltammetry-principal components regression (FSCV-PCR), RPV-PLSR better differentiated and quantified basal and stimulated dopamine and serotonin associated with striatal recording electrode position, optical stimulation frequency, and serotonin reuptake inhibition. The RPV-PLSR approach can be generalized to other electrochemically active neurotransmitters and provides a feedback pipeline for future optimization of multi-analyte, fit-for-purpose waveforms and machine learning approaches to data analysis., Funding from the National Institute on Drug Abuse (DA045550) and National Institute of Mental Health (MH106806) was received. CSM was supported by the National Science Foundation Graduate Research Fellowship Program (DGE-1650604 and DGE-2034835). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
- Published
- 2021
224. Physiologically Based Pharmacokinetic Modeling of Rosuvastatin to Predict Transporter-Mediated Drug-Drug Interactions
- Author
-
Naoki Ishiguro, José David Gómez-Mantilla, Peter Stopfer, Valerie Nock, and Nina Hanke
- Subjects
Male ,Pharmaceutical Science ,Pharmacology ,Feces ,Ethnicity ,ATP Binding Cassette Transporter, Subfamily G, Member 2 ,Gemfibrozil ,Drug Interactions ,Pharmacology (medical) ,Rosuvastatin Calcium ,media_common ,Liver-Specific Organic Anion Transporter 1 ,Probenecid ,Age Factors ,Neoplasm Proteins ,Liver ,Area Under Curve ,Molecular Medicine ,Rifampin ,Research Paper ,Biotechnology ,medicine.drug ,Adult ,Drug ,Physiologically based pharmacokinetic modelling ,Drug-drug interactions (DDIs) ,Organic anion transporting polypeptide 1B1/1B3 (OATP1B1/1B3) ,media_common.quotation_subject ,Cmax ,Models, Biological ,Rosuvastatin ,Solute Carrier Organic Anion Transporter Family Member 1B3 ,Sex Factors ,Pharmacokinetics ,Physiologically based pharmacokinetic modeling (PBPK) ,medicine ,Humans ,CYP2C9 ,business.industry ,Body Weight ,Organic Chemistry ,Model-informed drug discovery and development (MID3) ,nutritional and metabolic diseases ,Biological Transport ,Body Height ,business ,Software - Abstract
Purpose: To build a physiologically based pharmacokinetic (PBPK) model of the clinical OATP1B1/OATP1B3/BCRP victim drug rosuvastatin for the investigation and prediction of its transporter-mediated drug-drug interactions (DDIs). Methods: The rosuvastatin model was developed using the open-source PBPK software PK-Sim®, following a middle-out approach. 42 clinical studies (dosing range 0.002–80.0 mg), providing rosuvastatin plasma, urine and feces data, positron emission tomography (PET) measurements of tissue concentrations, and 7 different rosuvastatin DDI studies with rifampicin, gemfibrozil and probenecid as the perpetrator drugs, were included to build and qualify the model. Results: The carefully developed and thoroughly evaluated model adequately describes the analyzed clinical data, including blood, liver, feces and urine measurements. The processes implemented to describe the rosuvastatin pharmacokinetics and DDIs are active uptake by OATP2B1, OATP1B1/OATP1B3 and OAT3, active efflux by BCRP and Pgp, metabolism by CYP2C9, and passive glomerular filtration. The available clinical rifampicin, gemfibrozil and probenecid DDI studies were modeled using in vitro inhibition constants without adjustments. The good prediction of DDIs was demonstrated by simulated rosuvastatin plasma profiles, DDI AUClast ratios (AUClast during DDI/AUClast without co-administration) and DDI Cmax ratios (Cmax during DDI/Cmax without co-administration), with all simulated DDI ratios within 1.6-fold of the observed values. Conclusions: A whole-body PBPK model of rosuvastatin was built and qualified for the prediction of rosuvastatin pharmacokinetics and transporter-mediated DDIs. The model is freely available in the Open Systems Pharmacology model repository, to support future investigations of rosuvastatin pharmacokinetics, rosuvastatin therapy and DDI studies during model-informed drug discovery and development (MID3).
- Published
- 2021
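The DDI metrics in the abstract above are plain ratios of AUClast and Cmax with and without the perpetrator drug. A sketch using the linear trapezoidal rule on made-up concentration-time profiles (the curves are illustrative, not the model's output):

```python
import numpy as np

def auc_last(times, conc):
    """AUC up to the last measured time point (linear trapezoidal rule)."""
    return float(0.5 * ((conc[1:] + conc[:-1]) * (times[1:] - times[:-1])).sum())

# Hypothetical plasma profiles (one-compartment, first-order absorption):
# with the perpetrator, hepatic uptake inhibition raises exposure and slows
# apparent elimination. All rate constants are invented for illustration.
t = np.linspace(0.0, 24.0, 49)                       # hours, 0.5 h sampling
alone = 10.0 * (np.exp(-0.2 * t) - np.exp(-1.5 * t))
with_ddi = 25.0 * (np.exp(-0.1 * t) - np.exp(-1.5 * t))

auc_ratio = auc_last(t, with_ddi) / auc_last(t, alone)   # DDI AUClast ratio
cmax_ratio = with_ddi.max() / alone.max()                # DDI Cmax ratio
```

A ratio above 1 indicates increased victim-drug exposure during co-administration, which is what the rosuvastatin model reproduces within 1.6-fold of observations.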
225. Benchmarking Various Radiomic Toolkit Features While Applying the Image Biomarker Standardization Initiative toward Clinical Translation of Radiomic Analysis
- Author
-
Steven Cen, Vinay Duddalwar, Darryl Hwang, Bhushan Desai, Bino Varghese, Mingxi Lei, Afshin Azadikhah, Assad A. Oberai, and Xiaomeng Lei
- Subjects
Standardization ,Computer science ,Feature extraction ,computer.software_genre ,law.invention ,Software ,law ,Histogram ,Image Processing, Computer-Assisted ,Humans ,Radiology, Nuclear Medicine and imaging ,Original Paper ,Radiological and Ultrasound Technology ,Phantoms, Imaging ,business.industry ,Benchmarking ,Reference Standards ,Computer Science Applications ,Feature (computer vision) ,Benchmark (computing) ,Venn diagram ,Data mining ,business ,computer ,Biomarkers - Abstract
The Image Biomarker Standardization Initiative (IBSI) was formed to standardize the extraction of quantifiable imaging metrics. Despite its efforts, there remains a lack of consensus or established guidelines regarding radiomic feature terminology, the underlying mathematics, and their implementation across various software programs. This creates a scenario where features extracted using different toolboxes cannot be used to build or validate the same model, leading to non-generalizable radiomic results. In this study, the IBSI-established phantom and benchmark values were used to compare the variation of radiomic features across 6 publicly available software programs and 1 in-house radiomics pipeline. All IBSI-standardized features (11 classes, 173 in total) were extracted. The relative differences between the extracted feature values from the different software programs and the IBSI benchmark values were calculated to measure inter-software agreement. To better understand the variations, features were further grouped into 3 categories according to their properties: 1) morphology, 2) statistic/histogram, and 3) texture features. While good agreement was observed for the majority of radiomic features across the tested programs, relatively poor agreement was observed for morphology features. Significant differences were also found in programs that use different gray-level discretization approaches. Since these software programs do not include all IBSI features, the level of quantitative assessment for each category was analyzed using Venn and UpSet diagrams and quantified using two ad hoc metrics. Morphology features earned the lowest scores for both metrics, indicating that morphological features are not consistently evaluated among software programs. We conclude that radiomic features calculated using different software programs may not be interchangeable. Further studies are needed to standardize the workflow of radiomic feature extraction. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s10278-021-00506-6.
- Published
- 2021
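The inter-software agreement measure in the abstract above is the relative difference between an extracted feature value and the IBSI benchmark value. A minimal sketch (the feature names, values, and 5% tolerance are invented for illustration):

```python
def relative_difference(extracted, benchmark):
    """Relative difference of a toolkit's feature value vs. the IBSI
    benchmark value for the same feature on the standard phantom."""
    return abs(extracted - benchmark) / abs(benchmark)

# Hypothetical benchmark and toolkit outputs for two features.
ibsi_benchmark = {"mean_intensity": -48.4, "volume_mesh": 556000.0}
toolkit_a = {"mean_intensity": -48.4, "volume_mesh": 558000.0}

diffs = {k: relative_difference(toolkit_a[k], ibsi_benchmark[k])
         for k in ibsi_benchmark}
agrees = {k: v <= 0.05 for k, v in diffs.items()}  # assumed 5% tolerance
```

Repeating this per feature and per toolkit, and tallying which features each program implements at all, yields the Venn/UpSet-style comparison the study reports.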
226. Breaking TrustZone memory isolation and secure boot through malicious hardware on a modern FPGA-SoC
- Author
-
Mathieu Gross, Georg Sigl, Andreas Zankl, and Nisha Jacob
- Subjects
Flexibility (engineering) ,Computer Networks and Communications ,business.industry ,Computer science ,Cryptography ,Regular Paper ,FPGA-SoCs ,Memory and peripherals isolation ,Hardware trojan ,DMA attack ,Trusted execution environment ,Secure boot ,Reconfigurable computing ,ddc ,Isolation (database systems) ,Central processing unit ,business ,Field-programmable gate array ,Direct memory access ,Software ,Computer hardware ,Block (data storage) - Abstract
FPGA-SoCs are heterogeneous embedded computing platforms consisting of reconfigurable hardware and high-performance processing units. This combination offers flexibility and good performance for the design of embedded systems. However, allowing the sharing of resources between an FPGA and an embedded CPU enables possible attacks from one system on the other. This work demonstrates that a malicious hardware block contained inside the reconfigurable logic can manipulate the memory and peripherals of the CPU. Previous works have already considered direct memory access attacks from malicious logic on platforms containing no memory isolation mechanism. In this work, such attacks are investigated on a modern platform which contains state-of-the-art memory and peripherals isolation mechanisms. We demonstrate two attacks capable of compromising a Trusted Execution Environment based on ARM TrustZone and show a new attack capable of bypassing the secure boot configuration set by a device owner via the manipulation of Battery-Backed RAM and eFuses from malicious logic.
- Published
- 2021
227. An analysis of the utility of digital materials for high school students with intellectual disability and their effects on academic success
- Author
-
Deveci Topal, Arzu, Kolburan Geçer, Aynur, and Çoban Budak, Esra
- Subjects
Computer Networks and Communications ,Turkish ,Process (engineering) ,Special education ,Syllabus ,Single-subject research models ,Intellectual disability ,ComputingMilieux_COMPUTERSANDEDUCATION ,medicine ,Mathematics education ,And digital materials ,Computer Applications ,business.industry ,Usability ,medicine.disease ,language.human_language ,Human-Computer Interaction ,Vocational education ,language ,Students with intellectual disability ,Long Paper ,Computer applications ,business ,Software ,Information Systems - Abstract
This study examines the usability of digital materials in establishing a classroom environment in which technology is integrated into teaching practices using tablet computers and interactive smart boards. The study was conducted at a special education vocational school where students with intellectual disability receive training. Technology was integrated into the Natural Disasters unit (erosion, landslide, flood, earthquake, and a digital story developed on the subject of flood) in the Social Science syllabus. This study also aims to develop multimedia applications and apply them to teaching activities, and additionally to increase students' learning competencies in Social Sciences. The study involved eight students with mild intellectual disability at a vocational high school. A multiple-probe design, one of the single-subject research models, was used. Comparison of the results revealed that students' post-test scores increased significantly compared to their pre-test scores, and that the teaching materials had a significantly positive impact on the learning process. Moreover, the effect of the prepared digital materials on learning was found to be high in terms of application in special education schools. Students indicated that they liked the activities they engaged in on computers, as well as the interactive multiple-choice questions, and wished to have such creative applications made available for other subjects such as Turkish, Mathematics, Music, and Art.
- Published
- 2021
228. Comparison of methods for quantitative biomolecular interaction analysis
- Author
-
Monika Conrad, Peter Fechner, Günter Gauglitz, and Günther Proll
- Subjects
Biomolecular interaction analysis ,Computer science ,business.industry ,Spectrum Analysis ,Biochemistry ,Antibodies ,Association rate constant ,Analytical Chemistry ,Kinetics ,Noise ,Software ,Reflectometric interference spectroscopy ,Binding kinetics ,Data Interpretation, Statistical ,Black box ,Simulated data ,business ,Biological system ,Focus (optics) ,Pseudo-first-order kinetics ,Kinetic rate constant ,Research Paper - Abstract
To perform good kinetic experiments, not only must the experimental conditions be optimized, but the evaluation procedure as well. The focus of this work is an in-depth comparison of different approaches and algorithms for determining kinetic rate constants in biomolecular interaction analysis (BIA). The different algorithms are applied not only to flawless simulated data, but also to real-world measurements. We compare five mathematical approaches for evaluating binding curves following pseudo-first-order kinetics at different noise levels. In addition, reflectometric interference spectroscopy (RIfS) measurements of two antibodies are evaluated to determine their binding kinetics. The advantages and disadvantages of each approach are investigated and discussed in detail. In summary, we raise awareness of how to evaluate and judge results from BIA by using different approaches rather than relying on “black box” closed (commercial) software packages. Supplementary Information: The online version contains supplementary material available at 10.1007/s00216-021-03623-x.
- Published
- 2021
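The pseudo-first-order binding model the abstract refers to is R(t) = Rmax(1 − exp(−k_obs·t)). On noise-free data the observed rate constant can be recovered by a simple linearized fit, one of the elementary approaches such comparisons start from (values are illustrative; real measurements need the noise-robust methods the paper compares):

```python
import numpy as np

# Synthetic ideal binding curve: R(t) = Rmax * (1 - exp(-k_obs * t)),
# where k_obs = ka * C + kd under pseudo-first-order conditions.
k_obs_true, rmax = 0.8, 100.0
t = np.linspace(0.05, 5.0, 60)
response = rmax * (1.0 - np.exp(-k_obs_true * t))

# Linearization: log(Rmax - R) = log(Rmax) - k_obs * t, so a straight-line
# fit of log(Rmax - R) against t recovers k_obs from its slope.
slope, _ = np.polyfit(t, np.log(rmax - response), 1)
k_obs_fit = -slope
```

The linearized fit is exact here, but it amplifies noise near saturation (where Rmax − R approaches zero), which is precisely why comparing evaluation approaches at different noise levels matters.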
229. Papers in this issue
- Author
-
Howard James
- Subjects
Artificial Intelligence ,Management science ,Computer science ,Computational Science and Engineering ,Software - Published
- 1997
230. A Novel ABRM Model for Predicting Coal Moisture Content
- Author
-
Zhang, Fan, Li, Hao, Xu, ZhiChao, and Chen, Wei
- Subjects
Coal moisture content ,Mechanical Engineering ,Deep learning ,complex mixtures ,Meteorological elements ,Industrial and Manufacturing Engineering ,respiratory tract diseases ,Artificial Intelligence ,Control and Systems Engineering ,otorhinolaryngologic diseases ,Regular Paper ,Electrical and Electronic Engineering ,LSTM ,CNN ,Software - Abstract
Coal moisture content monitoring plays an important role in carbon reduction and in clean-energy decisions for coal transportation and storage. Traditional coal moisture content detection mechanisms rely heavily on detection equipment, which can be expensive or difficult to deploy under field conditions. To achieve fast prediction of coal moisture content, a novel neural network model based on an attention mechanism and a bidirectional ResNet-LSTM structure (ABRM) is proposed in this paper. Coal moisture content is predicted by training the model to learn the relationship between changes in coal moisture content and meteorological conditions. The experimental results show that the proposed method achieves superior moisture content prediction accuracy compared with other state-of-the-art methods, and that the ABRM model shows the greatest potential for predicting shifts in coal moisture content driven by meteorological elements.
- Published
- 2022
231. Best student paper award
- Author
-
Günter Mayer and Vladik Kreinovich
- Subjects
Computational Mathematics ,Computer science ,Applied Mathematics ,Software - Published
- 1996
232. Load step reduction for adjoint sensitivity analysis of finite strain elastoplasticity
- Author
-
Wenjia Wang, Peter M. Clausen, and Kai-Uwe Bletzinger
- Subjects
Research Paper ,Adjoint sensitivity analysis ,Finite strain ,Elastoplasticity ,Multiplicative decomposition ,Load step reduction ,Work hardening ,Control and Optimization ,Control and Systems Engineering ,Computer Graphics and Computer-Aided Design ,Software ,ddc ,Computer Science Applications - Abstract
In this paper, load step reduction techniques are investigated for adjoint sensitivity analysis of path-dependent nonlinear finite element systems. In particular, the focus is on finite strain elastoplasticity with typical hardening models. The aim is to reduce the computational cost of the adjoint sensitivity implementation. The adjoint sensitivity formulation is derived with the multiplicative decomposition of the deformation gradient, which is applicable to finite strain elastoplasticity. Two properties of adjoint variables are investigated and theoretically proved under certain prerequisites. Based on these properties, load step reduction rules in the sensitivity analysis are discussed. The efficiency of the load step reduction and the applicability to isotropic hardening and kinematic hardening models are numerically demonstrated. Examples include a small-scale cantilever beam structure and a large-scale conrod structure under large plastic deformations.
- Published
- 2021
233. Canadian professor's interval paper chosen best in fuzzy theory and technology
- Author
-
Paul P. Wang
- Subjects
Computational Mathematics ,business.industry ,Applied Mathematics ,Fuzzy set ,Interval (mathematics) ,Artificial intelligence ,Type-2 fuzzy sets and systems ,business ,Fuzzy logic ,Software ,Mathematics - Published
- 1995
234. HELF (Haptic Encoded Language Framework): a digital script for deaf-blind and visually impaired
- Author
-
Simerneet Singh, Vasu Goel, and Nishtha Jatana
- Subjects
Computer Networks and Communications ,Computer science ,Visually impaired ,business.industry ,media_common.quotation_subject ,Vibrotactile ,Haptics ,People with visual and hearing impairment ,USable ,Blind ,Digital media ,Human-Computer Interaction ,Assistive technology ,Human–computer interaction ,Reading (process) ,Short Paper ,business ,Deaf blind ,Software ,Information Systems ,media_common ,Haptic technology ,Gesture - Abstract
Purpose: Digital media has brought a revolution, making the world a global village. For people who are visually impaired, and people with both visual and hearing impairment, navigating the digital world can be as precarious as moving through the real world. To enable them to connect with the digital world, we propose the Haptic Encoded Language Framework (HELF), a solution that uses haptic technology to let them write digital text using swiping gestures and understand text through vibrations. Method: We developed an Android application to present the concept of HELF and evaluate its performance. We tested the application on 13 users (five visually impaired and eight sighted individuals). Results: The preliminary exploratory analysis of the proposed framework using the Android application reveals encouraging results. Overall, reading accuracy was approximately 91%, and the average CPM was 25.7. Conclusion: The volunteering users of the HELF Android application found it useful as a means of using digital media and recommended its use as an assistive technology for the visually challenged. Their performance with the application motivates further research and development to make HELF more usable by people who are visually impaired and people with visual and hearing impairment.
- Published
- 2021
235. Software-Based Method for Automated Segmentation and Measurement of Wounds on Photographs Using Mask R-CNN: a Validation Study
- Author
-
Hannah Syrek, Michael W. Müller, Sven Yves Vetter, Eric Mandelka, Paul Alfred Grützner, Nils Beisemann, Jan Siad El Barbari, and Maxim Privalov
- Subjects
Original Paper ,Validation study ,Radiological and Ultrasound Technology ,Point (typography) ,business.industry ,Computer science ,Automated segmentation ,Repeated measures design ,Pattern recognition ,Convolutional neural network ,Computer Science Applications ,Software ,Consistency (statistics) ,Image Processing, Computer-Assisted ,Humans ,Radiology, Nuclear Medicine and imaging ,Segmentation ,Neural Networks, Computer ,Artificial intelligence ,business - Abstract
In clinical routine, wound documentation is one of the most important contributing factors in treating patients with acute or chronic wounds. The wound documentation process is currently very time-consuming, often examiner-dependent, and therefore imprecise. This study aimed to validate a software-based method for automated segmentation and measurement of wounds on photographic images using the Mask R-CNN (Region-based Convolutional Neural Network). During the validation, five medical experts manually segmented an independent dataset of 35 wound photographs at two different points in time, with an interval of 1 month. Simultaneously, the dataset was automatically segmented using the Mask R-CNN. Afterwards, the segmentation results were compared, and intra- and inter-rater analyses were performed. In the statistical evaluation, an analysis of variance (ANOVA) was carried out and Dice coefficients were calculated. The ANOVA showed no statistically significant differences across all raters and the network in the first segmentation round (F = 1.424 and p > 0.228) or the second segmentation round (F = 0.9969 and p > 0.411). The repeated measures analysis demonstrated no statistically significant differences in the segmentation quality of the medical experts over time (F = 6.05 and p > 0.09). However, a certain intra-rater variability was apparent, whereas the Mask R-CNN consistently provided identical segmentations regardless of the point in time. Using the software-based method for segmentation and measurement of wounds on photographs can accelerate the documentation process and improve the consistency of measured values while maintaining quality and precision.
- Published
- 2021
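Segmentation agreement in validation studies like the one above is commonly scored with the Dice similarity coefficient. A minimal sketch on toy binary masks (the example masks are invented, not the study's data):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks:
    2*|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

rater = np.zeros((10, 10), dtype=int)
rater[2:8, 2:8] = 1          # expert's wound outline (toy 6x6 square)
model = np.zeros((10, 10), dtype=int)
model[3:8, 2:8] = 1          # network's outline, shifted by one row
score = dice(rater, model)
```

Here |A| = 36, |B| = 30 and the overlap is 30 pixels, so the score is 60/66 ≈ 0.909; identical masks (as the Mask R-CNN reproduces on repeat runs) would score exactly 1.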
236. Production and optimization of biodiesel from parsley seed oil using KOH as catalyst for automobiles technology
- Author
-
Mohamed Belaid, Tien-Chien Jen, and Sarah Oluwabunmi Bitire
- Subjects
Reaction conditions ,0209 industrial biotechnology ,Potassium hydroxide ,Biodiesel ,Materials science ,Mechanical Engineering ,food and beverages ,02 engineering and technology ,Raw material ,Pulp and paper industry ,complex mixtures ,Industrial and Manufacturing Engineering ,Computer Science Applications ,Catalysis ,chemistry.chemical_compound ,020901 industrial engineering & automation ,chemistry ,Control and Systems Engineering ,Yield (chemistry) ,Response surface methodology ,Industrial and production engineering ,Software - Abstract
The production and optimization of biodiesel from a novel feedstock (parsley seed oil) using catalytic potassium hydroxide were investigated in this work with respect to the influence of the reaction parameters: reaction temperature, reaction time, and alcohol-to-oil ratio on the biodiesel yield. The optimal conditions for generating parsley biodiesel were investigated with the aid of a response surface methodology design tool, while the analysis of variance showed that the reaction parameters employed had a significant influence on the yield of biodiesel from parsley. A biodiesel yield of 98% was the optimum achieved from the experiment conducted under reaction conditions of 60°C, 30 min, and 6:1 for the reaction temperature, reaction time, and alcohol-to-oil ratio, respectively, while a biodiesel yield of 98.18% was predicted from the data analysis as the optimal yield. This study established optimal reaction temperature, reaction time, and alcohol-to-oil ratio conditions for producing biodiesel from parsley seed oil and also identified the fatty acid methyl esters (FAME) present in the biodiesel. The biodiesel's fuel properties were within the requirements of standard D6751 of the American Society for Testing and Materials (ASTM).
- Published
- 2021
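A response-surface model of the kind used above is, at its core, a low-order polynomial fitted by least squares, whose stationary point gives the predicted optimum. A one-factor sketch with invented yield data (not the paper's measurements, which varied three factors jointly):

```python
import numpy as np

# Hypothetical yield measurements at several reaction temperatures; the
# response rises, peaks, and falls off, so a quadratic surface fits it.
temps = np.array([40.0, 45.0, 50.0, 55.0, 60.0, 65.0])   # °C
yields = np.array([82.0, 88.0, 93.0, 96.5, 98.0, 96.0])  # % biodiesel yield

# Second-order response surface in one factor: yield ≈ a*T^2 + b*T + c
coeffs = np.polyfit(temps, yields, 2)

# Stationary point of the fitted parabola = predicted optimal temperature
t_opt = -coeffs[1] / (2.0 * coeffs[0])
```

With three factors, the same idea extends to a full quadratic in temperature, time, and alcohol-to-oil ratio, and ANOVA on the fitted terms tests each parameter's significance, as reported in the abstract.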
237. Morse glasses: an IoT communication system based on Morse code for users with speech impairments
- Author
-
Nayera Tarek, Mariam Abo Mandour, Reem Ali, Sara Yahia, Sara El-Metwally, Nada El-Madah, Bassant Mohamed, and Dina Mostafa
- Subjects
Motor neuron diseases ,Inclusion (disability rights) ,Computer science ,Wearable computer ,02 engineering and technology ,Morse code ,Communications system ,IoT technology ,Theoretical Computer Science ,law.invention ,law ,Human–computer interaction ,Regular Paper ,0202 electrical engineering, electronic engineering, information engineering ,Eye's blinking tracking system ,Android (operating system) ,Computer communication networks ,Numerical Analysis ,business.industry ,020206 networking & telecommunications ,Computer Science Applications ,Computational Mathematics ,Computational Theory and Mathematics ,020201 artificial intelligence & image processing ,Internet of Things ,business ,Software - Abstract
The advent of the Internet of Things has opened opportunities for people with disabilities, increasing their inclusion and productivity in society. Most smart sensing devices, including wearable ones for users with speech impairments, are expensive and not affordable for patients in low-income countries such as Egypt. Morse Glasses is a cost-efficient wearable device based on IoT technology and a modified Morse code that tracks the patient's eye blinks and translates them into generated speech. A sequence of Morse-encoded alphabets/sentences, along with frequently used ones, is displayed and heard on any Android device with the Morse Glasses mobile application installed. At a cost of less than $30, patients with motor neuron diseases such as Amyotrophic Lateral Sclerosis (ALS) can communicate easily with others, express their needs, and simply live their lives normally.
- Published
- 2021
238. Guest editors preface: Selected papers from the ISMIS 1993 symposium
- Author
-
J. Komorowski and Z. W. Ras
- Subjects
Artificial Intelligence ,Computer Networks and Communications ,Hardware and Architecture ,Computer science ,Library science ,Software ,Information Systems - Published
- 1993
239. Call for papers for a special issue on Content Based Retrieval
- Author
-
A. Desai Narasimhalu
- Subjects
Information retrieval ,Computer Networks and Communications ,Computer science ,business.industry ,Multimedia database ,Cryptography ,Computer graphics ,Hardware and Architecture ,Content analysis ,Human–computer information retrieval ,Media Technology ,business ,Computer communication networks ,Software ,Information Systems ,Content based retrieval - Abstract
Content analysis and understanding is an emerging area of multimedia research. Multimedia database applications are ready to leapfrog from handling simple BLOBS to retrieving information based on content understanding. The Multimedia Systems journal identifies this area as a significant research topic and invites original, unpublished contributions highlighting the issue, research results or applications in the area of Content Based Retrieval.
- Published
- 1993
240. A Holographic Augmented Reality Interface for Visualizing of MRI Data and Planning of Neurosurgical Procedures
- Author
-
Ernst L. Leiss, Ioannis Seimenis, Andrew G. Webb, Nikolaos V. Tsekos, Aaron T. Becker, Eleftherios P. Pappas, Theodosios Birbilis, Cristina Marie Morales Mojica, and Jose D. Velazco-Garcia
- Subjects
Intervention planning ,Computer science ,Interface (computing) ,Anatomical structures ,Holography ,Neurosurgery ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Neurosurgical Procedures ,030218 nuclear medicine & medical imaging ,law.invention ,User-Computer Interface ,03 medical and health sciences ,Imaging, Three-Dimensional ,Magnetic resonance imaging ,0302 clinical medicine ,Software ,law ,Human–computer interaction ,Radiology, Nuclear Medicine and imaging ,ComputingMethodologies_COMPUTERGRAPHICS ,Original Paper ,Augmented Reality ,Holographic visualization ,Radiological and Ultrasound Technology ,business.industry ,Perspective (graphical) ,Computer Science Applications ,Visualization ,Surgery, Computer-Assisted ,HoloLens ,Surgery ,Augmented reality ,business ,030217 neurology & neurosurgery ,Gesture - Abstract
The recent introduction of wireless head-mounted displays (HMD) promises to enhance 3D image visualization by immersing the user into 3D morphology. This work introduces a prototype holographic augmented reality (HAR) interface for the 3D visualization of magnetic resonance imaging (MRI) data for the purpose of planning neurosurgical procedures. The computational platform generates a HAR scene that fuses pre-operative MRI sets, segmented anatomical structures, and a tubular tool for planning an access path to the targeted pathology. The operator can manipulate the presented images and segmented structures and perform path-planning using voice and gestures. On-the-fly, the software uses defined forbidden-regions to prevent the operator from harming vital structures. In silico studies using the platform with a HoloLens HMD assessed its functionality and the computational load and memory for different tasks. A preliminary qualitative evaluation revealed that holographic visualization of high-resolution 3D MRI data offers an intuitive and interactive perspective of the complex brain vasculature and anatomical structures. This initial work suggests that immersive experiences may be an unparalleled tool for planning neurosurgical procedures.
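The forbidden-region check mentioned above can be illustrated with a toy geometric test. Spherical forbidden regions and a sampled straight access path are assumptions made for illustration, not the platform's actual representation:

```python
def path_is_safe(path, forbidden, tool_radius=1.0):
    """Return True if no sampled path point comes within tool_radius of any
    forbidden sphere, given as ((cx, cy, cz), r) pairs. Illustrative only."""
    for px, py, pz in path:
        for (cx, cy, cz), r in forbidden:
            d2 = (px - cx) ** 2 + (py - cy) ** 2 + (pz - cz) ** 2
            if d2 < (r + tool_radius) ** 2:
                return False
    return True

# A straight access path sampled in 5 mm steps, with one forbidden sphere:
path = [(0.0, 0.0, float(z)) for z in range(0, 11, 5)]
print(path_is_safe(path, [((0.0, 0.0, 20.0), 2.0)]))  # True
print(path_is_safe(path, [((0.0, 0.0, 9.0), 2.0)]))   # False
```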
- Published
- 2021
241. Using Interteaching to Promote Online Learning Outcomes
- Author
-
Adam T. Brewer, James W. Diller, Christopher A. Krebs, and Stephanie A. C. Kuhn
- Subjects
Original Paper ,Coronavirus disease 2019 (COVID-19) ,business.industry ,Best practice ,Online learning ,05 social sciences ,Quality education ,050301 education ,Interteaching ,Data science ,Feedback ,Education ,Variety (cybernetics) ,Software ,Asynchronous communication ,Developmental and Educational Psychology ,Online teaching ,0501 psychology and cognitive sciences ,Active student responding ,Psychology ,business ,0503 education ,050104 developmental & child psychology - Abstract
Due to the COVID-19 pandemic, educators have been forced to rapidly transition from in-person learning environments to completely online formats. Many of these educators have had little or no training or experience teaching online, contributing to stress and anxiety. Compounding this problem, there is a multitude of online learning technologies to choose from, which can be relatively costly and require an intensive production process. In an effort to provide immediate relief to those dealing with this problem, we detail how interteaching, an empirically supported behavioral teaching technique, can be used to cultivate an interactive online learning environment in either an asynchronous or synchronous format. Specifically, we describe some best practices and provide examples of how to generate active student responding (ASR) and deliver pinpointed performance-based feedback. We specifically reference the relatively easy-to-use online software Kaltura, but we hope our suggestions inspire others to develop and use these strategies across a variety of platforms in an effort to provide evidence-based, quality education during this crisis.
- Published
- 2021
242. Finite element analysis of bone remodelling with piezoelectric effects using an open-source framework
- Author
-
Bansod, Yogesh Deepak, Kebbach, Maeruan, Kluess, Daniel, Bader, Rainer, and van Rienen, Ursula
- Subjects
0301 basic medicine ,Open-source ,Materials science ,Bone density ,Finite Element Analysis ,Osteoporosis ,Piezoelectricity ,02 engineering and technology ,Bone tissue ,Models, Biological ,Bone remodeling ,03 medical and health sciences ,Bone remodelling ,Electricity ,Bone Density ,medicine ,Humans ,Hounsfield units (HU) ,Femur ,Bone mineral ,Original Paper ,Mechanical Engineering ,021001 nanoscience & nanotechnology ,medicine.disease ,Electric Stimulation ,Finite element method ,Finite element modelling ,030104 developmental biology ,medicine.anatomical_structure ,Open source ,Electrical stimulation ,Modeling and Simulation ,Bone Remodeling ,Stress, Mechanical ,Tomography, X-Ray Computed ,0210 nano-technology ,Software ,Biotechnology ,Biomedical engineering - Abstract
Bone tissue exhibits piezoelectric properties and is thus capable of transforming mechanical stress into electrical potential. Piezoelectricity has been shown to play a vital role in bone adaptation and remodelling processes. Therefore, to better understand the interplay between mechanical and electrical stimulation during these processes, strain-adaptive bone remodelling models with and without the piezoelectric effect were simulated using a Python-based open-source software framework. To discretise numerical attributes, the finite element method (FEM) was used for the spatial variables and an explicit Euler scheme for the temporal derivatives. The predicted bone apparent density distributions were evaluated qualitatively against the radiographic scan of a human proximal femur and quantitatively against the bone apparent density calculated using a bone mineral density (BMD) calibration phantom. Additionally, the effect of the initial bone density on the predicted density distribution was investigated both globally and locally. The simulation results showed that the electrically stimulated bone surface enhanced bone deposition, in good agreement with previous findings from the literature. Moreover, mechanical stimuli due to daily physical activities could be supported by therapeutic electrical stimulation to reduce bone loss in cases of physical impairment or osteoporosis. Implementing the bone remodelling algorithm in an open-source software framework facilitates accessibility and reproducibility of the finite element analyses performed.
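The explicit Euler time stepping can be sketched with a classic Huiskes-type strain-adaptive density rule. The rate law and all parameter values below are illustrative assumptions, not the paper's calibrated model:

```python
def remodel(rho, strain_energy, B=1.0, k=0.004, dt=1.0,
            rho_min=0.01, rho_max=1.74):
    """One explicit Euler step of d(rho)/dt = B * (U/rho - k), with the
    apparent density clamped to a physiological range (g/cm^3).
    Illustrative parameters, not the paper's calibrated values."""
    drho_dt = B * (strain_energy / rho - k)
    return min(max(rho + dt * drho_dt, rho_min), rho_max)

# Under a constant stimulus U, the density converges to the equilibrium U/k:
rho = 1.0
for _ in range(10000):
    rho = remodel(rho, strain_energy=0.002)
print(round(rho, 3))  # 0.5
```

In the actual framework this update runs per finite element, with the strain energy recomputed by FEM at each time step.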
- Published
- 2021
243. Effective and Safe Trajectory Planning for an Autonomous UAV Using a Decomposition-Coordination Method
- Author
-
Mohamed A. M′Barki, Mohammed Mestari, Imane Nizar, Zineb Hidila, El Hossein Illoussamen, and Adil Jaafar
- Subjects
Optimization problem ,business.industry ,Computer science ,Mechanical Engineering ,Stability (learning theory) ,Control unit ,Trajectory planning ,Optimal control ,Unmanned aerial vehicles ,Industrial and Manufacturing Engineering ,symbols.namesake ,Artificial Intelligence ,Control and Systems Engineering ,Control theory ,Lagrange multiplier ,Convergence (routing) ,Trajectory ,symbols ,Short Paper ,Wireless ,Electrical and Electronic Engineering ,Autonomous navigation ,business ,Software - Abstract
In this paper, we present a Decomposition-Coordination (DC) method for solving the problem of safe trajectory planning for an autonomous Unmanned Aerial Vehicle (UAV) in a dynamic environment. The purpose of this study is to make the UAV more reactive to its environment and to ensure the safety and optimality of the computed trajectory. In this implementation, we begin by selecting a dynamic model of a fixed-arm quadrotor UAV. We then define a multi-objective optimization problem, which we convert into a scalar optimization problem (SOP). The SOP is subdivided into smaller sub-problems that can be treated in parallel and in reasonable time. The DC principle employed in our method allows us to treat non-linearity at the local level, and coordination between the two levels is then achieved through Lagrange multipliers. Using the DC method, we can compute the optimal trajectory from the UAV's current position to a final target practically in real time. In this approach, we suppose that the environment is fully supervised by a Ground Control Unit (GCU). To ensure the safety of the trajectory, we consider a wireless communication network over which the UAV may communicate with the GCU and obtain the necessary information about environmental changes, allowing for successful collision avoidance during the flight until the intended goal is safely reached. An analysis of the DC algorithm's stability and convergence, as well as simulation results, is provided to demonstrate the advantages of our method and validate its potential.
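The two-level scheme, local sub-problems coordinated through Lagrange multipliers, can be illustrated on a toy coupled problem. The quadratic objective and step size here are invented for illustration and have nothing to do with the quadrotor dynamics:

```python
# Minimize (x - 1)^2 + (y - 3)^2 subject to the coupling constraint x = y.
# Each sub-problem is solved in closed form locally; the multiplier `lam`
# is updated at the coordination level by dual ascent.
def coordinate(steps=500, alpha=0.1):
    lam = 0.0
    for _ in range(steps):
        x = 1 - lam / 2          # argmin_x (x - 1)^2 + lam * x
        y = 3 + lam / 2          # argmin_y (y - 3)^2 - lam * y
        lam += alpha * (x - y)   # dual ascent on the constraint residual
    return x, y, lam

x, y, lam = coordinate()
print(round(x, 6), round(y, 6))  # both converge to 2.0
```

The real method handles non-linear dynamics inside each sub-problem; the toy version only shows how the multiplier reconciles independently solved parts.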
- Published
- 2021
244. Extraction of graphic primitives from images of paper based line drawings
- Author
-
Rangachar Kasturi and Ching-chuan Shih
- Subjects
Computer science ,business.industry ,Binary image ,Computer Graphics Metafile ,Image processing ,Linked list ,computer.file_format ,Computer Science Applications ,Vector graphics ,Hardware and Architecture ,Computer graphics (images) ,Computer vision ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Graphics ,Raster graphics ,business ,computer ,Software ,3D computer graphics - Abstract
A typical paper based document consists of regions of text, graphics, and halftone images. Developing algorithms for automating the input of such documents is the goal of this research. The scope of this paper is the design of algorithms to process raster oriented binary images of paper based graphics to obtain vector oriented description files. The image is preprocessed to suppress noise and other digitization artifacts. The graphical components are then represented as a union of maximal squares using the maximal square moving algorithm. The pointers describing the connectivity of the maximal squares are analyzed to generate a linked list of squares. Straight line segments, curves, junctions, and large areas are then identified after extensive processing of this linked list. The output of the algorithm is a graphics description file which is then used as input to a graphics recognition system.
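The maximal-square representation can be illustrated with the classic dynamic program for solid squares in a binary image. The paper's "maximal square moving algorithm" and its connectivity pointers are more elaborate, so this is only a sketch of the underlying idea:

```python
def square_sizes(img):
    """For each foreground pixel of a binary image (list of 0/1 rows),
    compute the side of the largest solid square whose bottom-right
    corner is that pixel (classic dynamic program)."""
    rows, cols = len(img), len(img[0])
    s = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            if img[i][j]:
                if i == 0 or j == 0:
                    s[i][j] = 1
                else:
                    s[i][j] = 1 + min(s[i - 1][j], s[i][j - 1],
                                      s[i - 1][j - 1])
    return s

# A solid 3x3 stroke is covered by one maximal 3x3 square:
stroke = [[1, 1, 1],
          [1, 1, 1],
          [1, 1, 1]]
print(square_sizes(stroke)[2][2])  # 3
```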
- Published
- 1989
245. An efficient characterization of submodular spanning tree games
- Author
-
Zhuan Khye Koh, Laura Sanità, and Discrete Mathematics
- Subjects
FOS: Computer and information sciences ,Computer Science::Computer Science and Game Theory ,05C57 Games on graphs (graph-theoretic aspects) ,Class (set theory) ,Theoretical computer science ,Discrete Mathematics (cs.DM) ,QA75 Electronic computers. Computer science ,General Mathematics ,Open problem ,05C05 Trees ,0211 other engineering and technologies ,0102 computer and information sciences ,02 engineering and technology ,01 natural sciences ,Outcome (game theory) ,Convexity ,Submodular set function ,Set (abstract data type) ,91A12 Cooperative games ,Computer Science - Computer Science and Game Theory ,Mathematics ,021103 operations research ,Spanning tree ,Full Length Paper ,ComputingMilieux_PERSONALCOMPUTING ,05C05 TREES, 05C57 GAMES ON GRAPHS (GRAPH-THEORETIC ASPECTS), 91A12 COOPERATIVE GAMES ,010201 computation theory & mathematics ,Game theory ,Software ,Computer Science and Game Theory (cs.GT) ,Computer Science - Discrete Mathematics - Abstract
Cooperative games form an important class of problems in game theory, where a key goal is to distribute a value among a set of players who are allowed to cooperate by forming coalitions. An outcome of the game is given by an allocation vector that assigns a value share to each player. A crucial aspect of such games is submodularity (or convexity). Indeed, convex instances of cooperative games exhibit several nice properties, e.g. regarding the existence and computation of allocations realizing some of the most important solution concepts proposed in the literature. For this reason, a relevant question is whether one can give a polynomial-time characterization of submodular instances, for prominent cooperative games that are in general non-convex. In this paper, we focus on a fundamental and widely studied cooperative game, namely the spanning tree game. An efficient recognition of submodular instances of this game was not known so far, and explicitly mentioned as an open question in the literature. We here settle this open problem by giving a polynomial-time characterization of submodular spanning tree games.
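Submodularity itself can be checked by brute force on tiny instances, directly from the definition f(A) + f(B) >= f(A ∪ B) + f(A ∩ B). This exponential check is only a sketch of the property, not the polynomial-time characterization the paper establishes:

```python
from itertools import combinations

def subsets(ground):
    """All subsets of a finite ground set, as frozensets."""
    items = list(ground)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

def is_submodular(f, ground):
    """Brute-force check of f(A) + f(B) >= f(A | B) + f(A & B) over all
    subset pairs. Exponential in |ground|; tiny instances only."""
    subs = subsets(ground)
    return all(f(A) + f(B) + 1e-9 >= f(A | B) + f(A & B)
               for A in subs for B in subs)

# Coverage functions are submodular; |S|^2 is strictly supermodular.
cover = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}}
coverage = lambda S: len(set().union(*(cover[i] for i in S)))
print(is_submodular(coverage, {1, 2, 3}))               # True
print(is_submodular(lambda S: len(S) ** 2, {1, 2, 3}))  # False
```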
- Published
- 2020
246. Design of experiment (DOE) applied to artificial neural network architecture enables rapid bioprocess improvement
- Author
-
Daniel Rodriguez-Granrose, Lara Silverman, Kevin T. Foley, Amanda Jones, Will Heaton, Hannah Loftus, and Terry Tandeski
- Subjects
Artificial neural network ,Process modeling ,Computer science ,Bioengineering ,Machine learning ,computer.software_genre ,Models, Biological ,Set (abstract data type) ,Software ,Leverage (statistics) ,Bioprocess ,business.industry ,MachineLearning ,Design of experiments ,Regression analysis ,General Medicine ,Design of experiments (DOE) ,Neural Networks, Computer ,Artificial intelligence ,business ,computer ,Research Paper ,Biotechnology - Abstract
Modern bioprocess development employs statistically optimized design of experiments (DOE) and regression modeling to find optimal bioprocess set points. Using modeling software such as JMP Pro, it is possible to leverage artificial neural networks (ANNs) to improve model accuracy beyond the capabilities of regression models. Herein, we bridge the gap between the DOE skill set and the machine learning skill set by demonstrating a novel use of DOE to systematically create and evaluate ANN architectures using JMP Pro software. Additionally, we run a mammalian cell culture process at set points derived from historical data, one-factor-at-a-time experimentation, standard least squares regression, and an ANN. This case study demonstrates the significant differences between one-factor-at-a-time and DOE bioprocess development, and the relative power of linear regression versus an ANN-DOE hybrid modeling approach. Supplementary Information: The online version contains supplementary material available at 10.1007/s00449-021-02529-3.
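The DOE side of the approach can be sketched with a plain full-factorial design over candidate architecture factors. The factor names and levels are illustrative, and the actual study used JMP Pro's designed experiments rather than this stdlib sketch:

```python
from itertools import product

# Candidate ANN architecture factors (illustrative levels, not the paper's).
factors = {
    "hidden_layers": [1, 2],
    "nodes_per_layer": [4, 8, 16],
    "activation": ["tanh", "relu"],
}

def full_factorial(factors):
    """Enumerate every level combination of a full-factorial design."""
    names = list(factors)
    return [dict(zip(names, combo))
            for combo in product(*(factors[n] for n in names))]

runs = full_factorial(factors)
print(len(runs))  # 2 * 3 * 2 = 12 runs
```

Each run dictionary would then parameterize one candidate ANN, whose validation error becomes the DOE response.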
- Published
- 2021
247. Temperature of the 45 steel in the minimum quantity lubricant milling with different biolubricants
- Author
-
Fengmin Zhou, Zhengjing Duan, Xiaopeng Li, Wei Gao, Lan Dong, Changhe Li, Xiaojie Lv, Xiufang Bai, and Fengbiao Zhang
- Subjects
0209 industrial biotechnology ,Materials science ,Mechanical Engineering ,Base oil ,02 engineering and technology ,Pulp and paper industry ,Environmentally friendly ,Industrial and Manufacturing Engineering ,Computer Science Applications ,Cottonseed ,Viscosity ,020901 industrial engineering & automation ,Vegetable oil ,Control and Systems Engineering ,Surface roughness ,Industrial and production engineering ,Lubricant ,Software - Abstract
Conventional flood cooling is neither economically viable nor eco-friendly in the cutting process. Vegetable oils are biodegradable and environmentally friendly base oils for minimum quantity lubricant (MQL) cutting, but the temperature field in the milling zone with different vegetable oils as lubricants remains unclear. The temperature of MQL milling of 45 steel was studied with cottonseed, palm, castor, soybean, and peanut oils as base oils. The effects of fatty acid composition, carbon chain length, thermal conductivity, and viscosity on the milling temperature were also considered. The temperature distribution when milling 45 steel with the five vegetable oils was simulated, showing that the cottonseed and palm oils had a good cooling effect. Experimental results were consistent with the temperature simulation analysis. Compared with flood milling, the temperature of MQL milling with the different vegetable oils decreased, with a 67.4% reduction when cottonseed oil was used. The surface quality of the workpiece was also improved during MQL milling: the surface roughness (Ra) values decreased by 41.5%, 53%, and 50.2% when the cottonseed, palm, and castor oils, respectively, were used as lubricating fluids, indicating that biolubricants, especially cottonseed, palm, and castor oils, are advantageous base oils for MQL milling.
- Published
- 2021
248. How does WeChat’s active engagement with health information contribute to psychological well-being through social capital?
- Author
-
Eun Hwa Jung and Lianshan Zhang
- Subjects
Bridging (networking) ,Computer Networks and Communications ,Context (language use) ,Psychological well-being ,Structural equation modeling ,WeChat ,Social capital ,0501 psychology and cognitive sciences ,Social media ,050107 human factors ,Health information engagement ,Mediation (Marxist theory and media studies) ,05 social sciences ,050301 education ,Affordance ,Human-Computer Interaction ,Capital (economics) ,Long Paper ,Mobile social media ,Psychology ,0503 education ,Social psychology ,Software ,Information Systems - Abstract
This study aims to examine how the users’ engagement with health information benefits their well-being and to demonstrate the underlying mechanism of the relationships through bonding and bridging social capital. An online survey was conducted with 522 WeChat users in China. Structural equation modeling using the maximum likelihood of estimation was employed to test the study’s hypothesized model. Bootstrapping methods were used to examine mediation effects. The results revealed that users’ liking, sharing, and commenting behaviors were positively related to the bonding and bridging capital accumulated on WeChat. These two forms of social capital were also positively associated with users’ psychological well-being, though bridging capital exerted more power in our research model. Moreover, both bonding and bridging capital mediated the relationship between WeChat affordances and psychological well-being. The findings shed new light on directions for leveraging mobile social media as an alternative means to bring about improvements in well-being in mobile-phone-saturated China. This is likely to be the first study that examines the mediating roles of bonding and bridging social capital on the relationship between users’ health information engagement and users’ psychological well-being. By providing robust findings by adopting the variable-centered approach in a health context, the findings of this study are promising for the extension and theoretical development of mobile social media research in the context of health information engagement.
- Published
- 2021
249. Model-driven engineering city spaces via bidirectional model transformations
- Author
-
Zhenjiang Hu, Christos Tsigkanos, Carlo Ghezzi, and Ennio Visconti
- Subjects
Theoretical computer science ,Computer science ,Special Section Paper ,CityGML ,020207 software engineering ,Context (language use) ,02 engineering and technology ,Digital twins ,01 natural sciences ,Domain (software engineering) ,Development (topology) ,Bidirectional model transformations ,Modeling and Simulation ,Informatics ,0103 physical sciences ,Physical space ,Synchronization (computer science) ,0202 electrical engineering, electronic engineering, information engineering ,Model-driven engineering ,Model-driven architecture ,010306 general physics ,computer ,Software ,computer.programming_language - Abstract
Engineering cyber-physical systems inhabiting contemporary urban spatial environments demands software engineering facilities to support design and operation. Tools and approaches in civil engineering and architectural informatics produce artifacts that are geometrical or geographical representations describing physical spaces. The models we consider conform to the CityGML standard; although relying on international standards and accessible in machine-readable formats, such physical space descriptions often lack semantic information that can be used to support analyses. In our context, analysis as commonly understood in software engineering refers to reasoning on properties of an abstracted model—in this case a city design. We support model-based development, firstly by providing a way to derive analyzable models from CityGML descriptions, and secondly, we ensure that changes performed are propagated correctly. Essentially, a digital twin of a city is kept synchronized, in both directions, with the information from the actual city. Specifically, our formal programming technique and accompanying technical framework assure that relevant information added, or changes applied to the domain (resp. analyzable) model are reflected back in the analyzable (resp. domain) model automatically and coherently. The technique developed is rooted in the theory of bidirectional transformations, which guarantees that synchronization between models is consistent and well behaved. Produced models can bootstrap graph-theoretic, spatial or dynamic analyses. We demonstrate that bidirectional transformations can be achieved in practice on real city models.
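The well-behavedness guarantee of bidirectional transformations is usually stated as lens laws: put(get(s), s) = s (GetPut) and get(put(v, s)) = v (PutGet). A toy lens over a dictionary "city model" illustrates them; the field names are invented and unrelated to CityGML's actual schema:

```python
def get(city):
    """Forward direction: project an analyzable view of building heights."""
    return [(b["name"], b["height"]) for b in city["buildings"]]

def put(view, city):
    """Backward direction: push an edited view into the source model,
    preserving attributes (e.g. material) that the view does not carry."""
    by_name = {b["name"]: b for b in city["buildings"]}
    buildings = []
    for name, height in view:
        b = dict(by_name.get(name, {"name": name}))
        b["height"] = height
        buildings.append(b)
    return {**city, "buildings": buildings}

city = {"srs": "EPSG:4326",
        "buildings": [{"name": "A", "height": 10, "material": "brick"}]}
assert put(get(city), city) == city                # GetPut: no-op round-trips
assert get(put([("A", 12)], city)) == [("A", 12)]  # PutGet: edits reflected
```

The framework in the paper derives such pairs from a single transformation program, so the two directions cannot drift apart, which a hand-written pair like this one cannot guarantee.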
- Published
- 2021
250. Rapid Segmentation of Renal Tumours to Calculate Volume Using 3D Interpolation
- Author
-
Maria A. Woodruff, Michael Y. Chen, Boon Kua, and Nicholas J. Rukin
- Subjects
medicine.medical_specialty ,medicine.medical_treatment ,Ellipsoid method ,Kidney ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,medicine ,Medical imaging ,Humans ,Radiology, Nuclear Medicine and imaging ,Segmentation ,Original Paper ,Radiological and Ultrasound Technology ,business.industry ,Trilinear interpolation ,Ellipsoid ,Kidney Neoplasms ,Nephrectomy ,Tumor Burden ,Computer Science Applications ,Concordance correlation coefficient ,Radiology ,business ,Software ,030217 neurology & neurosurgery ,Volume (compression) - Abstract
Small renal masses are commonly diagnosed with modern medical imaging. Renal tumour volume has been explored as a prognostic tool to help decide when intervention is needed and appears to provide additional prognostic information for smaller tumours compared with tumour diameter. However, the current method of calculating tumour volume in clinical practice uses the ellipsoid equation (π/6 × length × width × height), which is an oversimplified approach. Some research groups trace the contour of the tumour in every image slice, which is impractical for clinical use. In this study, we demonstrate a method of using 3D segmentation software and 3D interpolation to rapidly calculate renal tumour volume in under a minute. Using this method in 27 patients who underwent radical or partial nephrectomy, we found a 10.07% mean absolute difference compared with the traditional ellipsoid method. Our segmentation volume was closer to the calculated histopathological tumour volume than the traditional method (p = 0.03), with a higher Lin's concordance correlation coefficient (0.79 vs 0.72). 3D segmentation has many uses related to 3D printing and modelling and is becoming increasingly common; calculation of tumour volume is one additional benefit it provides. Further studies on the association between segmented tumour volume and prognosis are needed.
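The traditional clinical estimate quoted above is simple to state in code; this reproduces the abstract's π/6 equation, the baseline against which the segmentation-based volumes were compared:

```python
import math

def ellipsoid_volume(length, width, height):
    """Traditional clinical estimate of tumour volume: pi/6 * l * w * h."""
    return math.pi / 6 * length * width * height

# A 4 x 3 x 3 cm renal mass:
print(round(ellipsoid_volume(4, 3, 3), 1))  # 18.8 (cm^3)
```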
- Published
- 2021