307 results
Search Results
2. Proceedings of the International Association for Development of the Information Society (IADIS) International Conference on Cognition and Exploratory Learning in Digital Age (CELDA) (Madrid, Spain, October 19-21, 2012)
- Author
-
International Association for Development of the Information Society (IADIS)
- Abstract
The IADIS CELDA 2012 Conference was intended to address the main issues concerning evolving learning processes and supporting pedagogies and applications in the digital age. There have been advances in both cognitive psychology and computing that have affected the educational arena. The convergence of these two disciplines is increasing at a fast pace and affecting academia and professional practice in many ways. Paradigms such as just-in-time learning, constructivism, student-centered learning and collaborative approaches have emerged and are being supported by technological advancements such as simulations, virtual reality and multi-agent systems. These developments have created both opportunities and areas of serious concern. The conference aimed to cover both technological and pedagogical issues related to these developments. The IADIS CELDA 2012 Conference received 98 submissions from more than 24 countries. Of the papers submitted, 29 were accepted as full papers. In addition to the presentation of full papers, short papers and reflection papers, the conference also included a keynote presentation from internationally distinguished researchers. Individual papers contain figures, tables, and references.
- Published
- 2012
3. Development of Paper with Dhaincha Fiber (Sesbania aculeata)
- Author
-
Anita Rani and Surabhi Das
- Subjects
Retting, biology, Computer science, Pulp (paper), engineering, Sesbania, Combing, Agricultural engineering, Fiber, Raw material, engineering.material, biology.organism_classification
- Abstract
The current study revolves around the development of paper from dhaincha fiber (Sesbania aculeata). Dhaincha is cultivated primarily to improve soil quality. Owing to worldwide deforestation, various alternative sources of raw material are used in the paper industry. The Indian pulp and paper industry is among the world's major paper industries. In this study, dhaincha fiber was extracted and then subjected to retting, combing and chemical analysis, after which pulp was made from the fiber. The dhaincha pulp was then used to make paper, and the thickness of the resulting paper was measured with a paper thickness tester. With further processing, dhaincha paper could be used commercially by the paper industry.
- Published
- 2021
4. Signs & Traces: Model Indicators of College Student Learning in the Disciplines.
- Author
-
Office of Educational Research and Improvement (ED), Washington, DC. and Adelman, Clifford
- Abstract
This report examines possible national indicators for learning outcomes in the individual disciplines of higher education. It consists of versions of five project final reports, each underscoring a distinct approach to developing a model in the context of a specific discipline. Each model applies, however, to several similar disciplines. Stressed in all the models are creative approaches to student assessment which include both criterion-referenced and norm-referenced information. Papers have the following titles and authors: "Models for Developing Computer-Based Indicators of College Student Learning in Computer Science" (Jerilee Grandy); "A Model for Assessing Undergraduate Learning in Mechanical Engineering" (Jonathan Warren); "Model Indicators of Student Learning in Undergraduate Biology" (Gary Peterson and Patricia Hayward); "A Study of Indicators of College Student Learning in Physics" (James Terwilliger, J. Woods Halley, and Patricia Heller); "Model Indicators of Undergraduate Learning in Chemistry" (George Bodner). Papers are followed by references, and appendixes, which detail model recommendations and criteria. (DB)
- Published
- 1989
5. Non-wood plant fibres, will there be a come-back in paper-making?
- Author
-
Manfred Judt
- Subjects
Product design, biology, Computer science, Plant fibre, Pulp (paper), engineering, Paper based, Agricultural engineering, engineering.material, Wood fibre, biology.organism_classification, Agronomy and Crop Science, Kenaf
- Abstract
This is a review paper based on the author's long experience of annual crops as raw material for pulp and paper use. The paper starts with a general world scan touching upon such subjects as world production of crop fibres for paper, the different crops suitable for the purpose, and the outlook for the future. An attempt is then made to compare the better known wood fibres with non-wood plant fibres and to develop criteria which should help the paper product designer to make better use of the existing plant fibres. The paper ends by pointing out the price/performance niche for pulps from annual crops.
- Published
- 1993
6. BIODEGRADATION OF USED BABY DIAPERS USING CELLULOLYTIC FUNGUS AND BACTERIA WITH SOLID FERMENTATION
- Author
-
Haryati Haryati, Riryn Novianty, Nurul Iflah Nasution, and Andi Dahliaty
- Subjects
biology, Computer science, Compost, Pulp (paper), fungi, food and beverages, Cellulase, Fungus, Biodegradation, engineering.material, biology.organism_classification, chemistry.chemical_compound, chemistry, engineering, biology.protein, Fermentation, Food science, Cellulose, Bacteria
- Abstract
Diapers are made of cotton and pulp containing cellulose, which can be used as a substrate for the production of cellulase enzymes. The purpose of this research is to determine the ability of the cellulolytic fungus Trichoderma asperellum LBKURCC1 and the cellulolytic bacterial isolate S-22 to degrade used diapers containing urine by solid fermentation for 10, 20, and 30 days. The activity of the crude cellulase extract was measured with 2% CMC as substrate at pH 5.5 (Trichoderma asperellum LBKURCC1) and pH 7 (bacterial isolate S-22), with incubation at 40°C for 30 minutes, using the Nelson-Somogyi method. The results showed that used diapers can serve as a substrate for cellulase production: the fungus Trichoderma asperellum LBKURCC1 reached its highest crude cellulase activity, (1.891 ± 1.453) × 10⁻³ U/mL, after 10 days of solid fermentation, and the bacterial isolate S-22 reached (2.854 ± 0.019) × 10⁻³ U/mL after 20 days. It can be concluded that the bacterial isolate S-22 degrades used diapers better than the fungus Trichoderma asperellum LBKURCC1. However, the degraded material did not meet the SNI 19-7030-2004 compost quality standard, so it cannot be used as compost.
- Published
- 2020
7. Classifying Adult Mango Pulp Weevil Activity using Support Vector Machine
- Author
-
Geovane R. Faulve, Gerald John R. Pascasio, Juan Carlos F. Centeno, Ivane Ann P. Banlawe, and Jennifer C. Dela Cruz
- Subjects
0106 biological sciences, biology, Mems microphone, Computer science, Weevil, Pulp (paper), Feature extraction, Agricultural engineering, engineering.material, biology.organism_classification, 01 natural sciences, Support vector machine, 010602 entomology, engineering, 010606 plant biology & botany
- Abstract
After the discovery of the mango pulp weevil in Palawan, the island has been under quarantine for mango export. Detecting the pest has proved difficult, as it leaves no physical sign that a mango has been damaged. Infested mangoes are wasted because the damage makes them unsellable. This study serves as a baseline for non-invasive mango pulp weevil detection using machine learning and the audio feature extraction tools of MATLAB. Audio is recorded with a MEMS microphone placed inside a soundproof chamber to minimize noise. The study achieved high accuracy in characterizing adult mango pulp weevil activity, using MFCCs as the extracted features for identifying that activity.
- Published
- 2020
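The pipeline this abstract describes (MFCC feature vectors fed to a support vector machine) can be sketched in a few lines. The sketch below uses scikit-learn on synthetic stand-in features rather than the authors' MATLAB recordings; the feature dimension, class means and RBF kernel are all assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for MFCC feature vectors (13 coefficients per clip).
# "Weevil" clips are offset from "quiet" clips so the classes are separable.
n_per_class = 100
weevil = rng.normal(loc=1.0, scale=0.5, size=(n_per_class, 13))
quiet = rng.normal(loc=-1.0, scale=0.5, size=(n_per_class, 13))

X = np.vstack([weevil, quiet])
y = np.array([1] * n_per_class + [0] * n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf")          # RBF-kernel support vector machine
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

In a real pipeline the rows of `X` would be MFCCs computed from the microphone recordings rather than Gaussian samples.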
8. Microbial Recovery of Manganese using Staphylococcus Epidermidis
- Author
-
Nilotpala Pradhan, Alok Prasad Das, and Lala Behari Sukla
- Subjects
biology, Computer science, Pulp (paper), chemistry.chemical_element, Manganese, engineering.material, biology.organism_classification, Bacterial strain, chemistry, Staphylococcus epidermidis, Bioleaching, Reagent, engineering, Particle size, Food science, Incubation
- Abstract
Manganese minerals are widely distributed throughout the globe. The most important industrial uses of Mn are in the manufacture of steel, non-ferrous alloys, carbon-zinc batteries and some chemical reagents. Microbial recovery of manganese from low grade manganese ores using bioleaching was investigated in this paper. A bacterial strain, Staphylococcus epidermidis (MTCC-435) was collected from microbial type culture collection, IMTECH Chandigarh and used for the experiment. The experimental results for bioleaching with S. epidermidis showed that under pH 5.5, particle size –150 μm, pulp density 10%, temperature 35℃ and agitation 200 rpm, about 80% of Mn was recovered within 20 days of incubation.
- Published
- 2012
9. Efficacy of Bacterial Adaptation on Copper Biodissolution from a Low Grade Chalcopyrite Ore by A. ferrooxidans
- Author
-
K D Mehta, Bansi D. Pandey, and Abhilash
- Subjects
biology, Hydrometallurgy, Computer science, Chalcopyrite, Pulp (paper), chemistry.chemical_element, Acidithiobacillus, engineering.material, biology.organism_classification, Copper, chemistry, Bioleaching, visual_art, Bench scale, engineering, visual_art.visual_art_medium, Pyrite, Nuclear chemistry
- Abstract
A low-grade ore containing ~0.3% Cu remains unutilized for want of a viable process at Malanjkhand Copper Project (MCP), India, in which copper is present as chalcopyrite associated with pyrite in quartz veins and granitic rocks. In order to extract copper from this material, bioleaching has been attempted on bench scale using Acidithiobacillus ferrooxidans (A. ferrooxidans) isolated from the native mine water. The enriched culture containing A. ferrooxidans when adapted to the ore and employed for the bioleaching at 5% (w/v) pulp density, pH 2.0 and 25°C with three particle sizes viz. 150–76 μm, 76–50 μm and SCE) from 530 to 654 mV in 35 days. Under similar conditions, the unadapted strains gave a recovery of 44.0% for SCE from 525 to 650 mV. On using unadapted bacterial culture directly in shake flask at pH 2.0 and 35°C temperature and 5% (w/v) pulp density (PD) for 9 cells/mL in 35 days. The higher bio-recovery of copper with the adapted bacterial culture may be attributed to the improved iron oxidation (Fe2+ to Fe3+) exhibiting higher ESCE as compared to that of unadapted strains.
- Published
- 2012
10. Novel Method for Pairing Wood Samples in Choice Tests
- Author
-
Sebastian Oberst, Joseph C. S. Lai, and Theodore A. Evans
- Subjects
Phytochemistry, Insecta, Phytochemicals, lcsh:Medicine, Biomass, Plant Science, Physical Chemistry, Biochemistry, Choice Behavior, Analytical Chemistry, Trees, chemistry.chemical_compound, Engineering, Materials Chemistry, Statistical Signal Processing, Cluster Analysis, Lignin, lcsh:Science, Flowering Plants, Multidisciplinary, Ecology, Plant Biochemistry, Plant Anatomy, Applied Mathematics, Physics, Statistics, Applied Chemistry, Plants, Pulp and paper industry, Wood, Substrate (marine biology), Insects, Chemistry, Tree (data structure), Physicochemical Properties, Plant Physiology, Materials Characterization, Organic Materials, Algorithms, Research Article, Biotechnology, food.ingredient, General Science & Technology, Materials Science, Material Properties, Biophysics, Plant Morphology, Biostatistics, Biomaterials, Natural Materials, Food Preferences, food, Chemical Analysis, Chemical Biology, Botany, Animals, Cellulose, Sugar, Biology, Chemical Ecology, Vascular Plants, Plant Ecology, Food additive, Organic Chemistry, lcsh:R, Computational Biology, Reproducibility of Results, Probability Theory, Probability Distribution, Pinus, Diet, Probability Density, Chemical Properties, chemistry, Events (Probability Theory), Pairing, Computer Science, Signal Processing, Plant Biotechnology, lcsh:Q, Zoology, Entomology, Mathematics
- Abstract
Choice tests are a standard method to determine preferences in bio-assays, e.g. for food types and food additives such as bait attractants and toxicants. Choice between food additives can be determined only when the food substrate is sufficiently homogeneous. This is difficult to achieve for wood eating organisms as wood is a highly variable biological material, even within a tree species due to the age of the tree (e.g. sapwood vs. heartwood), and components therein (sugar, starch, cellulose and lignin). The current practice to minimise variation is to use wood from the same tree, yet the variation can still be large and the quantity of wood from one tree may be insufficient. We used wood samples of identical volume from multiple sources, measured three physical properties (dry weight, moisture absorption and reflected light intensity), then ranked and clustered the samples using fuzzy c-means clustering. A reverse analysis of the clustered samples found a high correlation between their physical properties and their source of origin. This suggested approach allows a quantifiable, consistent, repeatable, simple and quick method to maximize control over similarity of wood used in choice tests. © 2014 Oberst et al.
- Published
- 2014
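The fuzzy c-means step used above to group wood samples can be sketched as follows. This is a minimal NumPy implementation on synthetic data; the three "properties" mirror the paper's dry weight, moisture absorption and reflected light intensity, but the values are invented.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: returns (cluster centers, membership matrix U)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per sample
    for _ in range(n_iter):
        W = U ** m                              # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                   # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Two clearly separated groups of "wood samples", each described by three
# hypothetical physical properties.
rng = np.random.default_rng(1)
a = rng.normal(0.0, 0.1, size=(20, 3))
b = rng.normal(5.0, 0.1, size=(20, 3))
X = np.vstack([a, b])
centers, U = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=1)        # hard assignment from fuzzy memberships
```

Pairing samples with similar membership profiles is then a matter of ranking within each cluster.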
11. AHaH Computing–From Metastable Switches to Attractors to Machine Learning.
- Author
-
Nugent, Michael Alexander and Molter, Timothy Wesley
- Subjects
MACHINE learning, COMPUTER architecture, COMPUTER storage capacity, BANDWIDTHS, COMPUTATIONAL neuroscience, ARTIFICIAL neural networks
- Abstract
Modern computing architecture based on the separation of memory and processing leads to a well known problem called the von Neumann bottleneck, a restrictive limit on the data bandwidth between CPU and RAM. This paper introduces a new approach to computing we call AHaH computing where memory and processing are combined. The idea is based on the attractor dynamics of volatile dissipative electronics inspired by biological systems, presenting an attractive alternative architecture that is able to adapt, self-repair, and learn from interactions with the environment. We envision that both von Neumann and AHaH computing architectures will operate together on the same machine, but that the AHaH computing processor may reduce the power consumption and processing time for certain adaptive learning tasks by orders of magnitude. The paper begins by drawing a connection between the properties of volatility, thermodynamics, and Anti-Hebbian and Hebbian (AHaH) plasticity. We show how AHaH synaptic plasticity leads to attractor states that extract the independent components of applied data streams and how they form a computationally complete set of logic functions. After introducing a general memristive device model based on collections of metastable switches, we show how adaptive synaptic weights can be formed from differential pairs of incremental memristors. We also disclose how arrays of synaptic weights can be used to build a neural node circuit operating AHaH plasticity. By configuring the attractor states of the AHaH node in different ways, high level machine learning functions are demonstrated. This includes unsupervised clustering, supervised and unsupervised classification, complex signal prediction, unsupervised robotic actuation and combinatorial optimization of procedures–all key capabilities of biological nervous systems and modern machine learning algorithms with real world application. [ABSTRACT FROM AUTHOR]
- Published
- 2014
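The attractor behaviour the abstract describes — a synaptic weight vector settling into one of two stable states that splits its input space — can be caricatured with a toy anti-Hebbian/Hebbian update. This is purely illustrative: the update rule, learning rate and data below are assumptions, not the authors' metastable-switch or memristor model.

```python
import numpy as np

rng = np.random.default_rng(0)
v = np.array([1.0, -0.5, 0.25])                  # hidden axis separating two clusters
Xp = v + 0.1 * rng.standard_normal((200, 3))     # cluster "+"
Xn = -v + 0.1 * rng.standard_normal((200, 3))    # cluster "-"
X = np.vstack([Xp, Xn])
rng.shuffle(X)

w = 0.01 * rng.standard_normal(3)                # node weights, output y = w.x
eta = 0.05
for _ in range(3):                               # a few passes over the data
    for x in X:
        y = w @ x
        # Hebbian push of y toward sign(y), anti-Hebbian decay of y itself:
        w += eta * x * (np.sign(y) - y)

sign_p = np.sign(Xp @ w)                         # output sign for each "+" sample
sign_n = np.sign(Xn @ w)
```

The node's output is driven toward ±1 and ends up assigning opposite signs to the two clusters, i.e. the weights fall into an attractor state that acts as a decision boundary.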
12. Small and Dim Target Detection via Lateral Inhibition Filtering and Artificial Bee Colony Based Selective Visual Attention.
- Author
-
Duan, Haibin, Deng, Yimin, Wang, Xiaohua, and Xu, Chunfang
- Subjects
VISUAL perception, BIONICS, ANT algorithms, INFORMATION filtering systems, COMPUTER science, SIGNAL processing, BIOENGINEERING
- Abstract
This paper proposes a novel bionic selective visual attention mechanism to quickly select regions that contain salient objects and thereby reduce computation. Firstly, lateral inhibition filtering, inspired by the limulus ommateum, is applied to filter low-frequency noise. After the filtering operation, we use an Artificial Bee Colony (ABC) algorithm based selective visual attention mechanism to obtain the object of interest for the subsequent recognition operation. In order to eliminate the influence of camera motion, this paper adopts the ABC algorithm, a new optimization method inspired by swarm intelligence, to calculate the motion salience map and integrate it with conventional visual attention. To prove the feasibility and effectiveness of our method, several experiments were conducted. First, the filtering results of the lateral inhibition filter are shown to illustrate its noise-reducing effect; then we apply the ABC algorithm to obtain the motion features of the image sequence. The ABC algorithm proves to be more robust and effective in a comparison against the popular Particle Swarm Optimization (PSO) algorithm. We also compare the classic visual attention mechanism with our ABC algorithm based visual attention mechanism, and these experimental results further verify the effectiveness of our method. [ABSTRACT FROM AUTHOR]
- Published
- 2013
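A minimal Artificial Bee Colony optimizer, of the kind the abstract applies to motion-salience computation, can be sketched as below. This is a generic textbook-style ABC on a toy objective, with population size, trial limit and bounds chosen arbitrarily; it is not the authors' implementation.

```python
import numpy as np

def abc_minimize(f, dim, n_bees=20, limit=20, n_iter=200, bounds=(-5, 5), seed=0):
    """Minimal artificial bee colony with employed, onlooker and scout phases."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    foods = rng.uniform(lo, hi, size=(n_bees, dim))   # food sources (candidates)
    fit = np.array([f(x) for x in foods])
    trials = np.zeros(n_bees, dtype=int)

    def try_neighbor(i):
        k = rng.integers(n_bees - 1)
        k += k >= i                                   # random partner != i
        j = rng.integers(dim)                         # perturb one dimension
        cand = foods[i].copy()
        cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        cand = np.clip(cand, lo, hi)
        fc = f(cand)
        if fc < fit[i]:                               # greedy selection
            foods[i], fit[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1

    for _ in range(n_iter):
        for i in range(n_bees):                       # employed bees
            try_neighbor(i)
        p = 1.0 / (1.0 + fit)                         # onlookers prefer good sources
        p /= p.sum()
        for i in rng.choice(n_bees, size=n_bees, p=p):
            try_neighbor(i)
        for i in range(n_bees):                       # scouts replace exhausted sources
            if trials[i] > limit:
                foods[i] = rng.uniform(lo, hi, dim)
                fit[i] = f(foods[i])
                trials[i] = 0
    return foods[fit.argmin()], fit.min()

best_x, best_f = abc_minimize(lambda x: float(np.sum(x**2)), dim=3)
```

In the paper's setting the objective would score candidate motion parameters against the image sequence rather than a sphere function.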
13. Study of Burn Scar Extraction Automatically Based on Level Set Method using Remote Sensing Data.
- Author
-
Liu, Yang, Dai, Qin, Liu, JianBo, Liu, ShiBin, and Yang, Jin
- Subjects
BURNS & scalds, SCARS, NORMALIZED difference vegetation index, REMOTE sensing, VECTOR analysis, BOTANY methodology, COMPARATIVE studies
- Abstract
Burn scar extraction using remote sensing data is an efficient way to precisely evaluate burn area and measure vegetation recovery. Traditional burn scar extraction methodologies perform poorly on burn scar images with blurred and irregular edges. To address these issues, this paper proposes an automatic method to extract burn scars based on the Level Set Method (LSM). This method utilizes the advantages of different features in remote sensing images, and considers the practical need to extract burn scars rapidly and automatically. The approach integrates Change Vector Analysis (CVA), the Normalized Difference Vegetation Index (NDVI) and the Normalized Burn Ratio (NBR) to obtain a difference image, and modifies the conventional Level Set Method Chan-Vese (C-V) model with a new initial curve derived from a binary image obtained by applying the K-means method to the fitting errors of two near-infrared band images. Landsat 5 TM and Landsat 8 OLI data sets are used to validate the proposed method. Comparisons with the conventional C-V model, the Otsu algorithm and the Fuzzy C-means (FCM) algorithm show that the proposed approach can extract the outline curve of a fire burn scar effectively and exactly. The method has higher extraction accuracy and lower algorithmic complexity than the conventional C-V model. [ABSTRACT FROM AUTHOR]
- Published
- 2014
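The difference-image step (NDVI and NBR computed before and after a fire) reduces to simple band arithmetic. The sketch below uses invented 2×2 reflectance patches and an illustrative dNBR threshold; the level-set refinement itself is not reproduced.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def nbr(nir, swir):
    """Normalized Burn Ratio."""
    return (nir - swir) / (nir + swir)

# Hypothetical 2x2 reflectance patches before and after a fire;
# only the bottom-right pixel burns, the others stay vegetated.
pre_nir  = np.array([[0.50, 0.50], [0.50, 0.50]])
pre_red  = np.array([[0.10, 0.10], [0.10, 0.10]])
pre_swir = np.array([[0.20, 0.20], [0.20, 0.20]])

post_nir  = np.array([[0.50, 0.50], [0.50, 0.20]])
post_red  = np.array([[0.10, 0.10], [0.10, 0.15]])
post_swir = np.array([[0.20, 0.20], [0.20, 0.40]])

dNBR = nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)   # difference image
burn_mask = dNBR > 0.44   # illustrative severity cut-off (assumption)
```

In the paper this difference image seeds the initial curve of the Chan-Vese level-set evolution rather than being thresholded directly.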
14. Independent Component Analysis for Brain fMRI Does Indeed Select for Maximal Independence.
- Author
-
Calhoun, Vince D., Potluru, Vamsi K., Phlypo, Ronald, Silva, Rogers F., Pearlmutter, Barak A., Caprihan, Arvind, Plis, Sergey M., and Adalı, Tülay
- Subjects
INDEPENDENT component analysis, FUNCTIONAL magnetic resonance imaging, COMPUTATIONAL biology, BRAIN imaging, BIOENGINEERING, NEUROSCIENCES, ALGORITHMS
- Abstract
A recent paper by Daubechies et al. claims that two independent component analysis (ICA) algorithms, Infomax and FastICA, which are widely used for functional magnetic resonance imaging (fMRI) analysis, select for sparsity rather than independence. The argument was supported by a series of experiments on synthetic data. We show that these experiments fall short of proving this claim and that the ICA algorithms are indeed doing what they are designed to do: identify maximally independent sources. [ABSTRACT FROM AUTHOR]
- Published
- 2013
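The setting at issue — recovering maximally independent sources from linear mixtures — can be reproduced in miniature with scikit-learn's FastICA. The sources, mixing matrix and noise level below are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * np.pi * t)                 # smooth source
s2 = np.sign(np.sin(3 * np.pi * t))        # square-wave source
S = np.c_[s1, s2] + 0.02 * rng.standard_normal((2000, 2))

A = np.array([[1.0, 0.5], [0.5, 1.0]])     # mixing matrix
X = S @ A.T                                # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)               # recovered sources (up to scale/order)
```

Each recovered component should correlate strongly with one true source, which is the "maximal independence" behaviour the paper defends.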
15. Osmosis-Based Pressure Generation: Dynamics and Application.
- Author
-
Bruhn, Brandon R., Schroeder, Thomas B. H., Li, Suyi, Billeh, Yazan N., Wang, K. W., and Mayer, Michael
- Subjects
OSMOSIS, PERMEABILITY, COMPUTATIONAL biology, FLUID mechanics, DILUTION, BULK modulus
- Abstract
This paper describes osmotically-driven pressure generation in a membrane-bound compartment while taking into account volume expansion, solute dilution, surface area to volume ratio, membrane hydraulic permeability, and changes in osmotic gradient, bulk modulus, and degree of membrane fouling. The emphasis lies on the dynamics of pressure generation; these dynamics have not previously been described in detail. Experimental results are compared to and supported by numerical simulations, which we make accessible as an open source tool. This approach reveals unintuitive results about the quantitative dependence of the speed of pressure generation on the relevant and interdependent parameters that will be encountered in most osmotically-driven pressure generators. For instance, restricting the volume expansion of a compartment allows it to generate its first 5 kPa of pressure seven times faster than without a restraint. In addition, this dynamics study shows that plants are near-ideal osmotic pressure generators, as they are composed of many small compartments with large surface area to volume ratios and strong cell wall reinforcements. Finally, we demonstrate two applications of an osmosis-based pressure generator: actuation of a soft robot and continuous volume delivery over long periods of time. Both applications do not need an external power source but rather take advantage of the energy released upon watering the pressure generators. [ABSTRACT FROM AUTHOR]
- Published
- 2014
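The pressure dynamics described above follow from a simple balance: water influx is driven by the osmotic gradient minus the elastic back-pressure, and the gradient shrinks as the solute dilutes. A forward-Euler sketch (all parameter values invented, not the paper's) reproduces the qualitative claim that a volume-restrained compartment pressurizes much faster.

```python
import numpy as np

def simulate(B, t_end=500.0, dt=0.1):
    """Euler integration of osmotic water influx into an elastic compartment.
    B is a bulk-modulus-like stiffness; all values are illustrative."""
    A   = 1e-4    # membrane area, m^2
    Lp  = 1e-12   # hydraulic permeability, m/(s*Pa)
    V0  = 1e-6    # initial volume, m^3
    pi0 = 5e5     # initial osmotic pressure difference, Pa
    V, P_hist = V0, []
    for _ in range(int(t_end / dt)):
        P = B * (V - V0) / V0                     # elastic back-pressure
        dV = A * Lp * (pi0 * V0 / V - P) * dt     # dilution reduces the gradient
        V += dV
        P_hist.append(P)
    return np.array(P_hist)

soft = simulate(B=1e6)     # compliant compartment
stiff = simulate(B=1e7)    # volume-restrained compartment

def time_to(P_hist, target, dt=0.1):
    idx = np.argmax(P_hist >= target)
    return idx * dt if P_hist[idx] >= target else float("inf")

t_soft = time_to(soft, 5e3)     # time to reach 5 kPa
t_stiff = time_to(stiff, 5e3)
```

With these made-up parameters the stiff compartment reaches 5 kPa several times sooner than the compliant one, in the spirit of the seven-fold speed-up quoted in the abstract.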
16. A Compartmentalized Mathematical Model of the β1-Adrenergic Signaling System in Mouse Ventricular Myocytes.
- Author
-
Bondarenko, Vladimir E.
- Subjects
CELLULAR signal transduction, MUSCLE cells, ADRENERGIC receptors, CALCIUM ions, CYTOSOL, MATHEMATICAL models, LABORATORY mice, PHYSIOLOGY
- Abstract
The β1-adrenergic signaling system plays an important role in the functioning of cardiac cells. Experimental data show that the activation of this system produces inotropy, lusitropy, and chronotropy in the heart, such as increased magnitude and relaxation rates of [Ca2+]i transients and contraction force, and increased heart rhythm. However, excessive stimulation of β1-adrenergic receptors leads to heart dysfunction and heart failure. In this paper, a comprehensive, experimentally based mathematical model of the β1-adrenergic signaling system for mouse ventricular myocytes is developed, which includes major subcellular functional compartments (caveolae, extracaveolae, and cytosol). The model describes biochemical reactions that occur during stimulation of β1-adrenoceptors, changes in ionic currents, and modifications of the Ca2+ handling system. Simulations describe the dynamics of major signaling molecules, such as cyclic AMP and protein kinase A, in different subcellular compartments; the effects of inhibition of phosphodiesterases on cAMP production; kinetics and magnitudes of phosphorylation of ion channels, transporters, and Ca2+ handling proteins; modifications of action potential shape and duration; magnitudes and relaxation rates of [Ca2+]i transients; changes in intracellular and transmembrane Ca2+ fluxes; and [Na+]i fluxes and dynamics. The model elucidates complex interactions of ionic currents upon activation of β1-adrenoceptors at different stimulation frequencies, which ultimately lead to a relatively modest increase in action potential duration and a significant increase in [Ca2+]i transients. In particular, the model includes two subpopulations of the L-type Ca2+ channels, in caveolae and extracaveolae compartments, and their effects on the action potential and [Ca2+]i transients are investigated.
The presented model can be used by researchers for the interpretation of experimental data and for the development of mathematical models for other species or for pathological conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2014
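The compartmental logic of such a model can be caricatured with two coupled ODEs: cAMP produced in caveolae, exchanged with the cytosol, and degraded by phosphodiesterase (PDE) in both. The rate constants below are invented, and the real model is far larger; the sketch only shows why PDE inhibition raises cAMP, as in the simulations the abstract describes.

```python
def simulate_camp(k_pde, t_end=100.0, dt=0.01):
    """Two-compartment caricature of cAMP dynamics (all rates are
    illustrative assumptions, not fitted values from the paper)."""
    k_prod = 1.0    # production in caveolae (adenylyl cyclase, a.u./s)
    k_ex   = 0.5    # exchange between compartments (1/s)
    cav, cyt = 0.0, 0.0
    for _ in range(int(t_end / dt)):       # forward-Euler integration
        d_cav = k_prod - k_pde * cav - k_ex * (cav - cyt)
        d_cyt = k_ex * (cav - cyt) - k_pde * cyt
        cav += d_cav * dt
        cyt += d_cyt * dt
    return cav, cyt

cav_base, cyt_base = simulate_camp(k_pde=0.2)    # baseline degradation
cav_inh, cyt_inh = simulate_camp(k_pde=0.02)     # PDE inhibited
```

Even this toy version shows the two qualitative features the model captures: a standing cAMP gradient from caveolae to cytosol, and a large rise in cAMP when PDE is inhibited.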
17. Toward a Semi-Self-Paced EEG Brain Computer Interface: Decoding Initiation State from Non-Initiation State in Dedicated Time Slots.
- Author
-
Yang, Lingling, Leung, Howard, Peterson, David A., Sejnowski, Terrence J., and Poizner, Howard
- Subjects
ELECTROENCEPHALOGRAPHY, COMPUTER interfaces, NEUROLOGICAL disorders, STIMULUS & response (Biology), MENTAL health, NEUROSCIENCES, SIGNAL processing, PATIENTS
- Abstract
Brain computer interfaces (BCIs) offer a broad class of neurologically impaired individuals an alternative means to interact with the environment. Many BCIs are “synchronous” systems, in which the system sets the timing of the interaction and tries to infer what control command the subject is issuing at each prompting. In contrast, in “asynchronous” BCIs subjects pace the interaction and the system must determine when the subject’s control command occurs. In this paper we propose a new idea for BCI which draws upon the strengths of both approaches. The subjects are externally paced and the BCI is able to determine when control commands are issued by decoding the subject’s intention for initiating control in dedicated time slots. A single task with randomly interleaved trials was designed to test whether it can be used as stimulus for inducing initiation and non-initiation states when the sensory and motor requirements for the two types of trials are very nearly identical. Further, the essential problem on the discrimination between initiation state and non-initiation state was studied. We tested the ability of EEG spectral power to distinguish between these two states. Among the four standard EEG frequency bands, beta band power recorded over parietal-occipital cortices provided the best performance, achieving an average accuracy of 86% for the correct classification of initiation and non-initiation states. Moreover, delta band power recorded over parietal and motor areas yielded a good performance and thus could also be used as an alternative feature to discriminate these two mental states. The results demonstrate the viability of our proposed idea for a BCI design based on conventional EEG features. Our proposal offers the potential to mitigate the signal detection challenges of fully asynchronous BCIs, while providing greater flexibility to the subject than traditional synchronous BCIs. [ABSTRACT FROM AUTHOR]
- Published
- 2014
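Band-power features of the kind used above for the initiation/non-initiation classifier are straightforward to compute. The sketch below extracts beta-band (13–30 Hz) power from a synthetic epoch via the FFT periodogram; the sampling rate, epoch length and injected 20 Hz rhythm are assumptions for illustration.

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Mean power of x in the [lo, hi] Hz band via the FFT periodogram."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

fs = 256                        # sampling rate, Hz
t = np.arange(0, 4, 1.0 / fs)   # 4 s epoch
rng = np.random.default_rng(0)
noise = rng.standard_normal(t.size)

rest = noise                                        # "non-initiation" epoch
init = noise + 2.0 * np.sin(2 * np.pi * 20 * t)     # added 20 Hz beta rhythm

beta_rest = band_power(rest, fs, 13, 30)
beta_init = band_power(init, fs, 13, 30)
```

A classifier such as the one in the paper would then be trained on these per-epoch band-power values.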
18. QuickProbs—A Fast Multiple Sequence Alignment Algorithm Designed for Graphics Processors.
- Author
-
Gudyś, Adam and Deorowicz, Sebastian
- Subjects
COMPUTER algorithms, GRAPHICS processing units, COMPUTER architecture, COMPUTATIONAL biology, COMPUTER-aided design, PROTEOMICS, SOFTWARE engineering
- Abstract
Multiple sequence alignment is a crucial task in a number of biological analyses like secondary structure prediction, domain searching, phylogeny, etc. MSAProbs is currently the most accurate alignment algorithm, but its effectiveness is obtained at the expense of computational time. In this paper we present QuickProbs, a variant of MSAProbs customised for graphics processors. We selected the two most time-consuming stages of MSAProbs to be redesigned for GPU execution: the posterior matrices calculation and the consistency transformation. Experiments on three popular benchmarks (BAliBASE, PREFAB, OXBench-X) on a quad-core PC equipped with a high-end graphics card show QuickProbs to be 5.7 to 9.7 times faster than the original CPU-parallel MSAProbs. Additional tests performed on several protein families from the Pfam database give an overall speed-up of 6.7. Compared to other algorithms like MAFFT, MUSCLE, or ClustalW, QuickProbs proved to be much more accurate at similar speed. Additionally we introduce a tuned variant of QuickProbs which is significantly more accurate on sets of distantly related sequences than MSAProbs without exceeding its computation time. The GPU part of QuickProbs was implemented in OpenCL, thus the package is suitable for graphics processors produced by all major vendors. [ABSTRACT FROM AUTHOR]
- Published
- 2014
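QuickProbs itself builds on posterior alignment matrices and a consistency transformation, which are too involved to reproduce here; the dynamic-programming core underlying all such aligners can, however, be illustrated with a plain Needleman-Wunsch global alignment score (scoring parameters are arbitrary).

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Global pairwise alignment score by dynamic programming."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):           # aligning a prefix against nothing
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,   # (mis)match
                           dp[i - 1][j] + gap,      # gap in b
                           dp[i][j - 1] + gap)      # gap in a
    return dp[n][m]

# 6 matches and one 1-base deletion: score 6*1 + (-2) = 4
score = needleman_wunsch("GATTACA", "GATACA")
```

MSAProbs-style methods replace this single optimal path with pair-wise posterior probabilities over all paths, which is precisely the stage QuickProbs offloads to the GPU.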
19. CUSHAW3: Sensitive and Accurate Base-Space and Color-Space Short-Read Alignment with Hybrid Seeding.
- Author
-
Liu, Yongchao, Popp, Bernt, and Schmidt, Bertil
- Subjects
NUCLEOTIDE sequence, HUMAN genome, OPEN source software, HEURISTIC algorithms, COMPUTATIONAL biology, MATHEMATICAL models
- Abstract
The majority of next-generation sequencing short-reads can be properly aligned by leading aligners at high speed. However, the alignment quality can still be further improved, since usually not all reads can be correctly aligned to large genomes, such as the human genome, even for simulated data. Moreover, even slight improvements in this area are important but challenging, and usually require significantly more computational endeavor. In this paper, we present CUSHAW3, an open-source parallelized, sensitive and accurate short-read aligner for both base-space and color-space sequences. In this aligner, we have investigated a hybrid seeding approach to improve alignment quality, which incorporates three different seed types, i.e. maximal exact match seeds, exact-match k-mer seeds and variable-length seeds, into the alignment pipeline. Furthermore, three techniques: weighted seed-pairing heuristic, paired-end alignment pair ranking and read mate rescuing have been conceived to facilitate accurate paired-end alignment. For base-space alignment, we have compared CUSHAW3 to Novoalign, CUSHAW2, BWA-MEM, Bowtie2 and GEM, by aligning both simulated and real reads to the human genome. The results show that CUSHAW3 consistently outperforms CUSHAW2, BWA-MEM, Bowtie2 and GEM in terms of single-end and paired-end alignment. Furthermore, our aligner has demonstrated better paired-end alignment performance than Novoalign for short-reads with high error rates. For color-space alignment, CUSHAW3 is consistently one of the best aligners compared to SHRiMP2 and BFAST. The source code of CUSHAW3 and all simulated data are available at http://cushaw3.sourceforge.net. [ABSTRACT FROM AUTHOR]
- Published
- 2014
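Of the three seed types mentioned, exact-match k-mer seeds are the simplest to illustrate: index every k-mer of the reference, then report each (read offset, reference offset) pair that matches. The reference string and k below are invented; CUSHAW3's actual seeding also uses maximal exact matches and variable-length seeds.

```python
def kmer_seeds(read, ref, k=11):
    """Exact-match k-mer seeds: (read_offset, ref_offset) pairs where a
    length-k substring of the read occurs in the reference."""
    index = {}
    for i in range(len(ref) - k + 1):            # hash every reference k-mer
        index.setdefault(ref[i:i + k], []).append(i)
    seeds = []
    for j in range(len(read) - k + 1):           # look up every read k-mer
        for i in index.get(read[j:j + k], []):
            seeds.append((j, i))
    return seeds

ref = "ACGTACGTTAGCCGATAGGCTTAACGGTAC"   # toy reference
read = ref[8:24]                         # error-free read drawn from offset 8
seeds = kmer_seeds(read, ref)
```

Seeds sharing a consistent read-to-reference offset are then chained and extended into full alignments.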
20. Fully Automated Segmentation of the Pons and Midbrain Using Human T1 MR Brain Images.
- Author
-
Nigro, Salvatore, Cerasa, Antonio, Zito, Giancarlo, Perrotta, Paolo, Chiaravalloti, Francesco, Donzuso, Giulia, Fera, Franceso, Bilotta, Eleonora, Pantano, Pietro, and Quattrone, Aldo
- Subjects
MAGNETIC resonance imaging of the brain, MESENCEPHALON, BRAIN stem, BRAIN diseases, PONS test, BRAIN anatomy, CLINICAL trials, PATIENTS
- Abstract
Purpose: This paper describes a novel method to automatically segment the human brainstem into midbrain and pons, called LABS: Landmark-based Automated Brainstem Segmentation. LABS processes high-resolution structural magnetic resonance images (MRIs) according to a revised landmark-based approach integrated with a thresholding method, without manual interaction. Methods: The method was first tested on morphological T1-weighted MRIs of 30 healthy subjects. Its reliability was further confirmed by including neurological patients (with Alzheimer's Disease) from the ADNI repository, in whom volumetric loss within the brainstem had previously been described. Segmentation accuracy was evaluated against expert-drawn manual delineation using volumetric, spatial overlap and distance-based metrics. Results: The comparison between the quantitative measurements provided by LABS and the manual segmentations revealed excellent results in healthy controls for both the midbrain (DICE measures higher than 0.9; volume ratio around 1; Hausdorff distance around 3) and the pons (DICE measures around 0.93; volume ratio ranging from 1.024 to 1.05; Hausdorff distance around 2). Similar performance was detected for AD patients for segmentation of the pons (DICE measures higher than 0.93; volume ratio ranging from 0.97 to 0.98; Hausdorff distance ranging from 1.07 to 1.33), while LABS performed lower for the midbrain (DICE measures ranging from 0.86 to 0.88; volume ratio around 0.95; Hausdorff distance ranging from 1.71 to 2.15). Conclusions: Our study represents the first attempt to validate a new fully automated method for in vivo segmentation of two anatomically complex brainstem subregions. We believe that our method might represent a useful tool for future applications in clinical practice. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
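The overlap metrics quoted in this abstract (Dice and volume ratio) are standard and easy to illustrate. A minimal sketch, using invented toy voxel sets rather than data from the study (`auto` and `manual` are hypothetical masks):

```python
def dice(a, b):
    """Dice overlap 2|A∩B| / (|A| + |B|) for two sets of voxel coordinates."""
    return 2.0 * len(a & b) / (len(a) + len(b))

def volume_ratio(a, b):
    """Ratio of segmented volumes (voxel counts); close to 1 for a good match."""
    return len(a) / len(b)

# Toy 2D "masks" as sets of (row, col) voxels.
auto = {(1, 1), (1, 2), (2, 1), (2, 2)}
manual = {(1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3)}
print(dice(auto, manual))          # 2*4 / (4 + 6) = 0.8
print(volume_ratio(auto, manual))  # 4/6
```

The same formulas apply unchanged to 3D masks; only the coordinate tuples gain a dimension.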
21. Mining Rare Associations between Biological Ontologies.
- Author
-
Benites, Fernando, Simon, Svenja, and Sapozhnikova, Elena
- Subjects
COMPUTATIONAL biology ,GENE ontology ,BIOLOGICAL databases ,DATA mining ,BIOINFORMATICS ,INFORMATION technology ,INFORMATION storage & retrieval systems - Abstract
The constantly increasing volume and complexity of available biological data require new methods for their management and analysis. An important challenge is the integration of information from different sources in order to discover possible hidden relations between already known data. In this paper we introduce a data mining approach which relates biological ontologies by mining cross- and intra-ontology pairwise generalized association rules. Its advantage is its sensitivity to rare associations, which are important for biologists. We propose a new class of interestingness measures designed for hierarchically organized rules. These measures allow one to select the most important rules and to take rare cases into account. They favor rules whose actual interestingness value exceeds the expected value, the latter being calculated from the parent rule. We demonstrate this approach by applying it to the analysis of data from the Gene Ontology and GPCR databases. Our objective is to discover interesting relations between two different ontologies or parts of a single ontology. The association rules that are thus discovered can provide the user with new knowledge about underlying biological processes or help improve annotation consistency. The obtained results show that the produced rules represent meaningful and quite reliable associations. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
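As a rough illustration of rating association rules by actual versus expected co-occurrence, here is the standard support/confidence/lift computation on toy annotation transactions. The hierarchical parent-rule measures proposed in the paper are more involved; the item names below are made up:

```python
def support(transactions, items):
    """Fraction of transactions containing all of `items`."""
    items = set(items)
    return sum(items <= t for t in transactions) / len(transactions)

def confidence(transactions, lhs, rhs):
    """P(rhs | lhs): support of the full rule over support of its left side."""
    return support(transactions, set(lhs) | set(rhs)) / support(transactions, lhs)

def lift(transactions, lhs, rhs):
    """Actual co-occurrence vs. what independence would predict (>1 = interesting)."""
    return confidence(transactions, lhs, rhs) / support(transactions, rhs)

# Toy transactions: each set holds ontology terms annotated to one gene.
T = [{"go:a", "gpcr:x"}, {"go:a", "gpcr:x"}, {"go:a"}, {"gpcr:y"}]
print(support(T, {"go:a"}))                 # 3/4
print(confidence(T, {"go:a"}, {"gpcr:x"}))  # 2/3
print(lift(T, {"go:a"}, {"gpcr:x"}))        # (2/3) / (2/4) = 4/3
```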
22. An Evidence-Based Combining Classifier for Brain Signal Analysis.
- Author
-
Kheradpisheh, Saeed Reza, Nowzari-Dalini, Abbas, Ebrahimpour, Reza, and Ganjtabesh, Mohammad
- Subjects
BRAIN-computer interfaces ,BIOLOGICAL neural networks ,COGNITIVE science ,NEUROSCIENCES ,MEDICAL sciences ,BRAIN physiology ,ELECTROENCEPHALOGRAPHY - Abstract
Nowadays, brain signals are employed in various scientific and practical fields such as Medical Science, Cognitive Science, Neuroscience, and Brain Computer Interfaces. Hence, the need for robust signal analysis methods with adequate accuracy and generalizability is evident. Brain signal analysis faces complex challenges including small sample sizes, high dimensionality and noisy signals. Moreover, because of the non-stationarity of brain signals and the impact of mental states on brain function, brain signals carry an inherent uncertainty. In this paper, an evidence-based classifier combination method is proposed for brain signal analysis. This method exploits the power of combining classifiers for solving complex problems and the ability of evidence theory to model, as well as to reduce, the existing uncertainty. The proposed method models the uncertainty in the labels of training samples in each feature space by assigning soft and crisp labels to them. Then, several classifiers are employed to approximate the belief function corresponding to each feature space. By combining the evidence raised from each classifier through evidence theory, more confident decisions about testing samples can be made. The results obtained by the proposed method, compared with those of other evidence-based and fixed-rule combining methods on artificial and real datasets, demonstrate its ability to deal with complex and uncertain classification problems. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
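The evidence-combination step described above is typically performed with Dempster's rule of combination. A minimal sketch for two discrete mass functions, with invented class labels and masses for illustration:

```python
from itertools import product

def dempster(m1, m2):
    """Dempster's rule of combination for two mass functions
    (dicts mapping frozensets of hypotheses to belief mass)."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass falling on the empty set
    # Renormalize by the non-conflicting mass.
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

A, B = frozenset({"left"}), frozenset({"right"})
m1 = {A: 0.6, A | B: 0.4}   # classifier 1: fairly sure of "left"
m2 = {B: 0.3, A | B: 0.7}   # classifier 2: weak evidence for "right"
m = dempster(m1, m2)
print(round(m[A], 3))        # 0.42 / 0.82 ≈ 0.512
```

The combined masses again sum to 1, and the decision can be taken on the hypothesis with the highest resulting belief.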
23. Labour-Efficient In Vitro Lymphocyte Population Tracking and Fate Prediction Using Automation and Manual Review.
- Author
-
Chakravorty, Rajib, Rawlinson, David, Zhang, Alan, Markham, John, Dowling, Mark R., Wellard, Cameron, Zhou, Jie H. S., and Hodgkin, Philip D.
- Subjects
LYMPHOCYTES ,CELL differentiation ,HETEROGENEITY ,MICROSCOPY ,CHRONOPHOTOGRAPHY ,DEVELOPMENTAL biology ,IN vitro studies - Abstract
Interest in cell heterogeneity and differentiation has recently led to increased use of time-lapse microscopy. Previous studies have shown that cell fate may be determined well in advance of the event. We used a mixture of automation and manual review of time-lapse live cell imaging to track the positions, contours, divisions, deaths and lineage of 44 B-lymphocyte founders and their 631 progeny in vitro over a period of 108 hours. Using this data to train a Support Vector Machine classifier, we were retrospectively able to predict the fates of individual lymphocytes with more than 90% accuracy, using only time-lapse imaging captured prior to mitosis or death of 90% of all cells. The motivation for this paper is to explore the impact of labour-efficient assistive software tools that allow larger and more ambitious live-cell time-lapse microscopy studies. After training on this data, we show that machine learning methods can be used for realtime prediction of individual cell fates. These techniques could lead to realtime cell culture segregation for purposes such as phenotype screening. We were able to produce a large volume of data with less effort than previously reported, due to the image processing, computer vision, tracking and human-computer interaction tools used. We describe the workflow of the software-assisted experiments and the graphical interfaces that were needed. To validate our results we used our methods to reproduce a variety of published data about lymphocyte populations and behaviour. We also make all our data publicly available, including a large quantity of lymphocyte spatio-temporal dynamics and related lineage information. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
24. Knowledge-Guided Robust MRI Brain Extraction for Diverse Large-Scale Neuroimaging Studies on Humans and Non-Human Primates.
- Author
-
Wang, Yaping, Nie, Jingxin, Yap, Pew-Thian, Li, Gang, Shi, Feng, Geng, Xiujuan, Guo, Lei, and Shen, Dinggang
- Subjects
MAGNETIC resonance imaging of the brain ,BRAIN imaging ,PRIMATE physiology ,AGING ,BRAIN ,AGE groups ,COMPARATIVE studies - Abstract
Accurate and robust brain extraction is a critical step in most neuroimaging analysis pipelines. In particular, for large-scale multi-site neuroimaging studies involving a significant number of subjects across diverse age and diagnostic groups, automatic, accurate and consistent extraction of the brain is highly desirable. In this paper, we introduce population-specific probability maps to guide the brain extraction of diverse subject groups, including both healthy and diseased adult human populations, both developing and aging human populations, and non-human primates. Specifically, the proposed method combines an atlas-based approach, for coarse skull-stripping, with a deformable-surface-based approach that is guided by local intensity information and population-specific prior information learned from a set of real brain images for more localized refinement. Comprehensive quantitative evaluations were performed on the diverse large-scale populations of the ADNI dataset with over 800 subjects (55–90 years of age, multi-site, various diagnostic groups), the OASIS dataset with over 400 subjects (18–96 years of age, wide age range, various diagnostic groups), and the NIH pediatrics dataset with 150 subjects (5–18 years of age, multi-site, wide age range as a complementary age group to the adult datasets). The results demonstrate that our method consistently yields the best overall results across almost the entire human life span, with only a single set of parameters. To demonstrate its capability to work on non-human primates, the proposed method was further evaluated using a rhesus macaque dataset with 20 subjects. Quantitative comparisons with popular state-of-the-art methods, including BET, Two-pass BET, BET-B, BSE, HWA, ROBEX and AFNI, demonstrate that the proposed method achieves superior performance on all testing datasets, indicating its robustness and effectiveness. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
25. Rapid Reconstruction of 3D Neuronal Morphology from Light Microscopy Images with Augmented Rayburst Sampling.
- Author
-
Ming, Xing, Li, Anan, Wu, Jingpeng, Yan, Cheng, Ding, Wenxiang, Gong, Hui, Zeng, Shaoqun, and Liu, Qian
- Subjects
NEUROLOGY ,NEURAL circuitry ,MEDICAL microscopy ,IMAGE reconstruction ,COMPUTER-aided diagnosis ,NEURONS ,IMAGE analysis ,COMPUTATIONAL biology ,BRAIN imaging - Abstract
Digital reconstruction of three-dimensional (3D) neuronal morphology from light microscopy images provides a powerful technique for analysis of neural circuits. It is time-consuming to manually perform this process. Thus, efficient computer-assisted approaches are preferable. In this paper, we present an innovative method for the tracing and reconstruction of 3D neuronal morphology from light microscopy images. The method uses a prediction and refinement strategy that is based on exploration of local neuron structural features. We extended the rayburst sampling algorithm to a marching fashion, which starts from a single or a few seed points and marches recursively forward along neurite branches to trace and reconstruct the whole tree-like structure. A local radius-related but size-independent hemispherical sampling was used to predict the neurite centerline and detect branches. Iterative rayburst sampling was performed in the orthogonal plane, to refine the centerline location and to estimate the local radius. We implemented the method in a cooperative 3D interactive visualization-assisted system named flNeuronTool. The source code in C++ and the binaries are freely available at http://sourceforge.net/projects/flneurontool/. We validated and evaluated the proposed method using synthetic data and real datasets from the Digital Reconstruction of Axonal and Dendritic Morphology (DIADEM) challenge. Then, flNeuronTool was applied to mouse brain images acquired with the Micro-Optical Sectioning Tomography (MOST) system, to reconstruct single neurons and local neural circuits. The results showed that the system achieves a reasonable balance between fast speed and acceptable accuracy, which is promising for interactive applications in neuronal image analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
26. Dual-Force ISOMAP: A New Relevance Feedback Method for Medical Image Retrieval.
- Author
-
Shen, Hualei, Tao, Dacheng, and Ma, Dianfu
- Subjects
DIAGNOSTIC imaging ,MEDICAL radiology ,DECISION making ,CONTENT-based image retrieval ,PERFORMANCE evaluation ,IMAGE reconstruction - Abstract
With great potential for assisting radiological image interpretation and decision making, content-based image retrieval in the medical domain has become a hot topic in recent years. Many methods to enhance the performance of content-based medical image retrieval have been proposed, among which the relevance feedback (RF) scheme is one of the most promising. Given user feedback information, RF algorithms interactively learn a user’s preferences to bridge the “semantic gap” between low-level computerized visual features and high-level human semantic perception and thus improve retrieval performance. However, most existing RF algorithms perform in the original high-dimensional feature space and ignore the manifold structure of the low-level visual features of images. In this paper, we propose a new method, termed dual-force ISOMAP (DFISOMAP), for content-based medical image retrieval. Under the assumption that medical images lie on a low-dimensional manifold embedded in a high-dimensional ambient space, DFISOMAP operates in the following three stages. First, the geometric structure of positive examples in the learned low-dimensional embedding is preserved according to the isometric feature mapping (ISOMAP) criterion. To precisely model the geometric structure, a reconstruction error constraint is also added. Second, the average distance between positive and negative examples is maximized to separate them; this margin maximization acts as a force that pushes negative examples far away from positive examples. Finally, the similarity propagation technique is utilized to provide negative examples with another force that will pull them back into the negative sample set. We evaluate the proposed method on a subset of the IRMA medical image dataset with a RF-based medical image retrieval framework. Experimental results show that DFISOMAP outperforms popular approaches for content-based medical image retrieval in terms of accuracy and stability. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
27. Effective Moment Feature Vectors for Protein Domain Structures.
- Author
-
Shi, Jian-Yu, Yiu, Siu-Ming, Zhang, Yan-Ning, and Chin, Francis Yuk-Lun
- Subjects
PROTEIN structure ,BIOMEDICAL signal processing ,PROTEOMICS ,VECTOR spaces ,DECISION theory - Abstract
Image processing techniques have been shown to be useful in studying protein domain structures. The idea is to represent the pairwise distances of any two residues of the structure in a 2D distance matrix (DM). Features and/or submatrices are extracted from this DM to represent a domain. Existing approaches, however, may involve a large number of features (100–400) or complicated mathematical operations. Finding fewer but more effective features is always desirable. In this paper, based on some key observations on DMs, we are able to decompose a DM image into four basic binary images, each representing the structural characteristics of a fundamental secondary structure element (SSE) or a motif in the domain. Using the concept of moments in image processing, we further derive 45 structural features based on the four binary images. Together with 4 features extracted from the basic images, we represent the structure of a domain using 49 features. We show that our feature vectors can represent domain structures effectively in terms of the following. (1) We show a higher accuracy for domain classification. (2) We show a clear and consistent distribution of domains using our proposed structural vector space. (3) We are able to cluster the domains according to our moment features and demonstrate a relationship between structural variation and functional diversity. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
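The moment features mentioned here build on standard image moments. A minimal sketch of raw moments and the centroid of a binary image, on a toy image rather than one of the paper's distance-matrix decompositions:

```python
def raw_moment(img, p, q):
    """Raw image moment m_pq = sum over pixels of x^p * y^q * I(y, x)."""
    return sum(x ** p * y ** q * v
               for y, row in enumerate(img)
               for x, v in enumerate(row))

# Toy binary image: a 2x2 block of foreground pixels.
img = [[0, 1, 1],
       [0, 1, 1],
       [0, 0, 0]]
m00 = raw_moment(img, 0, 0)        # zeroth moment = area = 4
cx = raw_moment(img, 1, 0) / m00   # centroid x = (1+2+1+2)/4 = 1.5
cy = raw_moment(img, 0, 1) / m00   # centroid y = (0+0+1+1)/4 = 0.5
print(m00, cx, cy)
```

Central and scale-normalized moments, as used for shape features, are built from these raw moments by shifting coordinates to the centroid and normalizing by powers of `m00`.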
28. Vehicle Scheduling Schemes for Commercial and Emergency Logistics Integration.
- Author
-
Li, Xiaohui and Tan, Qingmei
- Subjects
COMPUTER scheduling ,LOGISTICS ,LARGE scale integration of circuits ,PROFIT maximization ,INDUSTRIAL engineering ,EMERGENCY management ,DISASTER relief ,VEHICLES - Abstract
In modern logistics operations, large-scale logistics companies, besides actively participating in profit-seeking commercial business, also play an essential role during emergency relief by dispatching urgently required materials to disaster-affected areas. Therefore, a question widely discussed by logistics practitioners, and one attracting growing attention from researchers, is how logistics companies can achieve maximum commercial profit while ensuring that emergency tasks are performed effectively and satisfactorily. In this paper, two vehicle scheduling models are proposed to solve this problem. One is a prediction-based scheme, which predicts the amounts of disaster-relief materials and commercial business and then accepts the business that will generate maximum profit; the other is a priority-directed scheme, which first groups commercial and emergency business according to priority grades and then schedules both types of business jointly and simultaneously so as to maximize total priority. Moreover, computer-based simulations are carried out to evaluate the performance of these two models by comparing them with two traditional disaster-relief tactics in China. The results confirm the feasibility and effectiveness of the proposed models. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
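A toy sketch of the priority-directed idea: grade jobs, then serve them in descending priority order, greedily balancing load across vehicles. The job names, priorities and durations are invented, and the paper's actual scheduling models are considerably richer:

```python
def priority_schedule(jobs, n_vehicles):
    """Greedy sketch of a priority-directed scheme: serve jobs in
    descending priority, assigning each to the least-loaded vehicle.
    Each job is a (name, priority, hours) tuple."""
    loads = [0.0] * n_vehicles
    plan = [[] for _ in range(n_vehicles)]
    for name, priority, hours in sorted(jobs, key=lambda j: -j[1]):
        v = loads.index(min(loads))   # least-loaded vehicle so far
        plan[v].append(name)
        loads[v] += hours
    return plan

jobs = [("commercial-1", 2, 4.0), ("relief-1", 9, 3.0),
        ("commercial-2", 1, 2.0), ("relief-2", 8, 5.0)]
plan = priority_schedule(jobs, 2)
print(plan)   # relief jobs are scheduled before commercial ones
```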
29. NESSTI: Norms for Environmental Sound Stimuli.
- Author
-
Hocking, Julia, Dzafic, Ilvana, Kazovsky, Maria, and Copland, David A.
- Subjects
NOISE (Work environment) ,AVERSIVE stimuli ,NEUROPSYCHOLOGY ,BRAIN imaging ,COMPARATIVE studies ,QUESTIONNAIRES ,COGNITIVE analysis - Abstract
In this paper we provide normative data along multiple cognitive and affective variable dimensions for a set of 110 sounds, including living and manmade stimuli. Environmental sounds are being increasingly utilized as stimuli in the cognitive, neuropsychological and neuroimaging fields, yet there is no comprehensive set of normative information for these types of stimuli available for use across these experimental domains. Experiment 1 collected data from 162 participants in an on-line questionnaire, which included measures of identification and categorization as well as cognitive and affective variables. A subsequent experiment collected response times to these sounds. Sounds were normalized to the same length (1 second) in order to maximize usage across multiple paradigms and experimental fields. These sounds can be freely downloaded for use, and all response data have also been made available so that researchers can choose one or many of the cognitive and affective dimensions along which they would like to control their stimuli. Our hope is that the availability of such information will assist researchers in the fields of cognitive and clinical psychology and the neuroimaging community in choosing well-controlled environmental sound stimuli, and allow comparison across multiple studies. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
30. Identification of Bicluster Regions in a Binary Matrix and Its Applications.
- Author
-
Chen, Hung-Chia, Zou, Wen, Tien, Yin-Jing, and Chen, James J.
- Subjects
DOCUMENT clustering ,COMPUTER algorithms ,COMPUTATIONAL biology ,GENE expression ,INFORMATION technology ,SIGNAL processing ,DATA mining - Abstract
Biclustering has emerged as an important approach to the analysis of large-scale datasets. A biclustering technique identifies a subset of rows that exhibit similar patterns on a subset of columns in a data matrix. Many biclustering methods have been proposed, and most, if not all, algorithms are developed to detect regions of “coherence” patterns. These methods perform unsatisfactorily if the purpose is to identify biclusters of a constant level. This paper presents a two-step biclustering method to identify constant-level biclusters for binary or quantitative data. The algorithm identifies the submatrix of maximal dimensions such that the proportion of non-signal cells is less than a pre-specified tolerance δ. In the analysis of two synthetic datasets, the proposed method showed much higher sensitivity and slightly lower specificity than several prominent biclustering methods. It was further compared with the Bimax method on two real datasets, where it was shown to be the most robust in terms of sensitivity, number of biclusters and number of serotype-specific biclusters identified. However, dichotomization using different signal-level thresholds usually leads to different sets of biclusters; this also occurs in the present analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
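The bicluster acceptance criterion described in this abstract (proportion of non-signal cells below a tolerance δ) can be sketched directly; the matrix and index sets below are toy examples, and the search over candidate submatrices is the harder part the paper addresses:

```python
def within_tolerance(matrix, rows, cols, delta):
    """Bicluster criterion from the abstract: the proportion of
    non-signal (zero) cells in the submatrix is at most delta."""
    cells = [matrix[r][c] for r in rows for c in cols]
    return cells.count(0) / len(cells) <= delta

M = [[1, 1, 0, 0],
     [1, 1, 1, 0],
     [0, 0, 1, 1]]
print(within_tolerance(M, [0, 1], [0, 1], 0.1))  # all signal -> True
print(within_tolerance(M, [0, 2], [0, 1], 0.1))  # half zeros -> False
```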
31. A Semi-Quantitative Method to Denote Generic Physical Activity Phenotypes from Long-Term Accelerometer Data – The ATLAS Index
- Author
-
Marschollek, Michael
- Subjects
PHYSICAL activity ,EPIDEMIOLOGY ,MORTALITY ,COHORT analysis ,HEALTH outcome assessment ,PHENOTYPES ,ACCELEROMETERS ,COMPUTATIONAL biology - Abstract
Background: Physical activity is inversely correlated with morbidity and mortality risk. Large cohort studies use wearable accelerometer devices to measure physical activity objectively, providing data potentially relevant for identifying different activity patterns and correlating these with health-related outcome measures. A method is needed to compute relevant characteristics of such data not only with regard to duration and intensity, but also to the regularity of activity events. The aims of this paper are to propose a new method – the ATLAS index (Activity Types from Long-term Accelerometric Sensor data) – to derive generic measures for distinguishing different characteristic activity phenotypes from accelerometer data, to propose a comprehensive graphical representation, and to conduct a proof-of-concept with long-term measurements from different devices and cohorts. Methods: The ATLAS index consists of the three dimensions regularity (reg), duration (dur) and intensity (int) of relevant activity events identified in long-term accelerometer data. It can be regarded as a 3D vector and represented in a 3D cube graph. Twelve exemplary data sets from three different cohort studies, comprising 99,467 minutes of data, were chosen for concept validation. Results: Five archetypical activity types are proposed along with their dimensional characteristics (insufficiently active: low reg, int and dur; busy bee: low dur and int, high reg; cardio-active: medium reg, int and dur; endurance athlete: high reg, int and dur; and weekend warrior: high int and dur, low reg). The data sets are displayed in one common graph, indicating characteristic differences in activity patterns. Conclusion: The ATLAS index incorporates the relevant regularity dimension in addition to the widely used measures of duration and intensity. Along with the 3D representation, it allows different activity types in cohort study populations to be compared both visually and computationally, using vector distance measures.
Further research is necessary to validate the ATLAS index in order to find normative values and group centroids. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
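Since the ATLAS index is a 3D (reg, dur, int) vector compared via vector distance measures, the comparison step can be sketched in a few lines. The activity vectors below are invented illustrations of the archetypes named in the abstract, assuming each dimension is pre-scaled to [0, 1]:

```python
import math

def atlas_distance(a, b):
    """Euclidean distance between two (reg, dur, int) activity vectors."""
    return math.dist(a, b)

# Hypothetical archetype vectors (regularity, duration, intensity).
busy_bee = (0.9, 0.2, 0.2)          # high regularity, low duration/intensity
weekend_warrior = (0.2, 0.8, 0.8)   # low regularity, high duration/intensity
endurance = (0.9, 0.9, 0.9)         # high on all three dimensions
print(round(atlas_distance(busy_bee, weekend_warrior), 3))  # sqrt(1.21) = 1.1
```

Group centroids, as mentioned in the follow-up note, would simply be the dimension-wise means of such vectors over a cohort.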
32. Enhancement of Chemical Entity Identification in Text Using Semantic Similarity Validation
- Author
-
Grego, Tiago and Couto, Francisco M.
- Subjects
INFORMATION retrieval ,TEXT mining ,CHEMINFORMATICS ,BIOMETRIC identification ,COMPUTATIONAL biology ,NATURAL language processing ,DATA analysis - Abstract
With the amount of chemical data being produced and reported in the literature growing at a fast pace, it is increasingly important to retrieve this information efficiently. To tackle this issue, text mining tools have been applied, but despite their good performance they still produce many errors that we believe can be filtered out using semantic similarity. Thus, this paper proposes a novel method that receives the results of chemical entity identification systems, such as Whatizit, and exploits the semantic relationships in ChEBI to measure the similarity between the entities found in the text. The method assigns a single validation score to each entity based on its similarities with the other entities also identified in the text. Then, by using a given threshold, the method selects a set of validated entities and a set of outlier entities. We evaluated our method using the results of two state-of-the-art chemical entity identification tools, three semantic similarity measures and two text window sizes. The method was able to increase precision without filtering out a significant number of correctly identified entities, meaning that it can effectively discriminate correctly identified chemical entities while discarding a significant number of identification errors. For example, selecting a validation set containing 75% of all identified entities, we were able to increase precision by 28% for one of the chemical entity identification tools (Whatizit), while retaining 97% of the correctly identified entities in that subset. Our method can be used directly as an add-on by any state-of-the-art entity identification tool that provides mappings to a database, in order to improve its results. The proposed method is included in a freely accessible web tool at www.lasige.di.fc.ul.pt/webtools/ice/. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
33. Molecular Optical Simulation Environment (MOSE): A Platform for the Simulation of Light Propagation in Turbid Media.
- Author
-
Ren, Shenghan, Chen, Xueli, Wang, Hailong, Qu, Xiaochao, Wang, Ge, Liang, Jimin, and Tian, Jie
- Subjects
LIGHT propagation ,INTRINSIC optical imaging ,BIOLUMINESCENCE ,MONTE Carlo method ,FLUORESCENCE ,OPTICAL tomography ,BIOMEDICAL engineering instruments - Abstract
The study of light propagation in turbid media has attracted extensive attention in the field of biomedical optical molecular imaging. In this paper, we present a software platform for the simulation of light propagation in turbid media named the “Molecular Optical Simulation Environment (MOSE)”. Based on the gold standard of the Monte Carlo method, MOSE simulates light propagation both in tissues with complicated structures and through free-space. In particular, MOSE synthesizes realistic data for bioluminescence tomography (BLT), fluorescence molecular tomography (FMT), and diffuse optical tomography (DOT). The user-friendly interface and powerful visualization tools facilitate data analysis and system evaluation. As a major measure for resource sharing and reproducible research, MOSE aims to provide freeware for research and educational institutions, which can be downloaded at http://www.mosetm.net. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
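The Monte Carlo treatment of light propagation that MOSE builds on starts from sampling exponential free paths between photon interactions. A minimal sketch of that single ingredient, estimating ballistic transmission through a homogeneous slab and checking it against the Beer–Lambert value (the parameters are illustrative, not MOSE settings):

```python
import math
import random

def transmitted_fraction(mu_t, depth, n, seed=0):
    """Monte Carlo estimate of ballistic transmission through a slab:
    sample exponential free paths s = -ln(1 - U) / mu_t and count photons
    whose first interaction falls beyond the slab depth."""
    rng = random.Random(seed)
    hits = sum(-math.log(1.0 - rng.random()) / mu_t > depth
               for _ in range(n))
    return hits / n

est = transmitted_fraction(mu_t=1.0, depth=1.0, n=200_000)
print(round(est, 3), round(math.exp(-1.0), 3))  # estimate vs. Beer-Lambert value
```

A full simulator adds scattering-direction sampling, absorption weighting and boundary handling on top of this step-sampling loop.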
34. Approximate Subgraph Matching-Based Literature Mining for Biomedical Events and Relations.
- Author
-
Liu, Haibin, Hunter, Lawrence, Kešelj, Vlado, and Verspoor, Karin
- Subjects
COMPUTATIONAL biology ,BIOMEDICAL materials ,SUBGRAPHS ,SIGNAL processing ,BIOLOGICAL databases ,NATURAL language processing ,INFORMATION technology - Abstract
The biomedical text mining community has focused on developing techniques to automatically extract important relations between biological components and semantic events involving genes or proteins from literature. In this paper, we propose a novel approach for mining relations and events in the biomedical literature using approximate subgraph matching. Extraction of such knowledge is performed by searching for an approximate subgraph isomorphism between key contextual dependencies and input sentence graphs. Our approach significantly increases the chance of retrieving relations or events encoded within complex dependency contexts by introducing error tolerance into the graph matching process, while maintaining the extraction precision at a high level. When evaluated on practical tasks, it achieves a 51.12% F-score in extracting nine types of biological events on the GE task of the BioNLP-ST 2011 and an 84.22% F-score in detecting protein-residue associations. The performance is comparable to that of reported systems across these tasks, thus demonstrating the generalizability of our proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
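The F-scores quoted in this abstract are the standard harmonic mean of precision and recall; for reference, the computation on invented counts (not the paper's confusion matrices):

```python
def f_score(tp, fp, fn):
    """F1: harmonic mean of precision and recall from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical event-extraction counts: P = 0.8, R = 2/3.
print(round(100 * f_score(tp=80, fp=20, fn=40), 2))  # 72.73
```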
35. A Multi-Paradigm Modeling Framework to Simulate Dynamic Reciprocity in a Bioreactor.
- Author
-
Kaul, Himanshu, Cui, Zhanfeng, and Ventikos, Yiannis
- Subjects
BIOREACTORS ,TECHNOLOGICAL innovations ,CELL growth ,CELL populations ,CHEMOTAXIS ,CELL migration ,CELL proliferation ,BIOTECHNOLOGY - Abstract
Despite numerous technological advances, bioreactors are still mostly utilized as functional black boxes where trial and error eventually leads to the desired cellular outcome. Investigators have applied various computational approaches to understand the impact that the internal dynamics of such devices have on overall cell growth, but such models cannot provide a comprehensive perspective on the system dynamics, due to limitations inherent in the underlying approaches. In this study, a novel multi-paradigm modeling platform capable of simulating the dynamic bidirectional relationship between cells and their microenvironment is presented. Designing the modeling platform entailed fully coupling an agent-based modeling platform with a computational transport-phenomena modeling framework. To demonstrate its capability, the platform was used to study the impact of bioreactor parameters on overall cell population behavior and vice versa. To achieve this, virtual bioreactors were constructed and seeded. The virtual cells, guided by a set of rules involving the simulated mass transport inside the bioreactor as well as cell-related probabilistic parameters, were capable of displaying an array of behaviors such as proliferation, migration, chemotaxis and apoptosis. In this way the platform was shown to capture not only the impact of bioreactor transport processes on cellular behavior but also the influence that cellular activity wields on that very same local mass transport, thereby influencing overall cell growth. The platform was validated by simulating cellular chemotaxis in a virtual direct-visualization chamber and comparing the simulation with its experimental analogue. The results presented in this paper are in agreement with published models of a similar nature. The modeling platform can be used as a concept selection tool to optimize bioreactor design specifications. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
36. Analysis and Optimization of Pulse Dynamics for Magnetic Stimulation.
- Author
-
Goetz, Stefan M., Truong, Cong Nam, Gerhofer, Manuel G., Peterchev, Angel V., Herzog, Hans-Georg, and Weyh, Thomas
- Subjects
TRANSCRANIAL magnetic stimulation ,BRAIN research ,MEDICAL rehabilitation ,MEDICAL literature reviews ,ENERGY dissipation ,COMPUTATIONAL biology ,BIOENGINEERING ,MEDICAL equipment - Abstract
Magnetic stimulation is a standard tool in brain research and has found important clinical applications in neurology, psychiatry, and rehabilitation. Whereas coil designs and the spatial field properties have been intensively studied in the literature, the temporal dynamics of the field has received less attention. Typically, the magnetic field waveform is determined by available device circuit topologies rather than by consideration of what is optimal for neural stimulation. This paper analyzes and optimizes the waveform dynamics using a nonlinear model of a mammalian axon. The optimization objective was to minimize the pulse energy loss. The energy loss drives power consumption and heating, which are the dominating limitations of magnetic stimulation. The optimization approach is based on a hybrid global-local method. Different coordinate systems for describing the continuous waveforms in a limited parameter space are defined for numerical stability. The optimization results suggest that there are waveforms with substantially higher efficiency than that of traditional pulse shapes. One class of optimal pulses is analyzed further. Although the coil voltage profile of these waveforms is almost rectangular, the corresponding current shape presents distinctive characteristics, such as a slow low-amplitude first phase which precedes the main pulse and reduces the losses. Representatives of this class of waveforms corresponding to different maximum voltages are linked by a nonlinear transformation. The main phase, however, scales with time only. As with conventional magnetic stimulation pulses, briefer pulses result in lower energy loss but require higher coil voltage than longer pulses. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
37. Developing and Evaluating a Target-Background Similarity Metric for Camouflage Detection.
- Author
-
Lin, Chiuhsiang Joe, Chang, Chi-Chan, and Liu, Bor-Shong
- Subjects
CAMOUFLAGE (Military science) ,STEALTH aircraft ,COMPUTER algorithms ,COMPUTER-aided design ,PSYCHOPHYSICS ,IMAGE quality in imaging systems - Abstract
Background: Measurement of camouflage performance is of fundamental importance for military stealth applications. The goal of camouflage assessment algorithms is to automatically assess the effect of camouflage in agreement with human detection responses. In a previous study, we found that the Universal Image Quality Index (UIQI) correlated well with psychophysical measures, suggesting that it could serve as a camouflage assessment tool. Methodology: In this study, we quantify the relationship between the camouflage similarity index and psychophysical results. We compare several image quality indexes for computational evaluation of camouflage effectiveness, present the results of an extensive human visual experiment conducted to evaluate the performance of several camouflage assessment algorithms, and analyze the strengths and weaknesses of these algorithms. Significance: The experimental data demonstrate the effectiveness of the approach, and the correlation coefficient of the UIQI was higher than those of the other methods. This approach was highly correlated with the human target-searching results. It is also an objective and effective camouflage performance evaluation method because it considers the human visual system and image structure, which makes it consistent with the subjective evaluation results. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
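The UIQI referred to above is the index of Wang and Bovik, which combines correlation, luminance, and contrast distortion in a single score in [-1, 1]. A minimal single-window sketch over two flattened pixel lists (the paper applies such indices to target/background patches, which is not reproduced here):

```python
# Sketch of the Universal Image Quality Index (UIQI):
#   Q = 4 * cov(x, y) * mean(x) * mean(y)
#       / ((var(x) + var(y)) * (mean(x)^2 + mean(y)^2))
# computed globally; real use slides a window over the image.

def uiqi(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return 4 * cov * mx * my / ((vx + vy) * (mx * mx + my * my))

print(uiqi([1, 2, 3, 4], [1, 2, 3, 4]))  # identical signals -> 1.0
```

Under this reading, a high UIQI between target and background patches means the target blends in, which is what the psychophysical detection times are correlated against.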
38. Leukemia Prediction Using Sparse Logistic Regression.
- Author
-
Manninen, Tapio, Huttunen, Heikki, Ruusuvuori, Pekka, and Nykter, Matti
- Subjects
LEUKEMIA diagnosis, LOGISTIC regression analysis, FLOW cytometry, MYELOID leukemia, FLUORESCENCE, BIOMARKERS, HEMATOLOGY, PATIENTS
- Abstract
We describe a supervised prediction method for diagnosis of acute myeloid leukemia (AML) from patient samples based on flow cytometry measurements. We use a data-driven approach with machine learning methods to train a computational model that takes in flow cytometry measurements from a single patient and gives a confidence score of the patient being AML-positive. Our solution is based on a regularized logistic regression model that aggregates AML test statistics calculated from individual test tubes with different cell populations and fluorescent markers. The model construction is entirely data driven and no prior biological knowledge is used. The described solution scored 100% classification accuracy in the DREAM6/FlowCAP2 Molecular Classification of Acute Myeloid Leukaemia Challenge against a gold standard consisting of 20 AML-positive and 160 healthy patients. Here we perform a more extensive validation of the prediction model's performance and further improve and simplify our original method, showing that statistically equal results can be obtained by using simple average marker intensities as features in the logistic regression model. In addition to the logistic regression based model, we also present other classification models and compare their performance quantitatively. The key benefit of our prediction method compared to other solutions with similar performance is that our model only uses a small fraction of the flow cytometry measurements, making our solution highly economical. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
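The simplified model described above, logistic regression on average marker intensities, can be sketched as follows. The data, feature layout, and hyperparameters here are synthetic illustrations and not the authors' trained model:

```python
# Minimal sketch: L2-regularized logistic regression trained by SGD on
# two synthetic "average marker intensity" features per patient.
import math, random

def train_logreg(X, y, lr=0.1, epochs=500, l2=0.01):
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = 1 / (1 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            g = p - yi                      # gradient of the log-loss
            w = [wj - lr * (g * xj + l2 * wj) for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    """Confidence score in (0, 1) that the sample is AML-positive."""
    return 1 / (1 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))

random.seed(0)
# Synthetic cohort: AML-positive intensities shifted upward.
X = [[random.gauss(1, 0.3), random.gauss(1, 0.3)] for _ in range(40)] + \
    [[random.gauss(2, 0.3), random.gauss(2, 0.3)] for _ in range(40)]
y = [0] * 40 + [1] * 40
w, b = train_logreg(X, y)
acc = sum((predict(w, b, xi) > 0.5) == yi for xi, yi in zip(X, y)) / len(y)
print(acc)   # well-separated synthetic classes: near-perfect training accuracy
```

The appeal of the simplified feature set is visible even in the sketch: one scalar per marker per tube keeps the model cheap while leaving the classifier linear and interpretable.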
39. Comparison of Sensor Selection Mechanisms for an ERP-Based Brain-Computer Interface.
- Author
-
Feess, David, Krell, Mario M., and Metzen, Jan H.
- Subjects
BRAIN-computer interfaces, ELECTROENCEPHALOGRAPHY, ELECTRODES, INFORMATION processing, BIOTECHNOLOGY, BIOENGINEERING, COMPUTATIONAL biology, APPLIED mathematics
- Abstract
A major barrier for a broad applicability of brain-computer interfaces (BCIs) based on electroencephalography (EEG) is the large number of EEG sensor electrodes typically used. The necessity for this results from the fact that the relevant information for the BCI is often spread over the scalp in complex patterns that differ depending on subjects and application scenarios. Recently, a number of methods have been proposed to determine an individual optimal sensor selection. These methods have, however, rarely been compared against each other or against any type of baseline. In this paper, we review several selection approaches and propose one additional selection criterion based on the evaluation of the performance of a BCI system using a reduced set of sensors. We evaluate the methods in the context of a passive BCI system that is designed to detect a P300 event-related potential and compare the performance of the methods against randomly generated sensor constellations. For a realistic estimation of the reduced system's performance we transfer sensor constellations found on one experimental session to a different session for evaluation. We identified notable (and unanticipated) differences among the methods and could demonstrate that the best method in our setup is able to reduce the required number of sensors considerably. Though our application focuses on EEG data, all presented algorithms and evaluation schemes can be transferred to any binary classification task on sensor arrays. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
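One family of selection criteria discussed above evaluates the performance of the BCI system on reduced sensor sets. A hedged sketch of that idea as greedy backward elimination; the additive toy score stands in for retraining and evaluating the classifier on each candidate subset:

```python
# Illustrative greedy backward elimination for sensor selection: repeatedly
# drop the sensor whose removal degrades a performance score the least.
# The score function is a toy stand-in for a cross-validated BCI accuracy.

def backward_eliminate(sensors, score, keep):
    selected = list(sensors)
    while len(selected) > keep:
        # Remove the sensor whose absence leaves the best score.
        victim = max(selected,
                     key=lambda s: score([t for t in selected if t != s]))
        selected.remove(victim)
    return selected

# Toy score: each sensor contributes a fixed amount of "information".
info = {"Fz": 0.1, "Cz": 0.9, "Pz": 0.8, "Oz": 0.2}
score = lambda subset: sum(info[s] for s in subset)
print(backward_eliminate(list(info), score, keep=2))  # ['Cz', 'Pz']
```

The paper's transfer setup, choosing the constellation on one session and evaluating on another, would correspond to computing `score` on held-out session data.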
40. Using Eye Movement to Control a Computer: A Design for a Lightweight Electro-Oculogram Electrode Array and Computer Interface.
- Author
-
Iáñez, Eduardo, Azorin, Jose M., and Perez-Vidal, Carlos
- Subjects
EYE movements, COMPUTER interfaces, HUMAN-computer interaction, COMPUTER algorithms, ELECTROPHYSIOLOGY, BIOENGINEERING, MECHANICAL engineering
- Abstract
This paper describes a human-computer interface based on electro-oculography (EOG) that allows interaction with a computer using eye movement. The EOG registers the movement of the eye by measuring, through electrodes, the difference of potential between the cornea and the retina. A new pair of EOG glasses has been designed to improve the user's comfort and to remove the manual procedure of placing the EOG electrodes around the user's eyes. The interface, which includes the EOG electrodes, uses a new processing algorithm that is able to detect the gaze direction and the blink of the eyes from the EOG signals. The system reliably enabled subjects to control the movement of a dot on a video screen. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
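Detecting gaze direction and blinks from EOG deflections can be sketched with simple thresholds on the horizontal and vertical channels. The thresholds, channel convention, and microvolt values here are illustrative assumptions, not the paper's algorithm:

```python
# Simplified sketch of threshold-based EOG interpretation: horizontal and
# vertical channel deflections (synthetic microvolt values) map to a gaze
# direction, and a large vertical spike is treated as a blink.

def classify_eog(h_uv, v_uv, move_thr=50.0, blink_thr=200.0):
    if v_uv > blink_thr:
        return "blink"
    if abs(h_uv) > abs(v_uv):        # dominant horizontal deflection
        if h_uv > move_thr:
            return "right"
        if h_uv < -move_thr:
            return "left"
    else:                            # dominant vertical deflection
        if v_uv > move_thr:
            return "up"
        if v_uv < -move_thr:
            return "down"
    return "center"

print(classify_eog(80, 10))    # right
print(classify_eog(-5, -90))   # down
print(classify_eog(0, 250))    # blink
```

Mapping these symbolic outputs to dot movements on screen is then a straightforward event loop.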
41. 4D Segmentation of Brain MR Images with Constrained Cortical Thickness Variation.
- Author
-
Wang, Li, Shi, Feng, Li, Gang, and Shen, Dinggang
- Subjects
MAGNETIC resonance imaging of the brain, IMAGE segmentation, DISEASE progression, LONGITUDINAL method, BRAIN imaging, IMAGE analysis, THICKNESS measurement
- Abstract
Segmentation of brain MR images plays an important role in the longitudinal investigation of developmental, aging, and disease-progression changes in the cerebral cortex. However, most existing brain segmentation methods consider multiple time-point images individually and thus cannot achieve longitudinal consistency. For example, cortical thickness measured from the segmented image will contain unnecessary temporal variations, which will affect the time-related change pattern and eventually reduce the statistical power of the analysis. In this paper, we propose a 4D segmentation framework for adult brain MR images with the constraint of cortical thickness variations. Specifically, we utilize local intensity information to address the intensity inhomogeneity, a spatial cortical thickness constraint to keep the cortical thickness within a reasonable range, and a temporal cortical thickness variation constraint in neighboring time-points to suppress artificial variations. The proposed method has been tested on the BLSA and ADNI datasets with promising results. Both qualitative and quantitative experimental results demonstrate the advantage of the proposed method in comparison to other state-of-the-art 4D segmentation methods. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
42. A Bio-Inspired Methodology of Identifying Influential Nodes in Complex Networks.
- Author
-
Gao, Cai, Lan, Xin, Zhang, Xiaoge, and Deng, Yong
- Subjects
BIOLOGICALLY inspired computing, COMPUTATIONAL biology, COMPUTER algorithms, INDUSTRIAL engineering, NETWORK analysis (Communication), MATHEMATICAL models, APPLIED mathematics
- Abstract
How to identify influential nodes is a key issue in complex networks. Degree centrality is simple but incapable of reflecting the global characteristics of networks. Betweenness centrality and closeness centrality do not consider the location of nodes in the networks, and the semi-local centrality, LeaderRank, and PageRank approaches can only be applied to unweighted networks. In this paper, a bio-inspired centrality measure model is proposed, which combines the Physarum centrality with the K-shell index obtained by K-shell decomposition analysis, to identify influential nodes in weighted networks. Then, we use the Susceptible-Infected (SI) model to evaluate the performance. Examples and applications are given to demonstrate the adaptivity and efficiency of the proposed method. In addition, the results are compared with existing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
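The K-shell index mentioned above comes from K-shell decomposition, which iteratively peels off low-degree nodes; nodes surviving to deeper shells are closer to the network core. A pure-Python sketch on an unweighted toy graph (the Physarum centrality and its combination for weighted networks are not reproduced here):

```python
# Pure-Python K-shell decomposition on an unweighted, undirected toy graph.

def k_shell(adj):
    """Return {node: shell index} by iteratively peeling low-degree nodes."""
    adj = {u: set(vs) for u, vs in adj.items()}   # work on a copy
    shell, k = {}, 0
    while adj:
        k += 1
        peeled = True
        while peeled:                   # peel until no node has degree <= k
            peeled = False
            for u in [n for n, vs in adj.items() if len(vs) <= k]:
                shell[u] = k
                for v in adj[u]:        # detach u from its neighbours
                    adj[v].discard(u)
                del adj[u]
                peeled = True
    return shell

# Triangle a-b-c with a pendant node d attached to a:
graph = {"a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": {"a"}}
print(k_shell(graph))  # d falls in the 1-shell, the triangle in the 2-shell
```

The pendant node illustrates the point made in the abstract: its degree-1 status and shallow shell mark it as peripheral even though it touches the highest-degree node.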
43. Bayesian Parameter Estimation and Segmentation in the Multi-Atlas Random Orbit Model.
- Author
-
Tang, Xiaoying, Oishi, Kenichi, Faria, Andreia V., Hillis, Argye E., Albert, Marilyn S., Mori, Susumu, and Miller, Michael I.
- Subjects
SEGMENTATION (Biology), PARAMETER estimation, COMPUTATIONAL biology, BIOMEDICAL engineering, DIFFERENTIAL geometry, MORPHOGENESIS, BIOENGINEERING
- Abstract
This paper examines the multiple atlas random diffeomorphic orbit model in Computational Anatomy (CA) for parameter estimation and segmentation of subcortical and ventricular neuroanatomy in magnetic resonance imagery. We assume that there exist multiple magnetic resonance image (MRI) atlases, each atlas containing a collection of locally-defined charts in the brain generated via manual delineation of the structures of interest. We focus on maximum a posteriori estimation of high dimensional segmentations of MR within the class of generative models representing the observed MRI as a conditionally Gaussian random field, conditioned on the atlas charts and the diffeomorphic change of coordinates of each chart that generates it. The charts and their diffeomorphic correspondences are unknown and viewed as latent or hidden variables. We demonstrate that the expectation-maximization (EM) algorithm arises naturally, yielding the likelihood-fusion equation which the a posteriori estimator of the segmentation labels maximizes. The likelihoods being fused are modeled as conditionally Gaussian random fields with mean fields a function of each atlas chart under its diffeomorphic change of coordinates onto the target. The conditional-mean in the EM algorithm specifies the convex weights with which the chart-specific likelihoods are fused. The multiple atlases with the associated convex weights imply that the posterior distribution is a multi-modal representation of the measured MRI. Segmentation results for subcortical and ventricular structures of subjects, within populations of demented subjects, are demonstrated, including the use of multiple atlases across multiple diseased groups. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
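The likelihood-fusion idea summarized above can be written schematically; the notation here (labels $W$, observed image $I$, atlas index $a$, convex weights $\pi_a$) is assumed for illustration rather than taken from the paper:

```latex
% Schematic multi-atlas likelihood fusion (notation assumed, not the
% paper's exact symbols): atlas-conditional likelihoods are fused with
% convex weights, and the weights are the conditional expectations
% computed in the E-step of the EM algorithm.
\[
  \hat{W} \;=\; \arg\max_{W}\; p(W)\,\sum_{a=1}^{A} \pi_a\,
  p\!\left(I \mid W, \varphi_a\right),
  \qquad
  \pi_a \ge 0,\quad \sum_{a=1}^{A} \pi_a = 1,
\]
% where $\varphi_a$ denotes the diffeomorphic change of coordinates
% carrying atlas $a$ onto the target MRI.
```

The multi-modality of the posterior noted in the abstract follows directly from this form: the posterior is a convex mixture over atlases rather than a single-atlas likelihood.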
44. Increasing the Contrast of the Brain MR FLAIR Images Using Fuzzy Membership Functions and Structural Similarity Indices in Order to Segment MS Lesions.
- Author
-
Bijar, Ahmad, Khayati, Rasoul, and Peñalver Benavent, Antonio
- Subjects
MAGNETIC resonance imaging of the brain, CONTRAST media, FUZZY systems in medicine, MULTIPLE sclerosis diagnosis, IMAGE segmentation, COMPUTATIONAL neuroscience, BRAIN, RADIOGRAPHY
- Abstract
Segmentation is an important step in the diagnosis of multiple sclerosis (MS). This paper presents a new approach to the fully automatic segmentation of MS lesions in Fluid Attenuated Inversion Recovery (FLAIR) Magnetic Resonance (MR) images. With the aim of increasing the contrast of the FLAIR MR images with respect to the MS lesions, the proposed method first estimates the fuzzy memberships of brain tissues (i.e., the cerebrospinal fluid (CSF), the normal-appearing brain tissue (NABT), and the lesion). The procedure for determining the fuzzy regions of their membership functions is performed by maximizing fuzzy entropy through a Genetic Algorithm. Research shows that the intersection points of the obtained membership functions are not accurate enough to segment brain tissues. Therefore, by extracting the structural similarity (SSIM) indices between the FLAIR MR image and its lesion-membership image, a new contrast-enhanced image is created in which MS lesions have high contrast against other tissues. Finally, the new contrast-enhanced image is used to segment MS lesions. To evaluate the result of the proposed method, similarity criteria for all slices from 20 MS patients are calculated and compared with other methods, including manual segmentation. The volume of segmented lesions is also computed and compared with the gold standard using the Intraclass Correlation Coefficient (ICC) and a paired-samples t test. The similarity index for the patients with small, moderate, and large lesion loads was 0.7261, 0.7745, and 0.8231, respectively. The average overall similarity index for all patients is 0.7649. The t test result indicates that there is no statistically significant difference between the automatic and manual segmentations. The validated results show that this approach is very promising. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
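The similarity index reported above for comparing automatic and manual lesion masks is conventionally the Dice coefficient, 2|A ∩ B| / (|A| + |B|); that convention is assumed in this sketch, and the masks are hand-made illustrations:

```python
# Sketch: Dice overlap between two binary lesion masks (flattened images).

def dice(mask_a, mask_b):
    """Dice coefficient of two equal-length binary masks."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2 * inter / size if size else 1.0  # two empty masks agree fully

auto = [1, 1, 1, 0, 0, 1]     # automatic segmentation, flattened
manual = [1, 1, 0, 0, 1, 1]   # manual gold standard, flattened
print(dice(auto, manual))     # 2*3 / (4 + 4) = 0.75
```

Values around 0.7-0.8, like those reported above, indicate substantial but imperfect overlap; small lesions are penalized hardest because a few misplaced voxels dominate the ratio.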
45. Self-Stabilization in Membrane Systems.
- Author
-
Alhazov, Artiom, Antoniotti, Marco, Freund, Rudolf, Leporati, Alberto, and Mauri, Giancarlo
- Subjects
SELF-stabilization (Computer science), REWRITING systems (Computer science), COMPUTER science, COMPUTER programming, BIOLOGY, ENGINEERING
- Abstract
In this paper we study a notion of self-stabilization, inspired from biology and engineering. Multiple variants of formalization of this notion are considered, and we discuss how such properties affect the computational power of multiset rewriting systems. [ABSTRACT FROM AUTHOR]
- Published
- 2012
46. Population estimation of whitefly for cotton plant using image processing approach
- Author
-
Varsha R. Ratnaparkhe and Monica N. Jige
- Subjects
biology, Computer science, fungi, food and beverages, Image processing, Whitefly, Agricultural engineering, engineering.material, Pesticide, biology.organism_classification, Fiber crop, Population estimation, engineering, Crop quality, PEST analysis
- Abstract
Cotton is the most important fiber crop not only of India but of the entire world. Whitefly, a bio-aggressor, was chosen as the pest of interest in this paper. Infection by pests reduces crop quality and yield. The easiest way to control pest infection is the use of pesticides, but excessive use of pesticides is hazardous: it not only kills the pests on the plant but also affects the health of humans, animals, and plants. To overcome this problem it is necessary to control the use of pesticides, and pest detection is the most important process for effective cultivation. Counting whiteflies on leaves is beneficial for preventing the spread of pests and for calculating the optimum amount of pesticide. This paper presents an algorithm for automatic detection and counting of whiteflies using image processing in MATLAB.
- Published
- 2017
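The counting step described above can be sketched as connected-component labelling of a binary pest mask via flood fill. The paper works in MATLAB; this Python version with a hand-made mask only illustrates the idea, and the segmentation that produces the mask is not shown:

```python
# Sketch: count whitefly blobs in a binary "pest pixel" mask using
# 4-connected flood fill (breadth-first search).
from collections import deque

def count_blobs(mask):
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1                    # new component found
                seen[r][c] = True
                q = deque([(r, c)])
                while q:                      # flood-fill this component
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return count

leaf = [[0, 1, 1, 0, 0],
        [0, 1, 0, 0, 1],
        [0, 0, 0, 1, 1],
        [0, 0, 0, 0, 0]]
print(count_blobs(leaf))  # two 4-connected whitefly blobs
```

The blob count then feeds directly into the pesticide-dosage decision the abstract motivates.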
47. Sustained Firing of Model Central Auditory Neurons Yields a Discriminative Spectro-temporal Representation for Natural Sounds
- Author
-
Mounya Elhilali and Michael A. Carlin
- Subjects
Male ,Computer science ,Speech recognition ,Audio Signal Processing ,0302 clinical medicine ,Engineering ,Cluster Analysis ,Natural sounds ,lcsh:QH301-705.5 ,Neurons ,0303 health sciences ,education.field_of_study ,Coding Mechanisms ,Ecology ,Sensory Systems ,medicine.anatomical_structure ,Computational Theory and Mathematics ,Modeling and Simulation ,Auditory Perception ,Female ,Research Article ,Population ,Models, Neurological ,Sensory system ,Stimulus (physiology) ,Auditory cortex ,03 medical and health sciences ,Cellular and Molecular Neuroscience ,Neuronal tuning ,Genetics ,medicine ,Auditory system ,Animals ,Humans ,Speech ,education ,Molecular Biology ,Biology ,Ecology, Evolution, Behavior and Systematics ,030304 developmental biology ,Computational Neuroscience ,Auditory Cortex ,Quantitative Biology::Neurons and Cognition ,Ferrets ,Computational Biology ,lcsh:Biology (General) ,Acoustic Stimulation ,Receptive field ,Signal Processing ,Vocalization, Animal ,Noise ,030217 neurology & neurosurgery ,Neuroscience - Abstract
The processing characteristics of neurons in the central auditory system are directly shaped by and reflect the statistics of natural acoustic environments, but the principles that govern the relationship between natural sound ensembles and observed responses in neurophysiological studies remain unclear. In particular, accumulating evidence suggests the presence of a code based on sustained neural firing rates, where central auditory neurons exhibit strong, persistent responses to their preferred stimuli. Such a strategy can indicate the presence of ongoing sounds, is involved in parsing complex auditory scenes, and may play a role in matching neural dynamics to varying time scales in acoustic signals. In this paper, we describe a computational framework for exploring the influence of a code based on sustained firing rates on the shape of the spectro-temporal receptive field (STRF), a linear kernel that maps a spectro-temporal acoustic stimulus to the instantaneous firing rate of a central auditory neuron. We demonstrate the emergence of richly structured STRFs that capture the structure of natural sounds over a wide range of timescales, and show how the emergent ensembles resemble those commonly reported in physiological studies. Furthermore, we compare ensembles that optimize a sustained firing code with one that optimizes a sparse code, another widely considered coding strategy, and suggest how the resulting population responses are not mutually exclusive. Finally, we demonstrate how the emergent ensembles contour the high-energy spectro-temporal modulations of natural sounds, forming a discriminative representation that captures the full range of modulation statistics that characterize natural sound ensembles. 
These findings have direct implications for our understanding of how sensory systems encode the informative components of natural stimuli and potentially facilitate multi-sensory integration.
Author Summary: We explore a fundamental question with regard to the representation of sound in the auditory system, namely: what are the coding strategies that underlie observed neurophysiological responses in central auditory areas? There has been debate in recent years as to whether neural ensembles explicitly minimize their propensity to fire (the so-called sparse coding hypothesis) or whether neurons exhibit strong, sustained firing rates when processing their preferred stimuli. Using computational modeling, we directly confront issues raised in this debate, and our results suggest that not only does a sustained firing strategy yield a sparse representation of sound, but the principle yields emergent neural ensembles that capture the rich structural variations present in natural stimuli. In particular, spectro-temporal receptive fields (STRFs) have been widely used to characterize the processing mechanisms of central auditory neurons and have revealed much about the nature of sound processing in central auditory areas. In our paper, we demonstrate how neurons that maximize a sustained firing objective yield STRFs akin to those commonly measured in physiological studies, capturing a wide range of aspects of natural sounds over a variety of timescales, suggesting that such a coding strategy underlies observed neural responses.
- Published
- 2013
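The STRF described above acts as a linear kernel mapping the recent spectro-temporal stimulus to an instantaneous firing rate, r(t) = Σ_{f,τ} STRF[f][τ] · S[f][t − τ]. A minimal sketch with arbitrary illustrative kernel and spectrogram values:

```python
# Sketch: linear STRF prediction of an instantaneous firing rate from a
# spectrogram. strf[f][tau] weights frequency channel f at time lag tau.

def strf_rate(strf, spec, t):
    n_lags = len(strf[0])
    return sum(strf[f][tau] * spec[f][t - tau]
               for f in range(len(strf))
               for tau in range(n_lags)
               if t - tau >= 0)          # ignore lags before stimulus onset

# 2 frequency channels, 2 time lags:
strf = [[1.0, -0.5],
        [0.5,  0.0]]
spec = [[0.0, 2.0, 1.0],   # channel 0 over time
        [1.0, 0.0, 3.0]]   # channel 1 over time
print(strf_rate(strf, spec, 2))  # 1*1 + (-0.5)*2 + 0.5*3 + 0*0 = 1.5
```

In the paper's framework the kernel itself is what emerges from optimizing the sustained-firing objective; here it is fixed by hand purely to show the linear mapping.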
48. Time-Gated Optical Projection Tomography Allows Visualization of Adult Zebrafish Internal Structures
- Author
-
Franco Cotelli, Cosimo D'Andrea, Efrem Foglia, Andrea Bassi, Gianluca Valentini, Luca Fieramonti, Rinaldo Cubeddu, Sandro De Silvestri, Giulio Cerullo, and Anna Pistocchi
- Subjects
Time Factors ,genetic structures ,Light ,Image Processing ,Nervous System ,Light scattering ,law.invention ,Diagnostic Radiology ,Animals, Genetically Modified ,Engineering ,law ,Optical Properties ,Zebrafish ,Physics ,Multidisciplinary ,Infrared Radiation ,Electromagnetic Radiation ,Resolution (electron density) ,Animal Models ,Optical Computing ,Medicine ,Tomography ,Radiology ,Artifacts ,Preclinical imaging ,Research Article ,medicine.medical_specialty ,Visible Light ,Science ,Materials Science ,Material Properties ,Bone and Bones ,Optics ,Model Organisms ,Computed Tomography ,Imaging, Three-Dimensional ,medicine ,Animals ,Tomography, Optical ,Medical physics ,Biology ,Tomographic reconstruction ,Computing Systems ,Scattering ,business.industry ,Lasers ,Laser ,eye diseases ,Visualization ,Computer Science ,Signal Processing ,sense organs ,business - Abstract
Optical imaging through biological samples is compromised by tissue scattering, and various approaches currently aim to overcome this limitation. In this paper we demonstrate that an all-optical technique, based on non-linear upconversion of infrared ultrashort laser pulses and on multiple-view acquisition, allows the reduction of scattering effects in tomographic imaging. This technique, namely Time-Gated Optical Projection Tomography (TGOPT), is used to reconstruct the internal structure of adult zebrafish in three dimensions without staining or clearing agents. This method extends the use of Optical Projection Tomography to optically diffusive samples, yielding reconstructions with reduced artifacts, increased contrast, and improved resolution with respect to those obtained with non-gated techniques. The paper shows that TGOPT is particularly suited for imaging the skeletal system and nervous structures of adult zebrafish.
- Published
- 2012
49. Experimental tests of PVD AlCrN-coated planer knives on planing Scots pine (Pinus sylvestris L.) under industrial conditions
- Author
-
Paweł Sutowski, Bogdan Warcholiński, Wojciech Kapłonek, Adam Gilewicz, Krzysztof Nadolny, Marzena Sutowska, and Piotr Myśliński
- Subjects
040101 forestry, 0106 biological sciences, biology, Computer science, Rake, Process (computing), Scots pine, Mechanical engineering, Forestry, 04 agricultural and veterinary sciences, Edge (geometry), engineering.material, Raw material, biology.organism_classification, 01 natural sciences, Coating, Machining, Wood processing, 010608 biotechnology, engineering, 0401 agriculture, forestry, and fisheries, General Materials Science
- Abstract
Raw pine wood processing, and especially its mechanical processing, constitutes a significant share of the technological operations leading to a finished product. Stable implementation of machining operations that ensures long-term repeatable processing results depends on many factors, such as the quality and invariability of the raw material, the technical condition of the technological equipment, the adopted working parameters, the qualifications and experience of the operators, as well as the preparation and properties of the machining tools used. The modification of the machining tools themselves seems to offer the greatest potential for increasing the efficiency of machining operations. This paper presents the results of research work aimed at determining how the life of cutting tools used in planing operations on wet pine wood is affected by the application of a chromium aluminum nitride (AlCrN) coating to planar industrial planing knives by physical vapour deposition. For this purpose, operational tests were carried out under production conditions in a medium-sized wood processing company. The study compares the effective working time, the rounding radius, the profile along the knife (size of worn edge displacement, wear area of the cutting edge), selected texture parameters of the rake face of the planar industrial planing knife, and visual analyses of the cutting edge condition of AlCrN-coated and unmodified planar knives. The obtained experimental results showed the possibility of increasing the life of AlCrN-coated knives by up to 154% compared to the results obtained with uncoated ones. The proposed modification of the operational features of the knives does not involve any changes in the technological process of planing and does not require any interference with the machining station or its parameters, therefore enabling rapid and easy implementation into industrial practice.
- Published
- 2021
50. Computer Science, Biology and Biomedical Informatics Academy: Outcomes from 5 Years of Immersing High-school Students into Informatics Research.
- Author
-
King, Andrew J., Fisher, Arielle M., Becich, Michael J., and Boone, David N.
- Subjects
COMPUTERS in biology, MEDICAL informatics, HIGH school students
- Abstract
The University of Pittsburgh's Department of Biomedical Informatics and Division of Pathology Informatics created a Science, Technology, Engineering, and Mathematics (STEM) pipeline in 2011 dedicated to providing cutting-edge informatics research and career preparatory experiences to a diverse group of highly motivated high-school students. In this third editorial installment describing the program, we provide a brief overview of the pipeline, report on achievements of the past scholars, and present results from self-reported assessments by the 2015 cohort of scholars. The pipeline continues to expand with the 2015 addition of the innovation internship, and the introduction of a program in 2016 aimed at offering first-time research experiences to undergraduates who are underrepresented in pathology and biomedical informatics. Achievements of program scholars include authorship of journal articles, symposium and summit presentations, and attendance at top 25 universities. All of our alumni matriculated into higher education and 90% remain in STEM majors. The 2015 high-school program had ten participating scholars who self-reported gains in confidence in their research abilities and understanding of what it means to be a scientist. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF