21 results for "Lampa S"
Search Results
2. Morphometric analysis of neuromuscular topography in the serratus anterior muscle
- Author
- Potluri, S., Lampa, S. J., Norton, A. S., and Laskowski, M. B.
- Published
- 2006
3. 351 MOTOR UNIT DISTRIBUTION FROM THE INFERIOR GLUTEAL NERVE IN THE GLUTEUS MAXIMUS MUSCLE
- Author
- Gilbert, K. C., Lampa, S. J., Norton, A. S., and Laskowski, M. B.
- Published
- 2005
4. ChemInform Abstract: THERMAL ANALYSIS OF MANGANESE OXIDE-COPPER OXIDE CATALYSTS
- Author
- Dondur, V., Lampa, S., and Vucelic, D.
- Published
- 1983
5. Linking the Resource Description Framework to cheminformatics and proteochemometrics
- Author
- Willighagen EL, Alvarsson J, Andersson A, Eklund M, Lampa S, Lapins M, Spjuth O, and Wikberg JES
- Subjects
- Computer applications to medicine. Medical informatics, R858-859.7
- Abstract
Background: Semantic web technologies are finding their way into the life sciences. Ontologies and semantic markup have been used in the molecular sciences for more than a decade, but have not yet found widespread use. The semantic web technology Resource Description Framework (RDF) and related methods are proving sufficiently versatile to change that situation. Results: The work presented here focuses on linking RDF approaches to existing molecular chemometrics fields, including cheminformatics, QSAR modeling and proteochemometrics. Applications are presented that link RDF technologies to methods from statistics and cheminformatics, including data aggregation, visualization, chemical identification, and property prediction. They demonstrate how this can be done using various existing RDF standards and cheminformatics libraries. For example, we show how IC50 and Ki values are modeled for a number of biological targets using data from the ChEMBL database. Conclusions: We have shown that existing RDF standards can suitably be integrated into existing molecular chemometrics methods. Platforms that unite these technologies, like Bioclipse, make this even simpler and more transparent. Being able to create and share workflows that integrate data aggregation and analysis (visual and statistical) is beneficial to interoperability and reproducibility. The current work shows that RDF approaches are sufficiently powerful to support molecular chemometrics workflows.
- Published
- 2011
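To make the RDF-based aggregation described in record 5 concrete, here is a minimal Python sketch that builds a small RDF graph with the rdflib library and runs a SPARQL query over it. The ex: vocabulary, compound names, and IC50 values are illustrative assumptions, not data from the paper (which works within Bioclipse).

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import XSD

EX = Namespace("http://example.org/chem#")
g = Graph()

# Hypothetical bioactivity triples, standing in for data exported from ChEMBL.
for name, ic50 in [("aspirin", 1200.0), ("cmpd42", 350.0)]:
    act = URIRef(f"http://example.org/act/{name}")
    g.add((act, EX.compound, Literal(name)))
    g.add((act, EX.ic50, Literal(ic50, datatype=XSD.double)))

# SPARQL query selecting the potent compounds; the vocabulary is illustrative.
query = """
PREFIX ex: <http://example.org/chem#>
SELECT ?compound ?ic50
WHERE {
    ?activity ex:compound ?compound ;
              ex:ic50 ?ic50 .
    FILTER (?ic50 < 1000)
}
"""
for compound, ic50 in g.query(query):
    print(compound, ic50)  # -> cmpd42 350.0
```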
6. MOTOR UNIT DISTRIBUTION FROM THE INFERIOR GLUTEAL NERVE IN THE GLUTEUS MAXIMUS MUSCLE.
- Author
- Gilbert, K. C., Lampa, S. J., Norton, A. S., and Laskowski, M. B.
- Published
- 2005
7. Towards reproducible computational drug discovery.
- Author
- Schaduangrat N, Lampa S, Simeon S, Gleeson MP, Spjuth O, and Nantasenamat C
- Abstract
The reproducibility of experiments has been a long-standing impediment to further scientific progress. Computational methods have been instrumental in drug discovery efforts owing to their multifaceted use for data collection, pre-processing, analysis, and inference. This article provides in-depth coverage of the reproducibility of computational drug discovery. This review explores the following topics: (1) the current state of the art of reproducible research; (2) research documentation (e.g., electronic laboratory notebooks, Jupyter notebooks); (3) the science of reproducible research (i.e., comparison and contrast with related concepts such as replicability, reusability, and reliability); (4) model development in computational drug discovery; (5) computational issues in model development and deployment; (6) use-case scenarios for streamlining the computational drug discovery protocol. In computational disciplines, it has become common practice to share the data and programming code used for numerical calculations, not only to facilitate reproducibility but also to foster collaboration (i.e., to drive a project further by introducing new ideas, growing the data, augmenting the code, etc.). It is therefore inevitable that the field of computational drug design will adopt an open approach towards the collection, curation, and sharing of data and code.
- Published
- 2020
8. Software engineering for scientific big data analysis.
- Author
- Grüning BA, Lampa S, Vaudel M, and Blankenberg D
- Subjects
- Biomedical Research methods, Big Data, Practice Guidelines as Topic, Software standards
- Abstract
The increasing complexity of data and analysis methods has created an environment where scientists, who may not have formal training, find themselves playing the impromptu role of software engineer. While several resources are available for introducing scientists to the basics of programming, researchers have been left with little guidance on the approaches needed to advance to the next level: the development of robust, large-scale data analysis tools that are amenable to integration into workflow management systems, tools, and frameworks. Integration into such workflow systems places additional requirements on computational tools, such as adherence to standard conventions for robustness, data input, output, logging, and flow control. Here we provide a set of 10 guidelines to steer the creation of command-line computational tools that are usable, reliable, extensible, and in line with the standards of modern coding practice.
- Published
- 2019
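The paper's 10 guidelines are best read in full; as a rough sketch of the conventions the abstract alludes to (clean argument parsing, logging to stderr, data on stdout, explicit exit codes), here is a minimal Python command-line tool. The tool itself and its flags are hypothetical examples, not taken from the paper.

```python
#!/usr/bin/env python3
"""Minimal sketch of a workflow-friendly command-line tool."""
import argparse
import logging
import sys

def main() -> int:
    parser = argparse.ArgumentParser(description="Count lines in a file or stdin.")
    parser.add_argument("--input", default="-", help="input file, or '-' for stdin")
    parser.add_argument("--verbose", action="store_true", help="enable debug logging")
    args = parser.parse_args()

    # Log to stderr so stdout stays clean for data (pipelines rely on this).
    logging.basicConfig(
        stream=sys.stderr,
        level=logging.DEBUG if args.verbose else logging.INFO,
    )

    try:
        handle = sys.stdin if args.input == "-" else open(args.input)
        n = sum(1 for _ in handle)
    except OSError as exc:
        logging.error("failed to read input: %s", exc)
        return 1  # non-zero exit codes let workflow engines detect failure

    print(n)  # result on stdout, so the tool composes in a pipe
    return 0

if __name__ == "__main__":
    sys.exit(main())
```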
9. SciPipe: A workflow library for agile development of complex and dynamic bioinformatics pipelines.
- Author
- Lampa S, Dahlö M, Alvarsson J, and Spjuth O
- Subjects
- Gene Library, Machine Learning, Programming Languages, Workflow, Computational Biology, Genomics, Software
- Abstract
Background: The complex nature of biological data has driven the development of specialized software tools. Scientific workflow management systems simplify the assembly of such tools into pipelines, assist with job automation, and aid the reproducibility of analyses. Many contemporary workflow tools are specialized or are not designed for highly complex workflows, such as those with nested loops, dynamic scheduling, and parametrization, which are common in, e.g., machine learning. Findings: SciPipe is a workflow programming library, implemented in the programming language Go, for managing complex and dynamic pipelines in bioinformatics, cheminformatics, and other fields. SciPipe helps in particular with workflow constructs common in machine learning, such as extensive branching, parameter sweeps, and dynamic scheduling and parametrization of downstream tasks. SciPipe builds on flow-based programming principles to support agile development of workflows based on a library of self-contained, reusable components. It supports running subsets of workflows for improved iterative development and provides a data-centric audit-logging feature that saves a full audit trace for every output file of a workflow, which can be converted to other formats such as HTML, TeX, and PDF on demand. The utility of SciPipe is demonstrated with a machine learning pipeline, a genomics pipeline, and a transcriptomics pipeline. Conclusions: SciPipe provides a solution for agile development of complex and dynamic pipelines, especially in machine learning, through a flexible application programming interface suitable for scientists used to programming or scripting.
- Published
- 2019
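SciPipe itself is a Go library, so the following Python toy is not its API; it only illustrates the flow-based programming idea the abstract names: self-contained components with in- and out-ports, wired together by channels so that the network itself defines the workflow.

```python
"""Conceptual sketch of flow-based programming (FBP). Not SciPipe's API."""
import queue
import threading

def numbers(out_q):
    # Source component: emits data packets on its out-port.
    for i in range(5):
        out_q.put(i)
    out_q.put(None)  # sentinel marks end of stream

def square(in_q, out_q):
    # Transform component: reads from its in-port, writes to its out-port.
    while (item := in_q.get()) is not None:
        out_q.put(item * item)
    out_q.put(None)

def printer(in_q):
    # Sink component.
    while (item := in_q.get()) is not None:
        print(item)

# Wire ports together with channels (queues); the network defines the workflow.
a, b = queue.Queue(), queue.Queue()
workers = [
    threading.Thread(target=numbers, args=(a,)),
    threading.Thread(target=square, args=(a, b)),
    threading.Thread(target=printer, args=(b,)),
]
for w in workers:
    w.start()
for w in workers:
    w.join()
```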
10. PhenoMeNal: processing and analysis of metabolomics data in the cloud.
- Author
- Peters K, Bradbury J, Bergmann S, Capuccini M, Cascante M, de Atauri P, Ebbels TMD, Foguet C, Glen R, Gonzalez-Beltran A, Günther UL, Handakas E, Hankemeier T, Haug K, Herman S, Holub P, Izzo M, Jacob D, Johnson D, Jourdan F, Kale N, Karaman I, Khalili B, Emami Khonsari P, Kultima K, Lampa S, Larsson A, Ludwig C, Moreno P, Neumann S, Novella JA, O'Donovan C, Pearce JTM, Peluso A, Piras ME, Pireddu L, Reed MAC, Rocca-Serra P, Roger P, Rosato A, Rueedi R, Ruttkies C, Sadawi N, Salek RM, Sansone SA, Selivanov V, Spjuth O, Schober D, Thévenot EA, Tomasoni M, van Rijswijk M, van Vliet M, Viant MR, Weber RJM, Zanetti G, and Steinbeck C
- Subjects
- Cloud Computing, Humans, Workflow, Metabolomics methods, Software
- Abstract
Background: Metabolomics is the comprehensive study of a multitude of small molecules to gain insight into an organism's metabolism. The research field is dynamic and expanding, with applications across biomedical, biotechnological, and many other applied biological domains. Its computationally intensive nature has driven requirements for open data formats, data repositories, and data analysis tools. However, the rapid progress has resulted in a mosaic of independent, and sometimes incompatible, analysis methods that are difficult to connect into a useful and complete data analysis solution. Findings: PhenoMeNal (Phenome and Metabolome aNalysis) is a complete solution for setting up Infrastructure-as-a-Service (IaaS) that brings workflow-oriented, interoperable metabolomics data analysis platforms into the cloud. PhenoMeNal seamlessly integrates a wide array of existing open-source tools that are tested and packaged as Docker containers through the project's continuous integration process and deployed on a Kubernetes orchestration framework. It also provides a number of standardized, automated, and published analysis workflows in the user interfaces Galaxy, Jupyter, Luigi, and Pachyderm. Conclusions: PhenoMeNal constitutes a keystone solution among the cloud e-infrastructures available for metabolomics. It can be set up through easy-to-use web interfaces and scaled to any custom public or private cloud environment. By harmonizing and automating software installation and configuration, and through ready-to-use scientific workflow user interfaces, PhenoMeNal provides scientists with workflow-driven, reproducible, and shareable metabolomics data analysis platforms that are interfaced through standard data formats, ship with representative datasets, are versioned, and have been tested for reproducibility and interoperability. The elastic implementation of PhenoMeNal further allows easy adaptation of the infrastructure to other application areas and 'omics research domains.
- Published
- 2019
11. Predicting Off-Target Binding Profiles With Confidence Using Conformal Prediction.
- Author
- Lampa S, Alvarsson J, Arvidsson Mc Shane S, Berg A, Ahlberg E, and Spjuth O
- Abstract
Ligand-based models can be used in drug discovery to obtain an early indication of potential off-target interactions that could be linked to adverse effects. Another application is to combine such models into a panel, allowing users to compare and search for compounds with similar profiles. Most contemporary methods and implementations, however, lack valid measures of confidence in their predictions and provide only point predictions. We here describe a methodology that uses conformal prediction for predicting off-target interactions, with models trained on data from 31 targets in the ExCAPE-DB dataset, selected for their utility in broad early hazard assessment. Chemicals were represented by the signature molecular descriptor, and support vector machines were used as the underlying machine learning method. By using conformal prediction, predictions come in the form of confidence p-values for each class. The full pre-processing and model training process is openly available as scientific workflows on GitHub, rendering it fully reproducible. We illustrate the usefulness of the developed methodology on a set of compounds extracted from DrugBank. The resulting models are published online and are available via a graphical web interface and an OpenAPI interface for programmatic access.
- Published
- 2018
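As a hedged illustration of class-wise conformal p-values, the sketch below implements a minimal Mondrian inductive conformal predictor on synthetic data with scikit-learn's SVC. The nonconformity measure (signed distance to the decision boundary) and the data are assumptions for illustration; the paper uses signature descriptors and ExCAPE-DB targets.

```python
"""Minimal Mondrian inductive conformal prediction for binary classification."""
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

clf = SVC().fit(X_train, y_train)

def nonconformity(X, labels):
    # Less margin in favour of the assumed label = stranger example.
    d = clf.decision_function(X)
    return np.where(labels == 1, -d, d)

# Mondrian: keep a separate calibration score list per class.
cal_scores = {c: nonconformity(X_cal[y_cal == c], y_cal[y_cal == c]) for c in (0, 1)}

def p_values(x):
    # One conformal p-value per candidate class.
    out = {}
    for c in (0, 1):
        score = nonconformity(x.reshape(1, -1), np.array([c]))[0]
        out[c] = (np.sum(cal_scores[c] >= score) + 1) / (len(cal_scores[c]) + 1)
    return out

print(p_values(X_test[0]))  # e.g. {0: 0.03, 1: 0.71}
```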
12. A confidence predictor for logD using conformal regression and a support-vector machine.
- Author
- Lapins M, Arvidsson S, Lampa S, Berg A, Schaal W, Alvarsson J, and Spjuth O
- Abstract
Lipophilicity is a major determinant of ADMET properties and of the overall suitability of drug candidates. We have developed large-scale models to predict the water-octanol distribution coefficient (logD) for chemical compounds, aiding drug discovery projects. Using ACD/logD data for 1.6 million compounds from the ChEMBL database, models are created and evaluated with a support-vector machine with a linear kernel using conformal prediction methodology, outputting prediction intervals at a specified confidence level. The resulting model shows a predictive ability of [Formula: see text], and the best-performing nonconformity measure has a median prediction interval of [Formula: see text] log units at 80% confidence and [Formula: see text] log units at 90% confidence. The model is available as an online service via an OpenAPI interface and a web page with a molecular editor, and we also publish predicted values at the 90% confidence level for 91 M PubChem structures in RDF format, for download and as a URI resolver service.
- Published
- 2018
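The interval-producing machinery in record 12 can be sketched as split (inductive) conformal regression: calibrate on held-out residuals and widen the point prediction by the residual quantile at the chosen confidence. LinearSVR and the synthetic data below are stand-ins for the paper's linear-kernel SVM on ChEMBL logD data.

```python
"""Split conformal regression sketch: point model + calibration residuals."""
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVR

X, y = make_regression(n_samples=2000, noise=10.0, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = LinearSVR(max_iter=5000).fit(X_train, y_train)

# Nonconformity scores: absolute residuals on the held-out calibration set.
residuals = np.abs(y_cal - model.predict(X_cal))

def predict_interval(X_new, confidence=0.8):
    # Residual quantile gives the interval half-width
    # (strictly, the conformal quantile uses (n+1); simplified here).
    q = np.quantile(residuals, confidence)
    y_hat = model.predict(X_new)
    return y_hat - q, y_hat + q

lo, hi = predict_interval(X_test, confidence=0.9)
coverage = np.mean((y_test >= lo) & (y_test <= hi))
print(f"empirical coverage at 90% confidence: {coverage:.2f}")
```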
13. SweGen: a whole-genome data resource of genetic variability in a cross-section of the Swedish population.
- Author
- Ameur A, Dahlberg J, Olason P, Vezzi F, Karlsson R, Martin M, Viklund J, Kähäri AK, Lundin P, Che H, Thutkawkorapin J, Eisfeldt J, Lampa S, Dahlberg M, Hagberg J, Jareborg N, Liljedahl U, Jonasson I, Johansson Å, Feuk L, Lundeberg J, Syvänen AC, Lundin S, Nilsson D, Nystedt B, Magnusson PK, and Gyllensten U
- Subjects
- Datasets as Topic, Genome-Wide Association Study, Humans, Sweden, Twins genetics, Genome, Human, Polymorphism, Single Nucleotide, Registries
- Abstract
Here we describe the SweGen data set, a comprehensive map of genetic variation in the Swedish population. These data represent a basic resource for clinical genetics laboratories as well as for sequencing-based association studies, by providing information on genetic variant frequencies in a cohort that is well matched to national patient cohorts. To select samples for this study, we first examined the genetic structure of the Swedish population using high-density SNP-array data from a nation-wide cohort of over 10 000 Swedish-born individuals included in the Swedish Twin Registry. A total of 1000 individuals, reflecting a cross-section of the population and capturing the main genetic structure, were selected for whole-genome sequencing. Analysis pipelines were developed for automated alignment, variant calling, and quality control of the sequencing data. This resulted in a genome-wide collection of aggregated variant frequencies in the Swedish population, which we have made available to the scientific community through the website https://swefreq.nbis.se. A total of 29.2 million single-nucleotide variants and 3.8 million indels were detected in the 1000 samples, with 9.9 million of these variants not present in current databases. Each sample contributed an average of 7199 individual-specific variants. In addition, an average of 8645 larger structural variants (SVs) were detected per individual, and we demonstrate that the population frequencies of these SVs can be used for efficient filtering analyses. Finally, our results show that the genetic diversity within Sweden is substantial compared with the diversity among continental European populations, underscoring the relevance of establishing a local reference data set.
- Published
- 2017
14. RDFIO: extending Semantic MediaWiki for interoperable biomedical data management.
- Author
- Lampa S, Willighagen E, Kohonen P, King A, Vrandečić D, Grafström R, and Spjuth O
- Subjects
- Humans, Internet, Intersectoral Collaboration, Metabolomics, Rare Diseases genetics, User-Computer Interface, Information Storage and Retrieval methods, Software
- Abstract
Background: The biological sciences are characterised not only by an increasing amount of data but also by its extreme complexity. This stresses the need for efficient ways of integrating these data into a coherent description of biological systems. In many cases, biological data need organization before integration. This is often a collaborative effort, and it is thus important that tools for data integration support a collaborative way of working. Wiki systems with support for structured semantic data authoring, such as Semantic MediaWiki, provide a powerful solution for collaborative editing of data combined with machine-readability, so that data can be handled in an automated fashion in any downstream analyses. Semantic MediaWiki lacks a built-in data import function, however, which hinders efficient round-tripping of data between interoperable Semantic Web formats such as RDF and the internal wiki format. Results: To address this deficiency, the RDFIO suite of tools is presented, which supports importing RDF data into Semantic MediaWiki together with the metadata needed to export it again in the same RDF format, or ontology. Additionally, the new functionality enables mash-ups of automated data imports combined with manually created data presentations. The application of the suite of tools is demonstrated by importing drug-discovery-related data about rare diseases from Orphanet and acid dissociation constants from Wikidata. The RDFIO suite of tools is freely available for download via pharmb.io/project/rdfio. Conclusions: Through a set of biomedical demonstrators, we show how the new functionality enables a number of usage scenarios in which the interoperability of SMW and the wider Semantic Web is leveraged for biomedical data sets, creating an easy-to-use and flexible platform for exploring and working with biomedical data.
- Published
- 2017
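RDFIO is a Semantic MediaWiki extension, so the Python sketch below is not its implementation; it only illustrates the round-tripping idea: read RDF triples with rdflib and emit SMW-style [[property::value]] annotations, one wiki page per RDF subject. The inline data is a made-up stand-in for an Orphanet or Wikidata export.

```python
from collections import defaultdict
from rdflib import Graph

# Tiny inline Turtle document standing in for an Orphanet/Wikidata export.
TURTLE = """
@prefix ex: <http://example.org/> .
ex:Disease1 ex:label "Hypothetical syndrome" ;
            ex:prevalence "1-9 / 100 000" .
"""
g = Graph()
g.parse(data=TURTLE, format="turtle")

# Group triples by subject: one wiki page per RDF subject.
pages = defaultdict(list)
for subj, pred, obj in g:
    pages[subj].append((pred, obj))

for subj, facts in pages.items():
    print(f"== Page: {subj} ==")
    for pred, obj in facts:
        # Semantic MediaWiki encodes a triple as an inline annotation.
        print(f"[[{pred}::{obj}]]")
```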
15. Towards agile large-scale predictive modelling in drug discovery with flow-based programming design principles.
- Author
- Lampa S, Alvarsson J, and Spjuth O
- Abstract
Predictive modelling in drug discovery is challenging to automate, as it often contains multiple analysis steps and may involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data, or when using computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can help with many of these challenges, but the currently available systems lack the functionality needed to enable agile and flexible predictive modelling. We here present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system, which we name SciLuigi. We also discuss experiences from using the approach when modelling a large set of biochemical interactions on a shared computer cluster.
- Published
- 2016
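SciLuigi extends the Luigi workflow system; the sketch below shows plain Luigi's dependency-declaration style, which is the baseline the paper builds its flow-based port-connection API on. Task names and file paths are illustrative.

```python
"""Minimal plain-Luigi pipeline: one task depends on another's output file."""
import luigi

class RawData(luigi.Task):
    def output(self):
        return luigi.LocalTarget("raw.txt")

    def run(self):
        with self.output().open("w") as f:
            f.write("1\n2\n3\n")

class SumData(luigi.Task):
    def requires(self):
        # Declaring the upstream task gives Luigi the dependency graph.
        return RawData()

    def output(self):
        return luigi.LocalTarget("sum.txt")

    def run(self):
        with self.input().open() as f:
            total = sum(int(line) for line in f)
        with self.output().open("w") as f:
            f.write(f"{total}\n")

if __name__ == "__main__":
    luigi.build([SumData()], local_scheduler=True)
```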
16. Large-scale ligand-based predictive modelling using support vector machines.
- Author
- Alvarsson J, Lampa S, Schaal W, Andersson C, Wikberg JE, and Spjuth O
- Abstract
The increasing size of datasets in drug discovery makes it challenging to build robust and accurate predictive models within a reasonable amount of time. In order to investigate the effect of dataset size on predictive performance and modelling time, ligand-based regression models were trained on open datasets of varying sizes, up to 1.2 million chemical structures. For modelling, two implementations of support vector machines (SVM) were used. Chemical structures were described by the signatures molecular descriptor. Results showed that for the larger datasets, the LIBLINEAR SVM implementation performed on par with the well-established LIBSVM with a radial basis function kernel, but with dramatically less time for model building, even on modest computer resources. Using a non-linear kernel proved infeasible for large data sizes, even with substantial computational resources on a computer cluster. To deploy the resulting models, we extended the Bioclipse decision support framework to support models from LIBLINEAR and made our logD and solubility models available from within Bioclipse.
- Published
- 2016
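The linear-versus-RBF trade-off reported in record 16 can be reproduced in miniature with scikit-learn, whose LinearSVR wraps LIBLINEAR and whose SVR wraps LIBSVM. The synthetic data below stands in for the signature descriptors used in the paper, and the training-set score is only a rough indicator; a real comparison would cross-validate.

```python
"""Rough timing comparison: linear SVM (LIBLINEAR) vs RBF-kernel SVM (LIBSVM)."""
import time
import numpy as np
from sklearn.datasets import make_regression
from sklearn.svm import SVR, LinearSVR

X, y = make_regression(n_samples=5000, n_features=100, noise=5.0, random_state=0)

for name, model in [("LIBLINEAR (linear)", LinearSVR(max_iter=5000)),
                    ("LIBSVM (RBF kernel)", SVR(kernel="rbf"))]:
    start = time.perf_counter()
    model.fit(X, y)
    elapsed = time.perf_counter() - start
    # Training-set fit only; illustrates the timing gap, not generalization.
    print(f"{name}: {elapsed:.2f}s, R^2 = {model.score(X, y):.3f}")
```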
17. Experiences with workflows for automating data-intensive bioinformatics.
- Author
- Spjuth O, Bongcam-Rudloff E, Hernández GC, Forer L, Giovacchini M, Guimera RV, Kallio A, Korpelainen E, Kańduła MM, Krachunov M, Kreil DP, Kulev O, Łabaj PP, Lampa S, Pireddu L, Schönherr S, Siretskiy A, and Vassilev D
- Subjects
- High-Throughput Nucleotide Sequencing, Reproducibility of Results, Computational Biology methods, Electronic Data Processing methods, Workflow
- Abstract
High-throughput technologies, such as next-generation sequencing, have turned molecular biology into a data-intensive discipline, requiring bioinformaticians to use high-performance computing resources and to carry out data management and analysis tasks at large scale. Workflow systems can be useful for simplifying the construction of analysis pipelines that automate tasks, support reproducibility, and provide measures for fault tolerance. However, workflow systems can incur significant development and administration overhead, so bioinformatics pipelines are often still built without them. We present experiences with workflows and workflow systems within the bioinformatics community participating in a series of hackathons and workshops of the EU COST action SeqAhead. The participating organizations work on similar problems but have addressed them with different strategies and solutions. This fragmentation of effort is inefficient and leads to redundant and incompatible solutions. Based on our experiences, we define a set of recommendations for future systems to enable efficient yet simple bioinformatics workflow construction and execution.
- Published
- 2015
18. Non-Invasive Genetic Mark-Recapture as a Means to Study Population Sizes and Marking Behaviour of the Elusive Eurasian Otter (Lutra lutra).
- Author
- Lampa S, Mihoub JB, Gruber B, Klenke R, and Henle K
- Subjects
- Animals, Conservation of Natural Resources, Feces chemistry, Female, Genotype, Germany, Male, Mink classification, Mink genetics, Otters classification, Population Density, Sex Ratio, Animal Distribution physiology, DNA genetics, Eliminative Behavior, Animal physiology, Otters genetics
- Abstract
Quantifying population status is a key objective in many ecological studies but is often difficult to achieve for cryptic or elusive species. Here, non-invasive genetic capture-mark-recapture (CMR) methods have become a very important tool for estimating population parameters such as population size and sex ratio. The Eurasian otter (Lutra lutra) is such an elusive species of management concern and is increasingly studied using faecal-based genetic sampling. For unbiased sex ratios or population size estimates, the marking behaviour of otters has to be taken into account. Using 2132 otter faeces from a wild otter population in Upper Lusatia (Saxony, Germany) collected over six years (2006-2012), we studied marking behaviour and applied closed-population CMR models accounting for genetic misidentification to estimate population sizes and sex ratios. We detected a sex difference in marking behaviour, with jelly samples more often defecated by males and placed actively exposed on frequently used marking sites. Since jelly samples are of higher DNA quality, it is important not to concentrate solely on this kind of sample or marking site, and to invest in sufficiently many repetitions of non-jelly samples to ensure an unbiased sex ratio. Furthermore, otters seemed to increase marking intensity in response to the handling of their spraints, so accounting for this behavioural response could be important. We provide the first precise population size estimate with confidence intervals for Upper Lusatia (for 2012: N = 20 ± 2.1, 95% CI = 16-25) and show that spraint densities are not a reliable index of abundance. We further demonstrate that when mink live in sympatry with otters at comparably high densities, a non-negligible number of supposed otter samples are actually of mink origin. This could severely bias the results of otter monitoring if samples are not genetically identified.
- Published
- 2015
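For readers unfamiliar with capture-mark-recapture, the simplest closed-population estimator is worth seeing worked out. The sketch below uses the Lincoln-Petersen estimator with Chapman's bias correction and made-up counts; the paper itself fits richer CMR models that also handle genotyping misidentification.

```python
"""Worked sketch of the simplest closed-population CMR estimator."""

def chapman_estimate(n1: int, n2: int, m2: int) -> float:
    """Population size from two sampling occasions (Chapman correction).

    n1: individuals identified on occasion 1 (marked)
    n2: individuals identified on occasion 2
    m2: individuals seen on both occasions (recaptures)
    """
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Hypothetical genotyped-faeces survey: 14 otters found in the first sweep,
# 12 in the second, 8 of them matching first-sweep genotypes.
print(f"N ≈ {chapman_estimate(14, 12, 8):.1f}")  # ≈ 20.7
```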
19. Lessons learned from implementing a national infrastructure in Sweden for storage and analysis of next-generation sequencing data.
- Author
- Lampa S, Dahlö M, Olason PI, Hagberg J, and Spjuth O
- Abstract
Analyzing and storing data and results from next-generation sequencing (NGS) experiments is a challenging task, hampered by ever-increasing data volumes and frequent updates of analysis methods and tools. Storage and computation have grown beyond the capacity of personal computers, and there is a need for suitable e-infrastructures for processing. Here we describe UPPNEX, an implementation of such an infrastructure, tailored to the needs of data storage and analysis of NGS data in Sweden and serving various labs and multiple instruments from the major sequencing technology platforms. UPPNEX comprises resources for high-performance computing; large-scale, high-availability storage; an extensive bioinformatics software suite; up-to-date reference genomes and annotations; a support function with system and application experts; and a web portal and support ticket system. UPPNEX applications are numerous and diverse, and include whole-genome, de novo, and exome sequencing, targeted resequencing, SNP discovery, RNA-seq, and methylation analysis. Over 300 projects utilize UPPNEX, including large undertakings such as the sequencing of the flycatcher and the Norway spruce. We describe the strategic decisions made when investing in hardware, setting up maintenance and support, and allocating resources, and we illustrate major challenges such as managing data growth. We conclude by summarizing our experiences and observations with UPPNEX to date, providing insights into the successful and less successful decisions made.
- Published
- 2013
20. Ephrin-A5 overexpression degrades topographic specificity in the mouse gluteus maximus muscle.
- Author
- Lampa SJ, Potluri S, Norton AS, Fusco W, and Laskowski MB
- Subjects
- Action Potentials physiology, Animals, Electrophysiology, Ephrin-A5 genetics, Ephrin-A5 physiology, Genotype, Mice, Mice, Inbred C57BL, Presynaptic Terminals physiology, Reverse Transcriptase Polymerase Chain Reaction, Ephrin-A5 biosynthesis, Muscle, Skeletal growth & development, Muscle, Skeletal innervation
- Abstract
Motor neurons project onto specific muscles with a distinct positional bias. We have previously shown using electrophysiological techniques that overexpression of ephrin-A5 degrades this topographic map. Here, we show that positional differences in axon terminal areas, an entirely different parameter of neuromuscular topography, are also eliminated with ephrin-A5 overexpression. Therefore, we now have both morphological and electrophysiological approaches to explore the mechanisms of neuromuscular topography.
- Published
- 2004
21. A morphological technique for exploring neuromuscular topography expressed in the mouse gluteus maximus muscle.
- Author
- Lampa SJ, Potluri S, Norton AS, and Laskowski MB
- Subjects
- Animals, Animals, Newborn, Axons metabolism, Bungarotoxins pharmacokinetics, Fluorescent Dyes pharmacokinetics, In Vitro Techniques, Mice, Mice, Inbred C57BL, Muscle Fibers, Skeletal classification, Muscle, Skeletal metabolism, Myosin Heavy Chains metabolism, Neuromuscular Junction cytology, Presynaptic Terminals metabolism, Pyridinium Compounds pharmacokinetics, Quaternary Ammonium Compounds pharmacokinetics, Receptors, Nicotinic metabolism, Buttocks, Muscle Fibers, Skeletal metabolism, Muscle, Skeletal cytology, Neuromuscular Junction metabolism
- Abstract
Motor neuron pools innervate muscle fibers, forming an ordered topographic map. In the gluteus maximus (GM) muscle, as well as in additional muscles, we and others have demonstrated electrophysiologically that there is a rostrocaudal distribution of axon terminals on the muscle surface. The role of muscle fiber type in determining this topography is unknown. A morphological approach was designed to investigate this question directly. We combined three different methods in the same muscle preparation: (1) uptake of activity-dependent dyes into selected axon terminals, to define the spinal segmental origin of a peripheral nerve terminal; (2) fluorescent labeling of nicotinic acetylcholine receptors, to determine motor endplate size; and (3) immunocytochemical staining of skeletal muscle, to determine fiber subtype. We applied these methods to the mouse GM muscle to determine the relationship between muscle fiber type and the topographic map of the inferior gluteal nerve (IGN). Results from this unique combination of techniques in the same preparation showed that axon terminals originating from more rostral spinal nerve segments are larger on rostral muscle fibers expressing the myosin heavy chain (MyHC) IIB epitope than on caudal type IIB fibers. Because type IIB fibers dominate the GM, this suggests that for these rostral axons, terminal size is independent of fiber type. How axon terminal size is related to the topographic map is the next question to be answered.
- Published
- 2004