89 results for "Paul J. Gemperline"
Search Results
2. On-line optimization of a batch reaction - development and experimental demonstration.
- Author
-
S. Samir Alam, R. Russell Rhinehart, Karen A. High, and Paul J. Gemperline
- Published
- 2004
- Full Text
- View/download PDF
3. Soft known-value constraints for improved quantitation in multivariate curve resolution
- Author
-
Hamid Abdollahi, Mahsa Akbari Lakeh, and Paul J. Gemperline
- Subjects
Multivariate curve resolution, Chemistry, Value (computer science), Biochemistry, Analytical Chemistry, Batch reaction, Chemometrics, Incomplete knowledge, Noise, Identification (information), Reference values, Environmental Chemistry, Algorithm, Spectroscopy
- Abstract
Multivariate curve resolution (MCR) is a powerful tool in chemometrics that has been involved in the solution of many analytical problems. The introduction of partial or incomplete knowledge of reference values as known-value constraints in an MCR model can considerably reduce the extent of rotational ambiguity for all components. Known-value constraints can provide enough information for MCR methods to perform both the identification and quantitative analysis of first-order data sets. In practice, in addition to noise and non-ideal behavior, limitations in the reference methods or procedures cause deviation in measured known values. It is shown that deviation in the measured known values, when used as known-value constraints, may result in considerable quantification errors in MCR results and can challenge identification analysis. This contribution investigates the importance and effect of soft known-value constraints on the accuracy of MCR solutions. The influence of noise levels, the amount of deviation of known values from true values, and the interaction of these two factors were evaluated with simulated data. An illustration using soft known-value constraints is given for a batch reaction experiment.
- Published
- 2019
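The soft known-value constraint described in entry 3 above can be pictured as a weighted pull of selected concentration values toward their (possibly imperfect) reference values inside an ordinary MCR-ALS loop. The sketch below is an illustration of that idea only, assuming a bilinear model D ≈ C Sᵀ, non-negativity via SciPy's nnls, and a hypothetical blending weight; it is not the authors' published algorithm.

```python
import numpy as np
from scipy.optimize import nnls

def mcr_als_soft_known(D, S0, known_mask, known_vals, weight=0.5, n_iter=50):
    """MCR-ALS for D ~= C @ S.T with a soft known-value constraint on C.

    known_mask : boolean array, same shape as C, marking constrained entries
    known_vals : reference values for those entries (may contain error)
    weight     : 0 = ignore references, 1 = hard (fixed) known values
    """
    S = S0.copy()
    for _ in range(n_iter):
        # non-negative update of concentration profiles, one spectrum at a time
        C = np.array([nnls(S, d)[0] for d in D])
        # soft constraint: blend the estimate with the reference values
        C[known_mask] = (1.0 - weight) * C[known_mask] + weight * known_vals
        # non-negative update of pure spectra, one wavelength at a time
        S = np.array([nnls(C, d)[0] for d in D.T])
    return C, S
```

Setting weight = 1 reproduces a hard known-value constraint; intermediate weights let noisy reference values inform, rather than dictate, the solution.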
4. Soft-trilinear constraints for improved quantitation in multivariate curve resolution
- Author
-
Hamid Abdollahi, Paul J. Gemperline, and Elnaz Tavakkoli
- Subjects
Analyte, Sample (material), Biochemistry, Data matrix (multivariate statistics), Analytical Chemistry, Data set, Set (abstract data type), Constraint (information theory), Position (vector), Electrochemistry, Range (statistics), Environmental Chemistry, Algorithm, Spectroscopy, Mathematics
- Abstract
Nowadays, hyphenated chemical analysis methods like GC/MS, LC/MS, or HPLC with UV/Vis diode array detection are widely used. These methods produce a data matrix of mixtures measured during the analytical process. When a set of samples is to be analyzed with one data matrix per sample, the data is often presumed to have “trilinear” structure if the profile for each compound does not change shape or position from one sample to the other. By applying this information as a trilinearity constraint in Self Modeling Curve Resolution (SMCR) methods, overlapping peaks related to the pure compounds of interest can be resolved in a unique way. In practice, many systems have non-trilinear behavior due to deviation from ideal response, for example, a sample matrix effect or changes in instrumental response (e.g., shifts or changes in the shape of chromatographic peaks). In such cases, the trilinear model is not valid because not every analyte has the same peak shape or position in every sample. Consequently, the unique profiles obtained by strictly enforced trilinearity constraints will not necessarily be the true profiles, because the data set does not follow the assumed trilinear behavior. In this work, we introduce “soft-trilinearity constraints” to permit peak profiles of given components to have small deviations in their shape and position in different samples. The advantages and disadvantages of this approach are compared to other methods like PARAFAC2. We illustrate the influence of soft-trilinearity constraints on the accuracy of SMCR results for the case of a 3-component simulated system and an experimental data set. The results show that implementing soft-trilinearity constraints reduces the range of possible solutions considerably compared to the application of constraints such as just non-negativity. In addition, we show that the application of hard-trilinearity constraints can lead to solutions that are completely wrong or can rule out a feasible solution altogether.
- Published
- 2019
5. A career in Chemometrics: An interview with Paul Gemperline
- Author
-
Paul J. Gemperline and Paul Trevorrow
- Subjects
Chemometrics, Medical education, Applied Mathematics, Sociology, Analytical Chemistry
- Published
- 2019
- Full Text
- View/download PDF
6. Comprehensive kinetic model for the dissolution, reaction, and crystallization processes involved in the synthesis of aspirin
- Author
-
David Joiner, Julien Billeter, Mary Ellen P. McNally, Paul J. Gemperline, and Ron M. Hoffman
- Subjects
Supersaturation, Chemistry, Applied Mathematics, Analytical chemistry, Partial molar property, Analytical Chemistry, Volume (thermodynamics), Chemical engineering, Reagent, Slurry, Solubility, Crystallization, Dissolution
- Abstract
Kinetic modeling of batch reactions monitored by in-situ spectroscopy has been shown to be a helpful method for developing a complete understanding of reaction systems. Much work has been done to demonstrate the ability to model dissolution, reaction and crystallization processes separately; however, little has been done to combine all of these into one comprehensive kinetic model. This paper demonstrates the integration of models of dissolution, temperature-dependent solubility and unseeded crystallization driven by cooling into a comprehensive kinetic model describing the evolution of a slurry reaction monitored by in-situ ATR UV/Vis spectroscopy. The model estimates changes in the volume of the dissolved fraction of the slurry by use of the partial molar volume of the dissolved species that change during the course of reagent addition, dissolution, reaction and crystallization. The comprehensive model accurately estimates concentration profiles of dissolved and undissolved components of the slurry and, thereby, the degree of undersaturation and supersaturation necessary for estimation of the rates of dissolution and crystallization. Results were validated across two subsequent batches via offline HPLC measurements.
- Published
- 2014
- Full Text
- View/download PDF
7. Systematic Method for the Kinetic Modeling of Temporally Resolved Hyperspectral Microscope Images of Fluorescently Labeled Cells
- Author
-
Patrick J. Cutler, David M. Haaland, and Paul J. Gemperline
- Subjects
Yellow fluorescent protein, Fluorescence-lifetime imaging microscopy, Microscope, Fluorophore, Green Fluorescent Proteins, Analytical chemistry, Kidney, Kinetic energy, Models, Biological, Mitochondrial Proteins, Humans, Computer Simulation, Lung, Instrumentation, Spectroscopy, Luminescent Agents, Chemistry, HEK 293 cells, Hyperspectral imaging, Epithelial Cells, Image Enhancement, Photobleaching, I-kappa B Kinase, Kinetics, Microscopy, Fluorescence, Biological system
- Abstract
In this paper we report the application of a novel method for fitting kinetic models to temporally resolved hyperspectral images of fluorescently labeled cells to mathematically resolve pure-component spatial images, pure-component spectra, and pure-component reaction profiles. The method is demonstrated on one simulated image and two experimental cell images, including human embryonic kidney cells (HEK 293) and human A549 pulmonary type II epithelial cells. In both cell images, inhibitor kappa B kinase alpha (IKKα) and mitochondrial antiviral signaling protein (MAVS) were labeled with green and yellow fluorescent protein, respectively. Kinetic modeling was performed on the compressed images by using a separable least squares method. A combination of several first-order decays was needed to adequately model the photo-bleaching processes for each fluorophore observed in these images, consistent with the hypothesis that each fluorophore was found in several different environments within the cells. Numerous plausible mechanisms for kinetic modeling of the photo-bleaching processes in these images were tested, and a method for selecting the most parsimonious and statistically sufficient model was used to prepare spatial maps of each fluorophore.
- Published
- 2009
- Full Text
- View/download PDF
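Entry 7 above fits the photobleaching of each fluorophore with a combination of first-order decays using a separable least squares scheme. Below is a generic variable-projection sketch of such a fit: the decay rates are the only nonlinear parameters, and the amplitudes are re-solved linearly at every iteration. The two-component example data and starting guesses are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_multiexponential(t, y, k0):
    """Fit y(t) ~= sum_i a_i * exp(-k_i * t) by variable projection."""
    def residuals(log_k):
        X = np.exp(-np.exp(log_k)[None, :] * t[:, None])   # decay basis
        a, *_ = np.linalg.lstsq(X, y, rcond=None)           # linear amplitudes
        return X @ a - y
    sol = least_squares(residuals, np.log(np.asarray(k0, dtype=float)))
    k = np.exp(sol.x)
    X = np.exp(-k[None, :] * t[:, None])
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return k, a

# hypothetical example: one fluorophore in two environments bleaching at
# different rates
t = np.linspace(0.0, 100.0, 200)
y = 2.0 * np.exp(-0.05 * t) + 1.0 * np.exp(-0.5 * t)
k, a = fit_multiexponential(t, y, k0=[0.1, 1.0])
```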
8. Experimental monitoring and data analysis tools for protein folding
- Author
-
Paul J. Gemperline, Anna de Juan, and Patrick J. Cutler
- Subjects
Steady state, Design of experiments, Analytical chemistry, Folding (DSP implementation), Biochemistry, Least squares, Analytical Chemistry, Characterization (materials science), Myoglobin, Scientific method, Environmental Chemistry, Protein folding, Biological system, Spectroscopy
- Abstract
Protein folding is a complex process that can take place through different pathways depending on the inducing agent and on the monitored time scale. This diversity of possibilities requires a good design of experiments and powerful data analysis tools that allow operating with multitechnique measurements and/or with diverse experiments related to different aspects of the process of interest. Multivariate curve resolution-alternating least squares (MCR-ALS) has been the core methodology used to perform multitechnique and/or multiexperiment data analysis. This algorithm allows for obtaining the process concentration profiles and pure spectra of all species involved in the protein folding solely from the raw spectroscopic measurements obtained during the experimental monitoring. The process profiles provide insight into the mechanism of the process studied, whereas the shapes of the recovered pure spectra help in the characterization of the protein conformations involved. Relevant features of the MCR-ALS algorithm are the possibility to handle fused data, i.e., series of experiments monitored with different techniques and/or performed under different experimental conditions, and the flexibility to include a priori information linked to general properties of concentration profiles and spectra and to the kinetic model governing the folding process. All these characteristics help to obtain a comprehensive description of the protein folding mechanism. To our knowledge, this work includes for the first time the simultaneous analysis of steady-state and short-time scale kinetic experiments linked to a protein folding process. The potential of this methodology is shown by taking myoglobin as a model system for protein folding or, in general, for the study of any complex biological process that needs multitechnique and multiexperiment monitoring and analysis. Transformations in myoglobin due to changes in pH have been monitored by ultraviolet/visible (UV-vis) absorption and circular dichroism (CD) spectroscopy. Steady-state and stopped-flow experiments were carried out to account for the evolution of the process at different time scales. In this example, the multiexperiment analysis has allowed for the reliable detection and modeling of a kinetic transient species in the myoglobin folding process that is absent under steady-state working conditions.
- Published
- 2009
- Full Text
- View/download PDF
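The multiexperiment analysis in entry 8 above is typically done by stacking the individual experiments into one augmented matrix so that they all share a single set of pure spectra while keeping their own concentration profiles. A minimal row-wise augmented MCR-ALS sketch, with only non-negativity and none of the kinetic, equality, or multitechnique refinements discussed in the abstract, could look like this:

```python
import numpy as np
from scipy.optimize import nnls

def mcr_als_augmented(D_list, S0, n_iter=100):
    """MCR-ALS on several experiments that share one pure-spectra matrix S.

    D_list : list of data matrices (rows = spectra) measured with the same
             technique, e.g. steady-state and stopped-flow runs
    S0     : initial guess of the pure spectra (wavelengths x components)
    """
    D = np.vstack(D_list)                              # row-wise augmentation
    S = S0.copy()
    for _ in range(n_iter):
        C = np.array([nnls(S, d)[0] for d in D])       # non-negative profiles
        S = np.array([nnls(C, d)[0] for d in D.T])     # shared pure spectra
    # split the augmented concentration matrix back into one block per run
    splits = np.cumsum([d.shape[0] for d in D_list])[:-1]
    return np.split(C, splits, axis=0), S
```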
9. Multivariate kinetic hard-modelling of spectroscopic data: A comparison of the esterification of butanol by acetic anhydride on different scales and with different instruments
- Author
-
Yorck-Michael Neuhold, Paul J. Gemperline, Alison Nordon, Martin De Cecco, J. Katy Basford, Maryann Ehly, Graeme Puxty, Konrad Hungerbühler, David Littlejohn, and Marc Jecklin
- Subjects
Arrhenius equation, Stereochemistry, Applied Mathematics, General Chemical Engineering, Thermodynamics, Context (language use), General Chemistry, Rate equation, Activation energy, Industrial and Manufacturing Engineering, Acetic anhydride, Reaction rate constant, Reaction dynamics, Butyl acetate
- Abstract
For safety, economic efficiency and environmental efficiency, understanding and predicting the behaviour of a chemical reaction are of greatest importance in industry. Hard-modelling, in which the evolution of a chemical reaction is described by the rate law derived from the molecular mechanism, is a powerful method for this application. Any change in the experimental equipment or conditions does not have an impact on the model and does not require time-consuming recalibration or reanalysis. Thus, hard-modelling is suitable for extrapolations of the reaction dynamics to other concentrations and/or temperatures. In this context, the solvent-free esterification of 1-butanol (BuOH) by acetic anhydride (AA) using 1,1,3,3-tetramethylguanidine (TMG) as a base catalyst, to form butyl acetate (BuOA) and acetic acid (AH), has been analysed, in a comparative study, in three different reactors (located in different laboratories) of different volumetric scales (50 mL, 75 mL and 5 L) by mid-IR (50 mL) or near-IR (75 mL and 5 L) spectroscopy. Chemical suppliers and experimenters were also different, but the same experimental design (temperatures and initial concentrations) was applied to all measurements. Multivariate kinetic hard-modelling of the spectroscopic signals was applied to the kinetic data sets corresponding to each reactor. The importance of finding the simplest model that sufficiently describes the experimental data is discussed. Compared to previously published results, a non-contradictory but simplified and consistent kinetic model (first order in AA, BuOH and TMG) was found to be optimal to fit the data from all three reactors taken within a temperature range from 30 to 50 °C. The corresponding model parameters, rate constant kref,1 (at reference temperature Tref) and activation energy Ea,1, define a mean-centred Arrhenius equation and were in reasonable agreement considering the different experimental environments, spectroscopic methods, volumetric scales and relatively sparse factorial design of experiments employed, the latter being responsible for a limited definition of Ea,1. For validation of the mechanism, pure component mid-IR spectra have been interpreted and assigned in terms of their characteristic bands.
- Published
- 2008
- Full Text
- View/download PDF
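The hard-modelling in entry 9 above fits a rate law that is first order in AA, BuOH and TMG, with the rate constant expressed through a mean-centred Arrhenius equation, directly to the reaction spectra. The sketch below shows the general shape of such a fit (ODE simulation of concentrations, pure spectra eliminated as a linear subproblem). Species ordering, initial concentrations and starting guesses are illustrative assumptions, not values from the paper, and each run is treated as isothermal.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

R = 8.314  # J mol-1 K-1

def rate_constant(T, k_ref, Ea, T_ref=313.15):
    """Mean-centred Arrhenius parameterisation: k(T_ref) = k_ref."""
    return k_ref * np.exp(-Ea / R * (1.0 / T - 1.0 / T_ref))

def simulate(params, t, c0, T):
    k_ref, Ea = params
    def odes(_, c):
        aa, buoh, tmg, buoa, ah = c
        r = rate_constant(T, k_ref, Ea) * aa * buoh * tmg  # 1st order in each
        return [-r, -r, 0.0, r, r]                          # TMG acts as catalyst
    return solve_ivp(odes, (t[0], t[-1]), c0, t_eval=t).y.T  # times x species

def residuals(params, t, c0, T, D):
    C = simulate(params, t, c0, T)
    A = np.linalg.lstsq(C, D, rcond=None)[0]   # pure spectra, linear subproblem
    return (D - C @ A).ravel()

# fit = least_squares(residuals, x0=[1e-3, 5.0e4], args=(t, c0, T, D))
```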
10. Advances in the modelling and analysis of complex and industrial processes
- Author
-
Paul J. Gemperline, Yorck-Michael Neuhold, Marcel Maeder, and Graeme Puxty
- Subjects
Chemical process, Computer science, Process Chemistry and Technology, Curve fitting, Data mining, Industrial engineering, Spectroscopy, Software, Computer Science Applications, Analytical Chemistry, Data modeling
- Abstract
Data fitting is an important tool for the analysis of chemical processes. Limitations in the traditional fitting programs require strict control of external parameters such as temperature, pH, etc. Recent developments in fitting programs combine the analysis of non-ideal data with the global analysis of several different measurements. There are several advantages: the methods circumvent the necessity of external control of these parameters (thermostatting, buffering), they simplify experimental design, and they deliver additional information such as activation parameters and reaction enthalpies. Several practical examples and applications are given.
- Published
- 2006
- Full Text
- View/download PDF
11. Modeling of batch reactions with in situ spectroscopic measurements and calorimetry
- Author
-
Samir Alam, Marcel Maeder, Paul J. Gemperline, Shane Moore, Graeme Puxty, and R. Russell Rhinehart
- Subjects
Chemical kinetics, Standard enthalpy of reaction, Reaction mechanism, Reaction rate constant, Reaction calorimeter, Chemistry, Applied Mathematics, Reagent, Enthalpy, Analytical chemistry, Calorimetry, Analytical Chemistry
- Abstract
This paper describes kinetic fitting of UV-visible spectra and energy flow measured as a function of time from a reaction calorimeter, giving a single global model that achieves fusion of spectroscopic and calorimetry data. We demonstrate that a temperature controlled model of a reaction mechanism fitted to in situ spectroscopic measurements can be coupled to an energy balance model, since the amount of energy released by the reaction is proportional to the change in concentration of reactants and products with time. This allows simultaneous determination of the reaction mechanism parameters and the reaction enthalpy by fitting the coupled model to the spectroscopic and temperature data. The resulting model fully characterizes the kinetics and thermochemical properties of reactions that take place during a batch titration reaction of salicylic acid (SA) with acetic anhydride (AA) to form acetylsalicylic acid (ASA). The model comprises a system of ordinary differential equations fit directly to the spectroscopic and calorimetry data. It permits accurate estimates of model parameters producing estimates of concentration and reaction temperature profiles as a function of time, provided a sufficiently accurate description of the reaction mechanism is specified. No standards or pure component spectra were required, giving calibration-free estimates of concentration and temperature profiles. The parameters estimated in the model include kinetic rate constants and heat of reaction of the reactions observed during the experiment. In addition, heat capacities of reagents flowing into the reactor and thermal transfer coefficients were estimated. Copyright © 2006 John Wiley & Sons, Ltd.
- Published
- 2005
- Full Text
- View/download PDF
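The data fusion in entry 11 above hinges on coupling the kinetic ODEs to a reactor energy balance, so that one simulated state explains both the spectra and the calorimetric signal. A deliberately simplified toy version of that coupling is sketched below; the single reaction, the constant rate constant and all parameter values are made up for illustration (in the paper the rate would also depend on temperature, closing the loop).

```python
import numpy as np
from scipy.integrate import solve_ivp

def coupled_model(t, y, k, dH, UA, vCp, T_jacket):
    """Toy coupled kinetic / energy-balance model for A + B -> P.

    dH < 0 for an exothermic reaction; vCp is a volumetric heat capacity and
    UA a lumped, per-volume heat-transfer coefficient to the jacket."""
    a, b, p, T = y
    r = k * a * b                                   # rate per unit volume
    dT = (-dH * r + UA * (T_jacket - T)) / vCp      # energy balance
    return [-r, -r, r, dT]

y0 = [1.0, 1.2, 0.0, 298.15]
sol = solve_ivp(coupled_model, (0.0, 600.0), y0,
                t_eval=np.linspace(0.0, 600.0, 200),
                args=(5e-3, -60e3, 0.5, 4.0e3, 295.15))
```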
12. Non-negativity constraints for elimination of multiple solutions in fitting of multivariate kinetic models to spectroscopic data
- Author
-
Alexandra Stang, Paul J. Gemperline, and Joaquim Jaumot
- Subjects
Multivariate statistics, Applied Mathematics, Reliability (computer networking), Univariate, Ambiguity, Type (model theory), Analytical Chemistry, Chemometrics, Maxima and minima, Component (UML), Calculus, Applied mathematics, Mathematics
Multiple solutions arise when fitting complicated multi-step kinetic models to spectroscopic data. For consecutive reactions of the type A→B→C, this well-known ambiguity is due to the presence of multiple equivalent global minima in the response surfaces associated with the non-linear least squares fitting of rate constants to spectroscopic data. Several methods have been described to overcome the ambiguity when fitting consecutive reactions with univariate data but few attempts to solve the problem have been described for multivariate data. Additionally, for complicated multi-step reaction schemes there may be several local minima that make selection of the initial parameter guesses difficult. This paper reports a general approach to overcome these types of ambiguities in multivariate kinetic fitting methods using non-negativity constraints for the determination of the pure component spectra estimated during the model-fitting process. Under many conditions these constraints reduce or eliminate entirely the incorrect local minima. The effectiveness and reliability of the new constraints were tested with several simulated and real data sets. Copyright © 2005 John Wiley & Sons, Ltd.
- Published
- 2005
- Full Text
- View/download PDF
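In entry 12 above, the ambiguity-removing constraint is non-negativity of the pure component spectra estimated in the linear step of the kinetic fit. Assuming the separable structure used in the hard-modelling sketch after entry 9 (concentrations from an ODE model, spectra by least squares), that step could be written as:

```python
import numpy as np
from scipy.optimize import nnls

def project_spectra(C, D, nonneg=True):
    """Linear step of a separable kinetic fit.

    C : modelled concentration profiles (times x species)
    D : measured spectra (times x wavelengths)
    With nonneg=True the estimated pure spectra are forced to be >= 0, the
    constraint credited in the abstract with removing incorrect minima."""
    if nonneg:
        A = np.array([nnls(C, d)[0] for d in D.T]).T   # species x wavelengths
    else:
        A = np.linalg.lstsq(C, D, rcond=None)[0]
    return A, D - C @ A                                 # spectra and residual
```

Used inside the nonlinear residual in place of the plain least-squares call, the non-negative branch penalises equivalent rate-constant solutions whose implied spectra would have to go negative.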
13. Grouping three-mode data with mixture methods: the case of the diseased blue crabs
- Author
-
Kaye E. Basford, Pieter M. Kroonenberg, and Paul J. Gemperline
- Subjects
Fishery, Decapoda, Applied Mathematics, Maximum likelihood, Mixed distribution, Cluster analysis, Crustacean, Analytical Chemistry
The primary aim of this paper is to provide an introduction to a three-mode method of clustering and the useful role it can fulfil in clustering chemical three-mode data. The analysis of trace elements present in body tissues of diseased blue crabs caught along the coast of North Carolina serves as an example. The clustering method succeeded in separating the diseased crabs from healthy controls, lending support to the hypothesis that the trace elements were the origin of the blue crabs' disease. Copyright (C) 2005 John Wiley & Sons, Ltd.
- Published
- 2004
- Full Text
- View/download PDF
14. Calibration-Free Estimates of Batch Process Yields and Detection of Process Upsets Using in Situ Spectroscopic Measurements and Nonisothermal Kinetic Models: 4-(Dimethylamino)pyridine-Catalyzed Esterification of Butanol
- Author
-
Frank Tarczynski, Marcel Maeder, Dwight S. Walker, Mary Bosserman, Graeme Puxty, and Paul J. Gemperline
- Subjects
Time Factors, Batch reactor, Acetic Anhydrides, Analytical chemistry, Acetates, Catalysis, Isothermal process, Analytical Chemistry, 1-Butanol, Reaction rate constant, 4-Aminopyridine, Acetic Acid, Arrhenius equation, Spectroscopy, Near-Infrared, Esterification, Butanol, Kinetics, Acetic anhydride, Models, Chemical, Calibration, Batch processing
- Abstract
In this paper, we report the use of an NIR fiber-optic spectrometer with a high-speed diode array for calibration-free monitoring and modeling of the reaction of acetic anhydride with butanol using the catalyst 4-(dimethylamino)pyridine in a microscale batch reactor. Acquisition of spectra at 5 ms/scan gave information relevant for modeling these fast batch processes with a single multibatch kinetic model. Nonlinear fitting of a first-principles model directly to the reaction spectra gave calibration-free estimates of time-dependent concentration profiles and pure component spectra. The amount of catalyst was varied between different batches to permit accurate estimation of its effect in the multiway model. A wide range of different models with increasing complexity could be fit to each batch individually with low residuals and apparent low lack of fit. However, only one model properly estimated the concentration profiles when all five batches were fitted simultaneously in a multiway kinetic model. Inclusion of on-line temperature measurements and use of an Arrhenius model for the estimated rate constant gave significantly improved model fits compared to an isothermal kinetic model. Augmentation of prerun batches with data from an additional batch permitted model-based forecasts of reaction trajectories, reaction yield, reaction end points, and process upsets. One batch with added water to simulate a process upset was easily detected by the calibration free process model.
- Published
- 2004
- Full Text
- View/download PDF
15. Characterization of subcritical water oxidation with in situ monitoring and self-modeling curve resolution
- Author
-
Yu Yang, Paul J. Gemperline, and Zhihui Bian
- Subjects
Aqueous solution, Resolution (mass spectrometry), Chemical oxygen demand, Analytical chemistry, Biochemistry, Analytical Chemistry, Aniline, Reagent, Oxidizing agent, Environmental Chemistry, Gas chromatography, Hydrogen peroxide, Spectroscopy
- Abstract
In this paper, a subcritical water oxidation (SBWO) process was monitored using self-modeling curve resolution (SMCR) of in situ UV-Vis measurements to estimate time-dependent composition profiles of reactants, intermediates and products. A small laboratory scale reactor with UV-Vis fiber-optic probes and a flow cell was used to demonstrate the usefulness of SMCR for monitoring the destruction of the model compounds phenol, benzoic acid, and aniline in dilute aqueous solutions. Hydrogen peroxide was used as the oxidizing reagent at moderate temperature (150–250 °C) and pressure (60–90 atm) in a single phase. By use of in situ monitoring, reaction times were easily determined and conditions for efficient oxidations were easily diagnosed without the need for time-consuming off-line reference measurements. For selected runs, the destruction of the model compound was confirmed by gas chromatography and chemical oxygen demand (COD) measurements. Suspected intermediate oxidation products were easily detected by the use of UV-Vis spectrometry and self-modeling curve resolution, but could not be detected by gas chromatography.
- Published
- 2003
- Full Text
- View/download PDF
16. Characterizing batch reactions with in situ spectroscopic measurements, calorimetry and dynamic modeling
- Author
-
Eric Cash, Bei Ma, Paul J. Gemperline, Enric Comas, and Mary A. Bosserman
- Subjects
Chemometrics, Reaction rate, Ultraviolet visible spectroscopy, Spectrometer, Chemistry, Applied Mathematics, Attenuated total reflection, Batch reactor, Analytical chemistry, Batch processing, Calorimetry, Analytical Chemistry
A method for fully characterizing consecutive batch reactions using self-modeling curve resolution of in situ spectroscopic measurements and reaction energy profiles is reported. Simultaneous measurement of reaction temperature, reactor jacket temperature, reactor heater power and UV/visible spectra was made with a laboratory (50 ml capacity) batch reactor equipped with a UV/visible spectrometer and a fiber optic attenuated total reflectance (ATR) probe. Composition profiles and pure component spectra of reactants and products were estimated without the aid of reference measurements or standards from the in situ UV/visible spectra using non-negative alternating least squares (ALS), a type of self-modeling curve resolution (SMCR). Multiway SMCR analysis of consecutive batches permitted standardless comparisons of consecutive batches to determine which batch produced more or less product and which batch proceeded faster or slower. Dynamic modeling of batch energy profiles permitted mathematical resolution of the reaction dose heat and reaction heat. Kinetic fitting of the in situ reaction spectra was used to determine reaction rate constants. These three complementary approaches permitted simple and rapid characterization of the reaction rate, energy balance and mass balance.
- Published
- 2003
- Full Text
- View/download PDF
17. A priori estimates of the elution profiles of the pure components in overlapped liquid chromatography peaks using target factor analysis.
- Author
-
Paul J. Gemperline
- Published
- 1984
- Full Text
- View/download PDF
18. Determination of the ethylene oxide content of polyether polyols by low-field 1H nuclear magnetic resonance spectrometry
- Author
-
Paul J. Gemperline, David Littlejohn, Céline Meunier, Alison Nordon, and Robert H. Carr
- Subjects
Spectrometer, Ethylene oxide, Analytical chemistry, Carbon-13 NMR, Mass spectrometry, Biochemistry, Analytical Chemistry, Free induction decay, Polyol, Calibration, Proton NMR, Environmental Chemistry, Spectroscopy
- Abstract
Methods have been developed and compared for the analysis of a glycerol-based polyether polyol using a low-field, medium-resolution NMR spectrometer, with an operating frequency of 29 MHz for 1H. Signal areas in the time and frequency domains were used to calculate the ethylene oxide (EO) content of individual samples. The time domain signals (free induction decay) were analysed using a new version of the direct exponential curve resolution algorithm (FID-DECRA). Direct analysis of the 1H NMR FT spectra gave percentage EO concentrations of reasonable accuracy (average percentage error of 1.3%) and precision (average RSD of 1.8%) when compared with results derived from high-field 13C NMR spectrometry. The direct FID-DECRA method showed a negative bias (−0.8±0.12% w/w) in the estimation of percentage EO concentration, but the precision (average RSD of 0.9%) was twice as good as that of direct spectral analysis. When the 13C NMR analysis was used as a reference method for univariate calibration of the 1H NMR procedures, the best accuracy (average percentage error of 0.5%) and precision (average RSD of 0.6%) were obtained using FID-DECRA, for EO concentrations in the range 14.8–15.5% w/w. An additional advantage of FID-DECRA is that the analytical procedure could be automated, which is particularly desirable for process analysis.
- Published
- 2002
- Full Text
- View/download PDF
19. Nonlinear Optimization Algorithm for Multivariate Optical Element Design
- Author
-
Michael L. Myrick, Paul J. Gemperline, Frederick G. Haibach, and Olusola O. Soyemi
- Subjects
Multivariate statistics, Analytical chemistry, Binary number, Optical computing, Nonlinear programming, Chemometrics, Principal component regression, Layer (object-oriented design), Instrumentation, Algorithm, Spectroscopy, Matrix method, Mathematics
- Abstract
A new algorithm for the design of optical computing filters for chemical analysis, otherwise known as multivariate optical elements (MOEs), is described. The approach is based on the nonlinear optimization of the MOE layer thicknesses to minimize the standard error in sample prediction for the chemical species of interest using a modified version of the Gauss–Newton nonlinear optimization algorithm. The design algorithm can either be initialized with random layer thicknesses or with layer thicknesses derived from spectral matching of a multivariate principal component regression (PCR) vector for the constituent of interest. The algorithm has been successfully tested by using it to design various MOEs for the determination of Bismarck Brown dye in a binary mixture of Crystal Violet and Bismarck Brown.
- Published
- 2002
- Full Text
- View/download PDF
20. Bootstrap methods for assessing the performance of near-infrared pattern classification techniques
- Author
-
Brandye M. Smith and Paul J. Gemperline
- Subjects
Bootstrapping (electronics), Applied Mathematics, Test set, Bootstrap aggregating, Statistics, Principal component analysis, Pattern recognition (psychology), Spectral method, Bootstrap model, Analytical Chemistry, Mathematics, Parametric statistics
- Abstract
Two parametric bootstrap techniques were applied to near-infrared (NIR) pattern classification models for two classes of microcrystalline cellulose, Avicel® PH101 and PH102, which differ only in particle size. The development of pattern classification models for similar substances is difficult, since their characteristic clusters overlap. Bootstrapping was used to enlarge small test sets for a better approximation of the overlapping area of these nearly identical substances, consequently resulting in better estimates of misclassification rates. A bootstrap that resampled the residuals, referred to as the outside model space bootstrap in this paper, and a novel bootstrap that resampled principal component scores, referred to as the inside model space bootstrap, were studied. A comparison revealed that classification rates for both bootstrap techniques were similar to the original test set classification rates. The bootstrap method developed in this study, which resampled the principal component scores, was more effective for estimating misclassification volumes than the residual-resampling method. Copyright © 2002 John Wiley & Sons, Ltd.
- Published
- 2002
- Full Text
- View/download PDF
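Entry 20 above contrasts a residual-resampling ("outside model space") bootstrap with a score-resampling ("inside model space") bootstrap for enlarging small NIR test sets. The sketch below shows both flavours in a plain resample-with-replacement form; the parametric distributions actually used in the paper are not reproduced here, and the function shape is an assumption.

```python
import numpy as np

def pca_bootstrap(X, n_pc, n_boot=500, mode="scores", seed=None):
    """Enlarge a data set by resampling either PCA scores or PCA residuals."""
    rng = np.random.default_rng(seed)
    mu = X.mean(axis=0)
    Xc = X - mu
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    T = U[:, :n_pc] * s[:n_pc]          # scores    (inside model space)
    P = Vt[:n_pc]                       # loadings
    E = Xc - T @ P                      # residuals (outside model space)
    out = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(X), size=len(X))
        if mode == "scores":
            out.append(mu + T[idx] @ P + E)      # resample score vectors
        else:
            out.append(mu + T @ P + E[idx])      # resample residual vectors
    return np.concatenate(out, axis=0)
```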
21. Quantitative Analysis of Low-Field NMR Signals in the Time Domain
- Author
-
Colin A. McGill, Paul J. Gemperline, Alison Nordon, and David Littlejohn
- Subjects
Free induction decay, Matrix (mathematics), Signal processing, Rank (linear algebra), Chemistry, Partial least squares regression, Analytical chemistry, Time domain, Biological system, Signal, Hankel matrix, Analytical Chemistry
Two novel methods are described for direct quantitative analysis of NMR free induction decay (FID) signals. The methods use adaptations of the generalized rank annihilation method (GRAM) and the direct exponential curve resolution algorithm (DECRA). With FID-GRAM, the Hankel matrix of the sample signal is compared with that of a reference mixture to obtain quantitative data about the components. With FID-DECRA, a single-sample FID matrix is split into two matrices, allowing quantitative recovery of decay constants and the individual signals in the FID. Inaccurate results were obtained with FID-GRAM when there were differences between the frequency or transverse relaxation time of signals for the reference and test samples. This problem does not arise with FID-DECRA, because comparison with a reference signal is unnecessary. Application of FID-DECRA to 19F NMR data, which contained overlapping signals from three components, gave concentrations comparable to those derived from partial least squares (PLS) analysis of the Fourier transformed spectra. However, the main advantage of FID-DECRA was that accurate (5% error) and precise (2.3% RSD) results were obtained using only one calibration sample, whereas with PLS, a training set of 10 standard mixtures was used to give comparable accuracy and precision.
- Published
- 2001
- Full Text
- View/download PDF
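Entry 21 above recovers decay constants by splitting a Hankel matrix built from the FID into two row-shifted submatrices and solving an eigenproblem whose eigenvalues carry the per-component frequencies and T2 values. The code below is a generic matrix-pencil sketch of that shifted-submatrix idea for a complex (quadrature-detected) FID; it is not the published FID-GRAM or FID-DECRA implementation.

```python
import numpy as np
from scipy.linalg import hankel

def decay_poles(fid, n_components, dt):
    """Estimate per-component poles exp((i*2*pi*f - 1/T2) * dt) from an FID."""
    n = len(fid)
    L = n // 2
    H = hankel(fid[:L], fid[L - 1:])       # L x (n-L+1), H[i, j] = fid[i + j]
    A, B = H[:-1, :], H[1:, :]             # row-shifted pair of submatrices
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    U, s, Vh = U[:, :n_components], s[:n_components], Vh[:n_components]
    F = np.diag(1.0 / s) @ (U.conj().T @ B @ Vh.conj().T)
    poles = np.linalg.eigvals(F)
    T2 = -dt / np.log(np.abs(poles))       # transverse relaxation times
    freqs = np.angle(poles) / (2.0 * np.pi * dt)
    return poles, T2, freqs
```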
22. Wavelength selection and optimization of pattern recognition methods using the genetic algorithm
- Author
-
Brandye M. Smith and Paul J. Gemperline
- Subjects
Mahalanobis distance, Chemistry, Pattern recognition, Residual, Biochemistry, Analytical Chemistry, Chemometrics, Set (abstract data type), Principal component analysis, Pattern recognition (psychology), Genetic algorithm, Environmental Chemistry, Artificial intelligence, Spectroscopy, Selection (genetic algorithm)
- Abstract
A genetic algorithm (GA) method for wavelength selection and optimization of near-infrared (NIR) pattern recognition methods was developed to reduce misclassification errors of similar materials. Our goal was to completely automate the process of producing pattern recognition models; consequently, we felt it was important to include pre-processing options, the number of principal components and wavelength selection in the chromosomes. The SIMCA residual variance analysis and the Mahalanobis distance methods were used to classify samples of three different types of microcrystalline cellulose (Avicel PH101, PH102, and RC581) and sulfamethoxazole (SMX). Without GA optimization, approximately 15% of Avicel PH101 and PH102 test samples were misclassified since their NIR spectra are very similar. The GA was used to optimize pattern recognition performance on training sets using a figure of merit designed to maximize correct classification of acceptable samples and minimize classification of unacceptable samples or samples of dissimilar materials. After GA optimization of pattern recognition parameters, 100% correct classification of a validation set was achieved using both the residual variance analysis and the Mahalanobis distance methods.
- Published
- 2000
- Full Text
- View/download PDF
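Entry 22 above encodes pre-processing options, the number of principal components and the wavelength subset in GA chromosomes and evolves them against a classification figure of merit. A bare-bones binary GA over wavelength masks is sketched below; the fitness callback, selection scheme and mutation rate are placeholders, not the settings used in the paper.

```python
import numpy as np

def ga_wavelength_selection(fitness, n_wav, pop_size=30, n_gen=50,
                            p_mut=0.01, seed=None):
    """Evolve binary chromosomes (1 = keep wavelength) to maximise fitness(mask)."""
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, n_wav))
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # truncation
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_wav)                     # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_wav) < p_mut                 # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)]
```

Extra genes for pre-processing flags and the number of principal components can be appended to each chromosome and decoded inside the fitness function.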
23. Chemometric characterization of batch reactions
- Author
-
Paul J. Gemperline, Eric Cash, Dwight S. Walker, and Min Zhu
- Subjects
Resolution (mass spectrometry), Chemistry, Applied Mathematics, Reactive intermediate, Analytical chemistry, Chemical reactor, Mass spectrometry, Chemical reaction, Computer Science Applications, Characterization (materials science), Chemometrics, Control and Systems Engineering, Batch processing, Electrical and Electronic Engineering, Instrumentation
A method for characterizing consecutive batch reactions using chemometrics and in-situ spectroscopic measurements is reported. Composition profiles and pure component spectra of reactants, intermediates, and products are estimated using a new pairwise iterative target transformation factor analysis (ITTFA), a type of self-modeling curve resolution (SMCR), without the aid of referee measurements or standards. In one type of reaction, strong evidence for the formation of a reactive intermediate was detected and characterized by SMCR. Pairwise analysis of consecutive batches permitted standardless comparisons between the two batches to determine if the reaction proceeded faster or slower, and made more or less product.
- Published
- 1999
- Full Text
- View/download PDF
24. Multivariate background correction for hyphenated chromatography detectors
- Author
-
Paul J. Gemperline, Ben Archer, and JungHwan Cho
- Subjects
Chemometrics, Multivariate statistics, Accuracy and precision, Data processing, Signal-to-noise ratio, Chromatography, Chemistry, Applied Mathematics, Detector, Calibration, Background Correction Method, Analytical Chemistry
This paper reports a new multivariate method of background correction for hyphenated chromatography data such as diode array HPLC and diode array capillary electrophoresis (CE) measured under conditions of low signal-to-noise. The new method is able to correct linear and curved dynamically shifting baselines, a significant problem that limits the precision of CE assays. Its use is illustrated with nine simulated and 63 measured data sets. Serial dilutions were used to give measured data sets with a signal-to-noise ratio down to 10:1. The method's usefulness for routine operation was demonstrated by applying it to the above 72 data sets in an 'automatic' mode without any user interaction. In many cases these experiments showed improved precision and quantitative accuracy of peak areas after multivariate background correction. In most other cases the background correction method was neutral, i.e. it did not harm the peak area accuracy and precision where background offsets were absent or were too small to be reliably corrected. Compared with second-order calibration methods such as the generalized rank annihilation method (GRAM), which can also correct for unknown background signals, the new method is insensitive to variation in peak shape and retention time from run to run, and prior knowledge in the form of standards is not needed.
- Published
- 1999
- Full Text
- View/download PDF
25. Fiber-optic UV/visible composition monitoring for process control of batch reactions
- Author
-
Brian Baker, Ashley C. Quinn, Paul J. Gemperline, Min Zhu, and Dwight S. Walker
- Subjects
Optical fiber, Chemistry, Process Chemistry and Technology, Analytical chemistry, Computer Science Applications, Analytical Chemistry, Chemometrics, Ultraviolet visible spectroscopy, Control limits, Process control, Control chart, Median absolute deviation, Spectroscopy, Software, Smoothing
In this paper, a method for characterizing an industrially significant reaction using chemometrics, fiber-optic UV/visible spectroscopy and a single fiber transmission probe is reported. Aliquots of the reaction mixture were also taken at constant intervals for off-line HPLC analysis. HPLC peak areas were used to develop multivariate calibrations for the real-time determination of product and consumption of reactants. Composition profiles and pure component spectra of the reactant mixture, intermediate, and product were estimated using automatic window factor analysis (WFA), a type of self-modeling curve resolution (SMCR), without the aid of referee methods of analysis or standards. Window edges were automatically refined by a new iterative process that uses a robust adaptive noise threshold in the stopping criterion. Strong evidence for the formation of a reactive intermediate was detected and characterized by SMCR that could not be detected by HPLC. Eight replicate runs over a period of 3 months demonstrated that the SMCR results were reproducible. Robust smoothing of the SMCR profiles with locally weighted scatter plot smoothing (LOWESS) was used to construct control charts for detecting upsets in the batch reaction caused by the introduction of small amounts of water. Residuals (smoothed − unsmoothed) outside control limits (3×MAD, median absolute deviation of residuals from pre-run batches) were used to detect small, sudden process upsets.
- Published
- 1999
- Full Text
- View/download PDF
26. Rugged spectroscopic calibration for process control
- Author
-
Paul J. Gemperline
- Subjects
Artificial neural network, Computer science, Calibration (statistics), Process Chemistry and Technology, Computer Science Applications, Analytical Chemistry, Statistics, Principal component analysis, Partial least squares regression, Linear regression, Process control, Sensitivity (control systems), Noise (video), Biological system, Spectroscopy, Software
Multivariate spectroscopic calibration is now finding increased industrial use in the determination of mixture composition and product quality. Typically, these applications involve measurement of batch processes or process streams by UV, visible, near-infrared or infrared spectroscopy, followed by prediction of product composition or quality with multiple linear regression or partial least squares calibration models. Real time predictions of composition or quality measures may then be used to control the process to increase efficiency, purity, etc. One obstacle that limits widespread use of this strategy is the lack of calibration model ruggedness. Lack of ruggedness in calibration models may manifest itself in the form of large prediction errors following small perturbations in instrument response or slight changes in the sample system, fiber-optic probe, or process stream composition. In this paper, we describe a strategy for developing rugged calibration models using artificial neural networks and demonstrate the method on several NIR process data sets. Fourier transform or principal component preprocessing was used to reduce noise and the number of input measurements per sample. A large number of neural networks with different network architectures and random initializations were trained to predict composition using the pre-processed data. A sensitivity analysis was performed with monitoring data sets to screen the resulting networks for ones that were insensitive to simulated wavelength calibration errors, baseline offsets, path length changes or high levels of stray light. External validation data sets were used to demonstrate the ruggedness of selected neural network calibration models.
- Published
- 1997
- Full Text
- View/download PDF
27. Effective mass sampled by NIR fiber-optic reflectance probes in blending processes
- Author
-
JungHwan Cho, S. Sonja Sekulic, Paul K. Aldridge, and Paul J. Gemperline
- Subjects
Optical fiber, Near-infrared spectroscopy, Analytical chemistry, Biochemistry, Reflectivity, Standard deviation, Analytical Chemistry, Microcrystalline cellulose, Wavelength, Effective mass (solid-state physics), Homogeneity (physics), Environmental Chemistry, Spectroscopy
Recent efforts have demonstrated the in situ use of NIR spectroscopy to determine the homogeneity and potency of powder blends. This work has raised questions regarding the effective mass of a powder blend interrogated by a fiber-optic probe. The effective mass determined from experiments described herein is wavelength dependent and ranges from 0.154 to 0.858 g with a maximum standard deviation of about 0.16 g. Although the precision of this estimate is low, it is sufficiently accurate to demonstrate the usefulness of in situ NIR monitoring of blending operations in the pharmaceutical industry. The method was established by using the relationship between sample mass and spectral variance. Mixtures of lactose (50% w/w), microcrystalline cellulose (40% w/w), and sodium benzoate (10% w/w) were manually blended and sampled at selected intervals. The spectral variance at relevant wavelengths was determined as a function of blend homogeneity and sample mass using a standard micro-sample cup. The spectral variance obtained from micro-cup measurements was used to calibrate the effective mass sampled by a fiber-optic reflectance probe. The estimated mass was greatest at wavelengths where the minor constituent contributed most to the overall spectral variance. Typical pharmaceutical tablets have weights in the range of 0.1–1.0 g. According to FDA regulations, the maximum allowed sample mass for determining the homogeneity of these preparations is 0.3–3.0 g. For many formulations, the effective mass sampled by the fiber-optic probe easily falls below this range.
- Published
- 1997
- Full Text
- View/download PDF
28. Determination of multicomponent dissolution profiles of pharmaceutical products by in situ fiber-optic UV measurements
- Author
-
JungHwan Cho, Brian Batchelor, Dwight S. Walker, Brian Baker, and Paul J. Gemperline
- Subjects
Active ingredient, In situ, Optical fiber, Chromatography, Chemistry, Analytical chemistry, Factorial experiment, Biochemistry, Analytical Chemistry, Calibration, Environmental Chemistry, Dissolution testing, Spectrograph, Dissolution, Spectroscopy
The feasibility of using a fiber-optic UV/visible spectrograph for in situ dissolution testing of a pharmaceutical product containing two active ingredients, sulfamethoxazole and trimethoprim, was demonstrated. Detailed dissolution profiles clearly showed that trimethoprim dissolved rapidly, while sulfamethoxazole dissolved slowly. Multivariate calibration of the fiber-optic spectrograph was accomplished using full-range spectra from 250 to 320 nm and principal component regression (PCR). Calibration mixtures were prepared from standard reference materials according to a three-level, central composite factorial design. It was not necessary to include excipients in the calibration mixtures. The accuracy of the new in situ UV/visible method was compared to a standard HPLC method and was limited to about ±3% due to the high spectral similarity of the two active ingredients. The detailed dissolution profiles afforded by this new method may be an invaluable aid in the development of multicomponent, time-released drug products.
- Published
- 1997
- Full Text
- View/download PDF
29. Kinetic modeling of dissolution and crystallization of slurries with attenuated total reflectance UV-visible absorbance and near-infrared reflectance measurements
- Author
-
Ronald M. Hoffman, Mary Ellen P. McNally, Chun H Hsieh, Paul J. Gemperline, and Julien Billeter
- Subjects
Absorbance, Supersaturation, Chemistry, Process analytical technology, Attenuated total reflection, Slurry, Analytical chemistry, Diffuse reflection, Crystallization, Dissolution, Analytical Chemistry
- Abstract
Slurries are often used in chemical and pharmaceutical manufacturing processes but present challenging online measurement and monitoring problems. In this paper, a novel multivariate kinetic modeling application is described that provides calibration-free estimates of time-resolved profiles of the solid and dissolved fractions of a substance in a model slurry system. The kinetic model of this system achieved data fusion of time-resolved spectroscopic measurements from two different kinds of fiber-optic probes. Attenuated total reflectance UV-vis (ATR UV-vis) and diffuse reflectance near-infrared (NIR) spectra were measured simultaneously in a small-scale semibatch reactor. A simplified comprehensive kinetic model was then fitted to the time-resolved spectroscopic data to determine the kinetics of crystallization and the kinetics of dissolution for online monitoring and quality control purposes. The parameters estimated in the model included dissolution and crystal growth rate constants, as well as the dissolution rate order. The model accurately estimated the degree of supersaturation as a function of time during conditions when crystallization took place and accurately estimated the degree of undersaturation during conditions when dissolution took place.
- Published
- 2013
30. Near-IR Detection of Polymorphism and Process-Related Substances
- Author
-
C. L. Evans, Howard W. Ward, S. T. Colgan, Paul K. Aldridge, Paul J. Gemperline, and Nichole R. Boyer
- Subjects
Qualitative analysis, Polymorphism (materials science), Chemistry, Stereochemistry, Multivariate calibration, Near-Infrared Spectrometry, Biological system, Analytical Chemistry
This paper reports a fast, sensitive pattern recognition method for determining the polymorphic quality of a solid drug substance, polymorph A. The pattern recognition method employed can discriminate between the desired polymorphic form of the drug substance and another undesired polymorph. In addition, it can reliably detect samples containing minor levels of the undesired polymorph. The method can also discriminate between the desired polymorph and other crystalline forms. Most significantly, this sensitive method has been successfully transferred to six other near-IR instruments without resorting to sophisticated multivariate calibration transfer strategies.
- Published
- 1996
- Full Text
- View/download PDF
31. Wavelength Calibration Method for a CCD Detector and Multichannel Fiber-Optic Probes
- Author
-
JungHwan Cho, Paul J. Gemperline, and Dwight S. Walker
- Subjects
Physics, Optical fiber, Spectrometer, Analytical chemistry, Detector, Linear interpolation, Optics, Calibration, Charge-coupled device, Instrumentation, Spectrograph, Spectroscopy, Interpolation
A wavelength calibration method for a charge-coupled device (CCD) array detector and seven-channel fiber-optic spectrograph is described. The method was developed for the extraction of wavelength-calibrated spectra for in situ dissolution testing of tablets. The method includes automatic recognition of the positions of the seven fiber channels and recognition of mercury line positions in pixel numbers. A wavelength calibration model with two trigonometric terms was used for least-squares fitting of horizontal pixel numbers to the known wavelengths of five lines from a low-pressure mercury discharge lamp. Four other models were used for comparison. The standard error of estimate (SEE) was minimum for the model with two trigonometric terms. Each fiber channel was calibrated separately. After least-squares fitting, linear interpolation was used to obtain the wavelength-calibrated spectra in the range of 190–450 nm at 1-nm intervals. With this method, spectra from seven fiber-optic probes can be acquired simultaneously and rapidly for in situ dissolution testing. The wavelength calibration procedure is good enough to permit solution spectra acquired on one channel to be used for multivariate calibration of the other channels under appropriate circumstances.
- Published
- 1995
- Full Text
- View/download PDF
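The calibration described in entry 31 above is a least-squares fit of known mercury line wavelengths against pixel position, with two trigonometric terms added to the model. A sketch of that kind of fit is given below; the pixel positions, the assumed 1024-pixel axis, and the exact functional form of the trigonometric terms are illustrative assumptions (the mercury wavelengths themselves are standard tabulated values).

```python
import numpy as np

# Standard low-pressure mercury emission lines (nm)
hg_nm = np.array([253.65, 296.73, 365.02, 404.66, 435.83])
# Hypothetical pixel positions of those five lines on one fibre channel
pix = np.array([118.2, 342.5, 671.4, 846.1, 975.8])

# Design matrix: intercept, linear pixel term and two trigonometric terms.
# The abstract only says "two trigonometric terms"; the sin/cos form here is
# an assumption for illustration.
p = pix / 1024.0
X = np.column_stack([np.ones_like(p), p, np.sin(np.pi * p), np.cos(np.pi * p)])
coef, *_ = np.linalg.lstsq(X, hg_nm, rcond=None)

# Map every pixel of the channel to a wavelength; spectra can then be
# re-sampled onto a common 190-450 nm grid at 1 nm intervals (e.g. np.interp).
p_all = np.arange(1024) / 1024.0
lam_all = np.column_stack(
    [np.ones_like(p_all), p_all, np.sin(np.pi * p_all), np.cos(np.pi * p_all)]
) @ coef
```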
32. UV/Visible Spectral Dissolution Monitoring by in Situ Fiber-Optic Probes
- Author
-
Dwight S. Walker, Alger Salt, Paul J. Gemperline, and JungHwan Cho
- Subjects
Reproducibility, Optical fiber, Chemistry, Linearity, Analytical Chemistry, Optics, Interference (communication), Calibration, Dissolution testing, Dissolution, Spectrograph
A seven-channel fiber-optic UV/visible spectrograph has been developed for in situ dissolution testing of pharmaceutical products. The system employs a spectrograph and a two-dimensional charge-coupled device for detection. This arrangement permits simultaneous monitoring of six dissolution vessels and a seventh reference vessel. Wavelength calibration algorithms were developed to ensure that spectra recorded on each of the seven probes are mutually compatible. Standard reference materials and placebo ingredients were used to create multicomponent calibration models. Multivariate calibration techniques were developed in an effort to produce rugged analytical methods. Four products with single active ingredients were studied. We also conducted a series of studies to determine the reproducibility, both day-to-day and probe-to-probe, the linearity, and the lack of interference from excipients for the assays developed using the new instrument. It was determined that in situ dissolution testing is feasible for the formulations studied. The application of this technology is attractive because it saves time and labor and reduces the need for analytical solvents.
- Published
- 1995
- Full Text
- View/download PDF
33. A near-infrared reflectance analysis method for the noninvasive identification of film-coated and non-film-coated, blister-packed tablets
- Author
-
Nichole R. Boyer, Brian F. MacDonald, Paul J. Gemperline, and Melissa A. Dempster
- Subjects
Variance method, Chromatography, Chemistry, Analytical chemistry, Biochemistry, Reflectivity, Dosage form, Analytical Chemistry, Chemometrics, Blister pack, Environmental Chemistry, Near infrared reflectance, NIR spectra, Spectroscopy, Analysis method
A non-invasive near-infrared reflectance analysis (NIRA) method has been developed to confirm the identity of blister-packed, film-coated and non-film-coated tablets for clinical trial supplies. NIR spectra of the tablets were measured through the blister pack plastic using a fiber optic probe. The blister packs contained 18 cells which held pink, pentagonal tablets or film-coated, white, oblong tablets. The pink tablets contained about 80% w/w of a marketed drug as a clinical comparator or were matching placebos. The white film-coated tablets contained about 60% w/w of an experimental drug, about 75% w/w of a marketed drug as a clinical comparator, or were matching placebos. Pattern recognition methods were developed to identify tablets in the blister pack by using libraries of second derivative NIR spectra of each tablet type. Three chemometric methods were used: the wavelength distance method, the SIMCA residual variance method, and the Mahalanobis distance method. The methods were tested with full spectra (1100–2350 nm) and three subset wavelength ranges. Several of the methods reported here reliably discriminate between film-coated and non-film-coated tablets, and between active and placebo tablets. Library validation and test set data are presented along with a discussion of the advantages and disadvantages of selecting subset wavelength regions.
- Published
- 1995
- Full Text
- View/download PDF
34. Classification of Near-Infrared Spectra Using Wavelength Distances: Comparison to the Mahalanobis Distance and Residual Variance Methods
- Author
-
Nichole R. Boyer and Paul J. Gemperline
- Subjects
Wavelength, Mahalanobis distance, Chemistry, Statistics, Principal component analysis, Probability distribution, Sample (statistics), Variance (accounting), Residual, Linear discriminant analysis, Analytical Chemistry
A simple and easy-to-understand method for classification of near-infrared spectra is reported. The method uses a sample's normalized distance from a library of mean spectra. The probability distribution of the test is described, and its ability to discriminate between similar materials was tested and is reported. Its ability to detect samples that fail to meet product specifications and samples adulterated with minor levels of impurities was also tested and is reported. The performance of the method is compared to methods based on principal component analysis, Mahalanobis distances, and SIMCA residual variance distances. Overall, the wavelength distance method gave better classification results than the Mahalanobis and SIMCA methods when small training sets were used, but poor results were obtained in the detection of samples that do not meet product specifications and samples adulterated with low levels of contamination.
- Published
- 1995
- Full Text
- View/download PDF
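The classifier in entry 34 above scores a sample by its normalized distance from a library of mean spectra. A minimal sketch, assuming per-wavelength standardisation by the library standard deviation and simple mean pooling (the paper's exact normalisation and decision rule may differ), is:

```python
import numpy as np

def wavelength_distance(sample, class_means, class_stds, eps=1e-12):
    """Pooled, per-wavelength normalised distance of one spectrum to each class.

    sample      : (n_wavelengths,) spectrum to classify
    class_means : (n_classes, n_wavelengths) library mean spectra
    class_stds  : (n_classes, n_wavelengths) library standard deviations
    """
    d = np.abs(sample - class_means) / (class_stds + eps)
    return d.mean(axis=1)          # one distance per library class

# assignment rule: np.argmin(wavelength_distance(sample, means, stds))
```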
35. Computation of the range of feasible solutions in self-modeling curve resolution algorithms
- Author
-
Paul J. Gemperline
- Subjects
Chemometrics, Data processing, Range (mathematics), Factor analysis, Chemistry, Uncertainty handling, Computation, Resolution (electron density), Analytical chemistry, Biological system, Analytical Chemistry
Self-modeling curve resolution (SMCR) describes a set of mathematical tools for estimating pure-component spectra and composition profiles from mixture spectra. The source of mixture spectra may be overlapped chromatography peaks, composition profiles from equilibrium studies, kinetic profiles from chemical reactions and batch industrial processes, depth profiles of treated surfaces, and many other types of materials and processes. Mathematical solutions are produced under the assumption that pure-component profiles and spectra should be nonnegative and composition profiles should be unimodal. In many cases, SMCR may be the only method available for resolving the composition profiles and pure-component spectra from these measurements. Under ideal circumstances, the SMCR results are accurate quantitative estimates of the true underlying profiles. Although SMCR tools are finding wider use, it is not widely known or appreciated that, in most circumstances, SMCR techniques produce a family of solutions that obey nonnegativity constraints. In this paper, we present a new method for computation of the range of feasible solutions and use it to study the effect of chromatographic resolution, peak height, spectral dissimilarity, and signal-to-noise ratios on the magnitude of feasible solutions. An illustration of its use in resolving composition profiles from a batch reaction is also given.
- Published
- 2011
36. Eliminating complex eigenvectors and eigenvalues in multiway analyses using the direct trilinear decomposition method
- Author
-
Paul J. Gemperline and Shousong. Li
- Subjects
Data processing ,Similarity (geometry) ,Rank (linear algebra) ,Applied Mathematics ,Calculus ,Applied mathematics ,Decomposition method (constraint satisfaction) ,Matrix similarity ,Eigenvalues and eigenvectors ,Analytical Chemistry ,Mathematics ,Gram ,Resolution (algebra) - Abstract
The direct trilinear decomposition method (DTDM) is an algorithm for performing quantitative curve resolution of three-dimensional data that follow the so-called trilinear model, e.g., chromatography–spectroscopy or excitation–emission fluorescence. Under certain conditions, complex eigenvalues and eigenvectors emerge when the generalized eigenproblem is solved in DTDM. Previous publications did not treat those cases. In this paper we show how similarity transformations can be used to eliminate the imaginary part of the complex eigenvalues and eigenvectors, thereby increasing the usefulness of DTDM in practical applications. The similarity transformation technique was first used by our laboratory to solve the analogous problem in the generalized rank annihilation method (GRAM). Because unique elution profiles and spectra can be derived by using data matrices from three or more samples simultaneously, DTDM with similarity transformations is more efficient than GRAM when there are many samples to be investigated.
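The underlying linear-algebra idea can be illustrated as follows: a complex-conjugate eigenpair of a real matrix can be replaced by a real basis and a real 2x2 block without changing the decomposition. This is a generic sketch of that similarity transformation, not the DTDM implementation; it assumes conjugate pairs are returned adjacently by the eigensolver.

```python
# Hedged sketch: convert a complex eigendecomposition of a real matrix to an all-real form.
import numpy as np

def realify(eigvals, eigvecs, atol=1e-12):
    n = len(eigvals)
    L = np.zeros((n, n))
    V = np.zeros(eigvecs.shape)
    j = 0
    while j < n:
        lam, v = eigvals[j], eigvecs[:, j]
        if abs(lam.imag) < atol:
            L[j, j], V[:, j] = lam.real, v.real
            j += 1
        else:                                   # assumes eigvals[j+1] is the conjugate partner
            a, b = lam.real, lam.imag
            L[j:j+2, j:j+2] = [[a, b], [-b, a]]  # real block replacing a +/- bi
            V[:, j], V[:, j+1] = v.real, v.imag
            j += 2
    return L, V

A = np.array([[0.0, -2.0], [1.0, 0.0]])          # real matrix with complex eigenvalues
w, X = np.linalg.eig(A)
L, V = realify(w, X)
assert np.allclose(A @ V, V @ L)                 # same relation, now entirely real
```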
- Published
- 1993
- Full Text
- View/download PDF
37. Comparison of the use of volume fractions with other measures of concentration for quantitative spectroscopic calibration using the classical least squares method
- Author
-
Paul J. Gemperline, Kevin D. Dahm, Donald J. Dahm, Howard Mark, Ronald Rubinovitz, and David A. Heaps
- Subjects
Analyte ,Methylene Chloride ,Spectroscopy, Near-Infrared ,Chemistry ,Near-infrared spectroscopy ,Analytical chemistry ,Heptanes ,Absorbance ,Volume fraction ,Calibration ,Multivariate Analysis ,Chloroform ,Least-Squares Analysis ,Ternary operation ,Spectroscopy ,Instrumentation ,Mass fraction ,Algorithms ,Toluene - Abstract
Since the commercial development of modern near-infrared spectroscopy in the 1970s, analysts have almost invariably used units of weight percent as the measure of analyte concentration, due largely to historical precedent from other analytical methods, including other spectroscopic techniques. The application of the CLS algorithm to a set of binary and ternary liquid mixtures reveals that the spectroscopic measurement sees the sample differently: the measured absorbance spectrum is in fact sensitive to the volume fractions of the various components of the mixture. Because the relationship between volume fraction and other measures of analyte concentration is neither one-to-one nor linear, this finding has important implications for the application of both the CLS algorithm and the various other, more conventional, calibration algorithms that are commonly used.
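For reference, a minimal classical least squares (CLS) calibration and prediction step is sketched below with compositions expressed as volume fractions. It is a generic textbook-style sketch under a bilinear (Beer-Lambert-like) assumption, not the authors' code; the array names are placeholders.

```python
# Minimal CLS sketch: fit pure-component spectra from known compositions, then invert.
import numpy as np

def cls_fit(C, D):
    """Estimate pure-component spectra K from known compositions C and mixture spectra D (D ~= C @ K)."""
    K, *_ = np.linalg.lstsq(C, D, rcond=None)
    return K

def cls_predict(K, D_new):
    """Estimate compositions of new mixture spectra from the fitted pure spectra."""
    C_hat, *_ = np.linalg.lstsq(K.T, D_new.T, rcond=None)
    return C_hat.T

# C_vol: (n_mixtures, n_components) volume fractions of the mixture components
# D:     (n_mixtures, n_wavelengths) measured NIR absorbance spectra
# K = cls_fit(C_vol, D); C_unknown = cls_predict(K, D_unknown)
```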
- Published
- 2010
38. Developments in nonlinear multivariate calibration
- Author
-
Paul J. Gemperline
- Subjects
Multivariate statistics ,Multivariate analysis ,Artificial neural network ,business.industry ,Process Chemistry and Technology ,Linear model ,Computer Science Applications ,Analytical Chemistry ,Nonlinear system ,Linear regression ,Calibration ,Principal component regression ,Artificial intelligence ,Biological system ,business ,Spectroscopy ,Software ,Mathematics - Abstract
This paper describes ongoing efforts by researchers to develop methods for detecting, studying, and modeling nonlinear spectral response in multicomponent spectroscopic assays. Some topics for future study are also identified. Tests for detecting nonlinear regions of spectral response in multivariate, multicomponent spectroscopic assays are described. These techniques can be used to study the capability of multivariate linear models like multiple linear regression, principal components regression and partial least-squares to approximate nonlinear response. With artificial neural networks it is possible to develop calibrations that accommodate different functional forms of nonlinear response in different spectral regions.
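A minimal sketch of the kind of comparison discussed above is given below: a linear model and a small neural network are both built on the same principal-component scores. The data, component count, and network size are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: principal components regression versus a small ANN on the same scores.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

X = np.random.rand(60, 300)        # mixture spectra (samples x wavelengths), placeholder
y = np.random.rand(60)             # reference concentrations, placeholder

pca = PCA(n_components=5).fit(X)
T = pca.transform(X)               # scores used as inputs to both models

pcr = LinearRegression().fit(T, y)                           # linear calibration on scores
ann = MLPRegressor(hidden_layer_sizes=(4,), max_iter=5000,
                   random_state=0).fit(T, y)                 # nonlinear calibration on scores
```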
- Published
- 1992
- Full Text
- View/download PDF
39. Principal component analysis, trace elements, and blue crab shell disease
- Author
-
Kevin H. Miller, Paul J. Gemperline, John E. Weinstein, Terry L. West, J. Craig Hamilton, and John T. Bray
- Subjects
Trace (semiology) ,Brachyura ,Chemistry ,Data Interpretation, Statistical ,Principal component analysis ,North Carolina ,Shell (structure) ,Animals ,Humans ,Mineralogy ,Animal Diseases ,Trace Elements ,Analytical Chemistry - Published
- 1992
- Full Text
- View/download PDF
40. Methods for kinetic modeling of temporally resolved hyperspectral confocal fluorescence images
- Author
-
Patrick J. Cutler, David M. Haaland, Erik Andries, and Paul J. Gemperline
- Subjects
Microscopy, Confocal ,Photobleaching ,Series (mathematics) ,Pixel ,business.industry ,Chemistry ,Confocal ,Hyperspectral imaging ,Rate equation ,Models, Theoretical ,Least squares ,Nonlinear system ,Kinetics ,Optics ,Multivariate Analysis ,Image Processing, Computer-Assisted ,Computer Simulation ,Least-Squares Analysis ,business ,Biological system ,Instrumentation ,Spectroscopy ,Fluorescent Dyes - Abstract
Elucidating kinetic information (rate constants) from temporally resolved hyperspectral confocal fluorescence images offers important opportunities for the interpretation of spatially resolved hyperspectral confocal fluorescence images, but it also presents significant challenges: (1) the massive amount of data contained in a series of time-resolved spectral images (one time course of spectral data for each pixel) and (2) unknown concentrations of the reactants and products at time = 0, a precondition normally required by traditional kinetic fitting approaches. This paper describes two methods for solving these problems: direct nonlinear (DNL) estimation of all parameters and separable least squares (SLS). The DNL method can be applied to reactions following any rate law, while the SLS method is restricted to first-order reactions. In SLS, the inherently linear and nonlinear parameters of first-order reactions are solved in separate linear and nonlinear steps, respectively. The new methods are demonstrated using simulated data sets and an experimental data set involving photobleaching of several fluorophores. This work demonstrates that both the DNL and SLS hard-modeling methods, applied to the kinetic modeling of temporally resolved hyperspectral images, can outperform traditional soft-modeling and hard/soft-modeling methods that use multivariate curve resolution–alternating least squares (MCR-ALS). In addition, the SLS method is much faster and is able to analyze much larger data sets than the DNL method.
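The separable least squares idea for first-order kinetics can be sketched as follows: for any trial set of rate constants the amplitudes (spectra) follow from a linear solve, so only the rates need a nonlinear search. This is a generic illustration under those assumptions, not the authors' implementation; the simulated data and optimizer choice are placeholders.

```python
# Hedged sketch of separable least squares (SLS) for first-order kinetic fitting.
import numpy as np
from scipy.optimize import minimize

def design(t, k):
    """First-order concentration profiles: one decaying exponential per rate constant."""
    return np.exp(-np.outer(t, k))                 # (n_times, n_components)

def sls_residual(log_k, t, D):
    C = design(t, np.exp(log_k))
    S, *_ = np.linalg.lstsq(C, D, rcond=None)      # linear step: spectra given the profiles
    return np.sum((D - C @ S) ** 2)                # nonlinear step scores only the rates

# D: (n_times, n_channels) time-resolved spectra for one pixel or region of interest
t = np.linspace(0.0, 10.0, 50)
D = design(t, np.array([0.3, 1.5])) @ np.random.rand(2, 128)   # simulated placeholder data

fit = minimize(sls_residual, x0=np.log([0.1, 1.0]), args=(t, D), method="Nelder-Mead")
k_hat = np.exp(fit.x)                              # recovered rate constants
```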
- Published
- 2009
41. Microscale Immune Studies Laboratory
- Author
-
William E. Seaman, Mark Hilary Van Benthem, Ryan Wesley Davis, Shawn Martin, Michael B. Sinclair, Allan Brasier, Jens Fredrich Poschet, Anup K. Singh, James Bryce Ricken, Steven S. Branda, Christopher A. Apblett, Steven J. Plimpton, Amanda Carroll-Portillo, Matthew W. Moorman, Howland D. T. Jones, Jaewook Joo, Julie Kaiser, K.L. Sale, James S. Brennan, Jean-Loup M. Faulon, Todd Lane, Daniel J. Throckmorton, Roberto Rebeil, Jaclyn K. Murton, Diane S. Lidke, Glenn D. Kubiak, Nimisha Srivastava, Catherine Branda, Elsa Ndiaye-Dulac, David M. Haaland, Elizabeth L. Carles, Thomas D. Perroud, Paul J. Gemperline, Bryan Carson, Zhaoduo Zhang, Anthony Martino, Ronald P Manginell, Ronald F. Renzi, Conrad D. James, Kamlesh D. Patel, Susan M. Brozik, Milind Misra, Meiye Wu, Amy Elizabeth Herr, and Susan Rempe
- Subjects
education.field_of_study ,Innate immune system ,Immune system ,Systems biology ,Population ,TLR4 ,Macrophage ,Kinase activity ,Cell sorting ,Biology ,education ,Cell biology - Abstract
The overarching goal is to develop novel technologies to elucidate molecular mechanisms of the innate immune response in host cells to pathogens such as bacteria and viruses, including the mechanisms used by pathogens to subvert, suppress, or obfuscate the immune response to cause their harmful effects. Innate immunity is our first line of defense against a pathogenic bacterium or virus. A comprehensive, system-level understanding of innate immunity pathways such as toll-like receptor (TLR) pathways is the key to deciphering mechanisms of pathogenesis and can lead to improvements in early diagnosis or to improved therapeutics. Current methods for studying signaling focus on measurements of a limited number of components in a pathway and hence fail to provide a systems-level understanding. We have developed a systems biology approach to decipher TLR4 pathways in macrophage cell lines in response to exposure to pathogenic bacteria and their lipopolysaccharide (LPS). Our approach integrates biological reagents, a microfluidic cell handling and analysis platform, high-resolution imaging, and computational modeling to provide spatially and temporally resolved measurement of TLR-network components. The integrated microfluidic platform is capable of imaging single cells to obtain dynamic translocation data as well as high-throughput acquisition of quantitative protein expression and phosphorylation information from selected cell populations. The platform consists of multiple modules, such as a single-cell array, a cell sorter, and a phosphoflow chip, to provide confocal imaging, cell sorting, flow cytometry, and phosphorylation assays. The single-cell array module contains fluidic constrictions designed to trap and hold single host cells. Up to 100 single cells can be trapped and monitored for hours, enabling detailed, statistically significant measurements. The module was used to analyze the translocation behavior of the transcription factor NF-kB in macrophages upon activation by E. coli and Y. pestis LPS. The chip revealed an oscillation pattern in the translocation of NF-kB, indicating the presence of a negative feedback loop involving IKK. Activation of NF-kB is preceded by phosphorylation of many kinases, and to correlate kinase activity with translocation we performed flow cytometric assays in the PhosphoChip module. Phosphorylated forms of p38, ERK, and RelA were measured in macrophage cells challenged with LPS and showed a dynamic response in which phosphorylation increases with time, reaching a maximum at approximately 30–60 min. To allow further downstream analysis on selected cells, we also implemented optical-trapping-based sorting of cells. This has allowed us to sort macrophages infected with bacteria from uninfected cells, with the goal of obtaining data only on the infected (the desired) population. The various microfluidic chip modules and the accessories required to operate them, such as pumps, heaters, electronic controls, and optical detectors, are being assembled into a bench-top, semi-automated device. The data generated are being used to refine the existing TLR pathway model by adding kinetic rate constants and concentration information. The microfluidic platform allows high-resolution imaging as well as quantitative proteomic measurements with high sensitivity.
- Published
- 2009
- Full Text
- View/download PDF
42. Nonlinear multivariate calibration using principal components regression and artificial neural networks
- Author
-
James R. Long, Paul J. Gemperline, and Vasilis G. Gregoriou
- Subjects
Multivariate analysis ,Artificial neural network ,business.industry ,Chemistry ,Multivariate calibration ,Non linear model ,Pattern recognition ,Analytical Chemistry ,Nonlinear system ,Statistics ,Principal component analysis ,Principal component regression ,Artificial intelligence ,business - Published
- 1991
- Full Text
- View/download PDF
43. Experimental monitoring and data analysis tools for protein folding: study of steady-state evolution and modeling of kinetic transients by multitechnique and multiexperiment data fusion
- Author
-
Patrick, Cutler, Paul J, Gemperline, and Anna, de Juan
- Subjects
Kinetics ,Protein Folding ,Myoglobin ,Circular Dichroism ,Animals ,Horses ,Models, Biological - Abstract
Protein folding is a complex process that can take place through different pathways depending on the inducing agent and on the monitored time scale. This diversity of possibilities requires a good design of experiments and powerful data analysis tools that can operate with multitechnique measurements and/or with diverse experiments related to different aspects of the process of interest. Multivariate curve resolution-alternating least squares (MCR-ALS) has been the core methodology used to perform multitechnique and/or multiexperiment data analysis. This algorithm provides the process concentration profiles and pure spectra of all species involved in the protein folding from the raw spectroscopic measurements alone, obtained during the experimental monitoring. The process profiles provide insight into the mechanism of the process studied, whereas the shapes of the recovered pure spectra help in the characterization of the protein conformations involved. Relevant features of the MCR-ALS algorithm are the possibility of handling fused data, i.e., series of experiments monitored with different techniques and/or performed under different experimental conditions, and the flexibility to include a priori information linked to general properties of concentration profiles and spectra and to the kinetic model governing the folding process. All these characteristics help to obtain a comprehensive description of the protein folding mechanism. To our knowledge, this work includes for the first time the simultaneous analysis of steady-state and short-time-scale kinetic experiments linked to a protein folding process. The potential of this methodology is shown taking myoglobin as a model system for protein folding or, in general, for the study of any complex biological process that needs multitechnique and multiexperiment monitoring and analysis. Transformations in myoglobin due to changes in pH have been monitored by ultraviolet/visible (UV-vis) absorption and circular dichroism (CD) spectroscopy. Steady-state and stopped-flow experiments were carried out to account for the evolution of the process at different time scales. In this example, the multiexperiment analysis has allowed the reliable detection and modeling of a kinetic transient species in the myoglobin folding process that is absent under steady-state working conditions.
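The data-fusion aspect can be sketched as follows: spectra from two techniques recorded on the same samples are concatenated along the wavelength axis so they share one set of concentration profiles, and a basic nonnegativity-constrained ALS loop alternates between the two bilinear updates. This is a generic MCR-ALS illustration, not the authors' code; array names and the initialization are assumptions.

```python
# Hedged sketch: nonnegativity-constrained ALS on a row-wise augmented (fused) data matrix.
import numpy as np
from scipy.optimize import nnls

def als_nonneg(D, C0, n_iter=50):
    C = C0.copy()
    for _ in range(n_iter):
        S = np.vstack([nnls(C, D[:, j])[0] for j in range(D.shape[1])]).T   # spectra given C
        C = np.vstack([nnls(S.T, D[i, :])[0] for i in range(D.shape[0])])   # concentrations given S
    return C, S

# D_uv, D_cd: (n_samples, n_channels_*) matrices from UV-vis and CD on the same samples
# D_fused = np.hstack([D_uv, D_cd])        # one shared concentration matrix for both techniques
# C, S_fused = als_nonneg(D_fused, C0)     # C0: initial estimates, e.g. from evolving factor analysis
```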
- Published
- 2008
44. Advantages of soft versus hard constraints in self-modeling curve resolution problems. Penalty alternating least squares (P-ALS) extension to multi-way problems
- Author
-
Selena E. Richards, Paul J. Gemperline, and Robert Miller
- Subjects
Models, Statistical ,Spectroscopy, Near-Infrared ,Rank (linear algebra) ,Pyridines ,Acetic Anhydrides ,Function (mathematics) ,Least squares ,Base (group theory) ,Reduction (complexity) ,Matrix (mathematics) ,1-Butanol ,Models, Chemical ,Non-linear least squares ,Data Interpretation, Statistical ,Computer Simulation ,Total least squares ,Instrumentation ,Algorithm ,Spectroscopy ,Mathematics - Abstract
An extension to the penalty alternating least squares (P-ALS) method, called multi-way penalty alternating least squares (NWAY P-ALS), is presented. Optionally, hard constraints (no deviation from predefined constraints) or soft constraints (small deviations from predefined constraints) were applied through a row-wise penalty least squares function. NWAY P-ALS was applied to multi-batch near-infrared (NIR) data acquired from the base-catalyzed esterification reaction of acetic anhydride with 1-butanol in order to resolve the concentration and spectral profiles of the reaction constituents. Application of the NWAY P-ALS approach reduced the number of active constraints at the solution point, while the batch column-wise augmentation allowed hard constraints in the spectral profiles and resolved rank deficiency problems of the measurement matrix. The results were compared with multi-way multivariate curve resolution (MCR)-ALS results using hard and soft constraints to determine whether any advantages had been gained through the weighted least squares function of NWAY P-ALS over the MCR-ALS resolution.
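The penalty (soft-constraint) idea can be illustrated with a single least-squares step: a known value is appended as an extra, weighted equation, so the solution may deviate slightly from it rather than being forced to match exactly. This is a minimal sketch of the concept, not the P-ALS implementation; names and the weight are illustrative.

```python
# Hedged sketch: a soft constraint added as a weighted penalty row in a least-squares solve.
import numpy as np

def penalized_lstsq(A, b, c_known, idx, weight):
    """Solve min ||A x - b||^2 + weight * (x[idx] - c_known)^2 via row augmentation."""
    w = np.sqrt(weight)
    row = np.zeros((1, A.shape[1]))
    row[0, idx] = w
    A_aug = np.vstack([A, row])
    b_aug = np.append(b, w * c_known)
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x

# weight ~ 0 effectively ignores the known value; a very large weight behaves like a hard constraint.
```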
- Published
- 2008
45. Characterization of Cu2+-binding modes in the prion protein by visible circular dichroism and multivariate curve resolution
- Author
-
Paul J. Gemperline, Colin S. Burns, John M. Kenney, Patrick J. Cutler, and J.B. Pollock
- Subjects
Circular dichroism ,Prions ,Molecular Sequence Data ,Biophysics ,Analytical chemistry ,Biochemistry ,Least squares ,Spectral line ,Ion ,Animals ,Amino Acid Sequence ,Least-Squares Analysis ,Molecular Biology ,Analysis of Variance ,Binding Sites ,Component (thermodynamics) ,Chemistry ,Circular Dichroism ,Resolution (electron density) ,Titrimetry ,Cell Biology ,Crystallography ,Titration ,Absorption (chemistry) ,Peptides ,Copper - Abstract
Visible circular dichroism (CD) spectra from the copper(II) titration of the metal-binding region of the prion protein, residues 57–98, were analyzed using the self-modeling curve resolution method multivariate curve resolution–alternating least squares (MCR-ALS). MCR-ALS is a set of mathematical tools for estimating pure component spectra and composition profiles from mixture spectra. Model-free solutions (e.g., soft models) are produced under the assumption that pure component profiles should be nonnegative and unimodal. Optionally, equality constraints can be used when the concentration or spectrum of one or more species is known. MCR-ALS is well suited to complex biochemical systems such as the prion protein which binds multiple copper ions and thus gives rise to titration data consisting of several pure component spectra with overlapped or superimposed absorption bands. Our study reveals the number of binding modes used in the uptake of Cu2+ by the full metal-binding region of the prion protein and their relative concentration profiles throughout the titration. The presence of a non-CD active binding mode can also be inferred. We show that MCR-ALS analysis can be initialized using empirically generated or mathematically generated pure component spectra. The use of small model peptides allows us to correlate specific Cu2+-binding structures to the pure component spectra.
- Published
- 2008
46. Spectroscopic calibration and quantitation using artificial neural networks
- Author
-
Paul J. Gemperline, James R. Long, and Vasilis G. Gregoriou
- Subjects
Artificial neural network ,Calibration (statistics) ,Chemistry ,business.industry ,Computer aid ,Pattern recognition ,Artificial intelligence ,business ,Analytical Chemistry - Abstract
Artificial neural networks are applied to the calibration of multiple nonlinear spectroscopic parameters using infrared and UV-visible spectroscopy data. When a nonlinear spectral response is present, neural networks provide better predictions than mathematical methods.
- Published
- 1990
- Full Text
- View/download PDF
47. Combination of the Mahalanobis distance and residual variance pattern recognition techniques for classification of near-infrared reflectance spectra
- Author
-
Paul J. Gemperline and Nilesh K. Shah
- Subjects
Mahalanobis distance ,Chemistry ,business.industry ,Near-infrared spectroscopy ,Pattern recognition ,Variance (accounting) ,Residual ,Class (biology) ,Analytical Chemistry ,Classification rule ,Principal component analysis ,Pattern recognition (psychology) ,Artificial intelligence ,business - Abstract
Principal component analysis of near-infrared reflectance (NIR) spectra is used for the calculation of Mahalanobis distances and for the construction of soft independent modeling of class analogy (SIMCA) classification models. The complementary behavior of these two classification methods is discussed, and a new classification rule based on a combination of the two methods is described. The application of NIR spectroscopy and pattern recognition techniques for identifying and classifying raw materials used in the pharmaceutical industry is also discussed.
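A combined rule of this kind can be sketched as follows: a sample is accepted into a class only if both its Mahalanobis distance in the PCA score space and its off-model reconstruction residual fall below their limits. This is an illustrative sketch, not the published rule; the thresholds and the choice of component count are assumptions.

```python
# Hedged sketch: combined Mahalanobis-distance and residual-distance class acceptance test.
import numpy as np

def fit_class_model(train, n_pc):
    mean = train.mean(axis=0)
    Xc = train - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_pc]                                   # loadings (n_pc x n_wavelengths)
    scores = Xc @ P.T
    cov_inv = np.linalg.inv(np.cov(scores, rowvar=False))   # requires n_pc >= 2 and enough samples
    return {"mean": mean, "P": P, "cov_inv": cov_inv}

def accepted(x, model, d_limit=3.0, q_limit=0.05):
    xc = x - model["mean"]
    t = model["P"] @ xc                             # scores of the unknown spectrum
    d2 = t @ model["cov_inv"] @ t                   # squared Mahalanobis distance in score space
    q = np.linalg.norm(xc - model["P"].T @ t)       # SIMCA-like residual off the PC model
    return np.sqrt(d2) < d_limit and q < q_limit
```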
- Published
- 1990
- Full Text
- View/download PDF
48. Mixture analysis using factor analysis. II: Self-modeling curve resolution
- Author
-
J. Craig Hamilton and Paul J. Gemperline
- Subjects
Factor analysis ,Chromatography ,Materials science ,Resolution (mass spectrometry) ,Applied Mathematics ,Analytical chemistry ,Gas chromatography ,Mass spectrometry ,Analytical Chemistry - Abstract
One of the major applications of factor analysis in the chemical literature, self-modeling curve resolution (SMCR), is covered in this review, including a historical account of the methods derived from Lawton and Sylvestre's original method. Papers treating the theory or applications of SMCR are included. Qualitative and quantitative applications are described where appropriate.
- Published
- 1990
- Full Text
- View/download PDF
49. Scale-up of batch kinetic models
- Author
-
Alison Nordon, Maryann Ehly, Martin De Cecco, J. Katy Basford, David Littlejohn, and Paul J. Gemperline
- Subjects
Chemistry ,Chemistry, Pharmaceutical ,Analytical chemistry ,Kinetic energy ,Biochemistry ,Spectral line ,Analytical Chemistry ,Chemical kinetics ,symbols.namesake ,Models, Chemical ,Reagent ,Calibration ,symbols ,Environmental Chemistry ,Thermodynamics ,Pharmacokinetics ,Gas chromatography ,Raman spectroscopy ,Nonlinear regression ,Spectroscopy - Abstract
The scale-up of batch kinetic models was studied by examining the kinetic fitting results of batch esterification reactions carried out in 75 mL and 5 L reactors. Different temperatures, amounts of catalyst, and amounts of initial starting reagents were used to completely characterize the reaction. A custom-written MATLAB toolbox called GUIPRO was used to fit first-principles kinetic models directly to in-line NIR and Raman spectroscopic data. Second-order kinetic models provided calibration-free estimates of kinetic and thermodynamic reaction parameters, time-dependent concentration profiles, and pure-component spectra of reagents and product. The estimated kinetic and thermodynamic parameters showed good agreement between small-scale and large-scale reactions. The accuracy of the pure-component spectral estimates was validated by comparison to collected NIR and Raman pure-component spectra. The model-estimated product concentrations were also validated by comparison to concentrations measured by off-line GC analysis. Based on the good agreement between kinetic and thermodynamic parameters and the comparison between actual and estimated concentration and spectral profiles, it was concluded that the scale-up of batch kinetic models was successful.
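The calibration-free hard-modeling workflow described above can be sketched generically: a second-order rate law is integrated for a trial rate constant, the pure-component spectra follow from a linear solve against the measured spectra, and the rate constant is refined to minimize the residuals. This is a minimal sketch in that spirit, not the GUIPRO code; the reaction scheme A + B -> P, initial concentrations, and data are illustrative assumptions.

```python
# Hedged sketch: fitting a second-order kinetic model directly to in-line spectra.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

def profiles(t, k, a0, b0):
    rhs = lambda _, y: [-k * y[0] * y[1], -k * y[0] * y[1], k * y[0] * y[1]]
    sol = solve_ivp(rhs, (t[0], t[-1]), [a0, b0, 0.0], t_eval=t)
    return sol.y.T                                   # columns: [A], [B], [P]

def objective(k, t, D, a0, b0):
    C = profiles(t, k, a0, b0)
    S, *_ = np.linalg.lstsq(C, D, rcond=None)        # pure spectra given the kinetic profiles
    return np.sum((D - C @ S) ** 2)

# D: (n_times, n_wavelengths) in-line NIR or Raman spectra of the batch
t = np.linspace(0.0, 60.0, 40)
D = profiles(t, 0.05, 1.0, 0.8) @ np.random.rand(3, 200)     # simulated placeholder data
fit = minimize_scalar(objective, bounds=(1e-4, 1.0), args=(t, D, 1.0, 0.8), method="bounded")
k_hat = fit.x                                        # refined second-order rate constant
```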
- Published
- 2006
50. Introduction to Chemometrics
- Author
-
Paul J. Gemperline
- Subjects
Chemometrics ,Chromatography ,Chemistry - Published
- 2006
- Full Text
- View/download PDF