22 results for "Wang, Weixun"
Search Results
2. Coach-assisted multi-agent reinforcement learning framework for unexpected crashed agents
- Author
-
Zhao, Jian, Zhao, Youpeng, Wang, Weixun, Yang, Mingyu, Hu, Xunhan, Zhou, Wengang, Hao, Jianye, and Li, Houqiang
- Published
- 2022
3. Rationally Designed Mutations Convert de novo Amyloid-like Fibrils into Monomeric β-Sheet Proteins
- Author
-
Wang, Weixun and Hecht, Michael H.
- Published
- 2002
4. Self-Assembled Monolayers from a Designed Combinatorial Library of de novo β-Sheet Proteins
- Author
-
Xu, Guofeng, Wang, Weixun, Groves, John T., and Hecht, Michael H.
- Published
- 2001
5. MARLlib: A Scalable Multi-agent Reinforcement Learning Library
- Author
-
Hu, Siyi, Zhong, Yifan, Gao, Minquan, Wang, Weixun, Dong, Hao, Li, Zhihui, Liang, Xiaodan, Chang, Xiaojun, and Yang, Yaodong
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Artificial Intelligence (cs.AI), Computer Science - Artificial Intelligence, Computer Science - Multiagent Systems, Machine Learning (cs.LG), Multiagent Systems (cs.MA)
- Abstract
Despite the fast development of multi-agent systems (MAS) and multi-agent reinforcement learning (MARL) algorithms, there is a lack of unified evaluation platforms and commonly acknowledged baseline implementations. Therefore, there is an urgent need for an integrated library suite that delivers reliable MARL implementations and replicable evaluation across various benchmarks. To fill this research gap, we propose MARLlib, a comprehensive MARL algorithm library for solving multi-agent problems. With a novel design of agent-level distributed dataflow, MARLlib unifies tens of algorithms in a highly composable integration style. Moreover, MARLlib goes beyond current work by integrating diverse environment interfaces and providing flexible parameter sharing strategies; this allows for versatile solutions to cooperative, competitive, and mixed tasks with minimal code modifications for end users. Finally, MARLlib provides easy-to-use APIs and a fully decoupled configuration system to help end users manipulate the learning process. A plethora of experiments is conducted to substantiate the correctness of our implementation, based on which we further derive new insights into the relationship between performance and the design of algorithmic components. With MARLlib, we expect researchers to be able to tackle broader real-world multi-agent problems with trustworthy solutions. GitHub: https://github.com/Replicable-MARL/MARLlib
- Published
- 2022
6. Off-Beat Multi-Agent Reinforcement Learning
- Author
-
Qiu, Wei, Wang, Weixun, Wang, Rundong, An, Bo, Hu, Yujing, Obraztsova, Svetlana, Rabinovich, Zinovi, Hao, Jianye, Chen, Yingfeng, and Fan, Changjie
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Artificial Intelligence (cs.AI), Computer Science - Artificial Intelligence, Computer Science - Multiagent Systems, Multiagent Systems (cs.MA), Machine Learning (cs.LG)
- Abstract
We investigate model-free multi-agent reinforcement learning (MARL) in environments where off-beat actions are prevalent, i.e., all actions have pre-set execution durations. During execution durations, the environment changes are influenced by, but not synchronised with, action execution. Such a setting is ubiquitous in many real-world problems. However, most MARL methods assume actions are executed immediately after inference, which is often unrealistic and can lead to catastrophic failure for multi-agent coordination with off-beat actions. In order to fill this gap, we develop an algorithmic framework for MARL with off-beat actions. We then propose a novel episodic memory, LeGEM, for model-free MARL algorithms. LeGEM builds agents' episodic memories by utilizing agents' individual experiences. It boosts multi-agent learning by addressing the challenging temporal credit assignment problem raised by the off-beat actions via our novel reward redistribution scheme, alleviating the issue of non-Markovian reward. We evaluate LeGEM on various multi-agent scenarios with off-beat actions, including Stag-Hunter Game, Quarry Game, Afforestation Game, and StarCraft II micromanagement tasks. Empirical results show that LeGEM significantly boosts multi-agent coordination and achieves leading performance and improved sample efficiency.
- Published
- 2022
7. Revisiting QMIX: Discriminative Credit Assignment by Gradient Entropy Regularization
- Author
-
Zhao, Jian, Zhang, Yue, Hu, Xunhan, Wang, Weixun, Zhou, Wengang, Hao, Jianye, Zhu, Jiangcheng, and Li, Houqiang
- Subjects
FOS: Computer and information sciences, Artificial Intelligence (cs.AI), Computer Science - Artificial Intelligence, ComputingMilieux_MISCELLANEOUS
- Abstract
In cooperative multi-agent systems, agents jointly take actions and receive a team reward instead of individual rewards. In the absence of individual reward signals, credit assignment mechanisms are usually introduced to discriminate the contributions of different agents so as to achieve effective cooperation. Recently, the value decomposition paradigm has been widely adopted to realize credit assignment, and QMIX has become the state-of-the-art solution. In this paper, we revisit QMIX from two aspects. First, we propose a new perspective on credit assignment measurement and empirically show that QMIX suffers from limited discriminability in assigning credit to agents. Second, we propose a gradient entropy regularization for QMIX to realize discriminative credit assignment, thereby improving the overall performance. Experiments demonstrate that our approach improves learning efficiency and achieves better performance.
- Published
- 2022
8. Learning to Utilize Shaping Rewards: A New Approach of Reward Shaping
- Author
-
Hu, Yujing, Wang, Weixun, Jia, Hangtian, Wang, Yixiang, Chen, Yingfeng, Hao, Jianye, Wu, Feng, and Fan, Changjie
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Artificial Intelligence (cs.AI), Computer Science - Artificial Intelligence, ComputingMilieux_MISCELLANEOUS, Machine Learning (cs.LG)
- Abstract
Reward shaping is an effective technique for incorporating domain knowledge into reinforcement learning (RL). Existing approaches such as potential-based reward shaping normally make full use of a given shaping reward function. However, since the transformation of human knowledge into numeric reward values is often imperfect due to reasons such as human cognitive bias, completely utilizing the shaping reward function may fail to improve the performance of RL algorithms. In this paper, we consider the problem of adaptively utilizing a given shaping reward function. We formulate the utilization of shaping rewards as a bi-level optimization problem, where the lower level optimizes the policy using the shaping rewards and the upper level optimizes a parameterized shaping weight function for true reward maximization. We formally derive the gradient of the expected true reward with respect to the shaping weight function parameters and accordingly propose three learning algorithms based on different assumptions. Experiments in sparse-reward cartpole and MuJoCo environments show that our algorithms can fully exploit beneficial shaping rewards while ignoring unbeneficial shaping rewards or even transforming them into beneficial ones. (Accepted at NeurIPS 2020.)
- Published
- 2020
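The reward-shaping entry above builds on potential-based reward shaping, where a term F(s, s') = γΦ(s') − Φ(s) is added to the environment reward without changing the optimal policy. A minimal tabular Q-learning sketch of that baseline idea (not the paper's bi-level method; the 5-state chain task and the progress potential are invented here for illustration):

```python
import random

# Hypothetical 5-state chain: start at state 0, reward 1.0 for reaching state 4.
N, GAMMA, ALPHA = 5, 0.9, 0.5

def step(s, a):  # a: 0 = left, 1 = right
    s2 = min(N - 1, max(0, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == N - 1 else 0.0), s2 == N - 1

def train(phi, episodes=500, seed=0):
    """Off-policy Q-learning with the reward shaped by potential phi."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N)]
    for _ in range(episodes):
        s = 0
        for _ in range(100):
            a = rng.randrange(2)                   # uniform random behavior policy
            s2, r, done = step(s, a)
            shaped = r + GAMMA * phi(s2) - phi(s)  # F(s, s') added to the reward
            target = shaped + (0.0 if done else GAMMA * max(Q[s2]))
            Q[s][a] += ALPHA * (target - Q[s][a])
            s = s2
            if done:
                break
    return Q

plain = train(lambda s: 0.0)                              # no shaping
shaped = train(lambda s: float(s) if s < N - 1 else 0.0)  # progress potential, 0 at goal

# Policy invariance: both greedy policies move right in every non-terminal state.
for s in range(N - 1):
    assert max((0, 1), key=lambda a: plain[s][a]) == 1
    assert max((0, 1), key=lambda a: shaped[s][a]) == 1
```

The shaped run learns Q-values shifted by −Φ(s) per state, so the greedy policy is unchanged; the paper's contribution is to weight such shaping rewards adaptively when they are unreliable.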
9. An ultrasensitive method for the quantitation of active and inactive GLP-1 in human plasma via immunoaffinity LC–MS/MS
- Author
-
Chappell, Derek L, Lee, Anita YH, Castro-Perez, Jose, Zhou, Haihong, Roddy, Thomas P, Lassman, Michael E, Shankar, Sudha S, Yates, Nathan A, Wang, Weixun, and Laterza, Omar F
- Published
- 2014
10. An Efficient Transfer Learning Framework for Multiagent Reinforcement Learning
- Author
-
Yang, Tianpei, Wang, Weixun, Tang, Hongyao, Hao, Jianye, Meng, Zhaopeng, Mao, Hangyu, Li, Dong, Liu, Wulong, Zhang, Chengwei, Hu, Yujing, Chen, Yingfeng, and Fan, Changjie
- Subjects
FOS: Computer and information sciences, Computer Science - Multiagent Systems, ComputingMethodologies_ARTIFICIALINTELLIGENCE, Multiagent Systems (cs.MA)
- Abstract
Transfer Learning has shown great potential to enhance single-agent Reinforcement Learning (RL) efficiency. Similarly, Multiagent RL (MARL) can also be accelerated if agents can share knowledge with each other. However, how an agent should learn from other agents remains an open problem. In this paper, we propose a novel Multiagent Policy Transfer Framework (MAPTF) to improve MARL efficiency. MAPTF learns which agent's policy is the best to reuse for each agent and when to terminate it by modeling multiagent policy transfer as the option learning problem. Furthermore, in practice, the option module can only collect all agents' local experiences for updates due to the partial observability of the environment. In this setting, agents' experiences may be inconsistent with each other, which can make the option-value estimation inaccurate and unstable. Therefore, we propose a novel option learning algorithm, successor representation option learning, which solves this by decoupling the environment dynamics from rewards and learning the option-value under each agent's preference. MAPTF can be easily combined with existing deep RL and MARL approaches, and experimental results show it significantly boosts the performance of existing methods in both discrete and continuous state spaces. (Accepted at NeurIPS 2021.)
- Published
- 2020
11. Efficient Deep Reinforcement Learning via Adaptive Policy Transfer
- Author
-
Yang, Tianpei, Hao, Jianye, Meng, Zhaopeng, Zhang, Zongzhang, Hu, Yujing, Cheng, Yingfeng, Fan, Changjie, Wang, Weixun, Liu, Wulong, Wang, Zhaodong, and Peng, Jiajie
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Artificial Intelligence (cs.AI), Computer Science - Artificial Intelligence, Statistics - Machine Learning, Machine Learning (stat.ML), Machine Learning (cs.LG)
- Abstract
Transfer Learning (TL) has shown great potential to accelerate Reinforcement Learning (RL) by leveraging prior knowledge from past learned policies of relevant tasks. Existing transfer approaches either explicitly compute the similarity between tasks or select appropriate source policies to provide guided exploration for the target task. However, how to directly optimize the target policy by alternately utilizing knowledge from appropriate source policies, without explicitly measuring the similarity, has been missing. In this paper, we propose a novel Policy Transfer Framework (PTF) to accelerate RL by taking advantage of this idea. Our framework learns when and which source policy is the best to reuse for the target policy and when to terminate it by modeling multi-policy transfer as the option learning problem. PTF can be easily combined with existing deep RL approaches. Experimental results show it significantly accelerates the learning process and surpasses state-of-the-art policy transfer methods in terms of learning efficiency and final performance in both discrete and continuous action spaces. (Accepted at IJCAI 2020.)
- Published
- 2020
12. Action Semantics Network: Considering the Effects of Actions in Multiagent Systems
- Author
-
Wang, Weixun, Yang, Tianpei, Liu, Yong, Hao, Jianye, Hao, Xiaotian, Hu, Yujing, Chen, Yingfeng, Fan, Changjie, and Gao, Yang
- Subjects
FOS: Computer and information sciences, Artificial Intelligence (cs.AI), Computer Science - Artificial Intelligence, Computer Science - Multiagent Systems, ComputingMethodologies_ARTIFICIALINTELLIGENCE, Multiagent Systems (cs.MA)
- Abstract
In multiagent systems (MASs), each agent makes individual decisions but all of them contribute globally to the system evolution. Learning in MASs is difficult since each agent's selection of actions must take place in the presence of other co-learning agents. Moreover, environmental stochasticity and uncertainties increase exponentially with the number of agents. Previous works borrow various multiagent coordination mechanisms into deep learning architectures to facilitate multiagent coordination. However, none of them explicitly considers action semantics between agents, i.e., that different actions have different influences on other agents. In this paper, we propose a novel network architecture, named Action Semantics Network (ASN), that explicitly represents such action semantics between agents. ASN characterizes different actions' influence on other agents using neural networks based on the action semantics between them. ASN can be easily combined with existing deep reinforcement learning (DRL) algorithms to boost their performance. Experimental results on StarCraft II micromanagement and Neural MMO show ASN significantly improves the performance of state-of-the-art DRL approaches compared with several network architectures. (Accepted at ICLR 2020.)
- Published
- 2019
13. Towards Cooperation in Sequential Prisoner's Dilemmas: a Deep Multiagent Reinforcement Learning Approach
- Author
-
Wang, Weixun, Hao, Jianye, Wang, Yixi, and Taylor, Matthew
- Subjects
FOS: Computer and information sciences, Computer Science - Learning, Artificial Intelligence (cs.AI), Computer Science - Artificial Intelligence, Computer Science - Computer Science and Game Theory, Computer Science - Multiagent Systems, ComputingMethodologies_ARTIFICIALINTELLIGENCE, Computer Science and Game Theory (cs.GT), Machine Learning (cs.LG), Multiagent Systems (cs.MA)
- Abstract
The Iterated Prisoner's Dilemma has guided research on social dilemmas for decades. However, it distinguishes between only two atomic actions: cooperate and defect. In real-world prisoner's dilemmas, these choices are temporally extended and different strategies may correspond to sequences of actions, reflecting grades of cooperation. We introduce a Sequential Prisoner's Dilemma (SPD) game to better capture the aforementioned characteristics. In this work, we propose a deep multiagent reinforcement learning approach that investigates the evolution of mutual cooperation in SPD games. Our approach consists of two phases. The first phase is offline: it synthesizes policies with different cooperation degrees and then trains a cooperation degree detection network. The second phase is online: an agent adaptively selects its policy based on the detected degree of opponent cooperation. The effectiveness of our approach is demonstrated in two representative SPD 2D games: the Apple-Pear game and the Fruit Gathering game. Experimental results show that our strategy can avoid being exploited by exploitative opponents and achieve cooperation with cooperative opponents.
- Published
- 2018
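The entry above extends the two-action Prisoner's Dilemma to temporally extended strategies. For reference, a minimal sketch of the atomic iterated game it generalizes (not the paper's deep MARL approach; the payoffs T=5, R=3, P=1, S=0 are the conventional choices, not taken from the paper):

```python
# Classic iterated Prisoner's Dilemma with atomic cooperate/defect actions.
# PAYOFF[(my_move, their_move)] -> (my_score, their_score).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(history):   # cooperate first, then copy the opponent's last move
    return history[-1][1] if history else 'C'

def always_defect(history):
    return 'D'

def play(p1, p2, rounds=10):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        a1, a2 = p1(h1), p2(h2)
        r1, r2 = PAYOFF[(a1, a2)]
        s1, s2 = s1 + r1, s2 + r2
        h1.append((a1, a2))         # each player sees (own move, opponent move)
        h2.append((a2, a1))
    return s1, s2

# Mutual tit-for-tat sustains cooperation; against always-defect it is
# exploited only in the first round, then punishes.
assert play(tit_for_tat, tit_for_tat) == (30, 30)
assert play(tit_for_tat, always_defect) == (9, 14)
```

The paper's SPD setting replaces these atomic C/D choices with sequences of low-level actions whose degree of cooperation must be inferred.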
14. Background-free upconversion-encoded microspheres for mycotoxin detection based on a rapid visualization method.
- Author
-
Yang, Minye, Cui, Meihui, Wang, Weixun, Yang, Yaodong, Chang, Jin, Hao, Jianye, and Wang, Hanjie
- Subjects
OCHRATOXINS, MICROSPHERES, FLUORIMETRY, SIGNAL detection, FOOD quality, BLUE light
- Abstract
Methods for detecting mycotoxins are very important because of their great health hazards. However, real-time sensing suffers from a high background and low signal-to-noise ratio, making it difficult to meet the fast, accurate, and convenient requirements of food quality control. Here we constructed a quantitative fluorescence image analysis based on multicolor upconversion nanocrystal (UCN)-encoded microspheres for detection of ochratoxin A and zearalenone. The background-free encoding image signal of UCN-doped microspheres was captured by fluorescence microscopy under near-infrared excitation, whereas the detection image signal of phycoerythrin-labeled secondary antibodies conjugated to the microspheres was captured under blue light excitation. We custom-wrote an algorithm to analyze the two images for the same sample in 10 s, and only the gray value in the red channel of the secondary probe confirmed the quantity. The results showed that this novel detection platform produced feasible and reliable fluorescence image measurements. Additionally, the limit of detection was 0.34721 ng/mL for ochratoxin A and 0.41162 ng/mL for zearalenone. We envision that this UCN encoding strategy will be usefully applied for fast, accurate, and convenient testing of multiple food contaminants to ensure food safety.
- Published
- 2020
15. Dipeptidyl Peptidase 4 Inhibition Stimulates Distal Tubular Natriuresis and Increases in Circulating SDF-1α1-67 in Patients With Type 2 Diabetes.
- Author
-
Lovshin, Julie A., Rajasekeran, Harindra, Lytvyn, Yulyia, Lovblom, Leif E., Khan, Shajiha, Alemu, Robel, Locke, Amy, Lai, Vesta, He, Huaibing, Hittle, Lucinda, Drucker, Daniel J., Cherney, David Z. I., and Wang, Weixun
- Subjects
DIABETES, EMPAGLIFLOZIN, CARDIOVASCULAR diseases, KIDNEY failure, TYPE 2 diabetes, VACCINATION, THERAPEUTICS
- Abstract
Objective: Antihyperglycemic agents, such as empagliflozin, stimulate proximal tubular natriuresis and improve cardiovascular and renal outcomes in patients with type 2 diabetes. Because dipeptidyl peptidase 4 (DPP-4) inhibitors are used in combination with sodium-glucose cotransporter 2 (SGLT2) inhibitors, we examined whether and how sitagliptin modulates fractional sodium excretion and renal and systemic hemodynamic function.Research Design and Methods: We studied 32 patients with type 2 diabetes in a prospective, double-blind, randomized, placebo-controlled trial. Measurements of renal tubular function and renal and systemic hemodynamics were obtained at baseline, then hourly after one dose of sitagliptin or placebo, and repeated at 1 month. Fractional excretion of sodium and lithium and renal hemodynamic function were measured during clamped euglycemia. Systemic hemodynamics were measured using noninvasive cardiac output monitoring, and plasma levels of intact versus cleaved stromal cell-derived factor (SDF)-1α were quantified using immunoaffinity and tandem mass spectrometry.Results: Sitagliptin did not change fractional lithium excretion but significantly increased total fractional sodium excretion (1.32 ± 0.5 to 1.80 ± 0.01% vs. 2.15 ± 0.6 vs. 2.02 ± 1.0%, P = 0.012) compared with placebo after 1 month of treatment. Moreover, sitagliptin robustly increased intact plasma SDF-1α1-67 and decreased truncated plasma SDF-1α3-67. Renal hemodynamic function, systemic blood pressure, cardiac output, stroke volume, and total peripheral resistance were not adversely affected by sitagliptin.Conclusions: DPP-4 inhibition promotes a distal tubular natriuresis in conjunction with increased levels of intact SDF-1α1-67. Because of the distal location of the natriuretic effect, DPP-4 inhibition does not affect tubuloglomerular feedback or impair renal hemodynamic function, findings relevant to using DPP-4 inhibitors for treating type 2 diabetes. 
- Published
- 2017
16. TCEC: Temperature and Energy-Constrained Scheduling in Real-Time Multitasking Systems.
- Author
-
Qin, Xiaoke, Wang, Weixun, and Mishra, Prabhat
- Subjects
REAL-time computing, COMPUTER multitasking, TEMPERATURE, CONSTRAINT satisfaction, SEMICONDUCTORS, MICROPROCESSORS, SYSTEMS design, ALGORITHMS
- Abstract
The ongoing scaling of semiconductor technology is causing a severe increase in on-chip power density and temperature in microprocessors. This urgently requires both power and thermal management during system design. In this paper, we propose a model checking-based technique using extended timed automata to solve the processor frequency assignment problem in a temperature- and energy-constrained multitasking system. We also develop a polynomial-time approximation algorithm to address the state-space explosion problem caused by the symbolic model checker. Our approximation scheme is guaranteed not to generate any false-positive answer, while it may return a false-negative answer in rare cases. Our method is universally applicable since it is independent of any system and task characteristics. Experimental results demonstrate the usefulness of our approach.
- Published
- 2012
17. De novo amyloid proteins from designed combinatorial libraries.
- Author
-
West, Michael W. and Wang, Weixun
- Subjects
AMINO acid sequence, AMYLOID
- Abstract
Focuses on a study which created a combinatorial library of protein sequences de novo to probe the relationship between amino acid sequence and the propensity to form amyloid. Methodology; Design of sequences; Biochemical and structural characterization of de novo amyloid proteins; Amyloidogenic propensity of alternating polar/nonpolar patterns.
- Published
- 1999
18. Erratum. Dipeptidyl Peptidase 4 Inhibition Stimulates Distal Tubular Natriuresis and Increases in Circulating SDF-1α1-67 in Patients With Type 2 Diabetes. Diabetes Care 2017;40:1073-1081.
- Author
-
Lovshin JA, Rajasekeran H, Lytvyn Y, Lovblom LE, Khan S, Alemu R, Locke A, Lai V, He H, Hittle L, Wang W, Drucker DJ, and Cherney DZI
- Published
- 2017
19. Dipeptidyl Peptidase 4 Inhibition Stimulates Distal Tubular Natriuresis and Increases in Circulating SDF-1α1-67 in Patients With Type 2 Diabetes.
- Author
-
Lovshin JA, Rajasekeran H, Lytvyn Y, Lovblom LE, Khan S, Alemu R, Locke A, Lai V, He H, Hittle L, Wang W, Drucker DJ, and Cherney DZI
- Subjects
- Aged, Blood Pressure drug effects, Diabetes Mellitus, Type 2 blood, Dipeptidyl-Peptidase IV Inhibitors administration & dosage, Double-Blind Method, Female, Hemodynamics, Humans, Hypoglycemic Agents administration & dosage, Kidney drug effects, Kidney metabolism, Male, Middle Aged, Prospective Studies, Sitagliptin Phosphate administration & dosage, Sitagliptin Phosphate adverse effects, Sodium-Glucose Transporter 2 blood, Sodium-Glucose Transporter 2 Inhibitors, Chemokine CXCL12 blood, Diabetes Mellitus, Type 2 drug therapy, Dipeptidyl-Peptidase IV Inhibitors adverse effects, Hypoglycemic Agents adverse effects, Natriuresis drug effects
- Abstract
Objective: Antihyperglycemic agents, such as empagliflozin, stimulate proximal tubular natriuresis and improve cardiovascular and renal outcomes in patients with type 2 diabetes. Because dipeptidyl peptidase 4 (DPP-4) inhibitors are used in combination with sodium-glucose cotransporter 2 (SGLT2) inhibitors, we examined whether and how sitagliptin modulates fractional sodium excretion and renal and systemic hemodynamic function. Research Design and Methods: We studied 32 patients with type 2 diabetes in a prospective, double-blind, randomized, placebo-controlled trial. Measurements of renal tubular function and renal and systemic hemodynamics were obtained at baseline, then hourly after one dose of sitagliptin or placebo, and repeated at 1 month. Fractional excretion of sodium and lithium and renal hemodynamic function were measured during clamped euglycemia. Systemic hemodynamics were measured using noninvasive cardiac output monitoring, and plasma levels of intact versus cleaved stromal cell-derived factor (SDF)-1α were quantified using immunoaffinity and tandem mass spectrometry. Results: Sitagliptin did not change fractional lithium excretion but significantly increased total fractional sodium excretion (1.32 ± 0.5 to 1.80 ± 0.01% vs. 2.15 ± 0.6 vs. 2.02 ± 1.0%, P = 0.012) compared with placebo after 1 month of treatment. Moreover, sitagliptin robustly increased intact plasma SDF-1α1-67 and decreased truncated plasma SDF-1α3-67. Renal hemodynamic function, systemic blood pressure, cardiac output, stroke volume, and total peripheral resistance were not adversely affected by sitagliptin. Conclusions: DPP-4 inhibition promotes a distal tubular natriuresis in conjunction with increased levels of intact SDF-1α1-67. Because of the distal location of the natriuretic effect, DPP-4 inhibition does not affect tubuloglomerular feedback or impair renal hemodynamic function, findings relevant to using DPP-4 inhibitors for treating type 2 diabetes. (© 2017 by the American Diabetes Association.)
- Published
- 2017
20. Measurement of fractional synthetic rates of multiple protein analytes by triple quadrupole mass spectrometry.
- Author
-
Lee AY, Yates NA, Ichetovkin M, Deyanova E, Southwick K, Fisher TS, Wang W, Loderstedt J, Walker N, Zhou H, Zhao X, Sparrow CP, Hubbard BK, Rader DJ, Sitlani A, Millar JS, and Hendrickson RC
- Subjects
- Apolipoprotein A-I biosynthesis, Apolipoprotein B-100 biosynthesis, Chromatography, Liquid, Gas Chromatography-Mass Spectrometry, Humans, Protein Stability, Sensitivity and Specificity, Apolipoprotein A-I analysis, Apolipoprotein B-100 analysis, Protein Biosynthesis
- Abstract
Background: Current approaches to measure protein turnover that use stable isotope-labeled tracers via GC-MS are limited to a small number of relatively abundant proteins. We developed a multiplexed liquid chromatography-selected reaction monitoring mass spectrometry (LC-SRM) assay to measure protein turnover and compared the fractional synthetic rates (FSRs) for 2 proteins, VLDL apolipoprotein B100 (VLDL apoB100) and HDL apoA-I, measured by both methods. We applied this technique to other proteins for which kinetics are not readily measured with GC-MS. Methods: Subjects were given a primed-constant infusion of [5,5,5-D3]-leucine (D3-leucine) for 15 h with blood samples collected at selected time points. Apolipoproteins isolated by SDS-PAGE from lipoprotein fractions were analyzed by GC-MS or an LC-SRM assay designed to measure the M+3/M+0 ratio at >1% D3-leucine incorporation. We calculated the FSR for each apolipoprotein by curve fitting the tracer incorporation data from each subject. Results: The LC-SRM method was linear over the range of tracer enrichment values tested and highly correlated with GC-MS (R² > 0.9). The FSRs determined from both methods were similar for HDL apoA-I and VLDL apoB100. We were able to apply the LC-SRM approach to determine the tracer enrichment of multiple proteins from a single sample as well as proteins isolated from plasma after immunoprecipitation. Conclusions: The LC-SRM method provides a new technique for measuring the enrichment of proteins labeled with stable isotopes. LC-SRM is amenable to a multiplexed format to provide a relatively rapid and inexpensive means to measure turnover of multiple proteins simultaneously.
- Published
- 2012
21. Identification of respective lysine donor and glutamine acceptor sites involved in factor XIIIa-catalyzed fibrin α chain cross-linking.
- Author
-
Wang W
- Subjects
- Binding Sites physiology, Factor XIIIa metabolism, Fibrin metabolism, Glutamine metabolism, Humans, Lysine metabolism, Peptides metabolism, Structure-Activity Relationship, Factor XIIIa chemistry, Fibrin chemistry, Glutamine chemistry, Lysine chemistry, Peptides chemistry
- Abstract
Factor XIIIa-catalyzed ε-(γ-glutamyl)-lysyl bonds between glutamine and lysine residues on fibrin α and γ chains stabilize the fibrin clot and protect it from mechanical and proteolytic damage. The cross-linking of γ chains is known to involve the reciprocal linkages between Gln398 and Lys406. In α chains, however, the respective lysine and glutamine partners remain largely unknown. Traditional biochemical approaches have only identified the possible lysine donor and glutamine acceptor sites but have failed to define the respective relationships between them. Here, a differential mass spectrometry method was implemented to characterize cross-linked α chain peptides originating from native fibrin. Tryptic digests of fibrin that underwent differential cross-linking conditions were analyzed by high resolution Fourier transform mass spectrometry. Differential intensities associated with monoisotopic masses of cross-linked peptides were selected for further characterization. A fit-for-purpose algorithm was developed to assign cross-linked peptide pairs of fibrin α chains to the monoisotopic masses, relying on accurate mass measurement as the primary criterion for identification. Equipped with hypothesized sequences, tandem mass spectrometry was then used to confirm the identities of the cross-linked peptides. In addition to the reciprocal cross-links between Gln398 and Lys406 on the γ chains of fibrin (the positive control of the study), nine specific cross-links (Gln223-Lys508, Gln223-Lys539, Gln237-Lys418, Gln237-Lys508, Gln237-Lys539, Gln237-Lys556, Gln366-Lys539, Gln563-Lys539, and Gln563-Lys601) on the α chains of fibrin were newly identified. These findings provide novel structural details with respect to α chain cross-linking compared with earlier efforts.
- Published
- 2011
22. Rationally designed mutations convert de novo amyloid-like fibrils into monomeric beta-sheet proteins.
- Author
-
Wang W and Hecht MH
- Subjects
- Amino Acid Sequence, Amyloid genetics, Models, Molecular, Molecular Sequence Data, Mutagenesis, Oligopeptides chemistry, Protein Structure, Secondary, Solubility, Amyloid chemistry
- Abstract
Amyloid fibrils are associated with a variety of neurodegenerative maladies including Alzheimer's disease and the prion diseases. The structures of amyloid fibrils are composed of beta-strands oriented orthogonal to the fibril axis ("cross beta" structure). We previously reported the design and characterization of a combinatorial library of de novo beta-sheet proteins that self-assemble into fibrillar structures resembling amyloid. The libraries were designed by using a "binary code" strategy, in which the locations of polar and nonpolar residues are specified explicitly, but the identities of these residues are not specified and are varied combinatorially. The initial libraries were designed to encode proteins containing amphiphilic beta-strands separated by reverse turns. Each beta-strand was designed to be seven residues long, with polar (open circle) and nonpolar (shaded circle) amino acids arranged with an alternating periodicity ([see text]). The initial design specified the identical polar/nonpolar pattern for all of the beta-strands; no strand was explicitly designated to form the edges of the resulting beta-sheets. With all beta-strands preferring to occupy interior (as opposed to edge) locations, intermolecular oligomerization was favored, and the proteins assembled into amyloid-like fibrils. To assess whether explicit design of edge-favoring strands might tip the balance in favor of monomeric beta-sheet proteins, we have now redesigned the first and/or last beta-strands of several sequences from the original library. In the redesigned beta-strands, the binary pattern is changed from [see text] (K denotes lysine). The presence of a lysine on the nonpolar face of a beta-strand should disfavor fibrillar structures because such structures would bury an uncompensated charge. 
The nonpolar → lysine mutations, therefore, would be expected to favor monomeric structures in which the [see text] sequences form edge strands with the charged lysine side chain accessible to solvent. To test this hypothesis, we constructed several second-generation sequences in which the central nonpolar residue of either the N-terminal beta-strand or the C-terminal beta-strand (or both) is changed to lysine. Characterization of the redesigned proteins shows that they form monomeric beta-sheet proteins.
- Published
- 2002