13 results for "VanHouten JP"
Search Results
2. Drug Overdose Deaths Among Women Aged 30-64 Years - United States, 1999-2017.
- Author
-
VanHouten JP, Rudd RA, Ballesteros MF, and Mack KA
- Subjects
- Adult, Female, Humans, Middle Aged, United States epidemiology, Drug Overdose mortality
- Abstract
The drug epidemic in the United States continues to evolve. The drug overdose death rate has rapidly increased among women (1,2), although within this demographic group, the increase in overdose death risk is not uniform. From 1999 to 2010, the largest percentage changes in the rates of overall drug overdose deaths were among women in the age groups 45-54 years and 55-64 years (1); however, this finding does not take into account trends in specific drugs or consider changes in age group distributions in drug-specific overdose death rates. To target prevention strategies to address the epidemic among women in these age groups, CDC examined overdose death rates among women aged 30-64 years during 1999-2017, overall and by drug subcategories (antidepressants, benzodiazepines, cocaine, heroin, prescription opioids, and synthetic opioids, excluding methadone). Age distribution changes in drug-specific overdose death rates were calculated. Among women aged 30-64 years, the unadjusted drug overdose death rate increased 260%, from 6.7 deaths per 100,000 population (4,314 total drug overdose deaths) in 1999 to 24.3 (18,110) in 2017. The number and rate of deaths involving antidepressants, benzodiazepines, cocaine, heroin, and synthetic opioids each increased during this period. Prescription opioid-related deaths increased between 1999 and 2017 among women aged 30-64 years, with the largest increases among those aged 55-64 years. Interventions to address the rise in drug overdose deaths include implementing the CDC Guideline for Prescribing Opioids for Chronic Pain (3), reviewing records of controlled substance prescribing (e.g., prescription drug monitoring programs, health insurance programs), and expanding capacity for drug use disorder treatment and linkage to care, especially for middle-aged women with drug use disorders., Competing Interests: All authors have completed and submitted the ICMJE form for disclosure of potential conflicts of interest. No potential conflicts of interest were disclosed.
- Published
- 2019
- Full Text
- View/download PDF
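As a worked check of the rate arithmetic in the abstract above, the Python sketch below recomputes the percentage change from the reported 1999 and 2017 figures; the population denominators are back-calculated from the reported deaths and rates, and rounding explains the reported 260% versus the exact ~263%.

```python
# Worked check of the unadjusted overdose death rate change reported above.
deaths_1999, rate_1999 = 4_314, 6.7     # deaths; rate per 100,000 (1999)
deaths_2017, rate_2017 = 18_110, 24.3   # deaths; rate per 100,000 (2017)

# Back-calculate the implied population denominators.
pop_1999 = deaths_1999 / rate_1999 * 100_000   # ~64.4 million women aged 30-64
pop_2017 = deaths_2017 / rate_2017 * 100_000   # ~74.5 million

pct_change = (rate_2017 - rate_1999) / rate_1999 * 100
print(f"Rate change: {pct_change:.0f}%")  # ~263%, reported (rounded) as 260%
```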
3. Cost-utility of osteoarticular allograft versus endoprosthetic reconstruction for primary bone sarcoma of the knee: A Markov analysis.
- Author
-
Wilson RJ, Sulieman LM, VanHouten JP, Halpern JL, Schwartz HS, Devin CJ, and Holt GE
- Subjects
- Arthroplasty, Replacement, Knee methods, Bone Neoplasms economics, Bone Transplantation methods, Cost-Benefit Analysis, Femur surgery, Humans, Knee Joint surgery, Markov Chains, Osteosarcoma economics, Plastic Surgery Procedures methods, Tibia surgery, Transplantation, Homologous, Arthroplasty, Replacement, Knee economics, Bone Neoplasms surgery, Bone Transplantation economics, Osteosarcoma surgery, Plastic Surgery Procedures economics
- Abstract
Background: The most cost-effective reconstruction after resection of bone sarcoma is unknown. The goal of this study was to compare the cost-effectiveness of osteoarticular allograft to endoprosthetic reconstruction of the proximal tibia or distal femur., Methods: A Markov model was used. Revision and complication rates were taken from existing studies. Costs were based on Medicare reimbursement rates and implant prices. Health-state utilities were derived from the Health Utilities Index 3 survey with additional assumptions. Incremental cost-effectiveness ratios (ICERs) were used, with less than $100,000 per quality-adjusted life year (QALY) considered cost-effective. Sensitivity analyses were performed for comparison over a range of costs, utilities, complication rates, and revision rates., Results: Osteoarticular allografts and a 30% price-discounted endoprosthesis were cost-effective, with ICERs of $92.59 and $6,114.77 per QALY, respectively. One-way sensitivity analysis revealed that discounted endoprostheses were favored if allografts cost over $21,900 or endoprostheses cost less than $51,900. Allograft reconstruction was favored over discounted endoprosthetic reconstruction if the allograft complication rate was less than 1.3%. Allografts were more cost-effective than full-price endoprostheses., Conclusions: Osteoarticular allografts and price-discounted endoprosthetic reconstructions are cost-effective. Sensitivity analysis, using plausible complication and revision rates, favored the use of discounted endoprostheses over allografts. Allografts are more cost-effective than full-price endoprostheses., (© 2017 Wiley Periodicals, Inc.)
- Published
- 2017
- Full Text
- View/download PDF
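The cost-effectiveness logic in the abstract above reduces to comparing an incremental cost-effectiveness ratio against a willingness-to-pay threshold. Below is a minimal Python sketch of that decision rule; the cost and QALY inputs are illustrative placeholders, not the paper's Markov model parameters.

```python
# ICER decision rule; inputs are illustrative, not the paper's Markov inputs.
WTP_THRESHOLD = 100_000  # $ per QALY, the threshold used in the abstract

def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

ratio = icer(cost_new=68_000, qaly_new=6.2, cost_ref=62_000, qaly_ref=6.0)
print(f"ICER = ${ratio:,.0f}/QALY; cost-effective: {ratio < WTP_THRESHOLD}")
```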
4. Higher Charlson Comorbidity Index Scores Are Associated With Increased Hospital Length of Stay After Lower Extremity Orthopaedic Trauma.
- Author
-
Lakomkin N, Kothari P, Dodd AC, VanHouten JP, Yarlagadda M, Collinge CA, Obremskey WT, and Sethi MK
- Subjects
- Age Distribution, Comorbidity, Female, Humans, Incidence, Leg Injuries, Male, Middle Aged, New York epidemiology, Prevalence, Retrospective Studies, Risk Factors, Sex Distribution, Survival Rate, Trauma Severity Indices, Utilization Review, Fractures, Bone mortality, Fractures, Bone surgery, Length of Stay statistics & numerical data
- Abstract
Objectives: The purpose of this study was to explore the relationship between preoperative Charlson Comorbidity Index (CCI) and postoperative length of stay (LOS) for lower extremity and hip/pelvis orthopaedic trauma patients., Design: Retrospective., Setting: Urban level 1 trauma center., Patients/participants: A total of 1561 patients treated for isolated lower extremity and pelvis fractures between 2000 and 2012., Interventions: Surgical intervention for fractures., Main Outcome Measurements: The main outcome metric was LOS. Negative binomial regression analysis was used to examine the association between CCI and LOS while controlling for significant confounders., Results: One thousand five hundred sixty-one patients met the inclusion criteria, of whom 1302 (83.4%) had lower extremity injuries and 259 (16.6%) experienced hip/pelvis trauma. A total of 1001 (64.1%) patients presented with a CCI score of 1 and stayed an average of 7.9 days. Patients with a CCI of 3 experienced a mean LOS of 1.2 days longer than patients presenting with a CCI of 1, whereas patients presenting with a CCI score of 5 stayed an average of 4.6 days longer. After controlling for age, race, American Society of Anesthesiologists score, sex, anesthesia type, and anesthesia time, a higher preoperative CCI was found to be associated with longer LOS for patients with lower extremity fractures (incidence rate ratio: 1.04, P = 0.01). No significant association was found between CCI and LOS for patients with hip/pelvic fractures., Conclusions: This study demonstrated the potential utility of the CCI as a predictor of hospital LOS for lower extremity patients; however, the association may be modest given the small incidence rate ratio. Further studies are needed to clarify the predictive value of the CCI for different types of orthopaedic injuries., Level of Evidence: Prognostic Level III. See Instructions for Authors for a complete description of levels of evidence.
- Published
- 2017
- Full Text
- View/download PDF
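The LOS model in the abstract above is a negative binomial regression whose exponentiated coefficients are incidence rate ratios. A minimal Python sketch on synthetic data follows; the simulated cohort, covariate set, and effect sizes are stand-ins (the real model also adjusted for race, ASA score, sex, anesthesia type, and anesthesia time).

```python
# Sketch of a negative binomial LOS-on-CCI model with synthetic data; the
# simulated effect sizes are arbitrary, not the study's estimates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1561  # cohort size from the abstract
df = pd.DataFrame({"cci": rng.integers(1, 9, n), "age": rng.normal(55, 15, n)})
df["los"] = rng.poisson(np.exp(1.5 + 0.05 * df["cci"]))  # synthetic LOS, days

fit = smf.negativebinomial("los ~ cci + age", data=df).fit(disp=0)
print(np.exp(fit.params["cci"]))  # IRR per CCI point; the paper reports 1.04
```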
5. Strategies for Equitable Pharmacogenomic-Guided Warfarin Dosing Among European and African American Individuals in a Clinical Population.
- Author
-
Wiley LK, Vanhouten JP, Samuels DC, Aldrich MC, Roden DM, Peterson JF, and Denny JC
- Subjects
- Adult, Aged, Female, Humans, Male, Middle Aged, Algorithms, Anticoagulants administration & dosage, Anticoagulants pharmacokinetics, Black or African American, Cohort Studies, Computational Biology, Cytochrome P-450 CYP2C9 genetics, Gene Frequency, Models, Genetic, Polymorphism, Single Nucleotide, Vitamin K Epoxide Reductases genetics, White, Pharmacogenomic Variants, Warfarin administration & dosage, Warfarin pharmacokinetics
- Abstract
The blood thinner warfarin has a narrow therapeutic range and high inter- and intra-patient variability in therapeutic doses. Several studies have shown that pharmacogenomic variants help predict stable warfarin dosing. However, retrospective and randomized controlled trials that employ dosing algorithms incorporating pharmacogenomic variants underperform in African Americans. This study sought to determine whether: 1) including additional variants associated with warfarin dose in African Americans, 2) predicting within single ancestry groups rather than a combined population, or 3) using percentage African ancestry rather than observed race, would improve warfarin dosing algorithms in African Americans. Using BioVU, the Vanderbilt University Medical Center biobank linked to electronic medical records, we compared 25 modeling strategies to existing algorithms using a cohort of 2,181 warfarin users (1,928 whites, 253 blacks). We found that approaches incorporating additional variants increased model accuracy, but not in clinically significant ways. Race stratification increased model fidelity for African Americans, but the improvement was small and not likely to be clinically significant. Use of percent African ancestry improved model fit in the context of race misclassification.
- Published
- 2017
- Full Text
- View/download PDF
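A minimal Python sketch of the combined-versus-stratified comparison described in the abstract above, using synthetic data; all column names, allele encodings, and effect sizes are placeholders rather than the study's BioVU variables.

```python
# Combined vs. race-stratified dosing models on a synthetic stand-in cohort.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2181  # cohort size from the abstract (1,928 white, 253 black patients)
df = pd.DataFrame({
    "race": rng.choice(["white", "black"], n, p=[0.88, 0.12]),
    "age": rng.normal(60, 12, n),
    "vkorc1": rng.integers(0, 3, n),  # variant allele counts (placeholders)
    "cyp2c9": rng.integers(0, 3, n),
})
df["dose"] = 5 - 0.8 * df["vkorc1"] - 0.5 * df["cyp2c9"] + rng.normal(0, 1, n)

feats = ["age", "vkorc1", "cyp2c9"]
combined = cross_val_score(LinearRegression(), df[feats], df["dose"], cv=5).mean()
for race, grp in df.groupby("race"):
    strat = cross_val_score(LinearRegression(), grp[feats], grp["dose"], cv=5).mean()
    print(f"{race}: stratified R^2 {strat:.3f} vs combined {combined:.3f}")
```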
6. Evaluating electronic health record data sources and algorithmic approaches to identify hypertensive individuals.
- Author
-
Teixeira PL, Wei WQ, Cronin RM, Mo H, VanHouten JP, Carroll RJ, LaRose E, Bastarache LA, Rosenbloom ST, Edwards TL, Roden DM, Lasko TA, Dart RA, Nikolai AM, Peissig PL, and Denny JC
- Subjects
- Aged, Blood Pressure Determination, Clinical Coding, Female, Humans, Information Storage and Retrieval methods, Male, Middle Aged, Natural Language Processing, Phenotype, ROC Curve, Algorithms, Electronic Health Records, Hypertension diagnosis, Machine Learning
- Abstract
Objective: Phenotyping algorithms applied to electronic health record (EHR) data enable investigators to identify large cohorts for clinical and genomic research. Algorithm development is often iterative, depends on fallible investigator intuition, and is time- and labor-intensive. We developed and evaluated 4 types of phenotyping algorithms and categories of EHR information to identify hypertensive individuals and controls and provide a portable module for implementation at other sites., Materials and Methods: We reviewed the EHRs of 631 individuals followed at Vanderbilt for hypertension status. We developed features and phenotyping algorithms of increasing complexity. Input categories included International Classification of Diseases, Ninth Revision (ICD9) codes, medications, vital signs, narrative-text search results, and Unified Medical Language System (UMLS) concepts extracted using natural language processing (NLP). We developed a module and tested portability by replicating 10 of the best-performing algorithms at the Marshfield Clinic., Results: Random forests using billing codes, medications, vitals, and concepts had the best performance with a median area under the receiver operating characteristic curve (AUC) of 0.976. Normalized sums of all 4 categories also performed well (0.959 AUC). The best non-NLP algorithm combined normalized ICD9 codes, medications, and blood pressure readings with a median AUC of 0.948. Blood pressure cutoffs or ICD9 code counts alone had AUCs of 0.854 and 0.908, respectively. Marshfield Clinic results were similar., Conclusion: This work shows that billing codes or blood pressure readings alone yield good hypertension classification performance. However, even simple combinations of input categories improve performance. The most complex algorithms classified hypertension with excellent recall and precision., (© The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.)
- Published
- 2017
- Full Text
- View/download PDF
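The top performer in the abstract above is a random forest over billing-code, medication, vitals, and NLP-concept features. A minimal Python sketch with a synthetic feature matrix follows; the data and hyperparameters are assumptions, not the study's.

```python
# Random-forest phenotyping sketch; X and y are synthetic stand-ins for the
# EHR feature matrix and chart-reviewed hypertension labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=631, n_features=50, random_state=0)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
probs = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
print("AUC:", round(roc_auc_score(y, probs), 3))  # paper: median AUC 0.976
```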
7. In response.
- Author
-
Greenberg SE, Lakomkin N, VanHouten JP, and Sethi MK
- Published
- 2016
- Full Text
- View/download PDF
8. Does Admission to Medicine or Orthopaedics Impact a Geriatric Hip Patient's Hospital Length of Stay?
- Author
-
Greenberg SE, VanHouten JP, Lakomkin N, Ehrenfeld J, Jahangir AA, Boyce RH, Obremksey WT, and Sethi MK
- Subjects
- Age Distribution, Aged, Aged, 80 and over, Female, Health Services for the Aged, Humans, Male, Middle Aged, Prevalence, Sex Distribution, Tennessee epidemiology, Admitting Department, Hospital statistics & numerical data, Hip Fractures epidemiology, Hip Fractures surgery, Length of Stay statistics & numerical data, Orthopedics statistics & numerical data, Patient Admission statistics & numerical data
- Abstract
Objectives: The aim of our study was to determine the association between admitting service, medicine or orthopaedics, and length of stay (LOS) for a geriatric hip fracture patient., Design: Retrospective., Setting: Urban level 1 trauma center., Patients/participants: Six hundred fourteen geriatric hip fracture patients from 2000 to 2009., Interventions: Orthopaedic surgery for geriatric hip fracture., Main Outcome Measurements: Patient demographics, medical comorbidities, hospitalization length, and admitting service. Negative binomial regression was used to determine the association between LOS and admitting service., Results: Six hundred fourteen geriatric hip fracture patients were included in the analysis, of whom 49.2% (n = 302) were admitted to the orthopaedic service and 50.8% (n = 312) to the medicine service. The median LOS for patients admitted to orthopaedics was 4.5 days compared with 7 days for patients admitted to medicine (P < 0.0001). Readmission was also significantly higher for patients admitted to medicine (n = 92, 29.8%) than for those admitted to orthopaedics (n = 70, 23.1%). After controlling for important patient factors, it was determined that medicine patients are expected to stay about 1.5 times (incidence rate ratio: 1.48, P < 0.0001) longer in the hospital than orthopaedic patients., Conclusions: This is the largest study to demonstrate that admission to the medicine service compared with the orthopaedic service increases a geriatric hip fracture patient's expected LOS. Since LOS is a major driver of cost as well as a measure of quality care, it is important to understand the factors that lead to a longer hospital stay to better allocate hospital resources. Based on the results from our institution, orthopaedic surgeons should be aware that admission to medicine might increase a patient's expected LOS., Level of Evidence: Therapeutic Level III. See Instructions for Authors for a complete description of levels of evidence.
- Published
- 2016
- Full Text
- View/download PDF
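As a worked check of the figures in the abstract above: the crude median LOS ratio is 7/4.5 ≈ 1.56, while the adjusted incidence rate ratio of 1.48 is what supports the "about 1.5 times longer" statement.

```python
# Worked arithmetic for the LOS comparison reported above.
median_ortho, median_medicine = 4.5, 7.0  # days, from the abstract
print(f"Crude median LOS ratio: {median_medicine / median_ortho:.2f}")  # 1.56

irr = 1.48  # adjusted incidence rate ratio from the abstract
print(f"Expected medicine-service stay for a 4.5-day case: {4.5 * irr:.1f} days")
```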
9. Risk Factors for Deep Venous Thrombosis Following Orthopaedic Trauma Surgery: An Analysis of 56,000 Patients.
- Author
-
Whiting PS, White-Dzuro GA, Greenberg SE, VanHouten JP, Avilucea FR, Obremskey WT, and Sethi MK
- Abstract
Background: Deep venous thrombosis (DVT) and pulmonary embolism (PE) are recognized as major causes of morbidity and mortality in orthopaedic trauma patients. Despite the high incidence of these complications following orthopaedic trauma, there is a paucity of literature investigating the clinical risk factors for DVT in this specific population. As our healthcare system increasingly emphasizes quality measures, it is critical for orthopaedic surgeons to understand the clinical factors that increase the risk of DVT following orthopaedic trauma., Objectives: Utilizing the ACS-NSQIP database, we sought to determine the incidence and identify independent risk factors for DVT following orthopaedic trauma., Patients and Methods: Using Current Procedural Terminology (CPT) codes for orthopaedic trauma procedures, we identified a prospective cohort of patients from the 2006 to 2013 ACS-NSQIP database. Using Wilcoxon-Mann-Whitney and chi-square tests where appropriate, patient demographics, comorbidities, and operative factors were compared between patients who developed a DVT within 30 days of surgery and those who did not. A multivariate logistic regression analysis was conducted to calculate odds ratios (ORs) and identify independent risk factors for DVT. Significance was set at P < 0.05., Results: 56,299 orthopaedic trauma patients were included in the analysis, of whom 473 (0.84%) developed a DVT within 30 days. In univariate analysis, twenty-five variables were significantly associated with the development of a DVT, including age (P < 0.0001), BMI (P = 0.037), diabetes (P = 0.01), ASA score (P < 0.0001) and anatomic region injured (P < 0.0001). Multivariate analysis identified several independent risk factors for development of a DVT including use of a ventilator (OR = 43.67, P = 0.039), ascites (OR = 41.61, P = 0.0038), steroid use (OR = 4.00, P < 0.001), and alcohol use (OR = 2.98, P = 0.037). Compared to patients with upper extremity trauma, those with lower extremity injuries had significantly increased odds of developing a DVT (OR = 7.55, P = 0.006). The trend toward increased odds of DVT among patients with injuries to the hip/pelvis did not reach statistical significance (OR = 4.51, P = 0.22). Smoking was not found to be an independent risk factor for developing a DVT (P = 0.1217)., Conclusions: This is the largest study to date using the NSQIP database to identify risk factors for DVT in orthopaedic trauma patients. Although the incidence of DVT was low in our cohort, the presence of certain risk factors significantly increased the odds of developing a DVT following orthopaedic trauma. These findings will enable orthopaedic surgeons to target at-risk patients and implement post-operative care protocols aimed at reducing the morbidity and mortality associated with DVT in orthopaedic trauma patients.
- Published
- 2016
- Full Text
- View/download PDF
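The odds ratios in the abstract above come from a multivariate logistic regression. A minimal Python sketch on synthetic data follows; the covariates and effect sizes are placeholders, not the NSQIP variables.

```python
# Logistic regression ORs via exponentiated coefficients; synthetic data.
import numpy as np
import statsmodels.api as sm
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=5_000, n_features=6, random_state=0)
fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)

# exp(coefficient) is the odds ratio for each covariate, analogous to the
# reported ORs for ventilator use, ascites, steroid use, and alcohol use.
print(np.exp(fit.params).round(2))
```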
10. National Veterans Health Administration inpatient risk stratification models for hospital-acquired acute kidney injury.
- Author
-
Cronin RM, VanHouten JP, Siew ED, Eden SK, Fihn SD, Nielson CD, Peterson JF, Baker CR, Ikizler TA, Speroff T, and Matheny ME
- Subjects
- Aged, Female, Hospitalization, Hospitals, Veterans, Humans, Iatrogenic Disease, Logistic Models, Male, Middle Aged, Prognosis, ROC Curve, Retrospective Studies, Risk, United States, United States Department of Veterans Affairs, Acute Kidney Injury, Models, Statistical
- Abstract
Objective: Hospital-acquired acute kidney injury (HA-AKI) is a potentially preventable cause of morbidity and mortality. Identifying high-risk patients prior to the onset of kidney injury is a key step towards AKI prevention., Materials and Methods: A national retrospective cohort of 1,620,898 patient hospitalizations from 116 Veterans Affairs hospitals was assembled from electronic health record (EHR) data collected from 2003 to 2012. HA-AKI was defined at stage 1+, stage 2+, and dialysis. EHR-based predictors were identified through logistic regression, least absolute shrinkage and selection operator (lasso) regression, and random forests, and pairwise comparisons were made between the models. Calibration and discrimination metrics were calculated using 50 bootstrap iterations. In the final models, we report odds ratios, 95% confidence intervals, and importance rankings for predictor variables to evaluate their significance., Results: The area under the receiver operating characteristic curve (AUC) for the different model outcomes ranged from 0.746 to 0.758 in stage 1+, 0.714 to 0.720 in stage 2+, and 0.823 to 0.825 in dialysis. Logistic regression had the best AUC in stage 1+ and dialysis. Random forests had the best AUC in stage 2+ but the least favorable calibration plots. Multiple risk factors were significant in our models, including some nonsteroidal anti-inflammatory drugs, blood pressure medications, antibiotics, and intravenous fluids given during the first 48 h of admission., Conclusions: This study demonstrated that, although all the models tested had good discrimination, performance characteristics varied between methods, and the random forests models did not calibrate as well as the lasso or logistic regression models. In addition, novel modifiable risk factors were explored and found to be significant., (Published by Oxford University Press on behalf of the American Medical Informatics Association 2015. This work is written by US Government employees and is in the public domain in the US.)
- Published
- 2015
- Full Text
- View/download PDF
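The evaluation in the abstract above compares logistic regression, lasso, and random forests using 50 bootstrap iterations. Below is a minimal Python sketch of that pipeline on synthetic data; sample sizes, hyperparameters, and features are assumptions, not the study's.

```python
# Bootstrap AUC comparison of logistic, lasso, and random forest models.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20_000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "lasso": LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
rng = np.random.default_rng(0)
for name, model in models.items():
    probs = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    aucs = []
    for _ in range(50):  # 50 bootstrap iterations, as in the paper
        idx = rng.integers(0, len(y_te), len(y_te))
        aucs.append(roc_auc_score(y_te[idx], probs[idx]))
    print(name, np.round(np.percentile(aucs, [2.5, 97.5]), 3))
```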
11. Machine learning for risk prediction of acute coronary syndrome.
- Author
-
VanHouten JP, Starmer JM, Lorenzi NM, Maron DJ, and Lasko TA
- Subjects
- Area Under Curve, Diagnostic Errors prevention & control, Humans, Logistic Models, Prognosis, ROC Curve, Acute Coronary Syndrome diagnosis, Algorithms, Artificial Intelligence, Risk Assessment methods
- Abstract
Acute coronary syndrome (ACS) accounts for 1.36 million hospitalizations and billions of dollars in costs in the United States alone. A major challenge to diagnosing and treating patients with suspected ACS is the significant symptom overlap between patients with and without ACS. There is a high cost to over- and under-treatment. Guidelines recommend early risk stratification of patients, but many tools lack sufficient accuracy for use in clinical practice. Prognostic indices often misrepresent clinical populations and rely on curated data. We used random forest and elastic net on 20,078 deidentified records with significant missingness and noise to develop models that outperform existing ACS risk prediction tools. We found that the random forest (AUC = 0.848) significantly outperformed elastic net (AUC = 0.818), ridge regression (AUC = 0.810), and the TIMI (AUC = 0.745) and GRACE (AUC = 0.623) scores. Our findings show that random forest applied to noisy and sparse data can perform on par with previously developed scoring metrics.
- Published
- 2014
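The elastic net comparator in the abstract above can be sketched in Python as a regularized logistic regression behind a simple imputation step for the missing values mentioned; the data, injected missingness rate, and hyperparameters below are assumptions, not the study's setup.

```python
# Elastic net on noisy data with missing values; synthetic stand-in records.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=20_078, n_features=40, flip_y=0.1,
                           random_state=0)  # label noise as a rough analogue
X[np.random.default_rng(0).random(X.shape) < 0.2] = np.nan  # inject missingness

model = make_pipeline(
    SimpleImputer(strategy="median"),
    LogisticRegression(penalty="elasticnet", solver="saga", l1_ratio=0.5,
                       max_iter=5_000),
)
print(cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```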
12. Ocular findings at initial pan retinal photocoagulation for proliferative diabetic retinopathy predict the need for future pars plana vitrectomy.
- Author
-
Parikh R, Shah RJ, VanHouten JP, and Cherney EF
- Subjects
- Adult, Aged, Aged, 80 and over, Diabetic Retinopathy diagnosis, Female, Humans, Male, Middle Aged, Neovascularization, Pathologic diagnosis, Reoperation, Retrospective Studies, Risk Factors, Vitreous Hemorrhage diagnosis, Young Adult, Diabetic Retinopathy surgery, Iris blood supply, Laser Coagulation, Neovascularization, Pathologic surgery, Vitrectomy statistics & numerical data, Vitreous Hemorrhage surgery
- Abstract
Purpose: To determine the 1-year and 2-year likelihood of vitrectomy in diabetic patients undergoing initial pan retinal photocoagulation (PRP)., Methods: Diabetic eyes receiving initial PRP for proliferative diabetic retinopathy (PDR) were analyzed to determine their risk for vitrectomy based on clinical findings., Results: In total, 374 eyes of 272 patients were analyzed. The percentage of eyes undergoing vitrectomy 1 year and 2 years following initial PRP was 19.1% and 26.2%, respectively. Of the eyes in Group 1 (PDR alone), Group 2 (PDR and vitreous hemorrhage), and Group 3 (PDR and iris neovascularization, vitreous hemorrhage with traction or fibrosis, or fibrosis alone), the percentage receiving pars plana vitrectomy at 1 year and 2 years was 9.73% (18/185) and 15.7% (29/185), 26.9% (43/160) and 34.4% (55/160), and 37.9% (11/29) and 48.3% (14/29), respectively. Eyes in Group 2 had 2.78 times greater likelihood (P < 0.0001) and eyes in Group 3 had 3.54 times higher likelihood (P < 0.0001) of requiring pars plana vitrectomy within 2 years than those with PDR alone., Conclusion: Eyes receiving PRP for PDR with associated hemorrhage or traction were more likely to undergo pars plana vitrectomy within 1 year and 2 years following initial PRP compared with eyes with only PDR, providing important prognostic information for PRP-naive patients.
- Published
- 2014
- Full Text
- View/download PDF
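As a worked check of the group proportions in the abstract above; note the 2.78x and 3.54x likelihoods are model-based estimates, not these crude ratios.

```python
# Recompute the 1- and 2-year vitrectomy proportions reported above.
groups = {
    "Group 1 (PDR alone)": (18, 29, 185),
    "Group 2 (PDR + vitreous hemorrhage)": (43, 55, 160),
    "Group 3 (PDR + NVI/traction/fibrosis)": (11, 14, 29),
}
for name, (ppv_1y, ppv_2y, n) in groups.items():
    print(f"{name}: {ppv_1y / n:.1%} at 1 year, {ppv_2y / n:.1%} at 2 years")
```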
13. A decision model of therapy for potentially resectable pancreatic cancer.
- Author
-
VanHouten JP, White RR, and Jackson GP
- Subjects
- Humans, Neoadjuvant Therapy, Decision Support Techniques, Pancreatic Neoplasms therapy
- Abstract
Background: Optimal treatment for potentially resectable pancreatic cancer is controversial. Resection is considered the only curative treatment, but neoadjuvant chemoradiotherapy may offer significant advantages., Materials and Methods: We developed a decision model for potentially resectable pancreatic cancer. Initial therapeutic choices were surgery, neoadjuvant chemoradiotherapy, or no treatment; subsequent decisions offered a second intervention if not prohibited by complications or death. Payoffs were calculated as the median expected survival. We gathered evidence for this model through a comprehensive MEDLINE search. One-way sensitivity analyses were performed., Results: Neoadjuvant chemoradiation is favored over initial surgery, with expected values of 18.6 and 17.7 mo, respectively. The decision is sensitive to the probabilities of treatment mortality and tumor resectability. Threshold probabilities are 7.0% mortality of neoadjuvant chemoradiotherapy, 69.2% resectability on imaging after neoadjuvant therapy, 73.7% resectability at exploration after neoadjuvant therapy, 92.2% resectability at initial resection, and 9.9% surgical mortality following chemoradiotherapy. The decision is sensitive to the utility of time spent in chemoradiotherapy, with surgery favored for utilities less than 0.3 and -0.8, for uncomplicated and complicated chemoradiotherapy, respectively., Conclusions: The ideal treatment for potentially resectable pancreatic cancer remains controversial, but recent evidence supports a slight benefit for neoadjuvant therapy. Our model shows that the decision is sensitive to the probability of tumor resectability and chemoradiation mortality, but not to rates of other treatment complications. With minimal benefit of one treatment over another based on survival alone, patient preferences will likely play an important role in determining best treatment., (Published by Elsevier Inc.)
- Published
- 2012
- Full Text
- View/download PDF
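The sensitivity-analysis logic in the abstract above hinges on expected-value comparisons that flip at threshold probabilities. A minimal Python sketch follows; the neoadjuvant payoff is an illustrative placeholder chosen so the flip lands near the reported 7.0% mortality threshold, and the full model's branches are not reproduced.

```python
# Expected-value threshold sketch; payoffs are illustrative placeholders.
SURGERY_EV = 17.7   # months, expected survival of initial surgery (abstract)
NEO_PAYOFF = 19.0   # months if neoadjuvant therapy is survived (placeholder)

def expected_value(p_mortality, payoff_if_survived):
    """Expected survival for an arm with an upfront treatment mortality risk."""
    return (1 - p_mortality) * payoff_if_survived

for p in (0.00, 0.05, 0.07, 0.10):
    neo = expected_value(p, NEO_PAYOFF)
    choice = "neoadjuvant" if neo > SURGERY_EV else "surgery"
    print(f"mortality {p:.0%}: neoadjuvant EV {neo:.2f} mo -> favor {choice}")
```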