11 results on '"Quinten S. Paterson"'
Search Results
2. Outcomes in the age of competency-based medical education: Recommendations for emergency medicine training in Canada from the 2019 symposium of academic emergency physicians
- Author
-
Daniel K. Ting, Alexandra Stefan, Andrew K Hall, Quinten S. Paterson, Teresa M. Chan, Stanley J. Hamstra, Fareen Zaver, Rob Woods, and Brent Thoma
- Subjects
Emergency medicine, Medical education, Consultation process, Survey results, Qualitative analysis, Tracking - Abstract
Objectives: The national implementation of competency-based medical education (CBME) has prompted an increased interest in identifying and tracking clinical and educational outcomes for emergency medicine training programs. For the 2019 Canadian Association of Emergency Physicians (CAEP) Academic Symposium, we developed recommendations for measuring outcomes in emergency medicine training in the context of CBME to assist educational leaders and systems designers in program evaluation. Methods: We conducted a three-phase study to generate educational and clinical outcomes for emergency medicine (EM) education in Canada. First, we elicited expert and community perspectives on the best educational and clinical outcomes through a structured consultation process using a targeted online survey. We then qualitatively analyzed these responses to generate a list of suggested outcomes. Last, we presented these outcomes to a diverse assembly of educators, trainees, and clinicians at the CAEP Academic Symposium for feedback and endorsement through a voting process. Conclusion: Academic Symposium attendees endorsed the measurement and linkage of CBME educational and clinical outcomes. Twenty-five outcomes (15 educational, 10 clinical) were derived from the qualitative analysis of the survey results, and the most important short- and long-term outcomes (both educational and clinical) were identified. These outcomes can be used to help measure the impact of CBME on the practice of emergency medicine in Canada to ensure that it meets both trainee and patient needs.
- Published
- 2020
3. A transparent and defensible process for applicant selection within a Canadian emergency medicine residency program
- Author
-
Quinten S. Paterson, Brent Thoma, Rob Woods, Riley J Hartmann, and Lynsey J Martin
- Subjects
Emergency medicine, Matching, Best practice, Standardized approach, Ranking, Attendance, Variance, Administrative assistant, Selection - Abstract
Objectives: The Canadian Resident Matching Service (CaRMS) selection process has come under scrutiny due to the increasing number of unmatched medical graduates. In response, we outline our residency program's selection process, including how we have incorporated best practices and novel techniques. Methods: We selected file reviewers and interviewers to mitigate gender bias and increase diversity. Four residents and two attending physicians rated each file using a standardized, cloud-based file review template to allow simultaneous rating. We interviewed applicants using four standardized stations with two or three interviewers per station. We used heat maps to review rating discrepancies and eliminated rating variance using Z-scores. The number of person-hours required to conduct our selection process was quantified and the process outcomes were described statistically and graphically. Results: We received between 75 and 90 CaRMS applications during each application cycle between 2017 and 2019. Our overall process required 320 person-hours annually, excluding attendance at the social events and administrative assistant duties. Our preliminary interview and rank lists were developed using weighted Z-scores and modified through an organized discussion informed by heat-mapped data. The difference between the Z-scores of applicants surrounding the interview invitation threshold was 0.18-0.3 standard deviations. Interview performance significantly impacted the final rank list. Conclusions: We describe a rigorous resident selection process for our emergency medicine training program which incorporated simultaneous cloud-based rating, Z-scores, and heat maps. This standardized approach could inform other programs looking to adopt a rigorous selection process while providing applicants guidance and reassurance of a fair assessment.
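The per-reviewer Z-score normalization and weighted averaging described in this abstract can be sketched as follows. This is a minimal illustration, not the program's actual spreadsheet formulas; applicant names, scores, and weights are hypothetical.

```python
import statistics

def reviewer_z_scores(ratings):
    """Convert one reviewer's raw file scores into z-scores against
    that reviewer's own score distribution, removing rater leniency
    or severity effects before scores are compared across reviewers."""
    scores = list(ratings.values())
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)
    return {applicant: (s - mean) / sd for applicant, s in ratings.items()}

def weighted_file_score(z_maps, applicant, weights):
    """Weighted average of one applicant's z-scores across reviewers.
    The weights here are hypothetical, not the program's actual values."""
    total = sum(w * z[applicant] for z, w in zip(z_maps, weights))
    return total / sum(weights)
```

Because each reviewer is normalized against their own distribution, a "hawkish" reviewer who scores everyone low and a lenient one who scores everyone high produce identical z-scores for the same relative ranking.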
- Published
- 2020
4. The revised Approved Instructional Resources score: An improved quality evaluation tool for online educational resources
- Author
-
Nadim Lalani, Shelaina Anderson, Jonas Wilmer, Marina Roure, Kristen Weersink, Katherine A. Stuart, Brent Benavides, Wisarut Bunchit, Colleen Sweeney, Robert R. Ehrman, Richard Tang, Kelvin Tran, Gregory Wanner, Drew Kalnow, Christine Roh, Kinjal Patel, Bill Fraser, Therese Mead, Stephen Carroll, Paul Schofield, Caley Flynn, Lindy Buzikievich, Lucy Chen, Tara Stratton, Jan Hansel, Hector C. Singson, Tanner Gronowski, Ali Jamal, Adeeb Saleh, Todd Taylor, Rayan Delbani, Phil Griffith, Michael J. Ward, Miranda Wan, Ashley Kilp, Anna Whalen-Browne, Logan Mills, Ali S. Raja, Perry Menzies, Christine Patterson, Sandra Viggers, Brendan Devine, Vanessa Rogers, Braeden Beaumont, Jennifer Baird, Paula Sneath, Natasha Chatham-Zvelebil, Brandon Herb, Harry Liu, Marie Decock, Sarah Mott, Elise Lovell, Mohammad Ali Jamil, Ken Edwards, Victor Jansen, Maia Dorsett, Jaasmit Khurana, Salim R. Rezaie, Alexander Hart, Fareen Zaver, Manpreet Singh, Ching-Hsing Lee, Suzanne Rannazzisi, Mike McDonnell, Loice Swisher, Rob Carey, Joe Walter, Andrew D. D’Alessandro, Bob Stuntz, James Stempien, Preston Fedor, Kelly Lien, Parisa Shahrabadi, Shauna Regan, Alan Taylor, Nilantha Lenora, Scott Anderson, Calvin H. Yeh, Jason Trickovic, David Calcara, Werner Oberholzer, Catherine Patocka, James Fukakusa, T. Oyedokun, Ivy Liu, Regina Hammock, Steve Liu, Kevin Cullison, Chris Belcher, Teresa Dunphy, Alexis Pelletier-Bui, Zander Laurie, Ashley Lubberdink, James Pearlman, James Huffman, Nikhil Tambe, Carolyn McQuarrie, Gerhard Tiwald, Gregor Prosen, David Lowe, Henry Swoboda, Jennifer Weekes, Kimberly Connors, Aaron Tyagi, Anali Maneshi, Patrick M. Lank, Emina Hajdinjak, Levi Johnston, Eric Chen, Abdulaziz S. Almehlisi, Zach Jarou, Noorin Walji, Alvin Chin, Tanis Quaife, Nikytha Antony, Lawrence Yau, Alexander Zozula, Gregory Costello, Louise Rang, John Mayo, Evan S. Schwarz, Victoria Brazil, P. 
Mukherj, Taylor Duda, Jaime Jordan, Susan McLellan, Alim Pardhan, Jared Baylis, Allan Mix, Cathy Grossman, Sean Dyer, Emily House, Eric Shappell, Colin Andrews, Mark Woodcroft, William D.T. Kent, Anthony Bryson, Nelson Wong, Pawan Gupta, Diptesh Aryal, Owen Scheirer, Morgan Oakland, Patrick Vallance, Brendan Moore, Mary R C Haas, Kenn Ghaffarian, Steve Montag, Elyse Berger Pelletier, Julianna Deutscher, Nina House, Keith Rosenberg, Sushant Chhabra, Viktor Gawlik, Michael Benham, Andrew Baker, Brent Thoma, Ernest Leber, Larissa Hattin, Casey Lyons, Timothy Chaplin, Kamini Premkumar, Shahbaz Syed, Ivanna Kruhlak, Stephanie Louka, Haakon Lenes, Rene Verbeek, SueLin Hilbert, Joshua Rudner, Julia Nood, Kelly van Diepen, Brian Whiteside, Karthryn T. Eastley, Julia Sheffield, Damjan Gaco, Sam Smith, Quinten S. Paterson, Teresa M. Chan, Jeremy Christensen, Jocelyn Andruko, Youness Elkhalidy, Cory Meeuwisse, Sheena Nandalal, Cara Weessies, Scott Knapp, Sheng Hsiang Ma, Meagan Fu, Veronica Coppersmith, William Denq, Vivian Jia, Kristina Lea, Hugh MacLeod, Simon Huang, Yingchun Lin, Wyatt Warawa, Will Sanderson, Brian Ficiur, Jessica G.Y. Luc, Taylor Nikel, Jessica Yee, Tina Choudhri, Patricia Van den Berg, Andrew Grock, Samantha Lam, Andrew Guy, Keeth Krishnan, Taku Taira, Eric Funk, Rachel Taylor, Ali Mulla, Sebastian Kohler, Kyle Kelson, Nicholas Bouchard, Stanislaw Haciski, Jesse Leontowicz, Paul Trinquero, Charlie Inboriboon, Justin Dueweke, Julian Botta, Emily Brumfield, Kat Butler, Patrick Meloy, Laleh Gharahbaghian, Andrew K. Hall, Maria Rosa Carrillo, Aubrey Powell, Louise Cassidy, Jesse May, Isabelle N. Colmers-Gray, Evelyn Tran, Sarah Batty, Vishal Puri, Randi Ramunno, Luis Vargas, Stephen Miazga, Justin Morgenstern, Michelle Lin, Andrew Griffith, Michael Susalla, Charlotte Alexander, Alex Ireland, Kerstin de Wit, Marcia L. Edmonds, Robert Sobehart, Rob Woods, Kirsty Challen, Dave Slessor, Abby Cosgrove, Eric Chochi, Onyeka Otugo, Amy F. 
Ho, Alexandra Gustafson, Zlata Vlodaver, Kerry Spearing, Ryan Raffel, Milan L Ridderikhof, Barbra Backus, Saeed Alqahtani, Paul Schunk, Anne Messman, Seth Kelly, Puneet Kapur, Andrew Little, Kathryn Chan, Sean Nugent, Rishi Khakhkhar, Mohammed Alkhalifah, Rachel Wang, Jesse Hill, Marc Phan, Jaroslaw Gucwa, Nick Mancuso, Paxton Ting, Matthew Wagner, Zafrina Poonja, Elisha Targonsky, Britni Sternard, Katherine Yurkiw, Manrique Umana, Jeff Hill, Matthew Willis, and Sherri L. Rudinsky
- Subjects
Medical education, Emergency medicine, Emergency nursing, Intraclass correlation, Best practice, Usability, Thematic analysis, Quality assurance, MEDLINE - Abstract
Background: Free Open-Access Medical education (FOAM) use among residents continues to rise. However, it often lacks quality assurance processes and residents receive little guidance on quality assessment. The Academic Life in Emergency Medicine Approved Instructional Resources tool (AAT) was created for FOAM appraisal by and for expert educators and has demonstrated validity in this context. It has yet to be evaluated in other populations. Objectives: We assessed the AAT's usability in a diverse population of practicing emergency medicine (EM) physicians, residents, and medical students; solicited feedback; and developed a revised tool. Methods: As part of the Medical Education Translational Resources: Impact and Quality (METRIQ) study, we recruited medical students, EM residents, and EM attendings to evaluate five FOAM posts with the AAT and provide quantitative and qualitative feedback via an online survey. Two independent analysts performed a qualitative thematic analysis with discrepancies resolved through discussion and negotiated consensus. This analysis informed development of an initial revised AAT, which was then further refined after pilot testing among the author group. The final tool was reassessed for reliability. Results: Of 330 recruited international participants, 309 completed all ratings. The Best Evidence in Emergency Medicine (BEEM) score was the component most frequently reported as difficult to use. Several themes emerged from the qualitative analysis. Themes related to ease of use included being understandable, logically structured, concise, and aligned with educational value. Limitations included deviation from questionnaire best practices, validity concerns, and challenges in assessing evidence-based medicine. Themes supporting the tool's use included evaluative utility and usability. The author group pilot tested the initial revised AAT, revealing a total score average-measures intraclass correlation coefficient (ICC) of moderate reliability (ICC = 0.68, 95% confidence interval [CI] = 0 to 0.962). The final AAT's average-measures ICC was 0.88 (95% CI = 0.77 to 0.95). Conclusions: We developed the final revised AAT from usability feedback. The new score has significantly increased usability but will need to be reassessed for reliability in a broad population.
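An average-measures ICC like the one reported here can be computed from a subjects-by-raters matrix. The abstract does not state which ICC form was used; the sketch below assumes the common Shrout and Fleiss two-way random-effects, absolute-agreement, average-measures form (ICC(2,k)), written from the standard ANOVA decomposition rather than the authors' code.

```python
def icc2k(ratings):
    """ICC(2,k): two-way random-effects, absolute-agreement,
    average-measures intraclass correlation.
    `ratings` is a list of rows (subjects) of equal-length lists
    (raters). Minimal sketch: no missing-data handling."""
    n = len(ratings)      # subjects
    k = len(ratings[0])   # raters
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(row[j] for row in ratings) / n for j in range(k)]
    # ANOVA sums of squares
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_err = ss_total - ss_rows - ss_cols
    # Mean squares
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (ms_c - ms_e) / n)
```

With perfect agreement the error and rater mean squares vanish and the ICC is 1; a constant offset between raters (one rater always scoring one point higher) lowers it, because the absolute-agreement form penalizes systematic rater differences.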
- Published
- 2021
5. The development of entrustable professional activities reference cards to support the implementation of Competence by Design in emergency medicine
- Author
-
Quinten S. Paterson, Lara Witt, Lynsey J Martin, Brent Thoma, and Emily J. Stoneham
- Subjects
Emergency medicine, Canada, Adult, Male, Female, Humans, Surveys and Questionnaires, Curriculum, Curriculum mapping, Competence (human resources), Academic Medical Centers, Internship and Residency, Mobile Applications, Competency-Based Education, Education, Medical, Graduate, Clinical Competence, Program Evaluation, Resource development, Helpfulness - Abstract
We designed two practical, user-friendly, low-cost, aesthetically pleasing resources, with the goal of introducing residents and observers to a new Competence by Design assessment system based on entrustable professional activities. They included a set of rotation- and stage-specific entrustable professional activities reference cards for bedside use by residents and observers and a curriculum board to organize the entrustable professional activities reference cards by stages of training based on our program's curriculum map. A survey of 14 emergency medicine residents evaluated the utilization and helpfulness of these resources. They had a positive impact on our program's transition to Competence by Design and could be successfully incorporated into other residency programs to support the introduction of entrustable professional activities-based Competence by Design assessment systems.
- Published
- 2019
6. Addressing the opioid crisis in the era of competency-based medical education: recommendations for emergency department interventions
- Author
-
Kathryn Dong, Rob Woods, Lynsey J Martin, Justin J. Koh, Melody Ong, and Quinten S. Paterson
- Subjects
Canada, Emergency medicine, Emergency Service, Hospital, Emergency department, Harm reduction, Social Determinants of Health, Narcotic Antagonists, Psychological intervention, Transitional Care, Opioid-Related Disorders, Opioid Epidemic, Opioid, Buprenorphine, Naloxone Drug Combination, Prescription Drug Monitoring Programs, Patient Education as Topic, Substance use, Competency-Based Education, Education, Medical, Graduate, Humans - Published
- 2019
7. Emergency physicians as human billboards for injury prevention: a randomized controlled trial
- Author
-
Shelby Huffman, Rob Woods, Quinten S. Paterson, Satyadeva Challa, Emily Sullivan, and Daniel Fuller
- Subjects
Emergency medicine, Adult, Adolescent, Child, Male, Female, Humans, Injury prevention, Poison control, Suicide prevention, Occupational safety and health, Human factors and ergonomics, Clothing, Randomized controlled trial, Repeated measures design, Intervention (counseling), Counseling, Physician-Patient Relations, Craniocerebral Trauma, Head Protective Devices, Bicycling, Saskatchewan, Emergency Service, Hospital - Abstract
Objectives: The objective of this study was to evaluate the impact of a novel injury prevention intervention designed to prompt patients to initiate an injury prevention discussion with the ED physician, thus enabling injury prevention counselling and increasing bicycle helmet use among patients. Methods: A repeated measures 2 × 3 randomized controlled trial design was used. Fourteen emergency physicians were observed for two shifts each between June and August 2013. Each pair of shifts was randomized to either an injury prevention shift, during which the emergency physician would wear a customized scrub top, or a control shift. The outcomes of interest were physician time spent discussing injury prevention, current helmet use, and self-reported change in helmet use rates at one year. Logistic regression analyses were used to examine the impact of the intervention. Results: The average time spent on injury prevention for all patients was 3.3 seconds. For those patients who actually received counselling, the average time spent was 17.0 seconds. The scrub top intervention did not significantly change helmet use rates at one year. The intervention also had no significant impact on patient decisions to change or reinforcement of helmet use. Conclusions: Our study showed that the intervention did not increase physician injury prevention counselling or self-reported bicycle helmet use rates among patients. Given the study limitations, replication and extension of the intervention is warranted.
- Published
- 2016
8. The Social Media Index: Measuring the Impact of Emergency Medicine and Critical Care Websites
- Author
-
Brent Thoma, Michelle Lin, Quinten S. Paterson, Teresa M. Chan, Jordon Steeg, and Jason L. Sanders
- Subjects
Emergency medicine, Critical care, Social media, Impact factor, Journal Impact Factor, Index, Internet, MEDLINE, Spearman's rank correlation coefficient, Correlation, Repeated measures design, Blogs, Podcasts, Medical education - Abstract
Introduction: The number of educational resources created for emergency medicine and critical care (EMCC) that incorporate social media has increased dramatically. With no way to assess their impact or quality, it is challenging for educators to receive scholarly credit and for learners to identify respected resources. The Social Media Index (SMi) was developed to help address this. Methods: We used data from social media platforms (Google PageRanks, Alexa Ranks, Facebook Likes, Twitter Followers, and Google+ Followers) for EMCC blogs and podcasts to derive three normalized (ordinal, logarithmic, and raw) formulas. The most statistically robust formula was assessed for 1) temporal stability using repeated measures and website age, and 2) correlation with impact by applying it to EMCC journals and measuring the correlation with known journal impact metrics. Results: The logarithmic version of the SMi containing four metrics was the most statistically robust. It correlated significantly with website age (Spearman r=0.372; p
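The abstract's "logarithmic version" refers to combining log-scaled platform metrics into a single score. The sketch below illustrates the general shape of such a composite; the published SMi formula, its metric weights, and its normalization are not reproduced here, and the metric values are hypothetical.

```python
import math
from statistics import mean

def log_composite(metrics):
    """Illustrative log-normalized composite in the spirit of the SMi:
    average the log10 of each platform metric (e.g., followers, likes).
    Log scaling damps the heavy right skew of raw social-media counts,
    so one viral metric cannot dominate the score.
    Not the published SMi formula."""
    return mean(math.log10(1 + m) for m in metrics)
```

For example, a site with 9 Facebook likes and 99 Twitter followers scores the mean of log10(10) and log10(100); increasing the follower count tenfold adds only one unit to that component rather than multiplying the score.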
- Published
- 2015
9. MP23: Giving medical students what they deserve - a rigorous, equitable and defensible CaRMS selection process
- Author
-
Lynsey J Martin, Rob Woods, Quinten S. Paterson, Riley J Hartmann, and Brent Thoma
- Subjects
Emergency medicine, Selection, Process - Abstract
Innovation Concept: The fairness of the Canadian Residency Matching Service (CaRMS) selection process has been called into question by rising rates of unmatched medical students and reports of bias and subjectivity. We outline how the University of Saskatchewan Royal College emergency medicine program evaluates CaRMS applications in a standardized, rigorous, equitable and defensible manner. Methods: Our CaRMS applicant evaluation methods were first utilized in the 2017 CaRMS cycle, based on published best practices, and have been refined yearly to ensure validity, standardization, defensibility, and rigour, and to improve the speed and flow of data processing. To determine the reliability of the total application scores for each rater, single-measures intraclass correlation coefficients (ICCs) were calculated using a random effects model in 2017 and 2018. Curriculum, Tool or Material: A secure, online spreadsheet was created that includes applicant names, reviewer assignments, data entry boxes, and formulas. Each file reviewer entered data in a dedicated sheet within the document. Each application was reviewed by two staff physicians and two to four residents. File reviewers used a standardized, criterion-based scoring rubric for each application component. The file score for each reviewer-applicant pair was converted into a z-score based on each reviewer's distribution of scores. Z-scores of all reviewers for a single applicant were then combined by weighted average, with the group of staff and group of residents each being weighted to represent half of the final file score. The ICC for the total raw scores improved from 0.38 (poor) in 2017 to 0.52 (moderate) in 2018. The data from each reviewer were amalgamated into a master sheet where applicants were sorted by final file score and heat-mapped to offer a visual aid regarding differences in ratings.
Conclusion: Our innovation uses heat-mapped and formula-populated spreadsheets, scoring rubrics, and z-scores to normalize variation in scoring trends between reviewers. We believe this approach provides a rigorous, defensible, and reproducible process by which Canadian residency programs can appraise applicants and create a rank order list.
- Published
- 2019
10. A Systematic Review and Qualitative Analysis to Determine Quality Indicators for Health Professions Education Blogs and Podcasts
- Author
-
Michelle Lin, Brent Thoma, Quinten S. Paterson, Teresa M. Chan, and W. Kenneth Milne
- Subjects
Medical education, Health professions, Health Occupations, Blogging, Webcasts as Topic, Systematic review, Qualitative research, Research methodology, Quality Indicators, Health Care, Multimedia, Education, Medical, General Medicine, Diffusion of Innovation, Computer-Assisted Instruction, MEDLINE - Abstract
Background: Historically, trainees in undergraduate and graduate health professions education have relied on secondary resources, such as textbooks and lectures, for core learning activities. Recently, blogs and podcasts have entered into mainstream usage, especially for residents and educators. These low-cost, widely available resources have many characteristics of disruptive innovations and, if they continue to improve in quality, have the potential to reinvigorate health professions education. One potential limitation of further growth in the use of these resources is the lack of information on their quality and effectiveness. Objective: To identify quality indicators for secondary resources that are described in the literature, which might be applicable to blogs and podcasts. Methods: Using a blended research methodology, we performed a systematic literature review using Google Scholar, MEDLINE, Embase, Web of Science, and ERIC to identify quality indicators for secondary resources. A qualitative analysis of these indicators resulted in the organization of this information into themes and subthemes. Expert focus groups were convened to triangulate these findings and ensure that no relevant quality indicators were missed. Results: The literature search identified 4530 abstracts, and quality indicators were extracted from 157 articles. The qualitative analysis produced 3 themes (credibility, content, and design), 13 subthemes, and 151 quality indicators. Conclusions: The list of quality indicators resulting from our analysis can be used by stakeholders, including learners, educators, academic leaders, and blog/podcast producers. Further studies are being conducted, which will refine the list into a form that is more structured and stratified for use by these stakeholders.
- Published
- 2015
11. Emergency Medicine and Critical Care Blogs and Podcasts: Establishing an International Consensus on Quality
- Author
-
Quinten S. Paterson, Teresa M. Chan, Brent Thoma, Michelle Lin, W. Kenneth Milne, and Jason L. Sanders
- Subjects
Emergency medicine, Critical Care, Consensus, Internationality, Blogging, Webcasts as Topic, Delphi method, Likert scale, Credibility, Quality, Social media, MEDLINE, Humans - Abstract
Study objective: This study identified the most important quality indicators for online educational resources such as blogs and podcasts. Methods: A modified Delphi process that included 2 iterative surveys was used to build expert consensus on a previously defined list of 151 quality indicators divided into 3 themes: credibility, content, and design. Aggregate social media indicators were used to identify an expert population of editors from a defined list of emergency medicine and critical care blogs and podcasts. Survey 1 consisted of the quality indicators and a 7-point Likert scale. The mean score for each quality indicator was included in survey 2, which asked participants whether to "include" or "not include" each quality indicator. The cut point for consensus was defined at greater than 70% "include." Results: Eighty-three percent (20/24) of bloggers and 90.9% (20/22) of podcasters completed survey 1, and 90% (18/20) of bloggers and podcasters completed survey 2. The 70% inclusion criteria were met by 44 and 80 quality indicators for bloggers and podcasters, respectively. Post hoc, a 90% cutoff was used to identify a list of 14 and 26 quality indicators for bloggers and podcasters, respectively. Conclusion: The relative importance of quality indicators for emergency medicine blogs and podcasts was determined. This will be helpful for resource producers trying to improve their blogs or podcasts and for learners, educators, and academic leaders assessing their quality. These results will inform broader validation studies and attempts to develop user-friendly assessment instruments for these resources.
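The consensus rule in this Delphi process (keep an indicator when more than 70% of round-2 responses say "include") is purely mechanical and can be sketched directly. The indicator names and vote tallies below are hypothetical, for illustration only.

```python
def consensus_indicators(votes, threshold=0.70):
    """Apply the Delphi round-2 cut point: keep indicators whose
    share of 'include' responses strictly exceeds the threshold.
    `votes` maps each indicator to its list of 'include' /
    'not include' responses. Data here is hypothetical."""
    kept = []
    for indicator, responses in votes.items():
        include_share = responses.count("include") / len(responses)
        if include_share > threshold:
            kept.append(indicator)
    return kept
```

Raising the threshold to 0.90, as the abstract describes doing post hoc, simply reruns the same filter with a stricter cut point and yields the shorter lists reported.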
- Published
- 2015