17 results for "Turner, Tari"
Search Results
2. Living evidence syntheses: the emerging opportunity to increase evidence-informed health policy in Australia.
- Author
- Chakraborty SP, Collie A, Hodder R, Majumdar SS, Sutherland K, Towler B, Vogel J, Wilson A, Wolfenden L, Green S, and Turner T
- Subjects
- Australia, Humans, Policy Making, Health Policy, Evidence-Based Medicine
- Published
- 2024
- Full Text
- View/download PDF
3. Living evidence and adaptive policy: perfect partners?
- Author
- Turner T, Lavis JN, Grimshaw JM, Green S, and Elliott J
- Subjects
- Humans, Research Design, Uncertainty, Research Personnel, Policy Making, Health Policy
- Abstract
Background: While there has been widespread global acceptance of the importance of evidence-informed policy, many opportunities to inform health policy with research are missed, often because of a mismatch between when and where reliable evidence is needed, and when and where it is available. 'Living evidence' is an approach in which systematic evidence syntheses (e.g. living reviews, living guidelines, living policy briefs) are continually updated to incorporate new relevant evidence as it becomes available. Living evidence approaches have the potential to overcome a major barrier to evidence-informed policy by making up-to-date systematic summaries of policy-relevant research available at any time that policy-makers need them. These approaches are likely to be particularly beneficial given increasing calls for policy that is responsive and rapidly adaptive to changes in the policy context. We describe the opportunities presented by living evidence for evidence-informed policy-making and highlight areas for further exploration. Discussion: There are several elements of living approaches to evidence synthesis that might support increased and improved use of evidence to inform policy. Reviews are explicitly prioritised to be 'living' by partnerships between policy-makers and researchers, based on relevance to decision-making, the uncertainty of existing evidence, and the likelihood that new evidence will arise. The ongoing nature of the work means evidence synthesis teams can be dynamic and engage with policy-makers in a variety of ways over time, and synthesis topics, questions and methods can be adapted as policy interests or contextual factors shift. Policy-makers can sign up to be notified when relevant new evidence is found, and can be confident that living syntheses are up to date and contain all relevant research whenever they access them.
The always up-to-date nature of living evidence syntheses means producers can rapidly demonstrate the availability of relevant, reliable evidence when it is needed, addressing a frequently cited barrier to evidence-informed policymaking. Conclusions: While there are challenges to be overcome, living evidence provides opportunities to enable policy-makers to access up-to-date evidence whenever they need it, to enable researchers to respond to the issues of the day with up-to-date research, and to update policy-makers on changes in the evidence base as they arise. It also provides an opportunity to build flexible partnerships between researchers and policy-makers to ensure that evidence syntheses reflect the changing needs of policy-makers. (© 2023. The Author(s).)
- Published
- 2023
- Full Text
- View/download PDF
4. Building a bright, evidence-informed future: a conversation starter from the incoming editors.
- Author
- Turner T and El-Jardali F
- Subjects
- Capacity Building organization & administration, Communication, Humans, Information Dissemination, Systems Integration, World Health Organization, Global Health, Health Policy, Policy Making, Research organization & administration
- Abstract
Health Research Policy and Systems (HARPS) has gone from strength to strength since it was established in 2003. As new Editors-in-Chief, we look forward to a bright future for HARPS, and we would like to start a conversation with you, HARPS readers, authors, editors and others, about how HARPS can best support ongoing progress and debate on evidence-informed health research policy and systems, particularly in developing countries. As a starting point for discussion, we would like to highlight three areas that we are passionate about: supporting an integrated community of researchers and policy-makers; building a focus on how health research and policy systems can support achievement of the Sustainable Development Goals; and strengthening our commitment to communicating and disseminating the work published in HARPS. We invite you to contribute your thoughts, ideas and suggestions on the future of HARPS, as we work together towards an evidence-informed future.
- Published
- 2017
- Full Text
- View/download PDF
5. Development and validation of SEER (Seeking, Engaging with and Evaluating Research): a measure of policymakers' capacity to engage with and use research.
- Author
- Brennan SE, McKenzie JE, Turner T, Redman S, Makkar S, Williamson A, Haynes A, and Green SE
- Subjects
- Evidence-Based Practice, Feasibility Studies, Humans, Pilot Projects, Policy Making, Professional Practice, Self Report, Surveys and Questionnaires, Translational Research, Biomedical, Administrative Personnel, Health Policy, Research statistics & numerical data
- Abstract
Background: Capacity building strategies are widely used to increase the use of research in policy development. However, a lack of well-validated measures for policy contexts has hampered efforts to identify priorities for capacity building and to evaluate the impact of strategies. We aimed to address this gap by developing SEER (Seeking, Engaging with and Evaluating Research), a self-report measure of individual policymakers' capacity to engage with and use research. Methods: We used the SPIRIT Action Framework to identify pertinent domains and guide development of items for measuring each domain. Scales covered (1) individual capacity to use research (confidence in using research, value placed on research, individual perceptions of the value their organisation places on research, supporting tools and systems), (2) actions taken to engage with research and researchers, and (3) use of research to inform policy (extent and type of research use). A sample of policymakers engaged in health policy development provided data to examine scale reliability (internal consistency, test-retest) and validity (relation to measures of similar concepts, relation to a measure of intention to use research, internal structure of the individual capacity scales). Results: Response rates were 55% (150/272 people, 12 agencies) for the validity and internal consistency analyses, and 54% (57/105 people, 9 agencies) for test-retest reliability. The individual capacity scales demonstrated adequate internal consistency reliability (alpha coefficients > 0.7 for all four scales) and test-retest reliability (intra-class correlation coefficients > 0.7 for three scales and 0.59 for the fourth scale). Scores on the individual capacity scales converged as predicted with measures of similar concepts (moderate correlations of > 0.4), and confirmatory factor analysis provided evidence that the scales measured related but distinct concepts.
Items in each of these four scales related as predicted to concepts in the measurement model derived from the SPIRIT Action Framework. Evidence about the reliability and validity of the research engagement actions and research use scales was equivocal. Conclusions: Initial testing of SEER suggests that the four individual capacity scales may be used in policy settings to examine current capacity and identify areas for capacity building. The relation between capacity, research engagement actions and research use requires further investigation.
- Published
- 2017
- Full Text
- View/download PDF
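The internal-consistency threshold reported in the SEER abstract above (alpha coefficients > 0.7) refers to Cronbach's alpha. A minimal sketch of that computation is shown below; this is an illustration of the standard formula, not the authors' analysis code, and the example scores are invented.

```python
from statistics import variance  # sample variance (ddof = 1)

def cronbach_alpha(rows):
    """Cronbach's alpha for one scale, given rows of per-respondent item scores."""
    k = len(rows[0])                                      # items in the scale
    item_vars = sum(variance(col) for col in zip(*rows))  # sum of per-item variances
    total_var = variance([sum(r) for r in rows])          # variance of scale totals
    return k / (k - 1) * (1 - item_vars / total_var)

# Five hypothetical respondents answering a three-item scale with
# perfectly parallel answers, which drives alpha towards 1.0.
scores = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4], [5, 5, 5]]
print(cronbach_alpha(scores))  # ~1.0 for perfectly parallel items
```

A scale would typically be judged adequate at alpha > 0.7, the cut-off used in the study.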
6. The development of ORACLe: a measure of an organisation's capacity to engage in evidence-informed health policy.
- Author
- Makkar SR, Turner T, Williamson A, Louviere J, Redman S, Haynes A, Green S, and Brennan S
- Subjects
- Algorithms, Australia, Evidence-Based Medicine, Humans, Inservice Training, Interviews as Topic, Leadership, Organizational Culture, Biomedical Research organization & administration, Health Policy, Health Services Administration, Policy Making
- Abstract
Background: Evidence-informed policymaking is more likely if organisations have cultures that promote research use and invest in resources that facilitate staff engagement with research. Measures of organisations' research use culture and capacity are needed to assess current capacity, identify opportunities for improvement, and examine the impact of capacity-building interventions. The aim of the current study was to develop a comprehensive system to measure and score organisations' capacity to engage with and use research in policymaking, which we entitled ORACLe (Organisational Research Access, Culture, and Leadership). Method: We used a multifaceted approach to develop ORACLe. Firstly, we reviewed the available literature to identify key domains of organisational tools and systems that may facilitate research use by staff. We interviewed senior health policymakers to verify the relevance and applicability of these domains. This information was used to generate an interview schedule that focused on seven key domains of organisational capacity. The interview was pilot-tested within four Australian policy agencies. A discrete choice experiment (DCE) was then undertaken with an expert sample to establish the relative importance of these domains. These data were used to produce a scoring system for ORACLe. Results: The ORACLe interview was developed, comprising 23 questions that address seven domains of organisational capacity and tools that support research use: (1) documented processes for policymaking; (2) leadership training; (3) staff training; (4) research resources (e.g. database access); and systems to (5) generate new research, (6) undertake evaluations, and (7) strengthen relationships with researchers. From the DCE data, a conditional logit model was estimated to calculate total scores that took into account the relative importance of the seven domains.
The model indicated that our expert sample placed the greatest importance on domains (2), (3) and (4). Conclusion: We utilised qualitative and quantitative methods to develop a system to assess and score organisations' capacity to engage with and apply research to policy. Our measure assesses a broad range of capacity domains and identifies the relative importance of these capacities. ORACLe data can be used by organisations keen to increase their use of evidence to identify areas for further development.
- Published
- 2016
- Full Text
- View/download PDF
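The ORACLe abstract above describes combining seven domain scores into a total using relative-importance weights estimated from a conditional logit model. A weighted-sum sketch of that scoring idea follows; the domain names come from the abstract, but the weight values are invented for illustration (the abstract says only that domains 2-4 mattered most) and are not the published coefficients.

```python
# Hypothetical relative-importance weights for the seven ORACLe domains.
# Only the ordering (domains 2-4 weighted highest) reflects the abstract.
DOMAIN_WEIGHTS = {
    "documented policymaking processes": 0.10,       # domain 1
    "leadership training": 0.20,                     # domain 2
    "staff training": 0.20,                          # domain 3
    "research resources": 0.18,                      # domain 4
    "systems to generate new research": 0.12,        # domain 5
    "systems to undertake evaluations": 0.10,        # domain 6
    "relationships with researchers": 0.10,          # domain 7
}

def oracle_total(domain_scores):
    """Combine 0-1 domain scores into a weighted total on a 0-1 scale."""
    return sum(DOMAIN_WEIGHTS[d] * score for d, score in domain_scores.items())
```

With weights summing to 1, an organisation scoring full marks on every domain gets a total of 1.0, and the weighting makes improvements in training and research resources move the total most.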
7. Using conjoint analysis to develop a system of scoring policymakers' use of research in policy and program development.
- Author
- Makkar SR, Williamson A, Turner T, Redman S, and Louviere J
- Subjects
- Administrative Personnel, Checklist, Humans, Evidence-Based Medicine, Health Policy, Health Services Research, Policy Making, Program Development, Translational Research, Biomedical
- Abstract
Background: The importance of utilising the best available research evidence in the development of health policies, services, and programs is increasingly recognised, yet few standardised systems for quantifying policymakers' research use are available. We developed a comprehensive measurement and scoring tool that assesses four domains of research use (i.e. instrumental, conceptual, tactical, and imposed). The scoring tool breaks down each domain into its key subactions, like a checklist. Our aim was to develop a tool that assigned appropriate scores to each subaction based on its relative importance to undertaking evidence-informed health policymaking. In order to establish the relative importance of each research use subaction and generate this scoring system, we conducted a conjoint analysis with a sample of knowledge translation experts. Methods: Fifty-four experts were recruited to undertake four choice surveys. Respondents were shown combinations of research use subactions, called profiles, and rated on a 1 to 9 scale whether each profile represented a limited (1-3), moderate (4-6), or extensive (7-9) example of research use. Generalised Estimating Equations were used to analyse respondents' choice data, calculating a utility coefficient for each subaction. A large utility coefficient indicated that a subaction was particularly influential in guiding experts' ratings of extensive research use. Results: Utility coefficients were calculated for each subaction, and these became the points assigned to the subactions in the scoring system.
The following subactions yielded the largest utilities and were regarded as the most important components of each research use domain: using research to directly influence the core of the policy decision; using research to inform alternative perspectives on the policy issue; using research to persuade targeted stakeholders to support a predetermined decision; and using research because it was a mandated requirement of the policymaker's organisation. Conclusions: We have generated an empirically derived and context-sensitive means of measuring and scoring the extent to which policymakers used research to inform the development of a policy document. The scoring system can be used by organisations not only to quantify the extent of their research use, but also to provide insights into potential strategies to improve subsequent research use.
- Published
- 2015
- Full Text
- View/download PDF
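The abstract above describes a checklist-style scoring system in which each research-use subaction earns points equal to its utility coefficient. A sketch of that scoring step follows; the subaction labels paraphrase the abstract, and the utility values are hypothetical placeholders, not the GEE coefficients estimated in the study.

```python
# Hypothetical utility points per research-use subaction. In the study these
# values came from Generalised Estimating Equations fitted to expert ratings;
# the numbers below are illustrative only.
UTILITIES = {
    "research directly influenced the core policy decision": 3.0,
    "research informed alternative perspectives on the issue": 2.5,
    "research used to persuade stakeholders of a decision": 2.0,
    "research use mandated by the organisation": 1.5,
}

def research_use_score(subactions_done):
    """Score a policy document: sum the utilities of subactions undertaken."""
    return sum(UTILITIES[action] for action in subactions_done)

MAX_SCORE = sum(UTILITIES.values())  # ceiling if every subaction was undertaken
```

A document's total can then be reported raw or as a fraction of `MAX_SCORE`, which is how a checklist of weighted subactions becomes a single research-use figure.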
8. The SPIRIT Action Framework: A structured approach to selecting and testing strategies to increase the use of research in policy.
- Author
- Redman S, Turner T, Davies H, Williamson A, Haynes A, Brennan S, Milat A, O'Connor D, Blyth F, Jorm L, and Green S
- Subjects
- Communication, Humans, Models, Theoretical, Health Policy, Health Services Research methods
- Abstract
The recent proliferation of strategies designed to increase the use of research in health policy (knowledge exchange) demands better application of contemporary conceptual understandings of how research shapes policy. Predictive models, or action frameworks, are needed to organise existing knowledge and enable a more systematic approach to the selection and testing of intervention strategies. Useful action frameworks need to meet four criteria: have a clearly articulated purpose; be informed by existing knowledge; provide an organising structure to build new knowledge; and be capable of guiding the development and testing of interventions. This paper describes the development of the SPIRIT Action Framework. A literature search and interviews with policy makers identified modifiable factors likely to influence the use of research in policy. An iterative process was used to combine these factors into a pragmatic tool which meets the four criteria. The SPIRIT Action Framework can guide conceptually-informed practical decisions in the selection and testing of interventions to increase the use of research in policy. The SPIRIT Action Framework hypothesises that a catalyst is required for the use of research, the response to which is determined by the capacity of the organisation to engage with research. Where there is sufficient capacity, a series of research engagement actions might occur that facilitate research use. These hypotheses are being tested in ongoing empirical work. (Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.)
- Published
- 2015
- Full Text
- View/download PDF
9. Protocol for the process evaluation of a complex intervention designed to increase the use of research in health policy and program organisations (the SPIRIT study).
- Author
- Haynes A, Brennan S, Carter S, O'Connor D, Schneider CH, Turner T, and Gallego G
- Subjects
- Data Collection methods, Documentation, Government Agencies organization & administration, Humans, New South Wales, Randomized Controlled Trials as Topic, Health Plan Implementation organization & administration, Health Policy, Health Services Research statistics & numerical data, Process Assessment, Health Care organization & administration
- Abstract
Background: Process evaluation is vital for understanding how interventions function in different settings, including if and why they have different effects or do not work at all. This is particularly important in trials of complex interventions in 'real world' organisational settings, where causality is difficult to determine. Complexity presents challenges for process evaluation, and process evaluations that tackle complexity are rarely reported. This paper presents the detailed protocol for a process evaluation embedded in a randomised trial of a complex intervention known as SPIRIT (Supporting Policy In health with Research: an Intervention Trial). SPIRIT aims to build capacity for using research in health policy and program agencies. Methods: We describe the flexible and pragmatic methods used for capturing, managing and analysing data across three domains: (a) the intervention as it was implemented; (b) how people participated in and responded to the intervention; and (c) the contextual characteristics that mediated this relationship and may influence outcomes. Qualitative and quantitative data collection methods include purposively sampled semi-structured interviews at two time points, direct observation and coding of intervention activities, and participant feedback forms. We provide examples of the data collection and data management tools developed. Discussion: This protocol provides a worked example of how to embed process evaluation in the design and evaluation of a complex intervention trial. It tackles complexity in the intervention and its implementation settings. To our knowledge, it is the only detailed example of the methods for a process evaluation of an intervention conducted as part of a randomised trial in policy organisations. We identify strengths and weaknesses, and discuss how the methods are functioning during early implementation.
Using 'insider' consultation to develop methods is enabling us to optimise data collection while minimising discomfort and burden for participants. Embedding the process evaluation within the trial design is facilitating access to data, but may impair participants' willingness to talk openly in interviews. While it is challenging to evaluate the process of conducting a randomised trial of a complex intervention, our experience so far suggests that it is feasible and can add considerably to the knowledge generated.
- Published
- 2014
- Full Text
- View/download PDF
10. Rapid reviews in health policy: a study of intended use in the New South Wales' Evidence Check programme.
- Author
- Moore, Gabriel Mary, Redman, Sally, Turner, Tari, and Haines, Mary
- Subjects
- HEALTH policy, LEGAL evidence, POLICY sciences, INFORMATION resources, GOVERNMENT agencies
- Abstract
Rapid reviews of research are a key way in which policy makers use research. This paper examines 74 rapid reviews commissioned by health policy agencies through the Sax Institute's Evidence Check programme. We examine what prompted policy makers to commission rapid reviews, their purpose, how and when they intended to use them, and how this varied by agency. Policy makers commissioned rapid reviews primarily as part of planned policy processes and to identify alternative solutions to problems. Government departments responsible for multiple policy domains were more likely to commission rapid reviews for agenda setting and to test new ideas.
- Published
- 2016
- Full Text
- View/download PDF
11. The development of SAGE: A tool to evaluate how policymakers engage with and use research in health policymaking.
- Author
- Makkar, Steve R., Brennan, Sue, Turner, Tari, Williamson, Anna, Redman, Sally, and Green, Sally
- Subjects
- MEASURING instruments, HEALTH policy, RESEARCH evaluation, MIXED methods research, QUALITATIVE research
- Abstract
It is essential that health policies are based on the best available evidence including that from research, to ensure their effectiveness in terms of both cost and health outcomes for the wider community. The present study describes the development of SAGE (Staff Assessment of enGagement with Evidence), a measure that combines an interview and document analysis to evaluate how policymakers engaged with research (i.e., how research was searched for, appraised, or generated, and whether interactions with researchers occurred), how policymakers used research (i.e., conceptually, instrumentally, tactically, or imposed), and what barriers impacted upon the use of research, in the development of a specific policy product. A multifaceted strategy was used to develop the SAGE interview and the accompanying interview-scoring tool. These included consultations with experts in health policy and research, review and analysis of the literature on evidence-informed policymaking and previous measures of research use, qualitative analysis of interviews with policymakers, and pilot-testing with senior policymakers. These steps led to the development of a comprehensive interview and scoring tool that captures and evaluates a broad range of key actions policymakers perform when searching for, appraising, generating, and using research to inform a specific policy product. Policy organizations can use SAGE to not only provide a thorough evaluation of their current level of research engagement and use, but to help shed light on programs to improve their research use capacity, and evaluate the success of these programs in improving the development of evidence-informed policies.
- Published
- 2016
- Full Text
- View/download PDF
12. Using conjoint analysis to develop a system to score research engagement actions by health decision makers.
- Author
- Makkar, Steve R., Williamson, Anna, Turner, Tari, Redman, Sally, and Louviere, Jordan
- Subjects
- CONJOINT analysis, PUBLIC health research, SURVEY methodology, HEALTH policy, FEASIBILITY studies
- Abstract
Background: Effective use of research to inform policymaking can be strengthened by policymakers undertaking various research engagement actions (e.g., accessing, appraising, and applying research). Consequently, we developed a thorough measurement and scoring tool to assess whether and how policymakers undertook research engagement actions in the development of a policy document. This scoring tool breaks down each research engagement action into its key 'subactions' like a checklist. The primary aim was to develop the scoring tool further so that it assigned appropriate scores to each subaction based on its effectiveness for achieving evidence-informed policymaking. To establish the relative effectiveness of these subactions, we conducted a conjoint analysis, which was used to elicit the opinions and preferences of knowledge translation experts. Method: Fifty-four knowledge translation experts were recruited to undertake six choice surveys. Respondents were exposed to combinations of research engagement subactions called 'profiles', and rated on a 1-9 scale whether each profile represented a limited (1-3), moderate (4-6), or extensive (7-9) example of each research engagement action. Generalised estimating equations were used to analyse respondents' choice data, where a utility coefficient was calculated for each subaction. A large utility coefficient indicates that a subaction was influential in guiding experts' ratings of extensive engagement with research. Results: The calculated utilities were used as the points assigned to the subactions in the scoring system. 
The following subactions yielded the largest utilities and were regarded as the most important components of engaging with research: searching academic literature databases, obtaining systematic reviews and peer-reviewed research, appraising relevance by verifying its applicability to the policy context, appraising quality by evaluating the validity of the method and conclusions, engaging in thorough collaborations with researchers, and undertaking formal research projects to inform the policy in question. Conclusions: We have generated an empirically-derived and context-sensitive method of measuring and scoring the extent to which policymakers engaged with research to inform policy development. The scoring system can be used by organisations to quantify staff research engagement actions and thus provide them with insights into what types of training, systems, and tools might improve their staff's research use capacity.
- Published
- 2015
- Full Text
- View/download PDF
13. Developing definitions for a knowledge exchange intervention in health policy and program agencies: reflections on process and value.
- Author
- Haynes, Abby, Turner, Tari, Redman, Sally, Milat, Andrew J., and Moore, Gabriel
- Subjects
- DEFINITIONS, HEALTH policy, HEALTH, INFORMATION sharing, HEALTH programs, RESEARCH methodology
- Abstract
The development of definitions is an integral part of the research process but is often poorly described. This paper details the iterative development of five definitions: Policy, Health policy-maker, Health policy agency, Policy documents, and Research findings. We describe the challenges of developing definitions in a large multidisciplinary team and the important methodological repercussions. We identify four factors that were most helpful in this process: (1) an emphasis on fit-for-purpose functionality, (2) consultation with in-context experts, (3) our willingness to amend terms as well as definitions, and to revisit some methods and goals as a consequence, and (4) agreement that we would satisfice: accept ‘good enough’ solutions rather than struggle for optimality and consensus.
- Published
- 2015
- Full Text
- View/download PDF
14. How frequently should 'living' guidelines be updated? Insights from the Australian Living Stroke Guidelines
- Author
- Turner, Tari, McDonald, Steve, Wiles, Louise, English, Coralie, and Hill, Kelvin
- Subjects
- Stroke, evidence, Health Policy, Australia, Humans, updating, guidelines, living guidelines
Background: “Living guidelines” are guidelines that are continually kept up to date as new evidence emerges. Living guideline methods are evolving. The aim of this study was to determine how frequently searches for new evidence should be undertaken for the Australian Living Stroke Guidelines. Methods: Members of the Living Stroke Guidelines Development Group were invited to complete an online survey. Participants nominated one or more recommendation topics from the Living Stroke Guidelines with which they had been involved and answered questions about that topic, assessing whether it met criteria for living evidence synthesis, and how frequently searches for new evidence should be undertaken and why. For each topic we also determined how many studies had been assessed and included, and whether recommendations had been changed. Results: Fifty-seven assessments were received from 33 respondents, covering half of the 88 guideline topic areas. Nearly all assessments (49, 86%) indicated that the continual updating process should be maintained. Only three assessments (5%) deemed that searches should be conducted monthly; 3-monthly (14, 25%), 6-monthly (13, 23%) and yearly (17, 30%) searches were far more frequently recommended. Topics were rarely (9, 16%) deemed to meet all three criteria for living review. The vast majority of assessments (45, 79%) deemed the topic a priority for decision-making. Nearly half indicated that there was uncertainty in the available evidence or that new evidence was likely to be available soon. Since 2017, all but four of the assessed topic areas have had additional studies included in the evidence summary. For eight topics there have been changes in recommendations, and revisions are underway for an additional six topics. Clinical importance was the most common reason given for why continual evidence surveillance should be undertaken.
Workload for reviewers was a concern, particularly for topics with a steady flow of small trials being published. Conclusions: Our study found that participants felt the vast majority of topics assessed in the Living Stroke Guidelines should be continually updated. However, only a fifth of topic areas were assessed as conclusively meeting all three criteria for living review, and the definition of “continual” differed widely. This work has informed decisions about search frequency for the Living Stroke Guidelines and forms the basis of further research on methods for frequent updating of guidelines.
- Published
- 2022
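The survey arithmetic in the abstract above (e.g. 14 of 57 assessments, or 25%, favouring 3-monthly searches) can be reproduced with a short tally. The counts are taken from the abstract; grouping the ten responses not itemised there under "other" is an assumption made for the example.

```python
from collections import Counter

TOTAL = 57  # assessments received from 33 respondents

# Recommended search frequencies, with counts from the abstract; the ten
# responses not broken down there are grouped under "other" (an assumption).
frequency = Counter({"monthly": 3, "3-monthly": 14, "6-monthly": 13, "yearly": 17})
frequency["other"] = TOTAL - sum(frequency.values())

def pct(count, total=TOTAL):
    """Whole-number percentage, matching the rounding used in the abstract."""
    return round(100 * count / total)
```

Running `pct` over the counts recovers the reported figures: 5% monthly, 25% 3-monthly, 23% 6-monthly, and 30% yearly.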
15. Using conjoint analysis to develop a system to score research engagement actions by health decision makers
- Author
-
Anna Williamson, Tari Turner, Jordan J. Louviere, Sally Redman, Steve R. Makkar, Makkar, Steve R, Williamson, Anna, Turner, Tari, Redman, Sally, and Louviere, Jordan
- Subjects
knowledge translation ,Knowledge management ,Context (language use) ,Knowledge translation ,Decision Support Techniques ,Evidence-based policy ,evidence-based policy ,Humans ,evidence-informed policy ,Conjoint analysis ,Policy Making ,Health policy ,Measurement ,business.industry ,Research ,Health Policy ,Health services research ,Administrative Personnel ,health policy ,Evidence-informed policy ,utilisation ,Checklist ,policy maker ,Systematic review ,Scale (social sciences) ,Utilisation ,Policymaker ,conjoint analysis ,measurement ,business ,Psychology - Abstract
Background Effective use of research to inform policymaking can be strengthened by policymakers undertaking various research engagement actions (e.g., accessing, appraising, and applying research). Consequently, we developed a thorough measurement and scoring tool to assess whether and how policymakers undertook research engagement actions in the development of a policy document. This scoring tool breaks down each research engagement action into its key ‘subactions’ like a checklist. The primary aim was to develop the scoring tool further so that it assigned appropriate scores to each subaction based on its effectiveness for achieving evidence-informed policymaking. To establish the relative effectiveness of these subactions, we conducted a conjoint analysis, which was used to elicit the opinions and preferences of knowledge translation experts. Method Fifty-four knowledge translation experts were recruited to undertake six choice surveys. Respondents were exposed to combinations of research engagement subactions called ‘profiles’, and rated on a 1–9 scale whether each profile represented a limited (1–3), moderate (4–6), or extensive (7–9) example of each research engagement action. Generalised estimating equations were used to analyse respondents’ choice data, where a utility coefficient was calculated for each subaction. A large utility coefficient indicates that a subaction was influential in guiding experts’ ratings of extensive engagement with research. Results The calculated utilities were used as the points assigned to the subactions in the scoring system. 
The following subactions yielded the largest utilities and were regarded as the most important components of engaging with research: searching academic literature databases, obtaining systematic reviews and peer-reviewed research, appraising relevance by verifying its applicability to the policy context, appraising quality by evaluating the validity of the method and conclusions, engaging in thorough collaborations with researchers, and undertaking formal research projects to inform the policy in question. Conclusions We have generated an empirically-derived and context-sensitive method of measuring and scoring the extent to which policymakers engaged with research to inform policy development. The scoring system can be used by organisations to quantify staff research engagement actions and thus provide them with insights into what types of training, systems, and tools might improve their staff’s research use capacity. Electronic supplementary material The online version of this article (doi:10.1186/s12961-015-0013-z) contains supplementary material, which is available to authorized users.
16. The development of ORACLe: a measure of an organisation’s capacity to engage in evidence-informed health policy
- Author
-
Steve R. Makkar, Tari Turner, Anna Williamson, Jordan J. Louviere, Sally Redman, Abby Haynes, Sally Green, and Sue E. Brennan
- Subjects
Biomedical Research, Inservice Training, Knowledge management, Organizational culture, Assessment, Knowledge translation, Oracle, Health administration, Interviews as Topic, Employee engagement, Humans, Discrete choice experiments, Policy Making, Health Services Administration, Health policy, Evidence, Evidence-based medicine, Capacity, Research, Australia, Health services research, Research use, Measure, Leadership, Organisation, Policymaker, Psychology, Algorithms - Abstract
Background Evidence-informed policymaking is more likely if organisations have cultures that promote research use and invest in resources that facilitate staff engagement with research. Measures of organisations’ research use culture and capacity are needed to assess current capacity, identify opportunities for improvement, and examine the impact of capacity-building interventions. The aim of the current study was to develop a comprehensive system to measure and score organisations’ capacity to engage with and use research in policymaking, which we named ORACLe (Organisational Research Access, Culture, and Leadership). Method We used a multifaceted approach to develop ORACLe. First, we reviewed the available literature to identify key domains of organisational tools and systems that may facilitate research use by staff. We interviewed senior health policymakers to verify the relevance and applicability of these domains. This information was used to generate an interview schedule focused on seven key domains of organisational capacity. The interview was pilot-tested within four Australian policy agencies. A discrete choice experiment (DCE) was then undertaken with an expert sample to establish the relative importance of these domains. These data were used to produce a scoring system for ORACLe. Results The ORACLe interview was developed, comprising 23 questions addressing seven domains of organisational capacity and tools that support research use: (1) documented processes for policymaking; (2) leadership training; (3) staff training; (4) research resources (e.g. database access); and systems to (5) generate new research, (6) undertake evaluations, and (7) strengthen relationships with researchers. From the DCE data, a conditional logit model was estimated to calculate total scores that took into account the relative importance of the seven domains. 
The model indicated that our expert sample placed the greatest importance on domains (2), (3) and (4). Conclusion We utilised qualitative and quantitative methods to develop a system to assess and score organisations’ capacity to engage with and apply research to policy. Our measure assesses a broad range of capacity domains and identifies the relative importance of these capacities. ORACLe data can be used by organisations keen to increase their use of evidence to identify areas for further development. Electronic supplementary material The online version of this article (doi:10.1186/s12961-015-0069-9) contains supplementary material, which is available to authorized users.
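As a sketch of how conditional-logit domain utilities can translate into an organisational capacity score, the fragment below uses invented weights and an invented `oracle_score` helper. Only the seven domain labels and the relative prominence of domains (2)–(4) come from the abstract; the numbers are illustrative assumptions, not the published ORACLe weights.

```python
# Hypothetical domain weights standing in for the DCE-derived conditional-logit
# utilities; the abstract reports that domains (2), (3) and (4) carried the
# greatest importance, so those are weighted highest here.
DOMAIN_WEIGHTS = {
    "documented_processes": 1.0,   # (1)
    "leadership_training":  1.6,   # (2)
    "staff_training":       1.5,   # (3)
    "research_resources":   1.4,   # (4)
    "generating_research":  1.0,   # (5)
    "evaluations":          0.9,   # (6)
    "researcher_links":     1.1,   # (7)
}

def oracle_score(domain_ratings: dict) -> float:
    """Weighted total from per-domain interview ratings (e.g. a 1-3 scale)."""
    return round(sum(DOMAIN_WEIGHTS[d] * r for d, r in domain_ratings.items()), 2)

agency = {d: 2 for d in DOMAIN_WEIGHTS}      # uniform mid-level ratings
agency["staff_training"] = 3                 # stronger staff training
print(oracle_score(agency))                  # prints 18.5
```

The design choice this illustrates: because domains carry different weights, two agencies with the same raw interview ratings can receive different totals depending on where their strengths lie.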
17. Using conjoint analysis to develop a system of scoring policymakers’ use of research in policy and program development
- Author
-
Steve R. Makkar, Anna Williamson, Tari Turner, Sally Redman, and Jordan J. Louviere
- Subjects
Knowledge translation, Sample (statistics), Translational Research, Biomedical, Evidence-based policy, Medicine, Humans, Evidence-informed policy, Program Development, Policy Making, Conjoint analysis, Health policy, Measurement, Evidence-based medicine, Research, Management science, Health services research, Administrative Personnel, Utilisation, Policymaker, Checklist, Scale (social sciences), Use - Abstract
Background The importance of utilising the best available research evidence in the development of health policies, services, and programs is increasingly recognised, yet few standardised systems for quantifying policymakers’ research use are available. We developed a comprehensive measurement and scoring tool that assesses four domains of research use (i.e. instrumental, conceptual, tactical, and imposed). The scoring tool breaks down each domain into its key subactions, like a checklist. Our aim was to develop a tool that assigned appropriate scores to each subaction based on its relative importance to undertaking evidence-informed health policymaking. To establish the relative importance of each research use subaction and generate this scoring system, we conducted a conjoint analysis with a sample of knowledge translation experts. Methods Fifty-four experts were recruited to undertake four choice surveys. Respondents were shown combinations of research use subactions, called profiles, and rated on a 1–9 scale whether each profile represented a limited (1–3), moderate (4–6), or extensive (7–9) example of research use. Generalised estimating equations were used to analyse respondents’ choice data, from which a utility coefficient was calculated for each subaction. A large utility coefficient indicated that a subaction was particularly influential in guiding experts’ ratings of extensive research use. Results Utility coefficients were calculated for each subaction and became the points assigned to the subactions in the scoring system. 
The following subactions yielded the largest utilities and were regarded as the most important components of each research use domain: using research to directly influence the core of the policy decision; using research to inform alternative perspectives on the policy issue; using research to persuade targeted stakeholders to support a predetermined decision; and using research because its use was mandated by the policymaker’s organisation. Conclusions We have generated an empirically derived and context-sensitive means of measuring and scoring the extent to which policymakers used research to inform the development of a policy document. The scoring system can be used by organisations not only to quantify the extent of their research use, but also to gain insights into potential strategies to improve subsequent research use. Electronic supplementary material The online version of this article (doi:10.1186/s12961-015-0022-y) contains supplementary material, which is available to authorized users.
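The checklist-style scoring the abstract describes can be sketched as a simple lookup-and-sum over the four research use domains. The subaction names, point values, and the `score_research_use` helper below are all hypothetical illustrations; only the four domain labels (instrumental, conceptual, tactical, imposed) come from the abstract.

```python
# Hypothetical subaction utilities for the four research use domains,
# standing in for the published conjoint-derived points.
SUBACTION_POINTS = {
    "instrumental": {"shaped_core_decision": 3.0, "cited_in_document": 1.5},
    "conceptual":   {"informed_alternatives": 2.5, "clarified_problem": 1.0},
    "tactical":     {"persuaded_stakeholders": 2.8, "justified_decision": 1.2},
    "imposed":      {"organisational_mandate": 2.6, "external_requirement": 1.3},
}

def score_research_use(observed: dict) -> dict:
    """Sum the points for the subactions observed in a policy document,
    returning one score per research use domain, checklist-style."""
    return {domain: round(sum(SUBACTION_POINTS[domain][s] for s in subs), 2)
            for domain, subs in observed.items()}

# A document that shaped the core decision and was used tactically:
doc = {"instrumental": ["shaped_core_decision"],
       "tactical": ["persuaded_stakeholders", "justified_decision"]}
print(score_research_use(doc))  # {'instrumental': 3.0, 'tactical': 4.0}
```

Because the points come from expert-elicited utilities rather than equal weighting, a document that performs one high-utility subaction can outscore one that performs several low-utility ones in the same domain.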