Learning outcomes and evaluation metrics for training researchers to engage in science policy


In conducting a review of relevant literature, we focused on academic and professional fields whose purview is the education and training of individuals, whether acting in an individual or organizational capacity, to engage in policy and politics. This restricted our search to fields that define themselves as largely serving to educate practitioners, as opposed to generating academic research. We first identified relevant fields based on co-author expertise and a literature search, and subsequently conducted keyword searches to locate articles on learning outcomes and evaluation metrics. We identified three broad domains: (1) science communication training, (2) public affairs and lobbying education, and (3) an emerging multidisciplinary scholarship on training researchers to engage in policy. Below, we review the literature from each of these domains to identify the learning outcomes and associated correlates they describe as important for training researchers for policy engagement. Examples of the measures these fields have developed to assess these constructs are included in Tables A1–A3 of the Supplementary Materials.

Science communication training

Globally, post-World War II discourses about the relationship between science and society have increasingly emphasized the expectation that government funding for research should lead to specific public gains (Guston, 2000), including improved economic growth, workforce education, industrial innovation, and decision-making. In the U.S., for example, the National Science Foundation’s (NSF) Directorate for Technology, Innovation and Partnerships (TIP) describes its mission as advancing “use-inspired” research (NSF, 2024) that both generates new knowledge and has immediate societal application (Stokes, 1997). As wider audiences participate in the production and use of science, and as decision-makers and other stakeholders increasingly expect scientific and technical information to be understandable, the demand for scientists and engineers to communicate their work to lay audiences has spurred the creation of science communication training opportunities (Newman, 2019). In the U.S. alone, Muindi and Luray (2023) identified more than 330 training opportunities for scientists in public engagement, 60% of which were categorized as science communication. Notably, 40% of the public engagement trainings were coded as focusing on policy and advocacy. Indeed, one of the primary stated goals for science communication, according to the National Academies of Sciences, Engineering, and Medicine, is “to influence people’s opinions, behavior, and policy preferences” (NASEM, 2017, 18). These goals align with the disciplinary roots of science communication in strategic communication (Besley and Dudo, 2022), which has been described as “the purposeful use of communication by an organization to fulfill its mission” (Hallahan et al., 2007, 3).

While science communication training has been criticized for lacking a theoretically supported and evidence-based approach (NASEM, 2017), in recent decades a body of scholarship has emerged to illuminate what outcomes trainers and researchers are trying to achieve through these efforts and to assess the extent to which they are attaining those goals (Besley et al., 2020; Rodgers et al., 2020; Dudo et al., 2021). As a result, the field of science communication has begun to develop typologies and associated measures of learning outcomes—such as knowledge and skills—that may also be relevant to training researchers to engage in policy. We summarize these below. Baram-Tsabari and Lewenstein (2017) mapped six learning goals for science communication training that align with the educational literature’s focus on three domains—cognitive, affective, and behavioral (Kraiger et al., 1993; David and Baram-Tsabari, 2019)—which are also often the foci when evaluating individual participant outcomes (W.K. Kellogg Foundation, 2004; J. D. Kirkpatrick and Kirkpatrick, 2016). Their six strands—adapted from Bell et al. (2009)—comprise affective responses, content knowledge, methods, reflection, participation, and identity. As described by David and Baram-Tsabari (2019), these strands can be incorporated into a four-level evaluation model, aligned with the Kirkpatrick training program assessment methodology, to measure reaction, learning, behavior, and results (D. L. Kirkpatrick, 1967; J. D. Kirkpatrick and Kirkpatrick, 2016). Reaction captures how participants perceive the program, including their views of the experience and their attitudes; learning equates to the acquisition of new content knowledge and skills; and behavior refers to the extent to which these new practices are implemented following training. Finally, results constitute “the entirety of the outcomes and evaluates them in light of the program’s initial aims and goals” (David and Baram-Tsabari, 2019, 180). The authors do not recommend specific evaluation measures or attempt to summarize what knowledge, skills, or affective responses could fall under these goals, but other scholars have attempted to do so.
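
To make the mapping concrete, the sketch below shows one way an evaluator might tag survey items with both a Kirkpatrick level and a learning strand, so that responses can be summarized under either framework. The item wording and the level/strand assignments are our own illustrative assumptions, not instruments from the cited studies.

```python
# Illustrative only: tagging hypothetical survey items with both a Kirkpatrick
# evaluation level and a Baram-Tsabari & Lewenstein learning strand, so that
# responses can be rolled up under either framework.

KIRKPATRICK_LEVELS = ("reaction", "learning", "behavior", "results")
STRANDS = ("affective responses", "content knowledge", "methods",
           "reflection", "participation", "identity")

# Hypothetical item bank; wording and mappings are assumptions for illustration.
ITEMS = [
    {"text": "I enjoyed the training sessions.",
     "level": "reaction", "strand": "affective responses"},
    {"text": "I can explain my research without jargon.",
     "level": "learning", "strand": "content knowledge"},
    {"text": "In the past month I gave a public talk about my work.",
     "level": "behavior", "strand": "participation"},
]

def items_by(key: str, value: str) -> list:
    """Filter the item bank by 'level' or 'strand'."""
    return [item for item in ITEMS if item[key] == value]

print([i["text"] for i in items_by("level", "learning")])
```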

A challenge in developing measures to assess whether learning outcomes have been met is that the many potential contexts in which communication occurs will inevitably cause those outcomes to vary. While “know your audience” is a longstanding informal adage, it also holds true in the development of more formal strategic communication objectives (Besley and Dudo, 2022), implying that the skills, knowledge, and approach needed by the communicator may differ considerably depending on whether they are conducting a media interview, giving a talk to a local organization, working with a community to co-produce new knowledge, or testifying to Congress. Aurbach et al. (2019) reviewed the science communication literature to identify foundational communication skills that are broadly applicable regardless of differences in communicators, contexts, and audiences. Each of their nine categories represents a set of skills (Table 1), ranging from identifying the researcher’s overarching goals and potential audiences to messaging, crafting narratives, visual design, nonverbal communication, writing, and engaging in dialog. Subsets of these skills also largely map to Baram-Tsabari and Lewenstein’s (2013) learning outcome typology and methodology for assessing science communication writing (Table 1; Table A-1, Supplementary Materials).

Table 1 Outcomes and correlates from science communication training.

Attempts to evaluate outcomes from specific training events have met with mixed success (Rubega et al., 2021; Capers et al., 2022), demonstrating the difficulty of both designing and implementing measures that accurately capture the learning outcomes of interest. In a series of studies, Rubega et al. (2021) and Capers et al. (2022) tested communicator speech clarity, engagement, and credibility using external audiences as evaluators (Table 1; Table A-1, Supplementary Materials). While these studies did not find pre- to post-training effects in the audience evaluations, they did find reduced use of jargon (Capers et al., 2022).
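
As a minimal sketch of what a jargon-use metric can look like, the following compares the fraction of jargon words in a hypothetical pre- and post-training excerpt. The word list and excerpts are fabricated for illustration; the cited studies used more rigorous instruments.

```python
# A toy jargon-rate metric: the fraction of a transcript's words that appear
# in a hand-curated jargon list. Both the list and the excerpts below are
# fabricated assumptions, not the instrument used by Capers et al. (2022).
import re

JARGON = {"anisotropic", "phenotype", "eigenvalue", "stochastic"}

def jargon_rate(transcript: str) -> float:
    """Return the proportion of words in the transcript found in JARGON."""
    words = re.findall(r"[a-z']+", transcript.lower())
    return sum(w in JARGON for w in words) / len(words) if words else 0.0

pre = "The anisotropic phenotype follows a stochastic growth process."
post = "The cells grow into different shapes somewhat unpredictably."
print(f"pre-training jargon rate:  {jargon_rate(pre):.0%}")
print(f"post-training jargon rate: {jargon_rate(post):.0%}")
```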

Rodgers and colleagues (2020) developed the science communication training effectiveness (SCTE) scale to assess a broader range of knowledge, affect, and behavioral variables (Table 1; Table A-1, Supplementary Materials). In implementing the scale pre- and post-training, they identified significant intervention effects on science communication self-efficacy, oral presentation self-confidence, and science communication knowledge. They also found that participants’ attitudes and motivations during training had a greater impact on the intervention’s outcomes than the attitudes and motivations participants held at the outset of the experience (Akin et al., 2021). The question of what motivates scientists to engage with lay audiences has been the subject of a number of studies (Poliakoff and Webb, 2007; Besley, 2015; Besley et al., 2018) and led to the development of self-efficacy and outcome expectations scales (Peterman et al., 2017) for the purpose of conducting training assessments (Table 1; Table A-1, Supplementary Materials), although to our knowledge, no results from the use of these assessments have been published.
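
The pre/post design these evaluations describe lends itself to a paired analysis. Below is a minimal sketch, with fabricated scores and no claim to reproduce Rodgers et al.’s actual models, of testing whether mean self-efficacy changes after training.

```python
# A paired pre/post comparison of self-efficacy ratings, in the spirit of
# SCTE-style evaluations. Scores are fabricated placeholders; the cited
# study's actual instrument and statistical models are not reproduced here.
from scipy import stats

pre_scores  = [3.1, 2.8, 3.5, 2.9, 3.3, 3.0, 2.7, 3.4]   # mean Likert ratings
post_scores = [3.9, 3.4, 3.8, 3.6, 4.0, 3.5, 3.2, 3.9]   # same participants

# Paired t-test: does the mean within-person change differ from zero?
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```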

Public affairs education and lobbying

While the focus of science communication training is on preparing researchers to strategically engage with lay audiences, including decision-makers, to achieve their individual public engagement goals, the multidisciplinary field of public affairs takes an organizational- and systems-level perspective. Further, it specifically focuses on influencing policy processes, drawing from the academic disciplines of organization and management science, political science, public administration, policy analysis, and communication (Timmermans, 2020). While the field does not focus on training scientists and engineers per se, the extent to which most policy issues currently involve scientific and technical expertise (Fischer, 2009) suggests that research experts would be a relevant audience. As John F. Kennedy wrote: “Lobbyists are in many cases expert technicians and capable of explaining complex and difficult subjects in a clear, understandable fashion” (1956). Over the last few decades, articles in the Journal of Public Affairs, Journal of Public Affairs Education, and Interest Groups and Advocacy have explored how to design educational curricula to train public affairs professionals and lobbyists (Newcomer and Allen, 2010; Griffin and Thurber, 2015; Holyoke et al., 2015; Powell and Saint-Germain, 2016; Timmermans, 2020). Accreditation standards from the Network of Schools of Public Policy, Affairs, and Administration (NASPAA) mandate that master’s programs address five domain competencies for student learning (2023, 7):

1. the ability to lead and manage in the public interest;

2. to participate in, and contribute to, the policy process;

3. to analyze, synthesize, think critically, solve problems and make evidence-informed decisions in a complex and dynamic environment;

4. to articulate, apply, and advance a public service perspective;

5. to communicate and interact productively and in culturally responsive ways with a diverse and changing workforce and society at large.

The requirement by NASPAA that programs conduct evaluation has spurred models for assessing public affairs education and learning outcomes (Newcomer and Allen, 2010). Newcomer and Allen identified short-term outcomes from classroom learning and field experiences, such as the acquisition of new knowledge and skills and their subsequent use in employment. In their theoretical model, these short-term outcomes are moderated by enabling student characteristics, such as self-efficacy and reflection, that result in longer-term public service outcomes. But the authors did not present standardized metrics for assessing these outcomes. Instead, programs have taken a diversity of approaches, ranging from reviews of student performance on assignments and exams to surveys and student ratings of instruction (Williams, 2002; Piskulich and Peat, 2014; Powell and Saint-Germain, 2016). Jennings (2019) pointed to the difficulty of assessing public affairs competencies and outcomes—“a challenging, complex, and messy affair” (p 15)—as the reason that few formal studies have been conducted. He recommended looking to the field of public education as a model for more sophisticated analyses.

Among the few formal studies described in the literature are some that focus on external engagement components of public affairs education. For example, Sprague and Percy (2014) sought to assess practicum experiences at Stanford University in which students work in small groups to conduct policy analyses for a local government or nonprofit. In a post-graduation survey spanning five years of classes, the authors asked graduates to rate their skills before and after the practicum, as well as the subsequent usefulness of those skills. Alternatively, to assess service-learning programs, Levesque-Bristol and Richards (2014) created a short form of the Public Affairs Scale measuring civic learning, which they define as community engagement, cultural competence, and ethical leadership.

Similar to science communication, in which defining goals and objectives is a foundational skill (Aurbach et al., 2019), public affairs and lobbying also take a strategic approach (Fleisher, 2005; Griffin and Thurber, 2015; Holyoke et al., 2015; Timmermans, 2020). Many of the learning outcomes relate to the need to collect and analyze information in planning a course of action, such as considering policy issue dimensions, stakeholders, governance institutions, communication media and content, and ethical and legal considerations (Table 2). As Wippersberg et al. (2015) stated, students should be able to “analyze politics and policies, and align both with corporate goals” (p. 58). Scientists and engineers in industry and government may readily recognize the need to understand their organization’s interests—and rules—when conducting policy engagement; academic researchers may be less cognizant, but they too can face consequences, depending on their institution’s rules and applicable legal strictures (CSLDF, 2021; CSLDF, 2022). As such, the institutional-level perspective of public affairs has broad application to all researchers seeking to engage in policy. Timmermans (2020) describes this learning outcome as understanding the extent to which issues present not just risks, but also opportunities for organizations. Lastly, the public affairs literature recognizes that policy engagement is a team sport that entails working and communicating within a broad system of other institutions and stakeholders, and their associated networks. As a result, public affairs learning outcomes include relationship and coalition building, ethics, communication and message development, and an understanding of governance institutions and policy processes (Table 2). These skill sets align with calls for improvements to evidence-based policymaking research and processes, in which scholars have increasingly argued for taking a systems perspective (National Research Council, 2012).
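
Newcomer and Allen’s model, discussed above, posits that enabling characteristics such as self-efficacy moderate the link between training and short-term outcomes. A minimal sketch of how such a moderation claim is commonly tested follows, using fabricated data and assumed variable names rather than anything drawn from the cited study.

```python
# A hedged sketch of a moderation test: regress a short-term outcome on
# training, self-efficacy, and their interaction. A significant interaction
# term is evidence of moderation. All data and variable names are fabricated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "training": rng.integers(0, 2, n),          # 0 = untrained, 1 = trained
    "self_efficacy": rng.normal(3.5, 0.6, n),   # Likert-style score
})
# Simulate an outcome in which training helps more at higher self-efficacy.
df["skill_use"] = (0.3 * df["training"]
                   + 0.2 * df["self_efficacy"]
                   + 0.4 * df["training"] * df["self_efficacy"]
                   + rng.normal(0, 1, n))

model = smf.ols("skill_use ~ training * self_efficacy", data=df).fit()
print(model.summary().tables[1])  # the interaction coefficient tests moderation
```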

Table 2 Outcomes and correlates from public affairs education.

Training researchers to engage in policy

A third area of literature, aligned with the highly multidisciplinary Use of Research Evidence (URE) field (Farley-Ripple et al., 2020), focuses on training researchers to engage in policy as a component of evidence-based policymaking processes. Recommendations for the curricular content of these trainings include: combining direct instruction with experiential learning to improve researchers’ knowledge and skills in policymaking processes; conducting policy-relevant research; analyzing policy; building relationships; communication; understanding how policymakers use research; voter attitudes and ideologies; researchers’ preferences for varying types of engagement; and knowledge of lobbying regulations (Scott et al., 2019; Crowley, Scott, Long, Green, Israel et al., 2021). While a few such programs have publicly shared their internal program evaluations (Alberts et al., 2018; Bankston et al., 2023), to our knowledge only one—Penn State’s Research-to-Policy Collaboration model—has conducted experimental evaluations of the program’s impact on both researchers and legislative offices (Crowley, Scott, Long, Green, Giray et al., 2021; Crowley, Scott, Long, Green, Israel et al., 2021).

Two constructs that frequently appear in the literature are: (1) differing types of evidence use in policymaking contexts (e.g., instrumental, conceptual, strategic, tactical, imposed) (Scott et al., 2019; Crowley, Scott, Long, Green, Giray et al., 2021; Crowley, Scott, Long, Green, Israel et al., 2021; Long et al., 2021); and (2) the varying roles that researchers can take in engaging in policy (Steel et al., 2000; Steel et al., 2004; Singh et al., 2014). Each construct describes an important dimension of the contexts in which researchers and decision-makers interact. Crowley, Scott, Long, Green, Israel, et al. (2021) found that congressional offices that participated in the program with trained researchers were not only more likely to value the idea of using research to inform how policies are understood, but also increased their use of research as assessed through the analysis of legislative texts. Researchers who participated in the program also demonstrated greater involvement in supporting congressional policymakers’ conceptual and imposed use of evidence, the latter referring to instances when policymakers are required or requested to use evidence (Crowley, Scott, Long, Green, Giray, et al., 2021). Measures of researchers’ preferences among possible policy-engagement roles—simply reporting study findings, working closely with policymakers to integrate information into policy, or acting as issue advocates—have not, to our knowledge, been included in assessments of training. But these types of survey questions have been included in descriptive studies of attitudes toward engagement (Singh et al., 2014) and have been discussed within the context of developing training curricula (Scott et al., 2019). Measuring behavior changes post-intervention—including both policy engagement on the part of researchers and evidence use by policymakers—has often been the outcome of most interest, but typically relies on self-reports (Rocha, 2000; Alberts et al., 2018; Crowley, Scott, Long, Green, Giray et al., 2021). Other constructs of interest related to whether researchers choose to engage in policy, and to their experiences in doing so, include: competence perceptions or self-efficacy, outcome expectations or response efficacy, and perceived social norms (Table 3). However, it is important to note that, likely due to the highly multidisciplinary nature of this literature and its lack of shared theoretical foundations, the terms used to describe and measure constructs are highly variable.
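
Whatever labels a given study attaches to constructs such as self-efficacy, multi-item survey scales are the typical measurement approach, and checking a scale’s internal consistency is a routine first step. Below is a minimal sketch computing Cronbach’s alpha on fabricated Likert responses; the item set and data are assumptions, not drawn from the studies cited above.

```python
# Cronbach's alpha for a hypothetical self-efficacy scale: a standard check
# of internal consistency across scale items. All responses are fabricated.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: respondents x items matrix of Likert responses."""
    k = item_scores.shape[1]
    item_var_sum = item_scores.var(axis=0, ddof=1).sum()
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# 6 hypothetical respondents x 4 hypothetical self-efficacy items (1-5 scale)
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 3, 3, 2],
    [4, 4, 5, 4],
    [3, 2, 3, 3],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```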

Table 3 Outcomes and correlates from training researchers to engage in policy.

Comparative themes across literatures

Many learning outcomes appear across two of the literatures; however, only one appears across all three of the literatures we have reviewed here: communication skills (Table 4). Furthermore, the fields of public affairs and “training researchers to engage” emphasize skills in building relationships as a distinct additional dimension. These two fields also underscore the importance of teaching knowledge and skills related to policy processes, and to policy research and analysis. Other areas of consensus across at least two of the literatures include: self-efficacy and outcome expectations as correlates (all three literatures), social norms as a correlate (science communication/training researchers), an understanding of legal requirements (public affairs/training researchers), and taking a strategic approach (science communication/public affairs).

Table 4 Key themes across the three literatures; bolded themes appear in two or more of them.
