Abstract
To achieve key national security objectives, the U.S. government and the U.S. Department of Defense (DoD) must communicate effectively and credibly with a broad range of foreign audiences. DoD spends more than $250 million per year on inform, influence, and persuade (IIP) efforts, but how effective (and cost-effective) are they? How well do they support military objectives? Could some of them be improved? If so, how? It can be difficult to measure changes in audience behavior and attitudes, and it can take a great deal of time for DoD IIP efforts to have an impact. DoD has struggled to assess the progress and effectiveness of its IIP efforts and to present the results of these assessments to stakeholders and decisionmakers. To address these challenges, a RAND study compiled examples of strong assessment practices across sectors, including defense, marketing, public relations, and academia, distilling and synthesizing insights and advice for the assessment of DoD IIP efforts and programs. These insights and attendant best practices will be useful to personnel who plan and assess DoD IIP efforts and those who make decisions based on assessments, particularly those in DoD and Congress who are responsible for setting national defense priorities and allocating the necessary resources. In addition to identifying where and why efforts have been successful, assessment can help detect imminent program failure early on, saving precious time and resources. An accompanying volume, Assessing and Evaluating Department of Defense Efforts to Inform, Influence, and Persuade: Handbook for Practitioners, offers a quick-reference guide to the best practices presented here for personnel responsible for planning, executing, and assessing DoD IIP efforts.
http://www.rand.org/pubs/research_reports/RR809z1.html
Assessing and Evaluating Department of Defense Efforts to Inform, Influence, and Persuade
Desk Reference
by Christopher Paul, Jessica Yeats, Colin P. Clarke, Miriam Matthews
Related Topics: Civil-Military Relations, Information Operations, Military Budgets and Defense Spending, Program Evaluation, Psychological Warfare
Downloads: free ebook (PDF, 3.4 MB); support file: Metaevaluation Checklist (zip, 0.1 MB)
Research Questions
What are good practices for assessing U.S. Department of Defense (DoD) inform, influence, and persuade (IIP) efforts in terms of their effectiveness, cost-effectiveness, and the extent to which they support the larger goals of military campaigns?
What can IIP planners and assessment practitioners learn from commercial marketing, public communication, academia, and other sectors, and which approaches are particularly applicable to DoD IIP activities?
How can planners ensure that stakeholders and decisionmakers receive the necessary information about the outcomes of IIP efforts, presented in the right way?
How can DoD better support the assessment of IIP efforts? And how could better assessments lead to more effective and efficient IIP efforts?
Key Findings
Across Sectors, Best Practices for Assessing Efforts to Inform, Influence, and Persuade Adhere to a Handful of Common Principles
Effective assessment requires clear, realistic, and measurable goals.
Effective assessment starts in planning.
Effective assessment requires a theory of change or explicit logic of the effort that connects activities to objectives.
Change cannot be measured without a baseline.
Assessment over time requires continuity and consistency.
Assessment is iterative, not something planned and executed once.
Assessment requires resources, but any assessment that reduces uncertainty is valuable.
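The baseline principle above can be illustrated with a minimal sketch. The survey counts here are hypothetical, not data from the study: a pre-effort baseline measure of favorable attitudes is compared with a post-effort follow-up using a pooled two-proportion z-test, so that an observed shift can be distinguished from sampling noise.

```python
from math import sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Z-statistic for the difference between two proportions
    (e.g., favorable-attitude rates at baseline vs. follow-up)."""
    p_a = success_a / n_a
    p_b = success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: 120/400 favorable at baseline,
# 180/400 favorable after the IIP effort.
z = two_proportion_z(120, 400, 180, 400)
print(round(z, 2))
```

Without the baseline measurement (the first pair of numbers), the follow-up figure alone says nothing about change, which is the point of the principle.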
DoD Has Historically Struggled to Assess the Progress and Effectiveness of Its IIP Efforts
There is a lack of shared understanding about how IIP efforts function, which broadens the scope of the assessment questions asked. Good accountability assessments would show not only that these efforts support broader military campaign and national security goals but also how they do so.
In complex operating environments, IIP efforts often face constraints, disruptors, and unintended consequences. Good assessment can help predict these challenges and overcome them when they do arise.
Good assessment can support learning from both success and failure. Well-designed, early assessment can help identify problems and get a struggling IIP effort on a path to success.
Organizations that do assessment well have cultures that value assessment. Organizing for assessment involves dedicating the necessary resources to the assessment process (5 percent is a common benchmark); ensuring leadership buy-in, advocacy, and willingness to learn from assessment results; training assessment personnel; and implementing a system of continuous assessment, data collection, and program change in response to assessment results.
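The theory-of-change principle noted above, connecting activities to objectives with a measure at each stage, can be sketched as a simple data structure. The effort, stages, and measures below are hypothetical examples, not drawn from the report:

```python
# Minimal logic-model sketch for a hypothetical IIP effort: each stage
# links what is done to what it is expected to produce, with a measure.
logic_model = [
    {"stage": "activity", "description": "Broadcast radio messages",
     "measure": "hours aired"},
    {"stage": "output", "description": "Messages reach target audience",
     "measure": "estimated listenership"},
    {"stage": "outcome", "description": "Audience attitudes shift",
     "measure": "survey favorability vs. baseline"},
    {"stage": "impact", "description": "Support for campaign objective",
     "measure": "observed cooperative behavior"},
]

# A complete model covers every stage, so assessment can trace
# effects from activities through to objectives.
stages = [step["stage"] for step in logic_model]
assert stages == ["activity", "output", "outcome", "impact"]
for step in logic_model:
    print(f'{step["stage"]:>8}: {step["description"]} '
          f'(measure: {step["measure"]})')
```

The value of such a structure is that a missing stage or a stage with no measure is immediately visible, which is what makes the resulting assessment questions answerable.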
Recommendations
DoD planners should develop IIP efforts according to assessment plans and should develop assessment plans according to stakeholder and decisionmaker needs.
Assessment practitioners should be explicit about their need for resources, information about campaign objectives, organizational support, and stakeholder expectations.
DoD leadership should ensure that IIP assessment efforts have the necessary advocacy, standards, doctrine and training, and access to expertise. Leaders also need to recognize that not every assessment must be conducted to the same standard.
DoD leadership should support the development of a clearinghouse of validated (and rejected) IIP measures to encourage sharing of successful approaches and learning from mistakes.
Congressional stakeholders should continue to demand accountability in assessment and be clearer about what is required and expected.
DoD reporting must acknowledge that congressional calls for accountability follow two lines of inquiry and must show how assessment meets them. Congress wants to see justification for spending and evidence of efficacy (traditional accountability), but it also wants support for assertions that IIP activities are appropriate military undertakings.
RELATED PRODUCTS
Report
Assessing and Evaluating Department of Defense Efforts to Inform, Influence, and Persuade: Handbook for Practitioners
Apr 17, 2015
Report
Assessing and Evaluating Department of Defense Efforts to Inform, Influence, and Persuade: An Annotated Reading List
Apr 17, 2015
Table of Contents
Chapter One
Identifying Best Practices and Methods for Assessment
Chapter Two
Why Evaluate? An Overview of Assessment and Its Utility
Chapter Three
Applying Assessment and Evaluation Principles to IIP Efforts
Chapter Four
Challenges to Organizing for Assessment and Ways to Overcome Them
Chapter Five
Determining What’s Worth Measuring: Objectives, Theories of Change, and Logic Models
Chapter Six
From Logic Models to Measures: Developing Measures for IIP Efforts
Chapter Seven
Assessment Design and Stages of Evaluation
Chapter Eight
Formative and Qualitative Research Methods for IIP Efforts
Chapter Nine
Research Methods and Data Sources for Evaluating IIP Outputs, Outcomes, and Impacts
Chapter Ten
Surveys and Sampling in IIP Assessment: Best Practices and Challenges
Chapter Eleven
Presenting and Using Assessments
Chapter Twelve
Conclusions and Recommendations
Appendix A
Assessing Assessments: The Metaevaluation Checklist
Appendix B
Survey Sampling Models and Management
Appendix C
Evaluating Inform, Influence, and Persuade Efforts: Examples and Additional Resources
Appendix D
Major Theories of Influence or Persuasion