
Establishing Evaluation Measures

Guidance on how to develop, document, and maintain evaluation measures that gauge the impact (or lack thereof) of program efforts.


Before one can address evaluation measures, one must be clear about program goals and objectives, for those are ultimately what will be assessed.  With this initiative, the overall goal for health professionals is to help improve the health of individuals who have both depression and diabetes.  The primary objective is to remove depression as an impediment to effective management of one’s diabetes, with a particular focus on women and minorities.

What measures will tell you that you are on the right course in pursuing those objectives, or what changes need to be made to achieve them?  A number of activities may be undertaken to make patients with diabetes more aware that they are at increased risk for depression, that they should seek counsel from a health care provider or mental health counselor, that they should be screened, and, when appropriate, that they should receive sustained treatment.

Also subject to evaluation are some of the process outcomes.  How many key stakeholders responded affirmatively to joining you as a program partner or coalition member?  How many senior centers or diabetes support groups agreed to welcome your education or screening efforts?  How much exposure have you generated for key educational messages?

The target audience for health professionals is mainly adults with diabetes who are at risk of depression, with special emphasis on women and minorities who disproportionately suffer from this co-morbidity.

Practical Evaluation Approaches for this Initiative

Evaluation measures for program activity and public health outcomes will depend on many things, including capabilities and resources.  Large-scale surveys, for example, will likely be beyond the capability of many health departments to conduct.  Discrete surveys may not be.  For example, if an objective is to increase an understanding of this co-morbidity among attendees in five senior centers, it may be possible to develop a questionnaire to determine awareness and understanding as a baseline, and then repeat the questionnaire at a later date following a series of educational presentations.

Similarly, if there is a diabetes support group, a questionnaire may be developed to establish a baseline to see how many patients have talked to their health care providers about depression, and repeat the questionnaire at a later point following educational intervention.
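The baseline-and-follow-up comparison described above reduces to a simple calculation: the proportion of respondents showing the desired awareness or behavior before the educational intervention versus after it.  The sketch below is a minimal, hypothetical illustration; the `awareness_rate` helper and the response data are invented for this example, not part of any standard evaluation toolkit:

```python
# Hypothetical questionnaire tallies: each entry records whether a respondent
# reported having talked to a health care provider about depression (True/False).

def awareness_rate(responses):
    """Proportion of respondents answering affirmatively."""
    return sum(responses) / len(responses) if responses else 0.0

# Illustrative data only; real figures would come from the questionnaires.
baseline = [True, False, False, True, False, False, False, True]
followup = [True, True, False, True, True, False, True, True]

change = awareness_rate(followup) - awareness_rate(baseline)
print(f"Baseline:  {awareness_rate(baseline):.0%}")
print(f"Follow-up: {awareness_rate(followup):.0%}")
print(f"Change:    {change:+.0%}")
```

With real data, one would also note the number of respondents and whether the same individuals answered both rounds, since a small or shifting sample limits what the change in proportions can show.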

Qualitative measures of the impact of stakeholders as partners or coalition members could also serve as evaluation measures.  How many partners or coalition members joined and remain active after a certain period of time?  How many initiated programs or projects as a result of their participation?

Tapping into stakeholder activity is another area where measures could be established.  For example, every October, many key mental health organizations support National Depression Screening Day, with screening sites identified throughout the country.  Promoting these sites to patients with diabetes and encouraging the sites to ask whether those screened have diabetes may be useful baseline information for evaluation measures, and an opportunity to form a productive partnership.

Guidance from the Experts

If you are new to evaluation measures, there are a number of sources that can provide you with guidance and examples.

In 1999, the Centers for Disease Control and Prevention (CDC) published in MMWR a “Framework for Program Evaluation in Public Health.”  This was the product of a major undertaking involving scores of CDC staff, staff from state and county health departments, academic institutions, non-profit voluntary health organizations, and experts in the field.  It was prompted by the need to foster accountability for public health actions and to provide a vehicle for improving program activity.  It also noted that while program evaluation is an essential organizational practice, it is not carried out consistently across public health program areas.  The report:

  • summarizes essential elements of program evaluation;
  • provides a framework for conducting effective evaluation;
  • clarifies the steps in program evaluation;
  • reviews standards for effective program evaluation; and
  • addresses misconceptions regarding the purposes and methods of program evaluation.

The framework comprises six key steps in evaluation practice, together with standards for effective evaluation:

1. Engage the stakeholders
2. Describe the program
3. Focus the evaluation design
4. Gather credible evidence
5. Justify conclusions
6. Ensure use and share lessons learned.

The overall CDC report can be accessed at:

The W.K. Kellogg Foundation has an Evaluation Toolkit that describes 20 different approaches to evaluation and includes a section to help you select the one that is right for you.  It also addresses questions an evaluation may include, as well as guidance for developing a plan and a budget.  Go to the Foundation’s Web Home Page, click on the “Knowledgebase” bar, and scroll down to “Toolkit.”  Click on that and go to three toolkits, one of which is Evaluation.

The National Respite Network and Resource Center has a series of fact sheets on program evaluation, along with a manual on evaluating and reporting outcomes.  Although the topic is respite care, there is relevant guidance on evaluation measures per se, on local program evaluation, and on developing evaluation questions.  They can be found at:  From that fact sheet, you can click on selected “Related Factsheets.”

A third source of tips and insights comes from a consulting firm, Authenticity Consulting, Inc., which specializes in non-profit capacity building.  It produced a “Basic Guide to Outcomes-Based Evaluation for Nonprofit Organizations with Very Limited Resources.”  The document provides guidance on basic planning and implementation of an outcomes-based evaluation process.  It also includes common myths to be aware of before you start planning:

  • Myth:  Evaluation is a complex science.  I don’t have time to learn it.
  • Myth:  It’s an event to get over with and then move on!
  • Myth:  Evaluation is a whole new set of activities – we don’t have the resources.
  • Myth:  There’s a right way to do outcomes evaluation.  What if I don’t get it right?
  • Myth:  Funders will accept or reject my outcomes plan.
  • Myth:  I always know what my clients need – I don’t need outcomes evaluation to tell me whether I’m really meeting the needs of my clients.

The guide includes six steps:

1. Getting Ready
2. Choosing Outcomes
3. Selecting Indicators
4. Planning Data/Info Collection
5. Piloting/Testing
6. Analyzing/Reporting Results

The guide can be accessed at: