ResearchHighlights

April 2000

Inside This Issue:

Evaluation plan can make or break funding proposals for sponsored projects

Kathleen Sullivan, Associate Professor in the School of Education, is an active researcher herself, and, as Director of the UM Center for Educational Research and Evaluation, she’s also a significant resource for UM researchers looking to make their funding proposals more competitive in an increasingly crowded field. More and more, a solid evaluation plan built into the project can make the difference between “funded” and “not funded,” and according to Sullivan it shouldn’t be that hard to write. “Really,” she says, “it’s common sense.” Perhaps it’s not so for everyone, but Sullivan’s advice is easy enough to take and might have a big pay-off.

Things began to change in the 1970s, Sullivan says. “It used to be, ‘take the money and do good.’ But the impacts weren’t always measured or communicated, so many programs lost support.” Now Congress, state legislators, and the public in general are demanding accountability: Is our money being well spent? Do we know what we’re paying for, and how are we going to be sure we got it? An evaluation plan acknowledges that need for accountability and makes it part of the structure of the project. The requirement may, of course, be explicit in program guidelines or review criteria, most obviously in programs with an education focus or a social services or outreach component. With all funding programs under close scrutiny, however, even “fundamental” or “basic” science proposals can benefit from a clearly stated plan for tracking impact on the research and training environments.

Basically, Sullivan says, an evaluation plan can be integral to the development of the project itself. PIs know (and probably need to articulate) goals for the project. The evaluation component is the means by which progress toward and achievement of those goals can be monitored and communicated. “How can you tell that things are going the way they should be going?” Sullivan recommends building specific “pulse-taking” measures into the project. For example, if the project targets a specific population and promises a specific impact, figure out what information will be needed to make sure the project stays on track, efforts are documented, and outcomes are measurable.

Measurability is the hard part, Sullivan admits. Figuring out how to structure a goal so that outcomes can be measured takes some practice. Sullivan and her Center colleagues can help with that, and are happy to do so, even for projects in which the Center will not play a role. In general, think in numbers: how many students will learn how many technologies over the course of the project, for example? How many of the students will be from populations traditionally underrepresented in the area? Measurable outcomes are sometimes implicit in the goals of the program as announced in the guidelines.

In general, a good evaluation plan is part of a good proposal and part of the good management of a project. “It assures the agency that the project is being managed in a rational way and personnel are collecting data that feeds back into ‘quality improvement’ during the process.” The agency, in other words, sees accountability written into the project. Since the agency itself is most likely accountable to Congress or a governing body of some sort, having accountability spelled out in the proposal lets the funder know what it’s paying for and promises the data the funder will need for making its own arguments for future funding.

Sullivan urges PIs to incorporate evaluation from the beginning of the project. It should be part of the timeline (“integral to the flow” of the project, Sullivan says), and part of the personnel and budget for the project. And how much will all this cost? Not every project needs an evaluator, she notes, but everyone needs to be thinking about project evaluation from the beginning. Collecting the data and monitoring progress toward outcomes “should be part of somebody’s job description (or everybody’s),” Sullivan says. For projects whose primary focus is not on education or outreach, an evaluation plan doesn’t have to be extensive or elaborate, so the cost (keeping up with the documentation that measures impacts) can be an aspect of the PI/Co-PI function.

Sullivan and her Center can provide advice during the planning stages, gratis. Sullivan says, “The PIs have to set the direction,” of course. You can’t evaluate a project without goals, and you have to know what outcomes are expected, but Sullivan can assist with putting things in measurable terms and offering general guidance. “Even the most esoteric concepts can be brought down to measurable terms,” she claims.

For education/outreach projects, however, or for projects requiring multiple measurements, evaluation should show up in the proposed budget, covering the mechanisms, staffing, and expertise needed for identifying, collecting, and reporting appropriate data and for writing the summative evaluation. The agency won’t “hold it against you” for putting evaluation into the budget for the project, Sullivan says. “They expect to see it.” For bigger projects, evaluation probably needs to be formalized, with specific meetings to discuss how the project is going and to feed evolving insights back into the program. “Readers [of proposals] can tell whether people are thinking ‘good management’ practices.”

For several UM proposals, Sullivan’s input and expertise have already played a role in attracting funding. John O’Haver, Chemical Engineering, serves as PI for two programs in which Sullivan is evaluator and for which she wrote the evaluation plan. O’Haver says, “I attribute the successful funding of both the GK-12 and the statistics grants [both funded by the National Science Foundation] to her part in writing the evaluation. In both reviews they specifically mentioned the strength of having her do the evaluation.” Ken McGraw, Psychology, agrees that including an evaluator in a complex program is a good idea. He says, “There are two problems with doing your own evaluation. First, you can’t possibly be objective enough. Second, you won’t have the time. Using an evaluation expert like Kathleen Sullivan is definitely the way to go.”

Sullivan has been engaged in constructing and conducting evaluation activities for about 15 years now. Her Ph.D. from UM in Higher Education and Student Personnel builds on her earlier Master’s from Emory in Education Research and Evaluation. Sullivan joined the UM faculty in 1998, after thirteen years with the Mississippi Joint Legislative Committee on Performance Evaluation and Expenditure Review, first as a research methodologist and then as manager of the Evaluation Division. The UM Center for Educational Research and Evaluation, with Sullivan as director, provides research and evaluation services to local, state, and federal agencies, as well as serving as a resource for other UM departments and programs.

Sullivan stresses her interest in talking to faculty who are trying to include evaluation in their proposals (good advice, and free). She’s also interested in involving more faculty in the work of the Center. She can be reached at 915-5017 or ksull@olemiss.edu.

External Grants and Contracts/Awards 1999-2000

Awards Received

January Awards $2,070,940 18 Awards for January
February Awards $5,212,529 24 Awards for February
YTD $25,255,099 184 Awards year-to-date

13% increase in total number of awards year-to-date

Proposals Submitted

January 2000 $21,130,634 42 Proposals Submitted
February 2000 $6,242,295 30 Proposals Submitted
YTD $106,677,371 273 Proposals year-to-date

44% increase in total number of proposals submitted year-to-date
