4a – Summary & Evidence

Return to Standards Home Page

4a. Conduct needs assessments to inform the content and delivery of technology-related professional learning programs that result in a positive impact on student learning.

The first step toward any good plan for system-wide change in our thinking about professional development is gathering good data to establish baselines for measuring growth and to diagnose training needs and interests. As technology becomes more integrated into other subjects, as it should be, we may need data that goes beyond tool-usage statistics: a way to evaluate the quality of instruction using technology, and measures of technology's impact on student learning. In Professional Development: What Works, Zepeda (2012) states, "The success of professional development is based on the extent to which change occurs." She suggests two fundamental questions that we should ask:

  1. How does the leader know when transfer of new practice has occurred?
  2. How does the leader know whether or not student achievement has increased?

Coaches may see a teacher they've worked with put a new practice into place, but that can happen over a long period of time as the teacher becomes more comfortable with the technology. I find student achievement more difficult to measure, and I feel I need to learn more about how to separate out the effect of the technology alone.

Last fall my department developed a survey to assess our staff's needs around resources and professional development. Here's a link to both the Elementary and Secondary surveys, without the data. We had an impressive 40% return rate from staff, so we felt we had a good representation.

It was a good start, and I learned a lot from developing the survey as part of our course (EDTC 6106). Here are some of my takeaways:

  1. It is difficult to write questions without bias. There were definitely things we wanted to know, which is why we did the survey, and there were projects we wanted to go forward with that we hoped the data would support. That probably caused us to slant some questions or leave out questions we didn't want to hear the answers to. Going forward, I think it is important that we develop clear criteria for what we want a project or training to accomplish from the beginning. With clear learning targets and outcomes, we can not only design learning experiences that reach our goals but also write evaluation questions based on those criteria.
  2. As coaches we are one step removed from the classroom, but our programs and trainings should ultimately have an impact on student learning. Challenging ourselves to write questions that can measure that impact could be useful. The impact may not be immediately obvious and will be hard to separate from other changes in instructional practice, which makes measuring it difficult but not impossible.
  3. It is useful to have a tool that lets you break data down in various ways and by various groups. We used Advanced Summary, an add-on to Google Forms, which let us filter data by building, grade level, and other factors. Of course, you have to remember to collect data on those factors in the first place in order to filter by them. We wish we had included more detailed questions about job roles so we could have filtered answers by job as well.
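The kind of breakdown described in takeaway 3 can be sketched in plain Python once the Forms responses are exported. This is only an illustration, not how Advanced Summary works internally; the column names ("Building", "Grade Level", "PD Interest") and the sample rows are hypothetical stand-ins for whatever factors a survey actually collects.

```python
from collections import Counter

def summarize(rows, group_by, answer_col):
    """Count answers in answer_col, broken down by the group_by factor."""
    summary = {}
    for row in rows:
        # One Counter of answers per group (e.g. per building)
        summary.setdefault(row[group_by], Counter())[row[answer_col]] += 1
    return summary

# Hypothetical responses standing in for a Google Forms CSV export
rows = [
    {"Building": "Elementary", "Grade Level": "3", "PD Interest": "Slides"},
    {"Building": "Elementary", "Grade Level": "5", "PD Interest": "Coding"},
    {"Building": "Secondary",  "Grade Level": "9", "PD Interest": "Coding"},
]

by_building = summarize(rows, "Building", "PD Interest")
print(by_building["Elementary"]["Coding"])  # 1
```

The point of the sketch is the last paragraph of the takeaway: you can only pass "Role" as `group_by` if a role question was on the survey in the first place.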




Zepeda, S. (2012). Professional development: What works (1st ed., p. 32). New York: Taylor & Francis.

← Return to 4. Professional Development & Program Evaluation Page

→ To Indicator 4b