
Formative evaluation (of the design)

Dick, W., Carey, L., & Carey, J. O. (2005). The Systematic Design of Instruction (6th ed.). Boston, MA: Allyn & Bacon.

OK, this chapter in Dick and Carey caught me off guard. I was expecting a discussion of how to design formative assessments that provide regular feedback (self-checks) to the learner, and instead discovered that formative evaluation provides feedback to the designer for the purpose of revision. Once I got over the surprise, the types of design feedback were straightforward and made a lot of practical sense (with the caveat that conducting all five types is impractical without a large learner population and a LONG development schedule to warrant it).

Here’s the complete formative evaluation plan; Dick and Carey augment this outline with a series of very useful (and generic) instruments and outlines, especially Table 10.3 (which summarizes the types) and the examples and surveys at the end of the chapter.

  1. Specialist evaluation (SME, learning specialist, learner specialist)
  2. Clinical evaluation (one-to-one)
    • Select three representatives of the target population: at least one above average, one average, and one below average in task ability.
    • Clarity criteria
      • Message
      • Links (contexts, examples)
      • Procedures
    • Impact criteria
      • Attitude
      • Achievement
    • Feasibility criteria
      • Learner
      • Resources
    • Interactive nature of clinical
      • Establish rapport
      • Encourage the learner to talk as they work through the material
      • Ask the learner why they made a specific choice after each assessment step
  3. Small group evaluation
    • Select 8–20 representatives of the target population from multiple subgroups (ability, language, delivery familiarity, age, etc.).
    • Attitude survey
  4. Field trial evaluation
    • Select 30 representatives of the target population.
    • Designer is observer only.
    • Attitude survey (of context)
  5. Performance (in-context) evaluation
    • Key questions:
      • Do learners find the new skills appropriate in an authentic context?
      • What has been the impact on the organization?
      • What do learners (and the community) recommend for improving the instruction?
    • Allow time to pass after instruction so that new skills have a chance to be used.

The final type seems identical to Kirkpatrick’s 4th level of evaluation and the basis for (what little I know about) 360° evaluation. At UTTC, because we typically deal with existing material (in some form), the caveat that formative evaluation of selected materials can proceed directly to a field trial makes the process less daunting. I also appreciated the suggestion that instruction delivered with no formative evaluation can still benefit from applying the field-trial model. The best suggestion of all was almost buried in the final section on design disagreements: “Let’s have the learners tell us the answer.”
