Evaluation of education

Evaluating Education Effectively: Practical Tips and Examples

Evaluation of education is crucial for understanding the effectiveness of instructional design, delivery, and outcomes. This site is intended for university teachers and managers who want to evaluate both degree programmes and education for professional learners (Lifelong Learning, LLL). It outlines key evaluation frameworks and provides tips and examples. Whenever an item is specific to LLL offerings, we indicate this.

Evaluation Frameworks

Effective evaluation of educational programs requires a structured approach to ensure meaningful and consistent results. Established evaluation frameworks help align assessments with learning goals and organizational priorities. They offer a systematic way to plan, carry out, and analyze evaluations, reducing bias and keeping the focus on outcomes that truly matter, and they support informed decision-making and continuous improvement by providing reliable data. A clear and consistent method makes evaluations more credible, comparable across programs, and ultimately more useful for enhancing the quality and impact of both academic and professional learning initiatives.

You can use this overview to help select the most suitable framework for your context:

| Context | Best-Fit Framework(s) | Why Use It |
| --- | --- | --- |
| Academic courses focused on conceptual topics | LTEM, Kirkpatrick (Levels 1–2), CIPP | LTEM evaluates learning depth; CIPP supports instructional planning; Kirkpatrick captures learner satisfaction and knowledge gain. |
| Training aimed at job performance improvement (LLL) | Kirkpatrick (Levels 3–4), LTEM (Tiers 6–8) | Focuses on behavioral change and results. |
| Programs needing accountability and continuous improvement | CIPP, Guskey | CIPP aligns with planning and evaluation cycles; Guskey provides structured feedback levels. |
| Executive education or leadership training (LLL) | Brinkerhoff SCM, Kirkpatrick | SCM identifies real-world success stories; Kirkpatrick assesses application and value. |
| Certification or compliance training | LTEM, Kirkpatrick, Pre/Post-Tests | LTEM supports transfer and recall; Pre/Post-Tests show objective learning; Kirkpatrick validates training effectiveness. |

Evaluation Frameworks Explained:

  • One of the most widely used models for evaluating training and education is the Kirkpatrick Model, which consists of four levels (Kirkpatrick & Kirkpatrick, 2006):

    1. Reaction – How participants felt about the learning experience.
    2. Learning – The degree to which knowledge or skills increased.
    3. Behavior – The extent of behavior or performance change.
    4. Results – The final results, such as improved performance or ROI.

    While originally designed for corporate training, this model is adaptable to academic settings. For university students, Levels 1 and 2 are often easier to assess (e.g., via course evaluations and exams), whereas for professionals, Levels 3 and 4 may be more relevant (e.g., applying knowledge at work, organizational impact).

  • Thalheimer (2018) developed LTEM to address some of the limitations of Kirkpatrick’s model by emphasizing meaningful learning and actual transfer to real-world settings. LTEM comprises eight tiers:

    1. Attendance
    2. Activity
    3. Learner Perceptions
    4. Knowledge Retention (Basic)
    5. Knowledge Retention (Decision Making)
    6. Transfer to Work/Tasks
    7. Transfer Fidelity
    8. Results / Impact

    LTEM is especially helpful for professional education where learning must be applied on the job. For students, Tier 4 or 5 assessments are often used, but for professional learners, Tier 6–8 data is more desirable.

  • Stufflebeam’s CIPP model (Stufflebeam & Zhang, 2017) is frequently used in program evaluation. Unlike outcome-only models, CIPP evaluates across the lifecycle of a program:

    • Context – What needs are being addressed?
    • Input – Are resources and plans appropriate?
    • Process – How well is the program implemented?
    • Product – What are the results and are they being used?

    This model is highly flexible, making it useful in both student and professional settings. It is especially helpful in curriculum design, accreditation, and program improvement.

  • Several other frameworks offer complementary perspectives and tools for evaluating educational programs:

    Guskey’s Five Levels:
    Useful for evaluating professional development initiatives, with a focus on systemic change and support.

    1. Participants’ reactions
    2. Participants’ learning
    3. Organizational support and change
    4. Participants’ use of new knowledge and skills
    5. Student learning outcomes

    Brinkerhoff’s Success Case Method:
    Focuses on finding the most and least successful cases to understand what works.

    • Best used when qualitative stories can complement quantitative data.
    • Helps uncover context-specific factors behind success/failure.

    Utilization-Focused Evaluation (Patton):
    Centers the evaluation design on its utility for key stakeholders.

    • Good for collaborative curriculum development or when multiple stakeholder needs must be addressed.
    • Iterative and adaptive.

    These approaches can be integrated with Kirkpatrick, LTEM, or CIPP depending on your goals, audience, and data availability.

Evaluation Methods

Selecting the right method to evaluate educational effectiveness is crucial for generating meaningful insights. Different methods serve different purposes, from capturing immediate learner feedback to assessing long-term impact on behavior and performance. The following list provides a brief overview of widely used evaluation methods, highlighting how they contribute to understanding the success and areas for improvement in educational programs.

Evaluation Methods Explained:

  • Surveys and questionnaires are used to gather self-reported feedback from learners on satisfaction, perceived learning, and applicability. They can include Likert scales and open-ended questions. More in-depth information on how to design a survey can be found below!

  • For degree programmes, summative assessment (for grading) is an integral part of the curriculum, and its results are available as useful input for evaluations. More information on how to analyse assessment results can be found here.

    However, summative assessment is not always part of LLL or extracurricular offerings for students. Evaluating knowledge or skill gains can then be done by comparing learner performance before and after instruction, through simulations, case studies, or real-world tasks that require learners to demonstrate application of knowledge or skills. This is very similar to the exams of (some) degree programmes, but without the summative weight (grading). The outcomes give important insights into the effectiveness of the educational offering.
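    As a rough sketch, the before/after comparison described above can be quantified with a normalized gain (Hake's gain): the fraction of the possible improvement that each learner actually achieved. All scores below are hypothetical; replace them with your own assessment data.

```python
# Sketch: estimating learning gain from pre- and post-instruction scores.
# Scores are hypothetical (percent correct on the same or equivalent tasks).

def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Hake's normalized gain: (post - pre) / (max - pre)."""
    if max_score == pre:  # learner already at ceiling; no room to improve
        return 0.0
    return (post - pre) / (max_score - pre)

pre_scores = [40, 55, 60, 35, 70]   # hypothetical pre-test results
post_scores = [70, 80, 75, 65, 90]  # hypothetical post-test results

gains = [normalized_gain(pre, post) for pre, post in zip(pre_scores, post_scores)]
average_gain = sum(gains) / len(gains)
print(f"Average normalized gain: {average_gain:.2f}")
```

    A gain around 0.3–0.7 is commonly read as a meaningful improvement; reporting the per-learner spread alongside the average avoids hiding learners who did not progress.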

  • Interviews are conducted one-on-one or in small groups to explore deeper insights about learning experiences and perceived impact. They can be fully open or structured; a semi-structured approach is most useful (specific questions prepared in advance, with space for unplanned follow-up questions).

  • Focus Group or Panel Meetings are structured discussions with selected participants to gather qualitative feedback on a programme's (or course's) strengths and improvement areas. At the UT, most (bachelor) degree programmes organise student panel sessions per module; for LLL offerings this can also be very valuable.

    During a panel discussion, you can focus on points of attention you would like comments and/or suggestions on. During these conversations, you can ask questions in great detail and obtain specific information.

    A Focus Group or Panel Meeting can provide useful information at almost any point during the run of a course/module, as long as it is up and running so you can discuss the experiences with the participants.

    Tip:
    Make sure you don't bring up too many topics in the conversation! Choose a few main topics that you want to go deeper into. You can touch on other topics, but don't dwell on them for too long; otherwise, the conversation will take too long and participants will lose concentration.

  • Observations can be very valuable and help you reflect on and improve your teaching and course quality. They offer direct insights into classroom dynamics, instructional strategies, and student engagement. You don't need complex tools or formal protocols, but a structured approach helps to focus and guide the observation.

    Practical tips:

    1. Start Small and Purposeful

    Decide what aspect of your teaching you’d like to explore. For example:

    • Are students actively participating?
    • Is the pacing of your lecture effective?
    • Are your instructions clear during group activities?

    Focusing on one or two questions makes the process manageable and meaningful.

    2. Invite a Colleague

    Ask a fellow teacher or educational consultant to sit in on a class. Let them know what you’d like feedback on. A short observation (20–30 minutes) is often enough.

    You can also do a self-observation by recording a session (with student consent) and watching it later with your goals in mind.

    3. Use a Simple Observation Template

    Create a basic checklist or note-taking guide. For example:

    • What teaching methods were used (lecture, discussion, group work)?
    • How did students respond (engaged, distracted, asking questions)?
    • Were instructions and transitions clear?

    This helps focus the feedback and makes it easier to spot patterns.

    4. Reflect Using Gagné’s Nine Events of Instruction

    Gagné’s model offers a helpful lens for reflection. His nine events describe key steps in effective teaching, such as:

    1. Gaining attention
    2. Informing learners of objectives
    3. Stimulating recall of prior learning
    4. Presenting the content
    5. Providing guidance
    6. Eliciting performance
    7. Providing feedback
    8. Assessing performance
    9. Enhancing retention and transfer

    You can use these events to reflect on whether your lesson supported learning at each stage. CELT has an observation template available based on Gagné's model; you can ask our consultants about this.

    5. Reflect and Adjust

    After the observation, take time to reflect:

    • What went well?
    • What surprised you?
    • What might you try differently next time?
  • Data extracted from learning management systems (LMS) can be used to analyze engagement, completion rates, time spent, etc. The options at the UT are limited at the moment, but you can track attendance and assignment completion by hand yourself or in Canvas.
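    As a minimal sketch of what such hand-tracked data can tell you, the records below (entirely made up; in practice exported from Canvas or your own attendance sheet) are summarized into attendance and completion rates:

```python
# Sketch: summarizing attendance and assignment completion records.
# All records and totals below are hypothetical.

records = [
    {"student": "A", "sessions_attended": 8, "assignments_done": 5},
    {"student": "B", "sessions_attended": 6, "assignments_done": 4},
    {"student": "C", "sessions_attended": 9, "assignments_done": 5},
]
TOTAL_SESSIONS = 10
TOTAL_ASSIGNMENTS = 5

# Overall rates across all students
attendance_rate = sum(r["sessions_attended"] for r in records) / (TOTAL_SESSIONS * len(records))
completion_rate = sum(r["assignments_done"] for r in records) / (TOTAL_ASSIGNMENTS * len(records))
print(f"Attendance: {attendance_rate:.0%}, assignment completion: {completion_rate:.0%}")
```

    Even simple rates like these make drops in engagement visible early, which is the main value of LMS analytics at this scale.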

  • Return on Investment (ROI) for education can be interpreted in two ways: based on value and impact, and based on money. Financial ROI is mostly of interest to management and clients. We offer some practical tips for both:

    ROI of Value and Impact

    1. Time and Effort vs. Learning Outcomes

    • Are students gaining skills and knowledge that match the course goals?
    • Is the time invested by teachers and learners producing meaningful learning?

    2. Cost vs. Impact

    • Are the resources (staff time, materials, tools) justified by the course’s effectiveness?
    • Could similar outcomes be achieved more efficiently?

    3. Learner Progress and Application

    • Do students apply what they learn in real-world contexts (e.g., internships, jobs, research)?
    • Are professionals able to use new skills in their work soon after training?

    4. Satisfaction and Retention

    • Are students or professionals satisfied with the course?
    • Do they continue in the program or recommend it to others?

    Data and resources you can use:

    • Reflect on effort vs. outcome: Are learners achieving the intended learning goals with reasonable effort?
    • Gather feedback: Ask learners how they use what they’ve learned and whether it was worth their time.
    • Track engagement: Use simple analytics (e.g., resource usage, participation) to see if course design supports learning efficiently.
    • Compare formats: If you teach similar content in different formats (e.g., online vs. in-person), compare outcomes and effort.

    Financial ROI in University and Professional Education

    ROI = (Net Benefit / Cost) × 100%
    Where:

    • Net Benefit = Financial gains or savings resulting from the education
    • Cost = Total investment (e.g., tuition, staff time, materials, infrastructure)
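    The formula above is simple arithmetic; as a worked example (with entirely hypothetical figures), suppose a course costs €20,000 to develop and deliver and yields an estimated €27,000 in productivity gains and savings:

```python
# Sketch of the ROI formula: ROI = (Net Benefit / Cost) × 100%.
# All figures below are hypothetical.

def roi_percent(net_benefit: float, cost: float) -> float:
    """Return on investment as a percentage of cost."""
    return net_benefit / cost * 100

gains = 27_000   # estimated financial gains or savings (hypothetical)
cost = 20_000    # total investment: staff time, materials, etc. (hypothetical)
net_benefit = gains - cost  # the benefit net of what was invested

print(f"ROI: {roi_percent(net_benefit, cost):.0f}%")  # prints "ROI: 35%"
```

    Note that the net benefit is the gains minus the cost; reporting the gross gains as "net benefit" would overstate the ROI.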

    For University Education

    • Graduate Earnings: Compare average post-graduation salaries to the cost of the degree.
    • Employment Rates: Higher employability can signal strong ROI.
    • Time to Graduation: Shorter time frames reduce costs and increase early career earnings.
    • Retention and Completion: Fewer dropouts mean better use of resources.

    For Professional Education

    • Productivity Gains: Does the training lead to measurable improvements in work output or efficiency?
    • Reduced Errors or Costs: Are trained professionals making fewer costly mistakes?
    • Career Advancement: Promotions or new roles resulting from training can be quantified.
    • Client or Project Impact: For consultants or engineers, ROI may be tied to project success or client retention.

    Data you can use:

    • Cost of course development and delivery (staff hours, tools, materials)
    • Participant costs (fees, time investment)
    • Post-course outcomes (salary increases, promotions, performance metrics)
    • Organizational impact (efficiency, innovation, retention)

    Considerations and Limitations

    • ROI is easier to calculate in professional contexts than in academic ones.
    • Long-term benefits (e.g., critical thinking, innovation) are harder to quantify.
    • Ethical and societal value (e.g., equity, sustainability) may not be reflected in financial ROI.

Successfully applying evaluation frameworks requires thoughtful alignment with learner characteristics, institutional goals, and instructional design. The following strategies can help implement these frameworks effectively:

Designing Effective Surveys for Educational Evaluation

UT Guiding Principles for Student Surveys

For most degree programmes at the UT, a survey is already conducted for each course and/or module (the SEQ). Contact your programme management for specific information regarding your context before sending out a survey yourself.

Surveys are common tools for educational evaluation, but their design must be intentional to yield reliable and useful results. The most important tips and guidelines to take into account are the following:

More tips and examples can be found here:

  • ✅ Clear purpose and target audience
    ✅ Balanced mix of quantitative and qualitative items
    ✅ Simple, unambiguous wording
    ✅ Logical flow and layout
    ✅ Estimated completion time under 10 minutes
    ✅ Pilot tested
    ✅ Data collection and analysis plan in place

  • Overview of common survey question types:

    | Question Type | Description | Pros | Cons |
    | --- | --- | --- | --- |
    | Likert Scale | Respondents rate agreement (e.g., 1–5 scale) | Easy to analyze; familiar to users | May not capture nuance; central tendency bias |
    | Multiple Choice | Choose one or more predefined answers | Quick to answer; easy to quantify | Can limit options; poorly designed choices may confuse |
    | Open-ended | Free-text response | Rich qualitative insight; captures unanticipated feedback | Time-consuming to analyze; risk of low response quality |
    | Ranking | Prioritize items in order | Helps understand preferences or priorities | Difficult for respondents; challenging to interpret at scale |
    | Dropdown/Select | Pick from a collapsible list | Saves space in long surveys | Less visible; may hide important choices |
    | Yes/No or Binary | Simple two-option questions | Quick and easy to analyze | Lacks nuance; not always suitable for complex questions |
    | Semantic Differential | Rate on a scale between two bipolar adjectives (e.g., Useful — Useless) | Allows for subtle distinctions in attitude | May be misunderstood or misinterpreted by some respondents |
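    Likert responses are quick to summarize once collected. As a small sketch (the responses below are hypothetical, coded 1 = very dissatisfied to 5 = very satisfied), report the full distribution alongside the mean, since central tendency bias can make the mean alone misleading:

```python
# Sketch: summarizing Likert-scale responses (hypothetical data, 1-5 scale).
from collections import Counter
from statistics import mean, median

responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]  # hypothetical survey answers

distribution = dict(sorted(Counter(responses).items()))  # answers per scale point
print("Distribution:", distribution)
print(f"Mean: {mean(responses):.1f}, Median: {median(responses)}")
```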

  • Likert Scale:

    How satisfied are you with the course materials?

    [ ] Very dissatisfied
    [ ] Dissatisfied
    [ ] Neutral
    [ ] Satisfied
    [ ] Very satisfied

    Multiple Choice:

    What type of learning format do you prefer?

    [ ] In-person
    [ ] Online
    [ ] Blended

    Open-ended:

    What did you find most useful in this training session?

    __________________________________________________

    Ranking:

    Please rank the following learning methods in order of preference (1 = most preferred):

    [ ] Group work  [ ] Lectures  [ ] Hands-on activities  [ ] Online modules

    Dropdown:

    Select your programme:

    [ ▼ Select one ▼ ]

      - Mechanical Engineering

      - Business Administration

      - Health Sciences

      - Electrical Engineering

    Yes/No:

    Did the training meet your expectations?

    ( ) Yes  ( ) No

    Semantic Differential:

    Rate the usefulness of the training:

    Useless — — — — — — — Useful

  • This sample survey aligns with Thalheimer's Learning-Transfer Evaluation Model (LTEM) and Performance-Focused Learner Surveys by addressing multiple levels of learning effectiveness, from engagement to knowledge application, in actionable formulations. This tiered approach encourages evaluation beyond satisfaction and helps capture application and long-term learning.

    As this framework was originally developed for professional learning, examples of survey questions tailored to that context are available online, e.g. Example Learner Survey » Work-Learning Research

    Tier 1 – Attendance and Participation

    1. How often did you attend class sessions (in person or online)? [Multiple choice]

    • 100% (All sessions)
    • 75–99%
    • 50–74%
    • Less than 50%

    Tier 2 – Activity/Engagement

    2. How actively did you participate in learning activities (e.g., discussions, group work)?

    • I rarely participated and found it difficult to stay engaged.
    • I occasionally participated and sometimes found the activities useful.
    • I frequently participated and felt somewhat more confident because of it.
    • I consistently participated and can now apply what I practiced in class.
    • I actively led learning activities and used them to reinforce my understanding outside the course.

    3. Which course activities helped you engage most with the material? [Multiple choice]

    • Lectures
    • In-class discussions
    • Online forums
    • Group assignments
    • Case studies
    • Other: ______

    Tier 3 – Learner Perceptions

    4. How would you describe the relevance of the course content to your academic or personal goals?

    • The content had little or no relevance to my goals.
    • I can now see some connections between the content and my goals.
    • The content was moderately relevant and supported my academic direction.
    • The content was highly relevant and helped me make concrete progress.
    • The content directly shaped or redefined my academic or personal goals.

     5. After completing the course, how would you summarize your learning experience?

    • I did not find the experience useful or satisfying.
    • It was somewhat useful but left me with unanswered questions.
    • I learned key things, but the experience could be improved.
    • I found the course engaging and feel better equipped.
    • This was a transformative learning experience for me.

    Tier 4 – Knowledge Retention (Basic)

    6. Now that you’ve completed the learning experience, how well do you feel you understand the concepts taught?

    • I am still at least somewhat confused about the concepts.
    • I am now somewhat familiar with the concepts.
    • I have a solid understanding of the concepts.
    • I am fully ready to use the concepts in my work.
    • I have an expert-level ability to use the concepts.

    7. Which of the following course concepts do you feel you understood well?

    [Add a checklist of key topics]

    Tier 5 – Decision-Making Competence

    8. How prepared do you feel to apply what you learned in academic or real-world scenarios (e.g., projects, internships)?

    • I am not sure how I would apply what I learned.
    • I have a vague idea of how it might apply.
    • I feel somewhat prepared to apply the learning in the near future.
    • I am prepared and already have a plan for application.
    • I have already started applying the learning and seen results.

    9. Please describe a situation where you applied or could apply a concept from this course. [Open-ended]

    Tier 6/7 – Transfer to Academic/Real Tasks (Optional for Follow-Up)

    10. Have you applied concepts from this course in other academic tasks (e.g., research, assignments in other courses)? [Yes/No]

    If yes, briefly describe how. [Open-ended]

    Tier 8 – Results (Optional, Post-Course)

    11. Has this course influenced your academic direction, career interests, or skill development goals? [Open-ended]

  • This survey is designed for professional learners and follows the CIPP (Context, Input, Process, Product) evaluation model. It helps educators and organizations gather meaningful feedback throughout the learning experience. The open-ended questions are structured to capture the relevance, effectiveness, and real-world impact of the program. By aligning with the CIPP framework, this survey supports continuous improvement and helps ensure that training investments yield measurable outcomes.

    Context Evaluation

    What specific needs or problems in your role led you to enroll in this learning program?

    How well did the course align with your personal or organizational goals?

    Input Evaluation

    To what extent were the resources (materials, time, technology, support) adequate for successful learning?

    Was the content appropriate for your level of experience and role responsibilities?

    Process Evaluation

    How would you rate the effectiveness of the instructional methods (e.g., case studies, interactive activities, peer learning)?

    Were there any barriers that limited your engagement during the program? Please describe.

    Product Evaluation

    What specific skills or knowledge did you gain as a result of this program?

    Have you already applied what you learned on the job? If so, describe an example.

    What measurable outcomes or changes have occurred in your work as a result of this training?

    What additional support or follow-up would help you sustain or enhance the application of this learning?

At the UT we use Crowdtech for surveys. Within Crowdtech, the set-ups and question types shown in these examples can all be constructed.

Additional information and further reading

References and sources

Contact Information

For questions or support regarding evaluation of education, please feel free to reach out via the contact information provided below: