Data strategy

Plan learning objectives, the timeframe of expected change, and data collection methodology.

Program Planning with Evaluation in Mind

At Canva University we believe in demonstrating impact through learning outcomes. That means we consistently measure and report on the impact of our programs in a way that is meaningful to our stakeholders. We do this by focusing on learning transfer, which is the process of putting learning to work in a way that improves performance.
To find out more about the importance of focusing on learning transfer, see the linked reading.
To do this, before any program is established, our learning designers/partners work with stakeholders to identify the learning needs, objectives and measurement criteria for success using the @Program Planning Doc (demo).

Bloom’s Taxonomy and the Kirkpatrick Model

Globally, we use Bloom's Taxonomy to create our objectives and use the Kirkpatrick model to measure our success across all programs:
Level 1: Reaction – our learners' initial reactions to the material, the facilitator and the overall experience
Level 2: Learning – how much information was effectively absorbed during the training, measured against the learning objectives of the experience, session or program
Level 3: Behaviour – how much the training has influenced participants' behaviour and how they've applied what they learned on the job
Level 4: Results – what impact the training has had at the business level

Here's an example of the standard we expect in our learning measurement for onboarding:
When measuring the success of the onboarding experience (30 - 90 days), we aim to measure enablement, which refers specifically to how quickly people become productive and how they experience our culture. To measure that, we need to break down the things that help people do their job.
Our formal reporting is done seasonally, in the last week of each season, and is submitted to the Head of Learning for discussion and review. The reporting template will be circulated in a calendar invite but will roughly look like this. The report is then sent to our stakeholders for feedback and input. We expect our learning designers and partners to report on:
Overall satisfaction with the program
How many people went through the program
Summary of what's working well / what could be improved
Risk factors: Are there any areas that are at risk of breaking, or are not performing as we expected? Are there any upcoming initiatives that might interrupt learning transfer for participants, or impact the training overall?
Sample size of respondents
Specific demographics: makeup of participants' specialties (if relevant), tenure and team
Learning transfer
Learning application
Impact on business outcomes
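To make this less manual, part of the seasonal report could be assembled straight from a survey export. Below is a minimal sketch assuming a hypothetical CSV and column names (participant_id, satisfaction_1_to_5, specialty, tenure_band); the real export from our survey tool will look different.

```python
# Minimal sketch only: "bootcamp_survey_responses.csv" and all column names are
# hypothetical placeholders for whatever our survey tool actually exports.
import pandas as pd

responses = pd.read_csv("bootcamp_survey_responses.csv")

report = {
    "respondents": len(responses),                          # sample size of respondents
    "participants": responses["participant_id"].nunique(),  # how many people went through
    "overall_satisfaction": round(responses["satisfaction_1_to_5"].mean(), 2),
    "by_specialty": responses.groupby("specialty")["satisfaction_1_to_5"].mean().round(2).to_dict(),
    "by_tenure": responses["tenure_band"].value_counts().to_dict(),  # demographics breakdown
}
print(report)
```

Learning transfer, learning application and business impact would still need the qualitative methods below; this only covers the quantitative headline numbers.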

Data Collection Methods

Interviews: Interviews are a means of gathering 1:1 information in a structured or semi-structured way
Focus groups: Focus groups are conducted as a means of gathering thematic information in a scaled manner
Online surveys: Online surveys are used to gather quantitative and qualitative information from a larger and often remote audience
Profiling and assessment: This can be used when current and future capability needs to be identified. It can also be used to inform learner personas
Facilitated workshops: Used when needing to engage and identify themes from a key stakeholder group
Observation: Used to understand how work or tasks are performed, particularly when looking at job re-design or task-related information
Change readiness assessment, audit, review: Often used when reviewing processes, ecosystems and organisational capabilities
Desktop reviews: This includes reviewing existing client frameworks, models, strategy and job descriptions/role profiles that will inform other areas of the analysis

Other thoughts and ideas around data:
Longitudinal studies
6-Month Focus Group: We ask newbies directly about the impact of the training they've experienced in the past, curating our questions with reference to the past scores they gave us in their bootcamp surveys
Comparing against the industry benchmark: This will serve as our control group when we run AB testing.
AB testing: segmenting people into these quadrants and then measuring how they fare across different skills / competencies (see the sketch after this list):
has taken the mentoring course and has mentored outside of Canva
has not taken the mentoring course but has mentored outside of Canva
has not taken the mentoring course and has not mentored outside of Canva
has taken the mentoring course but has not mentored outside of Canva
Pilots: Google measures the effectiveness of its programs by comparing a pilot group to people who haven't received the training. They did this with their manager training program.
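If we ran that quadrant comparison, a minimal sketch in Python might look like the following. The file name, the column names (took_course, mentored_outside, competency_score) and the choice of a Welch's t-test are all illustrative assumptions, not an agreed approach or existing dataset.

```python
# Sketch only: "mentoring_cohort.csv" and the column names below are illustrative.
import pandas as pd
from scipy import stats

df = pd.read_csv("mentoring_cohort.csv")  # one row per person

# Label each person with one of the four quadrants above.
df["quadrant"] = (
    df["took_course"].map({True: "took course", False: "no course"})
    + " / "
    + df["mentored_outside"].map({True: "mentored outside", False: "no outside mentoring"})
)

# Headcount and average competency score per quadrant.
print(df.groupby("quadrant")["competency_score"].agg(["count", "mean"]).round(2))

# Simple check of whether course-takers score differently from non-takers overall.
took = df.loc[df["took_course"], "competency_score"]
did_not = df.loc[~df["took_course"], "competency_score"]
t_stat, p_value = stats.ttest_ind(took, did_not, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

Comparing all four quadrants (rather than just course vs. no course) helps separate the effect of the course from the effect of simply having prior mentoring experience.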

Poorly Performing Programs

When learning and development initiatives and programs fail to achieve their learning objectives – an observed change in people's behaviour or a shift in mindset – that's when we know there's something wrong. As learning practitioners, it's important to set clear objectives upfront about what we want people to do as a result of the program and what we expect to have changed in the organisation from one month to six months after participants have finished.
Some signals for poorly performing programs may be:
Low engagement, either in the form of participation or clickthroughs: this may mean we're not directly solving the problems that people have. We may not have been clear during our performance consultations about what prime performance should be, and may not be completely aware of the challenges people face within their roles.
Walt Disney Learning and Development Consultant David James says:
We've been fooled that classroom training is actually learning in the same way that we've chosen completion as a high metric of learning. It's simply presence and exposure. What we should be doing instead is fully understanding what people are trying to do and giving them support when they need it, tailored to their context and role. Let's measure efficacy not engagement

Low uptake of the mindsets, skills or behaviours that we've identified as learning objectives for people who attend programs.
Why leadership training fails – and what to do about it (Beer et al., 2016):
"Participants in corporate education programs often tell us that the context in which they work makes it difficult for them to put what they've taught into practice"

In most follow-up studies after leadership learning and development programs, most supervisors were found to have regressed to their pre-training views. If the system doesn't change, it will set people up to fail.

Unclear direction on strategy and values communicated in the programs.

Most of our programs and initiatives don't specify concrete, measurable behaviours and standards. For example, our definition of a mentor is very subjective and immeasurable. Is the purpose of being a mentor to connect someone to the right people, or is it to upskill someone rapidly in technical skills, potentially at the expense of your own output? Are they meant to coach you through problems, or act as a cultural ambassador helping you feel more included? Or all of these things? Mentoring has been reported to take up to 30% of a mentor's time – what happens when someone is spending 70% of their time mentoring someone?

Data Experimentation

If we ever wanted to get more experimental with our data, with the aim of forecasting trends or determining correlations between initiatives and success metrics, we would try to follow best-practice data standards and consider these questions:
How much data should we aim to collect – what proportion of people? (When comparing two cohorts you need one control group and one variant group, and each has to be large enough – roughly 100 people – for the normal distribution and law of large numbers to apply; see the sketch after this list.)
What other data collection methods do we need to get a holistic understanding of our programs?
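As a rough illustration of the sample-size question above, here is a minimal sketch using statsmodels' power analysis. The effect size, significance level and power values are illustrative assumptions, not agreed standards.

```python
# Sketch only: the effect size, alpha and power below are assumed values for illustration.
from statsmodels.stats.power import TTestIndPower

n_per_cohort = TTestIndPower().solve_power(
    effect_size=0.4,  # assumed small-to-medium difference between cohorts (Cohen's d)
    alpha=0.05,       # 5% false-positive rate
    power=0.8,        # 80% chance of detecting a real difference
)
print(f"~{round(n_per_cohort)} people needed in each of the control and variant cohorts")
```

With these assumptions the calculation lands at roughly 100 people per cohort, which lines up with the rule of thumb in the question above.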
Examples to research:
Google does a good job with learning data, so we might research their approach
Some ideas for eng onboarding:
1. In-session measure of learning
2. Overall satisfaction with bootcamp and sense of belonging to Canva / their team
3. Mentor / mentee reported accreditation of all material learned and demonstrated
4. Mentee confidence in speaking up and asking for help
How does this apply to you?
Canva University
What: Use the Program Planning doc to articulate your thought process behind planning out programs and the evaluation methods for them.
When: Before designing a program or when redesigning a program.
Why: To be clear about the intentions of the program: planning out the objectives, methods for evaluation and expected impact makes measurement and reporting easier.
What now: Start following this process when designing / redesigning programs.

What: Use the Bloom's / Kirkpatrick's sheet to break down the metrics of your program and how you'll evaluate success.
When: Before designing a program or when redesigning a program.
Why: To articulate the objectives and expectations at each learning level so we know what we're measuring, to prove the impact of our learning programs, and to understand areas of weakness in our current programs and iterate on them.
What now: Start following this process when designing / redesigning programs.

What: Use the Season Reporting Template to report back on the health of your program and its goals. The global dashboard with our Level 1 metrics should make this process easier.
When: The last week of each season.
Why: To report back to our stakeholders on our programs.
What now: Create a template form. Update our existing forms with demographics such as specialty, team, tenure (<6 months, 6-12 months, 1-2 years, 2-3 years, 3-4 years, 4-5 years, 5+ years) and age (20-24, 25-29, 30-34, 35-39, 40+). Start following this process and add in ways to measure Levels 2, 3 and 4 of the Kirkpatrick model.

What: Escalate to @Alan Chowansky or @Polly Rose if any program demonstrates signs of poor performance, e.g. 3 months of low engagement, lack of visibility over any changes, etc.
When: Whenever you notice the first signs.
Why: To prevent programs from breaking.
What now: Define poor performance when planning out or redesigning programs, and watch out for signs of poor performance.

Learning Admin
What: Update the Global Learning Dashboard.
When: Whenever new data comes in around attendance and satisfaction.
Why: To hold all our data in one space so we can refer to it when reporting.
What now: Update our MVP dashboard depending on what we need it to do, e.g. capture satisfaction? Merge with other sheets of data?
Data Strategy Templates
1. Program Planning doc: The purpose of the Program Planning doc is to plan and articulate the learning purpose and strategy for a Canva U program. Completing the doc will allow you to plan your work accordingly and ensure all Canva U programs are delivering the right outcomes in the most impactful way.
2. Bloom's Taxonomy and Kirkpatrick Model sheet: This sheet will help you determine the objectives of your program / learning event and articulate measurable outcomes at each of the 4 Kirkpatrick levels.
3. Global Learning Dashboard: The dashboards currently record attendance of all our sessions and programs.

