“How do you measure soft skills?”
This was the question filling the Yes Futures office last month as we hosted a discussion for over 40 attendees from non-profits, schools, government and other sectors.
The success of the event (read more below) has led us to launch a Measuring Soft Skills Discussion Forum, the first meeting of which will take place on Tuesday 2nd February.
The purpose of this discussion forum will be to bring together stakeholders from across the youth and education sectors to work on a set of shared principles underpinning effective measurement of soft skills. We are keen to hear from organisations supporting young people’s learning across formal and non-formal settings. The aim is to positively influence policy, helping all of us optimise our impact and guiding more resources towards this critical area.
What happened at Yes Futures’ How Do You Measure Soft Skills event?
The event was inspired by the significant interest received after Yes Futures won the 2015 Project Oracle Evidence Competition for our Talent Toolbox measurement tool, described as “a rare example of an innovative approach to measurement”.
Given the appetite in the sector for clarity and support with soft skills measurement, it came as no surprise that the room was full. Guided expertly by our Chair, Bethia McNeil (Director of The Centre for Youth Impact), the group set out to respond to the title question during the afternoon.
How do you measure soft skills? Of course, there is not a simple answer. Organisations working in the sector all do different things, in different ways, and they define their outcomes differently. ‘Soft skills’ is just one name for a collection of behaviours and attitudes defined in diverse ways, measured by different groups, for different goals, using different methods.
So where is there room for consistency and collaboration?
After identifying a common set of reasons why we all want to measure soft skills, from improving practices to securing funding, we broke up into groups to outline the risks and opportunities involved in soft skills measurement. We agreed on some key risks, including the possible effects of external factors on measurement, and the possibility that standardising measurement practices may be detrimental given the diverse situations in which they are used. We also outlined several opportunities highlighting the importance of effective measurement: for example, the capacity to recognise achievements made by beneficiaries, the ability to feed back into programme design, and the credibility that measurement can lend to soft skills. Two case studies helped to ground, nuance and give shape to these discussions.
Firstly, Ellie Garraway of Youth at Risk shared her experience of using Rosenberg’s self-esteem scale, with the aim of showing that attitudinal changes can occur in Youth at Risk’s beneficiaries and are correlated with actual life changes. The approach has allowed the organisation to get a better picture of what they do, to help young people articulate “being” instead of only “doing”, and to predict the impact of their programmes, which has been useful for funding bids.
In the second case study, Sarah Wallbank, our CEO, explained the journey taken in creating a new measurement tool for our students, the Talent Toolbox. It was developed in response to dissatisfaction with existing tools, which seemed to be either too time-consuming or too reliant on self-judgement scales (which, let’s be honest, are hard for adults to answer accurately, let alone young people!). The Talent Toolbox instead uses evidence-based responses and collects a combination of quantitative and qualitative data. It is an integral part of the Yes Futures programmes, meaning that young people are invested in using the tool, and its very use is an impactful part of their soft skill development.
Several desired ‘principles’ of effective measurement of soft skills were drawn out of these case studies and a final group discussion. Some of the initial principles discussed included:
Measurement should be integral to programme/development practice.
Measurement should prove impact.
Measurement should be proportionate to programmatic activities.
Measurement should be led by beneficiaries.
Measurement should be integral to the beneficiary’s journey.
Measurement should be comparable across interventions.
Measurement should be asset-based.
Measurement should be informed by collaboration with other organisations.
This last principle ultimately translated into a desire to keep the conversation going: as one attendee suggested, we were not going to “crack” soft skills measurement in three hours.
Still, we’ve made a start! If you would like to join the Measuring Soft Skills Discussion Forum in February to continue this discussion, please sign up here!