
As we have found in our discussion of knowledge work, problem-solving is a core skill. It is critical for knowledge workers because:
We often have lots of data that tells us about what went wrong, but we don’t necessarily know why it went wrong.
If you are looking at a system or a process, you are trying to understand exactly why something happened. Knowing what happened is helpful, but it usually takes human analysis to determine the why. Virtually all systems, processes, and tasks are guided, directed, and executed by humans, and we don’t have tools that can handle the infinite number of variables that affect human behaviour and choices. That is why human analysis is needed.
Mindwork is even harder to assess, since it happens internally and is not easily measured by outputs or behaviour.
As we have discussed before, mindwork is the manner in which a person naturally approaches knowledge work. In particular, mindwork is the instinctual thought process a person takes to reach a desired outcome. Thus, mindwork is intangible.
The issue with measuring knowledge work or mindwork is that we don’t have a clear idea of what productivity looks like. Watching someone work, you might think they’re being productive, but there could be a lot of extra nonsense going on in their head. You just do not know; you’re only aware of the outcome. So, we want to discuss ways you can attempt to measure mindwork through measurable outcomes.
There are a few ways you can try to measure mindwork:
I. Testing | Assessments | Certifications
First, you can use tests or assessments to determine someone's capability. Generally, this is going to be more theoretical. Sometimes there's practical information involved, but it is only practical in a theoretical or academic context.
For example, you could have an associate who takes a test on driving. Based on her score, you can conclude that she knows the rules of driving. But she might never have been behind the wheel of an actual car. It is a “practical” test in a theoretical context. Even if you created more dynamic questions, you aren’t really measuring her capability to drive well; you are measuring her ability to apply driving knowledge. Knowledge of driving is quite different from driving experience.
This is a knowledge-to-capability gap. Capability, in this context, is experience which is in alignment with the standards of quality and quantity desired. An employee might have a year of experience working with forklifts, but unless you can qualify what that means, you may have one of two undesirable possibilities:
The employee was around forklifts, and maybe touched one once. He worked with and around forklifts, but he always deferred to others with difficult or important loads. His experience isn’t proactive; it is hesitant and reactive. Translation: not great.
The employee actually worked with forklifts, and did so regularly. But his safety procedures were non-existent. He was reckless, damaged product, and even injured his peers. He was proactive and comfortable, but he is a menace to anyone in the same warehouse. Translation: super not great.
One of the things you can do to combat the knowledge-to-capability gap is