Intuition, Data, and Wall-Throwing

Intuition is an exceptionally powerful tool for managers, leaders, and engineers alike. What some have called the 'twinge', that sensation that something is off, can be a great indicator, and I'd argue we should trust it when it surfaces. It's our subconscious mind trying to poke our conscious mind into action with the oldest communication channel it knows: feeling.

However, there are some important steps to following through on instinct. Just as we trust those who report to us but verify that they're on the right path and that their model matches the team's, so too should we verify whether our intuition is flagging a true problem or a symptom of a deeper problem that is starting to surface. If we react instinctively and mitigate only the surface problems, without a deeper understanding of their cause, we risk leaving the underlying flaw festering, ready to throw further wrenches into the clockwork of our organizations.

How do we verify an emotion or a sensation? With data. As a younger engineer, I would often 'throw things at the wall to see what sticks', and in my field, I'd argue that's actually a good way to explore solution spaces. I've worked in fields that are highly specified, and fields which are less so, like games. Games require creative solutions, and things like 'fun' are less easily quantified than regulatory software requirements. The risk of wall-throwing is low in game creation (with some caveats): if it doesn't work, revert. No harm, no foul.

When managing, the risks of wall-throwing can be great. Every minor change in process can create instability in cohesion, organizational debt, and frustration among the people reporting to you. Clearly, though, process still needs to change, especially in a period of expansive growth. So, as leaders, how can we mitigate the risk?

The first step is recognizing that a problem exists. Sometimes we're lucky enough that this happens through intuition. Other times it results from a venting 1:1, or maybe the problem has gotten even worse than that. (If so, it may be time to improve your tools for recognizing that issues exist.)

Good job, you've hit step one: you've got signal. Something is wrong.

The second step is data collection. I know, it's boring. "But I want to take action now! Let's chuck some stuff against the wall and see what sticks." Hold on a second. How do you know it's being thrown against the right wall? How do you know it will stick for long enough? In fact, the worst kind of changes are the ones that stick long enough to take your mind off of them, but then slide down to the bottom of the wall and fall off. Additionally, large teams require more careful treading than small, agile groups. It's easy to pivot 3 people. It's not easy to pivot 300.

Data collection will help you verify that your signal is accurate and pointed in the right direction. Use those management skill sets and communicate. Talk to reports; gather survey data. Look at a few examples of whatever is sending you the bad signal. If it's a technical system, look at how it's used. If it's a people process, talk to the folks involved. Make sure it's more than one case, and if parallel problems arise, jot them down. You might start to notice an underlying trend across all of those parallel problems; now it's time to adjust your scope and solve the real problem underneath. As you collect this data, start formulating potential solutions and evaluating risk.
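Spotting that underlying trend can be as simple as tallying the themes you jot down. A minimal sketch, where the tags and the "more than one case" threshold are my own invented examples, not a prescribed tool:

```python
from collections import Counter

# Hypothetical tags jotted down from 1:1s, surveys, and incident notes.
observations = [
    "slow code review", "unclear ownership", "slow code review",
    "duplicate work", "unclear ownership", "unclear ownership",
]

# Tally recurring themes; anything seen in more than one case
# is a candidate for the "real problem underneath".
counts = Counter(observations)
recurring = {tag: n for tag, n in counts.items() if n > 1}
print(recurring)
```

Even a crude tally like this beats memory: it forces you to confirm the problem appears in more than one case before you act on it.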

There's an important consideration while pondering solutions that I want to underscore twice: no-action, or deductive, solutions are valid actions. I know it sounds crazy. We're paid to do things, not sit back and say nothing needs to be done. But sometimes the best action is inaction, particularly if the risk of change, weighed against the project lifecycle, isn't in a good place to facilitate change. Deductive solutions remove the problem by removing its cause entirely, rather than massaging it. Don't like that new Jira process you were using? Are there other ways to gain its benefits? Is it redundant? Eliminate it. (Just make sure you're doing so backed by data, not because you hate it.)

Your third step is application, but not quite at scale. Pilot the change: find a smaller team which typifies the problem but is easier to wrangle while testing a solution. Apply your solution to this sample group and collect more data. How does it trend? This doesn't need to be a formalized spreadsheet, but you should at least have some backing that your solution is pushing things in the right direction and that any drawbacks are acceptable.
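The trend check on a pilot can be lightweight. Here's a sketch with entirely hypothetical numbers (a weekly review-turnaround metric, in hours, that I've made up for illustration):

```python
# Hypothetical weekly metric from the pilot team (review turnaround, hours).
baseline = [52, 48, 50, 51]   # before the process change
pilot    = [47, 44, 41, 38]   # after the process change

def mean(xs):
    return sum(xs) / len(xs)

# Two rough questions: is the pilot better on average than baseline,
# and is the most recent week still improving rather than regressing?
improved = mean(pilot) < mean(baseline)
still_trending = pilot[-1] <= min(pilot[:-1])
print(improved, still_trending)
```

Nothing rigorous here, and it doesn't need to be: the point is having some backing for "pushing in the right direction" before you commit the whole organization.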

Your fourth step is tricky: deploying the solution at scale. Just like in software development, this is where the gremlins come out. We like to feel safe and secure with our sample sets and statistics, but when the solution space entails an order of magnitude more people, the 1% risk that you didn't encounter in your sample sets will come out of its lair, cackling and mocking you for your confidence.

If that 1% is worrisome, just imagine if we hadn't done any data collection at all. Throw those dice, and let's play some process craps.

Watch your scale carefully and be ready to pivot if needed. Consider scaling in layers, by sets of teams; it'll be easier to stamp out any fires that arise. Just like engineering solutions, management and leadership solutions tend not to survive orders of magnitude of application. Don't take it as a failure; recognize it as natural and adapt.
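The layered rollout reads like a staged deployment loop. A sketch of the shape, where the team names and the health check are placeholders I've invented:

```python
# Hypothetical layered rollout: apply the change to successive sets of
# teams, checking a health signal after each layer before expanding.
layers = [
    ["team-a"],                        # the pilot team
    ["team-b", "team-c"],              # a small second ring
    ["team-d", "team-e", "team-f"],    # the rest
]

def healthy(teams):
    # Stand-in for real data collection (surveys, metrics) on these teams.
    return True

rolled_out = []
for layer in layers:
    rolled_out.extend(layer)
    if not healthy(layer):
        # A fire at this layer is contained: only `rolled_out` is affected,
        # and you can pause, adapt the solution, and resume.
        break
else:
    print("rolled out to all layers:", rolled_out)
```

The design choice mirrors canary deployments in software: each layer limits the blast radius of that 1% risk the pilot never surfaced.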

If we skip data collection, I think we're effectively gambling. You may be solving the wrong problems, you may see problems shift to different areas instead of resolving, or you may be making things worse with no one willing to tell you. It might also pan out effortlessly and make you look like a mastermind. When it fails, however, it may fail spectacularly.

Grab your spreadsheet. Gather your data, even a little. It'll help you navigate forward.

R
