Dirty data, done dirt cheap. Sounds great, but the truth is, dirty data is actually quite expensive. Whether you’re making decisions based on erroneous data, missing opportunities because of what you don’t know, or simply wasting time cleaning up data, the costs associated with dirty data add up quickly, and the problem is only getting worse as data gathering becomes increasingly automated.
Sources vary on the exact number, but somewhere between 25% and 30% of the data you're using is erroneous or incomplete in some way. A lot of dirty data is attributable to human error, but the cause can be as simple as merging two data sets without a unique identifier, which creates duplicate entries.
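To see how a merge without a unique identifier goes wrong, here is a minimal sketch in plain Python. The data and field names ("name", "site", "ticket") are illustrative assumptions, not from any particular PSA:

```python
# Hypothetical sketch: joining two record sets on a field that is NOT a
# unique identifier (here, "name") silently duplicates rows.
customers = [
    {"name": "Acme", "site": "HQ"},
    {"name": "Acme", "site": "Branch"},  # same name, different site
]
tickets = [{"name": "Acme", "ticket": 101}]

# Naive join on "name": the one ticket matches both customer rows.
joined = [
    {**c, **t} for t in tickets for c in customers if c["name"] == t["name"]
]
print(len(joined))  # 2 rows produced from 1 ticket: duplicates introduced
```

Joining on a guaranteed-unique key (an account ID rather than a display name) avoids the fan-out; one ticket would match exactly one customer record.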
Whatever the cause, if you're relying on data to make your decisions, you want it as clean as possible. That dashboard with the fancy graphs you're using to help make sense of your business? All of those outputs depend on clean data as inputs.
For a small or mid-sized business like an MSP, cleaning data is probably a manual process. With uncertain ROI, dirty data often just gets left alone, in the hope that it won't really matter. But of course, it does.
If your PSA is the source of truth, then anything to which the PSA connects is going to inherit that bad data along the way. If it goes into your documentation system, you can end up with a process repeated incorrectly 100 times over. If it goes into your billing portal, you can find yourself leaving money on the table. And when bad data gets to senior management, it can affect the quality of your strategic and tactical decision making.
This is where the 1-10-100 rule comes into play. If it costs $1 to verify a record as it's entered, it costs $10 to remediate it later, and upwards of $100 if you make decisions based on that bad record. Once you know how much bad data you have, you can start putting a dollar value on cleaning it up.
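As a rough back-of-the-envelope sketch, the rule can be turned into a cost estimate. The record count and dirty-data rate below are illustrative assumptions (the rate is just the midpoint of the 25-30% range cited earlier):

```python
# Hypothetical estimate using the 1-10-100 rule: per-record cost to
# verify up front ($1), remediate later ($10), or absorb as the cost
# of a bad decision ($100). All inputs here are illustrative.
COSTS = {"prevent": 1, "remediate": 10, "failure": 100}

def dirty_data_cost(total_records, dirty_rate, stage):
    """Rough dollar cost of dirty records handled at a given stage."""
    return total_records * dirty_rate * COSTS[stage]

records = 10_000
rate = 0.27  # assumed midpoint of the 25-30% estimate

print(dirty_data_cost(records, rate, "prevent"))    # 2700.0
print(dirty_data_cost(records, rate, "remediate"))  # 27000.0
```

The point of the exercise isn't precision; it's that the gap between catching bad data at entry and catching it downstream is an order of magnitude at each stage.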
Why are we talking about dirty data? Drop us an email and we’ll let you know what we’re up to. The hint’s in the first line of this post.