It means accepting that data is inherently limited, flawed, and biased, and letting that free us to focus on meaningful, practical insights rather than obsessing over the exhaustive accuracy of "perfect data".
General or related thoughts on the subject:
- (2024-10-25) Data Modelling Meetup
- (2023-08-01) Summer Data Fest in Bratislava
- Data Talk – Dataismus
- DataCast – Lukáš Uhl
Especially when you consider the following from here:
The data of a mid-sized B2B SaaS product simply doesn't have the potential energy of Google's search histories, or of Amazon's browsing logs. If the latter examples are the new oil, the former is a new peat bog. No matter how good the tools are that clean and analyze it, how skilled the engineers are who are working on it, or how mature the culture is around it, it'll never burn as hot or as bright…
We assume that there are diamonds buried in our rough data, if only we clean it properly, analyze it effectively, or stuff it through the right YC startup's new tool.
But what if there aren’t? Or what if they’re buried so deep that they’re impractical to extract? What if some data, no matter how clean and complete it is, just isn’t valuable to the business that owns it?
At the end of the day, it might be about finding a balance between data dogmatism and data nihilism:
We don't fall prey to data dogmatism, where we believe the map always matches the territory, nor do we become data nihilists and think that none of it matters. Instead, we focus on tuning our sense of reality: what can be measured, what can't, and everything in between.
(Btw, worth noting that for what cannot be measured, Benn suggests we might get help from ChatGPT.)