What’s The Definition of Big Data? Who Cares?
By Bill Franks, Jun 14, 2012
It has been entertaining to see how so many people are arguing over how to define big data. There is always another nuance that can be suggested. There is always another potential exception to any rule that is offered. In the end, I don’t think the energy being put into these discussions delivers much tangible business value; it is largely an academic exercise. Let’s explore why.
The goal of analytics is to leverage data to make better business decisions. It is all about business value. Identifying data as “big” or not doesn’t add any business value. What organizations need to worry about is very simple: Is there a data source that isn’t currently being collected that has high potential value? If so, then it needs to be collected and analyzed. That’s all a business person should worry about. They need not care whether it is big, small, or something in between.
Let’s imagine a scenario where business and IT people come together in a large conference room to discuss a new data source. As part of the conversation, they reach an agreement that the new data source should (or should not) be considered big data. What has that done to help them move the ball forward? Nothing. What moves the ball forward is the business team agreeing that the new data is useful and worth analyzing. What moves the ball forward is the IT team deciding how best to make the data available based on the characteristics of the data. Progress is made with a focus on putting the data to work, not on semantics.
With that said, once I’ve decided that a data source is important, the characteristics of that data source can impact how I go about acquiring it and feeding it into my analytic processes. If the data is unusually big and/or unstructured, for example, I may need to leverage some techniques commonly associated with big data. However, that is a technical implementation consideration. The big decision as to whether the data was valuable enough to collect or not has nothing to do with what definitional bucket we might place the data source in.
Another common error is equating big data with the use of certain tools or techniques. However, the tools and techniques often apply more broadly than just for big data. For example, if I want to do sentiment analysis against all the social media commentary for a global organization, I may have quite a lot of data to deal with. I’ll also need some complex text analysis tools and sentiment algorithms. Now let’s assume I want to do a sentiment analysis on 10 comments about me personally. Guess what? I need the exact same text analysis tools and sentiment algorithms. I just don’t need them to scale to the same extent.
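The scaling point above can be illustrated with a toy sketch. The word lists and scoring function below are purely hypothetical, not anything from the article or a real sentiment library; they simply show that the same analysis code applies to 10 comments or 10 million, with only the execution strategy changing.

```python
# Illustrative lexicon-based sentiment scorer (hypothetical word lists).
POSITIVE = {"great", "love", "excellent", "good", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "poor", "awful"}

def sentiment(comment: str) -> int:
    """Score one comment: +1 per positive word, -1 per negative word."""
    words = comment.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# The same function handles a handful of comments about one person...
ten_comments = ["I love this product", "terrible support experience"]
small_scores = [sentiment(c) for c in ten_comments]

# ...and, unchanged, a corpus of millions. Only the surrounding plumbing
# (batching, distribution, parallelism) would differ at large scale --
# the analytic logic itself is identical.
```

The takeaway is that scale dictates the implementation around the algorithm, not the algorithm itself.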
What the above point leads to is that much of what is being associated with “big data” is actually a function of “different data”. Text data requires different tools and techniques. Semi-structured data requires different handling than traditional structured data. However, these data types require that different handling whether the volume is big or small.
For those responsible for the technical implementation of big data, the exercise of understanding what makes it different and how it might be defined does have some value. I am not suggesting that all efforts in this area are a waste of time. How can you develop a tool or technique to handle data if you don’t understand what it contains? I am simply suggesting that too much emphasis has been put on the topic for audiences, such as business users, who really don’t need to worry about it.
The next time somebody asks you how you define big data, or whether a certain data source should be considered big data, consider how you answer. Do you really need to have that discussion? Or do you need to change direction and focus the discussion on what the value of the data might be and how it can be leveraged for analysis? I believe you’ll usually make far more progress by taking the latter path.
About the author
Bill Franks is IIA’s Chief Analytics Officer, where he provides perspective on trends in the analytics and big data space and helps clients understand how IIA can support their efforts and improve analytics performance. His focus is on translating complex analytics into terms that business users can understand and working with organizations to implement their analytics effectively. His work has spanned many industries for companies ranging from Fortune 100 companies to small non-profits.
Franks is the author of the book Taming The Big Data Tidal Wave (John Wiley & Sons, Inc., April, 2012). In the book, he applies his two decades of experience working with clients on large-scale analytics initiatives to outline what it takes to succeed in today’s world of big data and analytics. Franks’ second book The Analytics Revolution (John Wiley & Sons, Inc., September, 2014) lays out how to move beyond using analytics to find important insights in data (both big and small) and into operationalizing those insights at scale to truly impact a business. He is an active speaker who has presented at dozens of events in recent years. His blog, Analytics Matters, addresses the transformation required to make analytics a core component of business decisions.
Franks earned a Bachelor’s degree in Applied Statistics from Virginia Tech and a Master’s degree in Applied Statistics from North Carolina State University. More information is available at www.bill-franks.com.