4 tips to take you beyond the big data hype cycle
As 2013 kicked off, Gartner analyst Svetlana Sicular noted in her blog that big data is sliding into the Trough of Disillusionment, the steep decline in the Gartner Hype Cycle that follows the Peak of Inflated Expectations. (If you’re not familiar with the Gartner Hype Cycle, check out the illustration on Svetlana’s blog.)
In my experience with big data, there’s no reason for disillusionment. Big data analysis can create huge amounts of value. As with most worthwhile pursuits, it takes work to unlock that value. In the last three years, as a member of the CIO staff at Intel, I’ve spent a big chunk of my time developing business intelligence and analytics solutions that have resulted in tremendous cost and time savings and substantially improved time to market.
Beyond my own personal experience, Gartner’s most recent Hype Cycle report seems to agree that there is in fact substance behind the hype: if you can stick it out past knowledge gathering and initial investment to actual deployment, you’ll move beyond disillusionment and start seeing results. In fact, many organizations are already finding value in big data and investing even more heavily in related projects for 2014.
However, the report also notes that 2013 is the year of experimentation and early deployment, which is why many may not be singing the praises of big data initiatives just yet.
If you find yourself in this stage, there’s no reason to despair. Here are four tips for steering clear of the ‘trough of disillusionment’ and deriving value from your big data implementation.
Think even bigger. Envision a larger, more comprehensive model of business activity and figure out how you can populate it from as many data sources as possible. Then you can see the big picture. After you envision what infrastructure you need to support data at that scale, ask yourself whether you could increase your data by a factor of 10 or more and still use the same infrastructure.
This is what Oregon Health & Science University (OHSU) is doing on a big data project to speed up analysis of human genomic profiles, which could help with creating personalized cancer treatments as well as supporting many other types of scientific breakthroughs. At roughly a terabyte of data per patient, multiplied by potentially millions of patients, OHSU and its technology partners are developing infrastructure to handle the massive amount of data involved in sequencing an individual’s genome and tracking changes over time. With breakthroughs in big data processing, the cost of this once-elite sequencing could come down to as low as $1,000 per person, which means demand will skyrocket. And when demand skyrockets, so will the data.
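To make the scale concrete, here is a back-of-envelope capacity sketch. The terabyte-per-patient figure comes from the article; the patient counts are hypothetical assumptions chosen only to illustrate the factor-of-10 question above.

```python
# Illustrative capacity estimate: raw genomic storage at ~1 TB per patient
# (per the article). Patient counts below are hypothetical assumptions.
PER_PATIENT_TB = 1.0

def storage_needed_tb(patients: int, per_patient_tb: float = PER_PATIENT_TB) -> float:
    """Total raw storage, in terabytes, for a given patient count."""
    return patients * per_patient_tb

# Walk the demand curve up by factors of 10 and convert to petabytes.
for patients in (1_000, 100_000, 1_000_000, 10_000_000):
    petabytes = storage_needed_tb(patients) / 1024
    print(f"{patients:>10,} patients -> {petabytes:,.1f} PB raw storage")
```

Even before replication, indexing, or intermediate analysis data, a million patients at this rate implies roughly an exabyte-class problem, which is why the infrastructure question has to be asked up front.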
Find relevant data for the business. Learn from line of business leaders what their challenges are, what’s important to them, and what they need to know to increase their business impact. Then search for data to see if you can help them solve their business problems. That’s exactly what happened with Intel’s internal big data initiatives. We were asked to help the sales team focus on which resellers to engage, when, and with what products. In 2012, the results of this project drove an estimated $20 million in new revenue and opportunities, with more expected in 2013.
Be flexible. We are in a phase of rapid innovation; this isn’t like implementing enterprise resource planning. From a technology standpoint, you must be fluid, flexible, and ready to move to a different solution if the need arises. For example, the database architecture built with Pecan Street Inc., a nonprofit group of universities, technology companies, and utility providers, to collect “smart grid” energy data in Austin, Texas, is now on its third iteration.
By Ron Kasabian, Intel