Much is made today of the concept of "Big Data." As with most new terms related to information technology, there is no standard definition. Wikipedia defines Big Data as "data sets whose size is beyond the ability of commonly used software tools to capture, manage, and process the data within a tolerable elapsed time."
Perhaps the most interesting part of that definition is the notion of a "tolerable elapsed time" for processing the data. What exactly does this mean? "Tolerable" is inherently subjective. During an emergency, for example, microseconds can seem like hours; if the results of a process are vitally important, we may be willing to wait longer for them. The time required to process the data also depends on the processes being run against it and, of course, on the amount of processing power available. Today's desktop PCs would have been considered supercomputers 25 years ago. By any measure, this is a subjective criterion.
This kind of analysis is an interesting intellectual exercise, but why are organizations interested in the Big Data concept? Because the phenomenon is very real. The amount of information available to us today is vast: ninety percent of the data in the world was created within the last two years, and 2.2 million terabytes of new information are created every day. Organizations have invested billions of dollars in technology to create, capture, and store this information, all with the intent of giving employees, managers, and executives the information they need to make the best decisions.