What is Hadoop?
Hadoop is the most popular system for doing "Big Data": analyzing large data sets using many communicating computers in a datacenter. Really a suite of related programs, Hadoop has become popular because it hides the complexities of parallel processing, making it easy for programmers to harness the power of thousands of computers. Hadoop includes systems that implement basic technologies like distributed data storage and job scheduling, along with a growing roster of user-level systems that help programmers analyze their data more easily in a variety of programming languages. Even as it evolves, Hadoop continues to be the world's most widely used system for doing Big Data.
The Hadoop system, now hosted at the Apache Software Foundation (hadoop.apache.org), consists of about a dozen top-level projects. It began in 2006 as an effort to create an open-source implementation of MapReduce, Google's system for exploiting parallelism in its large data centers. Hadoop MapReduce is still at the center of the Hadoop "ecosystem," but compared to the Hadoop of 2006, today's Hadoop is more general, more efficient, and easier to use for a variety of applications. Among the systems in the ecosystem are Hive, which provides an SQL-like database querying facility for massive, distributed data; Pig, which uses its own language to simplify the construction of MapReduce "pipelines"; and Spark, which leverages the Hadoop ecosystem to provide MapReduce-like parallelism at greatly reduced cost.
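To make the MapReduce model concrete, here is a toy, single-machine sketch of the classic word-count job. The function names (`map_phase`, `shuffle`, `reduce_phase`) and sample data are illustrative only, not part of any Hadoop API; Hadoop's contribution is running each phase in parallel across a cluster and moving the data between machines for you.

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (key, value) pair for every word in every document.
    # Hadoop runs many mappers in parallel, one per chunk of input.
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    # Shuffle: group all values by key. In a real cluster this step
    # moves data across the network so each key lands on one machine.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine each key's values into a final result.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data big cluster", "big data"]
counts = reduce_phase(shuffle(map_phase(docs)))
# counts == {'big': 3, 'data': 2, 'cluster': 1}
```

The programmer writes only the map and reduce logic; partitioning the input, scheduling the work, and recovering from machine failures are Hadoop's job.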
Today, Hadoop is used by countless companies and universities, either in their own data centers or on commercial systems like Amazon's EC2 or Google's GCP. Financial companies use Hadoop to analyze stock purchases; drug companies use it to explore drug interactions computationally; scientists at NASA use it to process massive streams of data from satellites. Its popularity comes from its simplicity, and from the growing suite of tools supporting not only basic parallel processing, but also applications like data mining and data visualization.
Increasingly, Python and R are being deployed within the Hadoop parallelism framework; the simplicity of using dynamic languages and the efficiency of parallel computers make this a great solution for processing huge data sets.
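One common route for dynamic languages is Hadoop Streaming, which pipes data through ordinary programs on stdin/stdout, so the mapper and reducer can be written in any language. The sketch below, with hypothetical sample data, computes each stock symbol's maximum trade price; in a real job the two functions would live in separate mapper and reducer scripts, and the local run at the bottom only imitates the sorted, tab-separated stream Hadoop would feed the reducer.

```python
from itertools import groupby

def mapper(lines):
    # Emit "symbol\tprice" for each trade record of the form "symbol,price".
    # Hadoop Streaming treats the text before the first tab as the key.
    for line in lines:
        symbol, price = line.split(",")
        yield f"{symbol}\t{price}"

def reducer(sorted_lines):
    # Hadoop sorts mapper output by key before the reducer sees it,
    # so consecutive lines with the same symbol form one group.
    for symbol, group in groupby(sorted_lines, key=lambda l: l.split("\t")[0]):
        top = max(float(l.split("\t")[1]) for l in group)
        yield f"{symbol}\t{top}"

if __name__ == "__main__":
    # Local dry run of the same pipeline Hadoop would distribute.
    trades = ["AAPL,101.5", "GOOG,98.0", "AAPL,99.2"]
    for out in reducer(sorted(mapper(trades))):
        print(out)
```

On a cluster, the same scripts would be submitted with Hadoop's streaming jar (roughly `hadoop jar hadoop-streaming-*.jar -mapper mapper.py -reducer reducer.py -input ... -output ...`), and Hadoop handles the splitting, sorting, and distribution.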