What is Hadoop?

Posted on Oct 9, 2015

Hadoop is the most popular system for doing "Big Data": analyzing large data sets using many communicating computers in a datacenter.  Really a suite of related programs, Hadoop has become popular because it hides the complexities of parallel processing, making it easy for programmers to harness the power of thousands of machines.  Hadoop includes systems that implement basic technologies like distributed data storage and job scheduling, along with a growing roster of user-level systems that help programmers analyze their data more easily in a variety of programming languages.  Even as it evolves, Hadoop remains the world's most popular platform for Big Data.

The Hadoop system, now hosted at the Apache Software Foundation (hadoop.apache.org), consists of about a dozen top-level projects.  It began in 2006 as an effort to create an open-source implementation of MapReduce, Google's system for exploiting parallelism in its large data centers.  Hadoop MapReduce is still at the center of the Hadoop "ecosystem," but compared with the Hadoop of 2006, today's Hadoop is more general, more efficient, and easier to use for a variety of applications.  Among the systems in the ecosystem are Hive, which provides an SQL-like database querying facility for massive, distributed data; Pig, which uses its own language to simplify the construction of MapReduce "pipelines"; and Spark, which leverages the Hadoop ecosystem to provide MapReduce-like parallelism at greatly reduced cost.
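The MapReduce model at the heart of that ecosystem is easiest to see in the classic word-count example. The sketch below simulates the three phases of a MapReduce job — map, shuffle, reduce — in plain Python on an in-memory list; on a real cluster, Hadoop would run the map and reduce phases in parallel across many machines and perform the shuffle over the network.

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word of every input line
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle_phase(pairs):
    # Shuffle: group all emitted values by key
    # (Hadoop does this automatically between the map and reduce phases)
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts collected for each word
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["Hadoop hides the complexity", "Hadoop scales to thousands of machines"]
counts = reduce_phase(shuffle_phase(map_phase(lines)))
print(counts["hadoop"])  # 2
```

The programmer supplies only the map and reduce functions; everything else — distribution, grouping, fault tolerance — is the framework's job, which is exactly the complexity Hadoop hides.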

Today, Hadoop is used by countless companies and universities, either in their own data centers or on commercial systems like Amazon's EC2 or Google's GCP.  Financial companies use Hadoop to analyze stock purchases; drug companies use it to explore drug interactions computationally; scientists at NASA use it to process massive streams of data from satellites.  Its popularity comes from its simplicity, and from the growing suite of tools supporting not only basic parallel processing, but also applications like data mining and data visualization.

Increasingly, Python and R are being deployed within the Hadoop parallelism framework; the simplicity of dynamic languages and the efficiency of parallel computers make this a great combination for processing huge data sets.
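One common route for running Python under Hadoop is Hadoop Streaming, which pipes records through ordinary scripts via standard input and output. The sketch below — with illustrative record formats and stock symbols, not examples from the article — shows a mapper/reducer pair that averages trade prices per symbol, with the pipeline simulated locally; under Hadoop Streaming, the framework would sort the mapper's output by key before handing it to the reducer.

```python
from itertools import groupby

def mapper(lines):
    # Streaming mapper: parse "symbol,price" records and emit "symbol\tprice"
    for line in lines:
        symbol, price = line.split(",")
        yield f"{symbol}\t{price}"

def reducer(sorted_lines):
    # Streaming reducer: keys arrive sorted, so all lines for one symbol
    # are consecutive and can be averaged in a single pass
    keyed = (line.split("\t") for line in sorted_lines)
    for symbol, group in groupby(keyed, key=lambda kv: kv[0]):
        prices = [float(p) for _, p in group]
        yield f"{symbol}\t{sum(prices) / len(prices):.2f}"

# Simulate the pipeline locally: map, sort (Hadoop's shuffle), reduce
trades = ["IBM,120.0", "AAPL,95.0", "IBM,130.0"]
result = dict(line.split("\t") for line in reducer(sorted(mapper(trades))))
print(result["IBM"])  # "125.00"
```

Because the scripts read and write plain text streams, the same code runs unchanged on a laptop for testing and on a cluster at scale — part of why dynamic languages fit so naturally into the Hadoop framework.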
