Indeed Web Scraping: Data Science Job Market Outlook

Posted on May 20, 2018

The moment I heard the second project would involve web scraping, I knew which website I wanted to use for my first spider crawl: Indeed. As one of the most popular websites for job seekers, with detailed requirements for each position and new postings every few seconds, it seemed like a great choice for researching the data scientist job market.


The Scraping Process:

The tool I used for this project is Scrapy, and the search keyword I used is "Data Science". Some issues I encountered while building my spider were the following:

First, the sponsored job postings are often duplicated. Indeed shows 6 sponsored job postings on every page, and they reappear randomly across pages. Luckily, they are linked using a different URL style from the regular postings, so I constructed my spider to collect only the 10 regular postings per page, avoiding duplicates.
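A minimal sketch of that filter: the exact URL patterns below (`/pagead/` for sponsored links, `/rc/clk` for organic ones) are an assumption about Indeed's markup at the time, not something stated in the post.

```python
import re

# Hypothetical URL patterns: sponsored postings link through /pagead/,
# regular (organic) postings through /rc/clk. Adjust to the site's actual markup.
SPONSORED = re.compile(r"/pagead/")

def regular_postings(urls):
    """Keep only the organic posting URLs on a page, dropping sponsored duplicates."""
    return [u for u in urls if not SPONSORED.search(u)]

urls = [
    "https://www.indeed.com/rc/clk?jk=abc123",
    "https://www.indeed.com/pagead/clk?mo=r&ad=xyz",
    "https://www.indeed.com/rc/clk?jk=def456",
]
print(regular_postings(urls))  # the two /rc/clk URLs
```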

Second, the limit of 100 pages per search posed a challenge. Although Indeed shows over 120k results for my search, I was limited to 100 pages (around 1K results) per search. To get as many postings as possible, I ran the search for 14 top hiring cities, collecting around 1K postings per city. Although this is only about 11% of the total postings, it should be a good sample of the whole posting population.
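The per-city pagination above can be sketched as a start-URL generator. The `&start=` step size and parameter names assume Indeed-style pagination (10 results per page); the three-city list stands in for the 14 cities actually used.

```python
# Sketch: build one search URL per results page, per city, assuming
# Indeed-style pagination where &start= advances in steps of 10.
BASE = "https://www.indeed.com/jobs?q=data+scientist&l={city}&start={start}"

cities = ["New York, NY", "San Francisco, CA", "Seattle, WA"]  # ... 14 in total

def start_urls(cities, pages_per_city=100, per_page=10):
    return [
        BASE.format(city=city.replace(" ", "+"), start=page * per_page)
        for city in cities
        for page in range(pages_per_city)
    ]

urls = start_urls(cities)
print(len(urls))  # 3 cities x 100 pages = 300 URLs
```

In a Scrapy spider this list would typically feed the `start_urls` attribute or `start_requests()`.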



Tech companies are the obvious top hirers by number of job postings: Amazon, Google, Facebook, and Microsoft comprised about 1% of my total data set. I will explore their specific requirements later.

To extract information from each job posting, I constructed a list of keywords using regex and checked whether each one appears in a given job description. For those keywords, I also included their abbreviations (Artificial Intelligence and AI) and plural forms (Decision Tree and Decision Trees). Below are my findings:

The most popular programming tool is Python, followed closely by SQL. AWS (Amazon Web Services) ranks 6th here, probably because Amazon is the top hiring company in this data set. Excel appears to still be widely used by companies for basic data manipulation. Big Data tools like Hadoop and Spark are also in high demand.

Below is a bar graph of the most popular skill sets. Machine learning is the most mentioned buzzword, which is not surprising since machine learning is the big idea covering most of data science. Visualization is also mentioned fairly often, which shows that in addition to modeling, the ability to communicate findings to an audience is also valued.

The two graphs below show the popularity rank of toolkits and skill sets within each state. States like NY/CA/WA show more diversified needs, while some states show more particular needs for specific programming languages and skill sets.

Do degrees matter when looking for data scientist jobs? The answer is yes. Sixty percent of the job postings require a Bachelor's or Master's degree, and twenty percent require a Ph.D.

How much are data scientists paid? The box plot below is based on about 500 job postings that included a salary range. I extracted any salary with at least 5 digits (which indicates a yearly salary). This is a very small portion of my dataset but should still give us an idea.
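The digit-count filter can be sketched like this; the exact regex and cutoff are assumptions, since the post doesn't show its extraction code.

```python
import re

# Pull dollar amounts out of free-text salary fields and keep only
# 5+ digit figures, i.e. yearly salaries rather than hourly/monthly rates.
MONEY = re.compile(r"\$([\d,]+)")

def yearly_salaries(text):
    amounts = [int(m.replace(",", "")) for m in MONEY.findall(text)]
    return [a for a in amounts if a >= 10_000]  # at least 5 digits

print(yearly_salaries("$120,000 - $150,000 a year"))  # [120000, 150000]
print(yearly_salaries("$45 an hour"))                 # []
```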

Knowing that tech companies are the top hirers, I wanted to find out more about their specific job requirements. I exported all job descriptions from Apple, Amazon, Microsoft, Google, and Facebook, and used RStudio to construct a comparison word cloud. After removing a lot of stop words, the key words start to rise to the surface. This is my favorite graph so far, since I learned so much about each company just by googling the buzzwords in its word cloud.

Below is a commonality cloud for the above 5 companies. We can see that most of the popular words from our bar graphs appear here again, including machine learning, Python, SQL, degrees, etc.

Topic Modeling using Gensim and Visualization with pyLDAvis

LDAvis maps topic similarity by calculating a semantic distance between topics. Finding the right parameters for LDA is an art in itself. For this project, I mostly used the default parameters and trained two models, with 10 and 20 topics respectively. Below is a 2D illustration with LDAvis for the 20-topic model. Browsing through each topic already yields some interesting readings.

Topic 1 is about marketing. Topic 8 is about company culture. Topic 10 is about internships (actually a big portion of my dataset); a lot of company names showed up in its word ranking. Topic 16 is the biggest topic; it includes most of the programming tools and skill sets related to data science jobs.

The model here is mostly for exploration and practice. A lot of refinement work, such as lemmatizing tokens, computing bigrams (so "machine learning" can be counted as one token), and optimizing the number of topics, still needs to be applied.
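The bigram step mentioned above can be sketched in plain Python; in practice a library routine such as gensim's phrase detection would do this with frequency thresholds, but the core idea is just joining adjacent tokens.

```python
from collections import Counter

def bigrams(tokens):
    """Join adjacent tokens so phrases like 'machine learning' count as one unit."""
    return ["_".join(pair) for pair in zip(tokens, tokens[1:])]

doc = "experience with machine learning and deep learning".split()
counts = Counter(bigrams(doc))
print(counts["machine_learning"])  # 1
```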





