Data Analysis on IT Jobs Demand
The skills the author demonstrated here can be learned through the Data Science with Machine Learning bootcamp at NYC Data Science Academy.
Project Description:
This project presents data on the current demand for IT jobs in the USA.
By scraping IT job posting information from the Monster Jobs website, statistical data was collected and analyzed to provide the following insights:
- Which states in the USA have a high demand for IT resources?
- What are the trending skills and job titles in those states?
- Which cities within a state are actually hiring for those skills?
- In addition, you can produce a statistical summary of two or more IT skills to compare their current demand in your selected state and city.
- You can also find links to the actual job posts for your selected categories or skills.
Here is the link to the Shiny App: MONSTER IT Jobs Statistic by Huy Tran. Thank you for reading!
MONSTER IT Jobs Data Statistic App - Introduction:
First, enter your job title or titles of interest (separated by commas) and click the "Search" button.
For example, in the screenshot below, "Oracle" and "SQL Server" are entered.
- The US heat map shows the demand for Oracle and SQL Server skill sets across the states.
- After selecting the state of California, the state map on the right-hand side shows which cities in that state are actually looking for Oracle and SQL Server.
[Screenshot: US heat map of demand by state and the California city map]
The following bar charts show the top hiring states and cities for the selected job titles/skill sets.
[Screenshot: bar charts of top hiring states and cities]
Finally, after selecting a time window, the following charts show the number of job postings over that period, along with links to the actual job posts.
[Screenshot: job postings over the selected time window and links to the job posts]
Technical Briefing - Data Web Scraping Tactics:
1. Data from the Monster Jobs website:
- Job titles information
- Location, time, and number of postings information
2. Using Python for web scraping:
- Packages Used:
- BeautifulSoup: to parse HTML pages and extract data from them.
- pandas: to store and process data frames in memory.
- re: to compile regular expressions for text searching.
- urllib2: to download web pages and handle errors during the download process.
- pickle, os: to save data to local files and restore it to memory for later processing.
- Tactics:
- To reduce the risk of disconnections or timeouts during the process, the target HTML pages were downloaded and saved to local files for later processing.
- Even though the process ran on a Mac with 16 GB of memory, the system often hung, and hours of processing could be lost. For example, this project required downloading and processing about 10,000 pages, so the pages were split into smaller chunks and saved to local disk before being loaded back into memory for analysis and extraction; a rough sketch of this chunking step is shown below.
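This chunking tactic can be sketched as follows. This is a minimal illustration, not the author's original code; the chunk size, directory name, and function names are placeholder assumptions.

```python
import os
import pickle

CHUNK_SIZE = 500            # assumed chunk size; the post does not state the actual value
CHUNK_DIR = "page_chunks"   # placeholder directory name

def save_in_chunks(pages, chunk_size=CHUNK_SIZE, chunk_dir=CHUNK_DIR):
    """Split a list of downloaded HTML pages into smaller chunks and
    pickle each chunk to disk, so a crash only loses the current chunk."""
    if not os.path.exists(chunk_dir):
        os.makedirs(chunk_dir)
    for i in range(0, len(pages), chunk_size):
        chunk = pages[i:i + chunk_size]
        path = os.path.join(chunk_dir, "chunk_{0:04d}.pkl".format(i // chunk_size))
        with open(path, "wb") as f:
            pickle.dump(chunk, f)

def load_chunks(chunk_dir=CHUNK_DIR):
    """Yield the pickled chunks one at a time, so only one chunk
    needs to be held in memory while extracting data."""
    for name in sorted(os.listdir(chunk_dir)):
        if name.endswith(".pkl"):
            with open(os.path.join(chunk_dir, name), "rb") as f:
                yield pickle.load(f)
```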
- Python code snippets:
Web page download function:
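The original snippet appears only as an image in the post, so here is a minimal sketch of such a download function using urllib2, as listed above; the function name, timeout, and file-naming scheme are my own assumptions.

```python
import os
import urllib2  # Python 2 library used in the project; urllib.request plays the same role in Python 3

def download_page(url, save_dir, filename, timeout=30):
    """Download one HTML page and save it to a local file so it can be
    re-parsed later without requesting it from the website again.
    Returns the raw HTML string, or None if the request fails."""
    try:
        response = urllib2.urlopen(url, timeout=timeout)
        html = response.read()
    except (urllib2.HTTPError, urllib2.URLError) as err:
        # Log and skip the page instead of aborting the whole run
        print("Failed to download {0}: {1}".format(url, err))
        return None

    if not os.path.exists(save_dir):
        os.makedirs(save_dir)
    with open(os.path.join(save_dir, filename), "wb") as f:
        f.write(html)
    return html
```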
Extracting the number of job postings:
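A minimal sketch of this step with BeautifulSoup and re follows; the h2 tag, the "page-title" class, and the "N jobs" text pattern are illustrative assumptions about the results page, not the exact markup parsed in the project.

```python
import re
from bs4 import BeautifulSoup

# Compile the pattern once, in line with the use of `re` noted above
POSTING_COUNT_RE = re.compile(r"([\d,]+)\s+jobs?", re.IGNORECASE)

def extract_posting_count(html):
    """Return the total number of job postings reported on a search-results page."""
    soup = BeautifulSoup(html, "html.parser")
    # Hypothetical element; the real tag and class come from Monster's markup
    header = soup.find("h2", class_="page-title")
    text = header.get_text() if header is not None else soup.get_text()
    match = POSTING_COUNT_RE.search(text)
    return int(match.group(1).replace(",", "")) if match else 0
```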
Extracting the state and city information:
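A sketch of pulling the city and state out of a location string; it assumes locations follow the "City, ST" format, which is not necessarily how the original code handled every case.

```python
import re

# Matches "City, ST" style locations, e.g. "San Jose, CA"
LOCATION_RE = re.compile(r"^\s*(?P<city>[^,]+),\s*(?P<state>[A-Z]{2})\b")

def extract_city_state(location_text):
    """Split a posting's location string into (city, state).
    Returns (None, None) when the text does not match the expected format."""
    match = LOCATION_RE.match(location_text)
    if match:
        return match.group("city").strip(), match.group("state")
    return None, None

# Example: extract_city_state("San Jose, CA") -> ("San Jose", "CA")
```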
Extracting posting time information:
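Assuming posting dates appear as relative text such as "Posted 5 days ago", a minimal sketch of converting that to an absolute date could look like this; the exact wording patterns are assumptions.

```python
import re
from datetime import datetime, timedelta

# Relative wording such as "Posted 5 days ago" or "Posted 3 hours ago"
RELATIVE_TIME_RE = re.compile(r"Posted\s+(\d+)\s+(hour|day)s?\s+ago", re.IGNORECASE)

def extract_posting_date(posted_text, scrape_date=None):
    """Convert a relative posting time into an absolute date,
    counted back from the day the page was scraped."""
    scrape_date = scrape_date or datetime.now()
    if re.search(r"Posted\s+today", posted_text, re.IGNORECASE):
        return scrape_date.date()
    match = RELATIVE_TIME_RE.search(posted_text)
    if not match:
        return None
    amount, unit = int(match.group(1)), match.group(2).lower()
    delta = timedelta(hours=amount) if unit == "hour" else timedelta(days=amount)
    return (scrape_date - delta).date()
```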
3. Data Frame Snapshot:
After extracting the data from the website with the Python programs, the pandas data frames are exported to CSV files for the Shiny app.
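As a minimal sketch of this export step, each frame can be written out with pandas' to_csv; the function and file names here are illustrative, not the project's actual names.

```python
import os
import pandas as pd

def export_for_shiny(job_titles_df, locations_df, postings_df, out_dir="."):
    """Write the three pandas data frames to CSV files that the Shiny app reads."""
    job_titles_df.to_csv(os.path.join(out_dir, "job_titles.csv"), index=False)
    locations_df.to_csv(os.path.join(out_dir, "locations.csv"), index=False)
    postings_df.to_csv(os.path.join(out_dir, "postings.csv"), index=False)
```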
Job Titles data frame - 882 entries:
[Snapshot: Job Titles data frame]
Location data frame - 7,582 entries:
[Snapshot: Location data frame]
Time and Job posting data frame - 94,682 entries:
[Snapshot: Time and Job posting data frame]
Thank you for reading! Please access the Shiny App at: MONSTER IT Jobs Statistic.
Questions? Email huytquoc@gmail.com