Parsing Words on the Web using Python

Posted on Jul 31, 2015

Eszter D. Schoell, PhD

While searching the web for tools for non-profit organizations, it became irksome to click through links on a webpage only to realize the page was not well maintained. In addition, pages were sometimes hard to classify - would this page be useful for me or not? Searching the web can be a time-consuming enterprise, and I needed some quick metrics to help me decide fast whether a webpage was worth my time.

I therefore wanted a tool that could give me an indication of how well the page was maintained and what the main points of the page were. This led me to write a Python class that could easily check any website and return the percentages of link status codes, as well as the frequency of words.

To assess the status of a page, I scraped its links and calculated the percentage of each type of status code. HTTP status codes fall into classes from 100 to 500 and indicate, for example, whether a page is okay, cannot be found, or produced a server error (HTTP response codes). The percentages of the different code classes can be used as an indicator of whether the page is up-to-date and therefore possibly a good source of information.
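As a rough illustration of this idea (this is a minimal sketch using the standard library, not the author's class), a link can be classified into the same coarse status classes that appear in the histogram below; the function names `label` and `check_link` are my own:

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def label(code):
    """Map an HTTP status code to a coarse class."""
    if code < 200:
        return "info"            # 1xx informational
    if code < 300:
        return "ok"              # 2xx success
    if code < 400:
        return "redirected"      # 3xx redirection
    if code < 500:
        return "client error"    # 4xx, e.g. 404 Not Found
    return "server error"        # 5xx

def check_link(url):
    """Fetch only the headers of a URL and classify its status code."""
    try:
        resp = urlopen(Request(url, method="HEAD"), timeout=5)
        return label(resp.status)
    except HTTPError as e:
        return label(e.code)     # 4xx/5xx responses raise HTTPError
    except URLError:
        return "broken"          # DNS failure, timeout, refused connection
```

Using `HEAD` requests keeps the check cheap, since only headers are transferred, not the page body.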

For an example webpage, I ran the following code:

# Create an object for the website (method names below are illustrative)
s = wsite('')
# Pull in all text from the site. This method also checks which user agent to use.
s.get_text()
# Pull in all links on the page and remove duplicates.
s.get_links()

To determine whether the page is well maintained, the Python class collects the status of every link on the page, plots a histogram of the statuses, and saves a PNG of the histogram to the working directory. Passing the parameter 'yes' to the method also returns the output to the console for further analysis (the default is 'no').
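The reporting step can be sketched as follows (again an assumption-laden stand-in for the author's class, using `collections.Counter` and matplotlib; function and file names are mine):

```python
from collections import Counter

def status_percentages(labels):
    """Percentage of links in each status class."""
    counts = Counter(labels)
    total = len(labels)
    return {k: round(100 * v / total, 1) for k, v in counts.items()}

def plot_statuses(labels, outfile="link_statuses.png"):
    """Bar chart of the class percentages, saved to the working directory."""
    import matplotlib.pyplot as plt  # imported lazily; plotting is optional
    pct = status_percentages(labels)
    plt.bar(list(pct.keys()), list(pct.values()))
    plt.ylabel("% of links")
    plt.title(f"n = {len(labels)} links checked")
    plt.savefig(outfile)
```

Annotating the plot with the total number of links checked, as the post does, keeps the percentages honest: 50% broken means something very different over 4 links than over 400.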


Fig.1: Info = 100s; Ok = 200s; Redirected = 300s; Client Error = 400s; Server Error = 500s; Broken = unreachable

This particular page does not seem to be well maintained - only 30.4% of the links go directly to the linked webpage, while 6.5% (about 9 links) no longer work. To help put the percentages into perspective, the total number of links checked is also printed on the graphic.

The Python class also includes a method to count all words on the site. The frequency of certain words may help with deciding on the usefulness of the webpage. In counting the words, either all words are counted or certain words are excluded. Currently, the script has a simple list of words to be excluded that might not hold much information, such as prepositions and 'the.' The number of words returned can be changed; the default is five.
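The word-counting step can be approximated like this (a self-contained sketch rather than the author's implementation; the stop list here is a small illustrative sample, not the script's actual exclusion list):

```python
import re
from collections import Counter

# A small illustrative list of low-information words to exclude.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "on", "for", "with"}

def top_words(text, n=5, exclude=True):
    """Return the n most frequent words, optionally skipping stopwords."""
    words = re.findall(r"[a-z']+", text.lower())
    if exclude:
        words = [w for w in words if w not in STOPWORDS]
    return Counter(words).most_common(n)
```

For example, `top_words("the cat and the dog saw the cat", n=2)` returns `[('cat', 2), ('dog', 1)]`, while with `exclude=False` the top word would be 'the'.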


Figure 2

Figure 3

As expected, the top 5 most frequent words out of all the words on the page do not indicate whether the page will be useful (Figure 2). However, after excluding prepositions and similar words, the second histogram (Figure 3) may be more informative.

Here is a quiz: match each histogram below to its website.

1. Website trying to explain government spending

2. Washington Post article about government spending

3. Financial Times article about GOP spending cuts


Figure A


Figure B


Figure C

Even though simply counting the words on a webpage is an extremely simple metric, it could be used for a quick first glance at whether the webpage is something you wish to investigate further.

Answers: 1:B, 2:C, 3:A
