Parsing Words on the Web using Python
Eszter D. Schoell, PhD
Published July 30, 2015
In searching the web for tools for non-profit organizations, it slowly became irksome to start clicking on links on a webpage, only to realize the page was not well maintained. In addition, pages were sometimes hard to classify - would this page be useful to me or not? Searching the web can be a time-consuming enterprise, and I needed some quick metrics to help me make a fast decision about whether a webpage was worth my time.
I therefore wanted a tool that could give me an indication of how well the page was maintained and what the main points of the page were. This led me to write a Python class that could easily check any website and return the percentages of link status codes, as well as the frequency of words.
For the status of the page, I scraped the links and calculated the percentage of each type of status code. HTTP status codes range from 100 to 599 and indicate, for example, whether the page is okay, has moved, or could not be found (HTTP response codes). The percentages of the different codes can be used as an indicator of whether the page is up-to-date and possibly a good source of information.
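As a rough sketch of this idea, status codes can be grouped by their leading digit and turned into percentages. The bucket names below are illustrative; the class described in this article may categorize codes differently.

```python
from collections import Counter

def status_class(code):
    """Map an HTTP status code to its broad category by leading digit."""
    buckets = {
        1: "informational",
        2: "success",
        3: "redirection",
        4: "client error",
        5: "server error",
    }
    return buckets.get(code // 100, "unknown")

def status_percentages(codes):
    """Return the percentage of links falling into each status class."""
    counts = Counter(status_class(c) for c in codes)
    total = len(codes)
    return {cls: 100.0 * n / total for cls, n in counts.items()}
```

For example, a page whose links return mostly 200s with a few 404s would show a large "success" share and a small "client error" share.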
For the webpage http://www.socialbrite.org/cause-organizations/, I ran the following code:
# Create object of website in python
s = wsite('http://www.socialbrite.org/cause-organizations/')

# Pull in all text from site. This method also checks which agent to use.
s.get_text()

# Pull in all links on page and remove duplicates
s.pullSites()
s.removeDuplicates()
In order to determine whether the page is well maintained, the Python class collects the status of every link on the page, plots a histogram of the statuses, and saves the histogram as a PNG in the working directory. Passing the parameter 'yes' to a method also returns the output to the console for further analysis (the default is 'no').
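The internals of the class are not shown in the article, but the status-collection step might look roughly like the sketch below, which uses only the standard library. The function names here are assumptions, not the actual methods of wsite.

```python
import urllib.request
import urllib.error

def link_status(url, timeout=10):
    """Return the HTTP status code for a single link, or None on failure."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code          # 4xx/5xx responses raise HTTPError
    except urllib.error.URLError:
        return None            # DNS failure, refused connection, etc.

def summarize(statuses):
    """Percentage of each status code among links that responded."""
    seen = [s for s in statuses if s is not None]
    return {code: 100.0 * seen.count(code) / len(seen)
            for code in set(seen)}
```

The summary dictionary is exactly what a histogram-plotting step would consume; unreachable links (None) can be reported separately.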
This particular page does not seem to be well maintained - only 30.4% of the links go directly to the linked webpage while 6.5% (about 9 links) no longer work. To help put the percentages into perspective, the total number of sites is also printed onto the graphic.
The Python class also includes a method to count all words on the site. The frequency of certain words may help with deciding on the usefulness of the webpage. In counting the words, either all words are counted or certain words are excluded. Currently, the script has a simple list of words to be excluded that might not hold much information, such as prepositions and 'the.' The number of words returned can be changed; the default is five.
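A minimal sketch of that word-count step is below. The stop-word list here is illustrative only; the article's class ships its own exclusion list, and the default of five most-frequent words is taken from the text above.

```python
import re
from collections import Counter

# Illustrative stop-word list; the actual class uses its own.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "for", "on", "with"}

def top_words(text, n=5, exclude=STOPWORDS):
    """Return the n most frequent words, skipping any excluded words."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in exclude)
    return counts.most_common(n)
```

Passing exclude=set() counts all words, matching the "count everything" mode described above.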
As expected, the top 5 most frequent words out of all the words on the webpage do not indicate whether the page will be useful (Figure 2). However, with prepositions excluded, the second histogram (Figure 3) may be more useful.
Following is a quiz: which histogram belongs to which website?
Even though counting the words on a webpage is an extremely simple metric, it can serve as a quick first glance at whether the webpage is worth investigating further.
Quiz answers: 1:B, 2:C, 3:A