How Xpath Plays Vital Role In Web Scraping Part 2

Posted on Oct 18, 2019
The skills I demoed here can be learned through the Data Science with Machine Learning bootcamp at NYC Data Science Academy.

You can read the first part of the post here: How Xpath Plays Vital Role In Web Scraping

 

Here is some more content on XPath, picking up where How Xpath Plays Vital Role In Web Scraping left off.

Let's dive into a real-world example: scraping the Amazon website for information about its Deals of the Day, which can be found at this URL. So navigate to the Amazon Deals of the Day page in Firefox and find the XPath selectors. Right-click on the deal you like and select "Inspect Element with Firebug":

[Screenshot: inspecting a deal with Firebug]

 

If you observe the image below closely, you can find the source of the deal's image and the name of the deal in the src and alt attributes, respectively.

[Screenshot: the img element showing the src and alt attributes of a deal]




So now let's write generic XPaths which gather the name and image source of the product (deal):

//img[@role="img"]/@src   ## for image source
//img[@role="img"]/@alt   ## for product name
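
As a minimal sketch of how these two expressions could be wired up with Scrapy's Selector (the deals URL and the role="img" markup are assumptions based on the page as inspected above, and Amazon's markup changes often):

import requests
from scrapy import Selector

# Assumed URL for the Deals of the Day page; substitute the one you inspected
html_code = requests.get('https://www.amazon.com/gp/goldbox').text
sel = Selector(text=html_code)

image_sources = sel.xpath('//img[@role="img"]/@src').extract()  # image source of each deal
product_names = sel.xpath('//img[@role="img"]/@alt').extract()  # name of each deal

for name, src in zip(product_names, image_sources):
    print name, '->', src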

In this post, I'll show you some tips we found valuable when using XPath in the trenches.

If you have an interest in Python and web scraping, you may have already played with the nice requests library to get the content of pages from the Web, and maybe you have toyed around with Scrapy selectors or lxml to make content extraction easier. Below, we are going to use both lxml and Scrapy selectors for HTML parsing.

 

Avoid using expressions like contains(.//text(), 'search text') in your XPath conditions. Use contains(., 'search text') instead.

Here is why: the expression .//text() yields a collection of text elements, i.e. a node-set. When a node-set is converted to a string, which happens when it is passed as an argument to a string function like contains() or starts-with(), you get the text of the first node only.

Scrapy code:

 

from scrapy import Selector
html_code = """<a href="#">Click here to go to the <strong>Next Page</strong></a>"""
sel = Selector(text=html_code)
xp = lambda x: sel.xpath(x).extract()         # Let's type this only once
print xp('//a//text()')                       # Take a peek at the node-set
[u'Click here to go to the ', u'Next Page']   # output of the above command
print xp('string(//a//text())')               # convert it to a string
[u'Click here to go to the ']                 # output of the above command
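
To see why this matters for the tip above, reuse the xp helper just defined: contains(.//text(), ...) compares against only the first text node, while contains(., ...) compares against the node's full string value (a sketch, following the same inline-output convention as above):

print xp("//a[contains(.//text(), 'Next Page')]")   # BAD: the node-set collapses to its first string
[]                                                  # output: 'Click here to go to the ' has no match
print xp("//a[contains(., 'Next Page')]")           # GOOD: uses the full string value of the a node
[u'<a href="#">Click here to go to the <strong>Next Page</strong></a>']   # output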

 

Let's do the same with lxml. You can implement the extraction with either lxml or a Scrapy selector, as the XPath expression itself is the same for both methods.

 

lxml code:

from lxml import html
html_code = """<a href="#">Click here to go to the <strong>Next Page</strong></a>"""
parsed_body = html.fromstring(html_code)          # Parse the text into a tree
print parsed_body.xpath('//a//text()')            # Take a peek at the node-set
['Click here to go to the ', 'Next Page']         # output
print parsed_body.xpath('string(//a//text())')    # convert it to a string
Click here to go to the                           # output (a plain string, not a list)

A node converted to a string, however, puts together the text of itself plus that of all its descendants:

 

>>> xp('//a[1]')   # selects the first a node
[u'<a href="#">Click here to go to the <strong>Next Page</strong></a>']

>>> xp('string(//a[1])')   # converts it to a string
[u'Click here to go to the Next Page']

Beware of the difference between //node[1] and (//node)[1]: //node[1] selects all the nodes occurring first under their respective parents, while (//node)[1] selects all the nodes in the document and then gets only the first of them.

 

from scrapy import Selector

html_code = """<ul class="list">
<li>1</li>
<li>2</li>
<li>3</li>
</ul>

<ul class="list">
<li>4</li>
<li>5</li>
<li>6</li>
</ul>"""

sel = Selector(text=html_code)
xp = lambda x: sel.xpath(x).extract()

xp("//li[1]")   # get all first LI elements under their respective parents

[u'<li>1</li>', u'<li>4</li>']

xp("(//li)[1]")   # get the first LI element in the whole document

[u'<li>1</li>']

xp("//ul/li[1]")   # get all first LI elements under an UL parent

[u'<li>1</li>', u'<li>4</li>']

xp("(//ul/li)[1]")   # get the first LI element under an UL parent in the document

[u'<li>1</li>']

 

Also,

//a[starts-with(@href, '#')][1] gets a collection of the local anchors that occur first under their respective parents, and (//a[starts-with(@href, '#')])[1] gets the first local anchor in the document.
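
Here is a quick sketch of that difference on a made-up snippet with anchors spread across two paragraphs:

from scrapy import Selector

html_code = """<p><a href="#a">a1</a><a href="#b">a2</a></p>
<p><a href="#c">a3</a></p>"""
sel = Selector(text=html_code)
xp = lambda x: sel.xpath(x).extract()

print xp("//a[starts-with(@href, '#')][1]")        # first matching anchor under each parent
[u'<a href="#a">a1</a>', u'<a href="#c">a3</a>']   # output
print xp("(//a[starts-with(@href, '#')])[1]")      # first matching anchor in the whole document
[u'<a href="#a">a1</a>']                           # output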

When selecting by class, be as specific as necessary.

If you want to select elements by a CSS class, the XPath way to do the same job is the rather verbose:

*[contains(concat(' ', normalize-space(@class), ' '), ' someclass ')]

 

Let's cook up some examples:

>>> sel = Selector(text='<p class="content-author">Someone</p><p class="content text-wrap">Some content</p>')

>>> xp = lambda x: sel.xpath(x).extract()

 

BAD: because there are multiple classes in the attribute

>>> xp("//*[@class='content']")

[]

 

BAD: gets more content than we need

>>> xp("//*[contains(@class, 'content')]")

[u'<p class="content-author">Someone</p>',
 u'<p class="content text-wrap">Some content</p>']

 

GOOD:

 

>>> xp("//*[contains(concat(' ', normalize-space(@class), ' '), ' content ')]")
[u'<p class="content text-wrap">Some content</p>']

And many times, you can just use a CSS selector instead, and even combine the two of them if needed:

ALSO GOOD:

>>> sel.css(".content").extract()
[u'<p class="content text-wrap">Some content</p>']

>>> sel.css('.content').xpath('@class').extract()
[u'content text-wrap']

 

Learn to use all the different axes.

It is handy to know how to use the axes; you can follow through these examples.

In particular, you should note that following and following-sibling are not the same thing; this is a common source of confusion. The same goes for preceding and preceding-sibling, and also for ancestor and parent.
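
As a minimal sketch of the following vs. following-sibling distinction, using a made-up snippet where two paragraphs live in one div and a third lives in another:

from scrapy import Selector

html_code = """<div><p id="p1">one</p><p id="p2">two</p></div>
<div><p id="p3">three</p></div>"""
sel = Selector(text=html_code)
xp = lambda x: sel.xpath(x).extract()

print xp('//p[@id="p1"]/following-sibling::p/text()')   # only siblings under the same parent
[u'two']                                                # output: p3 is in another div
print xp('//p[@id="p1"]/following::p/text()')           # every p after p1 in document order
[u'two', u'three']                                      # output

# Likewise, parent:: reaches only the one parent, while ancestor:: walks all the way up
print xp('//p[@id="p1"]/parent::*')     # just the enclosing div
print xp('//p[@id="p1"]/ancestor::*')   # html, body and the enclosing div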

 

Useful trick to get text content

Here is another XPath trick that you may use to get the interesting text content:

//*[not(self::script or self::style)]/text()[normalize-space(.)]

This excludes the content of script and style tags and also skips whitespace-only text nodes.
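
A minimal sketch of the trick on a made-up page that mixes visible text with script and style content:

from scrapy import Selector

html_code = """<html><head>
<style>body { color: red; }</style>
<script>var x = 1;</script>
</head><body>
<h1>Title</h1>
<p>Some text</p>
</body></html>"""
sel = Selector(text=html_code)

print sel.xpath('//*[not(self::script or self::style)]/text()[normalize-space(.)]').extract()
[u'Title', u'Some text']   # output: no CSS, no JS, no whitespace-only nodes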

Tools & Libraries Used:

Firefox
Firefox Inspect Element with Firebug
Scrapy : 1.1.1
Python : 2.7.12
Requests : 2.11.0

 

Read the original article here:

How Xpath Plays Vital Role In Web Scraping Part 2
