How XPath Plays a Vital Role in Web Scraping, Part 2
The skills I demoed here can be learned by taking the Data Science with Machine Learning bootcamp at NYC Data Science Academy.
You can read the first part of the post here: How Xpath Plays Vital Role In Web Scraping. What follows is a continuation of that post, with more XPath tips and a real-world example.
Let's dive into a real-world example: scraping the Amazon website for information about the deals of the day. Amazon's deals of the day can be found at this URL. So navigate to the Amazon deals-of-the-day page in Firefox and find the XPath selectors. Right-click on the deal you like and select "Inspect Element with Firebug":
If you observe the image below closely, you can find the source of the deal's image and the name of the deal in the src and alt attributes, respectively.

So now let's write a generic XPath which gathers the name and image source of the product (deal).
//img[@role='img']/@src ## for image source
//img[@role='img']/@alt ## for product name
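To put those two selectors to work, here is a minimal sketch using the requests library and lxml, both of which come up later in this post. The deals URL and the assumption that the role="img" markup survives a plain HTTP fetch are illustrative only; Amazon changes its markup frequently and renders much of this page with JavaScript, so treat this as a demonstration of the XPath usage rather than a production scraper.
import requests
from lxml import html

# Hypothetical deals-of-the-day URL; the live page may differ or require a real browser
url = 'https://www.amazon.com/gp/goldbox'
response = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})
parsed_body = html.fromstring(response.content)

image_sources = parsed_body.xpath('//img[@role="img"]/@src')  # image source of each deal
product_names = parsed_body.xpath('//img[@role="img"]/@alt')  # name of each deal

for name, src in zip(product_names, image_sources):
    print name, '->', src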
If you have an interest in Python and web scraping, you may have already played with the nice requests library to get the content of pages from the Web. Maybe you have toyed around with Scrapy selectors or lxml to make the content extraction easier. In this post, I'm going to show you some tips I found valuable when using XPath in the trenches, using both lxml and the Scrapy selector for HTML parsing.
Avoid using expressions like contains(.//text(), 'search text') in your XPath conditions. Use contains(., 'search text') instead.
Here is why: the expression .//text() yields a collection of text elements (a node-set). When a node-set is converted to a string, which happens when it is passed as an argument to a string function like contains() or starts-with(), the result is the text of the first element only.
Scrapy code:
from scrapy import Selector
html_code = """<a href="#">Click here to go to the <strong>Next Page</strong></a>"""
sel = Selector(text=html_code)
xp = lambda x: sel.xpath(x).extract()  # let's type this only once
print xp('//a//text()')  # take a peek at the node-set
[u'Click here to go to the ', u'Next Page']  # output of the above command
print xp('string(//a//text())')  # convert it to a string
[u'Click here to go to the ']  # output of the above command
Let's do the same with lxml. You can implement XPath with either lxml or the Scrapy selector, since the XPath expression is the same for both.
lxml code:
from lxml import html
html_code = """<a href="#">Click here to go to the <strong>Next Page</strong></a>"""
parsed_body = html.fromstring(html_code)  # parse the text into a tree
# Perform XPath queries on the tree
print parsed_body.xpath('//a//text()')  # take a peek at the node-set
[u'Click here to go to the ', u'Next Page']  # output
print parsed_body.xpath('string(//a//text())')  # convert it to a string
Click here to go to the  # output: lxml's string() result is a plain string, not a list
A node converted to a string, however, puts together its own text plus the text of all its descendants:
>>> xp('//a[1]')  # selects the first a node
[u'<a href="#">Click here to go to the <strong>Next Page</strong></a>']
>>> xp('string(//a[1])')  # converts it to a string
[u'Click here to go to the Next Page']
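This is exactly why the contains() tip above matters. Reusing the same sel and xp from the snippet above, the node-set version misses the match because only the first text node is compared, while the node version sees the full descendant text:
>>> xp('//a[contains(.//text(), "Next Page")]')  # compares only the first text node
[]
>>> xp('//a[contains(., "Next Page")]')  # compares the node's full string value
[u'<a href="#">Click here to go to the <strong>Next Page</strong></a>']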
Beware of the difference between //node[1] and (//node)[1]. //node[1] selects all the nodes occurring first under their respective parents, while (//node)[1] selects all the nodes in the document and then gets only the first of them.
from scrapy import Selector
html_code = """<ul class="list">
<li>1</li>
<li>2</li>
<li>3</li>
</ul>
<ul class="list">
<li>4</li>
<li>5</li>
<li>6</li>
</ul>"""
sel = Selector(text=html_code)
xp = lambda x: sel.xpath(x).extract()
xp('//li[1]')  # get all first LI elements under whatever their parents are
[u'<li>1</li>', u'<li>4</li>']
xp('(//li)[1]')  # get the first LI element in the whole document
[u'<li>1</li>']
xp('//ul/li[1]')  # get all first LI elements under an UL parent
[u'<li>1</li>', u'<li>4</li>']
xp('(//ul/li)[1]')  # get the first LI element under an UL parent in the document
[u'<li>1</li>']
Also,
//a[starts-with(@href, '#')][1] gets a collection of the local anchors that occur first under their respective parents, and (//a[starts-with(@href, '#')])[1] gets the first local anchor in the document.
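Here is a small sketch of that difference, using a couple of made-up local anchors:
from scrapy import Selector
html_code = """<div><a href="#home">Home</a><a href="#top">Top</a></div>
<div><a href="#bottom">Bottom</a></div>"""
sel = Selector(text=html_code)
xp = lambda x: sel.xpath(x).extract()
xp('//a[starts-with(@href, "#")][1]')  # first matching anchor under each parent
[u'<a href="#home">Home</a>', u'<a href="#bottom">Bottom</a>']
xp('(//a[starts-with(@href, "#")])[1]')  # first matching anchor in the whole document
[u'<a href="#home">Home</a>']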
When selecting by class, be as specific as necessary.
If you want to select elements by a CSS class, the XPath way to do the same job is the rather verbose:
*[contains(concat(' ', normalize-space(@class), ' '), ' someclass ')]
Let's cook up some examples:
>>> sel = Selector(text='<p class="content-author">Someone</p><p class="content text-wrap">Some content</p>')
>>> xp = lambda x: sel.xpath(x).extract()
BAD: because there are multiple classes in the attribute
>>> xp('//*[@class="content"]')
[]
BAD: gets more content than we need
>>> xp('//*[contains(@class, "content")]')
[u'<p class="content-author">Someone</p>',
 u'<p class="content text-wrap">Some content</p>']
GOOD:
>>> xp('//*[contains(concat(" ", normalize-space(@class), " "), " content ")]')
[u'<p class="content text-wrap">Some content</p>']
And many times, you can just use a CSS selector instead, and even combine the two of them if needed:
ALSO GOOD:
>>> sel.css('.content').extract()
[u'<p class="content text-wrap">Some content</p>']
>>> sel.css('.content').xpath('@class').extract()
[u'content text-wrap']
Learn to use all the different axes.
It is handy to know how to use the axes; you can follow along with these examples.
In particular, you should note that following and following-sibling are not the same thing; this is a common source of confusion. The same goes for preceding and preceding-sibling, and for ancestor and parent.
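For example, here is a minimal sketch (with made-up markup) of how following and following-sibling differ:
from scrapy import Selector
sel = Selector(text='<h1>Title</h1><p>intro</p><div><p>nested</p></div>')
xp = lambda x: sel.xpath(x).extract()
xp('//h1/following::p/text()')  # every p after the h1 in document order
[u'intro', u'nested']
xp('//h1/following-sibling::p/text()')  # only p elements that are siblings of the h1
[u'intro']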
Useful trick to get text content
Here is another XPath trick that you may use to get the interesting text contents:
//*[not(self::script or self::style)]/text()[normalize-space(.)]
This excludes the content of the script and style tags and also skips whitespace-only text nodes.
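For instance, applied to a small made-up page that contains a script tag and several whitespace-only text nodes:
from scrapy import Selector
html_code = """<html><body>
<script>var x = 1;</script>
<h1>Heading</h1>
<p>Some text</p>
</body></html>"""
sel = Selector(text=html_code)
sel.xpath('//*[not(self::script or self::style)]/text()[normalize-space(.)]').extract()
[u'Heading', u'Some text']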
Tools & Libraries Used:
Firefox
Firefox Inspect Element with Firebug
Scrapy : 1.1.1
Python : 2.7.12
Requests : 2.11.0
Read the original article here: