How XPath Plays a Vital Role in Web Scraping, Part 2
You can read the first part of the post here: How Xpath Plays Vital Role In Web Scraping. This piece on XPaths is the follow-up to that post.
Let's dive into a real-world example: scraping the Amazon website for information about its Deals of the Day, which can be found at this URL. Navigate to the Amazon Deals of the Day page in Firefox and find the XPath selectors: right-click on a deal you like and select "Inspect Element with Firebug":
If you look closely at the image below, you can find the source of the deal's image and the name of the deal in the src and alt attributes, respectively.
So now let's write generic XPath expressions that gather the name and image source of each product (deal):
//img[@role='img']/@src   ## for image source
//img[@role='img']/@alt   ## for product name
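To put the two expressions to work, here is a minimal sketch that fetches the page with requests and applies them with a Scrapy selector. The URL and User-Agent header below are placeholders, and the sketch assumes the deals appear in the static HTML (Amazon renders parts of this page with JavaScript, so the list may be incomplete):

import requests
from scrapy import Selector

url = 'https://www.amazon.com/gp/goldbox'    # placeholder Deals of the Day URL
headers = {'User-Agent': 'Mozilla/5.0'}      # Amazon tends to block the default UA
response = requests.get(url, headers=headers)

sel = Selector(text=response.text)
image_sources = sel.xpath("//img[@role='img']/@src").extract()
product_names = sel.xpath("//img[@role='img']/@alt").extract()

for name, src in zip(product_names, image_sources):
    print name, '->', src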
If you have an interest in Python and web scraping, you may have already played with the nice requests library to get the content of pages from the Web. Maybe you have toyed around with a Scrapy selector or lxml to make content extraction easier. In this post, I'm going to show you some tips I found valuable when using XPath in the trenches, and we are going to use both lxml and Scrapy selectors for HTML parsing.
Avoid using expressions like contains(.//text(), 'search text') in your XPath conditions. Use contains(., 'search text') instead.
Here is why: the expression .//text() yields a collection of text elements, a node-set. When a node-set is converted to a string, which happens when it is passed as an argument to a string function like contains() or starts-with(), the result is the text of the first node only.
Scrapy Code:
from scrapy import Selector

html_code = '''<a href="#">Click here to go to the <strong>Next Page</strong></a>'''
sel = Selector(text=html_code)
xp = lambda x: sel.xpath(x).extract()    # Let's type this only once

print xp('//a//text()')                  # Take a peek at the node-set
[u'Click here to go to the ', u'Next Page']    # output of the above command

print xp('string(//a//text())')          # Convert it to a string
[u'Click here to go to the ']            # output of the above command
Let's do the same with lxml. You can implement XPath with either lxml or the Scrapy selector, since the XPath expressions are identical for both.
lxml code:
from lxml import html

html_code = '''<a href="#">Click here to go to the <strong>Next Page</strong></a>'''
parsed_body = html.fromstring(html_code)          # Parse the text into a tree

print parsed_body.xpath('//a//text()')            # Take a peek at the node-set
['Click here to go to the ', 'Next Page']         # output of the above command

print parsed_body.xpath('string(//a//text())')    # Convert it to a string
'Click here to go to the '                        # output: lxml returns a string here, not a list
A node converted to a string, however, concatenates the text of itself plus that of all its descendants:
>>> xp('//a[1]')   # selects the first a node
[u'<a href="#">Click here to go to the <strong>Next Page</strong></a>']
>>> xp('string(//a[1])')   # converts it to a string
[u'Click here to go to the Next Page']
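This is exactly why the recommended contains(., 'search text') works where contains(.//text(), 'search text') fails. A quick check with the same xp helper:

>>> xp('//a[contains(.//text(), "Next Page")]')   # only the first text node gets compared
[]
>>> xp('//a[contains(., "Next Page")]')           # the whole string value of the a node gets compared
[u'<a href="#">Click here to go to the <strong>Next Page</strong></a>']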
Beware of the difference between //node[1] and (//node)[1]: //node[1] selects all the nodes occurring first under their respective parents, while (//node)[1] selects all the nodes in the document and then gets only the first of them.
from scrapy import Selector

html_code = '''<ul class="list">
<li>1</li>
<li>2</li>
<li>3</li>
</ul>
<ul class="list">
<li>4</li>
<li>5</li>
<li>6</li>
</ul>'''
sel = Selector(text=html_code)
xp = lambda x: sel.xpath(x).extract()

xp('//li[1]')      # get all first LI elements under whatever their parents are
[u'<li>1</li>', u'<li>4</li>']
xp('(//li)[1]')    # get the first LI element in the whole document
[u'<li>1</li>']
xp('//ul/li[1]')   # get all first LI elements under an UL parent
[u'<li>1</li>', u'<li>4</li>']
xp('(//ul/li)[1]') # get the first LI element under an UL parent in the document
[u'<li>1</li>']
Also, //a[starts-with(@href, '#')][1] gets a collection of the local anchors that occur first under their respective parents, and (//a[starts-with(@href, '#')])[1] gets the first local anchor in the document.
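To make the distinction concrete, here is a small sketch; the HTML snippet is made up for illustration:

from scrapy import Selector

html_code = '''<div><a href="#a">first</a> <a href="#b">second</a></div>
<div><a href="#c">third</a></div>'''
sel = Selector(text=html_code)

sel.xpath('//a[starts-with(@href, "#")][1]/text()').extract()
[u'first', u'third']   # the first local anchor under each parent
sel.xpath('(//a[starts-with(@href, "#")])[1]/text()').extract()
[u'first']             # the first local anchor in the whole document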
When selecting by class, be as specific as necessary.
If you want to select elements by a CSS class, the XPath way to do the same job is the rather verbose:
*[contains(concat(' ', normalize-space(@class), ' '), ' someclass ')]
Let's cook up some examples:
>>> sel = Selector(text='<p class="content-author">Someone</p><p class="content text-wrap">Some content</p>')
>>> xp = lambda x: sel.xpath(x).extract()
BAD: because there are multiple classes in the attribute
>>> xp('//*[@class="content"]')
[]
BAD: gets more content than we need
>>> xp('//*[contains(@class, "content")]')
[u'<p class="content-author">Someone</p>',
 u'<p class="content text-wrap">Some content</p>']
GOOD:
>>> xp('//*[contains(concat(" ", normalize-space(@class), " "), " content ")]')
[u'<p class="content text-wrap">Some content</p>']
And many times, you can just use a CSS selector instead, and even combine the two of them if needed:
ALSO GOOD:
>>> sel.css('.content').extract()
[u'<p class="content text-wrap">Some content</p>']
>>> sel.css('.content').xpath('@class').extract()
[u'content text-wrap']
Learn to use all the different axes.
It is handy to know how to use the axes; you can follow through these examples.
In particular, you should note that following and following-sibling are not the same thing; this is a common source of confusion. The same goes for preceding and preceding-sibling, and also for ancestor and parent.
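For instance, here is a small sketch of the difference between following and following-sibling; the HTML snippet is made up for illustration:

from scrapy import Selector

html_code = '''<div>
<p>a</p>
<p>b</p>
<span>c</span>
</div>
<p>d</p>'''
sel = Selector(text=html_code)

sel.xpath('(//p)[1]/following::*').extract()
[u'<p>b</p>', u'<span>c</span>', u'<p>d</p>']   # every element after the first p, anywhere in the document
sel.xpath('(//p)[1]/following-sibling::*').extract()
[u'<p>b</p>', u'<span>c</span>']                # only later siblings sharing the same parent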
Useful trick to get text content
Here is another XPath trick that you may use to get the interesting text contents:
//*[not(self::script or self::style)]/text()[normalize-space(.)]
This excludes the content of script and style tags and also skips whitespace-only text nodes.
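As a quick illustration with a Scrapy selector (the HTML snippet is made up):

from scrapy import Selector

html_code = '''<html><head><style>p { color: red; }</style></head>
<body><script>var x = 1;</script>
<p>Interesting text</p>
<p>   </p></body></html>'''
sel = Selector(text=html_code)

sel.xpath('//*[not(self::script or self::style)]/text()[normalize-space(.)]').extract()
[u'Interesting text']   # script/style contents and whitespace-only text nodes are gone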
Tools & Libraries Used:
Firefox
Firebug (Inspect Element with Firebug)
Scrapy: 1.1.1
Python: 2.7.12
Requests: 2.11.0