Extracting a product's 'price' and 'title' from Flipkart.com using Python

Asked: 2013-05-04 21:45:50

Tags: python python-2.7 web-scraping beautifulsoup

I wrote the following Python code to extract the PRICE of a specified item from flipkart.com:

import urllib2
import bs4

item = "Wilco Classic Library: Autobiography Of a Yogi (Hardcover)"
item = item.replace(" ", "+")  # replace() returns a new string, so the result must be reassigned
link = 'http://www.flipkart.com/search/a/all?query={0}&vertical=all&dd=0&autosuggest[as]=off&autosuggest[as-submittype]=entered&autosuggest[as-grouprank]=0&autosuggest[as-overallrank]=0&autosuggest[orig-query]=&autosuggest[as-shown]=off&Search=%C2%A0&otracker=start&_r=YSWdYULYzr4VBYklfpZRbw--&_l=pMHn9vNCOBi05LKC_PwHFQ--&ref=a2c6fadc-2e24-4412-be6a-ce02c9707310&selmitem=All+Categories'.format(item)
r = urllib2.Request(link, headers={"User-Agent": "Python-urlli~"})
try:
    response = urllib2.urlopen(r)
except urllib2.URLError:
    print "Internet connection error"
    raise  # without this, 'response' would be undefined below

thePage = response.read()
soup = bs4.BeautifulSoup(thePage)

# First search-result block; both the price and the title live inside it
firstBlockSoup = soup.find('div', attrs={'class': 'fk-srch-item'})

priceSoup = firstBlockSoup.find('b', attrs={'class': 'fksd-bodytext price final-price'})
price = priceSoup.contents[0]
print price

titleSoup = firstBlockSoup.find('a', attrs={'class': 'fk-srch-title-text fksd-bodytext'})
title = titleSoup.findAll('b')
print title

Running the code above prints the PRICE with no problem:

Rs. 138 

But the TITLE comes out as:

[<b>Wilco</b>, <b>Classic</b>, <b>Library</b>, <b>Autobiography</b>, <b>Of</b>, <b>a</b>, <b>Yogi</b>, <b>Hardcover</b>] 

The reason becomes obvious if you look at the product page's source (using 'Inspect element'): each word of the title sits in its own <b> tag, so findAll('b') returns a list of separate tags rather than one string.
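
A minimal sketch of what is going on. The anchor fragment below is an assumption, reconstructed from the list printed above (the real Flipkart markup may differ slightly), but it reproduces the behaviour:

import bs4

# Hypothetical fragment shaped like the search-result title anchor:
# each matched query word is highlighted in its own <b> tag.
fragment = ('<a class="fk-srch-title-text fksd-bodytext">'
            '<b>Wilco</b> <b>Classic</b> <b>Library</b>: '
            '<b>Autobiography</b> <b>Of</b> <b>a</b> <b>Yogi</b> (<b>Hardcover</b>)</a>')
titleSoup = bs4.BeautifulSoup(fragment).find('a')
print titleSoup.findAll('b')  # a list of separate <b> tags, exactly what the question prints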

Now, how do I extract the TITLE in the proper format so that it prints:

Wilco Classic Library: Autobiography Of a Yogi (Hardcover)

2 answers:

Answer 0 (score: 1):

It would be easier to get the title from the firstBlockSoup tag itself:

>>> firstBlockSoup.attrs['data-item-name']
'Wilco Classic Library: Autobiography Of a Yogi (Hardcover)'
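
For example, in the question's script this could replace the titleSoup lookup entirely (assuming the fk-srch-item div really carries the data-item-name attribute shown above):

title = firstBlockSoup.attrs['data-item-name']
print title  # Wilco Classic Library: Autobiography Of a Yogi (Hardcover)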

Answer 1 (score: 1):

Just use the text attribute on titleSoup:

>>> titleSoup=firstBlockSoup.find('a',attrs={'class':'fk-srch-title-text fksd-bodytext'})
>>> titleSoup.text
u'Wilco Classic Library: Autobiography Of a Yogi (Hardcover)'
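
Under the hood, .text is just a shortcut for get_text(), which concatenates every string nested anywhere under the tag, so the <b> wrappers simply vanish from the result.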

This also works:

invalid_tags = ['b']
titleSoup = firstBlockSoup.find('a', attrs={'class': 'fk-srch-title-text fksd-bodytext'})

# Strip the highlighting tags but keep their text in place
for tag in invalid_tags:
    for match in titleSoup.findAll(tag):
        match.replaceWithChildren()

print "".join(titleSoup.contents)
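
As a side note, in current BeautifulSoup 4 releases the idiomatic name for replaceWithChildren() is unwrap(); a minimal equivalent sketch, assuming a reasonably recent bs4:

for match in titleSoup.findAll('b'):
    match.unwrap()            # drop the <b> wrapper but keep its contents in place
print titleSoup.get_text()    # same title string as the join above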