I've written a scraper that is supposed to scrape only websites whose keywords match. Here is the code:
class MySpider(CrawlSpider):
    name = 'smm'
    allowed_domains = []
    start_urls = ['http://en.wikipedia.org/wiki/Social_media']

    rules = (
        Rule(SgmlLinkExtractor(deny=('statcounter.com/', 'wikipedia', 'play.google', 'books.google.com', 'github.com', 'amazon', 'bit.ly', 'wikimedia', 'mediawiki', 'creativecommons.org')), callback="parse_items", follow=True),
    )

    def parse_items(self, response):
        items = []
        # Keywords that must be present in the metadata for the page to be scraped
        keywords = ['social media', 'social business', 'social networking', 'social marketing', 'online marketing', 'social selling',
                    'social customer experience management', 'social cxm', 'social cem', 'social crm', 'google analytics', 'seo', 'sem',
                    'digital marketing', 'social media manager', 'community manager']
        # Extract the webpage's meta keywords
        metakeywords = response.xpath('//meta[@name="keywords"]').extract()
        # Discard pages without meta keywords
        if metakeywords != []:
            # Compare keywords and extract links if one of the defined keywords is present in the metadata
            if (keywords in metaKW for metaKW in metakeywords):
                for link in response.xpath("//a"):
                    item = SocialMediaItem()
                    item['SourceTitle'] = link.xpath('/html/head/title').extract()
                    item['TargetTitle'] = link.xpath('text()').extract()
                    item['link'] = link.xpath('@href').extract()
                    item['webKW'] = metakeywords
                    outbound = str(link.xpath('@href').extract())
                    if 'http' in outbound:
                        items.append(item)
        return items
However, I think I'm missing something, because it also scrapes websites that don't contain the given keywords. Can you help me fix this? Thanks!

Dani
Answer 0 (score: 1)
If you want to check whether any of your keywords appears in the metakeywords list, use any():

    if any(key in metakeywords for key in keywords):
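A minimal standalone sketch of this check, assuming metakeywords has already been reduced to a list of individual keyword strings (the lists below are shortened stand-ins for the ones in the question):

```python
# Shortened stand-in lists for illustration only
keywords = ['social media', 'seo', 'digital marketing']
metakeywords = ['seo', 'analytics']

# any() short-circuits: it returns True as soon as one
# defined keyword is found in the page's keyword list
match = any(key in metakeywords for key in keywords)
print(match)  # True, because 'seo' appears in both lists
```

Unlike the bare generator expression in the question, any() actually consumes the generator and produces a real boolean.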
Answer 1 (score: 0)
I think the problem is in this strange if statement:

    if (keywords in metaKW for metaKW in metakeywords)

That expression builds a generator object, and a generator object is always truthy, so the branch runs for every page. Try this instead:

    for metaKW in metakeywords:
        if metaKW in keywords:
            # code...
            break
This way there is no reason to check whether the list has elements, so you can also remove the if metakeywords != [] check.
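A runnable sketch of this loop, again assuming metakeywords is a list of individual keyword strings (the lists are hypothetical shortened examples):

```python
# Shortened stand-in lists for illustration only
keywords = ['social media', 'seo', 'digital marketing']
metakeywords = ['web design', 'seo']

found = False
for metaKW in metakeywords:
    if metaKW in keywords:
        found = True
        break  # stop at the first matching keyword

print(found)  # True, because 'seo' is one of the defined keywords
```

Note that if metakeywords is empty, the loop body simply never runs, which is why the separate emptiness check becomes redundant.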