How to scrape the latest articles from a specific website using a specific set of keywords?

Asked: 2015-06-12 17:32:54

Tags: python-2.7

I am trying to use Python to scrape article links from a specific website based on keywords such as the article name, but I am not getting the proper links.

import sys
import urllib.request
import urllib.parse
from bs4 import BeautifulSoup

def extract_article_links(url, data):
    # POST the search form data and read the raw HTML of the response
    req = urllib.request.Request(url, data)
    response = urllib.request.urlopen(req)
    responseData = response.read()
    soup = BeautifulSoup(responseData, "html.parser")
    # Print every anchor found on the page
    for link in soup.find_all('a'):
        try:
            print("<a href='%s'>%s</a>" % (link.get('href'), link.text))
        except Exception as e:
            print(e)
    # Look for the result blocks marked with class="info"
    results = soup.find_all("div", {"class": "info"})
    print(results)
    for item in results:
        print(item.contents[0].text)
        print(item.contents[1].text)

if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Insufficient arguments..!!")
        sys.exit(1)
    url = sys.argv[1]
    values = {'s': 'article', 'submit': 'search'}
    data = urllib.parse.urlencode(values).encode('utf-8')
    extract_article_links(url, data)
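For reference, since the goal is to keep only links that mention a keyword, filtering the collected anchors could look like this minimal sketch (the helper name links_matching is illustrative, not part of the code above):

import requests
from bs4 import BeautifulSoup

def links_matching(url, keyword):
    # Keep only anchors whose href or link text mentions the keyword
    soup = BeautifulSoup(requests.get(url).content, "html.parser")
    return [a.get('href') for a in soup.find_all('a')
            if a.get('href') and (keyword in a.get('href') or keyword in a.text)]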

2 Answers:

Answer 0 (score: 0):

Try lxml: parse the HTML and find the elements you are looking for, then you can do this easily with XPath:

from lxml import html
# "source" is the raw HTML of the page, e.g. requests.get(url).content
for link in html.fromstring(source).xpath('//a/@href'):
    print(link)

Of course, you will need to modify the XPath according to the attribute you are looking for.
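For example, to keep only hrefs that mention a search keyword, the expression can be narrowed with XPath's contains() (assuming source holds the page HTML as above; the keyword "article" is illustrative):

from lxml import html

# Keep only hrefs containing the keyword "article"
links = html.fromstring(source).xpath('//a[contains(@href, "article")]/@href')
print(links)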

Answer 1 (score: 0):

Try this:

import requests
from bs4 import BeautifulSoup

def extract_article_links(url, data):
    # Query the site's search page directly with the keyword
    search_url = ('http://www.hindustantimes.com/Search/search.aspx'
                  '?q={}&op=All&pt=all&auth=all'.format(data))
    soup = BeautifulSoup(requests.get(search_url).content, "html.parser")
    # The search results are listed inside <ul class="searchNews">
    responseData = soup.find("ul", {'class': 'searchNews'})
    _a, _li = responseData.find_all('a'), responseData.find_all('li')
    for i, j in zip(_a, _li):
        print('=' * 40)
        print('Link: ', i['href'])
        print('Title: ', i.contents[0])
        print('Content: \n\t', j.p.get_text(), '\n')

if __name__ == "__main__":
    url = "http://www.hindustantimes.com/"
    extract_article_links(url,'article')
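As a side note, requests can build and encode the query string itself, which avoids formatting the URL by hand; a small sketch of the same request (the helper name search_soup is illustrative):

import requests
from bs4 import BeautifulSoup

def search_soup(keyword):
    # Let requests URL-encode the query parameters instead of
    # formatting them into the URL by hand
    params = {'q': keyword, 'op': 'All', 'pt': 'all', 'auth': 'all'}
    resp = requests.get('http://www.hindustantimes.com/Search/search.aspx',
                        params=params)
    return BeautifulSoup(resp.content, "html.parser")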