Crawling mechanism problem with a Twitter Python crawler

Date: 2011-08-05 00:18:23

Tags: python html twitter web-crawler

Below is a small piece of the code for my Twitter crawling mechanism:

from BeautifulSoup import BeautifulSoup
import re
import urllib2

url = 'http://mobile.twitter.com/NYTimesKrugman'

def gettweets(soup):
    tags = soup.findAll('div', {'class' : "list-tweet"})#to obtain tweet of a follower
    for tag in tags: 
        print tag.renderContents()
        print ('\n\n')

def are_more_tweets(soup): #to check whether there is more than one page on mobile twitter 
    links = soup.findAll('a', {'href': True}, {id: 'more_link'})
    for link in links:
        b = link.renderContents()
        test_b = str(b)
        if test_b.find('more'):
            return True
        else:
            return False

def getnewlink(soup): #to get the link to go to the next page of tweets on twitter 
    links = soup.findAll('a', {'href': True}, {id : 'more_link'})
    for link in links:
        b = link.renderContents()
        if str(b) == 'more':
            c = link['href']
            d = 'http://mobile.twitter.com' +c
            return d

def checkforstamp(soup): # the parser scans a webpage to check if any of the tweets are older than 3 months
    times = soup.findAll('a', {'href': True}, {'class': 'status_link'})
    for time in times:
        stamp = time.renderContents()
        test_stamp = str(stamp)
        if test_stamp == '3 months ago':  
            print test_stamp
            return True
        else:
            return False


response = urllib2.urlopen(url)
html = response.read()
soup = BeautifulSoup(html)
gettweets(soup)
stamp = checkforstamp(soup)
tweets = are_more_tweets(soup)
print 'stamp' + str(stamp)
print 'tweets' +str (tweets)
while (stamp is False) and (tweets is True): 
    b = getnewlink(soup)
    print b
    red = urllib2.urlopen(b)
    html = red.read()
    soup = BeautifulSoup(html)
    gettweets(soup)
    stamp = checkforstamp(soup)
    tweets = are_more_tweets(soup)
print 'done' 

The problem is that I want my Twitter crawler to stop moving on to the user's next page once it hits tweets that are about 3 months old. However, it doesn't seem to do that; it just keeps looking for the next page of tweets. I believe this is because checkforstamp keeps evaluating to False. Does anyone have suggestions for how I could modify the code so that the crawler keeps fetching the next page of tweets only as long as there are more tweets (as verified by the are_more_tweets mechanism) and it hasn't yet reached tweets that are 3 months old? Thanks!
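(Side note: I realize the exact comparison test_stamp == '3 months ago' only ever matches that one literal string; a page could jump straight from '2 months ago' to '4 months ago', or to 'X years ago', and the check would never fire. Below is a rough sketch of a looser check I could use instead, assuming the mobile timestamps follow the 'N months ago' / 'N years ago' wording; the helper name is made up for illustration.)

import re

def is_old_enough(test_stamp):
    # Sketch only: treat anything reported as 3 or more months (or any
    # number of years) ago as "too old"; assumes the 'N months/years ago'
    # wording used by mobile.twitter.com at the time.
    match = re.match(r'(\d+) month', test_stamp)
    if match and int(match.group(1)) >= 3:
        return True
    if 'year' in test_stamp:
        return True
    return False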

Edit - please see below:

from BeautifulSoup import BeautifulSoup
import re
import urllib

url = 'http://mobile.twitter.com/cleversallie'
output = open(r'C:\Python28\testrecursion.txt', 'a') 

def gettweets(soup):
    tags = soup.findAll('div', {'class' : "list-tweet"})#to obtain tweet of a follower
    for tag in tags: 
        a = tag.renderContents()
        b = str (a)
        print(b)
        print('\n\n')

def are_more_tweets(soup):#to check whether there is more than one page on mobile twitter 
    links = soup.findAll('a', {'href': True}, {id: 'more_link'})
    for link in links:
        b = link.renderContents()
        test_b = str(b)
        if test_b.find('more'):
            return True
        else:
            return False

def getnewlink(soup): #to get the link to go to the next page of tweets on twitter 
    links = soup.findAll('a', {'href': True}, {id : 'more_link'})
    for link in links:
        b = link.renderContents()
        if str(b) == 'more':
            c = link['href']
            d = 'http://mobile.twitter.com' +c
            return d

def checkforstamp(soup): # the parser scans a webpage to check if any of the tweets are older than 3 months
    times = soup.findAll('a', {'href': True}, {'class': 'status_link'})
    for time in times:
        stamp = time.renderContents()
        test_stamp = str(stamp)
        if not (test_stamp[0]) in '0123456789':
            continue
        if test_stamp == '3 months ago':
            print test_stamp
            return True
        else:
            return False


response = urllib.urlopen(url)
html = response.read()
soup = BeautifulSoup(html)
gettweets(soup)
stamp = checkforstamp(soup)
tweets = are_more_tweets(soup)
while (not stamp) and (tweets): 
    b = getnewlink(soup)
    print b
    red = urllib.urlopen(b)
    html = red.read()
    soup = BeautifulSoup(html)
    gettweets(soup)
    stamp = checkforstamp(soup)
    tweets = are_more_tweets(soup)
print 'done' 

1 Answer:

Answer 0 (score: 1):

Your soup.findAll() is picking up an image tag among the links that match your pattern (it has an href attribute and class status_link).

Try the following, instead of always returning on the first link:
for time in times:
    stamp = time.renderContents()
    test_stamp = str(stamp)
    print test_stamp
    if not test_stamp[0] in '0123456789':
        continue
    if test_stamp == '3 months ago':  
        return True
    else:
        return False

If a link's text doesn't start with a digit, it is now skipped, so you should actually reach the right link. Leave the print statement in there so you can see whether there are other kinds of links starting with a digit that you also need to filter out.
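As a side note (not part of the fix above, just a sketch): in BeautifulSoup 3, findAll's third positional parameter is recursive, so the {'class': 'status_link'} dict passed there is never applied as a filter. Folding every constraint into the single attrs dict makes the class filter actually take effect, assuming the anchors carry exactly that class value:

# Sketch: both constraints in one attrs dict, so the class filter is applied
# (a dict passed as the third positional argument would be taken as `recursive`).
times = soup.findAll('a', {'href': True, 'class': 'status_link'})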

Edit: You were always returning on the first item in times. I changed it so that any link whose text doesn't start with a digit is ignored.

However, if it never finds a link starting with a digit, it will return None. That would still work, except that you changed while not stamp and tweets to while stamp is False and tweets is True. Change it back to while not stamp and tweets, which correctly treats None and False the same, and it should work.
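To illustrate the difference between the two loop conditions (a standalone snippet, just for illustration):

stamp = None           # checkforstamp fell through without finding a digit-prefixed link
print stamp is False   # False -> `while stamp is False and ...` exits immediately
print not stamp        # True  -> `while not stamp and ...` keeps crawling, as intended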