I am trying to count the number of contractions that politicians use in certain speeches. I have a lot of speeches, but here are a few URLs as a sample:
every_link_test = ['http://www.millercenter.org/president/obama/speeches/speech-4427',
'http://www.millercenter.org/president/obama/speeches/speech-4424',
'http://www.millercenter.org/president/obama/speeches/speech-4453',
'http://www.millercenter.org/president/obama/speeches/speech-4612',
'http://www.millercenter.org/president/obama/speeches/speech-5502']
Right now I have a very crude counter: it only counts the total number of contractions used across all of these links. For example, the code below returns 79, 101, 101, 182, 224 for the five links above. However, I want to tie each count to filename, a variable I create below, so that I end up with something like (speech_1, 79), (speech_2, 22), (speech_3, 0), (speech_4, 81), (speech_5, 42). That way I can track how many contractions are used in each individual speech. My code raises the following error: AttributeError: 'tuple' object has no attribute 'split'
Here is my code:
import urllib2,sys,os
from bs4 import BeautifulSoup,NavigableString
from string import punctuation as p
from multiprocessing import Pool
import re, nltk
import requests
reload(sys)
url = 'http://www.millercenter.org/president/speeches'
url2 = 'http://www.millercenter.org'
conn = urllib2.urlopen(url)
html = conn.read()
miller_center_soup = BeautifulSoup(html)
links = miller_center_soup.find_all('a')
linklist = [tag.get('href') for tag in links if tag.get('href') is not None]
# remove all items in list that don't contain 'speeches'
linkslist = [_ for _ in linklist if re.search('speeches',_)]
del linkslist[0:2]
# concatenate 'http://www.millercenter.org' with each speech's URL ending
every_link_dups = [url2 + end_link for end_link in linkslist]
# remove duplicates
seen = set()
every_link = [] # no duplicates array
for l in every_link_dups:
    if l not in seen:
        every_link.append(l)
        seen.add(l)

def processURL_short_2(l):
    open_url = urllib2.urlopen(l).read()
    item_soup = BeautifulSoup(open_url)
    item_div = item_soup.find('div',{'id':'transcript'},{'class':'displaytext'})
    item_str = item_div.text.lower()
    splitlink = l.split("/")
    president = splitlink[4]
    speech_num = splitlink[-1]
    filename = "{0}_{1}".format(president, speech_num)
    return item_str, filename
every_link_test = every_link[0:5]
print every_link_test
count = 0
for l in every_link_test:
    content_1 = processURL_short_2(l)
    for word in content_1.split():
        word = word.strip(p)
        if word in contractions:
            count = count + 1
print count, filename
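For reference, the contractions collection used above is not shown; it is presumably a set of contraction strings. A minimal, hypothetical stand-in (assuming lowercase matching, since the text is lowercased via item_div.text.lower()) might be:

# Hypothetical stand-in for the undefined `contractions` collection:
# a set of common English contractions, lowercased to match the speech text.
contractions = {"don't", "can't", "won't", "isn't", "aren't", "it's", "i'm",
                "we're", "they're", "you're", "that's", "didn't", "doesn't"}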
Answer 0 (score: 0)
Instead of print count, filename, you should save this data to a data structure such as a dictionary. Since processURL_short_2 has been modified to return a tuple, you need to unpack it:
data = {} # initialize a dictionary
for l in every_link_test:
    count = 0 # reset the count for each speech
    content_1, filename = processURL_short_2(l) # unpack the content and filename
    for word in content_1.split():
        word = word.strip(p)
        if word in contractions:
            count = count + 1
    data[filename] = count # add this to the dictionary as filename:count
This gives you a dictionary like {'obama_speech-4427': 79, 'obama_speech-4424': 22, ...}, which makes it easy to store and access the parsed data.
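If you then want the (speech, count) pairs described in the question, just iterate over the dictionary; a quick sketch in the same Python 2 style as the rest of the code:

# Print each speech's filename alongside its contraction count.
for filename, count in sorted(data.items()):
    print filename, count  # e.g. obama_speech-4427 79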
Answer 1 (score: 0)
As the error message says, you can't use split the way you are using it; split is for strings. So you need to change this:
for word in content_1.split():
to this:
for word in content_1[0].split():
I picked [0] by running your code; I believe that gives you the block of text you want to search.
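In other words, both answers address the same root cause: processURL_short_2 returns a (text, filename) tuple, so you have to pull the string out of the tuple before calling split on it. A quick illustration of the two equivalent fixes:

result = processURL_short_2(l)  # result is a tuple: (item_str, filename)
text, filename = result         # option 1: unpack it (Answer 0)
text = result[0]                # option 2: index into it (Answer 1)
words = text.split()            # split() now works, because text is a string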
@TigerhawkT3 also made a good suggestion that you should follow; see their answer as well.