Unsubscriptable int error with string slicing

Time: 2011-03-27 21:06:56

Tags: python string slice

I'm writing a webscraper, and I have a table with links to the .pdf files I want to download, save, and analyze later. I'm using Beautiful Soup, and I have soup find all the links. They are normally Beautiful Soup tag objects, but I turn them into strings. The string is actually a bunch of junk with the link text buried in the middle. I want to cut out that junk and just leave the link. Then I'll turn the links into a list and have python download them later. (My plan is to have python keep a list of the pdf link names to track what it has downloaded, and then it can name the files based on those link names, or parts of them.)
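For that later download-and-name step, a minimal sketch of the idea (assuming Python 2 to match the rest of the post, and a hypothetical list of already-cleaned URLs; the names here are made up):

import os
import urllib

# hypothetical list of cleaned-up pdf URLs (protocol added back in)
pdf_urls = ["http://blah/blah/blah/I_am_the_first_file.pdf",
            "http://blah/blah/blah/And_I_am_the_seond_file.pdf"]

downloaded = []                           # track what has been fetched so far
for url in pdf_urls:
    filename = os.path.basename(url)      # e.g. "I_am_the_first_file.pdf"
    urllib.urlretrieve(url, filename)     # save the pdf under that name
    downloaded.append(filename)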

But the .pdfs have variable name lengths, for example:

  • I_am_the_first_file.pdf
  • And_I_am_the_seond_file.pdf

And they sit in the table surrounded by a bunch of junk text:

  • a href =://blah/blah/blah/I_am_the_first_file.pdf [and other junk that happens to end up in my string]
  • a href =://blah/blah/blah/And_I_am_the_seond_file.pdf [and other junk that happens to end up in my string]

So I want to cut ("slice") off the front and back of the string and be left with just the string pointing to my url (the following is the output my program needs):

  • ://blah/blah/blah/I_am_the_first_file.pdf
  • ://blah/blah/blah/And_I_am_the_seond_file.pdf

As you can see, the second file has more characters in its string than the first. So I can't just do:

string[9:40]

or whatever, because that would work for the first file but not for the second.

So I tried to put a variable at the end of the string slice, like this:

string[9:x]

where x is the position in the string where '.pdf' ends (my idea was to use the string.index('.pdf') function to do this).
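That general idea does work on a plain string. A minimal sketch, using a made-up junk string modeled on the examples above (the start is found with index() here too, rather than counting 9 characters by hand):

junk = 'a href =://blah/blah/blah/I_am_the_first_file.pdf [other junk text]'
start = junk.index('://')                # where the link itself begins
end = junk.index('.pdf') + len('.pdf')   # position just past the extension
print junk[start:end]                    # ://blah/blah/blah/I_am_the_first_file.pdf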

But that fails, because I get an error when I try to do it with a variable

("TypeError: 'int' object is unsubscriptable")

There's probably a simple answer and a better way to do this besides messing with strings, but you guys are smarter than I am and I figured you'd know right off the top of your heads.

Here's my full code so far:

import urllib, urllib2
from BeautifulSoup import BeautifulSoup

page = urllib2.urlopen("mywebsite.com")
soup = BeautifulSoup(page)

table_with_my_pdf_links = soup.find('table', id = 'searchResults')
#"search results" is just what the table I was looking for happened to be called.

for pdf_link in table_with_my_pdf_links.findAll('a'):
#this says find all the links and loop over them

   pdf_link_string = str(pdf_link)
#turn the links into strings (they are usually soup tag objects, which don't help me much that I know of)

   if 'pdf' in pdf_link_string:
#some links in the table are .html and I don't want those, I just want the pdfs.

      end_of_link = pdf_link_string.index('.pdf')
#I want to know where the .pdf file extension ends because that's the end of the link, so I'll slice backward from there

      just_the_link = end_of_link[9:end_of_link]
#here, the first 9 characters are junk "a href = yadda yadda yadda".  So I'm setting a variable that starts just after that junk and goes to the .pdf (I realize that I will actually have to do .pdf + 3 or something to actually get to the end of the string, but this makes it easier for now).

      print just_the_link
#I debug by print statement because I'm an amateur

The line (second from the bottom): just_the_link = end_of_link[9:end_of_link]

returns the error (TypeError: 'int' object is unsubscriptable)

Also, the ":" is supposed to be the hypertext transfer protocol colon, but it wouldn't let me post those because newbs can't post more than 2 links, so I took them out.

2 Answers:

Answer 0 (score: 1)

just_the_link = end_of_link[9:end_of_link]

That is your problem, just like the error message says. end_of_link is an integer: the index of ".pdf" in pdf_link_string, which you computed on the previous line. So you can't slice it. You want to slice pdf_link_string.
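Applied to the loop in the question, the fix is roughly this sketch (the hard-coded 9 and the + len('.pdf') to include the extension are carried over from the question's own description):

      end_of_link = pdf_link_string.index('.pdf') + len('.pdf')
      # slice the string, not the integer index into it
      just_the_link = pdf_link_string[9:end_of_link]
      print just_the_link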

Answer 1 (score: 0)

Sounds like a job for regular expressions:

import urllib, urllib2, re
from BeautifulSoup import BeautifulSoup

page = urllib2.urlopen("mywebsite.com")
soup = BeautifulSoup(page)

table_with_my_pdf_links = soup.find('table', id = 'searchResults')
#"search results" is just what the table I was looking for happened to be called.

for pdf_link in table_with_my_pdf_links.findAll('a'):
#this says find all the links and loop over them

   pdf_link_string = str(pdf_link)
#turn the links into strings (they are usually soup tag objects, which don't help me much that I know of)

   if 'pdf' in pdf_link_string:
      pdfURLPattern = re.compile(r"""://(\w+/)+\S+\.pdf""")
      pdfURLMatch = pdfURLPattern.search(pdf_link_string)

      #If there is no match then search() returns None; otherwise the whole match (group(0)) is the URL of interest.
      if pdfURLMatch:
         print pdfURLMatch.group(0)
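For a quick sanity check of the pattern outside the scraper (the sample string here is just the first example from the question with made-up junk around it):

import re

sample = 'a href =://blah/blah/blah/I_am_the_first_file.pdf [other junk text]'
match = re.search(r"://(\w+/)+\S+\.pdf", sample)
if match:
   print match.group(0)   # prints ://blah/blah/blah/I_am_the_first_file.pdf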