Unable to read URLs from a txt file. I want to read and open the URLs in the txt file one by one, and extract each page's title from the URL's source with a regex. Error message:
Traceback (most recent call last):
  File "Mypy.py", line 14, in <module>
    UrlsOpen = urllib2.urlopen(listSplit)
  File "/usr/lib/python2.7/urllib2.py", line 154, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib/python2.7/urllib2.py", line 420, in open
    req.timeout = timeout
AttributeError: 'list' object has no attribute 'timeout'
Mypy.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import re
import requests
import urllib2
import threading
UrlListFile = open("Url.txt","r")
UrlListRead = UrlListFile.read()
UrlListFile.close()
listSplit = UrlListRead.split('\r\n')
UrlsOpen = urllib2.urlopen(listSplit)
ReadSource = UrlsOpen.read().decode('utf-8')
regex = '<title.*?>(.+?)</title>'
comp = re.compile(regex)
links = re.findall(comp,ReadSource)
for i in links:
    SaveDataFiles = open("SaveDataMyFile.txt","w")
    SaveDataFiles.write(i)
    SaveDataFiles.close()
Answer 0 (score: 0)
When you call urllib2.urlopen(listSplit), listSplit is a list, but urlopen expects a string or request object. The simple fix is to iterate over listSplit and open each URL individually instead of passing the whole list to urlopen.
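A minimal sketch of that fix (assuming listSplit already holds one URL per entry):

for url in listSplit:
    # urlopen() takes a single URL string, not a list
    UrlsOpen = urllib2.urlopen(url)
    ReadSource = UrlsOpen.read().decode('utf-8')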
Likewise, re.findall() returns a list of matches for each ReadSource it searches. You can handle that in a couple of ways.
I chose to handle it by building a list of lists:
websites = [[link, link], [link], [link, link, link]]
and iterating over both levels. That way you can do something specific with each website's list of matches (put them in different files, etc.).
You could also flatten the websites list so that it contains only the links, rather than nested lists of links:
links = [link, link, link, link]
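For example, a minimal way to flatten it (assuming websites is the list of lists built in the code below):

links = []
for website in websites:
    links.extend(website)  # append each site's matches onto one flat list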
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import re
import urllib2
from pprint import pprint

UrlListFile = open("Url.txt", "r")
UrlListRead = UrlListFile.read()
UrlListFile.close()
# splitlines() handles both \n and \r\n line endings
listSplit = UrlListRead.splitlines()
pprint(listSplit)

regex = '<title.*?>(.+?)</title>'
comp = re.compile(regex)

websites = []
for url in listSplit:
    # open each URL individually; urlopen() expects a single string
    UrlsOpen = urllib2.urlopen(url)
    ReadSource = UrlsOpen.read().decode('utf-8')
    # findall() returns the list of titles matched on this page
    websites.append(re.findall(comp, ReadSource))

# the with statement closes the file automatically
with open("SaveDataMyFile.txt", "w") as SaveDataFiles:
    for website in websites:
        for link in website:
            pprint(link)
            SaveDataFiles.write(link.encode('utf-8') + "\n")  # one title per line
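Note that splitlines() is used here instead of split('\r\n'), so the code works whether Url.txt was saved with Unix (\n) or Windows (\r\n) line endings. The file is assumed to contain one URL per line, e.g. (hypothetical contents):

http://example.com
http://example.org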