I have a Python threading problem. I have been searching for more than a day without getting anywhere, so I am asking for help. I am using Python 3.4.

The first problem is:
class myThread (threading.Thread):
    def __init__(self, url):
        threading.Thread.__init__(self)
        self.url = url
    def run(self):
        spider (url)
In one part of the code I use toBeProcessed + '/robots.txt'. If I use the method above, it gives no error, but it still doesn't work correctly: not all of the threads run. If I use the following method instead, it tells me unsupported operand type(s) for +: '_thread._local' and 'str':
    def run(self):
        spider (self.url)
Note that I do have the declaration toBeProcessed = threading.local().
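To reproduce the error outside of my spider, I tried this minimal sketch (example.com is just a placeholder URL): a threading.local() object is a per-thread namespace, so concatenating the object itself with a string raises exactly the TypeError above, while concatenating an attribute stored on it works:

```python
import threading

# threading.local() is a namespace object; values go in as attributes.
local_data = threading.local()

def worker(url):
    local_data.url = url                      # per-thread attribute
    target = local_data.url + '/robots.txt'   # works: str + str
    print(target)

t = threading.Thread(target=worker, args=('http://example.com',))
t.start()
t.join()

# Concatenating the local object itself reproduces the reported error:
try:
    local_data + '/robots.txt'
except TypeError as e:
    print(e)  # unsupported operand type(s) for +: '_thread._local' and 'str'
```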
The second problem concerns the rest of the code: only two of the threads ever do any work; the remaining threads, however many there are, do nothing.

Full code:
def spider(url,superMaxPages):
    print(threading.current_thread())
    toBeProcessed = threading.local()
    data = threading.local()
    parser = threading.local()
    links = threading.local()
    lock = threading.Lock()
    writeLock = threading.Lock()
    # Start from the beginning of our collection of pages to visit:
    while 1:
        if LinkParser.numVisited > maxPages:
            print ('max pages reached')
            break
        lock.acquire()
        try:
            if not url:
                time.sleep(0.01)
                lock.release()
                continue
            print('to be processed ')
            toBeProcessed = url.pop()
        except:
            print('threading error')
        lock.release()
        # In case we are not allowed to read the page.
        rp = robotparser.RobotFileParser()
        rp.set_url(toBeProcessed +'/robots.txt')
        rp.read()
        if not(rp.can_fetch("*", toBeProcessed)):
            continue
        LinkParser.visited.append(toBeProcessed)
        LinkParser.numVisited += 1
        writeLock.acquire()
        try:
            f.write(toBeProcessed+'\n')
        finally:
            writeLock.release()
        try:
            parser = LinkParser()
            data, links = parser.getLinks(toBeProcessed)
            # Add the pages that we visited to the end of our collection
            url = url + links
            print("One more page added from &i",threading.get_ident())
        except:
            print(" **Failed!**")
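For comparison, here is a minimal sketch of the shared-worklist pattern the loop above is trying to implement, using queue.Queue, whose get() and put() are already thread-safe (process() is a hypothetical stand-in for the real parsing step, and the example URLs are placeholders):

```python
import queue
import threading

# Shared, thread-safe worklist: no manual Lock around pop/append needed.
url_queue = queue.Queue()

def process(url):
    # Placeholder: would fetch the page and return the links found on it.
    return []

def worker(results):
    while True:
        try:
            url = url_queue.get_nowait()
        except queue.Empty:
            break  # nothing left to do
        for link in process(url):
            url_queue.put(link)   # newly found pages go back on the queue
        results.append(url)
        url_queue.task_done()

results = []
for u in ('http://a.example', 'http://b.example'):
    url_queue.put(u)
threads = [threading.Thread(target=worker, args=(results,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # ['http://a.example', 'http://b.example']
```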
class myThread (threading.Thread):
    def __init__(self, url, maxPages):
        threading.Thread.__init__(self)
        self.maxPages = maxPages
        self.url = url
    def run(self):
        spider (self.url, maxPages)
Note that the url is initialized like this: url = []
This is how I run my threads:

myThread( spider, (url,maxPages) ).start
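For reference, this is how I understand a Thread subclass is normally constructed and started: the constructor takes only the subclass's own arguments, and start() must actually be called with parentheses (a minimal sketch with a stand-in run() body instead of the real spider call):

```python
import threading

class MyThread(threading.Thread):
    def __init__(self, url, max_pages):
        threading.Thread.__init__(self)
        self.url = url
        self.max_pages = max_pages

    def run(self):
        # Stand-in for spider(self.url, self.max_pages).
        self.result = (len(self.url), self.max_pages)

t = MyThread(['http://example.com'], 10)
t.start()   # start() must be called; bare `.start` only references the method
t.join()
print(t.result)  # (1, 10)
```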