The job here is to scrape an API whose endpoints run from https://xxx.xxx.xxx/xxx/1.json
to https://xxx.xxx.xxx/xxx/1417749.json
and write each document into MongoDB exactly as it is. For that I have the following code:
import time
import json
import pymongo
import requests

client = pymongo.MongoClient("mongodb://127.0.0.1:27017")
db = client["thread1"]
com = db["threadcol"]
start_time = time.time()
write_log = open("logging.log", "a")
min = 1
max = 1417749
for n in range(min, max):
    response = requests.get("https://xx.xxx.xxx/{}.json".format(str(n)))
    if response.status_code == 200:
        parsed = json.loads(response.text)
        inserted = com.insert_one(parsed)
        write_log.write(str(n) + "\t" + str(inserted) + "\n")
        print(str(n) + "\t" + str(inserted) + "\n")
write_log.close()
But it takes a very long time to finish the job. The question is how I can speed this process up.
Answer 0 (score: 10)
There are a couple of things you can do:
Parallelize the work, as in the code from here:
import sys
from threading import Thread
from queue import Queue  # "Queue" module on Python 2

# "concurrent" and "doWork" are defined in the linked answer
q = Queue(concurrent * 2)
for i in range(concurrent):
    t = Thread(target=doWork)
    t.daemon = True
    t.start()
try:
    for url in open('urllist.txt'):
        q.put(url.strip())
    q.join()
except KeyboardInterrupt:
    sys.exit(1)
Reuse connections; the timings from this question show the difference:
>>> timeit.timeit('_ = requests.get("https://www.wikipedia.org")', 'import requests', number=100)
Starting new HTTPS connection (1): www.wikipedia.org
Starting new HTTPS connection (1): www.wikipedia.org
Starting new HTTPS connection (1): www.wikipedia.org
...
Starting new HTTPS connection (1): www.wikipedia.org
Starting new HTTPS connection (1): www.wikipedia.org
Starting new HTTPS connection (1): www.wikipedia.org
52.74904417991638
>>> timeit.timeit('_ = session.get("https://www.wikipedia.org")', 'import requests; session = requests.Session()', number=100)
Starting new HTTPS connection (1): www.wikipedia.org
15.770191192626953
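Putting the two ideas together for this particular task, a worker function along these lines could be plugged into the queue snippet above (a sketch with assumed names: q is the Queue created above, the MongoDB collection is the one from the question, and each worker gets its own Session so connection pooling works per thread):
import json
import pymongo
import requests

com = pymongo.MongoClient("mongodb://127.0.0.1:27017")["thread1"]["threadcol"]

def doWork():
    session = requests.Session()  # one pooled, reusable connection per worker thread
    while True:
        url = q.get()  # q is the Queue from the snippet above
        try:
            response = session.get(url)
            if response.status_code == 200:
                com.insert_one(json.loads(response.text))
        finally:
            q.task_done()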
Answer 1 (score: 6)
You can improve your code in two ways:
Use a Session, so that the connection is not re-established on every request and is kept open;
Use asyncio in your code for parallelism.
Take a look here: https://pawelmhm.github.io/asyncio/python/aiohttp/2016/04/22/asyncio-aiohttp.html
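The Session part alone is a small change to the loop from the question; a minimal sketch (the URL pattern is still the redacted placeholder, and the collection setup is copied from the question):
import json
import pymongo
import requests

com = pymongo.MongoClient("mongodb://127.0.0.1:27017")["thread1"]["threadcol"]
session = requests.Session()  # keeps the TCP/TLS connection open between requests

for n in range(1, 1417749):
    response = session.get("https://xx.xxx.xxx/{}.json".format(n))
    if response.status_code == 200:
        com.insert_one(json.loads(response.text))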
Answer 2 (score: 4)
What you are probably looking for is asynchronous scraping. I would recommend creating batches of URLs, e.g. 5 URLs at a time (try not to hammer the website), and scraping them asynchronously. If you don't know much about async, look up the library asyncio. Hope it helps :)
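One way to read that suggestion, sketched with aiohttp (the batch size, URL pattern, range and error handling are assumptions; the collection setup is from the question):
import asyncio
import json
import aiohttp
import pymongo

com = pymongo.MongoClient("mongodb://127.0.0.1:27017")["thread1"]["threadcol"]
BATCH_SIZE = 5  # small batches so the site is not hammered

async def fetch(session, url):
    async with session.get(url) as response:
        if response.status == 200:
            return json.loads(await response.text())
        return None

async def scrape(urls):
    async with aiohttp.ClientSession() as session:
        for i in range(0, len(urls), BATCH_SIZE):
            batch = urls[i:i + BATCH_SIZE]
            # Fetch one batch concurrently, then write the results.
            results = await asyncio.gather(*(fetch(session, u) for u in batch))
            for doc in results:
                if doc:
                    com.insert_one(doc)

urls = ["https://xx.xxx.xxx/{}.json".format(n) for n in range(1, 101)]
asyncio.run(scrape(urls))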
Answer 3 (score: 4)
asyncio is also a solution:
import time
import pymongo
import json
import asyncio
from aiohttp import ClientSession

async def get_url(url, session):
    async with session.get(url) as response:
        if response.status == 200:
            return await response.text()

async def create_task(sem, url, session):
    async with sem:
        response = await get_url(url, session)
        if response:
            parsed = json.loads(response)
            n = url.rsplit('/', 1)[1]
            inserted = com.insert_one(parsed)
            write_log.write(str(n) + "\t" + str(inserted) + "\n")
            print(str(n) + "\t" + str(inserted) + "\n")

async def run(minimum, maximum):
    url = 'https://xx.xxx.xxx/{}.json'
    tasks = []
    sem = asyncio.Semaphore(1000)  # limit concurrency to 1000 to stay below the max open sockets allowed
    async with ClientSession() as session:
        for n in range(minimum, maximum):
            task = asyncio.ensure_future(create_task(sem, url.format(n), session))
            tasks.append(task)
        responses = asyncio.gather(*tasks)
        await responses

client = pymongo.MongoClient("mongodb://127.0.0.1:27017")
db = client["thread1"]
com = db["threadcol"]
start_time = time.time()
write_log = open("logging.log", "a")
min_item = 1
max_item = 100

loop = asyncio.get_event_loop()
future = asyncio.ensure_future(run(min_item, max_item))
loop.run_until_complete(future)
write_log.close()
Answer 4 (score: 3)
Try chunking the requests and using MongoDB's bulk write operations.
This can save a lot of time by cutting down on:
* MongoDB write latency
* synchronous network call latency
But do not increase the parallel request count (chunk size) too much; that raises the network load on the server, and the server may treat it as a DDoS attack.
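A minimal sketch of that idea (the chunk size and URL pattern are placeholders; the fetching here is still sequential, so it would be combined with the threading or asyncio answers for network parallelism):
import json
import pymongo
import requests

client = pymongo.MongoClient("mongodb://127.0.0.1:27017")
com = client["thread1"]["threadcol"]
session = requests.Session()

CHUNK_SIZE = 100  # assumed chunk size; tune to what the server tolerates

def fetch(n):
    # The real URL pattern is redacted in the question; substitute it here.
    response = session.get("https://xx.xxx.xxx/{}.json".format(n))
    if response.status_code == 200:
        return json.loads(response.text)
    return None

for start in range(1, 1417750, CHUNK_SIZE):
    docs = [doc for doc in (fetch(n) for n in range(start, start + CHUNK_SIZE)) if doc]
    if docs:
        com.insert_many(docs, ordered=False)  # one round trip to MongoDB per chunk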
Answer 5 (score: 3)
Assuming you won't get blocked by the API and that there are no rate limits, this code should make the process about 50 times faster (possibly more, since all requests are now sent over the same session).
import time
import json
import requests
import pymongo
import threading

client = pymongo.MongoClient("mongodb://127.0.0.1:27017")
db = client["thread1"]
com = db["threadcol"]
start_time = time.time()
logs = []
number_of_json_objects = 1417750
number_of_threads = 50
session = requests.Session()

def scrap_write_log(session, start, end):
    for n in range(start, end):
        response = session.get("https://xx.xxx.xxx/{}.json".format(n))
        if response.status_code == 200:
            try:
                inserted = com.insert_one(json.loads(response.text))
                logs.append(str(n) + "\t" + str(inserted) + "\n")
                print(str(n) + "\t" + str(inserted) + "\n")
            except:
                logs.append(str(n) + "\t" + "Failed to insert" + "\n")
                print(str(n) + "\t" + "Failed to insert" + "\n")

thread_ranges = [[x, x + number_of_json_objects // number_of_threads]
                 for x in range(0, number_of_json_objects, number_of_json_objects // number_of_threads)]
threads = [threading.Thread(target=scrap_write_log, args=(session, start_and_end[0], start_and_end[1]))
           for start_and_end in thread_ranges]

for thread in threads:
    thread.start()
for thread in threads:
    thread.join()

with open("logging.log", "a") as f:
    for line in logs:
        f.write(line)
Answer 6 (score: 2)
Many years ago I happened to run into the same problem. I was never satisfied with the Python-based answers, which were either too slow or too complicated. After switching to another mature tool, it was fast and I never looked back.
Recently I have been using steps like the following to speed the process up: download the files with
aria2c -x16 -d ~/Downloads -i /path/to/urls.txt
This is the fastest process I have come up with so far.
As far as scraping web pages goes, I even download the necessary *.html files instead of visiting the pages one at a time, which actually makes no difference. When you hit a page with a Python tool such as requests, scrapy or urllib, it still caches and downloads the whole web content for you anyway.
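A sketch of how that workflow could feed MongoDB afterwards (the URL pattern, download directory and batch size are assumptions): first write the URL list that aria2c reads, then bulk-load the downloaded JSON files.
import glob
import json
import os
import pymongo

# Generate the URL list for aria2c (placeholder for the redacted URL pattern).
with open("urls.txt", "w") as f:
    for n in range(1, 1417750):
        f.write("https://xx.xxx.xxx/{}.json\n".format(n))

# After "aria2c -x16 -d ~/Downloads -i urls.txt" finishes, load the files in batches.
com = pymongo.MongoClient("mongodb://127.0.0.1:27017")["thread1"]["threadcol"]
batch = []
for path in glob.glob(os.path.expanduser("~/Downloads/*.json")):
    with open(path) as fh:
        batch.append(json.load(fh))
    if len(batch) >= 1000:
        com.insert_many(batch, ordered=False)
        batch = []
if batch:
    com.insert_many(batch, ordered=False)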
Answer 7 (score: 1)
First create a list of all the links; since they all follow the same pattern, you just iterate over it.
import threading

class Demo:
    def __init__(self, url):
        self.json_url = url

    def get_json(self):
        try:
            pass  # your logic
        except Exception as e:
            print(e)

list_of_links = []
for i in range(1, 1417749):
    list_of_links.append("https://xx.xxx.xxx/{}.json".format(str(i)))

t_no = 2
for i in range(0, len(list_of_links), t_no):
    all_t = []
    twenty_links = list_of_links[i:i + t_no]
    for link in twenty_links:
        obj_new = Demo(link)
        t = threading.Thread(target=obj_new.get_json)
        t.start()
        all_t.append(t)
    for t in all_t:
        t.join()
Just increase or decrease t_no to change the number of threads.
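Purely as an illustration, the "your logic" placeholder in get_json above could be filled in like this for the task in the question (the URL is whatever was passed to the constructor; the collection setup is from the question, and everything else here is an assumption rather than part of the original answer):
import json
import pymongo
import requests

com = pymongo.MongoClient("mongodb://127.0.0.1:27017")["thread1"]["threadcol"]

class Demo:
    def __init__(self, url):
        self.json_url = url

    def get_json(self):
        try:
            # Fetch the JSON document and insert it into MongoDB.
            response = requests.get(self.json_url)
            if response.status_code == 200:
                com.insert_one(json.loads(response.text))
        except Exception as e:
            print(e)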