Is there any way to speed up my Python program?

Time: 2020-04-25 20:34:06

Tags: python-3.x optimization web-scraping

I'm working on a PubMed project, and I need to extract the IDs of the free full text and free PMC articles. Here is my code.

import requests
from bs4 import BeautifulSoup
from Bio import Entrez

Entrez.email = "abc@gmail.com"     # Always tell NCBI who you are
handle = Entrez.esearch(db="pubmed", term="cough")   # first query: just to learn the total hit count
record = Entrez.read(handle)
count = record['Count']
handle = Entrez.esearch(db="pubmed", term="cough", retmax=count)   # second query: fetch every ID
record = Entrez.read(handle)


free_article_ids = []
for id_ in record['IdList']:
    req = requests.get(f"https://www.ncbi.nlm.nih.gov/pubmed/{id_}")
    soup = BeautifulSoup(req.text, 'lxml')

    status = soup.find('span', {'class': 'status_icon'})
    if status is not None and status.text in ["Free full text", "Free PMC Article"]:
        free_article_ids.append(id_)
print(free_article_ids)

The problem with my code is that it takes far too long to produce a result, so I want to speed up the process. What can I do?

1 answer:

Answer 0 (score: 0)

Use multithreading to download the pages concurrently. I recommend a simple framework, simplified-scrapy:

from Bio import Entrez
from simplified_scrapy import Spider, SimplifiedDoc, SimplifiedMain
class MySpider(Spider):
  name = 'ncbi.nlm.nih.gov'
  start_urls = []

  def __init__(self):
    Entrez.email = "abc@gmail.com"     # Always tell NCBI who you are
    handle = Entrez.esearch(db="pubmed", term="cough")
    record = Entrez.read(handle)
    count = record['Count']
    handle = Entrez.esearch(db="pubmed", term="cough", retmax=count)
    record = Entrez.read(handle)
    for id_ in record['IdList']:
      self.start_urls.append(f"https://www.ncbi.nlm.nih.gov/pubmed/{id_}")
    Spider.__init__(self, self.name)  # required by the framework

  free_article_ids = []
  def extract(self,url,html,models,modelNames):
    doc = SimplifiedDoc(html)
    status = doc.select('span.status_icon')
    if status and status.text in ["Free full text", "Free PMC Article"]:
      id = url.split('/')[-1]
      self.free_article_ids.append(id)
      return {"Urls": [], "Data": {"id":id}}

    return True
SimplifiedMain.startThread(MySpider())

There are more examples here: https://github.com/yiyedata/simplified-scrapy-demo
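If you would rather not add a framework, the standard library's `concurrent.futures` gives the same concurrent-download pattern. Below is a minimal sketch reusing the `requests`/`BeautifulSoup` check from the question; the worker count of 20 is an arbitrary choice (NCBI rate limits may force you to lower it), and the `checker` parameter is a helper introduced here so the concurrency logic can be exercised without network access.

```python
from concurrent.futures import ThreadPoolExecutor

import requests
from bs4 import BeautifulSoup

def is_free(id_):
    """Fetch one PubMed page and report whether it offers a free article."""
    req = requests.get(f"https://www.ncbi.nlm.nih.gov/pubmed/{id_}")
    soup = BeautifulSoup(req.text, "lxml")
    status = soup.find("span", {"class": "status_icon"})
    return status is not None and status.text in ["Free full text", "Free PMC Article"]

def free_ids(ids, checker=is_free, workers=20):
    """Run `checker` over all IDs concurrently and keep the ones it accepts.

    Executor.map returns results in input order, so the returned list
    preserves the order of `ids`.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        flags = pool.map(checker, ids)
    return [id_ for id_, free in zip(ids, flags) if free]
```

You would call it as `free_ids(record['IdList'])`. Because the page check is injected, the function is also easy to test with a fake checker that never touches the network.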