Scraping a list of URLs with requests.get

Date: 2016-03-20 18:15:32

Tags: python csv

I'm trying to scrape a list of URLs contained in a CSV file. The URLs are listed in the 6th column of the CSV. The URL format is: https://www.targetdomain.com/mainDirectoryName/subDirectoryName/pageName

I'm not reading the data from the CSV correctly with the code below. Where is my mistake?

list_of_urls = open(filename).read()

for i in range(6,len(list_of_urls)):

    try:
        url=str(list_of_urls[i][0])
        #crawl urls
        secondCrawlRequest = requests.get(url, headers=http_headers, timeout=5)

        raw_html = secondCrawlRequest.text
    except requests.ConnectionError as e:
        logging.exception(e)
    except requests.HTTPError as e:
        logging.exception(e)
    except requests.Timeout as e:
        logging.exception(e)
    except requests.RequestException as e:
        logging.exception(e)
        sys.exit(1)

2 Answers:

Answer 0 (score: 3)

You should use csv.reader. Your open(filename).read() returns the entire file as one string, so indexing list_of_urls[i] gives you a single character, not a row.

import csv 

with open(filename, newline='') as csvfile:
    reader = csv.reader(csvfile)
    for row in reader:
        try:
            # 0-based column numbering, so 6th column is number 5
            response = requests.get(row[5], headers=http_headers, timeout=5)
            print(response.text)
        except (requests.ConnectionError, requests.HTTPError, requests.Timeout) as e:
            logging.exception(e)
        except requests.RequestException as e:
            logging.exception(e) 
            sys.exit(1)

If you need to skip a header row, you can do so by calling next(reader):
 reader = csv.reader(csvfile)
 next(reader)  # consumes one input row discarding it
 for row in reader: ...
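To make the difference concrete, here is a small self-contained sketch of the csv.reader approach. The CSV data below is made up (the real file and its columns aren't shown in the question); it only demonstrates parsing rows and pulling the URL out of the 6th column, without making any HTTP requests:

```python
import csv
import io

# Hypothetical CSV content standing in for the real file;
# the URL sits in the 6th column (index 5).
sample = (
    "id,name,a,b,c,url\n"
    "1,foo,x,y,z,https://www.targetdomain.com/dir/sub/page1\n"
    "2,bar,x,y,z,https://www.targetdomain.com/dir/sub/page2\n"
)

reader = csv.reader(io.StringIO(sample))
next(reader)  # skip the header row
urls = [row[5] for row in reader]  # 6th column -> index 5
print(urls)
```

Each row comes back as a list of fields, so row[5] is a full URL string ready to pass to requests.get.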

Answer 1 (score: 0)

If the URL doesn't appear at a fixed column or row in the csv, you can simply use a regular expression and read the file line by line, like this:

import re
import requests

filename = 'shitty_url.csv'
with open(filename, 'r') as csvfile:
    for line in csvfile:
        # Raw string avoids invalid-escape warnings; stop matching at
        # whitespace or a comma so the pattern also works when the URL
        # ends the line instead of being followed by a space.
        url_pattern = re.search(r'https://([^\s,]+)', line)
        if url_pattern:
            found_url = url_pattern.group(1)
            url = 'https://%s' % found_url
            crawler = requests.get(url, timeout=5)

Hope this helps :)
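A quick check of just the extraction step against a couple of made-up lines (no network call; the sample lines are assumptions, not data from the question):

```python
import re

# Hypothetical lines mimicking rows of a CSV where the URL
# position varies and one line has no URL at all.
lines = [
    'foo,bar,https://www.targetdomain.com/a/b/page1,baz',
    'no url on this line',
]

urls = []
for line in lines:
    # Raw string; [^\s,]+ stops at whitespace or a comma.
    m = re.search(r'https://([^\s,]+)', line)
    if m:
        urls.append('https://%s' % m.group(1))
print(urls)
```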