I am trying to scrape data from the PGA website to get a list of all the golf courses in the United States. I want to scrape the data and write it to a CSV file. My problem is that I get the error below after running my script. Can anyone help fix this error and explain how to extract the data?
Here is the error message:
文件" /Users/AGB/Final_PGA2.py",第44行,中
writer.writerow(行)UnicodeEncodeError:' ascii'编解码器不能对字符u' \ u201c'进行编码。在 位置35:序数不在范围内(128)
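For reference, the failure can be reproduced on Python 2 in isolation (a minimal sketch; the sample string is made up, and u'\u201c' is a curly left double quote):
import csv

# On Python 2, csv converts unicode fields to str with the default ascii codec,
# which cannot represent u'\u201c'.
with open('repro.csv', 'a') as f:
    csv.writer(f).writerow([u'Caf\u00e9 \u201cQuoted\u201d Golf Club'])
    # -> UnicodeEncodeError: 'ascii' codec can't encode character u'\u201c' ...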
The script is below:
import csv
import requests
from bs4 import BeautifulSoup

courses_list = []
for i in range(906):  # Number of pages plus one
    url = "http://www.pga.com/golf-courses/search?page={}&searchbox=Course+Name&searchbox_zip=ZIP&distance=50&price_range=0&course_type=both&has_events=0".format(i)
    r = requests.get(url)
    soup = BeautifulSoup(r.content)
    g_data2 = soup.find_all("div", {"class": "views-field-nothing"})
    for item in g_data2:
        try:
            name = item.contents[1].find_all("div", {"class": "views-field-title"})[0].text
            print name
        except:
            name = ''
        try:
            address1 = item.contents[1].find_all("div", {"class": "views-field-address"})[0].text
        except:
            address1 = ''
        try:
            address2 = item.contents[1].find_all("div", {"class": "views-field-city-state-zip"})[0].text
        except:
            address2 = ''
        try:
            website = item.contents[1].find_all("div", {"class": "views-field-website"})[0].text
        except:
            website = ''
        try:
            Phonenumber = item.contents[1].find_all("div", {"class": "views-field-work-phone"})[0].text
        except:
            Phonenumber = ''

        course = [name, address1, address2, website, Phonenumber]
        courses_list.append(course)

with open('PGA_Final.csv', 'a') as file:
    writer = csv.writer(file)
    for row in courses_list:
        writer.writerow(row)
Answer 0 (score: 1)
You shouldn't get that error on Python 3: on Python 2 the csv module works with byte strings, so writing unicode text (here it contains the curly quote u'\u201c') triggers an implicit ascii encode, while on Python 3 the csv module accepts str directly. Here's a code example that also fixes some unrelated issues in your code. It parses the specified fields on the given web page and saves them to a CSV file:
#!/usr/bin/env python3
import csv
from urllib.request import urlopen

import bs4  # $ pip install beautifulsoup4

page = 905
url = ("http://www.pga.com/golf-courses/search?page=" + str(page) +
       "&searchbox=Course+Name&searchbox_zip=ZIP&distance=50&price_range=0"
       "&course_type=both&has_events=0")
with urlopen(url) as response:
    field_content = bs4.SoupStrainer('div', 'views-field-nothing')
    soup = bs4.BeautifulSoup(response, parse_only=field_content)

fields = [bs4.SoupStrainer('div', 'views-field-' + suffix)
          for suffix in ['title', 'address', 'city-state-zip', 'website', 'work-phone']]

def get_text(tag, default=''):
    return tag.get_text().strip() if tag is not None else default

with open('pga.csv', 'w', newline='') as output_file:
    writer = csv.writer(output_file)
    for div in soup.find_all(field_content):
        writer.writerow([get_text(div.find(field)) for field in fields])
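If you need all of the pages rather than a single one, the same approach can run in a loop; a minimal sketch (the 906-page count comes from the question, and the file name pga_all.csv is only an example):
#!/usr/bin/env python3
import csv
from urllib.request import urlopen

import bs4  # $ pip install beautifulsoup4

field_content = bs4.SoupStrainer('div', 'views-field-nothing')
fields = [bs4.SoupStrainer('div', 'views-field-' + suffix)
          for suffix in ['title', 'address', 'city-state-zip', 'website', 'work-phone']]

def get_text(tag, default=''):
    return tag.get_text().strip() if tag is not None else default

with open('pga_all.csv', 'w', newline='') as output_file:
    writer = csv.writer(output_file)
    for page in range(906):  # number of pages, taken from the question
        url = ("http://www.pga.com/golf-courses/search?page=" + str(page) +
               "&searchbox=Course+Name&searchbox_zip=ZIP&distance=50&price_range=0"
               "&course_type=both&has_events=0")
        with urlopen(url) as response:
            soup = bs4.BeautifulSoup(response, parse_only=field_content)
        for div in soup.find_all(field_content):
            writer.writerow([get_text(div.find(field)) for field in fields])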
Answer 1 (score: 0)
with open('PGA_Final.csv', 'a') as file:
    writer = csv.writer(file)
    for row in courses_list:
        writer.writerow(row)
Change it to the following (each field has to be encoded individually; the row is a list and has no .encode method):
with open('PGA_Final.csv', 'a') as file:
    writer = csv.writer(file)
    for row in courses_list:
        # encode each unicode field to UTF-8 bytes before writing
        writer.writerow([field.encode('utf-8') for field in row])
Or:
import codecs
....
with codecs.open('PGA_Final.csv', 'a', encoding='utf-8') as file:
    writer = csv.writer(file)
    for row in courses_list:
        writer.writerow(row)
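Note that on Python 2 the standard csv module does not officially support unicode input, so the codecs.open variant may still raise the same error if the rows contain unicode strings. Another option, assuming a third-party package is acceptable, is unicodecsv, which encodes each field for you; a minimal sketch:
import unicodecsv  # $ pip install unicodecsv

with open('PGA_Final.csv', 'ab') as f:  # binary mode for csv files on Python 2
    writer = unicodecsv.writer(f, encoding='utf-8')
    for row in courses_list:  # courses_list built as in the question
        writer.writerow(row)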