I am scraping the following site: https://news.ycombinator.com/jobs. I have code that scrapes the site and stores the information I need in a local database. The information I need to scrape is:
My question is: how can I improve my script to perform the following tasks?
import mysql.connector
from mysql.connector import errorcode
from bs4 import BeautifulSoup
import requests

url = "https://news.ycombinator.com/jobs"
response = requests.get(url, timeout=5)
content = BeautifulSoup(response.content, "html.parser")
table = content.find("table", attrs={"class": "itemlist"})

# collect the title of every job listing on the page
listings = []
for elem in table.findAll("a", attrs={"class": "storylink"}):
    listings.append(elem.text)

try:
    # open the database connection
    cnx = mysql.connector.connect(user='root', password='mypassword',
                                  host='localhost', database='scraping')
    insert_sql = 'INSERT INTO `jobs` (`listing`) VALUES (%s)'
    # loop through all listings, executing an INSERT for each with the cursor
    cursor = cnx.cursor()
    for listing in listings:
        print('Storing data for %s' % (listing,))
        cursor.execute(insert_sql, (listing,))
    # commit the new records
    cnx.commit()
    # close the cursor and connection
    cursor.close()
    cnx.close()
except mysql.connector.Error as err:
    if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
        print('Something is wrong with your username or password')
    elif err.errno == errorcode.ER_BAD_DB_ERROR:
        print('Database does not exist')
    else:
        print(err)
Answer 0 (score: 1)
1) You can set up a cron job to run this script periodically.
2) You are also missing some content that is in the DOM:
<tr class="athing" id="20190856">
<td align="right" valign="top" class="title"><span class="rank"></span></td> <td></td><td class="title">...
Each job posting has a unique ID (per the HN API documentation: https://github.com/HackerNews/API), so just scrape that ID and make sure it is not already in your database.
You could also just use the API instead of scraping the HTML!
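For example, that ID is the `id` attribute of each `tr.athing` row, so it can be pulled out with BeautifulSoup alongside the title. A minimal sketch below parses a snippet like the one above (the link text and href are made up for illustration; the database-side duplicate check is not shown):

```python
from bs4 import BeautifulSoup

# sample markup shaped like the HN jobs page; the link is hypothetical
html = '''<table class="itemlist">
<tr class="athing" id="20190856">
<td class="title"><a class="storylink" href="#">Example Job</a></td>
</tr>
</table>'''

soup = BeautifulSoup(html, "html.parser")

# map each listing's unique HN id to its title
jobs = {}
for row in soup.find_all("tr", attrs={"class": "athing"}):
    link = row.find("a", attrs={"class": "storylink"})
    if link is not None:
        jobs[row["id"]] = link.text

print(jobs)  # {'20190856': 'Example Job'}
```

With the ID stored in a `UNIQUE` column, an `INSERT IGNORE` (or a prior `SELECT`) then keeps reruns from creating duplicate rows.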
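Point 1) could be wired up with a crontab entry such as the following, which runs the script at the top of every hour (the interpreter and script paths are hypothetical):

```
0 * * * * /usr/bin/python3 /path/to/scraper.py
```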
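A minimal sketch of the API route, using only the standard library. The endpoint URLs come from the HN API documentation linked above; the `fetch` parameter is an assumed hook added here so the network call can be swapped out:

```python
import json
import urllib.request

JOBSTORIES_URL = "https://hacker-news.firebaseio.com/v0/jobstories.json"
ITEM_URL = "https://hacker-news.firebaseio.com/v0/item/{}.json"

def fetch_json(url):
    # download and decode one JSON document
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)

def job_titles(limit=5, fetch=fetch_json):
    # jobstories.json returns the ids of the newest job postings;
    # item/<id>.json returns one posting, including its title
    ids = fetch(JOBSTORIES_URL)[:limit]
    return [(job_id, fetch(ITEM_URL.format(job_id)).get("title", ""))
            for job_id in ids]
```

Because every item already carries its ID, the same "skip IDs you have already stored" check works here too.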