I am trying to read all the output from MySQL. When my spider starts crawling, I want to fetch all the URLs from the MySQL database, so I wrote a function that reads the data.
readdata.py:
import mysql.connector
from mysql.connector import Error

def dataReader(marketName):
    connection = None
    try:
        connection = mysql.connector.connect(host='localhost',
                                             database='test',
                                             user='root',
                                             port=3306,
                                             password='1234')
        cursor = connection.cursor()
        # Parameterized query: avoids SQL injection from marketName
        sql_select_Query = "SELECT shop_URL FROM datatable.bot_markets WHERE shop_name = %s"
        cursor.execute(sql_select_Query, (marketName,))
        records = cursor.fetchall()
        return records
    except Error as e:
        print("Error reading data from MySQL table", e)
    finally:
        if connection is not None and connection.is_connected():
            cursor.close()
            connection.close()
            print("MySQL connection is closed")
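For context on the shape of the return value: for a single-column SELECT, fetchall() returns a list of one-element tuples, not a list of strings. A minimal sketch, with hypothetical sample rows standing in for a live cursor:

```python
# Simulated result of cursor.fetchall() for "SELECT shop_URL ...":
# each row comes back as a one-element tuple, not a bare string.
records = [
    ("http://quotes.toscrape.com/page/1/",),
    ("http://quotes.toscrape.com/page/2/",),
]

print(type(records[0]))             # each row is a tuple
urls = [row[0] for row in records]  # unwrap the single shop_URL column
print(type(urls[0]))                # now a plain string
```

This tuple-per-row shape is what later trips up Scrapy, which expects each entry of start_urls to be a string.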
I want to call this function from my spider as follows.
My spider:
import scrapy
import re
import mysql.connector
from ..items import FirstBotItem
from scrapy.utils.project import get_project_settings
from first_bot.readdata import dataReader

class My_Spider(scrapy.Spider):
    name = "My_Spider"
    allowed_domains = ["quotes.toscrape.com/"]
    start_urls = dataReader(name)

    def parse(self, response):
        location = "quotes"
        for product in response.xpath('.//div[@class="product-card product-action "]'):
            product_link = response.url
            prices = product.xpath('.//div[@class="price-tag"]/span[@class="value"]/text()').get()
            if prices is not None:
                prices = re.sub(r"\s", "", prices)
            title = product.xpath('.//h5[@class="title product-card-title"]/a/text()').get()
            unit = product.xpath('.//div[@class="select single-select"]//i/text()').get()
            if unit is not None:
                unit = re.sub(r"\s", "", unit)

            item = FirstBotItem()
            item['LOKASYON'] = location
            item['YEAR'] = 2020
            item['MONTH'] = 8
            yield item
Something goes wrong with start_urls, but I don't know what. I get this error:
_set_url
raise TypeError('Request url must be str or unicode, got %s:' % type(url).__name__)
TypeError: Request url must be str or unicode, got tuple:
2020-08-24 15:46:31 [scrapy.core.engine] INFO: Closing spider (finished)
My main task is to get all the URLs from the database, because other people will add URLs for the same site there, and the spider should pick them up automatically.
Answer 0 (score: 1)
You can try changing the logic in your dataReader method from:
records = cursor.fetchall()
return records
to:
records = cursor.fetchall()
records_list = []
for rec in records:
    # each rec is a one-element tuple; take the shop_URL string out of it
    records_list.append(rec[0])
return records_list
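A quick check of that unwrapping step, using a hypothetical row in place of a live cursor (no database needed):

```python
def flatten_rows(records):
    # Mirror of the answer's loop: pull the single shop_URL column
    # out of each row tuple so start_urls receives plain strings.
    records_list = []
    for rec in records:
        records_list.append(rec[0])
    return records_list

rows = [("http://quotes.toscrape.com/",)]  # hypothetical fetchall() output
urls = flatten_rows(rows)
print(all(isinstance(u, str) for u in urls))  # strings, as Scrapy requires
```

With start_urls holding strings instead of tuples, scrapy.Request no longer raises the TypeError above.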
Answer 1 (score: 1)
In the dataReader function you should write return [rec[0] for rec in records] instead of return records, so that each row's single shop_URL column is unwrapped from its tuple.