I'm setting up a Flask application that lets me type in a string and pass that string as an argument to my spider so it can scrape a web page. I'm having a hard time getting the spider to run on form submission (integrating Scrapy and Flask).
I've looked at the following snippet solutions, to no avail: Run Scrapy from Flask, Running Scrapy spiders in a Celery task, Scrapy and celery `update_state`.
Obviously there is more than one way to accomplish this, but none of the snippets above seemed to work for me.
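For reference, the "Run Scrapy from Flask" pattern I tried looks roughly like the following (a minimal sketch; the IndeedSpider import path, the run_spider helper, and the query keyword are placeholders for my actual project, not working code):

from crochet import setup, wait_for
from scrapy.crawler import CrawlerRunner
from app.spiders.indeed import IndeedSpider  # placeholder import path for my spider

setup()  # start crochet's managed Twisted reactor so it can coexist with Flask

runner = CrawlerRunner()

@wait_for(timeout=60.0)
def run_spider(query):
    # crawl() returns a Deferred; wait_for blocks the calling Flask thread
    # until the crawl finishes (collecting items still needs a pipeline or signal)
    return runner.crawl(IndeedSpider, query=query)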
routes.py
from flask import render_template, flash, redirect, url_for, session, jsonify
from flask import request
from flask_login import login_required
from flask_login import logout_user
from app import app, db
from app.forms import LoginForm
from flask_login import current_user, login_user
from app.models import User
from werkzeug.urls import url_parse
from app.forms import RegistrationForm, SearchForm
#from app.tasks import scrape_async_job
import pprint
import requests
import json
@app.route('/')
@app.route('/index', methods=['GET','POST'])
@login_required
def index():
    jobvisuals = [
        {
            'Job': 'Example',
            'Desc': 'This job requires a degree...',
            'link': 'fakelink',
            'salary': '10$/hr',
            'applied': 'Boolean',
            'interview': 'Boolean',
        }
    ]
    # call the scrapyrt HTTP endpoint (port 9080) to run the spider and get items back
    params = {
        'spider_name': 'Indeedspider',
        'start_requests': True
    }
    response = requests.get('http://localhost:9080/crawl.json', params=params).json()
    data = response
    pprint.pprint(data)
    form = SearchForm()
    if request.method == 'GET':
        return render_template('index.html', title='home', jobvisuals=jobvisuals, form=form, search=session.get('search', ''))
    job_find = request.form['search']
    session['search'] = job_find
    if form.validate_on_submit():
        print('Working on this feature :D')
        flash('Searching for job {}'.format(form.search.data))
    return render_template('index.html', title='Home', jobvisuals=jobvisuals, form=form)
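What I think I'm missing is a way to forward the submitted search term to the spider. A sketch of what I have in mind, assuming my scrapyrt version supports the crawl_args parameter (the crawl_for helper and the 'query' kwarg name are just illustrative):

import json
import requests

def crawl_for(query):
    # crawl_args is a JSON-encoded dict of keyword arguments for the spider
    params = {
        'spider_name': 'Indeedspider',
        'start_requests': True,
        'crawl_args': json.dumps({'query': query}),
    }
    response = requests.get('http://localhost:9080/crawl.json', params=params)
    response.raise_for_status()
    return response.json().get('items', [])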
Expected:
I type a job into the input box, on submit the job gets passed to the spider, the spider scrapes indeed.com, pulls only the first page, and returns that data on the index page.
Actual: not sure where to start.
Can anyone point me in the right direction?
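For the spider side, I'm assuming it would need to accept the search term through its constructor, roughly like this (a sketch only; the query argument and the Indeed URL pattern are placeholders):

import scrapy

class IndeedSpider(scrapy.Spider):
    name = 'Indeedspider'

    def __init__(self, query='', *args, **kwargs):
        # Scrapy passes extra crawl arguments to the spider's __init__
        super().__init__(*args, **kwargs)
        self.start_urls = ['https://www.indeed.com/jobs?q={}'.format(query)]

    def parse(self, response):
        # parse only the first results page and yield items from it
        yield {'url': response.url}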