Can't start Scrapy because I keep getting "INSTALLED_APPS." % (module, name)

Time: 2017-12-02 00:38:11

Tags: python django python-3.x web-scraping scrapy

I am trying to run a Scrapy script, but when I started it today I got:

    django.core.exceptions.ImproperlyConfigured: Requested setting DEFAULT_INDEX_TABLESPACE, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.

To fix that, I added:

    from django.conf import settings
    settings.configure()
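
For reference, the other pattern the error message and the Django docs point at is to set DJANGO_SETTINGS_MODULE to the project's settings module and call django.setup() before anything imports the models. A rough sketch of that, where myProjectName.settings is just a placeholder for my real settings path:

    import os
    import django

    # Tell Django where the project settings live before any models get imported.
    # "myProjectName.settings" is a placeholder for the real dotted path.
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myProjectName.settings")

    # Populate the app registry so INSTALLED_APPS (including contenttypes) is loaded.
    django.setup()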

Now I get:

    raise AppRegistryNotReady("Apps aren't loaded yet.")
    django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.

Using:

    from django.core.wsgi import get_wsgi_application
    application = get_wsgi_application()
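
From what I can tell, get_wsgi_application() only helps here because it calls django.setup() internally, so calling setup() directly after configuring would be the lighter equivalent. A sketch of what I mean, not something I have fully working:

    import django
    from django.conf import settings

    # configure() with no arguments, as above; the real fix is probably a proper
    # DJANGO_SETTINGS_MODULE, but this mirrors what I tried.
    settings.configure()

    # setup() is what actually loads the app registry; get_wsgi_application()
    # does this under the hood.
    django.setup()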

Now:

    "INSTALLED_APPS." % (module, name)
RuntimeError: Model class django.contrib.contenttypes.models.ContentType doesn't declare an explicit app_label and isn't in an application in INSTALLED_APPS.

Then I went into the Scrapy folder inside Anaconda and tried adding:

    INSTALLED_APPS = [
        'myAppName.apps.myAppNameConfig',
        'django.contrib.admin',
        'django.contrib.auth',
        'django.contrib.contenttypes',
        'django.contrib.sessions',
        'django.contrib.messages',
        'django.contrib.staticfiles',
    ]

That is what normally works for Django, so I assumed it would not be accepted there.
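
If it helps to show what I mean: as far as I understand it, that list only takes effect when it reaches Django through a real settings module or through settings.configure(), not through a file dropped into the Scrapy install. A rough sketch of an explicit configure() call, with myAppName kept as a placeholder:

    from django.conf import settings
    import django

    # Pass INSTALLED_APPS explicitly instead of calling configure() with no arguments,
    # so django.contrib.contenttypes (and the rest) have somewhere to register.
    settings.configure(
        INSTALLED_APPS=[
            'myAppName.apps.myAppNameConfig',  # placeholder app config
            'django.contrib.admin',
            'django.contrib.auth',
            'django.contrib.contenttypes',
            'django.contrib.sessions',
            'django.contrib.messages',
            'django.contrib.staticfiles',
        ],
    )
    django.setup()  # load the app registry after configuring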

Can someone help me get out of this string of errors so I can actually get Scrapy running? I have been trying to launch the Scrapy program all day, but it feels like I just keep running into one different error after another.

Am I overthinking this? Is it as simple as deleting a few lines of code or fixing a wrong directory?

Thanks

My script so far:

    # Django bits added at the top while trying to get past the settings errors
    from django.conf import settings
    settings.configure()

    from django.core.wsgi import get_wsgi_application
    application = get_wsgi_application()

    from scrapy import cmdline

    import csv
    import scrapy

    # -*- coding: utf-8 -*-

    # Hands control to Scrapy, the same as running "scrapy crawl jobs" from the shell
    cmdline.execute("scrapy crawl jobs".split())

    # Imports used by the spider itself
    from scrapy.contrib.spiders import CrawlSpider, Rule
    from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
    from scrapy.selector import HtmlXPathSelector
    from jobs.items import jobs

    from csv import DictWriter, QUOTE_MINIMAL
    from lxml import html

    import requests

    SEARCH_PAGE_URL = "https://www.richelieu.com/us/en/search?s=%20"
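
In case the layout itself is part of the problem: my understanding is that the cmdline.execute() call is supposed to live in a small launcher script of its own rather than in the middle of the spider's imports, and that the scrapy.contrib paths have moved in newer Scrapy versions (scrapy.spiders.CrawlSpider and scrapy.linkextractors.LinkExtractor). A rough sketch of such a launcher, where run_jobs.py and myProjectName.settings are names I made up:

    # run_jobs.py -- hypothetical launcher, kept separate from the spider module
    import os
    import django
    from scrapy import cmdline

    # Placeholder settings path; the real project name goes here.
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myProjectName.settings")
    django.setup()

    # Hand control to Scrapy, same as running "scrapy crawl jobs" from the shell.
    cmdline.execute("scrapy crawl jobs".split())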

0 Answers:
