My Celery/Redis task isn't working in my Django app on Heroku

Asked: 2016-09-17 21:30:09

Tags: python django heroku deployment redis

My task works fine on my local server, but when I push it to Heroku nothing happens, and there is no error message. I'm new to this; locally I start the worker by running

celery worker -A blog -l info

So I'm guessing the problem has to do with that command, since I never run it on Heroku. I suspect I'm supposed to do it somewhere in my app. Here's my code:

celery.py

import os

from celery import Celery

from django.conf import settings

# set the default Django settings module for the 'celery' program.
os.environ.setdefault(
    'DJANGO_SETTINGS_MODULE', 'gettingstarted.settings'
)

app = Celery('blog')

# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
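Since `app.config_from_object('django.conf:settings')` reads Celery's configuration from the Django settings module, the broker has to be pointed at Heroku's Redis there. A minimal sketch, assuming the `REDIS_URL` config var that the Heroku Redis add-on sets (the `BROKER_URL`/`CELERY_RESULT_BACKEND` names follow the pre-4.0 Celery settings style this project uses):

```python
# settings.py (sketch) -- read the broker URL from Heroku's REDIS_URL
# config var, falling back to a local Redis for development.
import os

BROKER_URL = os.environ.get('REDIS_URL', 'redis://localhost:6379/0')
CELERY_RESULT_BACKEND = os.environ.get('REDIS_URL', 'redis://localhost:6379/0')
```

With this in place the same code works locally (against `localhost`) and on Heroku (against the promoted `REDIS_URL`) without any changes.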

my tasks.py

import requests
import random
import os

from bs4 import BeautifulSoup
from .celery import app
from .models import Post
from django.contrib.auth.models import User


@app.task
def the_star():
    def swappo():
        # pick a random user-agent string (stray quotes and padding
        # removed so the header value is sent cleanly)
        user_one = 'Mozilla/5.0 (Windows NT 6.0; WOW64; rv:24.0) Gecko/20100101 Firefox/24.0'
        user_two = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5)'
        user_thr = 'Mozilla/5.0 (Windows NT 6.3; Trident/7.0; rv:11.0) like Gecko'
        user_for = 'Mozilla/5.0 (Macintosh; Intel Mac OS X x.y; rv:10.0) Gecko/20100101 Firefox/10.0'

        agent_list = [user_one, user_two, user_thr, user_for]
        return random.choice(agent_list)

    headers = {
        "user-agent": swappo(),
        "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
        "accept-charset": "ISO-8859-1,utf-8;q=0.7,*;q=0.3",
        "accept-encoding": "gzip,deflate,sdch",
        "accept-language": "en-US,en;q=0.8",
    }

    # scraping from worldstar
    url_to = 'http://www.worldstarhiphop.com'
    html = requests.get(url_to, headers=headers)
    soup = BeautifulSoup(html.text, 'html5lib')
    titles = soup.find_all('section', 'box')
    name = 'World Star'

    if os.getenv('_system_name') == 'OSX':
        author = User.objects.get(id=2)
    else:
        author = User.objects.get(id=3)

    def make_soup(url):
        the_comments_page = requests.get(url, headers=headers)
        soupdata = BeautifulSoup(the_comments_page.text, 'html5lib')
        comment = soupdata.find('div')
        para = comment.find_all('p')
        kids = [child.text for child in para]
        blu = str(kids).strip('[]')
        return blu

    cleaned_titles = [title for title in titles if title.a.get('href') != 'vsubmit.php']
    world_entries = [{'href': url_to + box.a.get('href'),
                      'src': box.img.get('src'),
                      'text': box.strong.a.text,
                      'comments': make_soup(url_to + box.a.get('href')),
                      'name': name,
                      'url': url_to + box.a.get('href'),
                      'embed': None,
                      'author': None,
                      'video': False
                      } for box in cleaned_titles][:10] # The count

    for entry in world_entries:
        # only create a post if no post with this title exists yet
        if not Post.objects.filter(title=entry['text']).exists():
            post = Post()
            post.title = entry['text']
            post.name = entry['name']
            post.url = entry['url']
            post.body = entry['comments']
            post.image_url = entry['src']
            post.video_path = entry['embed']
            post.author = entry['author']
            post.video = entry['video']
            post.status = 'draft'
            post.save()
            post.tags.add("video", "Musica")
    return world_entries
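The save loop issues one `filter(title=...)` query per entry to avoid duplicates; the same check can be done in a single pass against a set of already-known titles. A sketch with a hypothetical helper, `dedupe_by_title`, which is not part of the original code:

```python
def dedupe_by_title(entries, existing_titles):
    """Return only the entries whose 'text' (the post title) has not
    been seen, either in existing_titles or earlier in this batch."""
    seen = set(existing_titles)
    unique = []
    for entry in entries:
        if entry['text'] not in seen:
            seen.add(entry['text'])
            unique.append(entry)
    return unique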

my views.py

from django.shortcuts import redirect

from .tasks import the_star


def shopan(request):
    the_star.delay()
    return redirect('/')

I have multiple Redis instances, so I ran

heroku redis:promote REDIS_URL

and that REDIS_URL is what I use in my environment variables, as you can see above. How can I get this working?

1 Answer:

Answer 0 (score: 1):

You need to add an entry to your Procfile that tells Heroku to start the Celery worker:

worker: celery worker -A blog -l info
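A complete Procfile for this setup pairs the worker entry with the web dyno; the gunicorn line below is an assumption based on the project being named gettingstarted, so adjust it to your actual WSGI module:

web: gunicorn gettingstarted.wsgi
worker: celery worker -A blog -l info

After pushing, run `heroku ps:scale worker=1` so Heroku actually starts a worker dyno, and check `heroku logs --dyno worker` to confirm the worker connected to Redis and registered the task.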