Automatically scanning RSS feeds and populating the WebContent model

Asked: 2016-08-04 11:47:28

Tags: python json django rss

I am trying to build a Django server application (currently running on localhost) that will periodically (say, once an hour) check the RSS feeds supplied by the Blogger model, extract data from them, and use that data to populate the WebContent model.

So far, I have created a data endpoint at http://127.0.0.1:8000/api/blogger/, which outputs the following:

[
    {
        "id": "c384f191-662f-43f9-a39d-2da737e7cbb8",
        "name": "Patricia Bright",
        "avatar": "http://127.0.0.1:8000/media/img/1470305802086_IMG_5921.JPG",
        "rss_url": "http://patriciabright.co.uk/?feed=rss2",
    },
    {
        "id": "dc70ca6b-94cc-4ba9-a0c8-0d907f7ab020",
        "name": "Shirley B. Eniang",
        "avatar": "http://127.0.0.1:8000/media/img/1470305797487_photo.jpg",
        "rss_url": "http://shirleyswardrobe.com/feed/",
    }
]

Now I want to loop over the rss_url values above and extract specific information from each RSS feed in order to populate the WebContent model. I want this to run once an hour, and before populating WebContent it should check whether the data already exists (so that I do not end up with duplicate entries).
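
For the hourly run, what I currently have in mind (I am not sure it is the right approach) is a custom Django management command triggered from cron. Below is a rough sketch of the idea, where the app name blog, the command name scan_feeds and the paths are just placeholders:

# blog/management/commands/scan_feeds.py  (app and command names are placeholders)
from django.core.management.base import BaseCommand

from blog.models import Blogger


class Command(BaseCommand):
    help = "Scan every blogger's RSS feed and populate WebContent"

    def handle(self, *args, **options):
        # skip bloggers that have no RSS feed configured
        for blogger in Blogger.objects.exclude(rss_url__isnull=True):
            blogger.generate_web_content()
            self.stdout.write("Scanned feed for %s" % blogger.name)

This would be paired with a crontab entry along the lines of 0 * * * * /path/to/python /path/to/manage.py scan_feeds to run it once an hour, but I am open to a better way of scheduling this.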

Here is what I have done so far in models.py:

from uuid import uuid4
from time import time
from django.db import models
from django.contrib.contenttypes.models import ContentType
import feedparser


def get_upload_avatar_path(instance, filename):
    timestamp = int(round(time() * 1000))
    path = "img/%s_%s" % (timestamp, filename)
    return path


class Blogger(models.Model):
    """
    Blogger model
    """
    id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
    name = models.CharField(max_length=255, null=True, default=None)
    avatar = models.ImageField(upload_to=get_upload_avatar_path, blank=True, null=True, default=None, max_length=255)
    url = models.CharField(max_length=255, null=True, default=None)
    rss_url = models.CharField(max_length=255, null=True, default=None)
    instagram_url = models.CharField(max_length=255, null=True, default=None)
    twitter_url = models.CharField(max_length=255, null=True, default=None)
    youtube_url = models.CharField(max_length=255, null=True, default=None)

    class Meta:
        verbose_name_plural = "Bloggers"

    def __str__(self):
        return "%s" % (self.name)

    def generate_web_content(self):
        """
        Scan for blogger RSS feeds and generate web content
        :return: None
        """
        web_content = WebContent.objects.create(user_profile=self)
        self._scan_web_content(web_content)

    def _scan_web_content(self, web_content=None):
        """
        Scan blogger RSS feeds
        :param web_content: Associated WebContent object
        :return: None
        """
        urls = Blogger.objects.all()
        d = feedparser.parse(urls['rss_url'])
        for post in d.entries:
            blogger = self
            title = post.title.encode('ascii', 'ignore')
            url = post.link.encode('ascii', 'ignore')


class WebContent(models.Model):
    """
    Model to store blogger web content
    """
    id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
    blogger = models.ForeignKey(Blogger)
    title = models.CharField(max_length=255, null=True, default=None)
    url = models.CharField(max_length=255, null=True, default=None)

    class Meta:
        verbose_name_plural = "Web Content"

I managed to mock up an implementation in a separate Python file, which works fine. Essentially, I am now trying to port it into my Django app:

import feedparser
import json
import sys
import os


os.system('cls')


# Import json
with open('bloggers.json') as jsonfile:
    j = json.load(jsonfile)


for blogger in j['bloggers']:
    print (blogger['name'])
    print "---------------------"

    d = feedparser.parse(blogger['rssUrl'])
    for post in d.entries:
        print post.title.encode('ascii', 'ignore') + ": " + post.link.encode('ascii', 'ignore') + "\n"

Any help would be greatly appreciated.

1 Answer:

Answer (score: 1):

There seem to be a number of issues in your code:

  1. In the method generate_web_content, you create the WebContent object by passing user_profile=self, but it should be blogger=self.

  2. In the method _scan_web_content, you query all the Blogger objects, like this:

    urls = Blogger.objects.all()
    

    So urls is a QuerySet object; you cannot index it by a key like urls['rss_url']. Since the method is bound to a single Blogger instance, you should do this instead:

    d = feedparser.parse(self.rss_url)
    
  3. In the for loop, you should set the attributes on the WebContent object that was passed in as a parameter:

    for post in d.entries:
        web_content.blogger = self
        web_content.title = post.title.encode('ascii', 'ignore')
        web_content.url = post.link.encode('ascii', 'ignore')
    web_content.save()
    

    Otherwise the method will not work (a combined sketch of the corrected code follows at the end of this answer).

  Hope that clarifies!
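
Putting the three points together: since the question also asks for a duplicate check, here is a rough, slightly restructured sketch (not a drop-in replacement) that folds the feed scan into generate_web_content on the Blogger model and uses get_or_create, so that posts which are already stored are simply looked up instead of inserted again and each post gets its own WebContent row:

    # inside the Blogger class, replacing generate_web_content/_scan_web_content
    def generate_web_content(self):
        """Scan this blogger's RSS feed and store each post as WebContent."""
        if not self.rss_url:
            return
        d = feedparser.parse(self.rss_url)
        for post in d.entries:
            # get_or_create looks the post up by blogger + url first,
            # so re-running this every hour does not create duplicates
            WebContent.objects.get_or_create(
                blogger=self,
                url=post.link,
                defaults={"title": post.title},
            )

You can then call generate_web_content() for every blogger from whatever scheduler you end up using (cron, Celery beat, etc.).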