Many Facebook fan pages now use the following format - https://www.facebook.com/TiltedKiltEsplanade, where "TiltedKiltEsplanade" is an example of the name claimed by the page owner. However, the RSS feed for that same page is located at https://www.facebook.com/feeds/page.php?id=414117051979234&format=rss20, where 414117051979234 is a numeric ID that can be determined by visiting https://graph.facebook.com/TiltedKiltEsplanade and looking for the last numeric ID listed on the page (there are two similar IDs near the top of the page, but those can be ignored).
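For a single page, that lookup can be done by hand in a few lines, e.g. (a minimal sketch, assuming the requests library is installed; the URLs are the graph and feed endpoints described above):

import requests

# Fetch the graph representation of the page and read its numeric ID.
data = requests.get('https://graph.facebook.com/TiltedKiltEsplanade').json()
page_id = data['id']  # e.g. '414117051979234'
print('https://www.facebook.com/feeds/page.php?id=%s&format=rss20' % page_id)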
I have a list of Facebook fan pages in the format described above, and I would like to quickly retrieve the numeric IDs corresponding to those pages so that I can add them all to an RSS reader. What is the easiest way to scrape these pages? I am familiar with Scrapy, but I'm not sure it can be used here, since the graph version of the page isn't marked up in a way that allows easy scraping (as far as I can tell).
Thanks.
Answer 0: (score: 4)
The output of the graph request is a JSON object, which is much easier to work with than HTML content.
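For reference, the body returned by the graph endpoint looks roughly like this (an illustrative, trimmed example; the real payload has more keys), and the numeric ID can be pulled out with the standard json module:

import json

# Illustrative (trimmed) graph response for the page above.
body = '{"id": "414117051979234", "name": "Tilted Kilt Esplanade"}'
print(json.loads(body)['id'])  # 414117051979234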
Here is a simple implementation of what you are looking for:
# file: myspider.py
import json

from scrapy.http import Request
from scrapy.spider import BaseSpider


class MySpider(BaseSpider):

    name = 'myspider'

    start_urls = (
        # Add here more urls. Alternatively, make the start urls dynamic
        # reading them from a file, db or an external url.
        'https://www.facebook.com/TiltedKiltEsplanade',
    )

    graph_url = 'https://graph.facebook.com/{name}'
    feed_url = 'https://www.facebook.com/feeds/page.php?id={id}&format=rss20'

    def start_requests(self):
        for url in self.start_urls:
            # This assumes there is no trailing slash.
            name = url.rpartition('/')[2]
            yield Request(self.graph_url.format(name=name), self.parse_graph)

    def parse_graph(self, response):
        data = json.loads(response.body)
        return Request(self.feed_url.format(id=data['id']), self.parse_feed)

    def parse_feed(self, response):
        # You can use the xml spider, xml selector or the feedparser module
        # to extract information from the feed.
        self.log('Got feed: %s' % response.body[:100])
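The comment in start_urls hints at reading the URLs from a file instead of hard-coding them. A minimal variant of start_requests for the spider above (a sketch, assuming a hypothetical pages.txt with one fan-page URL per line):

    def start_requests(self):
        with open('pages.txt') as f:
            for line in f:
                url = line.strip().rstrip('/')
                if not url:
                    continue  # skip blank lines
                name = url.rpartition('/')[2]
                yield Request(self.graph_url.format(name=name), self.parse_graph)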
Output:
$ scrapy runspider myspider.py
2014-01-11 02:19:48-0400 [scrapy] INFO: Scrapy 0.21.0-97-g21a8a94 started (bot: scrapybot)
2014-01-11 02:19:48-0400 [scrapy] DEBUG: Optional features available: ssl, http11, boto, django
2014-01-11 02:19:48-0400 [scrapy] DEBUG: Overridden settings: {}
2014-01-11 02:19:49-0400 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-01-11 02:19:49-0400 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-01-11 02:19:49-0400 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-01-11 02:19:49-0400 [scrapy] DEBUG: Enabled item pipelines:
2014-01-11 02:19:49-0400 [myspider] INFO: Spider opened
2014-01-11 02:19:49-0400 [myspider] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-01-11 02:19:49-0400 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2014-01-11 02:19:49-0400 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2014-01-11 02:19:49-0400 [myspider] DEBUG: Crawled (200) <GET https://graph.facebook.com/TiltedKiltEsplanade> (referer: None)
2014-01-11 02:19:50-0400 [myspider] DEBUG: Crawled (200) <GET https://www.facebook.com/feeds/page.php?id=414117051979234&format=rss20> (referer: https://graph.facebook.com/TiltedKiltEsplanade)
2014-01-11 02:19:50-0400 [myspider] DEBUG: Got feed: <?xml version="1.0" encoding="utf-8"?>
<rss version="2.0"
xmlns:media="http://search.yahoo.com
2014-01-11 02:19:50-0400 [myspider] INFO: Closing spider (finished)
2014-01-11 02:19:50-0400 [myspider] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 578,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 6669,
'downloader/response_count': 2,
'downloader/response_status_count/200': 2,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2014, 1, 11, 6, 19, 50, 849162),
'log_count/DEBUG': 9,
'log_count/INFO': 3,
'request_depth_max': 1,
'response_received_count': 2,
'scheduler/dequeued': 2,
'scheduler/dequeued/memory': 2,
'scheduler/enqueued': 2,
'scheduler/enqueued/memory': 2,
'start_time': datetime.datetime(2014, 1, 11, 6, 19, 49, 221361)}
2014-01-11 02:19:50-0400 [myspider] INFO: Spider closed (finished)