I've set up a scraper that uses delayed_job so that it runs in the background.
class Scraper
  def do_scrape
    # do some scraping stuff
  end
  handle_asynchronously :do_scrape
end
Right now, if I comment out the handle_asynchronously line, open a console, and run the scraper, it works fine. It does exactly what I expect.
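For reference, handle_asynchronously keeps the original method around under a *_without_delay alias and swaps in a version that just enqueues a Delayed::Job, so the two ways of calling it from a console look roughly like this (the Scraper.new setup is assumed for illustration, not taken from my actual code):

    scraper = Scraper.new

    # With handle_asynchronously in place, this call only inserts a row
    # into the delayed_jobs table; the work happens when a worker picks it up.
    scraper.do_scrape

    # The original implementation stays available under the _without_delay
    # alias, so this runs the scrape immediately, like commenting the line out.
    scraper.do_scrape_without_delay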
However, when I try to run the scrape as a delayed job, it doesn't seem to do anything at all. On top of that, it doesn't appear to log anything useful either.
Here is what my log shows, from enqueuing the job through to running rake jobs:work.
County Load (1.0ms) SELECT "counties".* FROM "counties" WHERE "counties"."name" = 'Fermanagh' LIMIT 1
(0.1ms) BEGIN
SQL (20.5ms) INSERT INTO "delayed_jobs" ("attempts", "created_at", "failed_at", "handler", "last_error", "locked_at", "locked_by", "priority", "run_at", "updated_at") VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10) RETURNING "id" [["attempts", 0], ["created_at", Mon, 30 May 2011 21:19:25 UTC +00:00], ["failed_at", nil], ["handler", "---
# serialized object omitted for conciseness
nmethod_name: :refresh_listings_in_the_county_without_delay\nargs: []\n\n"], ["last_error", nil], ["locked_at", nil], ["locked_by", nil], ["priority", 0], ["run_at", Mon, 30 May 2011 21:19:25 UTC +00:00], ["updated_at", Mon, 30 May 2011 21:19:25 UTC +00:00]]
(0.9ms) COMMIT
Delayed::Backend::ActiveRecord::Job Load (0.4ms) SELECT "delayed_jobs".* FROM "delayed_jobs" WHERE (locked_by = 'host:David-Tuites-MacBook-Pro.local pid:7743' AND locked_at > '2011-05-30 17:19:32.116511') LIMIT 1
(0.1ms) BEGIN
SQL (0.3ms) DELETE FROM "delayed_jobs" WHERE "delayed_jobs"."id" = $1 [["id", 42]]
(0.4ms) COMMIT
As you can see, it seems to just insert a job and then delete it straight away? The scrape method should take at least a few minutes to run.
The worst part is that it was working perfectly last night and I can't think of a single thing I've done differently. I tried pinning the gem to a previous version, since it was updated recently, but that didn't seem to fix the problem.
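For completeness, pinning the gem just meant a Gemfile entry along these lines (the version number below is only a placeholder for illustration, not necessarily the release I actually rolled back to):

    gem 'delayed_job', '2.1.3'  # placeholder version; pin to whichever release worked before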
Any ideas?
Answer 0 (score: 1)
Have you configured Delayed Job to delete failed jobs? Look for the following setting in your initializer:
Delayed::Worker.destroy_failed_jobs = true
If so, set it to false, then look at the exception stored in the delayed_jobs table when the job fails and debug from there.
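A minimal sketch of what that could look like, assuming the setting lives in an initializer such as config/initializers/delayed_job_config.rb (the file name is just a convention):

    # config/initializers/delayed_job_config.rb
    # Keep failed jobs in the table so their last_error column can be inspected.
    Delayed::Worker.destroy_failed_jobs = false

Then, once the worker has run and the job has failed, the stored exception and backtrace can be read from a console, for example:

    # Print the error recorded for any job that has failed at least once.
    Delayed::Job.where('last_error IS NOT NULL').each { |job| puts job.last_error }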