Question: why does Redis fill up if job results are discarded immediately?
I use Redis as a queue to create PDFs asynchronously and then save the result to my database. Since the result is saved, I won't need to access the object again later, so I don't need to keep the result in Redis after the job has been processed.
To prevent results from lingering in Redis, I set the TTL to 0:
parameter_dict = {
    "order": serializer.object,
    "photo": base64_image,
    "result_ttl": 0
}
django_rq.enqueue(procces_template, **parameter_dict)
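For reference, these are the result_ttl values rq accepts (a quick sketch using a hypothetical do_work function):

import django_rq

django_rq.enqueue(do_work, result_ttl=0)   # discard the return value immediately (this question's case)
django_rq.enqueue(do_work, result_ttl=60)  # keep the return value for 60 seconds (the default is 500)
django_rq.enqueue(do_work, result_ttl=-1)  # never expire; you must delete the job manually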
The problem is that even though the RQ worker says the job result is discarded immediately:
15:33:35 Job OK, result = John Doe's nail order to 568 Broadway
15:33:35 Result discarded immediately.
15:33:35
15:33:35 *** Listening on high, default, low...
Redis still fills up and throws:
ResponseError: command not allowed when used memory > 'maxmemory'
Is there another parameter that needs to be set in Redis / django-rq to prevent Redis from filling up if the job results aren't being stored?
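One way to see what is actually consuming the memory is to compare Redis's usage against the number of rq job hashes (a sketch using standard redis-py calls; adjust the connection details to your setup):

from redis import Redis

redis_conn = Redis()
print(redis_conn.info('memory')['used_memory_human'])  # current memory usage
print(redis_conn.config_get('maxmemory'))              # the configured ceiling
print(len(redis_conn.keys('rq:job:*')))                # how many rq job hashes remain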
Update
After reading this post, I suspected that memory might be filling up because of failed jobs lingering in Redis.
Using this snippet:
def print_redis_failed_queue():
    q = django_rq.get_failed_queue()
    while True:
        # dequeue() pops each job off the queue as it is printed
        job = q.dequeue()
        if not job:
            break
        print job
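Note that dequeue() removes each job as it prints it. A non-destructive way to peek at the failed queue (a sketch relying on rq's Queue.job_ids property) would be:

q = django_rq.get_failed_queue()
for job_id in q.job_ids:  # read-only: nothing is popped off the queue
    print(job_id)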
Here is a pastebin of the dump of keys in Redis:
It is too long to post here, but its size seems to support my theory. But using:
def delete_redis_failed_queue():
    q = django_rq.get_failed_queue()
    count = 0
    while True:
        job = q.dequeue()
        if not job:
            print "{} Jobs deleted.".format(count)
            break
        job.delete()
        count += 1
doesn't clear Redis the way I expect it to. How can I get a more accurate dump of the keys in Redis? Am I clearing the jobs correctly?
Answer 0 (score: 2)
Although the cause of the orphaned jobs is unknown, the problem was solved with the following snippet:
from redis import Redis
from rq.job import Job

redis_conn = Redis()
# Walk every rq job hash left in Redis and delete it.
for key in redis_conn.keys('rq:job:*'):
    job_number = key.split("rq:job:")[1]
    job = Job.fetch(job_number, connection=redis_conn)
    job.delete()
In my particular case, calling this snippet (actually the delete_orphaned_jobs() method below) after the completion of each job ensured that Redis would not fill up and that orphaned jobs would be taken care of. For more details on the issue, see the conversation in the django-rq issue I opened.
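Note that the snippet above deletes every rq:job:* hash it finds, including jobs that are still waiting in a queue. A more conservative variant (my sketch, not part of the original fix, assuming rq's Queue.job_ids property) deletes only the hashes that no queue still references:

from redis import Redis
from rq.queue import Queue
from rq.job import Job

redis_conn = Redis()

# Collect the IDs of every job some queue still knows about.
queued_ids = set()
for name in ["default", "high", "low", "failed"]:
    queued_ids.update(Queue(name, connection=redis_conn).job_ids)

# Delete only the job hashes no queue references (the true orphans).
for key in redis_conn.keys('rq:job:*'):
    job_id = key.split("rq:job:")[1]
    if job_id not in queued_ids:
        Job.fetch(job_id, connection=redis_conn).delete()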
While diagnosing this issue, I also created a utility class for easily inspecting and deleting jobs/orphaned jobs:
import os
import logging

import django_rq
import redis
from rq.queue import Queue
from rq.job import Job

logger = logging.getLogger(__name__)


class RedisTools:
    '''
    A set of utility tools for interacting with a redis cache
    '''

    def __init__(self):
        self._queues = ["default", "high", "low", "failed"]
        self.get_redis_connection()

    def get_redis_connection(self):
        redis_url = os.getenv('REDISTOGO_URL', 'redis://localhost:6379')
        self.redis = redis.from_url(redis_url)

    def get_queues(self):
        return self._queues

    def get_queue_count(self, queue):
        return Queue(name=queue, connection=self.redis).count

    def msg_print_log(self, msg):
        print msg
        logger.info(msg)

    def get_key_count(self):
        return len(self.redis.keys('rq:job:*'))

    def get_queue_job_counts(self):
        queues = self.get_queues()
        queue_counts = [self.get_queue_count(queue) for queue in queues]
        return zip(queues, queue_counts)

    def has_orphanes(self):
        # Orphans exist when there are more rq:job:* hashes in Redis
        # than jobs accounted for across all known queues.
        job_count = sum([count[1] for count in self.get_queue_job_counts()])
        return job_count < self.get_key_count()

    def print_failed_jobs(self):
        q = django_rq.get_failed_queue()
        while True:
            job = q.dequeue()
            if not job:
                break
            print job

    def print_job_counts(self):
        for queue in self.get_queue_job_counts():
            print "{:.<20}{}".format(queue[0], queue[1])
        print "{:.<20}{}".format('Redis Keys:', self.get_key_count())

    def delete_failed_jobs(self):
        q = django_rq.get_failed_queue()
        count = 0
        while True:
            job = q.dequeue()
            if not job:
                self.msg_print_log("{} Jobs deleted.".format(count))
                break
            job.delete()
            count += 1

    def delete_orphaned_jobs(self):
        if not self.has_orphanes():
            return self.msg_print_log("No orphan jobs to delete.")
        for i, key in enumerate(self.redis.keys('rq:job:*')):
            job_number = key.split("rq:job:")[1]
            job = Job.fetch(job_number, connection=self.redis)
            job.delete()
            self.msg_print_log("[{}] Deleted job {}.".format(i, job_number))
Answer 1 (score: 1)
You can use the "black hole" exception handler from http://python-rq.org/docs/exceptions/ together with job.cancel():
def black_hole(job, *exc_info):
    # Delete the job hash on redis, otherwise it will stay on the queue forever
    job.cancel()
    return False
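For the handler to take effect, it has to be registered when the worker starts. A minimal sketch, assuming an rq version whose Worker accepts the exception_handlers argument (push_exc_handler is the older equivalent):

from redis import Redis
from rq import Queue, Worker

redis_conn = Redis()
worker = Worker([Queue(connection=redis_conn)],
                connection=redis_conn,
                exception_handlers=[black_hole])
worker.work()  # failing jobs are now cancelled instead of piling up in Redis

Returning False stops the handler chain, so the default handler that moves jobs to the failed queue never runs.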