I have a very unreliable API that I query using Python requests. I've been thinking about using requests_cache and setting expire_after to 999999999999, like I've seen other people do.
The only problem is that I don't know whether, when the API starts working again and the data has been updated, requests_cache will automatically refresh and delete the old entries.
I've tried reading the documentation, but I can't see this addressed anywhere.
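For context, the setup described above would look something like this (a sketch; 'cache_name' and the URL are placeholders, not from the question):

```python
import requests
import requests_cache

# Cache responses essentially forever by using a huge expire_after.
requests_cache.install_cache('cache_name', expire_after=999999999999)

response = requests.get('https://api.example.com/data')
```

With such a long expiry, entries are effectively never refreshed, which is exactly why the question arises.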
Answer 0 (score: 1)
requests_cache won't update until the expire_after time has passed. In that case it won't detect that your API has returned to a working state.
I note that the project has since added an option I implemented in the past; you can now set the old_data_on_error option when configuring the cache; see the CachedSession documentation:
old_data_on_error – if True, it will return expired cached responses if the update fails.
In other words, if updating from the backend fails, it will reuse the existing cached data rather than delete it.
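Assuming a requests_cache version that supports it, the option described above would be enabled roughly like this (a sketch; 'cache_name' and expire_after=300 are placeholder values):

```python
import requests_cache

# old_data_on_error=True: if refreshing an expired entry fails,
# the expired cached response is returned instead of the error.
session = requests_cache.CachedSession(
    'cache_name', expire_after=300, old_data_on_error=True)
```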
In the past, I created my own requests_cache session setup (plus a small patch) that reuses cached values beyond expire_after if the backend issues a 500 error or times out (using a short timeout), to cope with a problematic API layer rather than rely on expire_after:
import logging
from datetime import datetime, timedelta

from requests.exceptions import ConnectionError, Timeout
from requests_cache.core import dispatch_hook, CachedSession


log = logging.getLogger(__name__)
# Stop logging from complaining if no logging has been configured.
log.addHandler(logging.NullHandler())


class FallbackCachedSession(CachedSession):
    """Cached session that'll reuse expired cache data on timeouts

    This allows survival in case the backend is down, living off stale
    data until it comes back.

    """
    def send(self, request, **kwargs):
        # this *bypasses* CachedSession.send; we want to call the method
        # CachedSession.send() would have delegated to!
        session_send = super(CachedSession, self).send

        if (self._is_cache_disabled or
                request.method not in self._cache_allowable_methods):
            response = session_send(request, **kwargs)
            response.from_cache = False
            return response

        cache_key = self.cache.create_key(request)

        def send_request_and_cache_response(stale=None):
            try:
                response = session_send(request, **kwargs)
            except (Timeout, ConnectionError):
                if stale is None:
                    raise
                log.warning('No response received, reusing stale response for '
                            '%s', request.url)
                return stale

            if stale is not None and response.status_code == 500:
                log.warning('Response gave 500 error, reusing stale response '
                            'for %s', request.url)
                return stale

            if response.status_code in self._cache_allowable_codes:
                self.cache.save_response(cache_key, response)
            response.from_cache = False
            return response

        response, timestamp = self.cache.get_response_and_time(cache_key)
        if response is None:
            return send_request_and_cache_response()

        if self._cache_expire_after is not None:
            is_expired = datetime.utcnow() - timestamp > self._cache_expire_after
            if is_expired:
                self.cache.delete(cache_key)
                # try and get a fresh response, but if that fails reuse the
                # stale one
                return send_request_and_cache_response(stale=response)

        # dispatch hook here, because we've removed it before pickling
        response.from_cache = True
        response = dispatch_hook('response', request.hooks, response, **kwargs)
        return response


def basecache_delete(self, key):
    # We don't really delete; we instead set the timestamp to
    # datetime.min. This way we can re-use stale values if the backend
    # fails
    try:
        if key not in self.responses:
            key = self.keys_map[key]
        self.responses[key] = self.responses[key][0], datetime.min
    except KeyError:
        return

from requests_cache.backends.base import BaseCache
BaseCache.delete = basecache_delete
The above subclass of CachedSession bypasses the original send() method and calls the method it would have delegated to, requests.Session.send(), directly; this lets it return an existing cached value even when the expiry has passed but the backend is failing. Deletion is disabled in favour of resetting the entry's stored timestamp to datetime.min, so the old value can still be reused if a new request fails.
Use FallbackCachedSession instead of a regular CachedSession object.
If you want to use requests_cache.install_cache(), make sure to pass FallbackCachedSession to that function in the session_factory keyword argument:
import requests_cache

requests_cache.install_cache(
    'cache_name', backend='some_backend', expire_after=180,
    session_factory=FallbackCachedSession)
requests_cache implemented a variation of this some time after I hacked the above together. My approach is more comprehensive than theirs, though: my version will fall back to a stale response even if you previously explicitly marked it as deleted.
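The fallback behaviour described above can be illustrated without requests at all. A minimal stdlib-only sketch of the same idea (the names StaleFallbackCache and fetch are illustrative, not part of requests_cache):

```python
import time


class StaleFallbackCache:
    """Cache that serves stale entries when a refresh attempt fails."""

    def __init__(self, fetch, expire_after):
        self.fetch = fetch              # callable that may raise on failure
        self.expire_after = expire_after
        self._store = {}                # key -> (value, timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is not None and time.monotonic() - entry[1] < self.expire_after:
            return entry[0]             # fresh enough, serve from cache
        try:
            value = self.fetch(key)
        except Exception:
            if entry is None:
                raise                   # nothing stale to fall back on
            return entry[0]             # backend failed: reuse stale value
        self._store[key] = (value, time.monotonic())
        return value
```

The key point, as in FallbackCachedSession above, is that an expired entry is only replaced once a fresh response has actually been obtained.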
Answer 1 (score: 0)
Try doing something like this:
class UnreliableAPIClient:
    def __init__(self):
        self.some_api_method_cached = {}  # we will store results here

    def some_api_method(self, param1, param2):
        params_hash = "{0}-{1}".format(param1, param2)  # need to identify input
        try:
            result = do_call_some_api_method_with_fail_probability(param1, param2)
            self.some_api_method_cached[params_hash] = result  # save result
        except Exception:
            result = self.some_api_method_cached.get(params_hash)  # resort to cached result
            if result is None:
                raise  # re-raise the exception if nothing is cached
        return result
Of course, you can use this to make a simple decorator - http://www.artima.com/weblogs/viewpost.jsp?thread=240808
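The decorator variant alluded to above might look like this (a sketch; the name fallback_to_cached is made up here):

```python
import functools


def fallback_to_cached(func):
    """On failure, return the last result cached for the same arguments."""
    cache = {}

    @functools.wraps(func)
    def wrapper(*args):
        try:
            result = func(*args)
            cache[args] = result      # remember the last good result
        except Exception:
            if args not in cache:
                raise                 # nothing cached: re-raise
            result = cache[args]      # fall back to the cached result
        return result

    return wrapper
```

Any function that takes hashable positional arguments can then simply be decorated with @fallback_to_cached to get the same stale-on-failure behaviour.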