Looking for a way to implement a generic, reusable memoization function that takes a function and returns a memoized version of it?
Something like the Python @memo decorator (from Norvig's site):
def memo(f):
    table = {}
    def fmemo(*args):
        if args not in table:
            table[args] = f(*args)
        return table[args]
    fmemo.memo = table
    return fmemo
More generally, is there a way to express generic, reusable decorators in C++, possibly using the new features of C++11?
Answer 0 (score: 39)
A compact one that returns a lambda:
#include <functional>
#include <map>
#include <tuple>

template <typename R, typename... Args>
std::function<R (Args...)> memo(R (*fn)(Args...)) {
    std::map<std::tuple<Args...>, R> table;
    return [fn, table](Args... args) mutable -> R {
        auto argt = std::make_tuple(args...);
        auto memoized = table.find(argt);
        if(memoized == table.end()) {
            auto result = fn(args...);
            table[argt] = result;
            return result;
        } else {
            return memoized->second;
        }
    };
}
In C++14, generalized return type deduction can be used to avoid the extra indirection introduced by returning a std::function.
Making this fully generic, allowing arbitrary function objects to be passed without wrapping them in std::function first, is left as an exercise for the reader.
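As a sketch of what that C++14 hint could look like (the name memo14 and the body below are assumptions extrapolated from the C++11 version above, not code from the answer), return type deduction lets the wrapper hand back the closure itself instead of a std::function:

#include <map>
#include <tuple>

// Sketch only: same idea as above, but C++14 return type deduction lets us
// return the closure directly, avoiding the std::function indirection.
template <typename R, typename... Args>
auto memo14(R (*fn)(Args...)) {
    std::map<std::tuple<Args...>, R> table;
    return [fn, table](Args... args) mutable -> R {
        auto argt = std::make_tuple(args...);
        auto memoized = table.find(argt);
        if (memoized == table.end()) {
            auto result = fn(args...);
            table[argt] = result;
            return result;
        }
        return memoized->second;
    };
}

Deduction still happens from the function pointer's signature, so calling it looks exactly like the std::function version above.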
Answer 1 (score: 20)
The proper way to do memoization in C++ is to mix a Y-combinator in.
Your base function needs to be modified: instead of calling itself directly, it takes a templated reference to itself (or, alternatively, a std::function<Same_Signature> recursion) as its first argument.
We start with a Y-combinator. We then add a cache on the operator() and rename it to memoizer, and give it a fixed signature (for the table).
The only thing left is to write a tuple_hash<template<class...>class Hash> that hashes a tuple.
The type of a function that can be memoized is (((Args...)->R), Args...) -> R, which makes the memoizer of type ( (((Args...) -> R), Args...) -> R ) -> ((Args...) -> R). Having a Y-combinator around to produce a "traditional" recursive implementation can also be useful.
Note that if the memoized function modifies its arguments during a call, the memoizer will cache the results in the wrong slot.
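Since the answer leaves tuple_hash as an exercise, here is one possible minimal sketch that the memoizer below could use. It hashes each tuple element with Hash<T> and folds the results together; the mixing constants follow the well-known boost::hash_combine recipe and are an assumption, not something specified by the answer.

#include <cstddef>
#include <functional>
#include <tuple>
#include <utility>

// Hypothetical tuple_hash<Hash>: hash every element of a std::tuple with
// Hash<T> and combine the per-element hashes into one seed.
template<template<class...> class Hash = std::hash>
struct tuple_hash {
private:
    static void hash_combine(std::size_t& seed, std::size_t value) {
        // Assumed mixing step (boost::hash_combine-style), not from the answer.
        seed ^= value + 0x9e3779b9 + (seed << 6) + (seed >> 2);
    }
    template<class... Ts, std::size_t... Is>
    static std::size_t hash_impl(std::tuple<Ts...> const& tup,
                                 std::index_sequence<Is...>) {
        std::size_t seed = 0;
        // Expand the pack: hash each element and mix it into the seed.
        int dummy[] = { 0, (hash_combine(seed, Hash<Ts>{}(std::get<Is>(tup))), 0)... };
        (void)dummy;
        return seed;
    }
public:
    template<class... Ts>
    std::size_t operator()(std::tuple<Ts...> const& tup) const {
        return hash_impl(tup, std::index_sequence_for<Ts...>{});
    }
};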
#include <functional>
#include <tuple>
#include <type_traits>
#include <unordered_map>
#include <utility>

// requires a tuple_hash<template<class...>class Hash> functor that hashes a std::tuple
struct wrap {};

template<class Sig, class F, template<class...>class Hash=std::hash>
struct memoizer;

template<class R, class...Args, class F, template<class...>class Hash>
struct memoizer<R(Args...), F, Hash> {
    using base_type = F;
private:
    F base;
    // mutable so the const overload of operator() can still populate the cache
    mutable std::unordered_map< std::tuple<std::decay_t<Args>...>, R, tuple_hash<Hash> > cache;
public:
    template<class... Ts>
    R operator()(Ts&&... ts) const
    {
        auto args = std::make_tuple(ts...);
        auto it = cache.find( args );
        if (it != cache.end())
            return it->second;
        // pass *this as the recursion handle so recursive calls also hit the cache
        auto&& retval = base(*this, std::forward<Ts>(ts)...);
        cache.emplace( std::move(args), retval );
        return decltype(retval)(retval);
    }
    template<class... Ts>
    R operator()(Ts&&... ts)
    {
        auto args = std::tie(ts...);
        auto it = cache.find( args );
        if (it != cache.end())
            return it->second;
        auto&& retval = base(*this, std::forward<Ts>(ts)...);
        cache.emplace( std::move(args), retval );
        return decltype(retval)(retval);
    }

    memoizer(memoizer const&)=default;
    memoizer(memoizer&&)=default;
    memoizer& operator=(memoizer const&)=default;
    memoizer& operator=(memoizer&&)=default;
    memoizer() = delete;

    template<typename L>
    memoizer( wrap, L&& f ):
        base( std::forward<L>(f) )
    {}
};

template<class Sig, class F>
memoizer<Sig, std::decay_t<F>> memoize( F&& f ) { return {wrap{}, std::forward<F>(f)}; }
The live example uses a hard-coded hash function based on this SO post.
auto fib = memoize<size_t(size_t)>(
    [](auto&& fib, size_t i) -> size_t {
        if (i <= 1) return 1;
        return fib(i-1) + fib(i-2);
    }
);
Answer 2 (score: 4)
While @KerrekSB posted a link to another answer, I'll throw my answer into the ring as well (it may be slightly more complicated than the linked answer, although in essence it is very similar):
#include <functional>
#include <map>
#include <tuple>
#include <utility>

/*! \brief A template functor class that can be utilized to memoize any
 *         given function taking any number of arguments.
 */
template <typename R, typename... Args>
struct memoize_wrapper
{
private:
    std::map<std::tuple<Args...>, R> memo_;
    std::function<R(Args...)> func_;

public:
    /*! \brief Auto memoization constructor.
     *
     *  \param func the std::function to be memoized.
     */
    memoize_wrapper(std::function<R(Args...)> func)
        : func_(func)
    { }

    /*! \brief Memoization functor implementation.
     *
     *  \param a Argument values that match the argument types for the
     *           (previously) supplied function.
     *  \return A value of return type R equivalent to calling func(a...).
     *          If this function has been called with these parameters
     *          previously, this will take O(log n) time.
     */
    R operator()(Args&&... a)
    {
        auto tup = std::make_tuple(std::forward<Args>(a)...);
        auto it = memo_.find(tup);
        if(it != memo_.end()) {
            return it->second;
        }
        R val = func_(a...);
        memo_.insert(std::make_pair(std::move(tup), val));
        return val;
    }
}; //end struct memoize_wrapper
Edit: Example usage:
Edit 2: As pointed out, this does not work with recursive functions, because the wrapped function's internal recursive calls go straight to the un-memoized function and bypass the cache.
#include "utility/memoize_wrapper.hpp"
#include <memory>
#include <vector>
#include <algorithm>
#include <iostream>
long factorial(long i)
{
long result = 1;
long current = 2;
while(current <= i) {
result *= current;
++current;
}
return result;
}
int main()
{
std::vector<int> arg {10, 9, 8, 7, 6, 10, 9, 8, 7, 6};
std::transform(arg.begin(), arg.end(), arg.begin(), memoize_wrapper<long, long>(factorial));
for(long i : arg) {
std::cout << i << "\n";
}
}
Answer 3 (score: 3)
I struggled with the same problem. I created a macro that also supports recursion (with a small modification to the recursive code). Here it is:
#include <map>
#include <tuple>

#define MEMOIZATOR(N, R, ...)                               \
R _ ## N (__VA_ARGS__);                                     \
std::map<std::tuple<__VA_ARGS__>, R> _memo_ ## N;           \
template <typename ... Args>                                \
R N (Args ... args) {                                       \
    auto& _memo = _memo_ ## N;                              \
    auto result = _memo.find(std::make_tuple(args...));     \
    if (result != _memo.end()) {                            \
        return result->second;                              \
    }                                                       \
    else {                                                  \
        auto result = _ ## N (args...);                     \
        _memo[std::make_tuple(args...)] = result;           \
        return result;                                      \
    }                                                       \
}
Usage is very simple:
MEMOIZATOR(fibonacci, long int, int);

long int _fibonacci(int n) { // note the leading underscore:
                             // it makes the recursive calls go through the memoizing wrapper
    if (n == 1 or n == 2) {
        return 1;
    }
    return fibonacci(n - 1) + fibonacci(n - 2);
}

fibonacci(40) // uses the memoizator, so it runs in linear time
              // (try it with and without the memoizator)
See it in action: http://ideone.com/C3JEUT :)
Answer 4 (score: 2)
Here is a (thread-safe) C++17 function template that acts like std::invoke but memoizes the result:
#include <functional>
#include <map>
#include <mutex>
#include <tuple>
#include <type_traits>

/**
 * @brief Drop-in replacement for std::invoke which memoizes the return
 *        result.
 *
 * @param[in] function The function whose result needs to be cached
 * @param[in] args     The function arguments
 *
 * @tparam Function The function type
 * @tparam Args     The argument types
 *
 * @return A value obtained either by evaluating the function, or by
 *         recalling it from a cache.
 *
 * @note The function provided must not be a type-erased function object
 *       like a raw function pointer or std::function, because this
 *       function depends on the uniqueness of the Function template
 *       parameter. If you were to call invoke_memoized(f, a) and
 *       invoke_memoized(g, b) in the same translation unit, where f and g
 *       were function pointers of the same type, and a and b were
 *       arguments of the same type, you'd end up using the same cache for
 *       both functions f and g. A reasonable attempt is made to detect
 *       these misuse cases via static_assert.
 */
template<typename Function, typename... Args>
auto invoke_memoized(Function function, Args... args)
{
    using key_type   = std::tuple<Args...>;
    using value_type = std::invoke_result_t<Function, Args...>;

    static_assert(! std::is_same_v<Function, std::function<value_type(Args...)>>,
                  "cannot memoize on std::function (use a lambda instead)");
    static_assert(! std::is_same_v<Function, value_type(*)(Args...)>,
                  "cannot memoize on function pointer (use a lambda instead)");

    static std::mutex mutex;
    static std::map<key_type, value_type> cache;

    auto key  = std::tuple(args...);
    auto lock = std::lock_guard<std::mutex>(mutex);

    if (cache.count(key))
    {
        return cache[key];
    }
    return cache[key] = std::apply(function, key);
}
You can use it like this:
auto c = invoke_memoized(std::plus<>(), 1, 2.3);
A static cache is maintained for each combination of function object type and argument types. As noted above, std::function and raw function pointers are rejected, because type-erased functions would end up sharing and confusing each other's caches. You can easily adapt the function to impose a limit on the cache size.
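As a hedged illustration of the per-type static cache described above (the slow_square / slow_cube lambdas and their one-second delays are assumptions for demonstration only, and the invoke_memoized template above is assumed to be in scope), repeated calls with the same closure type and arguments hit the cache, while a different closure type gets its own cache:

#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    auto slow_square = [](int x) {
        std::this_thread::sleep_for(std::chrono::seconds(1));
        return x * x;
    };
    auto slow_cube = [](int x) {
        std::this_thread::sleep_for(std::chrono::seconds(1));
        return x * x * x;
    };

    std::cout << invoke_memoized(slow_square, 4) << '\n'; // slow: computed and cached
    std::cout << invoke_memoized(slow_square, 4) << '\n'; // fast: served from the cache
    std::cout << invoke_memoized(slow_cube, 4) << '\n';   // slow: distinct closure type,
                                                          // so a distinct static cache
}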