Decorators¶
aiocache comes with a couple of decorators for caching results from asynchronous functions. Do not use these decorators on synchronous functions; doing so may lead to unexpected behavior.
cached¶
- class aiocache.cached(ttl=<object object>, namespace='', key_builder=None, skip_cache_func=<function cached.<lambda>>, cache=<class 'aiocache.backends.memory.SimpleMemoryCache'>, serializer=None, plugins=None, alias=None, noself=False, **kwargs)[source]¶
Caches the function’s return value into a key generated with module_name, function_name and args. The cache is available in the function object as <function_name>.cache.
In some cases you will need to send more args to configure the cache object. An example would be endpoint and port for the Redis cache. You can send those args as kwargs and they will be propagated accordingly.
Only one cache instance is created per decorated function. If you expect high concurrency of calls to the same function, you should adapt the pool size as needed.
Extra args that are injected into the decorated function, which you can use to control the cache behavior, are listed below (a short usage sketch follows the list):
cache_read
: Controls whether the function call will try to read from the cache first or not. Enabled by default.
cache_write
: Controls whether the function call will try to write to the cache once the result has been retrieved. Enabled by default.
aiocache_wait_for_write
: Controls whether the call of the function will wait for the value in the cache to be written. If set to False, the write happens in the background. Enabled by default.
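A minimal sketch of passing these injected arguments at call time, assuming a function decorated with cached as described above (get_user and its body are hypothetical):

import asyncio
from aiocache import cached

@cached(ttl=10)
async def get_user(user_id):
    # Hypothetical expensive lookup; stands in for a real data source.
    return {"id": user_id, "name": "example"}

async def main():
    await get_user(1)                                  # normal call: reads and writes the cache
    await get_user(1, cache_read=False)                # skip the cache read, always run the function
    await get_user(1, cache_write=False)               # read-only: never store the result
    await get_user(1, aiocache_wait_for_write=False)   # write to the cache in the background

asyncio.run(main())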
- Parameters:
ttl – int seconds to store the function call. Default is None which means no expiration.
namespace – string to use as default prefix for the key used in all operations of the backend. Default is an empty string, “”.
key_builder – Callable that allows building the key dynamically. It receives the function plus the same args and kwargs passed to the function. This behavior is necessarily different than BaseCache.build_key().
skip_cache_func – Callable that receives the result after calling the wrapped function and should return True if the value should skip the cache (or False to store in the cache). e.g. to avoid caching None results: lambda r: r is None
cache – cache class to use when calling the set/get operations. Default is aiocache.SimpleMemoryCache.
serializer – serializer instance to use when calling the dumps/loads. If it is None, the default one from the cache backend is used.
plugins – list of plugins to use when calling the cmd hooks. Default is pulled from the cache class being used.
alias – str specifying the alias to load the config from. If alias is passed, other config parameters are ignored. The same cache identified by alias is used on every call. If you need a per-function cache, specify the parameters explicitly without using alias. A configuration sketch follows this list.
noself – bool; if you are decorating a class function, by default self is also used to generate the key, so the same function called by different class instances will use different cache keys. Use noself=True if you want to ignore it (a sketch follows the example below).
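The alias parameter loads a named configuration registered through aiocache.caches. A minimal sketch, assuming the "default" entry shown here has been registered before the decorated function is first called:

import asyncio
from aiocache import caches, cached

caches.set_config({
    "default": {
        "cache": "aiocache.SimpleMemoryCache",
        "serializer": {"class": "aiocache.serializers.StringSerializer"},
    },
})

@cached(alias="default")  # other cache parameters are ignored when alias is set
async def get_greeting():
    return "hello"

asyncio.run(get_greeting())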
import asyncio

from collections import namedtuple
import redis.asyncio as redis

from aiocache import cached, Cache
from aiocache.serializers import PickleSerializer

Result = namedtuple('Result', "content, status")


@cached(
    ttl=10, cache=Cache.REDIS, key_builder=lambda *args, **kw: "key",
    serializer=PickleSerializer(), namespace="main", client=redis.Redis())
async def cached_call():
    return Result("content", 200)


async def test_cached():
    async with Cache(Cache.REDIS, namespace="main", client=redis.Redis()) as cache:
        await cached_call()
        exists = await cache.exists("key")
        assert exists is True
        await cache.delete("key")


if __name__ == "__main__":
    asyncio.run(test_cached())
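A minimal sketch of the noself parameter, assuming a method decorated with cached (the Service class is hypothetical):

import asyncio
from aiocache import cached

class Service:
    # With noself=True, self is not part of the generated key, so every
    # Service instance shares the same cache entry for the same arguments.
    @cached(noself=True, ttl=10)
    async def lookup(self, item_id):
        return {"id": item_id}

async def main():
    a, b = Service(), Service()
    await a.lookup(1)  # computes and caches
    await b.lookup(1)  # served from the same cache entry

asyncio.run(main())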
multi_cached¶
- class aiocache.multi_cached(keys_from_attr, namespace='', key_builder=None, skip_cache_func=<function multi_cached.<lambda>>, ttl=<object object>, cache=<class 'aiocache.backends.memory.SimpleMemoryCache'>, serializer=None, plugins=None, alias=None, **kwargs)[source]¶
Only supports functions that return dict-like structures. This decorator caches each key/value pair of the dict-like object returned by the function. The dict keys of the returned data should match the set of keys that are passed to the decorated callable in an iterable object. The name of that argument is passed to this decorator via the parameter keys_from_attr. keys_from_attr can be the name of a positional or keyword argument.
If the argument specified by keys_from_attr is an empty list, the cache will be ignored and the function will be called. If only some of the keys in keys_from_attr are cached (and cache_read is True), those values will be fetched from the cache, and only the uncached keys will be passed to the callable via the argument specified by keys_from_attr (see the sketch below).
By default, the callable’s name and call signature are not incorporated into the cache key, so if there is another cached function returning a dict with the same keys, those keys will be overwritten. To avoid this, use a specific namespace in each cache decorator or pass a key_builder (a key_builder sketch appears at the end of this section).
If key_builder is passed, then the values of keys_from_attr will be transformed before requesting them from the cache. Equivalently, the keys in the dict-like mapping returned by the decorated callable will be transformed before storing them in the cache.
The cache is available in the function object as <function_name>.cache.
Only one cache instance is created per decorated function. If you expect high concurrency of calls to the same function, you should adapt the pool size as needed.
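A minimal sketch of the partial-hit behavior described above, using the default in-memory backend (fetch and its data are hypothetical):

import asyncio
from aiocache import multi_cached

@multi_cached("ids")
async def fetch(ids):
    # Illustrative: receives only the keys that were not found in the cache.
    print("fetching", ids)
    return {id_: id_.upper() for id_ in ids}

async def main():
    await fetch(ids=["a", "b"])       # cache miss: fetches a and b
    await fetch(ids=["a", "b", "c"])  # partial hit: only c is fetched

asyncio.run(main())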
Extra args that are injected into the decorated function, which you can use to control the cache behavior, are the same as for cached:
cache_read
: Controls whether the function call will try to read from the cache first or not. Enabled by default.
cache_write
: Controls whether the function call will try to write to the cache once the result has been retrieved. Enabled by default.
aiocache_wait_for_write
: Controls whether the call of the function will wait for the value in the cache to be written. If set to False, the write happens in the background. Enabled by default.
- Parameters:
keys_from_attr – name of the arg or kwarg in the decorated callable that contains an iterable that yields the keys returned by the decorated callable.
namespace – string to use as default prefix for the key used in all operations of the backend. Default is an empty string, “”.
key_builder – Callable that enables mapping the decorated function’s keys to the keys used by the cache. Receives a key from the iterable corresponding to keys_from_attr, the decorated callable, and the positional and keyword arguments that were passed to the decorated callable. This behavior is necessarily different than BaseCache.build_key() and the call signature differs from cached.key_builder.
skip_cache_func – Callable that receives both key and value and returns True if that key-value pair should not be cached (or False to store it in the cache). The keys and values to be passed are taken from the wrapped function’s result.
ttl – int seconds to store the keys. Default is 0 which means no expiration.
cache – cache class to use when calling the multi_set/multi_get operations. Default is aiocache.SimpleMemoryCache.
serializer – serializer instance to use when calling the dumps/loads. If it is None, the default one from the cache backend is used.
plugins – plugins to use when calling the cmd hooks. Default is pulled from the cache class being used.
alias – str specifying the alias to load the config from. If alias is passed, other config parameters are ignored. The same cache identified by alias is used on every call. If you need a per-function cache, specify the parameters explicitly without using alias.
import asyncio

import redis.asyncio as redis

from aiocache import multi_cached, Cache

DICT = {
    'a': "Z",
    'b': "Y",
    'c': "X",
    'd': "W"
}

cache = Cache(Cache.REDIS, namespace="main", client=redis.Redis())


@multi_cached("ids", cache=Cache.REDIS, namespace="main", client=cache.client)
async def multi_cached_ids(ids=None):
    return {id_: DICT[id_] for id_ in ids}


@multi_cached("keys", cache=Cache.REDIS, namespace="main", client=cache.client)
async def multi_cached_keys(keys=None):
    return {id_: DICT[id_] for id_ in keys}


async def test_multi_cached():
    await multi_cached_ids(ids=("a", "b"))
    await multi_cached_ids(ids=("a", "c"))
    await multi_cached_keys(keys=("d",))

    assert await cache.exists("a")
    assert await cache.exists("b")
    assert await cache.exists("c")
    assert await cache.exists("d")

    await cache.delete("a")
    await cache.delete("b")
    await cache.delete("c")
    await cache.delete("d")
    await cache.close()


if __name__ == "__main__":
    asyncio.run(test_multi_cached())
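To avoid the key collisions mentioned earlier without relying on a dedicated namespace, a key_builder can prefix each key with the callable’s name. A minimal sketch, assuming the call signature described in the parameter list above (the prefixing scheme is hypothetical):

import asyncio
from aiocache import multi_cached

def prefixed_key(key, fn, *args, **kwargs):
    # Incorporate the callable's name so two functions returning the
    # same dict keys do not overwrite each other's entries.
    return f"{fn.__name__}:{key}"

@multi_cached("ids", key_builder=prefixed_key)
async def fetch_names(ids):
    return {id_: f"name-{id_}" for id_ in ids}

async def main():
    await fetch_names(ids=["a", "b"])  # stored as fetch_names:a, fetch_names:b

asyncio.run(main())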