Decorators

aiocache comes with a couple of decorators for caching results from asynchronous functions. Do not use the decorators on synchronous functions; doing so may lead to unexpected behavior.

cached

class aiocache.cached(ttl=None, key=None, key_builder=None, cache=<class 'aiocache.backends.memory.SimpleMemoryCache'>, serializer=None, plugins=None, alias=None, noself=False, **kwargs)[source]

Caches the function's return value under a key generated from module_name, function_name and args. The cache is available on the function object as <function_name>.cache.
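As a rough sketch of the documented scheme (this mirrors the description above, not aiocache's actual internals), the default key concatenates the module name, the function name and the stringified args and kwargs:

```python
# Hypothetical sketch of the documented default key scheme
# (module_name + function_name + args + kwargs); not aiocache's real code.
def default_key(fn, args, kwargs):
    ordered_kwargs = sorted(kwargs.items())
    return (fn.__module__ or "") + fn.__name__ + str(args) + str(ordered_kwargs)


async def get_user(user_id):
    return {"id": user_id}


# Identical calls map to the same key; different args give different keys.
key = default_key(get_user, (42,), {})
```

Because the key embeds the arguments, each distinct call signature gets its own cache entry.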

In some cases you will need to pass extra args to configure the cache object; an example would be endpoint and port for the RedisCache. You can pass those args as kwargs and they will be propagated accordingly.

Only one cache instance is created per decorated function. If you expect high concurrency of calls to the same function, you should adapt the pool size as needed.

Parameters:
  • ttl – int, seconds to store the function call result. Default is None, which means no expiration.
  • key – str value to use as the key for the function's return value. Takes precedence over the key_builder param. If neither key nor key_builder is passed, the key is built from module_name + function_name + args + kwargs.
  • key_builder – Callable that allows building the key dynamically. It receives the same args and kwargs as the called function.
  • cache – cache class to use when calling the set/get operations. Default is aiocache.SimpleMemoryCache.
  • serializer – serializer instance to use when calling dumps/loads. If it is None, the default one from the cache backend is used.
  • plugins – list of plugins to use when calling the cmd hooks. Default is pulled from the cache class being used.
  • alias – str specifying the alias to load the config from. If alias is passed, other config parameters are ignored, and the same cache identified by the alias is used on every call. If you need a per-function cache, specify the parameters explicitly without using alias.
  • noself – bool. If you are decorating a class method, by default self is also used to generate the key, so the same call made from different class instances uses different cache keys. Use noself=True if you want to ignore it.
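The effect of noself can be sketched like this (a hypothetical illustration of the documented behavior, not aiocache's real key-building code): dropping the bound instance from the args makes calls from different instances share cache entries.

```python
# Hypothetical sketch of how noself affects key generation for methods;
# mirrors the documented behaviour, not aiocache's actual implementation.
def build_key(fn, args, kwargs, noself=False):
    if noself:
        args = args[1:]  # drop the bound instance so all instances share keys
    ordered_kwargs = sorted(kwargs.items())
    return (fn.__module__ or "") + fn.__name__ + str(args) + str(ordered_kwargs)


class Client:
    async def fetch(self, item_id):
        return item_id


a, b = Client(), Client()

# Default (noself=False): the instance is part of the key, so each
# Client object gets its own cache entries for the same call.
k1 = build_key(Client.fetch, (a, 1), {})
k2 = build_key(Client.fetch, (b, 1), {})
assert k1 != k2

# noself=True: both instances map to the same key.
assert build_key(Client.fetch, (a, 1), {}, noself=True) == \
       build_key(Client.fetch, (b, 1), {}, noself=True)
```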
import asyncio
from collections import namedtuple

from aiocache import cached, RedisCache
from aiocache.serializers import PickleSerializer

Result = namedtuple('Result', "content, status")


@cached(
    ttl=10, cache=RedisCache, key="key", serializer=PickleSerializer(), port=6379, namespace="main")
async def cached_call():
    return Result("content", 200)


async def test_cached():
    cache = RedisCache(endpoint="127.0.0.1", port=6379, namespace="main")
    await cached_call()
    assert await cache.exists("key") is True
    await cache.delete("key")
    await cache.close()


if __name__ == "__main__":
    asyncio.run(test_cached())

multi_cached

class aiocache.multi_cached(keys_from_attr, key_builder=None, ttl=0, cache=<class 'aiocache.backends.memory.SimpleMemoryCache'>, serializer=None, plugins=None, alias=None, **kwargs)[source]

Only supports functions that return dict-like structures. This decorator caches each key/value pair of the dict-like object returned by the function. The cache is available on the function object as <function_name>.cache.

If key_builder is passed, each key of the function's output is transformed with it before being stored.

If the attribute specified as the source of keys is an empty list, the cache will be bypassed and the function will be called as expected.

Only one cache instance is created per decorated function. If you expect high concurrency of calls to the same function, you should adapt the pool size as needed.

Parameters:
  • keys_from_attr – arg or kwarg name of the function containing an iterable to use as keys to index into the cache.
  • key_builder – Callable that allows changing the format of the keys before storing. Receives the key and the same args and kwargs as the called function.
  • ttl – int, seconds to store the keys. Default is 0, which means no expiration.
  • cache – cache class to use when calling the multi_set/multi_get operations. Default is aiocache.SimpleMemoryCache.
  • serializer – serializer instance to use when calling dumps/loads. If it is None, the default one from the cache backend is used.
  • plugins – plugins to use when calling the cmd hooks. Default is pulled from the cache class being used.
  • alias – str specifying the alias to load the config from. If alias is passed, other config parameters are ignored, and the same cache identified by the alias is used on every call. If you need a per-function cache, specify the parameters explicitly without using alias.
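The lookup flow described above can be sketched with a minimal in-memory stand-in (a hypothetical illustration of the documented behavior, not aiocache's implementation): look up each requested key, call the function only for the misses, and store each key/value pair of the returned dict, transforming keys with key_builder first.

```python
import asyncio

# Hypothetical in-memory sketch of multi_cached's documented behaviour;
# not aiocache's real code. The key_builder signature is simplified here
# to take only the key.
def multi_cached_sketch(keys_from_attr, key_builder=lambda key: key):
    store = {}

    def decorator(fn):
        async def wrapper(**kwargs):
            keys = kwargs[keys_from_attr]
            if not keys:  # empty iterable: bypass the cache entirely
                return await fn(**kwargs)
            # Serve what we already have, compute only the misses.
            hits = {k: store[key_builder(k)] for k in keys
                    if key_builder(k) in store}
            missing = [k for k in keys if k not in hits]
            if missing:
                kwargs[keys_from_attr] = missing
                fresh = await fn(**kwargs)
                for k, v in fresh.items():
                    store[key_builder(k)] = v  # cache each key/value pair
                hits.update(fresh)
            return hits
        wrapper.store = store
        return wrapper
    return decorator


calls = []  # records which ids actually reached the function


@multi_cached_sketch("ids", key_builder=lambda key: "main:" + key)
async def fetch(ids=None):
    calls.append(list(ids))
    return {i: i.upper() for i in ids}


async def main():
    await fetch(ids=["a", "b"])
    return await fetch(ids=["a", "c"])  # only "c" triggers a call


result = asyncio.run(main())
```

After the second call, `result` contains both the cached "a" and the freshly computed "c", and `calls` shows the function only ever saw the cache misses.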
import asyncio

from aiocache import multi_cached, RedisCache

DICT = {
    'a': "Z",
    'b': "Y",
    'c': "X",
    'd': "W"
}


@multi_cached("ids", cache=RedisCache, namespace="main")
async def multi_cached_ids(ids=None):
    return {id_: DICT[id_] for id_ in ids}


@multi_cached("keys", cache=RedisCache, namespace="main")
async def multi_cached_keys(keys=None):
    return {id_: DICT[id_] for id_ in keys}


cache = RedisCache(endpoint="127.0.0.1", port=6379, namespace="main")


async def test_multi_cached():
    await multi_cached_ids(ids=['a', 'b'])
    await multi_cached_ids(ids=['a', 'c'])
    await multi_cached_keys(keys=['d'])

    assert await cache.exists('a')
    assert await cache.exists('b')
    assert await cache.exists('c')
    assert await cache.exists('d')

    await cache.delete("a")
    await cache.delete("b")
    await cache.delete("c")
    await cache.delete("d")
    await cache.close()


if __name__ == "__main__":
    asyncio.run(test_multi_cached())