Decorators

aiocache comes with a couple of decorators for caching the results of asynchronous functions. Do not use them on synchronous functions; this may lead to unexpected behavior.

cached

class aiocache.cached(ttl=None, key=None, key_from_attr=None, cache=<class 'aiocache.backends.memory.SimpleMemoryCache'>, serializer=<class 'aiocache.serializers.JsonSerializer'>, plugins=None, alias=None, noself=False, **kwargs)[source]

Caches the function's return value under a key generated from module_name, function_name and args.

In some cases you will need to pass extra args to configure the cache object, for example endpoint and port for RedisCache. You can pass those as kwargs and they will be propagated accordingly.

Each call will use the same connection for all of its cache operations. If you expect high concurrency on the function you are decorating, it is safer to set a high pool size (when using memcached or redis).
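
For instance, a hedged sketch of raising the redis pool size through decorator kwargs. pool_min_size and pool_max_size are assumed here to be RedisCache constructor parameters in this version; check your backend for the exact names.

from aiocache import cached, RedisCache


# Assumption: pool_min_size/pool_max_size are RedisCache kwargs; the
# decorator forwards them to the cache like any other extra kwarg.
@cached(
    cache=RedisCache, endpoint="127.0.0.1", port=6379,
    pool_min_size=2, pool_max_size=50)
async def highly_concurrent_call():
    return "result"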

Parameters:
  • ttl – int, seconds to store the function result. Default is None, which means no expiration.
  • key – str value to set as the key for the function's return value. Takes precedence over the key_from_attr param. If neither key nor key_from_attr is passed, the key is built from module_name + function_name + args + kwargs.
  • key_from_attr – str name of the arg or kwarg of the function to use as the key.
  • cache – cache class to use for the set/get operations. Default is aiocache.SimpleMemoryCache.
  • serializer – serializer instance to use for the dumps/loads calls. Default is JsonSerializer.
  • plugins – list of plugins to use when calling the cmd hooks. Default is pulled from the cache class being used.
  • alias – str specifying the alias to load the config from. If alias is passed, other config parameters are ignored. A new cache is created every time.
  • noself – bool. When decorating a class method, self is by default part of the generated key, so the same call made from different instances uses different cache keys. Pass noself=True to ignore it; see the sketch after the example below.
import asyncio

from collections import namedtuple

from aiocache import cached, RedisCache
from aiocache.serializers import PickleSerializer

Result = namedtuple('Result', "content, status")


@cached(
    ttl=10, cache=RedisCache, key="key", serializer=PickleSerializer(),
    port=6379, namespace="main")
async def cached_call():
    return Result("content", 200)


def test_cached():
    cache = RedisCache(endpoint="127.0.0.1", port=6379, namespace="main")
    loop = asyncio.get_event_loop()
    loop.run_until_complete(cached_call())
    assert loop.run_until_complete(cache.exists("key")) is True
    loop.run_until_complete(cache.delete("key"))
    loop.run_until_complete(cache.close())


if __name__ == "__main__":
    test_cached()
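
A minimal sketch of the noself parameter with the default SimpleMemoryCache. The Client class and the call counter are illustrative, not part of aiocache: since self is left out of the generated key, a second instance hits the entry cached by the first.

import asyncio

from aiocache import cached

calls = 0


class Client:
    # noself=True removes `self` from the generated key, so every
    # instance shares the same cache entry for the same arguments.
    @cached(ttl=60, noself=True)
    async def fetch(self, item_id):
        global calls
        calls += 1
        return {"id": item_id}


def test_noself():
    loop = asyncio.get_event_loop()
    loop.run_until_complete(Client().fetch("x"))
    # A different instance, but the same key: served from the cache.
    loop.run_until_complete(Client().fetch("x"))
    assert calls == 1


if __name__ == "__main__":
    test_noself()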

multi_cached

class aiocache.multi_cached(keys_from_attr, key_builder=None, ttl=0, cache=<class 'aiocache.backends.memory.SimpleMemoryCache'>, serializer=<class 'aiocache.serializers.JsonSerializer'>, plugins=None, alias=None, **kwargs)[source]

This decorator caches each key/value pair of the dict-like object returned by the function. Only functions returning dict-like structures are supported.

If key_builder is passed, each key is transformed with it before being stored.

If the attribute specified as the source of keys is an empty list, the cache is bypassed and the function is called as usual.

Each call will use the same connection for all of its cache operations. If you expect high concurrency on the function you are decorating, it is safer to set a high pool size (when using memcached or redis).

Parameters:
  • keys_from_attr – name of the arg or kwarg of the function containing an iterable to use as cache keys.
  • key_builder – callable that allows changing the format of the keys before storing. Receives a dict with all the args of the function.
  • ttl – int, seconds to store the keys. Default is 0, which means no expiration.
  • cache – cache class to use for the multi_set/multi_get operations. Default is aiocache.SimpleMemoryCache.
  • serializer – serializer instance to use for the dumps/loads calls. Default is JsonSerializer.
  • plugins – plugins to use when calling the cmd hooks. Default is pulled from the cache class being used.
  • alias – str specifying the alias to load the config from. If alias is passed, other config parameters are ignored. A new cache is created every time.
import asyncio

from aiocache import multi_cached, RedisCache

DICT = {
    'a': "Z",
    'b': "Y",
    'c': "X",
    'd': "W"
}


@multi_cached("ids", cache=RedisCache, namespace="main")
async def multi_cached_ids(ids=None):
    return {id_: DICT[id_] for id_ in ids}


@multi_cached("keys", cache=RedisCache, namespace="main")
async def multi_cached_keys(keys=None):
    return {id_: DICT[id_] for id_ in keys}


cache = RedisCache(endpoint="127.0.0.1", port=6379, namespace="main")


def test_multi_cached():
    loop = asyncio.get_event_loop()
    loop.run_until_complete(multi_cached_ids(ids=['a', 'b']))
    loop.run_until_complete(multi_cached_ids(ids=['a', 'c']))
    loop.run_until_complete(multi_cached_keys(keys=['d']))

    assert loop.run_until_complete(cache.exists('a'))
    assert loop.run_until_complete(cache.exists('b'))
    assert loop.run_until_complete(cache.exists('c'))
    assert loop.run_until_complete(cache.exists('d'))

    loop.run_until_complete(cache.delete("a"))
    loop.run_until_complete(cache.delete("b"))
    loop.run_until_complete(cache.delete("c"))
    loop.run_until_complete(cache.delete("d"))
    loop.run_until_complete(cache.close())


if __name__ == "__main__":
    test_multi_cached()
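
A complementary sketch with the default SimpleMemoryCache. It assumes the decorator invokes the function only for the keys missing from the cache, rewriting the keys argument accordingly, which is how multi_cached behaves in this version; get_values and the fetched list are illustrative names.

import asyncio

from aiocache import multi_cached

fetched = []


@multi_cached("ids")
async def get_values(ids=None):
    # Record which ids the function was actually asked to compute.
    fetched.extend(ids)
    return {id_: id_.upper() for id_ in ids}


def test_partial_hit():
    loop = asyncio.get_event_loop()
    loop.run_until_complete(get_values(ids=["a", "b"]))
    # "a" is already cached, so the function only receives ["c"].
    loop.run_until_complete(get_values(ids=["a", "c"]))
    assert fetched == ["a", "b", "c"]


if __name__ == "__main__":
    test_partial_hit()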