Django Caching

For best performance, translations must be cached in your application.

The TML library offers a few caching options:

Memcache

To provide a separate memcache server to store your translations, independently from your Django cache, add the following configuration to the Django settings file:

TML = {
    ...
    'cache': {
        'enabled': True,
        'adapter': 'memcached',
        'backend': 'pylibmc',  # by default uses python-memcached
        'namespace': 'tml-foody'
    }
}

The above example uses a shared caching model: all the nodes that serve your Django application (uwsgi or gunicorn) share the same translation cache. This approach saves memory and allows you to invalidate and rebuild your translation cache without having to redeploy your application.

To update the cache, execute the following line of code:

from tml.cache import CachedClient

CachedClient.instance().upgrade_version()  

This will invalidate your current cache and rebuild it with the latest translations from the Translation Exchange CDN. As each page loads, it will pull the latest cache data from the CDN and put it into your local shared cache.
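
For example, you could wrap this call in a small management command of your own and run it from a deploy pipeline or a cron job. The command name and file location below are hypothetical:

# yourapp/management/commands/upgrade_tml_cache.py (hypothetical location)
from django.core.management.base import BaseCommand

from tml.cache import CachedClient


class Command(BaseCommand):
    help = 'Invalidate the shared TML translation cache and switch to the latest release.'

    def handle(self, *args, **options):
        # Bump the cache version; each page load will then repopulate the shared
        # cache with fresh translations from the Translation Exchange CDN.
        CachedClient.instance().upgrade_version()
        self.stdout.write('TML cache version upgraded.')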

If you would like to warm up your cache manually before running your web servers, execute the following command:

./manage.py tml_cache --warmup_cache

This command will upgrade your shared cache with every language/source/translation published in the latest release.
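
If you prefer to trigger the warmup from Python rather than from the shell, for example in a deployment script with Django settings already configured, the same command can be invoked through Django's call_command:

from django.core.management import call_command

# Equivalent to: ./manage.py tml_cache --warmup_cache
call_command('tml_cache', '--warmup_cache')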

Files

An alternative to the shared cache is a static, file-based cache. This approach requires downloading and installing the release inside your application and re-releasing your app whenever translations change.

The translation cache will be loaded and stored in every thread/process on every server, but it serves translations faster and does not require a cache warmup.

To specify in-memory, file-based cache, provide the following configuration:

import os

TML = {
    ...
    'cache': {
        'enabled': True,
        'adapter': 'file',
        'version': '20160303075532',
        'path': os.path.join(BASE_DIR, 'tml/cache')
    }
}
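
If you regenerate the file cache on every build, you may not want to hard-code the version in settings. One option, sketched below, is to read it from an environment variable set by your build; the TML_CACHE_VERSION variable name is just an assumed convention:

import os

TML = {
    ...
    'cache': {
        'enabled': True,
        'adapter': 'file',
        # Fall back to a pinned release when the variable is not set.
        'version': os.environ.get('TML_CACHE_VERSION', '20160303075532'),
        'path': os.path.join(BASE_DIR, 'tml/cache')
    }
}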

The file-based cache must be downloaded and installed into the specified folder before you deploy your application. To automate this task and eliminate the risk of doing it incorrectly, execute the following command:

./manage.py tml_cache --download --cache_dir='YOUR DIR'

where the cache_dir argument specifies the directory in which to install your latest release. If you omit this argument, the cache['path'] value from your application settings is used by default.
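
Since the application expects the release files to be present at startup, you may also want a quick sanity check in your settings or deployment script; a minimal sketch (the check and error message are just one possible convention):

import os

from django.core.exceptions import ImproperlyConfigured

# Fail fast if the file cache has not been installed into the configured path.
cache_path = TML['cache']['path']
if TML['cache']['enabled'] and not os.path.isdir(cache_path):
    raise ImproperlyConfigured(
        'TML file cache missing at %s; run ./manage.py tml_cache --download first.' % cache_path)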

Custom

If you would like to use a completely custom cache adapter that stores data in external shared storage, create your own class implementing the following methods:

class CustomCacheAdapter(object):

    def __init__(self, server, params, library):
        # initialize your adapter
        # use tml.config.CONFIG to access your configuration
        pass

    def read_only(self):
        # indicates if the cache is read only
        pass

    def store(self, key, data, opts=None):
        # stores data in the cache
        pass

    def fetch(self, key, opts=None):
        # fetches the element
        # on a cache miss for the specified key, a `miss_callback` passed in `opts`
        # can be used to re-fetch the data from the database or any other source
        pass

    def delete(self, key, opts=None):
        # deletes data from the cache
        pass

    def exist(self, key, opts=None):
        # checks if data exists in the cache
        pass
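
For example, here is a sketch of an adapter backed by Redis using the redis-py package. The connection settings, return values, and the miss_callback handling in fetch are assumptions for illustration; adjust them to your storage and to how your application uses opts:

import json

import redis


class RedisCacheAdapter(object):
    """Example custom adapter that keeps TML data in a shared Redis instance."""

    def __init__(self, server, params, library):
        # In real code, read the connection settings from your configuration
        # (e.g. via tml.config.CONFIG, as noted in the skeleton above);
        # the defaults here are only for the sketch.
        self.client = redis.Redis(host='localhost', port=6379, db=0)

    def read_only(self):
        # This adapter supports both reads and writes.
        return False

    def store(self, key, data, opts=None):
        # Serialize to JSON so any node can read the value back.
        self.client.set(key, json.dumps(data))
        return data

    def fetch(self, key, opts=None):
        value = self.client.get(key)
        if value is not None:
            return json.loads(value)
        # On a cache miss, fall back to a `miss_callback` if one was provided
        # (the exact `opts` contract is assumed from the comments above).
        opts = opts or {}
        if 'miss_callback' in opts:
            data = opts['miss_callback'](key)
            self.store(key, data)
            return data
        return None

    def delete(self, key, opts=None):
        self.client.delete(key)

    def exist(self, key, opts=None):
        return bool(self.client.exists(key))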

Finally, specify the adapter in the config:

TML = {  
    'cache': {
        'enabled': True,
        'adapter': 'path.to.CustomCacheAdapter',
        'setting_1': 'value_1',
        'setting_2': 'value_2'
    }
}