Implementing a retry decorator can protect you against unexpected one-off exceptions.
People often describe Python as a “glue language.” To me, the term entails that a language helps connect systems and makes sure that data gets from A to B in the desired structure and format.
I have built countless ETL (Extract, Transform, Load) scripts with Python. All of those scripts essentially function according to the same principle: they pull data from somewhere, transform it, and then run a final operation. This last operation is typically an upload somewhere, but it could also be a conditional deletion.
An ever-increasing proportion of a typical company’s infrastructure is moving to the cloud, and more companies are shifting towards a microservice approach. This paradigm shift away from local towards cloud-based systems means that you have probably also faced a situation where you had to pull data from, or write data to, somewhere that is not your local computer.
On a small scale, there rarely are problems with that. If some extraction or writeback fails, you typically notice it and can remedy the mistake. But as you move towards larger-scale operations and potentially hundreds of thousands of transactions, you don’t want to get screwed over by a temporary drop of your internet connection, too many concurrent writes, a temporarily unresponsive source system, or god knows what else.
I found a very simple retry decorator to be a saving grace in countless situations like that. Most of my projects, at one point or another, end up with the retry decorator in some util module.
Decorator
Functions are first-class objects
In Python, functions are first-class objects. A function is just like any other object. This fact, among other things, means that a function can be dynamically created, passed to a function itself, and even changed. Take a look at the following (albeit silly) example:
def my_function(x):
    print(x)

IN: my_function(2)
OUT: 2

IN:
my_function.yolo = 'you live only once'
print(my_function.yolo)

OUT: you live only once
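Because functions are objects, they can also be passed into other functions as arguments. A minimal sketch of that idea (the helper names apply_twice and increment are made up for illustration):

```python
def apply_twice(func, x):
    # func is just an object we received as an argument; we can call it
    return func(func(x))

def increment(n):
    return n + 1

print(apply_twice(increment, 3))
# OUT: 5
```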
Decorating a function
It is good to know that we can wrap a function with another function to fulfill a particular need. We could, for example, make sure that the function reports to some logging endpoint whenever called, we could print out the arguments, we could implement type checking, preprocessing, or postprocessing to just name a few possibilities. Let’s take a look at a simple example:
def first_func(x):
    return x**2

def second_func(x):
    return x - 2
Both functions fail when called with the string '2'. We could throw a type conversion function into the mix and decorate our first_func and second_func with it.
def convert_to_numeric(func):
    # define a function within the outer function
    def new_func(x):
        return func(float(x))
    # return the newly defined function
    return new_func
This convert_to_numeric wrapper function expects a function as an argument and returns another function. Now, whereas the functions previously failed, if you wrap them and then call them with a string number, everything works as expected.
new_first_func = convert_to_numeric(first_func)

# convert_to_numeric returns this function:
# def new_func(x):
#     return first_func(float(x))

IN: new_first_func('2')
OUT: 4.0

IN: convert_to_numeric(second_func)('2')
OUT: 0.0
So what is going on here?
Well, our convert_to_numeric takes a function (A) as an argument and returns a new function (B). The new function (B), when called, calls function (A), but instead of calling it with the passed argument x, it calls function (A) with float(x), thereby solving our previous TypeError problem.
Decorator Syntax
To make things a little easier for the developer, Python provides a special syntax for this. We can also do the following:
@convert_to_numeric
def first_func(x):
    return x**2
The above syntax is equivalent to:
def first_func(x):
    return x**2

first_func = convert_to_numeric(first_func)
This syntax makes it a little clearer what exactly is happening, especially when using multiple decorators.
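To see why order matters with multiple decorators, here is a small sketch with two made-up decorators. They apply bottom-up, i.e. the decorator closest to the function wraps first:

```python
def shout(func):
    # wraps a function so its result is uppercased
    def wrapper(x):
        return func(x).upper()
    return wrapper

def exclaim(func):
    # wraps a function so its result gets an exclamation mark
    def wrapper(x):
        return func(x) + '!'
    return wrapper

@shout
@exclaim
def greet(name):
    return f'hello {name}'

# Equivalent to: greet = shout(exclaim(greet))
print(greet('ada'))
# OUT: HELLO ADA!
```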
Retry!
Now that we have covered the basics, let’s move on to my favorite and heavily used retry decorator:
from functools import wraps
import time
import logging
import random

logger = logging.getLogger(__name__)


def log(msg, logger=None):
    if logger:
        logger.warning(msg)
    else:
        print(msg)


def retry(exceptions, total_tries=4, initial_wait=0.5, backoff_factor=2, logger=None):
    """
    Retry the decorated function, applying an exponential backoff.

    Args:
        exceptions: Exception(s) that trigger a retry; can be a tuple.
        total_tries: Total number of tries.
        initial_wait: Time to wait before the first retry.
        backoff_factor: Backoff multiplier (e.g. a value of 2 will double
            the delay each retry).
        logger: Logger to be used; if none is specified, print.
    """
    def retry_decorator(f):
        @wraps(f)
        def func_with_retries(*args, **kwargs):
            _tries, _delay = total_tries + 1, initial_wait
            while _tries > 1:
                try:
                    log(f'{total_tries + 2 - _tries}. try:', logger)
                    return f(*args, **kwargs)
                except exceptions as e:
                    _tries -= 1
                    print_args = args if args else 'no args'
                    if _tries == 1:
                        msg = str(f'Function: {f.__name__}\n'
                                  f'Failed despite best efforts after {total_tries} tries.\n'
                                  f'args: {print_args}, kwargs: {kwargs}')
                        log(msg, logger)
                        raise
                    msg = str(f'Function: {f.__name__}\n'
                              f'Exception: {e}\n'
                              f'Retrying in {_delay} seconds!, args: {print_args}, kwargs: {kwargs}\n')
                    log(msg, logger)
                    time.sleep(_delay)
                    _delay *= backoff_factor
        return func_with_retries
    return retry_decorator
Wrapping a wrapped function. That is some inception stuff right there. But bear with me, it is not that complicated!
Let’s walk through the code step by step:
- retry: The outermost function. It parameterizes our decorator, i.e. which exceptions we want to handle, how often we want to try, how long we wait between tries, and what our exponential backoff factor is (i.e. the number we multiply the waiting time by each time we fail).
- retry_decorator: The parameterized decorator, which is returned by our retry function. We decorate the function within retry_decorator with @wraps. Strictly speaking, this is not necessary for the functionality. This wrapper updates the __name__ and __doc__ of the wrapped function (if we didn’t do that, our function’s __name__ would always be func_with_retries).
- func_with_retries: Applies the retry logic. This function wraps the function call in try-except blocks and implements the exponential backoff wait and some logging.
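The effect of @wraps can be sketched in isolation (the wrapper names below are hypothetical, not part of the retry code):

```python
from functools import wraps

def plain_wrapper(f):
    def inner(*args, **kwargs):
        return f(*args, **kwargs)
    return inner

def wraps_wrapper(f):
    @wraps(f)  # copies __name__, __doc__, etc. from f onto inner
    def inner(*args, **kwargs):
        return f(*args, **kwargs)
    return inner

@plain_wrapper
def foo():
    """foo's docstring"""

@wraps_wrapper
def bar():
    """bar's docstring"""

print(foo.__name__)  # without @wraps, the original name is lost
# OUT: inner
print(bar.__name__)  # with @wraps, it is preserved
# OUT: bar
```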
Usage:
A function decorated with the retry decorator, trying up to four times on any exception:
@retry(Exception, total_tries=4, logger=logger)
def test_func(*args, **kwargs):
    rnd = random.random()
    if rnd < .2:
        raise ConnectionAbortedError('Connection was aborted :(')
    elif rnd < .4:
        raise ConnectionRefusedError('Connection was refused :/')
    elif rnd < .6:
        raise ConnectionResetError('Guess the connection was reset')
    elif rnd < .8:
        raise TimeoutError('This took too long')
    else:
        return 'Yay!!'

print(test_func('hi', 'bye', hi='ciao'))
Alternatively, a little more specific: a function decorated with a retry on TimeoutError only, trying up to two times.
@retry(TimeoutError, total_tries=2, logger=logger)
def test_func(*args, **kwargs):
    rnd = random.random()
    if rnd < .2:
        raise ConnectionAbortedError('Connection was aborted :(')
    elif rnd < .4:
        raise ConnectionRefusedError('Connection was refused :/')
    elif rnd < .6:
        raise ConnectionResetError('Guess the connection was reset')
    elif rnd < .8:
        raise TimeoutError('This took too long')
    else:
        return 'Yay!!'

print(test_func('hi', 'bye', hi='ciao'))
Results:
Calling the decorated function and running into errors would then lead to something like this:

Here we have nice logging: we print out the args, the kwargs, and the function name, which should make debugging and fixing the problem a breeze (should the error persist even after all the retries are used up).
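As a sanity check on the defaults: with total_tries=4, initial_wait=0.5, and backoff_factor=2, the decorator sleeps after each failed try except the last, so the maximum time spent waiting before the final raise is 0.5 + 1 + 2 = 3.5 seconds. A quick sketch of that arithmetic (max_backoff_wait is a made-up helper, not part of the decorator):

```python
def max_backoff_wait(total_tries=4, initial_wait=0.5, backoff_factor=2):
    # one sleep after each failed try except the final one
    waits = [initial_wait * backoff_factor ** i for i in range(total_tries - 1)]
    return waits, sum(waits)

waits, total = max_backoff_wait()
print(waits)
# OUT: [0.5, 1.0, 2.0]
print(total)
# OUT: 3.5
```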
Conclusion
There you have it. You learned how decorators work in Python and how to decorate your mission-critical functions with a simple retry decorator to make sure they will execute even in the face of some uncertainty.
I hope you liked it and will find some use for it. Feel free to fork it, report bugs, or ask for new features on its GitHub!