Handling Long HTTP Requests Using Asynchronous Workers: Python, Celery, and RabbitMQ

In my last post, I talked about how to pull responses from my database, display them in my client (a Corona SDK/Lua mobile app), track which ones are selected based on user touch input (by tracking IDs in an event handler), and then update a value in my database based on that information.

For this post I’m going to talk about how to handle lengthy HTTP requests in Python using Celery and RabbitMQ. The application I am building has one HTTP request that is computationally expensive and takes a long time to run. Instead of making the client wait around for the request to finish, we can hand it off to an asynchronous worker process that runs in the background and delivers results whenever it is done.

I decided on Celery and RabbitMQ (over the alternatives) because DigitalOcean had a nice tutorial using these technologies. I’ve become quite fond of DigitalOcean tutorials for their accurate descriptions, succinct explanations, and comprehensive coverage (https://www.digitalocean.com/community/tutorials/how-to-use-celery-with-rabbitmq-to-queue-tasks-on-an-ubuntu-vps). The tutorial was fairly straightforward and walked me through installing Celery and RabbitMQ and configuring them for use within my Python Falcon application. The result is a Celery job queue (a place to dump computationally expensive work) with RabbitMQ as the messaging system (or message broker).
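For reference, the heart of that setup boils down to a couple of commands (these assume Ubuntu, as in the tutorial; adjust for your own system):

sudo apt-get install rabbitmq-server
sudo pip install celery

Once the task module shown below exists, you start a worker with something like celery worker -A celery_process --loglevel=info (celery_process being the module name I use later in this post); the worker is the process that actually consumes jobs from the queue.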

With this system in place, I can now run my long HTTP request processes in the background without interfering with or holding up the front-end of my application (users can continue playing the game). When a process finishes, it inserts its results into my database so my client can pull them later on. I still need to figure out error handling for these processes, but at least it is functional right now. Below you can find the Celery code, using RabbitMQ as the message broker, within my Python script.

from celery import Celery

# Create the Celery app, using RabbitMQ (AMQP) as both the
# message broker and the result backend.
app = Celery('tasks', backend='amqp', broker='amqp://')

@app.task
def function_name(parameters):
    # code for the function goes here
    return results
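One detail worth knowing: the bare broker URL amqp:// makes Celery connect to a RabbitMQ server on localhost with the default guest credentials (it is shorthand for amqp://guest:guest@localhost:5672//), and backend='amqp' stores task results in RabbitMQ as well. That backend matters less in my case, since my tasks write their results straight into the database.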

If I save this file as celery_process.py, then I can put it in the main directory of my project and import it into my main application script using from celery_process import function_name. Then, when I need to call my asynchronous process, I use the following code:

function_name.delay(parameters)

This calls my function named function_name in an asynchronous worker process spun up by Celery, with RabbitMQ passing the message. Pretty cool.
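To show how this fits into the web layer, here is a minimal sketch of what the call site can look like in a Falcon app. The resource name, route, and player_id parameter are all hypothetical stand-ins; the point is that the handler enqueues the work and returns immediately:

import falcon

from celery_process import function_name

class ExpensiveTaskResource(object):
    # Hypothetical Falcon resource that hands the slow work to Celery.

    def on_post(self, req, resp):
        # Hypothetical parameter; substitute whatever your task needs.
        player_id = req.get_param('player_id')
        # Enqueue the job and return right away; a Celery worker picks
        # it up and writes its results to the database when done.
        function_name.delay(player_id)
        resp.status = falcon.HTTP_202  # Accepted: work continues in background

app = falcon.API()
app.add_route('/expensive-task', ExpensiveTaskResource())

Note that .delay() also returns an AsyncResult handle, so if you would rather poll for completion than write results to the database, you can hold onto it and check result.ready() or call result.get().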

That’s it for this post. We’ve talked about setting up Celery and RabbitMQ to handle long or computationally expensive HTTP requests without interrupting the experience on the front-end.
