Redbox Rentals RSS Feed

Posted: 4/1/13 12:30 AM

For as long as I can remember, Netflix has provided users with an RSS feed for their DVD and instant queues.  This is convenient not only for previewing your own queue on a mobile device using Google Reader or another RSS reader, but also for seeing what your friends are renting.  That way you know what your friends are watching and can make suggestions, or even collaborate on rentals and combine your viewings.  I'm mostly using Redbox now because I don't rent enough movies each month, and I was starting to feel left out of all this sharing.  I'd still like to let my friends know what I've been watching so they can make suggestions or stop by when I have a movie they're interested in.  One of the advantages of Redbox is near-instant procurement of the DVD or Blu-ray disc.  Sharing this information was a problem until this weekend, when I finally decided to write my own RSS server to broadcast it.  Getting it to work wasn't without its challenges.

Once you are logged into Redbox, your rental history can be seen on the transactions tab.  This appears to have been implemented only recently, around the time the Redbox Instant streaming beta became available.  Knowing this information was available on the website, I figured there should be a way to extract it and convert it into an RSS feed.  The first step, of course, was to try to fetch the transaction page outside of the browser.  I already knew that curl could post data to a website and handle cookies, so I started experimenting with that.  It turns out the login process, while initiated from an HTML form, is actually an API call.  Google Chrome has come a long way recently as a web development tool, and in this case it proved key.  After some time I discovered the AJAX request responsible for the login process: a call to rb.api.account.login.  Looking over the headers and cookies in Chrome, I tried to duplicate the environment, but without any luck.  Finally, after many failed experiments, I randomly noticed that right-clicking on a line within the Chrome network tab yields a "curl" option, which generates the complete command line needed to reproduce the AJAX request.  Sure enough, it worked!  From there I simply removed headers and cookies until I arrived at the minimal environment.

The discussion from this point forward becomes more technical. Redbox requires two key pieces of information in order to log in to its website or use its API: an API key, and a user identification cookie (probably the server session ID).

The API key changes over time (probably because it's public), but it's given with each page request, so if a page is requested before each API call the key will always be valid. Once logged in, some requests strangely don't require an API key, probably thanks to the identification cookie. Just to be safe, the API key is sent every time, much as it would be if a real user were navigating pages.  Now for one of the key tidbits: once the API key is extracted, it gets passed to the server by setting the '__K' HTTP header on the request.

The second piece of information, the user identification cookie, is named 'rbuser'. This cookie is provided with just about every page request. Conveniently, the same page request used to extract the API key also initializes the identification cookie, and since the cookie jar persists across all page requests, this is trivial to implement.

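The two-step dance above can be sketched in a few lines of Python. This is only an illustration of the flow, not the code from my class: the page URL, the regex used to scrape the key out of the page body, and the function names are all assumptions.

```python
import re
import urllib.request
from http.cookiejar import CookieJar

BASE = "https://www.redbox.com"

# A persistent cookie jar so the 'rbuser' cookie survives across requests.
jar = CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

def extract_api_key(html):
    """Scrape the current API key out of a page body.
    The '"apiKey":"..."' pattern is an assumption for illustration."""
    match = re.search(r'"apiKey"\s*:\s*"([^"]+)"', html)
    return match.group(1) if match else None

def fetch_api_key():
    """Request an ordinary page before each API call; this both yields a
    fresh key and primes the 'rbuser' identification cookie in the jar."""
    with opener.open(BASE + "/") as resp:
        return extract_api_key(resp.read().decode("utf-8", "replace"))

def api_call(path, json_body):
    """POST a JSON API request, passing the key in the '__K' header."""
    req = urllib.request.Request(
        BASE + path,
        data=json_body.encode("utf-8"),
        headers={"__K": fetch_api_key(), "Content-Type": "application/json"},
    )
    return opener.open(req)
```

Because the same opener (and therefore the same cookie jar) is used for the page fetch and the API call, the identification cookie comes along for free.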
API requests are given in JSON (POST) or URL-encoded (GET) form, and API responses are returned as JSON.  Well, at least one response is a JavaScript page, but it can be treated as JSON. The structure of an API request is unique to the API being called. The basic structure of an API response looks like:
d: {
    data: { ... },
    msg: None/str,
    success: True/False
}
Usually when 'success' is false, 'msg' will contain the error message. The 'data' value contains a structure unique to the API that was called.
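Unwrapping that envelope is a one-liner worth centralizing. Here is a small helper following the structure above; the RedboxError exception name is my own choice for the sketch.

```python
import json

class RedboxError(Exception):
    """Raised when an API response reports success == false."""

def unwrap(raw):
    """Parse a raw API response and return its 'data' payload,
    raising RedboxError with the server's 'msg' on failure."""
    envelope = json.loads(raw)["d"]
    if not envelope.get("success"):
        raise RedboxError(envelope.get("msg") or "unknown API error")
    return envelope["data"]
```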
Knowing all this, I set about writing a Python class to encapsulate the Redbox API.  After working on it for a good portion of the weekend, it was coming together quite nicely, and I was reliably retrieving my rental history and converting it into an RSS feed.  The next obvious step was to serve the feed through a web server.  Since I intended to run the server on my Asus RT-N16, I chose the Twisted Python library, which was easy to use and seemed lightweight enough.  It didn't take long before I could fetch the RSS feed with my web browser.

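The history-to-feed conversion can be done with nothing but the standard library. This sketch is not my actual code; the rental record fields ('title', 'link', 'date') are assumptions about what the transactions API returns.

```python
import xml.etree.ElementTree as ET

def rentals_to_rss(rentals):
    """Build an RSS 2.0 document from rental dicts with 'title',
    'link', and 'date' keys; returns the XML as a string."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "Redbox Rentals"
    ET.SubElement(channel, "link").text = "https://www.redbox.com/"
    ET.SubElement(channel, "description").text = "Recent Redbox rental history"
    for rental in rentals:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = rental["title"]
        ET.SubElement(item, "link").text = rental["link"]
        ET.SubElement(item, "pubDate").text = rental["date"]
    return ET.tostring(rss, encoding="unicode")
```

A Twisted resource then only needs to return this string from its render method with a Content-Type of application/rss+xml.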
With the basic RSS server working, the next step was to improve its functionality.  I added logging, debug output, and better error checking, but most importantly movie descriptions.  This required pulling in functionality from the rb.api.product class, which meant a bit of refactoring.  After getting that working, I moved on to fixing the URLs so they use movie names instead of product IDs, which don't work on the mobile website.  That required pulling in the master title list and searching it by product ID to get the SEO (Search Engine Optimized) URL.  I then added some caching so I don't piss off Redbox.  At that point it seemed complete, so I added the BSD license to each Python file and restarted the server on my router.

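The caching idea is just a time-to-live lookup in front of each fetch. Here is a minimal sketch of that pattern; the class name and the 30-minute TTL are arbitrary choices for illustration, not what the server actually uses.

```python
import time

class TimedCache:
    """Remember each fetched value and only re-fetch once it expires,
    so repeated feed requests don't hammer the remote site."""

    def __init__(self, ttl=1800, clock=time.monotonic):
        self.ttl = ttl          # seconds an entry stays fresh
        self.clock = clock      # injectable clock, handy for testing
        self._store = {}        # key -> (timestamp, value)

    def get(self, key, fetch):
        """Return the cached value for key, calling fetch() only when
        the entry is missing or older than ttl seconds."""
        now = self.clock()
        hit = self._store.get(key)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]
        value = fetch()
        self._store[key] = (now, value)
        return value
```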
If you're interested in the fruits of my labor, you can find the code on GitHub.  I hope someone else finds it useful and even helps fill out the missing API calls.  While this code could easily be turned into a metadata source for your movie collection, I'm not sure what consequences that would have or how Redbox would react.  If you choose to use it for that purpose, be forewarned.
