The Authentication Headache

Authentication involves setting cookies, handling incoming and outgoing requests, maintaining sessions and securely managing passwords. Finding all this an unnecessary headache, I'd left it for as late as possible. Instead, I'd been using a temporary text string as the username all across the app (client and server) and using it to store data.

Somehow, I always thought that once I’d implemented authentication, integrating the user details instead of that text string would be a quick job. How wrong I was.

Thanks to GAE's in-built users.User object and the connected federated login options, it barely took me half a day to implement authentication, create a page for it and connect it to the database. Then it took me another two days to replace all references to that text string – sometimes passed as an argument, sometimes hard-coded, sometimes completely absent – with a proper uid, uname combo. It needed working over twice, and a lot of thought midway into the second do-over, but I also realised that localStorage data doesn't need any user identity information. In fact, the only client-side entity that needs any user information is the Logout link. Anyway, I've got the system working with user auth and tested it with multiple users.

Though it's working with the current setup, I'm afraid the non-xhr GET and POST functions are broken. I'd been planning to get rid of POST and use GET in its current serve-empty-html format, till I realised that my own trusty phone – the Nokia E71 – may not support most of those javascript functions, especially xhr. So, I'll likely be working on integrating it again with a backup index.html template for non-login, non-xhr requests.

Online/Offline Daemon

Completed a big task on Friday, though not the way I wanted to – background, automatic synchronisation of notes (including changes, deletions, etc.) between the server and the browser app.

HTML5 introduces in-built features for managing actions such as synchronisation when the browser detects an internet connection.

[sourcecode language="javascript"]
if (navigator.onLine) {
  // do sync here
} else {
  // store data offline
}

document.body.addEventListener("online", function () {
  // do sync here
}, false);

document.body.addEventListener("offline", function () {
  // store data offline
}, false);
[/sourcecode]

Unfortunately, these features aren’t completely supported, or even uniformly implemented across browsers and, thus, are hardly usable in the current state.

All server requests in my app are implemented through xhr. So, I'm tapping into it and using this workaround:

1. If an xhr call fails, an alternate function is called which:
   1. Sets a global variable to 'offline'
   2. Calls equivalent local functions which edit/create/delete/list from localStorage
   3. Starts a daemon using setInterval() that periodically sends a 'Ping' xhr request to check if a server connection is available again
2. When the daemon can connect to the server, it:
   1. Sets the global variable to 'online'
   2. Starts a function that syncs all local changes to the server and then downloads a fresh list from the server
   3. Kills the daemon with a clearInterval() call

There are quite a few problems with this approach. The ones I'm worrying about right now are:

1. Memory, CPU and battery usage by that artificial daemon, especially on mobile devices.
2. Unnecessary load on the server from handling those Ping requests, even though there'll be only one successful one for every reconnect.

Compared to the browser 'online / offline' properties, though, this approach has one advantage too – it can also handle instances when the internet connectivity is up but my server / app is down.

One change I'll probably make is fetching only the headers, instead of pinging the app and checking the result – it's faster and might be lighter on the server (need to check this bit). Got the idea from this blog post.

[sourcecode language="javascript"]
var x = new XMLHttpRequest();
x.open(
  // requesting the headers is faster, and just enough
  "HEAD",
  // append a random string to the current hostname,
  // to make sure we're not hitting the cache
  "//" + window.location.hostname + "/?rand=" + Math.random(),
  // make a synchronous request
  false
);
x.send();
// any 2xx status means the server is reachable
var serverUp = (x.status >= 200 && x.status < 300);
[/sourcecode]

Still, that's no substitute for a native browser capability to tell the app when the network connection has gone down. Waiting for it…


Offline App Development

… is a pain.

Mainly because once you add a file to the manifest, and thus to the browser's app cache, you're never sure if it's getting updated. I spent most of last night and much of today just trying to get the browser to fetch the new version of the HTML doc, and then the correct js file. Too bugged. It seems to be an issue primarily with Chrome & Chromium, but they are my primary development platform.

Anyway, if despite all the changes to the manifest, restarting servers and clearing the browser cache, the browser is still fetching old cached files, here's a way out:

1. Open a new tab in Chrome/Chromium and go to 'chrome://appcache-internals/'
2. Clear all cached app resources
3. Refresh the page to confirm there are no more cached app resources
4. Refresh your test page – it should fetch everything from the server now

There's another bit I learnt and implemented (though my implementation is pretty simplistic): Dynamic Cache Manifest.

Basically, instead of writing the application manifest for offline app storage by hand and changing it every time you change a file, let your server-side script do the work. What I've done is:

1. Hand-wrote a cache manifest file, but with a comment line that includes a django tag {{ts}}.
2. Changed my app.yaml so that instead of serving the file directly, the request for it is sent to my server script.
3. In the script, I use the handwritten cache manifest and replace the ts tag with a random number.

What this ensures is that the browser is forced to fetch all elements in the app manifest every time, because the changing random number makes the cache manifest different on every fetch. So while I'm connected to the server and testing my js scripts, I get the latest versions every time. On the other hand, when I switch off the server and test the offline mode, the script still has the resources available offline for rendering.
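For illustration, the served manifest might look something like this – the file names are hypothetical, and the comment line is where the {{ts}} tag gets replaced:

```
CACHE MANIFEST
# ts: 23517

index.html
script/main.js
style/main.css

NETWORK:
*
```

Since a manifest is byte-compared by the browser, changing even that one comment line is enough to trigger a full refetch of every listed resource.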

There is a lot more you can do here with the dynamic cache manifest than just plug in a random number. I learnt this trick from this blog at Cube, and there are a couple more handy tricks being used there, so I suggest reading that resource.

Update: Seems like, as with script-initiated xhr requests, Chrome also makes multiple requests to the server for the cache manifest on every page load. When I use a randomizer function to send a 'new' cache manifest every time, the two fetched cache manifests end up different from each other, and so Chrome doesn't save any files. Not what I wanted. So, here's a small change I made:

previous randomizer: int(random.random()*10000)
new randomizer: time.strftime('%y%m%d%H%M')

Now, instead of returning a new resource every time, I return a new resource every minute. This means that the two simultaneous requests that Chrome fires will get the same cache manifest, but a page refresh/load later will return a new one. It's working so far :)
[Of course, I know that there's still a remote chance that the two requests will hit either side of the minute change. But the probability is small enough for me to take a chance.]

Using Closure

Note-to-myself:

1. Make the xyz.soy file. The first line should contain {namespace namespace_name}, defining a name for your namespace.
2. Define function calls inside the soy file using the format below, replacing abc with your HTML template and code:

   /**
   * function_name function description
   * @param? param1 description
   * @param param2 description
   */
   {template .function_name}
   <templating code>
   {/template}

3. Compile the xyz.soy file into a .js file using the following command (you'll need a java runtime installed; I used Sun's jre 6):

   java -jar [path_to_Closure_folder]/SoyToJsSrcCompiler.jar --outputPathFormat '[filename_format].js' xyz.soy

   You can get more details about what can be used in outputPathFormat here.

4. Import the generated javascript code file and soyutils.js into your html file (before the main.js where the templating functions will be called):

   <script type="text/javascript" src="script/[path_to_Closure_folder/]soyutils.js"></script>
   <script type="text/javascript" src="script/[generated_js_filename].js"></script>

5. Call the templating functions as namespace_name.function_name(params) from anywhere in main.js or the HTML code.
6. Done :)

Now, done.

P.S.: Finally finished moving the code through Closure templates. Now I can begin normal development again.

Yes Yes Yes Yes Yes Yes Yessssssss! Ahh.. the joy of overcoming a tiny, but hugely irritating hurdle. Like that piece of chicken stuck between teeth.

Just got the Closure template working to do a listAll, after a day and a half of struggling :)

A night of suffering

The refactoring continues.

After moving all xhr calls from JSON to html, backed by django templating on the server, I realised that in order to implement the offline, localStorage-based system, I still need to generate html in the browser. I could have taken the old, working version that generated html via js strings, but it would need another load of work once the design of the page starts changing. So, I spent the night researching browser-based templating systems. Have shortlisted three – mustache, Closure and Pure. All seem to be taking strange approaches. So far Closure seems to be the one I'll go with, but the final call will happen tomorrow after some more research.

Meanwhile, the pen drive finally seems to have filled up, so I can't work off it anymore. Have saved all that I wanted to in Dropbox and in bookmark syncs. Gonna reformat and recreate the live USB now so I can start working again tomorrow. The new laptop still isn't featured on the Dell website, though people have started talking about it on Twitter. Seems like it'll be another week or so before I finally get my hands on it and can create a full development environment. The new liveUSB should last till then, I hope.

Time to sleep now. Ciao.

POST using xmlhttprequest

Struggled with this all of yesterday and earlier today. Finally, discovered what seems to be a bug in the implementation (or something the specs missed):

When passing arguments / parameters with a POST request, the JavaScript XMLHttpRequest function seems to truncate/modify the first and last arguments.

So, now I'm sandwiching the actual arguments between empty buffer parameters, and it's working.
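A rough sketch of that workaround – the parameter names and URL are made up, and the urlencoded Content-Type header plus encodeURIComponent on each value are my additions (a missing body encoding is a common cause of exactly this kind of parameter mangling):

```javascript
// Build a POST body with throwaway buffer parameters on either side
// of the real arguments, as described above. Names are hypothetical.
function buildPaddedParams(args) {
  var parts = ["pad1=x"];
  for (var key in args) {
    parts.push(encodeURIComponent(key) + "=" + encodeURIComponent(args[key]));
  }
  parts.push("pad2=x");
  return parts.join("&");
}

// Browser-only usage; guarded so the helper above works standalone too.
if (typeof XMLHttpRequest !== "undefined") {
  var xhr = new XMLHttpRequest();
  xhr.open("POST", "/save", true);
  // declaring the body encoding often fixes mangled parameters by itself
  xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
  xhr.send(buildPaddedParams({ title: "note 1", body: "hello world" }));
}
```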

Can't believe how many hours I've wasted on this one issue, and that no one addresses it anywhere on the web.

Anyway, the whole app is now working via ajax and, as a result, is lightning fast. Now, I need to include offline storage of files (manifest) and data (localStorage). Before that, I'm wondering if I should do a split version and test passing the whole formatted html div, instead of just JSON objects, in response to xhr requests.

Another code restructuring

Yesterday I restructured (I think they call it refactoring) all the code to integrate it into a single html template and a single python class. Also moved all calls to POST so nothing is visible, or editable, from the URL.

Today, I realise I may have to restructure (refactor) the application again. Yesterday's restructuring was to bring in simplicity and order. This time it's required for the 'offline webapp' bit. This is what happens when you use the learn-as-you-go (cross the bridge when we get to it) approach.

Looks like I've got another few long hours of boring, error-prone code restructuring ahead of me :/