The Authentication Headache

Authentication involves setting cookies, handling incoming & outgoing requests, maintaining sessions and securely managing passwords. Finding all this an unnecessary headache, I’d left it for as late as possible. Instead, I’d been using a temporary text string as the username all across the app (client & server) and using it to store data.

Somehow, I always thought that once I’d implemented authentication, integrating the user details instead of that text string would be a quick job. How wrong I was.

Thanks to GAE’s in-built users.User object and the connected federated login options, it barely took me half a day to implement authentication, create a page for it and connect it to the database. Then it took me another 2 days to replace all references to that text string – sometimes passed as an argument, other times hardcoded, other times completely absent – with a proper uid, uname combo. It needed working-over twice, and midway into the second do-over I realised that localStorage data didn’t need any user identity information at all. In fact, the only client-side entity that needs any user information is the Logout link. Anyway, have got the system to work with user auth and tested it with multiple users.

Though it’s working with the current setup, I’m afraid the non-xhr GET & POST functions are broken. I’d been planning to get rid of POST and use GET in its current serve-empty-html format, till I realised that my own trusty phone – the Nokia E71 – may not support most of those javascript functions, especially xhr. So, will likely be working on integrating it again with a backup index.html template for non-login non-xhr requests.
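The routing for that fallback will live server-side, but the client can at least detect whether xhr is available at all. A minimal sketch of that check – the function names are illustrative stand-ins, not the real code:

[sourcecode language="javascript"]
// Hypothetical capability check: wire up the ajax UI only when xhr exists
function supportsXhr() {
    return typeof XMLHttpRequest !== "undefined" ||
        typeof window.ActiveXObject !== "undefined"; // older IE and some mobile browsers
}

if (supportsXhr()) {
    initAjaxApp(); // hypothetical entry point for the current xhr-driven UI
} else {
    // do nothing: plain links and form submissions hit the server directly,
    // which responds with the backup index.html template
}
[/sourcecode]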

Dell XPS 14z

Yesterday was a happy day – Rags kept her job in the latest office shakeout and I got my new laptop, the Dell XPS 14z.

Today was a sad and miserable day. Rags still has her job. But the laptop, it’s a shame, really.

Let’s begin with the positives:

  1. Despite not being as light or slim as the ultrabooks, or coming in black, it does look nice.
  2. It feels solid. The build seems good, with no flaky bits to it (very unlike the loose, plasticky feel I got from the Toshiba at the store).
  3. The insides are powerful – the i5 processor, 6 gigs of RAM and 500 gig HDD work together brilliantly to deliver a great feel.
  4. Umm… can’t think of any more!

Now the issues. Well, have already posted the detailed review on Dell’s website, but since they haven’t accepted it yet, shall post it here too:

If you use the keyboard for anything more than filling forms and writing emails, do NOT buy this laptop.

I have been a big fan of Dell’s laptops, having used Latitudes at work for years and having loved my XPS M1330 for the last 3 years. It was thus natural that when I needed to replace my old XPS M1330, I chose the 14z. Of course, the glowing reviews all over the web and promos by Dell helped. Bad, bad decision. At least now I know which websites NOT to trust for reviews.

Problems with the laptop are on four counts:

1. Keyboard
2. Display
3. WiFi chip
4. Trackpad position

The keyboard has a super weird layout. There are no dedicated context menu (right click), page up, page down, home or end keys.

The page up, page down, home and end keys are provided as Fn-triggered overlays on the arrow/direction keys. Unfortunately, the way the fn, shift and arrow keys are placed makes it almost impossible to use them together to select text, killing them for all practical purposes.

The context menu / right click key is completely gone. Out. Disappeared! So either get used to moving the mouse over the item and right clicking, or use that ultra-hard-to-reach shortcut (Shift-F10) to invoke the menu.

The display is mixed – while the colours are rendered true, unlike on some other laptops I tried, the max resolution is a pitiable 1366×768. Even then, most of the time you can easily see the tiny grid of pixels on the screen. Even my 3 year old XPS M1330 had a better screen. Truly horrible.

Finally, the wifi chip. This may only be a personal thing but, with the wifi router in the living room, I get zero connectivity on the laptop in my bedroom at the other end of the house. In contrast, my old XPS M1330 and my partner’s Studio 15 both work comfortably on wireless in that room.

There’s another issue with the laptop, though this might have to do with my hand size (medium) – the trackpad is placed about three-quarters of an inch too far to the right. This means that while typing, the base of my left hand frequently touches the touchpad and takes the cursor away!

My main uses for a laptop are software development and managing my photos. I am also a heavy keyboard user. The keyboard and trackpad problems mean I can’t use it for software development, and the poor display means it is no use for managing my photos.

I still like the insides of the device – the CPU-RAM-HDD-Ports-Performance. But it comes in a really buggy shell. I’ll maybe give it another day or two to see if I can find a way around the keyboard issues, but most likely am sending it back for a refund.

So much for trusting a brand based on past experiences!

Frankly, I would have lived with the weak wifi receiver and a low res screen. But I just can’t bear that horribly designed keyboard and wrongly placed touchpad. If Dell has to learn anything from Apple, they should forget about copying the looks and instead learn the importance of user interface and why not to screw it up – it can ruin an otherwise brilliant machine.

Anyway, wrote to Dell a few minutes back asking them to cancel the order, take the laptop back and refund my money. Going to give the old XPS for repair again tomorrow. Hope it lasts a month this time without issues – the month that Dell will invariably take to return my money so I can buy some other laptop.

P.S.: Installed Ubuntu on the laptop wiping off Windows 7 completely. Hope Dell doesn’t charge me for this!

Online/Offline Daemon

Completed a big task on Friday, though not the way I wanted it – background, auto synchronisation of notes (including changes, deletions, etc) between server and browser app.

HTML5 introduces powerful in-built features for detecting connectivity, so an app can sync when the browser regains an internet connection and store data locally when it doesn’t:

[sourcecode language="javascript"]
// navigator.onLine reports the browser's current connectivity status
if (navigator.onLine) {
    // do sync here
} else {
    // store data offline
}

// the 'online' and 'offline' events fire on window when connectivity changes
window.addEventListener("online", function () {
    // do sync here
}, false);

window.addEventListener("offline", function () {
    // store data offline
}, false);
[/sourcecode]

Unfortunately, these features aren’t completely supported, or even uniformly implemented across browsers and, thus, are hardly usable in the current state.

All server requests in my app are implemented through xhr. So, I’m tapping into it and using this workaround (a rough sketch in code follows the list):

  1. If an xhr call fails, an alternate function is called which:
     1. Sets a global variable to ‘offline’
     2. Calls equivalent local functions which edit/create/delete/list from localStorage
     3. Starts a daemon using setInterval() that periodically sends a ‘Ping’ xhr request to check if a server connection is available again
  2. When the daemon can connect to the server, it:
     1. Sets the global variable back to ‘online’
     2. Starts a function that syncs all local changes to the server and then downloads a fresh list from the server
     3. Kills the daemon using a clearInterval() call
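Here’s roughly what that skeleton looks like. The ‘/ping’ url, the 10-second interval and the helper names are illustrative stand-ins, not the exact implementation:

[sourcecode language="javascript"]
// Rough sketch of the offline daemon; names and interval are illustrative
var connectionState = "online";
var daemonId = null;

function onXhrFailure() {
    connectionState = "offline";
    // ...switch to the localStorage-backed edit/create/delete/list functions here...
    daemonId = setInterval(pingServer, 10000); // start the 'daemon'
}

function pingServer() {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/ping", true); // hypothetical lightweight handler on the server
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            connectionState = "online";
            syncLocalChangesToServer(); // hypothetical: push queued changes, then re-fetch the list
            clearInterval(daemonId);    // kill the daemon
            daemonId = null;
        }
    };
    xhr.send(null);
}
[/sourcecode]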

There are quite a few problems with this approach. The ones I’m worrying about right now are:

1. Memory, CPU and battery usage by that artificial daemon, especially on mobile devices.
2. Unnecessary load on the server from handling those Ping requests, even though only one per reconnect will ever succeed.

Compared to the browser ‘online / offline’ properties, though, this approach has one advantage – it can also handle instances when the internet connectivity is up but my server / app is down.

One change I’ll probably be making is fetching only the headers instead of pinging the app and checking the result – it’s faster and might be lighter on the server (need to check this bit). Got the idea for this from this blog post.

[sourcecode language="javascript"]
var x = new XMLHttpRequest();
x.open(
    // requesting the headers is faster, and just enough
    "HEAD",
    // append a random string to the current hostname,
    // to make sure we're not hitting the cache
    "//" + window.location.hostname + "/?rand=" + Math.random(),
    // make a synchronous request
    false
);
x.send(null);
// if this line is reached without an exception, the server is up
[/sourcecode]

Still, that doesn’t make up for a native browser capability to tell the app when the network connection has gone down. Waiting for it…


Yesss! Just got the localStorage and cached application to work offline for the first time!

Now, to do that with real data, and extend beyond the listAll function.

Offline App Development

… is a pain.

Mainly because once you add a file to the manifest, and thus to the browser’s app cache, you’re never sure if it’s getting updated. I spent most of last night and much of today just trying to get the browser to fetch the new version of the html doc, and then the correct js file. Too bugged. It seems to be an issue primarily with Chrome & Chromium, but they are my primary development platform.

Anyway, if despite all the changes to the manifest, restarting servers and clearing the browser cache, the browser is still fetching old cached files, here’s a way out:

1. Open a new tab in chrome/chromium and go to ‘chrome://appcache-internals/’
2. Clear all cached app resources
3. Refresh the page to confirm there are no more cached app resources
4. Refresh your test page – it should fetch everything from the server now

There’s another bit I learnt and implemented (though my implementation is pretty simplistic): Dynamic Cache Manifest.

Basically, instead of writing the application manifest for offline app storage by hand and changing it every time you change a file, let your server-side script do the work. What I’ve done is:

1. Hand-wrote a cache manifest file, but with a comment line that includes a django tag {{ts}} (example below the list).
2. Changed my app.yaml so that instead of serving the file directly, the request for it is sent to my server script.
3. In the script, I use the handwritten cache manifest and replace the ts tag with a random number.
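The hand-written manifest is tiny – something like this, with illustrative file names:

[sourcecode language="text"]
CACHE MANIFEST
# ts: {{ts}}

CACHE:
/index.html
/script/main.js
[/sourcecode]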

What this ensures is that the browser is forced to re-fetch every element in the app manifest, because the random number changes the cache manifest on every fetch. So while I’m connected to the server and testing my js scripts, I get the latest versions every time. On the other hand, when I switch off the server and test the offline mode, the script still has resources available offline for rendering.

There is a lot more you can do with a dynamic cache manifest than just plug in a random number. I learnt this trick from this blog at Cube, and there are a couple more handy tricks being used there, so I suggest reading that resource.

Update: Seems like, as with script-initiated xhr requests, Chrome also makes multiple requests for the cache manifest from the server on every page load. When I use a randomizer function to send a ‘new’ cache manifest every time, the two cache manifests end up different from each other and Chrome doesn’t save any files. Not what I wanted. So, here’s a small change I made:

previous randomizer: int(random.random()*10000)
new randomizer: time.strftime('%y%m%d%H%M')

Now, instead of returning a new resource every time, I return a new resource every minute. This means that the two simultaneous requests that Chrome fires will get the same cache manifest, but a page refresh/load a minute later will return a new one. It’s working so far :)
[Of course, I know there’s still a remote chance that the two requests will hit either side of the minute change. But the probability is small enough for me to take the chance.]

Using Closure

Note-to-myself:

1. Make the xyz.soy file. The first line should contain {namespace namespace_name}, defining a name for your namespace.
2. Define the templates inside the soy file using the format below, replacing the placeholders with your HTML template and code:

   /**
   * function_name function description
   * @param? param1 description
   * @param param2 description
   */
   {template .function_name}
   <templating code>
   {/template}

3. Compile the xyz.soy file into a .js file using the following command (you’ll need a java runtime installed; I used Sun’s jre 6):

   java -jar [path_to_Closure_folder]/SoyToJsSrcCompiler.jar --outputPathFormat '[filename_format].js' xyz.soy

   You can get more details about what can be used in outputPathFormat here.

4. Import the generated javascript code file and soyutils.js into your html file (before the main.js where the templating functions will be called):

   <script type="text/javascript" src="script/[path_to_Closure_folder/]soyutils.js"></script>
   <script type="text/javascript" src="script/[generated_js_filename].js"></script>

5. Call the templating functions as namespace_name.function_name(params) from anywhere in main.js or the HTML code (as in the snippet below).
6. Done :)
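For instance, assuming a namespace notes.templates and a template listAll (both illustrative names, not my actual ones), the call from main.js would look like:

[sourcecode language="javascript"]
// Hypothetical names: namespace 'notes.templates', template 'listAll'
var noteArray = []; // stand-in for the data object the template expects
var html = notes.templates.listAll({notes: noteArray}); // returns the rendered HTML string
document.getElementById("content").innerHTML = html;    // write it into the page
[/sourcecode]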

Now, done.

P.S.: Finally finished moving the code over to Closure templates. Now can begin normal development again.

Yes Yes Yes Yes Yes Yes Yessssssss! Ahh… the joy of overcoming a tiny, but hugely irritating, hurdle. Like that piece of chicken stuck between the teeth.

Just got the Closure template working to do a listAll, after a day and a half of struggling :)

A night of suffering

The refactoring continues.

After moving all xhr calls from JSON to html, backed by django templating on the server, I realised that in order to implement the offline, localStorage-based system, I still need to generate html in the browser. I could have gone back to the old, working version that generated html via js strings, but that would need another load of work whenever the page design starts changing. So, spent the night researching browser-based templating systems. Have shortlisted three – mustache, Closure and Pure. All seem to be taking strange approaches. So far Closure seems to be the one I’ll go with, but the final call will happen tomorrow after some more research.

Meanwhile, the pen drive finally seems to have filled up, so I can’t work off it anymore. Have saved all that I wanted to in dropbox and in bookmark syncs. Gonna reformat and recreate the live USB now so I can start working again tomorrow. The new laptop still isn’t featured on the dell website, though people have started talking about it on twitter. Seems like it’ll be another week or so before I finally get my hands on it and can create a full development environment. The new liveUSB should last till then, I hope.

Time to sleep now. Ciao.

POST using xmlhttprequest

Struggled with this all of yesterday and earlier today. Finally, discovered what seems to be a bug in the implementation (or something the specs missed):

When passing arguments / parameters with a POST request, the javascript XMLHttpRequest call seems to truncate/modify the first and last arguments.

So, now I’m sandwiching the actual arguments between empty buffer parameters, and it’s working.
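Here’s roughly what that sandwiching looks like; the url, parameter names and data are illustrative stand-ins:

[sourcecode language="javascript"]
// Hypothetical sketch of the buffer-parameter workaround
var noteText = "example note body"; // stand-in for the real data
var xhr = new XMLHttpRequest();
xhr.open("POST", "/app", true);
xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
xhr.send(
    "buf1=x" +                                  // empty buffer at the front
    "&action=" + encodeURIComponent("update") + // the real parameters sit in the middle
    "&note=" + encodeURIComponent(noteText) +
    "&buf2=x"                                   // empty buffer at the end
);
[/sourcecode]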

Can’t believe how many hours I’ve wasted on this one issue, and that no one addresses it anywhere on the web.

Anyway, the whole app is now working via ajax, and, as a result, is lightning fast. Now, need to include offline storage of files (manifest) and data (localStorage). Before that, wondering if I should do a split version and test passing the whole formatted html div, instead of just JSON objects, in response to xhr requests.

The Rockstar Weekend

Learnt more, and more practical, stuff about ajax, xhr and how to use it within Google App Engine from this post. Then went about restructuring (refactoring?) the code on both sides – client & server – to enable it. Didn’t require much work on the server-side. During the previous round of restructuring, I’d enabled single url calling, with a sorter function spreading requests to the relevant functions. That helped, since I’m now just calling the same function for xhr calls and returning the results as JSON instead of a formatted HTML page.

On the client side, had to redo nearly the whole code again. The main HTML is now just a plain page with a header and a body div. Everything is written into it by js functions. This has meant I can’t use the powerful django templates on the server-side to generate that html. And that’s a pain. Still thinking if I should just give up and, instead of returning JSON objects, return the formatted html code for the relevant div in the response. Might try it too and then decide based on whichever approach works faster and lighter. Anyway, using the xhr-JSON method (sketched below), have already got the GET functions to work. The two POST functions are still standing out. Was working on them when Rags woke up and interrupted my work day. Still, glad that I woke up at 4 and immediately got to work. Gave me 6-7 clean hours before she decided to rock my day.
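Client-side, the xhr-JSON round trip looks roughly like this; the url, action parameter and function names are illustrative:

[sourcecode language="javascript"]
// Hypothetical sketch: one endpoint, with an action parameter routed by the server-side sorter
var xhr = new XMLHttpRequest();
xhr.open("GET", "/app?action=listAll", true);
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
        var notes = JSON.parse(xhr.responseText); // the server now returns JSON, not HTML
        renderNotes(notes); // hypothetical js function that writes into the body div
    }
};
xhr.send(null);
[/sourcecode]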

The rest of the weekend was almost wasted in work terms. Saw Rockstar, the Hindi movie, in the evening. Loved it. Then met JD&R for dinner in Chinatown. Came home and, after briefly checking G+, twitter & techmeme, dropped dead on the bed.

Today’s been uber-lazy. Woke up at 12 after 11 hours of sleep, but got out of bed another 90 mins later – and that too only when Rags brutally pushed me outta bed. Spent the rest of the day watching F1, listening to the Rockstar OST over and over again, and reading articles over at gigaom and businessinsider. Researched the biggest player in the CldNts industry right now. Followed their founder CEO on twitter. Also followed Naval Ravikant, the angellist founder, on twitter. Fella seems interesting. Finally, read a businessinsider story on Ron Conway. Scary.