A Bibliophile’s Review of the Amazon Kindle

When it comes to books, I’m really old school. From the pleasure of discovering a book you’ve been dying to find, nestled between two otherwise forgettable books in the store, to the crinkling goodness of a new book, the reflexive care not to damage the spine unduly, the inscriptions from decades past in second-hand books, the smell, the texture: everything about them appeals to me. And don’t even get me started on the religious experience of visiting your favourite libraries. Stated another way, e-books are just fundamentally incompatible with my reading experience.

That is, until I had to move houses last year. It is not a pleasant experience to have to cart around a few hundred books, even within the same city. This, and the fact that some Dan McGirt books I’ve wanted to read are only really available to me in e-book form, finally pushed me to actually buy the Amazon Kindle.

[Image: My black Kindle 3G (3rd rev.) – my precioussss]

About 3 months ago, I got a black Kindle 3G (the 3rd revision). Technical reviews abound, so I’m not going to talk about the technology much. What I didn’t see were articles that really spoke about using it, which is far more relevant to potential buyers (I’m sure they’re out there; I just didn’t find any good ones). So this is my attempt at describing the bits of the Kindle experience that are relevant to others of my ilk (the ones who nodded along to the first paragraph, especially :D).

The Device

Well, I’m a geek, so I can’t avoid talking about the technology completely, but I’ll try to keep it to a minimum (also, it runs Linux, woohoo! :D ed: and GStreamer too, as Sebastian Dröge points out!).

I bought the Kindle with the 6″ display and free wireless access throughout the world (<insert caveat about coverage maps here>). The device itself is really slick, and the build quality is good. The keys on the keyboard feel hard to press, but this is presumably intentional, because you don’t want to randomly press keys while handling the device.

At first glance, the e-ink display on the new device is brilliant, and the contrast in daylight is really good (more about this later). It’s light, and fairly easy to use (but I have a really high threshold for complex devices, so don’t take my word for it). The 3G coverage falls back to 2G mode in India. I’ve tried the connectivity in a few places around India, and it’s pretty hit-or-miss. Maybe things will change for the better with the impending 3G rollout.

The battery life is either disappointing or awesome, depending on whether you’ve got wireless enabled or not. This is a bit of a niggle, but you quickly get used to just switching off the wireless when you’re done shopping or browsing.

Reading

Obviously the meat and drink of this device is the reading experience. It is not the same as reading a book. There are a lot of small, niggling differences that will keep reminding you that you’re not reading a book, and this is something you’re just going to have to accept if you’re getting the device.

Firstly, the way you hold the device is going to be different from holding a book. I generally hold a book along the spine with one hand, either at the top or bottom (depending on whether I’m sitting, lying down, etc.). You basically cannot hold the Kindle from above — there isn’t enough room. I alternate between holding the device on my palm (but it’s not small enough to hold comfortably like that; your mileage will vary depending on the size of your hand), grasping it between my fingers around the bottom left or right edge (this is where the hard keys on the keyboard help — you won’t press a key by mistake in this position), or just resting the Kindle on a handy surface (table or lap while sitting, tummy while supine :) ).

Secondly, the light response of the device is very different from books. Paper is generally not too picky about the type of lighting (whiteness, diffused or direct, etc.). In daylight, the Kindle looks like a piece of white paper with crisp printing, which is nice. At night, however, it depends entirely on the kind of lighting you have. My house has mostly yellow-ish fluorescent lamps, so the display gets dull unless the room is very well lit. I also find that the contrast drops dramatically if the light source is not behind you (diffuse lighting might not be so great, in other words). There are some angles at which the display reflects lighting that’s behind/above you, but it’s not too bad.

The fonts and spacing on the Kindle are adjustable and this is one area in which it is hard to find fault with the device. Whatever your preference is in print (small fonts, large fonts, wide spacing, crammed text), you can get the same effect across all your books.

Flipping pages looks annoying when you see videos of the Kindle (since flipping requires a refresh of the whole screen), but in real life it’s fast enough to not annoy.

The Store

I’ve only used the Kindle Store from India, and in a word, it sucks. The number of books available is rubbish. I don’t care if they have almost(?) a million books, but if they don’t have Good Omens, Cryptonomicon, or most of Asimov’s Robot series, they’re fighting a losing battle as far as I’m concerned (these are all books that I’ve actually wanted to read/re-read since I got the Kindle).

When I do find a book I want, the pricing is inevitably ridiculous. I do not see what the publishers are smoking, but could someone please tell them that charging more than 2 times the price of a paperback for an e-book is just plain stupid? Have they learned nothing from the iTunes story? Speaking of which, the fact that the books I buy are locked by DRM to Kindle devices is very annoying.

While the reading experience is something I can get used to, this is the biggest problem I currently have. From my perspective, books have been the last bastion of purity, where piracy is not the only available way to work around the inability of various industry middlemen to find a reasonable way to deal with the Internet and its impact on creative content. I am really hoping that Amazon will soon get enough muscle to pull an Apple on the book industry and bring the pricing down to reasonable levels. And possibly go one step further and break down country-wise barriers. Otherwise, we’re just going to have to deal with another round of rampant piracy and broken systems that try to curb it.

(Editor’s note: This bit clearly bothers me a lot and deserves a blog post of its own, but let’s save that for another day)

The Ecosystem

A lot of my family and friends love reading books, and a large number of the books I buy go through many hands before finding their final resting place on my shelf. This is not just a matter of cost — there is a whole ecosystem of sharing your favourite books with like-minded people, discussing, and so on.

The Kindle device itself isn’t conducive to sharing (if I’m reading a book on the device, nobody else can use the device, obviously). Interestingly Amazon has recently introduced the idea of sharing books from the Kindle (something Barnes and Noble has had for a while). You can share books you’ve bought off the Kindle Store with someone else with a Kindle account, once, for a period of 2 weeks. This in itself is a really lame restriction, but even something more relaxed would be useless to me. Almost nobody I know has a device that supports the Kindle software (phones and laptops/desktops do not count as far as I am concerned).

So in my opinion, the complete break from the reading ecosystem is a huge negative for the Kindle experience. When I know I’m going to want to lend a book to someone, I immediately eliminate the possibility of buying it off the Kindle Store. This is true of all e-books, of course, and might become less of an issue in decades to come, but it is a real problem today.

Other Fluff

The Kindle comes with support for MP3s, browsing the Internet and some games (some noises about an app store have also been made). These are just fluff — I don’t care if my reading device has any of these things. Display technology is still quite far from getting to a point where convergence is possible without compromising the reading experience (yes, I’m including the Pixel Qi display in this assertion, but my opinion is only based on the several videos of devices using these displays).

The Verdict

Honestly, it’s not clear to me whether the Kindle is a keeper or not. It’s definitely a very nice device, technically. I think it’s possible for Amazon to improve the reading experience — I’m sure the display technology will get better with regards to response to different kinds of lighting. Some experimentation with design to make it work with standard reading postures would be nice too. The Kindle Store is a disaster for me, and I really hope Amazon and the publishing industry get their act together.

Maybe this article will be helpful to potential converts out there. If you’ve got questions about the Kindle or anything to add that I’ve missed, feel free to drop a comment.

Updates from the Rygel + DLNA world

Things have been awfully quiet since Zeeshan posted about the work we’ve been doing on DLNA support in Rygel. Since I’ve released GUPnP DLNA 0.3.0, I thought this would be a good time to explain what we’ve been up to. This is also a sort of expansion of my Lightning Talk from GUADEC, since 5 minutes weren’t enough to establish all the background I would have liked to.

For those who don’t know, the DLNA is a consortium that aims to standardise how various media devices around your house communicate with each other (that is, your home theater, TV, laptop, phone, tablet, …). One piece of this problem is having a standard way of identifying the type of a file and communicating this between devices. For example, say your laptop (a MediaServer in DLNA parlance) is sharing the movies you’ve got with your TV (a MediaPlayer), and your TV can play only up to 720p H.264-encoded video. When the MediaServer is sharing files, it needs to provide sufficient information about each file so that the MediaPlayer knows whether it can play it, and can be intelligent about which files show up in its UI.

The DLNA specification achieves this by using “profiles”. For each media format supported by the specification, a number of profiles are defined that identify the audio/video codec used, the container, and (in a sense) the complexity of decoding the file. (For multimedia geeks, that translates to things like the codec profile, resolution, framerate/samplerate, bitrate, etc.)

For example, a file with the DLNA profile AAC_ISO_320 is an audio file encoded with the AAC codec, contained in an MP4 container (that’s the “ISO”), with a bitrate of at most 320 kbps. Similarly, a file with the profile AVC_MP4_MP_SD_MPEG1_L3 has H.264 (a.k.a. AVC) video coded in the H.264 Main Profile at specific resolutions up to 720×576, with MP3 audio, in an MP4 container (there are more restrictions, but I don’t want to swamp you with details).

So now we have a problem statement – given a media file, we need to get the corresponding DLNA profile. It’s easiest to break this problem into 3 pieces:

  1. Discovery: First we need to get all the metadata that the DLNA specification requires us to check. Using GStreamer and Edward’s gst-convenience library, getting the metadata we needed was reasonably simple. Where the metadata wasn’t available (mostly codec profiles and bitrate), I’ve tried to expose the required data from the corresponding GStreamer plugin.

  2. DLNA Profiles: I won’t rant much about the DLNA specification, because that’s a whole series of blog posts in itself, but the spec is sometimes overly restrictive and doesn’t support a number of popular formats (Matroska, AVI, DivX, OGG, Theora). With this in mind, we decided that it would be nice to have a generic way to store the constraints specified by the DLNA specification and use them in our library. We chose to store the profile constraints in XML files. This allows non-programmers to tweak the profile data when their devices resort to non-standard methods to work around the limitations of the DLNA spec.

  3. Matching: With 1. and 2. above in place, we just need some glue code to take the metadata from discovery and match it with the profiles loaded from disk. For the GStreamer hackers in the audience, the profile storage format we chose looks suspiciously like serialized GstCaps, so matching allows us to reuse some GStreamer code (there’s a small sketch of the idea just after this list). Another advantage of this will be revealed soon.
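
To make that concrete, here is a minimal, standalone C sketch of the caps-subset idea. This is not the actual GUPnP DLNA code, and the field names and values are made up for illustration; it only shows how describing both the stream’s metadata and a profile’s restrictions as GstCaps turns “does this stream fit this profile?” into a simple subset check.

    /* Sketch: match discovered stream metadata against profile restrictions
     * expressed as GstCaps (illustrative values, not real DLNA constraints). */
    #include <gst/gst.h>

    int
    main (int argc, char **argv)
    {
      gst_init (&argc, &argv);

      /* What discovery told us about the stream */
      GstCaps *stream = gst_caps_new_simple ("audio/mpeg",
          "mpegversion", G_TYPE_INT, 4,
          "rate", G_TYPE_INT, 44100,
          "channels", G_TYPE_INT, 2,
          NULL);

      /* What a profile's restrictions might look like: AAC, 1-2 channels,
       * a range of allowed sample rates */
      GstCaps *profile = gst_caps_new_simple ("audio/mpeg",
          "mpegversion", G_TYPE_INT, 4,
          "rate", GST_TYPE_INT_RANGE, 8000, 48000,
          "channels", GST_TYPE_INT_RANGE, 1, 2,
          NULL);

      if (gst_caps_is_subset (stream, profile))
        g_print ("stream matches the profile restrictions\n");
      else
        g_print ("no match\n");

      gst_caps_unref (stream);
      gst_caps_unref (profile);
      return 0;
    }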

So there you have it folks, this covers the essence of what GUPnP DLNA does. So what’s next?

  1. Frankie Says Relax: Since the DLNA spec can often be too strict about what media is supported, we’ve decided to introduce a soon-to-come “relaxed mode” which should make a lot more of your media match some profile.

  2. I Can Haz Transcoding: While considering how to store the DLNA profiles loaded from the XML on disk, we chose to use GstEncodingProfiles from the gst-convenience library, since the restrictions defined by the DLNA spec closely resemble the kind of restrictions you’d expect to set while encoding a file (codec, bitrate, resolution, etc. again). One nice fallout of this is that, in theory, it should be easy to reuse these to transcode media that doesn’t match any profile (the encodebin plugin from gst-convenience makes this a piece of cake; see the sketch just after this list). That is, if GStreamer can play your media, Rygel will be able to stream it.
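
To illustrate that last point, here is a rough C sketch of handing an encoding profile to encodebin. It uses the GstEncodingProfile API as it later shipped in gst-plugins-base’s pbutils, which is close to, but not necessarily identical to, the gst-convenience API of the time; the AAC-in-MP4 profile is just an example, and error handling and pipeline plumbing are omitted.

    /* Sketch: build an "AAC in MP4" encoding profile and give it to encodebin,
     * which picks and configures the encoder and muxer needed to satisfy it. */
    #include <gst/gst.h>
    #include <gst/pbutils/encoding-profile.h>

    int
    main (int argc, char **argv)
    {
      gst_init (&argc, &argv);

      /* Caps describing the container and the audio stream we want */
      GstCaps *mp4 = gst_caps_from_string ("video/quicktime, variant=iso");
      GstCaps *aac = gst_caps_from_string ("audio/mpeg, mpegversion=4");

      GstEncodingContainerProfile *profile =
          gst_encoding_container_profile_new ("aac-mp4", "AAC in MP4", mp4, NULL);
      gst_encoding_container_profile_add_profile (profile,
          (GstEncodingProfile *) gst_encoding_audio_profile_new (aac, NULL, NULL, 0));

      GstElement *encodebin = gst_element_factory_make ("encodebin", NULL);
      if (encodebin == NULL)
        return 1;

      /* Raw/decoded streams linked into encodebin now come out encoded and
       * muxed according to the profile. */
      g_object_set (encodebin, "profile", profile, NULL);

      gst_caps_unref (mp4);
      gst_caps_unref (aac);
      gst_object_unref (encodebin);
      return 0;
    }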

Apart from this, we’ll be adding support for more profiles, extending the API as more uses arise, adding more automated tests, and on and on. If you’re interested in the code, check out (sic) the repository on Gitorious.

GUADEC 2010 :(

Hopefully that title was provocative enough. ;) No, GUADEC seemed to be a smashing success. If only I had been able to attend instead of lying in bed for 2 days, ill and wondering at the general malignancy of a Universe that would do this to me.

[Image: Collabora Multimedians, looking for a canal]

Nevertheless, I had a great time meeting all the cool folks at Collabora Multimedia at our company meeting, and managed to trundle out for my Rygel + DLNA lightning talk (more updates on this in a subsequent post). Things did get better after that, and I had an amazing week-long vacation in Germany, and now I’m back at home with my ninja skillz fully recharged!

Site moved to Linode

I finally got tired of how slow NearlyFreeSpeech.net is (it’s still a fantastically affordable service – you get what you pay for and more), and moved to a Linode. Setup and migration was dead simple, and I’m really happy with the instance I’m on (and extremely happy about their awesome service). Do feel free to drop me a note if anything on the site doesn’t work for you.

p.s.: This also adds to my count of Gentoo boxen. :)

Pure EFI Linux Boot on Macbooks

My company was kind enough to get me a Macbook Pro (the 13.3-inch “5.5” variant). It is an awesome piece of hardware! (Especially after my own PoS HP laptop, which I’ve been cussing at for a while now.)

That said, I still don’t like the idea of running a proprietary operating system on it (as beautiful as OS X is ;)), so I continue to happily use Gentoo. The standard amd64 install works just fine with some minor hiccups (keyboard doesn’t work on the LiveCD, kernel only shows a console with vesafb).

The one thing that did bother me is BIOS emulation. For those coming from the PC world: Macs don’t have a BIOS. They run something called EFI, which is significantly more advanced (though I think the jury’s out on quirkiness issues, and Linus certainly doesn’t approve of the added complexity).

Anyway, in order to support booting other OSes (=> Windows) exactly as they would boot on PCs, Apple has added a BIOS emulation layer. This is how Ubuntu (at least as of 9.10) boots on Macbooks. Given that both the bootloader (be it Grub2 or elilo) and the Linux kernel support booting in an EFI environment, it rubbed me the wrong way to take the easy way out and just boot them in BIOS mode. There is a reasonable technical argument for this – I see no good reason to add one more layer of software (read: bugs) when there is no need at all. After a lot of pain, I did manage to make Linux boot in EFI-only mode. There is not enough (accurate, easily-findable) documentation out there, so this is hard-won knowledge. :) I’m putting this up to help others avoid that pain.

Here’s what I did (I might be missing some stuff since this was done almost a month ago). The basic boot steps look something like this:

  1. EFI firmware starts on boot
  2. Starts rEFIt, a program that extends the default bootloader to provide a nice bootloader menu, shell, etc.
  3. Scans FAT/HFS partitions (no ext* support, despite some claims on the Internet) for bootable partitions (i.e. having a /efi/… directory with valid boot images)
  4. Runs the Grub2 EFI image from a FAT partition
  5. Loads the Linux kernel (and initrd/initramfs if any) from /boot
  6. Kernel boots normally with whatever your root partition is

Now you could use elilo instead of Grub2, but I found it to not work well (or at all) for me, so I just used Grub2 (1.97.1, with some minor modifications that just add an “efi” USE flag to build with --with-platform=efi). While I could make /boot a FAT partition, this would break the installkernel script (it’s run by make install in your kernel source directory), which makes symlinks for your latest/previous kernel images.

Instructions for installing the Grub2 EFI image are here. Just ignore the “bless” instructions (that’s for OS X), and put the EFI image and other stuff in something like /efi/grub (the /efi is mandatory). You can create a basic config file using grub-mkconfig and then tweak it to taste. The Correct Way™ to do this, though, is to edit the files in /etc/grub.d/.
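
For reference, the generated configuration is a normal Grub2 grub.cfg. The entry below only illustrates the shape of the result; the device numbers, kernel version and root partition are placeholders for whatever your layout actually is, not a copy of my config.

    # Illustrative grub.cfg entry; adjust devices and versions to your setup.
    # Depending on your Grub2 build you may also need insmod lines (e.g. ext2).
    set timeout=5
    set default=0

    menuentry "Gentoo Linux (EFI)" {
        # the partition that holds /boot
        set root=(hd0,3)
        linux /boot/vmlinuz-2.6.35-gentoo root=/dev/sda3 ro
        initrd /boot/initramfs-2.6.35-gentoo
    }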

Of course, you need to enable EFI support in the kernel, but that’s it. With this, you’re all set for the (slightly obsessive-compulsive) satisfaction of not having to enable yet another layer to support yet another proprietary interface, neither of which you have visibility or control over.

FOSSKriti ’10 \o/

Three days left to the event I helped start 3 years ago. That’s right, folks, FOSSKriti ’10 is here!

We started this event in 2008 because there was a huge gap between the open source world and academia in India. The aim was to expose enthusiastic students to what the F/OSS world has to offer, how they can participate in the community, contribute, and get that warm, happy feeling in the gut. :) And I’ve met enough people who make me believe that we have been successful in this endeavour.

One complaint we always get is that we are not newbie-friendly. From the beginning, we took an active call to channel our limited resources towards encouraging people to just start hacking and contributing (that’s the important part, remember?), which necessarily meant that if this was your first exposure to the F/OSS world, things could be a bit overwhelming.

This time, the organising team is trying something different. FOSSKriti ’10 will loosely have 2 tracks. One track is like previous editions of the event – it’s meant for people who are comfortable with coding, possibly already F/OSS hackers. The agenda, I am given to understand, is “Come – Sit – Fork – Code – Share LuLZ.” :) And being privy to what some of the FOSSKriti veterans are planning, I am extremely excited about what this track will bring.

The second track is meant for students who are enthusiastic about F/OSS but need a little more guidance with getting started. There will be talks and workshops to help them get bootstrapped, and hopefully provide them with sufficient resources to take the ball and run. The schedule for this track is already up.

This is not complete, so keep an eye out for updates. Unfortunately, again, I will not be able to make it to the event. :( If you’re a student in India, interested in F/OSS, possibly not too far from Kanpur, this is an event you cannot miss!

(Gst)Discovering Vala

My exploits at Collabora Multimedia currently involve a brief detour into hacking on Rygel, specifically improving the DLNA profile name guessing. We wanted to use Edward’s work on GstDiscoverer, and Rygel is written in Vala, so the first thing to do was write Vala bindings for GstDiscoverer. This turned out to be both easier and more difficult than initially thought. :)

There’s a nice tutorial for generating Vala bindings that serves as a good starting point. The process basically involves running a tool called vapigen, which examines your headers and libraries, and generates a GIR file from them (it’s an XML file describing your GObject-based API). It then converts this GIR file into a “VAPI” file which describes the API in a format that Vala can understand. Sounds simple, doesn’t it?

Now if only it were that simple. :) The introspected file is not perfect, which means you need to manually annotate some bits to make sure the generated VAPI accurately represents the C API. These annotations are specified in a metadata file. You need to include things like “the string returned by this function must be freed by the caller” (that’s a transfer_ownership), or that object type Foo is derived from object type FooDaddy (specified using the base_class directive). Not all these directives are documented, so you might need to dig around the sources (specifically, vapigen/valagidlparser.vala) and ask on IRC (#vala on irc.gnome.org).
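
To give a flavour of what such a metadata file looks like: each line names a symbol and attaches one or more attributes to it. The lines below are purely illustrative (the symbol names are hypothetical, and the exact set of directives depends on your vapigen version, so check the parser sources mentioned above), but this is the general shape:

    Foo base_class="FooDaddy"
    foo_get_name transfer_ownership="1"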

All said and done, the process really is quite straightforward. The work is in my gst-convenience repository right now (it should be merged with the main repository soon). I really must thank all the folks on #vala who helped me with all the questions and some of the bugs that I discovered. Saved me a lot of frustration!

I’ve already got Rygel using these bindings, though that’s not been integrated yet. More updates in days to come.

*Gasp*

It’s been a long time since I wrote about anything non-specific. I guess it’s a common symptom amongst bloggers from my generation (age jokes will draw ire). For me, it just stopped being so important to say anything, what with every major insight seeming pretty clichéd and/or obvious by the time I got down to writing it. Can’t you just see the brain cells sizzle away?

As I was wont to do in days gone by, let me start with books. I spent a long time re-reading books, sometimes more than once. It seemed to take a lot less effort than reading new books. Ditto movies, when I had enough time to watch them. Of course, bite-sized chunks of TV series were easier to grok as well. With a conscious effort now, I’ve started reading more new stuff, the latest of these being Neil Gaiman’s American Gods. Gripping book, that. Incidentally, if you’ve not read Richard Morgan’s Altered Carbon, you’re missing out on what I am certain is one of the best SF works in recent times. I have been promised access to the remainder of his novels featuring Takeshi Kovacs, so I’m looking forward to that.

On the music front, I sold off my guitar a year and a half ago (the neck warped too often) and now intend to buy a new one. I saw a fairly decent and not-too-pricey Granada at Furtado’s, and it’s now somewhere near the top of my acquisition list. As before, my tastes are (a) relatively esoteric, and (b) temporally out of phase. My latest obsession is the Yeah Yeah Yeahs, about whom Rolling Stone has a brilliantly written article that borders on idolatry.

I’ve moved to a new house, and unpacking and settling in are proceeding slowly. All the running around before and after the move means that I still have to establish a proper working environment and discipline. I’ve been trying (with only moderate success) to maintain a reasonably “normal” diurnal cycle. That should get easier as things settle further.

I’ve got a couple of work related blog posts lined up in my head, but that’ll have to wait for later. I hope this particular post heralds some exercising of atrophied writing muscles.

Good night, world.

GNOME Day @ FOSS.IN/2009

Yes, yes, I know this post is a tad late, but hey, it’s still the right year. ;)

As Srini had announced, Dec 5th was GNOME Day at FOSS.IN this year. We kicked the day off with Shreyas giving a developer’s introduction to GNOME 3.0. This was followed by another well-received talk by Srini on the Moblin 2 UI and Clutter.

By the end of lunch, it turned out our already packed schedule had picked up some new additions from the other enthusiastic GNOME folks around! The afternoon session was kicked off by Arun ‘vimzard’ Chaganty introducing what newbies need to know to dive into GNOME development. Tobias Mueller followed with a talk about GNOME Bugsquadding. Sayamindu and Dimitris then took the stage for a short L10n talk. Next up was a talk about Anjal by Puthali. Olivier then gave a hackers’ introduction to Empathy/Telepathy, and Srinidhi and Bharath did a quick introduction to using the openSUSE Build Service.

Wait, I’m not done yet. :) The final session on GNOME Performance was a 4-hit combo with me giving a quick introduction to Sysprof, Lennart introducing mutrace, Krishnan giving a pretty wow introduction to using DTrace to profile GNOME, and Dhaval giving a short introduction to how cgroups could help make GNOME more responsive.

Phew! That was a long and awesome day, with some icing on the cake in the form of stickers and T-shirts. These were possible thanks to the GNOME Foundation, so a huge thanks to them!

[Image: Sponsored by GNOME!]

The times they are a-changin’

Yesterday was my last day at NVidia. I worked with the Embedded Software team there for the last 15 months, specifically on the system software for a Linux-based stack that you will see some time next year. I had a great time there, learning new things and doing everything from tweaking bit-banging I²C implementations with a CRO to tracking down alleged compiler bugs (I’m looking at you, -fstrict-aliasing) by wading through ARM assembly.

As some of you might already know, my next step, which has had me bouncing off the walls for the last month, is to join the great folks at Collabora Multimedia working on the PulseAudio sound server. I’ll be working from home here, in Bangalore (in your face, 1.5-hour commute!). It is incredibly exciting for me to be working with a talented bunch of folks and actively contributing to open source software as part of my work!

More updates as they happen. :)