
GNOME3 Power Settings

Richard Hughes recently posted about the new GNOME3 Power Settings design, which got a lot of people (myself included) hot and bothered. As I said in my comment, I think a lot of people prefer that their laptop stay on when the lid is closed. There are clearly others who, like myself, would prefer to keep the normal behaviour when an external monitor is plugged in.

So Nirbheek Chauhan and I designed a couple of quick mockups that I think would work well. These don't address customising behaviour with an external monitor, but I don't feel nearly as strongly about that being hidden in dconf-editor as I do about the rest.

My mockup

Nirbheek's mockup

While Nirbheek’s version looks decidedly prettier, I think the meaning of the icons is not absolutely obvious. This might be solvable by some explanatory text above and mouse-overs.

Doing all this, though, made it clear that it's really hard to design a UI that will please enough people, and really easy to make assumptions about what “people” want and how they use their computers. So kudos to the GNOME3 UI designers for taking up this difficult job, and I hope they take all the feedback flying around in a positive spirit (even if the messages often aren't quite so positive-sounding ;) )

Updates from the Rygel + DLNA world

Things have been awfully quiet since Zeeshan's post about the work we've been doing on DLNA support in Rygel. Since I've released GUPnP DLNA 0.3.0, I thought this would be a good time to explain what we've been up to. This is also a sort of expansion of my lightning talk from GUADEC, since 5 minutes weren't enough to establish all the background I would have liked to.

For those who don’t know, DLNA is a consortium that aims to standardise how the various media devices around your house communicate with each other (that is, your home theater, TV, laptop, phone, tablet, …). One piece of this problem is having a standard way of identifying the type of a file and communicating it between devices. For example, say your laptop (a MediaServer in DLNA parlance) is sharing your movies with your TV (a MediaPlayer), and the TV can only play H.264-encoded video up to 720p. When the MediaServer shares files, it needs to provide enough information about each file for the MediaPlayer to know whether it can play it, so that it can be intelligent about which files show up in its UI.

The DLNA specification achieves this by using “profiles”. For each media format supported by the specification, a number of profiles are defined that identify the audio/video codec used, the container, and (in a sense) the complexity of decoding the file. (For multimedia geeks, that translates to things like the codec profile, resolution, framerate/samplerate, bitrate, etc.)

For example, a file with the DLNA profile name AAC_ISO_320 is an audio file encoded with the AAC codec, in an MP4 container (that’s the “ISO” part), with a bitrate of at most 320 kbps. Similarly, a file with the profile AVC_MP4_MP_SD_MPEG1_L3 contains H.264 (a.k.a. AVC) video encoded in the Main Profile at resolutions up to 720×576, with MP3 audio, in an MP4 container (there are more restrictions, but I don’t want to swamp you with details).

So now we have a problem statement: given a media file, find the corresponding DLNA profile. It’s easiest to break this problem into three pieces:

  1. Discovery: First we need to get all the metadata that the DLNA specification requires us to check. Using GStreamer and Edward’s gst-convenience library, getting the metadata we needed was reasonably simple. Where the metadata wasn’t available (mostly codec profiles and bitrate), I’ve tried to expose the required data from the corresponding GStreamer plugin.

  2. DLNA Profiles: I won’t rant much about the DLNA specification, because that’s a whole series of blog posts in itself, but the spec is sometimes overly restrictive and doesn’t support a number of popular formats (Matroska, AVI, DivX, Ogg, Theora). With this in mind, we decided it would be nice to have a generic way to store the constraints from the DLNA specification and use them in our library. We chose to store the profile constraints in XML files, which allows non-programmers to tweak the profile data when their devices resort to non-standard methods to work around the limitations of the DLNA spec.

  3. Matching: With 1. and 2. above in place, we just need some glue code to take the metadata from discovery and match it against the profiles loaded from disk. For the GStreamer hackers in the audience, the profile storage format we chose looks suspiciously like serialized GstCaps, so matching lets us reuse some GStreamer code (there’s a small sketch of the idea just after this list). Another advantage of this will be revealed soon.
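To make that concrete, here’s a minimal sketch of the “profiles as caps” idea. This is my own illustration, not GUPnP DLNA’s actual code: the caps strings below are invented (loosely inspired by AAC_ISO_320), and the real profile data lives in the XML files mentioned above.

```c
/* Hypothetical illustration of matching via a caps subset check; the caps
 * strings are made up and are not real GUPnP DLNA profile definitions. */
#include <gst/gst.h>

int
main (int argc, char **argv)
{
  GstCaps *profile_caps, *stream_caps;

  gst_init (&argc, &argv);

  /* Constraints a profile might impose: AAC audio in a sample-rate range,
   * with the bitrate capped at 320 kbps. */
  profile_caps = gst_caps_from_string ("audio/mpeg, mpegversion=(int)4, "
      "rate=(int)[ 8000, 48000 ], bitrate=(int)[ 0, 320000 ]");

  /* Metadata for one stream, as the discovery step might report it. */
  stream_caps = gst_caps_from_string ("audio/mpeg, mpegversion=(int)4, "
      "rate=(int)44100, bitrate=(int)256000");

  /* The stream matches the profile if its caps are a subset of the
   * profile's caps, i.e. every constraint is satisfied. */
  if (gst_caps_is_subset (stream_caps, profile_caps))
    g_print ("Matches the profile\n");
  else
    g_print ("No match\n");

  gst_caps_unref (stream_caps);
  gst_caps_unref (profile_caps);
  return 0;
}
```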

So there you have it, folks: that covers the essence of what GUPnP DLNA does. So what’s next?

  1. Frankie Says Relax: Since the DLNA spec can often be too strict about what media is supported, we’ve decided to introduce a soon-to-come “relaxed mode” which should make a lot more of your media match some profile.

  2. I Can Haz Transcoding: While considering how to store the DLNA profiles loaded from the XML on disk, we chose to use GstEncodingProfiles from the gst-convenience library, since the restrictions defined by the DLNA spec closely resemble the kind of restrictions you’d expect to set while encoding a file (codec, bitrate, resolution, etc. again). One nice fallout of this is that, in theory, it should be easy to reuse these to transcode media that doesn’t match any profile (the encodebin plugin from gst-convenience makes this a piece of cake; there’s a rough sketch below). That is, if GStreamer can play your media, Rygel will be able to stream it.
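To show what that looks like, here’s a rough sketch of describing a target format as an encoding profile and handing it to encodebin. The profile values are invented for illustration (not taken from the DLNA spec, the XML profile data, or Rygel), and the exact function names follow the GStreamer encoding-profile API, so they may differ a little from what gst-convenience ships; treat it as a sketch of the idea rather than copy-paste code.

```c
/* Hypothetical illustration: describe a target format as an encoding profile
 * and hand it to encodebin, which builds the encoder/muxer pipeline for it.
 * The caps and profile names are invented for this example. */
#include <gst/gst.h>
#include <gst/pbutils/pbutils.h>

int
main (int argc, char **argv)
{
  GstEncodingContainerProfile *container;
  GstCaps *container_caps, *audio_caps;
  GstElement *encodebin;

  gst_init (&argc, &argv);

  /* Target container: MP4 (the "ISO" part of AAC_ISO_320). */
  container_caps = gst_caps_from_string ("video/quicktime, variant=(string)iso");
  container = gst_encoding_container_profile_new ("mp4-aac",
      "Illustrative MP4/AAC target", container_caps, NULL);

  /* One AAC audio stream inside that container. */
  audio_caps = gst_caps_from_string ("audio/mpeg, mpegversion=(int)4");
  gst_encoding_container_profile_add_profile (container,
      (GstEncodingProfile *) gst_encoding_audio_profile_new (audio_caps,
          NULL, NULL, 0));

  /* encodebin picks suitable encoder and muxer elements from the profile;
   * you would then link decoded pads to it and its src pad to a sink. */
  encodebin = gst_element_factory_make ("encodebin", NULL);
  g_object_set (encodebin, "profile", container, NULL);

  gst_caps_unref (container_caps);
  gst_caps_unref (audio_caps);
  gst_object_unref (encodebin);
  return 0;
}
```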

Apart from this, we’ll be adding support for more profiles, extending the API as more uses arise, adding more automated tests, and on and on. If you’re interested in the code, check out (sic) the repository on Gitorious.

GUADEC 2010 :(

Hopefully that title was provocative enough. ;) No, GUADEC seemed to be a smashing success. If only I had been able to attend instead of lying in bed for 2 days, ill and wondering at the general malignancy of a Universe that would do this to me.

Collabora Multimedians, looking for a canal

Nevertheless, I had a great time meeting all the cool folks at Collabora Multimedia at our company meeting, and I managed to trundle out for my Rygel + DLNA lightning talk (more on that in a subsequent post). Things did get better afterwards: I had an amazing week-long vacation in Germany, and now I’m back home with my ninja skillz fully recharged!

(Gst)Discovering Vala

My exploits at Collabora Multimedia currently involve a brief detour into hacking on Rygel, specifically improving the DLNA profile name guessing. We wanted to use Edward's work on GstDiscoverer, and Rygel is written in Vala, so the first thing to do was to write Vala bindings for GstDiscoverer. This turned out to be both easier and more difficult than I initially thought. :)

There’s a nice tutorial for generating Vala bindings that serves as a good starting point. The process basically involves running a tool called vapigen, which examines your headers and libraries, and generates a GIR file from them (it’s an XML file describing your GObject-based API). It then converts this GIR file into a “VAPI” file which describes the API in a format that Vala can understand. Sounds simple, doesn’t it?

Now if only it were that simple :). The introspected file is not perfect, which means you need to manually annotate some bits to make sure the generated VAPI accurately represents the C API. These annotations are specified in a metadata file. You need to include things like “the string returned by this function must be freed by the caller” (that’s a transfer_ownership), or “object type Foo is derived from object type FooDaddy” (that’s the base_class directive). Not all of these directives are documented, so you might need to grok around the sources (specifically, vapigen/valagidlparser.vala) and ask on IRC (#vala on irc.gnome.org).
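Just to show the shape of it, a metadata file is a plain list of lines pairing a C symbol or type name with the directives to apply to it. Reusing the hypothetical Foo/FooDaddy example from above (these are not entries from the real GstDiscoverer metadata), entries look roughly like this:

```
foo_get_name transfer_ownership="1"
Foo base_class="FooDaddy"
```

The first line tells vapigen that the string returned by foo_get_name() belongs to the caller; the second tells it that Foo derives from FooDaddy.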

All said and done, the process really is quite straightforward. The work is in my gst-convenience repository right now (it should be merged with the main repository soon). I really must thank all the folks on #vala who helped me with all my questions and some of the bugs that I discovered. Saved me a lot of frustration!

I’ve already got Rygel using these bindings, though that’s not been integrated yet. More updates in days to come.

GNOME Day @ FOSS.IN/2009

Yes, yes, I know this post is a tad late, but hey, it’s still the right year. ;)

As Srini had announced, Dec 5th was GNOME Day at FOSS.IN this year. We kicked the day off with Shreyas giving a developer’s introduction to GNOME 3.0. This was followed by another well-received talk by Srini on the Moblin 2 UI and Clutter.

By the end of lunch, it turned out our already packed schedule had gained some new additions from the other enthusiastic GNOME folks around! The afternoon session was kicked off by Arun ‘vimzard’ Chaganty introducing what newbies need to know to dive into GNOME development. Tobias Mueller followed with a talk about GNOME Bugsquadding. Sayamindu and Dimitris then took the stage for a short L10n talk. Next up was a talk about Anjal by Puthali. Olivier then gave a hackers’ introduction to Empathy/Telepathy, and Srinidhi and Bharath did a quick introduction to using the openSUSE Build Service.

Wait, I’m not done yet. :) The final session on GNOME Performance was a 4-hit combo with me giving a quick introduction to Sysprof, Lennart introducing mutrace, Krishnan giving a pretty wow introduction to using DTrace to profile GNOME, and Dhaval giving a short introduction to how cgroups could help make GNOME more responsive.

Phew! That was a long and awesome day, with some icing on the cake in the form of stickers and T-shirts, made possible by the GNOME Foundation, so a huge thanks to them!

Sponsored by GNOME!

The times they are a-changin’

Yesterday was my last day at NVidia. I’ve worked with the Embedded Software team there for the last 15 months, specifically on the system software for a Linux-based stack that you will see some time next year. I’ve had a great time there, learning new things and doing everything from tweaking bit-banging I²C implementations with a CRO to tracking down alleged compiler bugs (I’m looking at you, -fstrict-aliasing) by wading through ARM assembly.

As some of you might already know, my next step, which has had me bouncing off the walls for the last month, is to join the great folks at Collabora Multimedia working on the PulseAudio sound server. I’ll be working from home here, in Bangalore (in your face, 1.5-hour commute!). It is incredibly exciting for me to be working with a talented bunch of folks and actively contributing to open source software as part of my work!

More updates as they happen. :)

It’s pronounced Gwahdec

I’ve been terrible about it, but here’s the big update — I just got back today after spending the last week at the Gran Canaria Desktop Summit, location of the first co-located GUADEC and aKademy. It’s been amazing, and I don’t know where to start. Let’s try the beginning.

The GNOME Foundation has funded a very significant part of my expenses for this trip (making it possible at all), so a huge thanks to the Travel Committee for giving me this opportunity. :) To summarise …

Sponsored by GNOME!

Shreyas and I reached Gran Canaria early in the morning of Day 1, but were too tired to make it to the first 2 keynotes. We woke up and had breakfast by the beach (the apartment we were in was <100 steps from the beach, and the auditorium was a 20-minute walk down the same beach — photos soon).

We did make it to Richard Stallman’s talk. It was quite generic (not surprisingly, about software freedom) and nothing new to most of us. Of note were the great vitriol towards C# and the heathens who use it to create new software, and a rather terrible and inappropriate attempt at humour that has been blogged about to death.

I met a huge number of people subsequently, some who’ve been at FOSS.IN before, and many whom I only knew by their online presence. The second half of the day was devoted to a number of Lightning Talks. I was pleasantly surprised to see the amount of work happening on semantic-aware projects. Good stuff.

Way too sleepy to continue making sense. More details on the subsequent days, photos and so forth to come soon.

Edit: In the name of avoiding further procrastination, here are the photos.