Tag: gstreamer

Hello … hello … hello!

I have a secret to confess. I’ve spent a great deal of time over the last few months talking to myself. I can’t say I haven’t enjoyed it — it turns out my capacity to entertain myself is far greater than I initially suspected. But I hear you ask … why?

Here at Collabora, I’ve been building on Wim’s previous work on adding echo cancellation to PulseAudio. Thanks go to Intel for supporting us in continuing this work. Before too long, all this work will be trickling down to your favourite Linux distribution and all your friends will stop hating you.

First, a quick recap on what acoustic echo cancellation (AEC) is. If you already know this, you might want to skip this paragraph and the next. Say you’re on your laptop, and you receive a voice call from your friend. You don’t have a pair of headphones lying around, so you’re just going to use your laptop’s built-in speakers and mic. When your friend speaks, what she says is played out of the speakers, but is also captured by the microphone, so she gets to hear herself speak, albeit a short while (a few hundred milliseconds or more) later. This is called acoustic echo, and it can be frustrating enough to make conversation nigh impossible. There are other types of echo in phone systems, but those aren’t interesting to us at the moment.

This problem is common on pretty much all devices that you use to make phone calls. Astute readers will ask why they don’t actually face this problem on their phone. That’s because your phone (or, if you have a cheap phone, your phone company) has special software hidden away that removes the echo before sending your signal along to the other end. On laptops, which are general-purpose hardware, the job of echo cancellation is left to either your operating system (Windows XP onwards, for example) or your chat client (Skype, for example) to provide.

On Linux, we implement echo cancellation as a PulseAudio module (code-ninja Wim Taymans wrote this last year). We use the Speex DSP library to perform the actual echo cancellation. The code’s quite modular, so it’s not very hard to plug in alternate echo cancellers (we even include an alternate implementation, which isn’t quite as effective as Speex).
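
For the curious, here’s roughly what driving the Speex canceller looks like — a minimal sketch, assuming 16 kHz mono audio and the libspeexdsp API; the actual module also has to deal with resampling, format conversion and timing drift:

#include <string.h>
#include <speex/speex_echo.h>

#define FRAME_SIZE 160    /* 10 ms of audio at 16 kHz */
#define FILTER_LEN 1600   /* model up to 100 ms of echo tail */

int main (void)
{
    spx_int16_t rec[FRAME_SIZE], play[FRAME_SIZE], out[FRAME_SIZE];
    SpeexEchoState *echo = speex_echo_state_init (FRAME_SIZE, FILTER_LEN);

    memset (rec, 0, sizeof (rec));
    memset (play, 0, sizeof (play));

    /* Run once per frame: "play" is what we sent to the speakers,
     * "rec" is what the mic captured, and "out" is rec with the
     * estimated echo subtracted. */
    speex_echo_cancellation (echo, rec, play, out);

    speex_echo_state_destroy (echo);
    return 0;
}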

Recently, we plugged in some more bits from the Speex library to do noise suppression and digital gain control (so you can stop twiddling with your mic volume just so the other end can hear you). We also added a bunch of fixes that reduce CPU consumption significantly — this should be good enough to run on a netbook and on reasonably recent ARM platforms.
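
Both of these come from Speex’s preprocessor, which runs on the captured audio after echo cancellation. A minimal sketch of wiring it up (same 16 kHz mono assumption as above):

#include <speex/speex_preprocess.h>

#define FRAME_SIZE  160     /* 10 ms of audio at 16 kHz */
#define SAMPLE_RATE 16000

int main (void)
{
    spx_int16_t frame[FRAME_SIZE] = { 0 };
    spx_int32_t on = 1;
    SpeexPreprocessState *pp = speex_preprocess_state_init (FRAME_SIZE,
                                                            SAMPLE_RATE);

    /* Noise suppression and automatic (digital) gain control. */
    speex_preprocess_ctl (pp, SPEEX_PREPROCESS_SET_DENOISE, &on);
    speex_preprocess_ctl (pp, SPEEX_PREPROCESS_SET_AGC, &on);

    /* Run once per captured frame, in place. */
    speex_preprocess_run (pp, frame);

    speex_preprocess_state_destroy (pp);
    return 0;
}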

While all this sounds nice, I think a demo would sound (haha!) nicer …

Without AEC: /downloads/pulseaudio/aec/call-no-aec (or download ogg, aac)

With AEC: /downloads/pulseaudio/aec/call-with-aec (or download ogg, aac)

This is a recording of a call between my laptop and an N900. The laptop is playing audio out of the speakers and recording with the built-in mic. What you hear is the conversation as heard on the N900.

All this echo cancelling goodness will come to a Linux distribution near you in the upcoming 1.0 release of PulseAudio. The next version of the GNOME IM client, Empathy (3.2), will actually make use of this functionality. In due time, we intend for all voice applications to end up using this functionality by default (so if you’re writing a VoIP application and don’t want it, you’ll need to set a special stream property, filter.suppress="echo-cancel", to opt out).
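
If you’re wondering what opting out looks like in code, here’s a minimal sketch using the PulseAudio client API — filter.suppress is the property described above; the media.role bit and the rest of the stream setup are illustrative boilerplate:

#include <pulse/pulseaudio.h>

/* Create a VoIP capture stream that explicitly opts out of the
 * echo-cancel filter. */
static pa_stream *
make_voip_stream (pa_context *ctx)
{
    static const pa_sample_spec ss = {
        .format = PA_SAMPLE_S16LE,
        .rate = 16000,
        .channels = 1
    };

    pa_proplist *props = pa_proplist_new ();

    /* Tag this as a phone stream ... */
    pa_proplist_sets (props, PA_PROP_MEDIA_ROLE, "phone");
    /* ... but tell PulseAudio we do our own echo cancellation. */
    pa_proplist_sets (props, "filter.suppress", "echo-cancel");

    pa_stream *s = pa_stream_new_with_proplist (ctx, "VoIP capture",
                                                &ss, NULL, props);
    pa_proplist_free (props);
    return s;
}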

For the impatient among you, you can try all this out by getting recent testing versions of PulseAudio (I know packages are available for Ubuntu, Debian, Gentoo and Mageia at least). To force your phone streams to use echo cancellation, just run pactl load-module module-echo-cancel, and you’re done.

There’s still some work to be done, refining quality and using other AEC implementations (in the short-term, the WebRTC one looks promising). Things don’t work at all if you’re using different devices for playback and capture (e.g. laptop speakers and webcam mic). These are things that will be addressed in coming weeks and months.

More PulseAudio power goodness

[tl;dr — if you’re using GNOME or a GStreamer-based player, are not using the Rhythmbox crossfading backend, and want to try to save ~0.5 W of power, jump to the end of the post]

Lennart pointed to another blog post about actually putting PulseAudio’s power-saving capabilities to use on your system. That post provides a hack-ish way to increase buffering in PulseAudio to the maximum possible, reducing the number of wakeups. I’m going to talk about that a bit.

Summarising the basic idea, we want music players to decode a large chunk of data and give it to PA so that we can then fill up ALSA’s hardware buffer, sleep till it’s almost completely consumed, fill it again, sleep, repeat. More details in this post from Lennart.

The native GNOME audio/video players don’t talk to PulseAudio directly — they use GStreamer, which has a pulsesink element that actually talks to PulseAudio. We could configure things so that we send a large amount of data (say, 2 seconds’ worth) to PulseAudio, sleep, and then wake up periodically to push out more. Now say the user hits next, prev or pause in the audio player (Rhythmbox, for example). We need to effect this change immediately, even though we’ve already sent out 2 seconds of data (it would suck if you hit pause and the actual pause happened 2 seconds later, wouldn’t it?). PulseAudio already solves this: it can internally “rewind” the buffer and overwrite it if required. GStreamer can and does take advantage of this by sending pause and other control messages out of band from the data.
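
In GStreamer terms, that deep buffering is just two ordinary pulsesink properties. A minimal sketch (times are in microseconds, and the values match the command given later in this post):

#include <gst/gst.h>

int main (int argc, char *argv[])
{
    gst_init (&argc, &argv);

    GstElement *sink = gst_element_factory_make ("pulsesink", NULL);

    /* Hand PulseAudio up to 2 seconds of audio at a time, and only
     * wake up to refill roughly every 100 ms. */
    g_object_set (sink,
                  "buffer-time",  (gint64) 2000000,
                  "latency-time", (gint64) 100000,
                  NULL);

    gst_object_unref (sink);
    return 0;
}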

This all works well for relatively simple GStreamer pipelines. However, if you want to do something more complicated, like Rhythmbox’s crossfading backend, things start to break. PulseAudio doesn’t offer an API to do fades, and since we don’t do rewinds in GStreamer, we have to apply effects such as fades with a latency equal to the amount of buffering we’re asking PulseAudio to do. This makes for unhappy users.

Well, all is not as bleak as it seems. There was some discussion on the PA mailing list, and the need for a proper fade API (really, a generic effects API) is clear. There have even been attempts to solve this in GStreamer.

But you want to save 0.5 W of power now! Okay, if you’re not using the Rhythmbox crossfading backend (or are okay with disabling it), you can get this behaviour in Rhythmbox, Banshee, pre-3.0 Totem (and really any GNOMEy player that uses gconfaudiosink, which will soon be replaced by gsettingsaudiosink, I guess) by running this on the command line:

gconftool-2 --type string \
    --set /system/gstreamer/0.10/default/musicaudiosink \
    "pulsesink latency-time=100000 buffer-time=2000000"

On my machine, this brings the number of wakeups per second caused by alsa-sink down to ~2.7 (corresponding nicely to the ~350 ms of hardware buffer that I have). With Totem 3.0, this may or may not work, depending on whether your distribution gives gconfaudiosink a higher rank than pulsesink.

This is clearly just a stop-gap till we can get things done the Right Way™ at the system level, so really, if things break, you get to keep the pieces. If you need to, you can undo this change by running the same command without the latency-time=… and buffer-time=… bits. That said, if something does break, do leave a comment below so I can add it to the list of things that we need to test the final solution with.
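
In other words, the undo is just:

gconftool-2 --type string \
    --set /system/gstreamer/0.10/default/musicaudiosink \
    "pulsesink"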

Updates from the Rygel + DLNA world

Things have been awfully quiet since Zeeshan posted about the work we’ve been doing on DLNA support in Rygel. Since I’ve just released GUPnP DLNA 0.3.0, I thought this was a good time to explain what we’ve been up to. This is also a sort of expansion of my Lightning Talk from GUADEC, since 5 minutes weren’t enough to establish all the background I would have liked to.

For those that don’t know, the DLNA is a consortium that aims to standardise how various media devices around your house communicate with each other (that is, your home theater, TV, laptop, phone, tablet, …). One piece of this problem is having a standard way of identifying the type of a file, and communicating this between devices. For example, say your laptop (a MediaServer in DLNA parlance) is sharing the movies you’ve got with your TV (a MediaPlayer), and your TV can only play up to 720p H.264-encoded video. When the MediaServer is sharing files, it needs to provide sufficient information about each file so that the MediaPlayer knows whether it can play it, and can thus be intelligent about which files show up in its UI.

The DLNA specification achieves this by using “profiles”. For each media format supported by the specification, a number of profiles are defined that identify the audio/video codec used, the container, and (in a sense) the complexity of decoding the file. (For multimedia geeks, that translates to things like the codec profile, resolution, framerate/samplerate, bitrate, etc.)

For example, if a file is indicated to be of a DLNA profile named AAC_ISO_320, this indicates that it is an audio file encoded with the AAC codec, contained in an MP4 container (that’s the “ISO”), with a bitrate of at most 320 kbps. Similarly, a file with profile AVC_MP4_MP_SD_MPEG1_L3 is a file with H.264 (a.k.a. AVC) video coded in the Main Profile at resolutions up to 720×576, with MP3 audio, in an MP4 container (there are more restrictions, but I don’t want to swamp you with details).

So now we have a problem statement – given a media file, we need to get the corresponding DLNA profile. It’s easiest to break this problem into 3 pieces:

  1. Discovery: First we need to get all the metadata that the DLNA specification requires us to check. Using GStreamer and Edward’s gst-convenience library, getting the metadata we needed was reasonably simple. Where the metadata wasn’t available (mostly codec profiles and bitrate), I’ve tried to expose the required data from the corresponding GStreamer plugin.

  2. DLNA Profiles: I won’t rant much about the DLNA specification, because that’s a whole series of blog posts in itself, but the spec is sometimes overly restrictive and doesn’t support a number of popular formats (Matroska, AVI, DivX, Ogg, Theora). With this in mind, we decided that it would be nice to have a generic way to store the constraints specified by the DLNA specification and use them in our library. We chose to store the profile constraints in XML files. This allows non-programmers to tweak the profile data when their devices resort to non-standard methods to work around the limitations of the DLNA spec.

  3. Matching: With 1. and 2. above in place, we just need some glue code to take the metadata from discovery and match it with the profiles loaded from disk. For the GStreamer hackers in the audience, the profile storage format we chose looks suspiciously like serialized GstCaps, so matching allows us to reuse some GStreamer code (see the sketch after this list). Another advantage of this will be revealed soon.
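
To give you an idea of that matching step, here’s a hypothetical sketch using plain serialized GstCaps — the real profile format carries more structure than this, so treat the caps strings as illustrative:

#include <gst/gst.h>

int main (int argc, char *argv[])
{
    gst_init (&argc, &argv);

    /* A constraint as it might be expressed in a profile XML file:
     * AAC audio in an MP4 container, at most 320 kbps. */
    GstCaps *constraint = gst_caps_from_string
        ("audio/mpeg, mpegversion=(int)4, bitrate=(int)[ 0, 320000 ]");

    /* The metadata that discovery reported for some file. */
    GstCaps *discovered = gst_caps_from_string
        ("audio/mpeg, mpegversion=(int)4, bitrate=(int)256000");

    if (gst_caps_can_intersect (discovered, constraint))
        g_print ("looks like AAC_ISO_320\n");

    gst_caps_unref (discovered);
    gst_caps_unref (constraint);
    return 0;
}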

So there you have it folks, this covers the essence of what GUPnP DLNA does. So what’s next?

  1. Frankie Says Relax: Since the DLNA spec can often be too strict about what media is supported, we’ve decided to introduce a soon-to-come “relaxed mode” which should make a lot more of your media match some profile.

  2. I Can Haz Transcoding: While considering how to store the DLNA profiles loaded from the XML on disk, we chose to use GstEncodingProfiles from the gst-convenience library, since the restrictions defined by the DLNA spec closely resemble the kind of restrictions you’d expect to set while encoding a file (codec, bitrate, resolution, etc. again). One nice fallout of this is that, in theory, it should be easy to reuse these to transcode media that doesn’t match any profile (the encodebin plugin from gst-convenience makes this a piece of cake; see the sketch below). That is, if GStreamer can play your media, Rygel will be able to stream it.
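
As a taste of what that looks like, here’s a sketch that builds a container profile by hand and hands it to encodebin. The API is shown as it later landed in GstPbutils (where the gst-convenience work ended up), and the caps are illustrative, not the exact AAC_ISO_320 definition:

#include <gst/gst.h>
#include <gst/pbutils/encoding-profile.h>

int main (int argc, char *argv[])
{
    gst_init (&argc, &argv);

    GstCaps *mp4 = gst_caps_from_string ("video/quicktime, variant=(string)iso");
    GstCaps *aac = gst_caps_from_string ("audio/mpeg, mpegversion=(int)4");

    /* An MP4 container holding a single AAC audio stream. */
    GstEncodingContainerProfile *profile =
        gst_encoding_container_profile_new ("AAC_ISO_320-ish",
                                            "AAC audio in MP4", mp4, NULL);
    gst_encoding_container_profile_add_profile (profile,
        (GstEncodingProfile *) gst_encoding_audio_profile_new (aac, NULL,
                                                               NULL, 0));

    /* encodebin builds the whole encoder/muxer chain from the profile. */
    GstElement *enc = gst_element_factory_make ("encodebin", NULL);
    g_object_set (enc, "profile", profile, NULL);

    gst_caps_unref (mp4);
    gst_caps_unref (aac);
    gst_object_unref (enc);
    return 0;
}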

Apart from this, we’ll be adding support for more profiles, extending the API as more uses arise, adding more automated tests, and on and on. If you’re interested in the code, check out (sic) the repository on Gitorious.

(Gst)Discovering Vala

My exploits at Collabora Multimedia currently involve a brief detour into hacking on Rygel, specifically improving the DLNA profile name guessing. We wanted to use Edward’s work on GstDiscoverer, and Rygel is written in Vala, so the first thing to do was write Vala bindings for GstDiscoverer. This turned out to be both easier and more difficult than initially thought. :)

There’s a nice tutorial for generating Vala bindings that serves as a good starting point. The process basically involves running a tool called vapigen, which examines your headers and libraries, and generates a GIR file from them (an XML file describing your GObject-based API). It then converts this GIR file into a “VAPI” file, which describes the API in a format that Vala can understand. Sounds simple, doesn’t it?

Now if only it were that simple :). The introspected file is not perfect, which means you need to manually annotate some bits to make sure the generated VAPI accurately represents the C API. These annotations are specified in a metadata file. You need to include things like “the string returned by this function must be freed by the caller” (that’s a transfer_ownership), or “object type Foo is derived from object type FooDaddy” (specified using the base_class directive). Not all these directives are documented, so you might need to dig around the sources (specifically, vapigen/valagidlparser.vala) and ask on IRC (#vala on irc.gnome.org).
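
To make this concrete, here’s what a couple of those annotations might look like. This is a hypothetical fragment (the symbol names are made up for illustration), but transfer_ownership and base_class are the real directives mentioned above:

# gst-discoverer.metadata (hypothetical)
gst_discoverer_discover_uri transfer_ownership="1"
GstDiscovererInformation base_class="GstMiniObject"

After editing the metadata, you re-run vapigen to regenerate the VAPI (assuming a GI file named gst-discoverer-0.10.gi):

vapigen --library gst-discoverer-0.10 --metadatadir . gst-discoverer-0.10.gi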

All said and done, the process really is quite straightforward. The work is in my gst-convenience repository right now (it should be merged with the main repository soon). I really must thank all the folks on #vala who helped me with all my questions and some of the bugs I discovered. Saved me a lot of frustration!

I’ve already got Rygel using these bindings, though that’s not been integrated yet. More updates in days to come.