
PulseConf Schedule

David has now published a tentative schedule for the PulseAudio Mini-conference (I’m just going to call it PulseConf — so much easier on the tongue).

For the lazy, these are some of the topics we’ll be covering:

  • Vision and mission — where we are and where we want to be
  • Improving our patch review process
  • Routing infrastructure
  • Improving low latency behaviour
  • Revisiting system- and user-modes
  • Devices with dynamic capabilities
  • Improving surround sound behaviour
  • Separating configuration for hardware adaptation
  • Better drain/underrun reporting behaviour

Phew — and there are more topics that we probably will not have time to deal with!

For those of you who cannot attend, the Linaro Connect folks (who are graciously hosting us) are planning on running Google+ Hangouts for their sessions. Hopefully we'll be able to do the same for our proceedings. Watch this space for details!

p.s.: A big thank you to my employer Collabora for sponsoring my travel to the conference.

PulseConf!

For those of you who missed it, your friendly neighbourhood PulseAudio hackers are converging on Copenhagen in a month to discuss, plan and hack on the future of PulseAudio.

We’re doing this for the first time, so I’m super-excited! David has posted details so if this is of interest to you, you should definitely join us!

Picking your battles

Most of you have no doubt already seen that Mozilla will be changing their position on H.264 support for HTML5 video in future releases. This is an extremely important decision that I’ve been hoping to see for a while now, and I am really glad this is being done.

There is no doubt that we need patent-unencumbered standards for web codecs (or as close to that as is possible given the dismal patent ecology today), and while much giddy anticipation followed Google/On2’s release of VP8 into the open, I don’t believe it ever made sense to expect the codec landscape to change drastically in the short timespan everyone expected. There’s a lot of hardware and software out there that needs to change (seen any SoCs with VP8 support yet?), not to mention the interests of the MPEG-LA mafia consortium.

I love Firefox, both as a product and for what it means for an open web (for those of you who know me, this might be hard to believe given all my ranting, but it’s true!). I’m glad Mozilla chose to live to fight another day rather than go out in a blaze of glory (or a flicker of irrelevance).

p.s.: these are my views and do not necessarily represent those of my employer

p.p.s.: Alessandro’s been doing some great work to get the GStreamer multimedia backend going again (this makes so much more sense than going the NIH route!)

Talk video from GstConf 2011

For those of you who were interested but couldn’t make it to the GStreamer Conference this year, the cool folks at Ubicast have got the talk videos up (they can be streamed or downloaded).

Among these is my talk about recent developments in the PulseAudio world.

Notes from the Prague Audio BoFs

As I’d blogged about earlier, we had a couple of Audio BoF sessions in Prague last week. Here’s a summary of what was discussed. I’ve ordered items by relevance rather than chronologically to make for easier reading. I think I have everything covered; I’ll update this post if one of the attendees points out something I missed or got wrong.

  • Surround: There were a number of topics that came up with regards to multichannel/surround support:

    • There seems to be some lack of uniformity of channel maps, particularly in non-HDA hardware. While it was agreed that this should be fixed, we need some directed testing and bug reports to actually be able to fix this.

    • Multichannel mixers, while theoretically supported, are not actually exposed by any current drivers. It was suggested that these could be added without breaking existing applications by exposing new MC mixers with device names corresponding to the surround PCM device (like “surround51”).

    • We need a way to query channel maps on a given PCM. This will be implemented as a new ALSA API that can be called after the PCM is opened (see the sketch after this list). (AI: Takashi)

    • It would be good to have a way to configure the channel map as well (if supported by the hardware?). The suggestion was to do this as was done in PulseAudio, where an arbitrary channel map could be specified. (NB: is there hardware that supports multiple/arbitrary channel maps? If yes, how do we handle this?)

  • Routing: Unsurprisingly, we came back to the problem of building a simplified mixer graph for PulseAudio.

    • The current status is that ALSA builds a simplified mixer for use by userspace, and PulseAudio further simplifies this by means of some name-based guessing.

    • PulseAudio would ideally like a simplified version of the original mixer graph, but something more complete than what we get now.

    • However, since PulseAudio has fairly unique requirements about what information it wants, it probably makes sense to have ALSA provide the entire graph and leave the simplification task to PulseAudio (discussion on this approach continues).

    • There was no consensus on who would do this or how it should be done (creating a new serialisation format, exposing what HDA provides, adding node metadata to ALSA mixer controls, or something else altogether).

    • As an interim step, it was agreed that it would be possible to provide ordering in the simplified ALSA mixer (that is, add metadata to the control to signal what control comes “before” it and what comes “after” it). This should go some way in making the PA mixer simplification logic simpler and more robust. (AI: Takashi)

  • HDMI: A couple of things came up in discussion about the status of HDMI.

    • There was a question about the reliability of ELD information as this will be used more in future versions of PulseAudio. There did not appear to be conclusive evidence in either direction, so we will probably assume that it is reliable and deal with reliability problems as they arise.

    • It was mentioned that it might be desirable to only expose the ALSA device if a receiver is plugged in. (Doing this in PulseAudio instead had been mooted earlier as an alternative.) One thing to consider with this approach is dealing with a device switch on the receiver side: doing so without notifying userspace was generally agreed to be a bad idea.

  • Jack detection: The long-standing debate on exposing jacks as input devices or ALSA controls came to a conclusion, with the resolution being that jacks would be exposed as ALSA controls. This requires a change in the kernel (and potentially alsa-lib?) which should not be too complex. Actual buttons (such as volume/mute control) will continue to be input devices. Once this is done, the pending jack detection patches will be adapted to use the new interface. (AI: Takashi (patches are in a branch already!), David)

  • UCM: Another long-standing issue was the merging of the ALSA UCM patches into PulseAudio. Most of the problems thus far had been due to an incomplete understanding of how ALSA and PA concepts mapped to each other. Some consensus was arrived at in this regard after a lengthy discussion:

    • As is the case now, every ALSA PCM maps to a PA sink

    • Each UCM verb maps to a PA card profile

    • Each combination of UCM devices that can be used concurrently maps to a PA port

    • Each UCM modifier is mapped to an intended role on the corresponding sink

    The code should (as is in the patches currently submitted) be part of the PA ALSA module, and there will be changes required to use the UCM-specified mixer list instead of PA’s guessing mechanism. (AI: ???)

    (NB: It was mentioned that PulseAudio needs to support multiple intended roles for a sink/source. This is actually already supported — the intended roles property is a whitespace-separated list of roles)

    (NB2: There was further discussion with the Linaro folks this week about the UCM bits, and there’s likely going to be an IRC/phone gathering to clarify things further in the near future)

  • GStreamer latency settings: We currently do not actually use PulseAudio’s power saving features from GStreamer, for several reasons. Suggestions to overcome this were mooted. While no definite agreement was reached, one suggestion was to add a “powersave” profile to pulsesink that chose higher latency/buffer-time values. Players would need to set this if they are not using features that break when these values are raised.

  • Corking: The statelessness of the current corking mechanism was discussed in one session, and again between the PulseAudio developers later. The problem is that we need to be able to track cork/uncork reasons more closely; this would give us the metadata needed to make policy decisions without breaking streams. For example, if PA corks a music stream due to an incoming call, and the user plays and then pauses the music during the call, we must not uncork the music stream when the call ends. We intend to deal with this with two changes (sketched in code after this list):

    • We need to add a per-cause cork/uncork request count

    • We need to associate a “generation” with cork/uncork requests, so certain conditions (such as user intervention) can bump the generation counter, and uncork requests corresponding to old cork requests will be ignored

    This will make it possible to track the various bits of state we need to do the right thing for cases like the one mentioned before.
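A couple of sketches to make the above concrete. First, the channel map query item: this is roughly what such an API could look like, modelled on the snd_pcm_query_chmaps() family that alsa-lib eventually gained. Take it as an illustration of the direction, not something agreed verbatim at the BoF; there is also a matching snd_pcm_set_chmap() for the configuration question raised above.

/* Sketch: querying channel maps via alsa-lib's chmap API (needs a
 * version of alsa-lib with chmap support). Minimal error handling. */
#include <stdio.h>
#include <stdlib.h>
#include <alsa/asoundlib.h>

static void dump_chmaps(snd_pcm_t *pcm)
{
    snd_pcm_chmap_query_t **maps;
    snd_pcm_chmap_t *cur;
    unsigned int i, c;

    /* All the channel maps this PCM can offer */
    maps = snd_pcm_query_chmaps(pcm);
    for (i = 0; maps && maps[i]; i++) {
        printf("map %u (%u channels):", i, maps[i]->map.channels);
        for (c = 0; c < maps[i]->map.channels; c++)
            printf(" %s", snd_pcm_chmap_name(maps[i]->map.pos[c]));
        printf("\n");
    }
    snd_pcm_free_chmaps(maps);

    /* The map currently in effect on this (opened) PCM */
    cur = snd_pcm_get_chmap(pcm);
    if (cur) {
        printf("current map: %u channels\n", cur->channels);
        free(cur); /* the returned map is malloc()ed */
    }
}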
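Second, the corking changes: a rough sketch of the per-stream state we have in mind. Every name here is hypothetical; nothing like this exists in PulseAudio yet.

/* Hypothetical sketch of per-cause cork tracking with generations;
 * these names do not exist in PulseAudio, they just illustrate the idea. */
#include <stdbool.h>
#include <stdint.h>

enum cork_cause {
    CORK_CAUSE_PHONE,  /* e.g. an incoming call */
    CORK_CAUSE_POLICY, /* some other policy decision */
    CORK_CAUSE_MAX
};

struct cork_state {
    unsigned requests[CORK_CAUSE_MAX];   /* outstanding corks per cause */
    uint64_t generation[CORK_CAUSE_MAX]; /* generation when the cork happened */
    uint64_t current_generation;         /* bumped on user intervention */
};

/* Cork for a given cause, remembering the current generation */
static void cork(struct cork_state *s, enum cork_cause c)
{
    s->requests[c]++;
    s->generation[c] = s->current_generation;
}

/* Returns true if the stream should actually be uncorked: an uncork
 * matching a cork from an older generation (i.e. the user intervened
 * in between, say by pausing manually) is ignored. */
static bool uncork(struct cork_state *s, enum cork_cause c)
{
    if (s->requests[c] == 0)
        return false;
    s->requests[c]--;
    return s->generation[c] == s->current_generation;
}

In the call scenario above, the user's manual pause would bump current_generation, so the uncork request that arrives when the call ends no longer matches and the music stays paused.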

So that’s that — lots of things discussed, lots of things to do! Thanks to everyone who came for participating.

PragueAudio

For those who are in Prague for GstConf, LinuxCon, ELCE, etc. — don’t forget we’ve a couple of interesting audio-related things happening:

  • An Audio BoF on the evening of Oct 23rd, to discuss pending issues that cut across the stack (ALSA, PulseAudio, GStreamer and friends)

  • An ALSA BoF, possibly on the morning of Oct 26th, on the state and future of the HDA driver

If you’re around and interested, do drop in!

More conferences than you can shake a stick at

Prague is an interesting place to be at this time of the year — next week it’s playing host to the Real Time Linux Workshop. The week after that, we have the Kernel Summit, GStreamer Conference, Embedded Linux Conference Europe and LinuxCon Europe. I’m going to be at the last 3, and there’s some great audio stuff happening!

On the evening of Oct 23rd, we’re having an Audio BoF to discuss pending issues that cut across the stack — ALSA, PulseAudio, GStreamer and any other similar bits that people have complaints about.

Then there’s GstConf, where there are going to be a bunch of awesome talks. Also included is a talk by yours truly about recent developments in the PulseAudio world.

At some point during that week, possibly Oct 26th morning, plans are afoot to have an ALSA BoF to discuss the state and future of the HDA driver.

There are also rumours of excellent beer that will need to be scrupulously verified. ;)

It’s going to be a pretty exciting week!

1.w00t!

As Colin Guthrie reports, PulseAudio 1.0 is now out the door! There’s a lot of new things in the release, and we should be getting a much more regular release schedule going. Head over to the full release notes for more details.

A lot of people have contributed to this release, and thanks to them all. Special props to Colin for all the patch-herding, tireless help, and code ninjutsu!

p.s.: Gentoo packages are already available, of course. :)

Hello … hello … hello!

I have a secret to confess. I’ve spent a great deal of time over the last few months talking to myself. I can’t say I haven’t enjoyed it — it turns out my capacity to entertain myself is far greater than initially suspected. But I hear you ask … why?

Here at Collabora, I’ve been building on Wim’s previous work on adding echo cancellation to PulseAudio. Thanks go to Intel for supporting us in continuing this work. Before too long, all this work will be trickling down to your favourite Linux distribution and all your friends will stop hating you.

First, a quick recap on what acoustic echo cancellation (AEC) is. If you already know this, you might want to skip this paragraph and the next. Say you’re on your laptop, and you receive a voice call from your friend. You don’t have a pair of headphones lying around, so you’re just going to use your laptop’s built-in speakers and mic. When your friend speaks, what she says is played out the speakers, but is also captured by the microphone and she gets to hear herself speak, albeit a short while (a few hundred milliseconds or more) later. This is called acoustic echo, and can be frustrating enough to make conversation nigh impossible. There are other types of echo for phone systems, but that’s not interesting to us at the moment.

This problem is common on pretty much all devices that you use to make phone calls. Astute readers will ask why they don’t actually face this problem on their phone. That’s because your phone (or, if you have a cheap phone, your phone company) has special software hidden away that removes the echo before sending your signal along to the other end. On laptops, which are general-purpose hardware, the job of echo cancellation is left to either your operating system (Windows XP onwards, for example) or your chat client (Skype, for example) to provide.

On Linux, we implement echo cancellation as a PulseAudio module (code-ninja Wim Taymans wrote this last year). We use the Speex DSP library to perform the actual echo cancellation. The code’s quite modular, so it’s not very hard to plug in alternate echo cancellers (we even include an alternate implementation, which isn’t quite as effective as Speex).

Recently, we plugged in some more bits from the Speex library to do noise suppression and digital gain control (so you can quit twiddling with your mic volume for the other end to be able to hear you). We also added a bunch of fixes to reduce CPU consumption significantly — this should be good enough to run on a netbook and reasonably recent ARM platforms.
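To give you an idea of what the module is doing under the hood, here is a bare-bones sketch of the Speex DSP calls involved. The frame size and echo tail length below are illustrative; the real module also has to deal with resampling, channel maps and suchlike.

/* Bare-bones sketch of echo cancellation plus noise suppression and
 * AGC using the Speex DSP library. Sizes below are illustrative. */
#include <speex/speex_echo.h>
#include <speex/speex_preprocess.h>

#define RATE       32000
#define FRAME_SIZE 320   /* 10 ms at 32 kHz */
#define FILTER_LEN 3200  /* 100 ms echo tail */

static SpeexEchoState *echo;
static SpeexPreprocessState *pre;

void aec_init(void)
{
    spx_int32_t on = 1, rate = RATE;

    echo = speex_echo_state_init(FRAME_SIZE, FILTER_LEN);
    speex_echo_ctl(echo, SPEEX_ECHO_SET_SAMPLING_RATE, &rate);

    pre = speex_preprocess_state_init(FRAME_SIZE, RATE);
    /* Let the preprocessor see the echo state, and enable denoising
     * and automatic gain control */
    speex_preprocess_ctl(pre, SPEEX_PREPROCESS_SET_ECHO_STATE, echo);
    speex_preprocess_ctl(pre, SPEEX_PREPROCESS_SET_DENOISE, &on);
    speex_preprocess_ctl(pre, SPEEX_PREPROCESS_SET_AGC, &on);
}

/* rec: one frame from the mic; play: the frame we just played out;
 * out: the echo-cancelled, denoised result */
void aec_process(const spx_int16_t *rec, const spx_int16_t *play,
                 spx_int16_t *out)
{
    speex_echo_cancellation(echo, rec, play, out);
    speex_preprocess_run(pre, out);
}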

While all this sounds nice, I think a demo would sound (haha!) nicer …

Without AEC: /downloads/pulseaudio/aec/call-no-aec (or download ogg, aac)

With AEC: /downloads/pulseaudio/aec/call-with-aec (or download ogg, aac)

This is a recording of a call between my laptop and N900. The laptop is playing audio out the speakers and recording with the built-in mic. What you hear is the conversation as heard on the N900.

All this echo cancelling goodness will come to a Linux distribution near you in the upcoming 1.0 release of PulseAudio. The next version of the GNOME IM client, Empathy (3.2), will actually make use of it. In due time, we intend to make it so that all voice applications end up using echo cancellation (so if you’re writing a VoIP application and don’t want it, you need to set a special stream property to disable it — filter.suppress="echo-cancel").
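In case you're wondering how you'd set that property, here's a minimal sketch using the PulseAudio client API. Error handling is omitted, and the stream name and sample spec are made up:

/* Sketch: creating a stream with echo cancellation suppressed via the
 * filter.suppress property. Stream name and sample spec are made up. */
#include <pulse/pulseaudio.h>

pa_stream *make_voip_stream(pa_context *ctx)
{
    pa_sample_spec ss = {
        .format = PA_SAMPLE_S16LE,
        .rate = 32000,
        .channels = 1,
    };
    pa_proplist *props = pa_proplist_new();
    pa_stream *stream;

    /* Mark this as a phone stream, but ask for the echo-cancel
     * filter to not be applied to it */
    pa_proplist_sets(props, PA_PROP_MEDIA_ROLE, "phone");
    pa_proplist_sets(props, "filter.suppress", "echo-cancel");

    stream = pa_stream_new_with_proplist(ctx, "my voip stream", &ss,
                                         NULL, props);
    pa_proplist_free(props); /* the stream keeps its own copy */

    return stream;
}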

For the impatient among you, you can try all this out by getting recent testing versions of PulseAudio (I know packages are available for Ubuntu, Debian, Gentoo and Mageia at least). To force your phone streams to use echo cancellation, just run pactl load-module module-echo-cancel, and you’re done.

There’s still some work to be done, refining quality and using other AEC implementations (in the short-term, the WebRTC one looks promising). Things don’t work at all if you’re using different devices for playback and capture (e.g. laptop speakers and webcam mic). These are things that will be addressed in coming weeks and months.

More PulseAudio power goodness

[tl;dr — if you’re using GNOME or a GStreamer-based player, not using the Rhythmbox crossfading backend, and want to try to save ~0.5 W of power, jump to the end of the post]

Lennart pointed to another blog post about actually putting PulseAudio’s power-saving capabilities to use on your system. That post provides a hack-ish way to increase buffering in PulseAudio to the maximum possible, reducing the number of wakeups. I’m going to talk about that a bit.

Summarising the basic idea, we want music players to decode a large chunk of data and give it to PA so that we can then fill up ALSA’s hardware buffer, sleep till it’s almost completely consumed, fill it again, sleep, repeat. More details in this post from Lennart.

The native GNOME audio/video players don’t talk to PulseAudio directly — they use GStreamer, which has a pulsesink element that actually talks to PulseAudio. We could configure things so that we send a large amount of data (say 2 seconds’ worth) to PulseAudio, sleep, and then wake up periodically to push out more. Now say that in the audio player (Rhythmbox, for instance), the user hits next, prev, or pause. We need to effect this change immediately, even though we’ve already sent out 2 seconds of data (it would suck if you hit pause and the actual pause happened 2 seconds later, wouldn’t it?). PulseAudio already solves this because it can internally “rewind” the buffer and overwrite it if required. GStreamer can and does take advantage of this by sending pause and other control messages out of band from the data.

This all works well for relatively simple GStreamer pipelines. However, if you want to do something more complicated, like Rhythmbox’ crossfading backend, things start to break. PulseAudio doesn’t offer an API to do fades, and since we don’t do rewinds in GStreamer, we need to apply effects such as fades with a latency equal to the amount of buffering we’re asking PulseAudio to do. This makes for unhappy users.

Well, all is not as bleak as it seems. There was some discussion on the PA mailing list, and the need for a proper fade API (really, a generic effects API) is clear. There have even been attempts to solve this in GStreamer.

But you want to save 0.5 W of power now! Okay: if you’re not using the Rhythmbox crossfading backend (or are okay with disabling it), this will work for Rhythmbox, Banshee, pre-3.0 Totem, and really any GNOMEy player that uses gconfaudiosink (which will soon be replaced by gsettingsaudiosink, I guess). Just run this on the command line:

gconftool-2 --type string \
    --set /system/gstreamer/0.10/default/musicaudiosink \
    "pulsesink latency-time=100000 buffer-time=2000000"

On my machine, this brings down the number of wakeups per second caused by alsa-sink to ~2.7 (corresponding nicely to the ~350 ms of hardware buffer that I have). With Totem 3.0, this may or may not work, depending on whether your distribution gives gconfaudiosink a higher rank than pulsesink.
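If you're writing your own GStreamer-based player rather than tweaking GNOME's default sink, you can set the same properties on pulsesink directly. Here's a rough sketch against the 0.10 API (the file URI is a placeholder):

/* Sketch: asking pulsesink for large buffers from your own code
 * (GStreamer 0.10 API, as used by the players mentioned above). */
#include <gst/gst.h>

int main(int argc, char *argv[])
{
    GstElement *play, *sink;

    gst_init(&argc, &argv);

    play = gst_element_factory_make("playbin2", NULL);
    sink = gst_element_factory_make("pulsesink", NULL);

    /* Microseconds, matching the gconftool-2 example above: 2 s of
     * buffering, with wakeups at most every 100 ms */
    g_object_set(sink, "buffer-time", (gint64) 2000000,
                 "latency-time", (gint64) 100000, NULL);
    g_object_set(play, "audio-sink", sink, NULL);

    /* Placeholder URI */
    g_object_set(play, "uri", "file:///path/to/music.ogg", NULL);
    gst_element_set_state(play, GST_STATE_PLAYING);

    g_main_loop_run(g_main_loop_new(NULL, FALSE));
    return 0;
}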

This is clearly just a stop-gap till we can get things done the Right Way™ at the system level, so really, if things break, you get to keep the pieces. If you need to, you can undo this change by running the same command without the latency-time=… and buffer-time=… bits. That said, if something does break, do leave a comment below so I can add it to the list of things that we need to test the final solution with.