Snapcast client container direct to audio output in multi-room

I’m looking at the balenaSound architecture diagram here, and I had an idea I think might help with glitches and skips. I don’t know how feasible this is.

When running in any multi-room mode, route the multiroom-client container’s output directly to the sound output, rather than through the audio block. The audio block is a great management point for audio inputs, but there is only ever one audio output, and under normal usage patterns it doesn’t change during the lifecycle of a device. Upon reboot or a sound-supervisor reconfig event, the containers are restarted anyway, so reconnecting the correct blocks to the sound output would happen at that point, without any expected conflicts.
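To make that concrete, here’s a rough sketch of what the multiroom-client container’s start command might look like. I’m going from snapclient’s documented flags here (which vary a bit between Snapcast releases), not from balenaSound’s actual scripts, and `default` is just a placeholder for whatever ALSA device the DAC or HDMI output exposes:

```bash
# Today (roughly): snapclient plays into PulseAudio, and the audio
# block routes that stream to the hardware output.
snapclient --host localhost --player pulse

# Proposed: write straight to ALSA, bypassing the audio block.
# List the PCM devices the container can see, then pick one.
snapclient --list
snapclient --host localhost --player alsa --soundcard default
```

The multiroom-client container would presumably also need `/dev/snd` passed through, since right now PulseAudio owns the hardware.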

This would reduce the complexity of the audio routing, and it would open up the ability to run a dedicated multi-room client using only the multiroom-client container, which I expect would re-enable support all the way back to the RPi 1.x.

In standalone mode, audio would route through a plugin, then the audio block, then to the sound output, as it currently does.

In multi-room mode, audio would route through a plugin, then the audio block, then the multi-room server … received by the multi-room client, then directly to the sound output.
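The server side of that chain wouldn’t change at all. For reference, here’s roughly what it looks like in a stock Snapcast setup; the FIFO path is Snapcast’s default and balenaSound’s actual path and snapserver version may differ (newer releases configure this in snapserver.conf rather than on the command line):

```bash
# Standalone:  plugin -> audio block (PulseAudio) -> sound output
# Multi-room:  plugin -> audio block (PulseAudio) -> snapserver
#                ... network ... -> snapclient -> sound output (direct)

# snapserver streams whatever the audio block writes into a named pipe;
# /tmp/snapfifo is Snapcast's default location.
snapserver -s "pipe:///tmp/snapfifo?name=default"
```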

My rationale here is that I think PulseAudio is simply trying to do too much for the compute and I/O an RPi has available (or maybe there’s some kind of process or scheduling issue, but that’s beyond the scope of this post). I’ve noticed that even though a device like an RPi 4B has plenty of compute capability to keep up with balenaSound, it still glitches and skips, which (as far as I can tell) did not happen in the 2.4.x code.
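For anyone who wants to check this on their own device, a couple of stock Linux tools will show whether PulseAudio is actually struggling while audio plays; nothing here is balenaSound-specific:

```bash
# One-shot snapshot of PulseAudio's CPU share during playback.
top -b -n 1 -p "$(pidof pulseaudio)"

# pidstat (from the sysstat package): per-second CPU usage plus
# voluntary/involuntary context switches. Lots of involuntary switches
# would point at a scheduling problem rather than raw CPU load.
pidstat -u -w -p "$(pidof pulseaudio)" 1
```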

Thoughts?

Cheers!
Mark-

Hi Mark,

Thank you for this - you’ve clearly put a lot of time and effort into the idea and writing it up so succinctly. I took a look at the code to try and figure out whether this would work, and it looks plausible, but I don’t know the codebase well enough to consider all the angles. I’ve asked Tomas, who is the main dude for balenaSound, if he could come and have a look at your suggestion. I’ll gently prod him until he replies. :wink:

Thanks again,
Phil

Hey Mark! Thank you for the detailed explanation, it’s always nice to read well-thought-out posts.

I agree with your overall assessment and, to be honest, I don’t have any concrete arguments against this idea. The only concern I have is purely philosophical: we do want a solid audio block that can handle everything audio-related, and this would distract us from getting to the bottom of the problem :stuck_out_tongue: However, user experience should always come first, so if this provides a clear benefit I think we should go for it.

My gut feeling is that while handling output does give PulseAudio more to do, the overhead shouldn’t be too significant, but yeah, this is not based on science, rather a hunch and previous experience. Fortunately this should be easy enough to implement and test, so I can take a stab at it in the coming days and let you know (maybe you can help test?).

For reference, the other avenues we need to explore are process scheduling, process priority, and dealing with FIFO pipes (this problem does seem to get worse with multiroom), but these are more complex topics to debug and test.
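If anyone wants to poke at the scheduling angle in the meantime, something along these lines makes for a quick experiment. These are standard Linux tools, and the numbers are arbitrary starting points, not recommendations:

```bash
# Move snapclient into the real-time FIFO scheduling class so playback
# is less likely to be preempted. Needs root, or CAP_SYS_NICE when run
# inside a container.
chrt -f -p 50 "$(pidof snapclient)"

# Or, less drastically, raise its conventional priority.
renice -n -10 -p "$(pidof snapclient)"
```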

Anyways, I’ll let you know when I get the time to try this out so you can also test (unless you want to go for it yourself, in which case feel free!).

If possible, can you raise a GitHub issue in the sound repo for this? Thanks, Mark!

@tmigone,

I created feature request 354 for this.

Cheers!
Mark-