Hi,
a few “silly questions”, if you wouldn’t mind advising a new starter. From what I have read, I can’t fully grok the working model for balenaSound. I can’t tell whether balenaSound talks only to the local Raspberry Pis, or to the cloud as well. Does music go from the phone to the master Pi, or through a balenaCloud server?
Further working-model grokking problems:
I have all the music on a server, and I want to stream it to the dumb kitchen hifi. I am setting this up for my wife, as she likes her music in the kitchen when she cooks (I prefer silence; I can’t focus on the recipe otherwise). She likes to just stream her music from YouTube over her crap phone speakers, but we have all her CD music on the server, and a nice hifi there in the kitchen. She won’t use any new screen I add to the hifi. I think balena fits in here, but I am not sure. I want the control app on the phone to show the album art when she uses it, and the server to send the music to a Pi in the kitchen, driving the dumb hifi.
Looking at all the balenaSound tutorials, it looks like the music has to be on the phone. Can the phone app mount a shared network drive, or play from media server software to a balenaSound Pi?
If so, would this still use balenaCloud, and why?
thanks again
Chris
balena is a platform for managing your fleet, so we help with getting an application onto the device and monitoring it. The data generated or consumed by your application doesn’t pass through balenaCloud, so the music doesn’t go through our servers.
The communication between multiple raspberry pis is local to your network.
If your wife is playing music over YouTube and is using an iOS device, she could change the speaker to point to the AirPlay part of balenaSound. That way she controls the music from her phone, but the sound is output on the speakers connected to the Pi. This is limited to iOS devices, of course, but for other devices you have Bluetooth, which is also supported by balenaSound.
About your music on the server: you can use UPnP. You can use something like BubbleUPnP on Android, or JuP&P on iOS, to control the source (your server) and the destination (balenaSound on a Pi), as well as the music-related controls. It’s an open standard, and you should be able to find other apps that work for you as well.
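Under the hood, a UPnP control point (like BubbleUPnP) finds both the media server and the renderer on your LAN using SSDP discovery. Here is a minimal Python sketch, just to make the mechanism concrete: it only builds and parses the discovery messages, without opening any sockets, and the IP address in the sample response is made up.

```python
# Sketch of SSDP, the discovery step of UPnP. A control point multicasts
# an M-SEARCH request; devices reply with a LOCATION header pointing at
# their device description XML, which lists the services they offer.

SSDP_ADDR = "239.255.255.250"
SSDP_PORT = 1900

def build_msearch(search_target: str, mx: int = 2) -> str:
    """Build an SSDP M-SEARCH request for a given device/service type."""
    return (
        "M-SEARCH * HTTP/1.1\r\n"
        f"HOST: {SSDP_ADDR}:{SSDP_PORT}\r\n"
        'MAN: "ssdp:discover"\r\n'
        f"MX: {mx}\r\n"
        f"ST: {search_target}\r\n"
        "\r\n"
    )

def parse_ssdp_response(raw: str) -> dict:
    """Extract headers from an SSDP response into a dict."""
    headers = {}
    for line in raw.split("\r\n")[1:]:
        if ":" in line:
            key, _, value = line.partition(":")
            headers[key.strip().upper()] = value.strip()
    return headers

# A control point searching for renderers (e.g. the balenaSound Pi):
msg = build_msearch("urn:schemas-upnp-org:device:MediaRenderer:1")
```

A real control point sends `msg` over UDP multicast, then fetches the description XML from each `LOCATION` it receives back. Everything stays on the local network.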
Ahh, that’s amazing. Thanks for such a quick and complete reply. There’s lots I don’t know, but that really connected the dots for me.
My wife uses YouTube because it is easier than navigating a network-aware file browser app to the server and playing through that. If the songs already appeared in an app, she could more easily use the music she has accumulated over the years, rather than just being another Google data point listening to whatever they suggest. I can’t convince her the Apple price tag is worth it; I’m new to a MacBook myself. We are both on Android phones for now.
Just so I get this right: I would try BubbleUPnP (or equivalent) on our Android phones and connect it to the server. The server would run Madsonic/Subsonic/?Emby/?Plex and would follow instructions from the UPnP app on the phone to play sound. I would set up balenaSound on the Pi on the kitchen hifi, and once it was plumbed into the server, it would play whatever the server sends to it…? So I am using three physical devices with a different program on each, and thanks to the magic of UPnP this all just works. Is that right?
So in my situation, where I have three devices (a control client, a media-sharing server, and a playback client), I can use UPnP.
So UPnP is a server/client model too… not peer-to-peer. So the UPnP server sits on the playback device, not the media-sharing server. In this model, which bit has authority to command which bit? The UPnP client on the phone has authority to issue UPnP commands to the media-sharing server, through the UPnP server on the playback device…
…at what point do the UPnP commands enter the balena system? On arrival at the media-sharing server?
Just so I understand the alternative you propose: the BubbleUPnP client on the phone talks to a UPnP server (also BubbleUPnP?) on the Pi. This sends a UPnP request to the server, which is running Emby. The Emby server receives the UPnP request and responds by streaming the music to the requesting device, the Pi. The requesting device, the Pi in the kitchen, runs the Emby client; it receives the stream and sends it to audio (DAC HAT or 3.5mm jack). Is that right?
So in this case, Emby on the playback device and on the server can pretend it is a two-device system, with the instruction to play entering the Emby client/server model at the client end, just as it would in the two-device model.
So does the Emby client on the Pi have some means of receiving UPnP commands from a UPnP server on the same machine? And in this case, Emby would be sending album-art metadata not to the phone but to the playback device (which will most likely be headless). So what would the end user see on the phone in that situation?
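To check my own mental model, I tried sketching (in Python, just to make the flow concrete) the kind of SOAP message a UPnP control point would send to the renderer’s AVTransport service to tell it what to play. The server address and track path below are made up for illustration; please correct me if this isn’t how the command actually travels.

```python
# Sketch of the SOAP body a UPnP control point (the phone app) sends to
# the renderer (the Pi) via its AVTransport service. The renderer then
# fetches the media itself from the URL it was given, i.e. the commands
# go phone -> renderer, and the audio goes server -> renderer.

def build_set_uri_body(media_url: str) -> str:
    """SOAP body for an AVTransport SetAVTransportURI call (instance 0)."""
    return (
        '<?xml version="1.0" encoding="utf-8"?>\n'
        '<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" '
        's:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">\n'
        "  <s:Body>\n"
        '    <u:SetAVTransportURI '
        'xmlns:u="urn:schemas-upnp-org:service:AVTransport:1">\n'
        "      <InstanceID>0</InstanceID>\n"
        f"      <CurrentURI>{media_url}</CurrentURI>\n"
        "      <CurrentURIMetaData></CurrentURIMetaData>\n"
        "    </u:SetAVTransportURI>\n"
        "  </s:Body>\n"
        "</s:Envelope>\n"
    )

# Hypothetical server address and track path, for illustration only:
body = build_set_uri_body(
    "http://media-server.local:8096/stream/track123.flac"
)
```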
Really sorry for revealing my ignorance. If there is a favourite link you think I should have read, please feel free to post it. Thanks again.