SDR on the mesh

zl4dk
SDR on the mesh
OK, I know it's not really an EmComm application, but I have an RTL-SDR dongle connected to a Raspberry Pi on the mesh. I know this could just as easily be on the internet (well, almost), but the main reason for putting it on the mesh is to give other locals a reason to consider getting active on the mesh too.

I installed some software called YouSDR to run this. It provides a simple web-page front end and allows either raw SDR data to be fed to a program like SDR# or full decoding down to audio after choosing a frequency and mode. YouSDR uses "Icecast" and "DarkIce" to send the audio data. It seems to work OK feeding SDR#, but when simply sending audio there is a significant delay (about a minute) which gets progressively longer as time goes by. This is very annoying, as changes to the frequency, mode, squelch, etc. appear to take a long time to take effect.
 
I wonder if anyone knows what might be causing this problem, or if anyone has tried YouSDR or other similar software.
 
Regards
David ZL4DK  
kg9dw
Everything works ok locally? So connecting to the pi on the same LAN doesn't experience the delays?
zl4dk
SDR on the Mesh

I was assuming no. I have no reason to believe the problem has anything to do with the mesh; I was expecting it to be something to do with Icecast/DarkIce. Normally I connect my laptop hardwired to a different node, but the two nodes are linked via DtD. I have also run the system remotely over both a pair of WRT54 HSMM nodes and a pair of AREDN 3 GHz NanoStations, with the same result. I will try connecting my laptop to the same node as the Raspberry Pi and see if there is any difference, though I'm not expecting any. I should also try listening to the audio directly from the Pi somehow, to confirm the issue isn't even further back than I thought.

zl4dk
SDR on the Mesh
I connected directly to the router that has the Raspberry Pi on it and found the delay is still there. I suspect part of the problem is that I am using TCP, whereas UDP would be far better for sending data in real time. It seems most audio "IP broadcasting" systems use TCP and are content to accept a bit of delay rather than skip lost bits. I wonder if there is a way to mimic an IP phone on the Pi and serve the audio to a phone number? That might fix the problem.
KE2N
SDR

I think it has been mentioned that a mesh network is not suitable for UDP type streaming. TCP applications need to build up a buffer so that packets can be put back in the correct order when they finally arrive. 

The usual way to stream SDR over a network is to stream broadband IQ data (of a selected bandwidth) over the network and then run a client SDR program at the listener's end.  That way, although there may be substantial delay in the end-to-end data, the tuning/mode/etc adjustments are instantaneous.  If you have a mesh network that can handle video you probably can stream a substantial chunk of spectrum.
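As a rough back-of-the-envelope sketch of why streaming a selected chunk of IQ is so much heavier than streaming decoded audio (the sample rates and bit depths below are illustrative assumptions, not YouSDR's or Cloud-SDR's actual settings):

```python
# Back-of-the-envelope link rates for raw IQ vs. decoded audio streaming.
# The sample rates and bit depths are illustrative assumptions.

def stream_mbps(sample_rate_hz, bytes_per_sample, channels):
    """Return the raw stream rate in megabits per second."""
    return sample_rate_hz * bytes_per_sample * channels * 8 / 1e6

# RTL-SDR-style IQ: 2.4 MS/s, 8-bit I plus 8-bit Q (two 1-byte channels)
iq_mbps = stream_mbps(2_400_000, 1, 2)

# Decoded mono audio: 22.05 kHz, 16-bit samples
audio_mbps = stream_mbps(22_050, 2, 1)

print(f"IQ:    {iq_mbps:.1f} Mbit/s")   # ~38.4 Mbit/s
print(f"Audio: {audio_mbps:.2f} Mbit/s")  # ~0.35 Mbit/s
```

On those assumed numbers, the IQ stream needs roughly a hundred times the throughput of the audio stream, which is why a mesh link that can handle video is the benchmark for streaming a substantial chunk of spectrum.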

I am sure there are a number of ways to do this - one I found today on Google is called Cloud-SDR

 

AE6XE
"multicast" would be problematic on an RF mesh network, if it's trying to use this.   Is the crunching of the SDR's IQ data being done on the PI?   Wild guess, does it have sufficient cpu horse power to keep up?  Would fit if the symptoms occur locally and across the network.
zl4dk
SDR on the Mesh
Ken,
YouSDR provides both an audio stream and a broadband IQ data stream. I don't seem to have significant delays when using the broadband IQ stream, only the audio stream, which is a little confusing. Perhaps there is a buffer somewhere that fills up extremely quickly with the broadband data, so the delay isn't noticed, whereas the slower audio stream takes longer to fill the buffer. It seems an audio stream shouldn't need to be delayed significantly, otherwise IP phones would be a lot worse than they are. Yes, I can see how there could be issues with UDP on a mesh system.
 
Joe,
YouSDR is really just a web-page front end and uses "rtl_fm" to decode the IQ data to audio. It is a simple and efficient IQ decoder for the Raspberry Pi. I ran "top" while it was running: rtl_fm uses 10-12% CPU time (depending on the mode), DarkIce another 4-5%, aplay 3%, and Icecast and all other processes less than 1%.
 
Regards
David
AE6XE
Interesting... Check out this way to pull IQ data from a Ubiquiti device:    https://wireless.wiki.kernel.org/en/users/drivers/ath9k/spectral_scan  
Maybe some programming is involved, but in theory it's a way to stream IQ data out somewhere, if there is a compatible SDR renderer.
K6AH
Ha!
Who will be first to write a request to Bloodhound?
zl4dk
I don't think, from reading the blurb, that this is actually IQ data, but rather a set of data indicating the strength of signals vs. frequency, so a plot of it would give a rough spectrum display. But then I had to read through it twice to figure this out, and I'm not sure if there is more to it yet.
AE6XE
zl4dk, you're right, that documentation shows how to get the power of each 'bin' directly. It relies on the Linux driver already calculating the power in each bin for us (no need to re-invent the wheel and re-calculate the power somewhere else). We wouldn't need to add code and dig deeper into the Linux driver to capture the IQ data directly, unless there were some other benefit in doing so.

These devices are SDRs, but specialized for 802.11 designs. They can be configured to get at least three levels of spectrum graphs; I hesitate to say 'frequency resolution'. This is my understanding of what it is doing and what we'd need to do. It may be helpful to the greater community in understanding what an 802.11 radio is doing; at least that's the story I'm sticking with after getting carried away with this description :) :

We configure the device at a channel center frequency, then set the channel width, for example 10 MHz. The chip down-converts (mixes) the signal to 'baseband' (0 Hz) for each channel it digitizes. A spectrum scan would change channels to capture data across the entire frequency range we want to look at. For every 10 MHz channel, digitization of the signal takes place in this baseband between 0 MHz and 10 MHz. This is done at baseband to avoid extremely high sampling rates. As many of you know, there's this guy named Nyquist who once proved we have to sample at 2x or more of the highest frequency component to digitize with no loss of information. Thus the clock sampling rate is 20 MHz, corresponding to the 10 MHz channel width setting.

We are now digitizing a 10 MHz chunk of spectrum. 64 samples are taken at a time (at this 20 MHz rate) and fed into the chip's hardwired 64-point FFT. This number-crunches the 64 samples and outputs 64 frequency-domain components of the signal for the time period over which those 64 samples were captured. It groups, or sums, the power of the frequency components of the received signal into one of these 64 'bins'. A bin's size is 10 MHz/64 = 156,250 Hz for the 10 MHz bandwidth. If we were doing 5 MHz channel widths, a bin would represent 5 MHz/64 = 78,125 Hz chunks, but the time period of the samples would also be different.
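The bin arithmetic above is easy to check; a minimal sketch (the function name is mine, the numbers are from the post):

```python
# Bin-width arithmetic: a 64-point FFT over a channel of width W splits
# it into W/64 Hz bins; Nyquist puts the sample clock at 2x the width.

def fft_bin_width_hz(channel_width_hz, fft_size=64):
    """Width in Hz of one FFT bin for the given channel width."""
    return channel_width_hz / fft_size

print(fft_bin_width_hz(10_000_000))  # 156250.0 for a 10 MHz channel
print(fft_bin_width_hz(5_000_000))   # 78125.0 for a 5 MHz channel
print(2 * 10_000_000)                # 20000000: the 20 MHz sample clock
```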

Putting it all together: if the center frequency is 2397 MHz with a 10 MHz bandwidth setting, there will be 64 bins, or possible frequency components, that the signal is broken down into. A given bin aggregates the power for its respective chunk of frequency. For example, a specific bin might cover 2397 MHz up to 156,250 Hz higher. If the received signal breaks down into a component at 2,397,000,005 Hz plus a component at 2,397,001,000 Hz, the power of both components is summed and shown in the same bin as one data point. We simply show this power level for this chunk of frequency, like a bar graph. If we go to 5 MHz channels, we can narrow the bars in the graph to 78,125 Hz chunks. The math likely needs everything to stay a power of 2 for this to work, so there's maybe no way to get smaller bin sizes (and at the trade-off of 64 samples in half the time, this doesn't really translate to better "frequency resolution" anyway).
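The two-tones-in-one-bin example can be sketched as a simple frequency-to-bin mapping (the helper name is mine; the frequencies are the ones in the post):

```python
# Two signal components only ~1 kHz apart both land in the same
# 156,250 Hz bin, so the scan reports one summed power value for them.

CENTER_HZ = 2_397_000_000        # example center frequency from the post
BIN_WIDTH_HZ = 10_000_000 / 64   # 156,250 Hz for a 10 MHz channel

def bin_index(freq_hz, center_hz=CENTER_HZ, bin_width=BIN_WIDTH_HZ):
    """Map an absolute frequency to its FFT bin number above the center."""
    return int((freq_hz - center_hz) // bin_width)

print(bin_index(2_397_000_005))  # 5 Hz above center -> bin 0
print(bin_index(2_397_001_000))  # 1 kHz above center -> bin 0 as well
print(bin_index(2_397_200_000))  # 200 kHz above center -> bin 1
```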

There would be a data stream of power datapoints for each of the 64 bins across all the 20/10/5 MHz channels being scanned. Each frequency-bin datapoint covers the time period it takes to get 64 samples at the channel-width rate. We just need a graphical imager to show this. It's no doubt more complicated to plug together with basic user input of the scan range and granularity.
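The "graphical imager" could start as something as simple as a text bar graph, one bar per bin; a minimal sketch (the power values are made up for illustration):

```python
# Render one scan line of per-bin powers as ASCII bars, one bar per
# FFT bin. The dB values below are invented for illustration.

def render_bins(powers_db, full_scale_db=60, width=20):
    """Return a multi-line ASCII bar graph of bin powers."""
    lines = []
    for i, p in enumerate(powers_db):
        bar = "#" * max(0, round(p / full_scale_db * width))
        lines.append(f"bin {i:2d} {p:5.1f} dB |{bar}")
    return "\n".join(lines)

print(render_bins([12.0, 45.5, 30.0, 3.0]))
```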

Joe AE6XE
zl4dk
OK Joe,

Yes, that's kind of what I expected, but the detail was a little unclear; your explanation helps. Regarding Nyquist and the IQ data: I thought that normally these channels were mixed down with zero Hz as the "centre frequency", so the (say) 10 MHz bandwidth ran from -5 MHz to +5 MHz, with the channel actually mixed down into two streams (I and Q) 90 degrees apart. Since negative frequencies are really just the same as positive frequencies but with a 180-degree phase shift, each of these streams can be sampled at 10 MHz. However, since we have two 10 MHz streams, it is the equivalent data to a 20 MHz sample rate. You could have just been avoiding describing this extra complication, or am I wrong and zero Hz is at the edge of the channel?
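The point about complex (I/Q) sampling covering negative frequencies can be shown with a tiny pure-Python DFT; a sketch with illustrative numbers (nothing here comes from the ath9k driver):

```python
# A complex (I/Q) sample stream at rate fs represents frequencies from
# -fs/2 to +fs/2 unambiguously, so two 10 MS/s streams carry the same
# information as one 20 MS/s real stream. Tiny DFT demo, pure Python.
import cmath

FS = 10_000_000      # complex sample rate, 10 MS/s
N = 10               # DFT length; bin spacing = FS/N = 1 MHz
TONE = -2_000_000    # a tone 2 MHz *below* the channel center

# Complex baseband samples: I = real part, Q = imaginary part.
samples = [cmath.exp(2j * cmath.pi * TONE * n / FS) for n in range(N)]

# N-point DFT; bins above N/2 correspond to negative frequencies.
spectrum = [abs(sum(x * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n, x in enumerate(samples))) for k in range(N)]

peak = max(range(N), key=lambda k: spectrum[k])
print(peak)   # bin 8, i.e. (8 - 10) * 1 MHz = -2 MHz
```

A real-valued stream at the same 10 MS/s could not distinguish the -2 MHz tone from a +2 MHz one, which is the ambiguity the second (Q) stream removes.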

Interestingly, I have an analogue 1296 MHz SSB rig that uses the "third method" of SSB modulation/demodulation, which uses this zero-Hz-in-the-centre-of-the-channel IF technique. It runs the I and Q channels through 1.2 kHz low-pass filters before mixing them back up to "real audio" with a 1.5 kHz oscillator to produce a 300 Hz to 2.7 kHz bandwidth, and then combines I and Q to get rid of all the extra frequencies produced by this process. Nowadays this could all be done digitally.
 
AE6XE
Yes, I am avoiding some of these issues to simplify it, and I'm also trying to remember all these DSP basics; it's been a while since I worked at this level. The dialog definitely helps to make sure I get it right. I had changed it from -5 to +5 before posting; someone's going to wonder, if it only has 5 MHz of maximum frequency, why we wouldn't sample at only a 10 MHz rate. It's complicated by FFT complex-conjugate issues, since this is a 'real' signal being received. We also have a lot to figure out: the chip is proprietary with NDA documentation, and what information is out there is incomplete, with zero explanations. While we assume it is a 64-point FFT, I couldn't rule out that it is 128 for some reason due to 802.11n 40 MHz channels, but then the 802.11 spec pairs 2 x 20 MHz channels. ... More mysteries to get to the bottom of :)

Joe
KD2EVR
Just making an uneducated wild guess, but is it possible that the rate at which the data is being streamed out is just slightly faster than the rate at which it is set to play back, so the input buffer grows over time?
zl4dk
KD2EVR
Yes, I suspect something like that is happening. However, that doesn't quite explain why there is a significant delay even right at the start. I am wondering if there is an oversized buffer somewhere that needs to fill up a bit before data gets sent. I hunted around in the configuration files, but the buffer sizes specified seemed reasonable. Still, it's worth another look.
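The clock-drift guess is easy to put rough numbers on; a sketch (the sample rates here are invented for illustration, not measured from DarkIce or aplay):

```python
# If the capture side produces samples slightly faster than the playback
# side consumes them, the player's input buffer (and thus the delay)
# grows linearly with elapsed time. Rates below are illustrative only.

def delay_growth_s(src_rate_hz, sink_rate_hz, elapsed_s):
    """Extra buffered audio, in seconds, accumulated after elapsed_s."""
    surplus_samples = (src_rate_hz - sink_rate_hz) * elapsed_s
    return surplus_samples / sink_rate_hz

# A ~0.1% clock mismatch adds roughly 3.6 s of delay per hour:
print(round(delay_growth_s(22_072, 22_050, 3600), 1))
```

Note this model only explains the delay *growing*; a large delay right at start-up would still point at a separate pre-fill buffer somewhere in the chain.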
