
Use Aredn for backhaul?

KG4FJC
Use Aredn for backhaul?
Hey all, please point me in the right direction... first, let me say that I'm NOT an IT guy! I've taken on the challenge of moving our 220 repeaters over to microwave backhaul. Currently we have four sites that talk through a UHF hub. The goal is to have PTP links from each site back to our hub site, where we will have an internet connection. Internet is needed because we link to other 220 AllStar repeaters. Unfortunately, we can't use "hub and spoke" on 5 GHz due to the locations. We can, however, "chain" a few sites and bring them in to the hub site that way.

The primary goal is building a private RF network for backhaul and, later, some IP cams. Using Ubiquiti nodes I've got my test setup working great with them as a bridge. I did want to run the 5 GHz gear as AREDN, but I can't figure out how to set it up so our traffic passes as one large network (all 192.168.1.x/24) AND have AREDN available for others to use and, for example, view live cams. Any help? Be gentle! Thanks... de KG4FJC
K9CQB
Is this in Virginia?
KG4FJC,
Is this for Virginia? It sounds like we need to sit around a map and look at this. I'm not the best networking guy, but I know a few networking dudes.

-Damon K9CQB
KG4FJC
Yes, central and western VA.
Yes, central and western VA. Primary use is 220 backhaul and control, but I would like to incorporate AREDN if practical.
KE2N
repeater link

As much as I appreciate AREDN mesh, I would suggest that a mesh is not the best solution for a chain of nodes linking repeater audio. As the number of "hops" increases, the amount of variable latency increases quickly and you will start to get stuttering and dropouts in the audio. And if someone decides to do an "iperf" test on the system, things will really bog down. The same is true if someone is trying to stream a high-resolution video link.

The AREDN network is always a 10.x.x.x/8 configuration. If you need other numbering, you need to provide routers to accommodate that.

I would suggest using the OEM "transparent bridge" network mode and WDS wireless mode.  This has the absolute minimum overhead/latency when doing multi-hop links and you can keep whatever network numbering you want.

An example of a repeater linking network can be found in this presentation (this is a download link):
http://www.w3nd.org/wp-content/uploads/2014/09/Central-PA-IP-Network-Powerpoint-revised-08052014.pptx
You can contact Gary for more information (as shown in the pptx).

You could provide access points ("off-ramps") on such a network for AREDN nodes and, if done properly, limit the load imposed on the network by the mesh traffic.

The main disadvantage of using the OEM firmware is that the ham-only frequencies are not available. On the other hand, if the nodes are identified with call signs, you can use higher EIRP levels than are permitted in Part 15 operation. (You need to turn off encryption, of course.)
 

K9CQB
KE2N is the networking expert I was referring to.
Ken 'K-Jam' KE2N was exactly one of the networking experts I was referring to in my previous post. Thank you, Ken.

-Damon K9CQB
w8erd
audio and video distortions
This comment agrees with my own observations that mesh audio and video do not work well over multiple hops.
Is this a general fact?  Can this be quantized? If true, this needs to be made more widely known.

Bob W8ERD
K6AH
Supporting Isochronous traffic
Isochronous applications such as audio and video require links with 100% LQ/NLQ and relatively small variations in packet transit times. I have participated in TeamTalk conference calls which have worked great over 4-6 hops. I have also experienced poor quality over a single hop. To say "mesh audio and video do not work well over multiple hops" tells me the links need to be improved.

Another observation I have is that many AREDN implementers shoot for LQ/NLQ of something like 80% and call it a day. If you are going to support isochronous traffic, you need 100%.

Andre, K6AH
nc8q
Design network for 100% LQ/NLQ
+1
I find lower latency on 100% LQ/NLQ RF links and DtD links than I do on tunnels.
Retries do not seem to matter. Let the wireless driver choose the modulation and coding scheme for best throughput.
I find the greatest latency on tunnels (residential ISP service <> internet <> residential ISP service).
$0.02
Chuck

 
KE2N
figure of merit

I think you mean quantified.   I expect a proper analysis of a mesh network is something that graduate students work on to get their degrees.

But I would propose a simple rule of thumb where you take the LQ's in one direction and the NLQ's in the other direction and just multiply them together (where 90%=0.9 for example).

So, taking a 3-hop path: if all the LQ's are 100%, you get 1.0*1.0*1.0 = 1.0, or 100%. This is Andre's ideal system and will work well for everything out to many hops.

If the LQ's are 85%, you get 0.85*0.85*0.85 = 0.614, or 61% (marginal for VOIP).

If the LQ's are 70% for each link, you get 0.7*0.7*0.7 = 0.343, or 34% (OK for file transfer, but not good enough for VOIP).

(In general the LQ's will not be equal; they are chosen equal here for simplicity.)

This rapid fall-off with more hops is, I think, probably realistic, and it shows how link quality becomes really important as you get more hops.

It also shows why OLSR will try mightily to do it in two hops instead of three, even if it has to slow down to 1 Mb.
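If it helps to see that arithmetic in one place, here is a minimal Python sketch of the same rule of thumb (the percentages are just the example figures above, not measured data):

# Rule-of-thumb path figure of merit: multiply the per-hop link qualities.
# The values below are the example figures from this post, not measurements.
def path_fom(link_qualities):
    fom = 1.0
    for lq in link_qualities:
        fom *= lq
    return fom

for lqs in ([1.00, 1.00, 1.00],   # ideal 3-hop path
            [0.85, 0.85, 0.85],   # marginal for VOIP
            [0.70, 0.70, 0.70]):  # OK for file transfer, not VOIP
    print(lqs, "->", round(path_fom(lqs) * 100), "%")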

Ken

w8erd
Figure of Merit
Multiplying the LQ values sounds reasonable. But how can we be sure what path is being used? I can use traceroute to get info, but can I be sure that is actually the route used by the mesh?

Bob W8ERD
AA7AU
Mesh Path?

Q: Can I be sure that is actually the route used by the mesh?
A: Unless you truly have singular PTP connections (like long-haul PTP and some of the hub/spoke implementations), in a true mesh with multiple possible paths the path is continually and dynamically determined by what the mesh sees as the current conditions for each choice, based on the then-calculated "cost" (which can change at any time).

For monitoring changing paths over time for latency and packet loss, I use a windoze-based program called PingPlotter - which still has a free edition available at: http://www.pingplotter.com/products/free.html - I highly recommend it. There is also a free-trial of the more fully-featured paid version available.

Of course, this only works from the end-point where the windoze box is located. However, you might be able to locate a windoze box at another location on the mesh, and use remote desktop/xrdp to run it from there (works well for me).
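If you don't have a Windows box handy, a crude command-line substitute is to ping a node at intervals and log the round-trip time and losses. A minimal Python sketch, assuming a Linux host on the mesh (the node name is a placeholder to replace with your own):

#!/usr/bin/env python3
# Crude latency/loss logger: one ping per interval, prints RTT or "lost".
import re
import subprocess
import time

TARGET = "yournode.local.mesh"   # placeholder - substitute a real node name
INTERVAL = 10                    # seconds between probes

while True:
    # Linux ping: send one packet (-c 1), wait at most 2 seconds (-W 2)
    result = subprocess.run(["ping", "-c", "1", "-W", "2", TARGET],
                            capture_output=True, text=True)
    match = re.search(r"time=([\d.]+) ms", result.stdout)
    stamp = time.strftime("%Y-%m-%d %H:%M:%S")
    print(f"{stamp}  {TARGET}  {match.group(1) + ' ms' if match else 'lost'}")
    time.sleep(INTERVAL)

It won't give you PingPlotter's per-hop graphs, but it is enough to spot latency spikes and loss over time.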

- Don - AA7AU
 

AE6XE
If the network has frequently
If the network has frequently changing paths and/or flapping, this will be problematic regardless. There's no mesh at the RF layer with AREDN; it's at the routing layer above that. Using AirOS in bridge mode doesn't change this behavior if AREDN/OLSR is still setting routing tables over the top of a Part 15 AirOS link.

VOIP and video will drop out as packets are lost during route table changes; it takes time for all nodes on the network to be updated with the changing route info. Some applications may drop the connection and exit.

Rule of thumb: engineer so that route changes are not occurring on fixed-location networks. This means that if there are 2 paths from A to B and the OLSR ETX values of the two paths are close, you need to do something to get the ETX values far enough apart that the route only changes when one path is actually down or severely degraded from normal.
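To put rough numbers on "close": OLSR's per-link ETX is 1/(LQ x NLQ) and a path's ETX is the sum over its links, so you can sanity-check two candidate paths with a sketch like this (the link qualities are invented for illustration, not data from any real network):

# Sketch: compare the OLSR ETX of two candidate paths.
# Per-link ETX = 1 / (LQ * NLQ); path ETX = sum of its link ETX values.
# The link qualities below are invented for illustration only.
def path_etx(links):
    return sum(1.0 / (lq * nlq) for lq, nlq in links)

path_a = [(0.80, 0.80), (0.80, 0.80)]                 # two mediocre hops
path_b = [(1.00, 1.00), (1.00, 1.00), (1.00, 1.00)]   # three clean hops

print(f"path A ETX = {path_etx(path_a):.2f}")   # prints ~3.1
print(f"path B ETX = {path_etx(path_b):.2f}")   # prints 3.00

With values that close, small changes in conditions can flip the route back and forth; widening the gap (for example by turning down power on the backup link, as described in the next reply) keeps it stable.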

Joe AE6XE
w6bi
What Joe said!
I can vouch for what Joe said.   At my QTH I have two routes into the network: 5 GHz Primary and 2 GHz Secondary/backup.  I had to turn the 2 GHz power down about 6 dB to keep the route from flapping between the two.

Orv W6BI
w8erd
Pingplotter
Thanks for the reference to this program. It is great!

Bob W8ERD
KG4FJC
Ken, thank you for the input.
Ken, thank you for the input. Currently I have set up test equipment just as you suggest, with the factory firmware in bridge mode, and it works very well (at home). These are set up using AllStar controllers, so the "audio" is all digital. I think I will continue rolling out this config and maybe incorporate some AREDN in the future. We have tower space at some great mountaintop sites.
w6bi
Streaming protocols over mesh network

Just an observation: we have a weekly TeamTalk net on the ham network, using both voice and video. Some of our participants are a hundred miles away over a half-dozen hops. Admittedly they're pretty high-quality hops, and half of them are Part 15 links, but we rarely hear voice dropouts, and while video artifacts are more common, they're not frequent, even with a half-dozen (low-res) video streams going.

My two cents.

Orv W6BI

KE2N
YMMV

I think all it takes is a weak link somewhere on the mesh to cause problems. Since every ham is a DXer at heart, you are always subject to someone connecting who can just barely make the link, and while we don't want to discourage that guy... he does cause problems.

On the other hand, if *every* link on your mesh is high quality (very few retries) it makes a big difference in performance.  Still, in any side-by-side comparison, the dedicated AP/station mode of operation (especially with some low-overhead protocol) is going to outperform a mesh. 

Audio has the advantage that you can use a big jitter buffer to iron out a lot of the latency variation, as opposed to video. Of course, with a big jitter buffer you may get an objectionable lag in the T/R response time.
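As a rough worked number for that trade-off, assuming the common 20 ms VOIP packetization interval (an assumption, not anything specific to the systems discussed here):

# Sketch of the jitter-buffer trade-off.  Each buffered frame adds its frame
# duration to one-way delay; if both ends buffer similarly, the T/R turnaround
# grows by roughly twice that.  The 20 ms frame size is an assumption.
FRAME_MS = 20

for frames in (2, 5, 10):
    added = frames * FRAME_MS
    print(f"{frames:2d}-frame buffer: absorbs up to ~{added} ms of jitter, "
          f"adds ~{added} ms one-way, ~{2 * added} ms to turnaround")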

So if I were designing a system specifically to link repeater audio channels (by "audio" it could be D-STAR, DMR, or whatever), I would still lean toward the non-mesh configuration.

Ken

AE6XE
In the Southern SoCal AREDN
In the Southern SoCal AREDN design (LA and south), the backbone AREDN links are dedicated dishes with clear, unique channels under Part 97. Part 15 channels at 40 mi don't work in our area; there is too much noise. Where there are weak-link hams connecting in, that is on the sector coverage; it impacts the people in that coverage area but doesn't impact the backbone link performance.

Mixing AirOS Part 15 links with AREDN links is done and can be done. However, anyone considering this design should understand that it means more complexity, and consequently a higher level of IT skill is needed to sustain it.

Joe AE6XE

 
N4CV
Sharing with AREDN?

I agree with Ken's comments and analysis on the topic. There are times when a dedicated, non-meshing link is appropriate, when you need more control over the routing and prioritization of traffic.

That said, I'm interested in how we can plan for those "off-ramp" connections to hang mesh infrastructure from a Part 15 microwave link. Do we have to create a tunnel server on one end and a client on the other, using the Part 15 "backbone" as the WAN connection? The manufacturer firmware gives us the ability to trunk VLANs over the air, but our AREDN developers have recommended NOT using microwave to extend a DtD connection over VLAN 2, noting latency intolerance in the DtD protocol.

@W6BI, what approach are you all in SoCal taking to work Part 15 links into your mesh?  Anyone else have experience with this?

AE6XE
long haul AREDN vs AirOS
long haul AREDN vs AirOS

With auto-distance on an AREDN 40 mi 5 GHz link, my testing suggests this provides equivalent throughput to AirOS TDMA, although I still need to do a direct auto-distance AREDN vs. AirOS TDMA comparison and get more numbers to have confidence. Prior testing showed that static-distance AREDN gave ~70% of the throughput of AirOS TDMA, all things being equal, with back-to-back swapping of firmware. Then I compared static-distance AREDN with auto-distance AREDN and found a comparable improvement, which suggests auto-distance AREDN has throughput and latency roughly equal to AirOS TDMA.

How the route tables are configured (AREDN is a mesh at the IP routing level) isn't going to impact the end-to-end performance. This is at layer 3. Well, if it is misconfigured or flapping it won't work at all. The RF channel throughput and latency is directly dependent on layer 2 and layer 1 protocols. We are comparing 802.11n with CSMA/CA plus RTS/CTS in AREDN against 802.11n with TDMA in AirOS. If we are comparing these in P2P backhaul scenarios, the above data says they are directly competitive with each other.

As an example, here is a 100+ mile, 6-RF-hop path through multiple long-haul AREDN links. To get a visual, this runs from my QTH in Mission Viejo (Orange County), to Redlands (San Bernardino County), to Elsinore Peak (Riverside County). The average ping in this sample was 23.197 ms. VOIP is crystal clear.

The primary consideration is to have quality links, which means avoiding channel contention and interference. There's an increased chance of interference if you are not using Part 97-only channels, particularly in metropolitan areas.
 
joe@AE6XE-VM:~/repos/aredn_ar71xx$ ping k6ah-elbbrl
PING k6ah-elbbrl.local.mesh (10.116.137.235) 56(84) bytes of data.
64 bytes from K6AH-ELBBRL.local.mesh (10.116.137.235): icmp_seq=1 ttl=53 time=31.5 ms
64 bytes from K6AH-ELBBRL.local.mesh (10.116.137.235): icmp_seq=2 ttl=53 time=16.7 ms
64 bytes from K6AH-ELBBRL.local.mesh (10.116.137.235): icmp_seq=3 ttl=53 time=40.1 ms
64 bytes from K6AH-ELBBRL.local.mesh (10.116.137.235): icmp_seq=4 ttl=53 time=53.6 ms
64 bytes from K6AH-ELBBRL.local.mesh (10.116.137.235): icmp_seq=5 ttl=53 time=12.9 ms
64 bytes from K6AH-ELBBRL.local.mesh (10.116.137.235): icmp_seq=6 ttl=53 time=14.7 ms
64 bytes from K6AH-ELBBRL.local.mesh (10.116.137.235): icmp_seq=7 ttl=53 time=23.2 ms
64 bytes from K6AH-ELBBRL.local.mesh (10.116.137.235): icmp_seq=8 ttl=53 time=14.6 ms
64 bytes from K6AH-ELBBRL.local.mesh (10.116.137.235): icmp_seq=9 ttl=53 time=12.5 ms
64 bytes from K6AH-ELBBRL.local.mesh (10.116.137.235): icmp_seq=10 ttl=53 time=19.1 ms
64 bytes from K6AH-ELBBRL.local.mesh (10.116.137.235): icmp_seq=11 ttl=53 time=25.6 ms
64 bytes from K6AH-ELBBRL.local.mesh (10.116.137.235): icmp_seq=12 ttl=53 time=18.2 ms
64 bytes from K6AH-ELBBRL.local.mesh (10.116.137.235): icmp_seq=13 ttl=53 time=19.4 ms
64 bytes from K6AH-ELBBRL.local.mesh (10.116.137.235): icmp_seq=14 ttl=53 time=15.2 ms
64 bytes from K6AH-ELBBRL.local.mesh (10.116.137.235): icmp_seq=15 ttl=53 time=12.1 ms
64 bytes from K6AH-ELBBRL.local.mesh (10.116.137.235): icmp_seq=16 ttl=53 time=55.5 ms
64 bytes from K6AH-ELBBRL.local.mesh (10.116.137.235): icmp_seq=17 ttl=53 time=13.9 ms
64 bytes from K6AH-ELBBRL.local.mesh (10.116.137.235): icmp_seq=18 ttl=53 time=23.0 ms
64 bytes from K6AH-ELBBRL.local.mesh (10.116.137.235): icmp_seq=19 ttl=53 time=17.9 ms
^C
--- k6ah-elbbrl.local.mesh ping statistics ---
19 packets transmitted, 19 received, 0% packet loss, time 18028ms
rtt min/avg/max/mdev = 12.119/23.197/55.550/12.761 ms
 
joe@AE6XE-VM:~/repos/aredn_ar71xx$ traceroute k6ah-elbbrl
traceroute to k6ah-elbbrl (10.116.137.235), 30 hops max, 60 byte packets
 1  localnode.local.mesh (10.84.80.57)  1.440 ms  1.774 ms  0.949 ms
 2  AE6XE-Saddleback-RM3.local.mesh (10.14.33.27)  6.498 ms  8.273 ms  11.090 ms
 3  dtdlink.AE6XE-Saddleback-RM5.local.mesh (10.165.24.232)  3.516 ms  12.244 ms *
 4  W6LY-RM5-RDish-LWV-SB.local.mesh (10.10.74.178)  17.944 ms  20.677 ms  25.383 ms
 5  dtdlink.W6LY-RM5-RDish-LWV-PP.local.mesh (10.165.38.64)  38.008 ms  35.843 ms  43.361 ms
 6  AE6XE-PleasantsPk-P2P-LagunaWoods.local.mesh (10.174.175.46)  46.777 ms  62.036 ms  56.795 ms
 7  dtdlink.AE6XE-PleasantsPk-P2P-Yucaipa.local.mesh (10.41.100.204)  55.705 ms  58.684 ms  59.630 ms
 8  AI6BX-3-RM5-XW-SW.local.mesh (10.192.21.77)  61.700 ms  61.819 ms  68.970 ms
 9  dtdlink.AI6BX-3-NB-M5-P2P.local.mesh (10.49.222.136)  71.683 ms  77.397 ms  71.859 ms
10  AI6BX-1-NB-M5-P2P.local.mesh (10.132.36.90)  82.533 ms  73.788 ms  70.050 ms
11  dtdlink.AI6BX-1-RM5-XW-SD.local.mesh (10.171.193.95)  70.345 ms  67.191 ms  72.776 ms
12  K6AH-ELBBRL.local.mesh (10.116.137.235)  80.013 ms  78.022 ms  89.139 ms

AE6XE
nc8q
throughput + latency is directly dependent on layer2 and layer1

+1
"The RF channel throughput and latency is directly dependent on layer 2 and layer 1 protocols."

IMHO, that should be a 'sticky' note in bold font and large point.

Layer 3 cannot function without a well-performing layer 2 and layer 1.
IOW, degradation of the radio signals degrades OLSR.
Channel contention, hidden nodes, and exposed nodes degrade (destroy) OLSR.
OLSR does not 'fix' poor radio circuits, nor poor network planning.
$0.02
Chuck

AE6XE
Note that RTS/CTS in use is
Note that RTS/CTS, when in use, is designed to deal with the hidden-node problem. However, it's noteworthy that RTS/CTS is only used for unicast (one node transmitting data to another specific neighbor). There's also a packet-size threshold that triggers use of the RTS/CTS protocol.

OLSR is broadcast to all neighbors. As such, it goes out at the lowest common rate (3 Mbps on a 10 MHz channel at 3/5 GHz). Thus OLSR does not use RTS/CTS, and some transmitted frames are lost or collide with frames from other nodes transmitting at the same time on a busy channel.

On the same busy channel, when a node is sending VOIP or video it will be using RTS/CTS to reserve a time slot (similar to what the TDMA protocol does), ensuring the channel is clear for sending the data. VOIP or video then ends up with 100% success (barring interference from beyond the neighbor nodes).

Thus, while LQ/NLQ can be <100%, the VOIP/video can be 100% due to RTS/CTS usage when there are lots of active neighbors (multipoint) on the channel.

One can see and test this by doing a tcpdump capture of the RF channel, and viewing the RTS/CTS packets and video frame sequences.
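If you want to try that capture, something along these lines should work from a Linux box with an interface already placed in monitor mode (the interface name is a placeholder, tcpdump must be installed and run as root, and the pcap filter syntax is per the pcap-filter man page; adjust for your setup):

#!/usr/bin/env python3
# Sketch: watch 802.11 RTS/CTS control frames on a monitor-mode interface.
# "mon0" is a placeholder; substitute your own monitor-mode interface name.
import subprocess

IFACE = "mon0"
FILTER = "(type ctl subtype rts) or (type ctl subtype cts)"

# -e prints the link-level (802.11) header, -n skips name resolution
subprocess.run(["tcpdump", "-i", IFACE, "-e", "-n", FILTER])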

I'd clarify that OLSR works well, with the caveat that an inherent weakness is that the LQ/NLQ and ETX values go down when the link is busy. That is not really representative of the link quality; the link didn't get worse just because it suddenly got busy.

On my backlog is to look at an OLSR plugin that reads the ath9k wireless driver data, to get a much better picture of the RF link quality. There would be no need to send hello broadcast packets; just pull the statistics that already exist in the wireless driver for the best characterization of the link.

Joe AE6XE
