
M2 inconsistencies

M2 inconsistencies
I have a Rocket M2 mated to a 120° sector with approximately four consistent clients accessing it. All have strong links, with signal levels in the -25 to -32 dBm range. None are able to achieve truly consistent LQ or NLQ to the sector and, as would be expected, the throughputs vary.

I have run WiFi scans both at the 10 MHz width the nodes are running on (channel -2) and at 20 MHz width to see what might be interfering. As expected, there is traffic out there at 20 MHz width, some of it on channel 1, though channel 1 is not a factor at the Rocket/sector end of the link.

Any ideas?

AE6XE
The LQ and NLQ metrics are

The LQ and NLQ metrics are produced differently than normal traffic flowing between 2 nodes. These are produced with a UDP broadcast from a node to all listening neighbors. When beacons and IP broadcasts are transmitted, they must be sent at the lowest-common-denominator link rate, e.g. 1 Mbps, or maybe 6.5 Mbps for 802.11n-only networks, to ensure all neighbors can receive. This is at a 20 MHz channel, so divide by 2 for 10 MHz and by 4 for 5 MHz. If you had a 50 Mbps link on a 10 MHz channel for 'normal' traffic with a neighbor, in comparison, this means the same-size packet takes ~100 times longer, exposed to a higher chance of interference (sent at 0.5 Mbps @ 10 MHz).
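The ~100x airtime gap can be sketched with back-of-the-envelope arithmetic (the 1500-byte packet size is my assumption, and real frames add preamble/header overhead not counted here):

```python
# Airtime comparison: broadcast vs. directed traffic on a 10 MHz channel.
# Assumes a 1500-byte payload; ignores preamble, ACK, and header overhead.

PACKET_BITS = 1500 * 8

broadcast_rate_mbps = 0.5   # 1 Mbps base rate, halved for the 10 MHz channel
data_rate_mbps = 50.0       # the 'normal' traffic rate from the example above

broadcast_airtime_ms = PACKET_BITS / (broadcast_rate_mbps * 1e6) * 1000
data_airtime_ms = PACKET_BITS / (data_rate_mbps * 1e6) * 1000

print(f"broadcast: {broadcast_airtime_ms:.2f} ms")  # 24.00 ms on the air
print(f"directed:  {data_airtime_ms:.3f} ms")       # 0.240 ms on the air
print(f"ratio: {broadcast_airtime_ms / data_airtime_ms:.0f}x")
```

The longer a packet sits on the air, the larger the window in which a burst of interference can corrupt it, which is why the LQ/NLQ broadcasts suffer first.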

I've not yet found other possible causes to explain why I often see 2 GHz hub sites with links in the 30+ Mbps range, yet LQ and NLQ not what one would expect (around the 70% range, +/-). This symptom, along with SNR charts jumping around a 10 dB range, shows the environment is dealing with some contention.

I've migrated more towards 5 GHz and 3 GHz in recent times due to this. But you and I are in densely populated areas in SoCal; mileage elsewhere may vary. 3 GHz has very little of these symptoms, but equipment costs are ~30% higher. 5 GHz is showing improvement over 2 GHz, but not as good as 3 GHz.

[another thought] Broadcasts and beacons also don't use RTS/CTS handshaking to ask everyone else to clear a time slot to transmit. This handshaking is supposed to be honored, per the 802.11 specs, by all devices on the same channel and bandwidth regardless of SSID. It would be interesting to see if better performance can be achieved by deciding to live with some interference from another WiFi signal (no RTS/CTS), or by joining them directly on the same 20 MHz channel.

Joe AE6XE 

The explanation of traffic

Your explanation of the multipoint traffic calculation makes sense to me. If I am understanding it correctly, one new node coming in with truly poor performance could, in theory, degrade performance for all nodes attached to the sector. If I have four clients connected to the node with optimal peaking and no interference in their paths, all would be fine, and a new node, not properly peaked or with lots of interference in its path, could drag the rest down as the system looks for the optimal speed to support usable communication with that node. Am I getting the concept correctly?

I also know the distance settings are critical: if the sector's farthest client is at a 10-mile range, it needs to be set at ten miles, as would that client, and all other clients would need to be set at their correct distance to the sector. If one of these is off it could, in theory, begin degrading performance. That said, if a longer distance setting adjusts for optimal error correction, and there is noise in a signal path, would a longer distance setting help compensate and optimize the throughput? I played with this concept yesterday and it actually seemed to work.

The distances from clients to the sector right now are at about an 8-mile max; however, the elevations vary, some lower and some higher than the sector location. For example, optimal alignment to one location would have a -2.9-degree downward tilt, while alignment to the highest client would dictate a +1-degree vertical alignment. The sector was originally set with a -1-degree tilt in an attempt to best serve all. I am beginning to think setting it at 0 degrees would be best.
AE6XE
In regards to the multipoint
In regards to the multipoint traffic, there is not one link rate that all nodes are using. When a packet is transmitted to a specific neighbor, a specific link rate is determined that is optimal between those 2 devices. Every packet being transmitted could be sent at a different MCS link rate depending on which device the traffic is intended for. The LQ and NLQ measures are based on packets intended for anyone listening, regardless of whether a link shows in mesh status or not, so those go out at the lowest rate. Thus, a distant node is not going to lower the link rate used for data directed to a closer node, but both are able to receive the same broadcast packet.

Distance should be based on the farthest node the sector has actual traffic with. If traffic is not routing to farther-out distant nodes, then having the distance value too short for them is not an issue. Although, if new people are joining farther out, it may be problematic for their first mesh experience :( .

The 2 GHz 120-degree sectors have a 9-degree vertical beamwidth with a 4-degree electrical downtilt. Doing the math out loud: if the physical mount is up-tilted +0.5 deg, then the top edge of the beam is at (+0.5 physical - 4 electrical downtilt + 4.5 half beamwidth) = +1 deg. The bottom edge of the beamwidth would then be at -8 deg. Unless you're up 1000's of feet, you'll still be pointing a fair amount of energy into the ground close in.
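That beam-edge arithmetic can be written as a tiny helper (the function name is mine; the 9° beamwidth and 4° electrical downtilt figures are from the post above):

```python
def beam_edges(physical_tilt_deg, electrical_downtilt_deg=4.0,
               vertical_beamwidth_deg=9.0):
    """Return (top, bottom) elevation angles of the sector's vertical beam.

    Positive angles are above the horizon; the electrical downtilt is
    built into the antenna and adds to whatever physical tilt you set.
    """
    center = physical_tilt_deg - electrical_downtilt_deg
    half = vertical_beamwidth_deg / 2
    return center + half, center - half

top, bottom = beam_edges(+0.5)
print(top, bottom)  # 1.0 -8.0, matching the worked example
```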

...starting to ramble just a little :) 

I found pointing angle is most sensitive on 5 GHz. I usually pull out my trig calculator and decide where the ring of beamwidth coverage should be, but I may be getting more picky about this than most. With a 4000' elevation delta between clients and the tower antenna and a 3-degree vertical beamwidth on 5 GHz, I engineer a downtilt to have something like a ring of 5 to 30 miles of coverage (and figure close-in will still connect). Depending on the coverage area, one might choose a different angle to have 0- to 20-mile coverage. The higher the antenna, the smaller the ring of coverage. 3 degrees is significantly different than the 2 GHz 9 degrees.
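The "ring of coverage" trig looks roughly like this (my sketch, using a flat-earth approximation that ignores refraction and earth curvature; the 4000' delta and 3° beamwidth are from the post, the 3° downtilt is an example value):

```python
import math

def ground_distance_miles(height_ft, depression_deg):
    """Distance at which a ray depressed by the given angle reaches
    the clients' elevation, for an antenna height_ft above them."""
    return height_ft / math.tan(math.radians(depression_deg)) / 5280

height = 4000.0   # ft above the client elevation
half_beam = 1.5   # half of the 3-degree 5 GHz vertical beamwidth

# Pick a downtilt and see where the two beam edges land on the ground.
downtilt = 3.0
near = ground_distance_miles(height, downtilt + half_beam)  # steeper edge
far = ground_distance_miles(height, downtilt - half_beam)   # shallower edge
print(f"coverage ring roughly {near:.1f} to {far:.1f} miles")
```

With these numbers the ring comes out to roughly 10 to 29 miles; tilting down more pulls the whole ring in, and a taller tower shrinks it, matching the observation above.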

I then have this $40 digital-readout level from Home Depot, still hopelessly trying to get the tilt to a 0.1-degree tolerance...

K6AH
Closer nodes don't need as much gain
Remember that closer-in nodes don't actually need the sector antenna's gain as much as those farther away. A remote node at 5 miles has a 12 dB path-loss advantage over one at 20 miles. So when you are projecting the antenna's major lobe onto the user base, I would focus more on those at the extreme of the coverage area who will need the extra antenna gain.
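The 12 dB figure falls out of free-space path loss, which grows as 20·log10(distance), so the frequency term cancels when comparing two distances (a quick sketch; function name is mine):

```python
import math

def fspl_advantage_db(near_miles, far_miles):
    """Free-space path-loss difference between two distances.

    FSPL(d) = 20*log10(d) + 20*log10(f) + const, so comparing two
    distances at the same frequency leaves only the distance ratio.
    """
    return 20 * math.log10(far_miles / near_miles)

print(round(fspl_advantage_db(5, 20), 1))  # 12.0 dB advantage at 5 miles
```

Every doubling of distance costs about 6 dB, so 5 miles vs. 20 miles is two doublings, hence ~12 dB.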

Here's one more observation
So here is one more observation from yesterday after helping to peak the remote node. The sector faces east, and the remote node, a dish, faces west; as the sun gets more in line with the remote dish, performance definitely falls off, then improves gradually again after sundown. Has this been seen at other locations? If so, can it be minimized with a different node or a different frequency range? Part of me is beginning to think about moving the 2.4 GHz from this site, going up with 3.4 GHz, and then moving the 2.4 GHz farther out for other users with a DtD link to one of the prime 3.4 GHz sites.
K6AH
If it's due to a temperature inversion, then you might try raising the aim of the antenna a degree or two. If the two nodes are at or near the same elevation, then this is likely not the problem.
N2MH
Sun Noise?

Maybe what you are seeing is noise coming off of the sun. One of the quick checks that the microwave weak-signal guys do is to aim their antenna at a "cold" spot in the sky and then move it to the "warm" spot of the sun. If the received noise power doesn't peak when the antenna is on the sun, they know that there is something wrong with their system. In your case, the noise power from the sun may be greater than the signal power from the other station, thus producing an unusable SNR.

A related effect is that twice a year, earth stations looking at geosynchronous satellites lose contact with a satellite when the sun is directly behind it. The SNR becomes unusable until the sun moves on and is no longer in back of the satellite. With a bad SNR, the earth station simply can't hear the satellite over the background noise. Because of geometry, this effect happens at different times depending on the earth station's location on earth, and hence the time that the sun will be behind the satellite for that station.

Maybe we can call this "uwavehenge"? :-)

KE2N
fluxing around

Last year I did a calculation converting 2800 MHz solar flux units to dBm in a 30-inch dish. I cannot find it now, but my recollection is that the noise, even in a 20 MHz bandwidth, was not enough to have any effect on the signal levels AREDN typically encounters (like -80 dBm). In any case, we are getting near the bottom of the sunspot cycle and the solar flux is lower than ever...

I think tropo effects are a likely cause. On days when it is sunny and there is little or no wind, you can get some layering in the lower atmosphere. These layer boundaries (dielectric gradients) deflect the signal up or down from the geometric aiming point. The effect should repeat daily and may be worst, for example, late in the afternoon or early evening when the air starts to cool off while the ground itself is still hot.

The other thing that can happen is that you get enhanced propagation at some elevation above the Fresnel zone. This would be great except it can be out of phase with the propagation you really want.   I had always thought that tropo "scatter" would disrupt the wave front too much to be useful but apparently if you employ frequency diversity and enough FEC you can do broadband using tropo scatter (I suppose some serious ERP is required).  How about 8 Mbps at 300 km?

Let me know when these things come on the surplus market:


KE2N

In a help file it says that LQ is computed using OLSR packets, which have a sequence number and therefore can indicate how many packets you lost between one reception and the next... I guess that is the UDP broadcast you refer to? If you miss a broadcast packet, you missed it, period. With directed data, you can ask for a repeat. At a fast data rate you can get repeats quickly and the channel looks OK. But what happens when all these repeats start to fill up the available time ("duty cycle" on the channel)? With increasing traffic loading, the throughput "falls off the cliff". So the channel looks good until you really need it for something.

I think that a low link quality (and jumping SNR) is trying to tell you something.  The fact that you can squirt small amounts of data through a lightly loaded channel is giving a kind of false sense of the quality of the channel. Assessing the quality of a wireless link, in a dynamic environment, is a tricky business to say the least (and fertile ground for grad students to write papers).

AE6XE
Yes, OLSR packets are the UDP
Yes, OLSR packets are the UDP broadcasts referenced. With directed data between 2 nodes, if the error rate increases, then a lower MCS rate will soon be selected, with less retransmission. In monitoring this rate-selection table, I've noticed that the success rate of packets at the selected optimal MCS rate tends to be in the 95%+ range. But this wasn't a serious scientific method, rather casual observation. It appears that before any significant retries occur, the link rate drops down to a lower rate, where the receiver has better sensitivity or more error-correction bits are in use.

We have 2 long-distance links, ~37 miles, to compare between, all using RocketDishes on 5 GHz and 10 MHz channels. One link is getting 50-60 Mbps Tx rates; the other is getting 12-25 Mbps Tx rates. Both are demonstrating stable, very-low-latency links (see my 'network performance' post a couple days ago), but one is half the capacity. The difference looks to be that one site is showing more noise, so less SNR is available above the noise floor to decode the signal.

The weakness of OLSRv1 as used is that it limits the characterization of link quality to missed UDP packets (no retries). It has no concept of 'throughput' in the design: it would choose to direct traffic through an awesome 100% LQ/NLQ link at 1 Mbps and pass over the 50 Mbps link with 70% LQ/NLQ. Of course, as soon as the 1 Mbps link becomes saturated, it will start showing much lower LQ/NLQ, causing a slow back-and-forth change of routing when loading the network down. OLSRv2 and other routing protocols are working to better characterize throughput to make better decisions. One of these is in our future.
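The metric behind this is olsrd's ETX (Expected Transmission Count), computed as 1/(LQ x NLQ); the numbers below just illustrate the scenario described, with lower ETX winning the route:

```python
def etx(lq, nlq):
    """olsrd Expected Transmission Count: expected sends per delivered
    packet, from forward (LQ) and reverse (NLQ) delivery ratios.
    Lower is 'better' -- note the link rate appears nowhere."""
    return 1.0 / (lq * nlq)

# A perfect-looking 1 Mbps link...
slow_link = etx(1.0, 1.0)   # ETX = 1.0
# ...beats a 50 Mbps link showing 70% LQ/NLQ
fast_link = etx(0.7, 0.7)   # ETX ~ 2.04

print(slow_link < fast_link)  # True: OLSRv1 routes over the 1 Mbps link
```

Once traffic saturates the slow link, its LQ/NLQ (and thus ETX) degrades, the route flips to the fast link, the slow link recovers, and the oscillation described above begins.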

