I have a Rocket M2 mated to a 120-degree sector with approximately four consistent clients accessing it. All have strong links, with signal levels in the -25 to -32 dBm range. None are able to achieve truly consistent LQ or NLQ to the sector, and, as would be expected, the throughputs vary accordingly.
I have run WiFi scans both at the 10 MHz width the nodes are running on (channel -2) and at 20 MHz to see what might be interfering. As would be expected, there is traffic out there at the 20 MHz width, some of it on channel 1, though channel 1 is not a factor at the Rocket/sector end of the link.
Any ideas?
Keith
The LQ and NLQ metrics are produced differently than normal traffic flowing between two nodes: they come from UDP broadcasts sent from a node to all listening neighbors. When beacons and IP broadcasts are transmitted, they must be sent at the lowest-common-denominator link rate, e.g. 1 Mbps, or maybe 6.5 Mbps on an 802.11n-only network, to ensure all neighbors can receive them. That is at a 20 MHz channel width, so divide by 2 for 10 MHz and by 4 for 5 MHz. If, in comparison, you had a 50 Mbps link with a neighbor for 'normal' traffic on a 10 MHz channel, the same size packet takes ~100 times longer on the air (sent at 0.5 Mbps at 10 MHz) and is exposed to a correspondingly higher chance of interference.
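To make the airtime difference concrete, here is a minimal Python sketch of that comparison (payload time only; it ignores preamble and MAC overhead, which would make the broadcast case look even worse):

```python
# Airtime for one full-size packet at the rates from the post:
# broadcasts at the lowest common rate (1 Mbps at 20 MHz, halved to
# 0.5 Mbps at 10 MHz) vs. 'normal' unicast traffic at 50 Mbps.

PACKET_BITS = 1500 * 8          # a full-size IP packet

def airtime_us(rate_mbps):
    """Microseconds on the air for the payload at a given rate."""
    return PACKET_BITS / rate_mbps   # Mbps == bits per microsecond

broadcast = airtime_us(0.5)     # 1 Mbps at 20 MHz -> 0.5 Mbps at 10 MHz
unicast = airtime_us(50.0)

print(f"broadcast: {broadcast:,.0f} us")          # ~24,000 us
print(f"unicast:   {unicast:,.0f} us")            # ~240 us
print(f"ratio:     {broadcast / unicast:.0f}x")   # ~100x longer exposure
```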
I've not yet found other possible causes to explain why I often see 2 GHz hub sites with links in the 30+ Mbps range whose LQ and NLQ are not what one would expect, hovering in the 70% range, plus or minus. This symptom, along with SNR charts jumping around over a 10 dB range, shows the environment is dealing with some contention.
I've migrated more toward 5 GHz and 3 GHz in recent times because of this. But you and I are in densely populated areas in SoCal; mileage elsewhere may vary. 3 GHz shows very little of these symptoms, but equipment costs are ~30% higher. 5 GHz is showing improvement over 2 GHz, but not as good as 3 GHz.
[another thought] Broadcasts and beacons also don't use RTS/CTS handshaking to ask everyone else to clear a time slot before transmitting. This handshaking is supposed to be honored, per the 802.11 specs, by all devices on the same channel and bandwidth regardless of SSID. It would be interesting to see whether better performance can be achieved by living with some interference from another WiFi signal (no RTS/CTS) or by joining them directly on the same 20 MHz channel.
Joe AE6XE
Your explanation of the multipoint traffic calculation makes sense to me. If I am understanding it correctly, one new node coming in with truly poor performance could, in theory, degrade performance for all nodes attached to the sector. Suppose I have four clients connected to the node, all optimally peaked with no interference in their paths: all would be fine. A new node, not properly peaked or with lots of interference in its path, could then drag the rest down as the system looks for the optimal speed to support usable communication with that node. Am I getting the concept correctly?
I also know the distance settings are critical, so if the sector's farthest client is at a 10-mile range, it needs to be set at ten miles, as would that client, and all other clients would need to be set at their correct distance to the sector. If one of these is off, it could, in theory, begin degrading performance. That said, if a longer distance setting adjusts packet handling for optimal error correction, and there is noise in a signal path, would a longer distance help compensate and optimize the throughput? I played with this concept yesterday and it actually seemed to work.
The distances from clients to the sector right now max out at about 8 miles; however, the client elevations vary both lower and higher than the sector location. For example, optimal alignment to one location would require a -2.9 degree downward tilt, while alignment to the highest client would dictate a +1 degree vertical alignment. The sector was originally set with a -1 degree tilt in an attempt to best serve all. I am beginning to think a 0 degree setting would be best.
Distance should be based on the farthest node the sector has actual traffic with. If traffic is not routing to farther-out distant nodes, then having the distance value too short for them is not an issue. Although, if new people are joining farther out, it may be problematic for their first mesh experience :( .
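For what it's worth, the mechanism as I understand it is the 802.11 ACK timeout: the radio has to wait long enough for an ACK to make the round trip from the farthest node it's actually talking to before it declares a loss and retransmits. A rough Python sketch of the propagation delays involved (illustrative only, not the exact driver calculation):

```python
# Round-trip radio propagation delay vs. path length.  If the distance
# setting (and thus the ACK timeout) is too short for a far client, its
# ACKs arrive 'late', the sender assumes loss, and it retransmits,
# wasting airtime for everyone on the sector.

C = 299_792_458            # speed of light, m/s
MILES_TO_M = 1609.34

def round_trip_us(miles):
    """Round-trip propagation delay in microseconds."""
    return 2 * miles * MILES_TO_M / C * 1e6

for miles in (1, 8, 10):
    print(f"{miles:2d} mi: {round_trip_us(miles):5.1f} us round trip")
# ->  1 mi: 10.7 us,  8 mi: 85.9 us, 10 mi: 107.4 us
```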
The 2 GHz 120-degree sectors have a 9-degree vertical beamwidth with a 4-degree electrical downtilt. Doing the math out loud: if the physical mount is up-tilted +0.5 degrees, then the top edge of the beam is at (+0.5 physical - 4 electrical downtilt + 4.5 half-beamwidth) = +1 degree. The bottom edge of the beam would then be at -8 degrees. Unless you're up thousands of feet, you'll still be pointing a fair amount of energy into the ground close in.
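That arithmetic as a small Python helper, using the sector specs above (sign convention: positive is up-tilt):

```python
def beam_edges(physical_tilt_deg, electrical_downtilt_deg=4.0,
               vertical_beamwidth_deg=9.0):
    """(top, bottom) beam edge angles in degrees for the 2 GHz sector."""
    center = physical_tilt_deg - electrical_downtilt_deg
    half = vertical_beamwidth_deg / 2
    return center + half, center - half

top, bottom = beam_edges(+0.5)
print(f"top edge: {top:+.1f} deg, bottom edge: {bottom:+.1f} deg")
# -> top edge: +1.0 deg, bottom edge: -8.0 deg, matching the math above
```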
...starting to ramble just a little :)
I find the pointing angle is most sensitive on 5 GHz. I usually pull out my trig calculator and decide where the ring of beamwidth coverage should be, though I may be more picky about this than most. With a 4000' elevation delta between clients and the tower antenna, and a 3-degree vertical beamwidth on 5 GHz, I engineer a downtilt to put the ring of coverage at something like 5 to 30 miles (and figure close-in clients will still connect). Depending on the coverage area, one might choose a different angle for 0-to-20-mile coverage. The higher the antenna, the smaller the ring of coverage. 3 degrees is significantly different from the 2 GHz 9 degrees.
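Here is that trig as a short Python sketch, using the 4000' delta and 3-degree beamwidth from above, and assuming a 3-degree downtilt purely for illustration (with these numbers it lands the ring at roughly 10 to 29 miles; pick the tilt to move the ring where you want it):

```python
import math

# Where the beam edges intersect the ground: each edge hits at
# height / tan(edge angle below horizontal).

FT_PER_MILE = 5280.0

def ring_miles(height_ft, downtilt_deg, beamwidth_deg):
    """(inner, outer) ground radii in miles where the beam edges land."""
    h = height_ft / FT_PER_MILE
    steep = downtilt_deg + beamwidth_deg / 2     # lower edge, closer in
    shallow = downtilt_deg - beamwidth_deg / 2   # upper edge, farther out
    inner = h / math.tan(math.radians(steep))
    outer = (h / math.tan(math.radians(shallow))
             if shallow > 0 else float("inf"))   # at/above the horizon
    return inner, outer

inner, outer = ring_miles(4000, downtilt_deg=3.0, beamwidth_deg=3.0)
print(f"ring: ~{inner:.0f} to ~{outer:.0f} miles")   # ring: ~10 to ~29 miles
```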
I then have this $40 digital-readout level from Home Depot, still hopelessly trying to get the tilt to a 0.1-degree tolerance...
Joe
Andre,
Maybe what you are seeing is noise coming off of the sun. One of the quick checks the microwave weak-signal guys do is to aim their antenna at a "cold" spot in the sky and then move it to the "warm" spot of the sun. If the received noise power doesn't peak when the antenna is on the sun, they know something is wrong with their system. In your case, the noise power from the sun may be greater than the signal power from the other station, producing an unusable SNR.
A related effect is that twice a year, earth stations looking at geosynchronous satellites lose contact with a satellite when the sun is directly behind it. The SNR becomes unusable until the sun moves on and is no longer in back of the satellite; with the sun in the background, the earth station simply can't hear the satellite over the noise. Because of geometry, this effect happens at different times depending on the earth station's location on earth, and hence the time the sun will be behind the satellite for that station.
Maybe we can call this "uwavehenge"? :-)
Last year I did a calculation converting 2800 MHz solar flux units to dBm in a 30-inch dish. I cannot find it now, but my recollection is that the noise, even in a 20 MHz bandwidth, was not enough to have any effect on the signals AREDN typically encounters (like -80 dBm). In any case, we are getting near the bottom of the sunspot cycle and the solar flux is lower than ever...
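Since the original is lost, here is a back-of-the-envelope Python reconstruction of that kind of calculation; the flux value and dish efficiency are my assumptions, with 1 SFU = 1e-22 W/m^2/Hz:

```python
import math

flux_sfu = 70            # assumed ~solar-minimum flux at 2800 MHz
SFU = 1e-22              # W per m^2 per Hz
diameter_m = 0.762       # 30 inch dish
efficiency = 0.5         # assumed aperture efficiency
bandwidth_hz = 20e6      # 20 MHz channel

# Noise power = flux density * effective aperture * bandwidth
a_eff = math.pi * (diameter_m / 2) ** 2 * efficiency
power_w = flux_sfu * SFU * a_eff * bandwidth_hz
print(f"{10 * math.log10(power_w / 1e-3):.0f} dBm")
# -> about -105 dBm, well below the ~-80 dBm signals AREDN deals with
```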
I think tropo effects are a likely cause. On days when it is sunny and there is little or no wind, you can get some layering in the lower atmosphere. These layer boundaries (dielectric gradient) deflect the signal up or down from the geometric aiming point. The effect should repeat daily and may be worst, for example, late in the afternoon or early evening when the air starts to cool off, while the ground itself is still hot.
The other thing that can happen is that you get enhanced propagation at some elevation above the Fresnel zone. This would be great, except it can be out of phase with the propagation you really want. I had always thought that tropo "scatter" would disrupt the wave front too much to be useful, but apparently, if you employ frequency diversity and enough FEC, you can do broadband using tropo scatter (I suppose some serious ERP is required). How about 8 Mbps at 300 km?
Let me know when these things come on the surplus market:
http://www.ausairpower.net/APA-Troposcatter-Systems.html
A help file says that LQ is derived from OLSR packets, which carry a sequence number and can therefore indicate how many packets were lost between one reception and the next... I guess that is the UDP broadcast you refer to? If you miss a broadcast packet, you missed it, period. With data, you can ask for a repeat. At a fast data rate you get repeats quickly and the channel looks OK. But what happens when all these repeats start to fill up the available time (the "duty cycle" of the channel)? With increasing traffic load, the throughput "falls off the cliff". So the channel looks good until you really need it for something.
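As a toy Python illustration of that sequence-number idea (not the actual olsrd code, just the concept of counting gaps over a sliding window):

```python
from collections import deque

class LinkQuality:
    """LQ = fraction of expected HELLOs actually heard, over a window."""

    def __init__(self, window=16):
        self.window = deque(maxlen=window)   # 1 = heard, 0 = missed
        self.last_seq = None

    def packet_received(self, seq):
        if self.last_seq is not None:
            missed = (seq - self.last_seq - 1) % 65536  # 16-bit wrap
            self.window.extend([0] * missed)            # record the gap
        self.window.append(1)
        self.last_seq = seq

    @property
    def lq(self):
        return sum(self.window) / len(self.window) if self.window else 0.0

link = LinkQuality()
for seq in (1, 2, 3, 5, 6, 9, 10, 11, 12, 13):   # 4, 7, 8 never heard
    link.packet_received(seq)
print(f"LQ = {link.lq:.0%}")   # 10 heard of 13 expected -> LQ = 77%
```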
I think that a low link quality (and jumping SNR) is trying to tell you something. The fact that you can squirt small amounts of data through a lightly loaded channel gives a somewhat false sense of the channel's quality. Assessing the quality of a wireless link in a dynamic environment is a tricky business, to say the least (and fertile ground for grad students to write papers).
We have two ~37-mile long-distance links to compare. Both use RocketDishes on 5 GHz with 10 MHz channels, but one link is getting 50-60 Mbps Tx rates while the other gets 12-25 Mbps. Both demonstrate stable, very-low-latency links (see my 'network performance' post from a couple days ago), yet one has half the capacity. The difference looks to be noise: one site shows a higher noise floor, leaving less SNR above the noise to decode the signal.
The weakness of OLSRv1 in use is that it limits the characterization of link quality to missed UDP packets (no retries). It has no concept of 'throughput' in its design: it would choose to direct traffic through an awesome 100% LQ/NLQ link running at 1 Mbps and pass over a 50 Mbps link with 70% LQ/NLQ. Of course, as soon as the 1 Mbps link becomes saturated, it starts showing much lower LQ/NLQ, causing a slow back-and-forth change of routing when the network is loaded down. OLSRv2 and other routing protocols are working to better characterize throughput and make better decisions. One of these is in our future.
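A tiny Python sketch of that trade-off, using the ETX metric as I understand olsrd computes it (expected transmission count, 1/(LQ x NLQ); lower wins, and the link rate never enters into it):

```python
def etx(lq, nlq):
    """Expected transmissions per delivered packet; OLSR prefers lower."""
    return 1.0 / (lq * nlq)

slow_clean = etx(1.00, 1.00)   # 1 Mbps link, 100% LQ/NLQ
fast_lossy = etx(0.70, 0.70)   # 50 Mbps link, 70% LQ/NLQ

print(f"1 Mbps  @ 100%: ETX = {slow_clean:.2f}")   # 1.00 -> route chosen
print(f"50 Mbps @ 70% : ETX = {fast_lossy:.2f}")   # 2.04 -> passed over
```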
Joe AE6XE