
Auto Distance setting

AE6XE
Auto Distance setting
The auto distance setting will be in the Nightly build after tonight, May 3rd, 2019 PDT.  A snippet from the node's help file is included below.  Further testing in complex environments (mobile use) needs to occur to ensure stability and build confidence to put this into a release.  A test scenario in which the farthest (max distance) neighbor changes will confirm the implementation is hitting the target.

Failures of auto distance would be observed if a node that is normally accessible can no longer be communicated with.  In the worst case, if you run into this situation, it is still possible to access the node (it is very slow to respond, but does respond) with a telnet or ssh command line and type the command "iw phy phy0 set distance 60000" to put it back to a static setting and bring back connectivity (on the hAP ac lite, this is "iw phy phy1 set distance 60000").
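As a concrete recovery sequence (a sketch only; the node name is a placeholder, substitute phy1 on the hAP ac lite, and AREDN nodes normally accept ssh on port 2222):

ssh -p 2222 root@<nodename>.local.mesh
iw phy phy0 set distance 60000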

Monitor the file /tmp/AutoDistReset.log.  This file logs (as seconds since node boot) each time a node joins and the auto distance setting starts over.
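To watch it in real time from the node's command line, something like the following should work:

tail -f /tmp/AutoDistReset.log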

Review the timeout value, in uS, as calculated either from auto or from the static setting in meters:
cat /sys/kernel/debug/ieee80211/phy0/ath9k/ack_to
On the hAP ac lite, it is "phy1" instead of "phy0".

If you really want to see the details of the algorithm, turn on debug logging:
echo 0x00080400 > /sys/kernel/debug/ieee80211/phy0/ath9k/debug
then do a "dmesg" to see the debug logging.
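To turn the extra logging back off afterwards, writing a zero mask back to the same file should clear it:

echo 0 > /sys/kernel/debug/ieee80211/phy0/ath9k/debug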

The on-node help:
 
A value of '0' will cause the radio to auto determine the RF retry timer based on measuring the
actual time it takes for acknowledgement packets to be received back.  The
automatic timer is tracked using an Exponential Weighted Moving Average (EWMA) method.
'auto' is the default setting and in almost all situations the optimal setting.
The best way to test for an optimal distance setting is to do an 'iperf'
test directly between 2 nodes to measure the performance of this RF
channel.  Try different distance settings to peak out the iperf throughput.
 
The maximum distance setting the ath9k wireless driver allows depends on
the Channel Width:
  • 20MHz: 46666 meters
  • 10MHz: 103030 meters
  • 5MHz: 215757 meters
 
The auto distance setting is best used on quality point to point links.
Performance increases of 50% over static settings have been observed.  The auto distance
setting does not work well with many nodes and marginal links.  In this
scenario, the round trip packet timing has a very wide range of values.
Consequently the timeout value becomes inflated and inconsistent.  Static settings
should be used in this situation.  It is best to measure the link with iperf to
compare throughput and determine the best distance setting, as in the sketch below.
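One way to run such a comparison (a sketch only; it assumes the iperf package is already installed on both nodes, <server-node> is the far-end node running 'iperf3 -s', and phy0 is the mesh radio -- phy1 on the hAP ac lite):

iw phy phy0 set distance 10000
iperf3 -c <server-node> -t 10
iw phy phy0 set distance 20000
iperf3 -c <server-node> -t 10

Compare the reported throughput at each static setting against the throughput with the setting on 'auto'.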

Joe AE6XE
 
K6CCC
It's working

Updated the Rocket M5 at work to 890 an hour or so ago and it is working.  No noticed change in the TX speed listed.  I have never done an iperf test, so no baseline there.  Note that prior to setting to Auto, I had the static setting at something like 10 or 11 km on a 9.36 km path, so it was pretty close to optimum already.
The cat /sys/kernel/debug/ieee80211/phy0/ath9k/ack_to result is: 201A
/tmp/AutoDistReset.log is: 94
Not sure I know what either of those means, but that's what they are.

AE6XE
This ack_to is the timeout in

This ack_to is the timeout in uS within which an ack packet is expected back after sending a data packet to another neighbor (unicast, meaning the data was addressed to that neighbor and not broadcast to all neighbors).  The logic measures when ack packets are received back to set this value, which can be many samples per second.  Keep in mind the time includes how long the remote node took to process and transmit back the ack.  The math the nodes use to convert a static distance setting to this uS value is:

(ack_to in uS) =  (distance in meters) / 151.515151 + 64

Thus this '201 A' or Auto determined value would be equivalent to a static setting of about 21 km.  However, if you have not sent any significant amount of data through this link, it will not have zeroed in on an optimal setting -- it starts at the maximum, then comes down to the setting.  Monitor the setting over time.
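Working the formula in both directions with the numbers from the post above:

(201 - 64) * 151.515151 ~ 20,758 meters, or about 21 km
9360 / 151.515151 + 64 ~ 126 uS, the value a static setting matched to the 9.36 km path would give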

On the 40 mi P2P link I measured, the ack_to value was ~10% lower when there was minimal data flowing than when iperf traffic was saturating the link.  I'd speculate the remote node is a bit slower to respond with an ack when it is under load.

The iperf package can be installed from the Administration page.    Once installed:

pick one side to be the server:   iperf3 -s
pick the other node to be the client:    iperf3 -c <hostname of the iperf server node>

See the results.  (If you can't connect to the server side, you may need to reboot the node for the firewall rules to be updated.)
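By default iperf3 measures the client-to-server direction; running the client again with the -R flag tests the reverse direction, which is worth checking since the two directions of an RF link can differ:

iperf3 -c <hostname of the iperf server node> -R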

Joe AE6XE
