that distance parameter

KE2N
that distance parameter

In my new install, I noticed the default distance on the setup screen was 100000 m (62 miles). That struck me as silly so I took one zero off.

But when I did tests on the bench, the speed results were disappointing:

 5 MHz - 3.6 Mbps (one pair of nodes)

10 MHz - 5.7 Mbps

20 MHz - 7 Mbps 

By shortening the distance, I got better results: for example, 5 Mbps at 5 MHz and 10 Mbps at 10 MHz.

It seems that 800 meters (one half mile) does not impact the speed. 

Not sure where it starts to fall off the cliff, but clearly you don't want this to be longer than needed.

73

Ken

 

KG6JEI
Correct, the distance should

Correct, the distance should be set for your real-world maximum intended distance.

Too long, and it can decrease throughput (the radio waits longer than needed to realize a packet is lost); too short, and throughput may increase, but the device thrashes the RF (sending the same packet again and again before the first packet may even have had time to reach its destination).

Ultimately this "speed impact" is really a function of latency: real-world distance will decrease the speed because the Wi-Fi channel has to wait, for a time set by the distance between the nodes, before it can ever know whether a packet was received or lost.  Bench tests have trouble really showing this part of the equation.
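
As a rough illustration (back-of-the-envelope numbers, not taken from the driver): the extra wait scales with the round-trip propagation time, 2 x distance / c.  At the 100,000 m default that is about 2 x 100,000 / (3 x 10^8) seconds, or roughly 667 microseconds of timeout per frame that needs an ack, while at 800 m it is only about 5 microseconds, and every lost frame sits out the full timeout before a retry.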

KE2N
tortoise and hare

I wonder if you can comment on the reason that (all) hamnet firmware is so slow compared with the OEM firmware?  I just unboxed a new Nano Station LOCO and pointed it at my Verizon router here.  It connects at 130 Mbps in 20 MHz and 300 Mbps at 40 MHz.  A speed test at 20 MHz produces 83 Mbps of Internet throughput, limited not by the hardware but by my ISP.

I know that when I flash this shiny new toy with AREDN - and run it at the same 20 MHz - it is going to produce 20 Mbps in a two-node test ... how sad is that?

Ken

 

KG6JEI
There are a lot of items here

There are a lot of items here that come into play.

A big one is an issue I found out about a few weeks back, but won't get to look at until after 3.15.0.1 is out the door: the radios do use both chains for diversity receive, but are not using them for additional bandwidth.  Now, this is a grey area in that ad-hoc mode actually never specifies that this is supposed to be doable (802.11n didn't touch the ad-hoc specs), so it is unclear whether this is a bug or just something not implemented in the kernel or hardware.  That alone takes max speed down 50% compared to AirOS.  It's something I would personally like to see not be an issue, if possible, so it's on my to-do list.

After that, we are getting into the fact that a mesh ad-hoc connection is NOT the same as an AP connection to a router.  A mesh connection does a lot of things differently and has more overhead, and that can add up.  This is the cost of redundancy and self-configuration versus a hard-defined infrastructure; the cost of a mesh versus a single-room network.

Beacons are one item we have a lot more of in a mesh, and they come into play to slow down the RF.

Beacons from every device (sent at 1 Mbps): these are sent around 10 times per second by every device, and while small, they do consume airtime at the slowest possible data rate (per the 802.11 rules, beacons must be sent at the slowest supported speed; we are considering bumping the minimum speed to the MCS data rates, which are more reliable and faster, but that will require a protocol version jump).  In an AP network only the access point sends these packets (if at all); in a mesh these packets are needed from every device so that nearby devices can all link up.

Beacons of mesh routing data: these are sent less often, but they also consume RF airtime at the slowest data rate and as such eat into data speeds.
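
As a rough illustration of what the beacon overhead can add up to (the 200-byte frame size and the five-node channel here are just assumed figures for the example): a 200-byte beacon at 1 Mbps occupies about 1.6 ms of air plus roughly 0.2 ms of preamble.  At 10 beacons per second from each of 5 nodes sharing a channel, that is about 50 x 1.8 ms = 90 ms out of every second, close to 9% of the airtime, spent at the lowest rate before any user data moves.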

Commercial driver support: this may play some part in it all, though I don't know how much.  The open-source community often has to work from not-so-great documentation, which means some things that commercial drivers can use to 'speed up' are not actually utilized in the open-source world.  I'm not sure this comes into play here, but it is something I have seen time and time again: open-source drivers are sometimes not as good as the closed-source drivers (yet other times I've had hardware where the open-source driver is leaps and bounds above the closed-source one).

I'm sure there are dozens more interactions deep inside the kernel drivers that even I don't know about.  The main point to keep in mind is that it's not an apples-to-apples comparison between the two, so differences are expected.

 

KE2N
fruit

Yes - it is an apples-to-kumquats comparison....

Thanks for the explanation, that helps a lot.  

Sounds like using diversity to directly increase bit rate would be complex to do and maybe require special hardware (e.g., UBNT Air Fiber).

In principle, the better SNR resulting from MIMO should allow using a higher MCS step than would be the case with SISO.  That would be a really simple way to increase throughput, but would likely offer benefit only when signals were strong. Still, it would be worthwhile ... 

Since AREDN favors MIMO type hardware, it would be nice if it would offer a throughput increase (over BBHN) of something closer to 2X rather than the 10% that some people are reporting. Although, due to the various fixed overheads, I suppose 2X is not possible. Still - it's an interesting problem for some clever person.

73

Ken

KE2N
speed

Speaking of speed - I have not tried the higher bands yet (soon!) but can we expect to be able to use 40 MHz BW up there?

Ken

 

k1ky
Distance Setting Guidelines

When setting up the distance setting for a node, do I need to take into account the "total" path distance, including all of the hops to the internet, or just the distance to the furthest node that this unit will be communicating with directly via RF?

The reason I ask is that I sometimes get a "LATENCY ERROR" message when running a speed test on a computer that is located at my remote node.

 

 

K5DLQ
The physical distance between

The physical distance from your node to the furthest node that your node will reach directly over RF (i.e., just the RF hop, not the total multi-hop path to the internet).

 

 

KA9Q
I am not convinced that the

I am not convinced that the distance parameter only affects the ack timeout. If packets were never lost, the ack timeout wouldn't matter as long as it wasn't too short. But Ken observes that decreasing the parameter increases throughput even on the bench, where I wouldn't expect anything to get lost.

I suspect it affects the collision-avoidance hold-off time piggybacked onto each physical frame for which a response is expected.  Nodes overhearing a frame to someone else are expected to defer for this amount of time to avoid stomping on a response that they might not hear directly.  Too large a value would cause unnecessarily long holdoffs and reduce throughput.

There's a way to test this. From a shell within the Ubiquiti unit, run 'tcpdump' on the raw wifi interface (wlan0-1) with the -v -e options. This will produce a lot of OLSR broadcasts, so to trim it down you might say "tcpdump -n -i wlan0-1 -v -e not multicast". You might have to generate some unicast traffic, e.g., by pinging some other node. You'll see something like this, which I generated by pinging a remote node:

21:52:13.199474 48.0 Mb/s [bit 15] 172us CF +QoS DA:24:a4:3c:f2:1c:ab SA:68:72:51:12:49:b7 BSSID:3a:af:e7:8e:f1:1b LLC, dsap SNAP (0xaa) Individual, ssap SNAP (0xaa) Command, ctrl 0x03: oui Ethernet (0x000000), ethertype IPv4 (0x0800): (tos 0x0, ttl 63, id 11054, offset 0, flags [DF], proto ICMP (1), length 84)
    10.44.77.5 > 10.44.77.9: ICMP echo request, id 856, seq 1, length 64

This is my ICMP echo request frame going out; the SA field is my own MAC address and the DA field is the MAC address of w6qar-wts, my neighbor. Note the field "172 us". This is the time, in microseconds, that I'm asking other stations to wait so I can get my link-level ack back. The holdoff depends on the data rate (48 Mb/s in this case) and the size of the packet. And it may also depend on other parameters like the link distance setting -- that's my idea, anyway.

KE2N
clarification

Decreasing it from 10,000 meters to 800 meters made a notable improvement. Decreasing it from 800 to 100 meters had no effect.

This makes sense. The CPU is not infinitely fast. If I fiddled enough I suppose I could find a "knee in the curve" and that would be a distance equal to the amount of time it takes the CPU to get around to listening for the reply, times the speed of propagation.
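
For what it is worth, and assuming the stock mac80211 behavior rather than anything AREDN-specific: the driver quantizes the distance into 802.11 "coverage class" steps of roughly 450 m, each worth about 3 microseconds of extra ACK wait, so dropping from 800 m to 100 m only trims a few microseconds, while the 10,000 m setting was adding on the order of 70 microseconds per exchange.  That would explain why nothing changes below a kilometer or so.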

Regards,

Ken

 

n5mdt
It definitely makes a huge

It definitely makes a huge difference. In our network we have nodes that are 16+ miles apart, and some that are 5 miles apart. Changing the default value of 100,000 meters to the actual distance +1 mile on each node substantially improved the throughput of the network.

Until I did that, I was worried that it would not handle our VoIP solution.

Mark

 

SP2ONG
If I remember correctly, the option

If I remember correctly, the 'distance' option in the wireless file is two-way: for example, if the distance between nodes is 2000 meters, you must set distance 4000 in the wireless config.

 

73 Waldek SP2ONG
 

KE2N
distance

OpenWRT/MadWifi seems to indicate that there are two parameters affected by distance: one is the ACK timeout which, as you say, is a round-trip time; the other is the CTS timeout, which is a one-way time.  What I would expect is that you put in the actual distance and the factor of two is applied where needed, inside the program.
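
For anyone setting this by hand, here is a sketch of where the value lives in an OpenWRT build (the 'radio0' name and the 3000 m figure are placeholders, and on AREDN the setup screen normally writes this for you):

config wifi-device 'radio0'
        option type     'mac80211'
        # one-way link distance in meters; the driver derives the ACK/CTS
        # timeouts from it, applying the round-trip factor where needed
        option distance '3000'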

73 Ken KE2N

 

SP2ONG
Yes, we can play with

Yes, we can play with the following parameters to optimize a link:

- distance (coverage class)

- RTS

- Fragmentation

- Retry short

- Retry Long

The current status of these parameters can be seen by running the command: iw phy
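
For example, from a shell on the node (the 'phy0' name is the usual default and may differ, the exact lines printed depend on the driver and iw version, and values set this way may be overwritten the next time the wireless config is reloaded):

iw phy phy0 info | grep -iE 'coverage|rts|frag|retry'   # show the current settings
iw phy phy0 set distance 4000                           # meters; picks the matching coverage class
iw phy phy0 set rts 500                                 # RTS threshold in bytes, or 'off'
iw phy phy0 set frag 1400                               # fragmentation threshold in bytes, or 'off'
iw phy phy0 set retry short 7 long 4                    # retry limits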

73 Waldek

 

wb6tae
In earlier versions of the

In earlier versions of the software the default setting for Distance was "0", which was also defined as being "automatic."  With v3.15.x the default now seems to be 10000 (or was that 100000?).  Does this default setting also denote some attempt at "automatic"?  Also, has the definition of "0" changed?

AE6XE
wb6tae,  the original '0'

wb6tae, the original '0' came from the Linksys days, and the behavior between the Broadcom chips and the Atheros chips may not be the same.  Someone would need to dig further into the OpenWRT code base for the Atheros behavior to get a definitive understanding of what is going on.  The material out there spans many different versions/models of the Atheros chips and Linux wireless vs. OpenWRT implementations (they don't always do things the same).  I have seen posts that say Atheros "0" ('ack' handshaking timeout) means automatic, but with no context of which chipsets and driver versions, so it is hard to know whether this holds for the AREDN-supported hardware.

My testing shows that it makes a SIGNIFICANT performance difference and that this parameter should be set to the actual distance plus a little :) .  I'm seeing 2x iperf throughput by tuning distance alone (I reduced the setting from 30 km down to 10 km) and went from 9 Mbps to 17.2 Mbps on 3 GHz.  But then I needed to increase the distance setting on the mountain node, and settle for 15 Mbps on my shorter link, because of another node out at a farther distance.
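
For anyone who wants to repeat the measurement, it is just the stock iperf pair; the address below is a placeholder:

iperf -s                     # on the far node, or a host behind it
iperf -c 10.x.x.x -t 30      # on the near side; reports TCP throughput over 30 seconds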

Joe AE6XE

wb6tae
Thanks Joe.  I have always

Thanks Joe.  I have always set mine to an explicit value (the distance rounded up so that it is an even multiple of 150).  But I was just curious about the significance, or lack thereof, of the default values.

Kd6mtu
auto ack time
I've looked, but I think I may have missed this: will there be an auto ACK time in a future release?
