
power versus data throughput


I've always set power output for maximum data throughput. According to MikroTik, my LHG XL HP5 has maximum data throughput at 24dBm of power; max power is 28dBm.

Reading the AREDN help file...

The Power setting controls the max power the unit may output. The node may decrease its power output as it enters higher speed data rates to maintain a linear spectrum. Some devices may have max power levels that change based on what channel/frequency the hardware is operating on, in this case the max level will change when you save the settings and will be capped at the max level supported by the hardware for that frequency.

should I be leaving my nodes at max power out?


Richard   ko0ooo

K6AH:
Highest power not necessarily the best

I encourage you to try lowering power as an experiment to answer your question. In addition to the explanation in our docs, at high power levels some devices may actually introduce signal distortion. Back off 2-3 dB and measure throughput again.

KE2N:

Vendor specs will show a power turn-down at higher bit rates.
Ubiquiti has a lot of detail in this regard, MikroTik, not as much.
Here is the table for the one you are using:

Rate       Max power (dBm)   Sensitivity (dBm)
6MBit/s          28               -96
54MBit/s         25               -80
MCS0             28               -96
MCS7             24               -75

Joe (or somebody else) can correct me if I am wrong but I think the power turn-down may not be included in the AREDN firmware load.

In that case, the radio will run at whatever MCS gives the best throughput, including the effect of retries. Some of the retries may be due to the transmitter distorting the OFDM signal so that it cannot be decoded without errors. But you might get better throughput by accepting retries and running at a faster signaling rate.

Or not.




thanks Ken...

Ran some tests. Normally I use 24dBm (20MHz bandwidth) to a node 15 miles away. Increased power to 28dBm; TxMbps stays the same at both locations: 144 at mine and 120 at the mountain node. The only noticeable difference is that the mountain node shows an improvement in my received SNR, from 33 to 35.

Richard     ko0ooo

KE2N:

Quite a while back I did a series of measurements with a peak-reading power meter and found that response to the last few dB of the power range is quite flat.

Asking for 4 dB more power and actually getting 2 dB more is exactly the kind of thing I found near the top of the range.

2 dB may not be enough difference to jump you into the next MCS (from the receiver's point of view).


AE6XE:
Signal quality vs  power, channel width, and modulation rate

Background, if this is new to some reading this thread: an issue in these low-cost devices is the ratio of peak to average power of the signal as it goes through the power amp (PA). The larger the ratio, the more dynamic range the PA requires to stay linear. If it goes out of linearity, the signal quality degrades, with increased energy outside the channel -- spurs, etc. Keeping the cost low results in PA designs that are just good enough. Part 15 certification is done at 20MHz, so out of the gate when we use 10MHz we are in unknown territory. Well, not completely unknown; there have been spectral plots produced over the years.

* Increasing the transmit power drives the signal peaks closer to PA saturation
* Cutting the bandwidth in half increases the peak-to-average ratio
* Increasing the modulation order of the 64 carrier waves, e.g. 16-QAM to 64-QAM, increases the ratio
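To get a rough feel for why OFDM stresses the PA: the worst case for an N-carrier OFDM signal, with all carriers momentarily in phase, is a peak-to-average ratio of 10*log10(N) dB. A quick sketch (the carrier counts here are illustrative, not taken from any particular device):

```shell
# Worst-case OFDM peak-to-average power ratio: if all N subcarriers align
# in phase, peak power is N times the average, i.e. 10*log10(N) dB.
awk 'BEGIN {
  for (n = 52; n <= 64; n += 12)      # 52 used carriers vs 64-point FFT
    printf "%d carriers: worst-case PAPR %.1f dB\n", n, 10 * log(n) / log(10)
}'
```

Real signals rarely hit that worst case, but it shows why the PA needs far more headroom than the average power alone would suggest.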

I've not been able to confirm whether the power reduction shown by Ubiquiti with increasing modulation order is done in the chip or in the Linux wireless driver. We'd have to do further testing to confirm. If done in the chip, we benefit. If done in the wireless driver, we're probably not benefiting; I've not found any reference in the ath9k code base. The logic is probably done in the wireless driver, as the chip manufacturer does not know what PA will be paired with a given device.

We don't have a measurement of out-of-channel energy to know how small or big the issue is. Not a scientific measurement, but in past changing of the power settings, there seemed to be diminishing returns as the power is increased to max. I don't recall a situation yet where a power reduction resulted in an increase of throughput, so I suspect the quality of the in-channel signal hasn't degraded that much. Maybe this means the out-of-channel signal problem is small too.

Ditto. On P2P or other links, dial back the power until you see a significant link rate drop. This is best practice regardless. Alternatively, if there are others on the adjacent channel, see if they can tell a throughput difference when you change power settings. If they can't, then from a practical perspective, there isn't a problem to be solved.


nc8q:
dial back the power until you see a link rate drop

In-service test of a pair of MikroTik LHG XL HP5 on channel 141 at 10 MHz BW,
2.6 miles apart, 49' agl one end, 120' agl on the other end.
1 mile of the LOS is in the trees.
Maximum power, 28dBm, yields somewhat steady 21.6 TxMbps at 120' agl end
and up to 21.6 TxMbps on 49' end.
rc_stats usually reports SGI 2 A MCS10 at each end.
This is a clear channel (of local AREDN nodes) Point-to-Point link.

While watching '/mesh' status and rc_stats, I dropped the Tx power 1 dBm at a time down to 20dBm.
At each dBm reduction there was a perceptible reduction in average TxMbps.
Usually an increase in retries, and often a reduction in MCS, sometimes due to a reduction in the number of streams.
At 120' agl end at the lowest power setting the SGI changed to LGI.
At 49' agl end LGI/SGI changes were common.
Node appears to always strive for highest throughput without regard to retries.

A 5 GHz, channel 173, 13.5 mile link from the same site was unaffected by this 8dBm power reduction.
This link is an early model Ubiquiti AirGrid M5 HP <> AirGrid M5 XW.

Assuming that the amount of data per unit time is constant,
the higher data rates require less transmit time,
thus fewer watt-seconds and less channel occupancy.
Is there a way to measure how much time a node has been transmitting?
i.e. Can channel occupancy or Tx Watt Seconds be determined?
This may not be an issue with clear channel point-to-point links, but
it might affect Point-to-MultiPoint or multi-multi link channels.
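The point about higher rates meaning less time on air can be put in numbers with a quick calculation (the rates are borrowed from figures mentioned in this thread; MAC overhead and retries are ignored for simplicity):

```shell
# Time on air to move the same 10 MB payload at different link rates:
# higher MCS means proportionally less transmit time and channel occupancy.
awk 'BEGIN {
  mb = 10 * 8                            # payload in megabits
  split("6.5 21.6 65 144", rates, " ")   # example TxMbps values from this thread
  for (i = 1; i <= 4; i++)
    printf "%5.1f Mbps: %.1f s on air\n", rates[i], mb / rates[i]
}'
```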


AE6XE:


On the 2.6 mi link, this channel 141 is in the part 15 noise area. Is part 15 noise a possible factor for this link? At that distance, a link should be able to do 20MHz bandwidth and get 130Mbps MCS15 link rates. There must be some noise or something degrading the signal (trees, etc.)? I know of RocketDishes on 3GHz ~5 mi apart that will do 130Mbps rock solid. I know of RocketDishes on 5GHz 40 mi apart doing 65Mbps on 10MHz rock solid.

"Is there a way to measure how much time a node has been transmitting?"

This is not straightforward. There are cumulative transmitted byte counts that could be used. If this is P2P, and the rate is steady, a good ballpark number could be calculated. We'd have to confirm whether beacons and all data, including protocol, are included in the byte counts. For multi-point, it may be difficult to match up rates and byte counts as traffic is sent out -- every other frame could be a different rate, depending on who it is going to.

Comparing total transmit bytes between all the devices on the channel may give just as good insight into the question being asked.
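As a sketch of the ballpark calculation Joe describes, assuming a steady P2P link and that the byte counter covers everything sent (the counter, rate, and uptime values below are made up for illustration):

```shell
# Rough airtime estimate for a steady point-to-point link:
# airtime (s) = bytes * 8 / (link rate in bits/s)
tx_bytes=1500000000      # hypothetical cumulative TX byte count for the neighbor
rate_mbps=144            # observed steady TxMbps
uptime_s=345600          # 4 days of uptime
awk -v b="$tx_bytes" -v r="$rate_mbps" -v u="$uptime_s" 'BEGIN {
  secs = b * 8 / (r * 1e6)            # seconds spent transmitting
  printf "tx time: %.1f s (%.2f%% of uptime)\n", secs, 100 * secs / u
}'
```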

"i.e. Can channel occupancy or Tx Watt Seconds be determined?
This may not be an issue with clear channel point-to-point links, but
it might affect Point-to-MultiPoint or Multi-Multi link channels"

A similar issue in finding the answers, but we also have to factor in how much user data the device is servicing. Very different answer if streaming video, voice, or idle.


nc8q:
Power rate vs. lots of trees vs. channel occupancy

Hi, Joe:

Many thanks.
"Is part 15 noise a possible factor for this link?"
Always. ;-)
That both ends show even SNR and TxMbps indicates the noise is equal or not a factor. :-|
It has been about a year since, but I did several 20 MHz BW WiFi scans from each end and then chose a channel
that was relatively clear at both ends. :-|

"something degrading the signal (trees, etc.)?"
Guaranteed! At least the 1st mile of the 2.6 miles from my home is through the trees. :-(

Thanks for your comments on determining how much time a node has been transmitting.

Please guesstimate what this link should do with these SISO devices at 10 MHz BW:
AirGrid M5 27dBi 25dBm 120' agl <13.5 miles> AirGrid M5 XW 27 dBi, 25 dBm, 130' agl.
We are getting MCS1 6.5 TxMbps one way and MCS2 9.7 the other.

Around these parts you can count on lots of trees 50 to 70 feet tall.


AE6XE:

Take a significant portion of the part 15 noise out of consideration by going to a channel above 165. I have 5GHz nodes on major SoCal tower sites where part 15 is saturated from WISP operators. I can still do a 40 mi link at 65Mbps in the higher channels: 170, 172, 174, and up. On ch 165 and below, the node isn't functional.

I found a data value per neighbor that captures the RX and TX cumulative time.   

root@AE6XE-NSM3-QTH:/sys/kernel/debug/ieee80211/phy0/netdev:wlan0/stations/68:72:51:0e:21:1b# cat airtime
RX: 2209045696 us
TX: 156286592 us
root@AE6XE-NSM3-QTH:/sys/kernel/debug/ieee80211/phy0/netdev:wlan0/stations/68:72:51:0e:21:1b# uptime
 15:35:28 up 4 days, 52 min,  load average: 0.29, 0.23, 0.19

With a bit of math, one can calculate the RX % and TX % of time this node has on the channel (probably a small fraction of a %). In this case, this is a (hidden node) client to the mountain top site. There are ~3 other clients connecting in.
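Using the airtime counters and uptime from the session above, the occupancy math looks like this (values copied from that output; on a live node they'd be read from the same debugfs path):

```shell
# Channel occupancy from mac80211 per-station airtime counters (microseconds)
rx_us=2209045696         # RX airtime from the debugfs session above
tx_us=156286592          # TX airtime from the same session
uptime_s=348720          # 4 days, 52 min, per `uptime`
awk -v rx="$rx_us" -v tx="$tx_us" -v u="$uptime_s" 'BEGIN {
  printf "RX: %.2f%%  TX: %.2f%%\n", 100 * rx / (u * 1e6), 100 * tx / (u * 1e6)
}'
```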

With the AirGrid devices at this distance, with unobstructed LOS, it should get the full MCS7 rate, which on 10MHz would be 32.5Mbps LGI or 35Mbps SGI. If it is selecting and can use SGI, there must be minimal multipath signals, which makes sense, as a lot more trees would be blocking.

This guard interval actually contains duplicate data from the symbol, called the cyclic prefix. The purpose is equalization -- to null out any delayed signals and prevent corruption of the next symbol, i.e. inter-symbol interference (ISI). The guard interval has to be longer than the delay with which a reflected signal arrives, or it can't prevent ISI. At 10MHz channel width, the symbol and GI times are double those of a 20MHz channel. Generally, RF shielding by obstructions can block some multipath signals and give hope of SGI and the slightly higher rates. Looks like your trees are doing that for you :).
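To put numbers on that, the standard 802.11n 20MHz timings are a 3.2us symbol with 0.8us long GI or 0.4us short GI; halving the channel width halves the clock, doubling all three:

```shell
# 802.11n OFDM timing: halving the channel width doubles symbol and GI times,
# so a 10MHz channel tolerates twice the multipath delay spread of 20MHz.
awk 'BEGIN {
  for (bw = 20; bw >= 10; bw /= 2) {
    scale = 20 / bw
    printf "%2d MHz: symbol %.1f us, LGI %.1f us, SGI %.1f us\n",
           bw, 3.2 * scale, 0.8 * scale, 0.4 * scale
  }
}'
```

So at 10MHz with SGI, a reflection delayed more than 0.8us (about 240m of extra path) would start corrupting the next symbol.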



AE6XE:
802.11n specs and testing the design

From the book, "Next Generation Wireless LANs", 2nd Ed., Eldad Perahia & Robert Stacey:

"A power amplifier (PA) of a certain size, class, and drive current is most efficiently used when driven to saturation for maximum output power.  However, when operated at saturation, the PA exhibits non-linear behavior when amplifying the input signal.  The distortion caused by the PA on the transmitted waveform causes spectral re-growth, impacting the transmitter's ability to meet the specified spectral mask.  In addition the distortion impacts the Tx error vector magnitude (EVM), increasing the packet error rate at the receiver.

The 802.11n MIMO_OFDM system is especially sensitive to PA non-linearity, since the transmitted waveform has high dynamic range and uses high order modulation.  Therefore in order to properly model the PHY, especially with high order QAM modulation, a model for PA non-linearity was included in the PHY simulations when comparing proposals.  [... math of modeling distortion...]   For the simulation of the different proposals for the 802.11n development [...]

For simulations, the recommended transmitted power, at full saturation, was 25 dBm.  The total transmit power was limited to no more than 17 dBm.  Therefore, with the recommended settings, the output backoff from full saturation is 8 dB."




Thanks for taking the time to explain.

Richard     ko0ooo

k1ky:
So the power doesn't adjust automatically?

It was my understanding that the nodes automatically adjust power output based on the internally selected MCS? I've never tried reducing power to increase throughput. Interesting discussion.

AE6XE:
K1KY, AirOS does this per the model specs of the devices. It is not confirmed whether the OpenWrt driver does this -- probably not, as I've not found any comments or notation in the code to date that talk about this. I doubt the logic is in the hardware.


KE2N:
data rate and throughput are different things

The specs will tell you the maximum power for a certain bit rate (MCS). Generally, power has to be reduced to run the higher bit rate without a lot of retries due to signal corruption. In some cases, it may be that the power needs to be reduced to meet FCC specs for bandwidth of the transmitted signal.
The maximum bit rate does not necessarily correspond to the maximum throughput for a link.
That is kind of the crux of the discussion.
