Support for the GL AR300M16 is in the nightly builds, available from May 18th, 2019. Summary of specs:
* 2.4 GHz band
* available in models with or without 2 external antennas
* 23 dBm max transmit power
* 2 ports - LAN/DtDlink (untagged/VLAN 2) and WAN (untagged); see the sketch after this list
* powered by a 5 V USB cord (could run for days on a readily available battery pack of the kind used to charge cell phones)
* look for the "gl-ar300m" image
* 3 LEDs: power, booting/booted, established mesh link
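A minimal sketch of how that port layout shows up from the node's shell (the interface naming is my assumption and varies by build; the DtDlink VLAN rides a tagged sub-interface such as eth0.2):
ip link show | grep eth    # expect the base ethernet device(s) plus a .2 sub-interface carrying DtDlink (VLAN 2)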
There is another model, the "GL AR300M", which includes the same 16 MB NOR flash as the GL AR300M16 but additionally has a second 128 MB NAND flash. This model does not yet have support in OpenWrt or AREDN. It will take further work to dig through the code and port it over from the GL.iNet github repo.
Thanks to Damon K9CQB for providing these devices to bring them under support.
Joe AE6XE
I don't see it in the Nightly Builds 930 - where can we find it??
Check again, it's there now.
Joe AE6XE
Joe,
I see the 'aredn-936-3ad13fd-gl-ar300m-sysupgrade.bin' but not the factory.bin file.
-Damon K9CQB
I forgot - that's the only file needed.
-Damon K9CQB
*PS: You're awesome, Joe!!!!
Joe,
I just tried to flash my AR300M and it said "The uploaded image file does not contain a supported format. Make sure that you choose the generic image format for your platform."
So I re-read your post, and it looks like you meant the AR300M16 is the currently AREDN-compatible device.
I'll have to order another one. I just want to be correct, though: it's the AR300M16, right?
-Damon K9CQB
The GL "AR300M16" is supported by AREDN firmware. The "AR300M" is not supported by AREDN (and not supported by OpenWrt). Sorry if the wording was confusioning, hope this clarifies.
The "AR300M" has 2 flash or persistent storage chips, a 16Mb NOR and a 128Mb NAND. AREDN/OpenWrt supports "AR300M16" with only the 16Mb NOR.
Joe AE6XE
Joe,
Thankfully the AR300M16 is the cheaper one at $28 (the -Ext version, anyway) instead of $46. I'm glad it was the cheaper one that turned out to be AREDN compatible. I can see that plenty of folks may purchase or already own the more expensive one (AR300M) and mistakenly think it is AREDN capable.
I think the AR300M16 is the cheapest 2x2 MIMO device AREDN supports, and 23dBm is the same TX power as the MikroTik hAP ac lite. I also like that you can buy the AR300M16-Ext and have external antennas, like a tiny Rocket at half the power but with double the Ethernet ports (and a USB port). There is no PoE version of this, but at 5V I can find power for it just about anywhere.
Thank you, Joe.
-Damon K9CQB
Can anyone confirm that the "-EXT" version of this device is also supported by this firmware load? I just ordered one to function as a Wi-Fi bridge, but I'm interested in its function as a mesh node for future purchases. Is there any difference in power output between the internal and external antenna versions? How do the 2 antennas function in mesh mode?
Confirmed. The -EXT model is the one I tested the image on. It is a dual-chain MIMO device. Best to orient the antennas so the polarizations are 90 degrees from each other.
Anyone with an internal-antenna model who is willing to assist: we need to check that the max power setting is the same as on the -EXT model, and we need the results of a command to check, as sketched below. For the moment, the assumption is that the max power setting is the same and everything is identical except the antenna gain.
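A sketch of that check from the node's shell (assuming the radio is phy0, as on these single-radio units; each supported frequency is listed with its max power):
iw phy phy0 info | grep dBm    # per-channel transmit power caps as reported by the driver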
Joe AE6XE
GL-iNet adds an external antenna accessory kit to the GL-AR300M16 to make it a GL-AR300M16-Ext. They are the same hardware with the same software/firmware settings. The versions with internal antennas have a test-point connector soldered on the RF traces, whereas the -Ext version has U.FL/IPEX connectors on the RF traces, where the two U.FL to RP-SMA bulkhead adapters connect.
*NOTE: I recommend to whoever buys these GL-iNet devices that you always get the version with external antennas, so you can actually do something directional or high-gain with them.
**Also, I would highly recommend the AR300M16-Ext, which has 23dBm power, 2x2 MIMO, 128MB RAM, a faster 650MHz processor, dual Ethernet ports, 2 external antenna ports, and cost me $28, yes $28!!!
-Damon K9CQB
Wow! Look at all that flash space - compare that 9436KB to other AREDN implementations! And you've gotta love using "ubiquitous" 5V DC power. As a side thought: this might be a great alternative to using a separate Pi for MeshChat installs: lots more options, less expensive, and it's a real node!
One question: do you actually get to use all 23dBm of power on -2? Some devices (like TP-Link) advertise a power level for the center of the band, but it gets restricted down on the edges, as with our -1 and -2 channels on 2.4GHz.
Thanks for the detailed heads-up and recommendation,
- Don - AA7AU
edited to add: and how about taking the DtD link in from a bigger -2 (or whatever) node setup outside, and setting this unit to -1 for "local" inside use with a small group of local EOC users ...
Yes, all channels are 23dBm. It is only the tplink devices that do this -- max power drops on channel edges. Running ch -2 @ 5MHz for a dish on the roof, and separating frequency contention by using ch -1 @ 5MHz for local user access, may give improved performance, particularly for VoIP applications.
Joe AE6XE
Is it possible to put iperf3 on the GL AR300M16?
Using the standard package install does not seem to work and I do not see iperf3 in the packages directory of the nightly build (as of today 5/25).
Ken
The iperf package is hidden in the generic package location for ubnt, tplink, etc. Does this work?:
http://downloads.arednmesh.org/snapshots/trunk/packages/mips_24kc/base/i...
Be sure to reboot or run "/etc/init.d/firewall restart" for the server side to have the new firewall rule applied. (It should do this automatically in the package install, but does not seem to be doing so -- if so, we need a github issue to make sure it gets done.)
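A minimal sketch of those steps from the node's shell (assuming the .ipk from the link above has been downloaded to /tmp; <iperf-package> is a placeholder, since the exact filename is truncated above):
opkg install /tmp/<iperf-package>.ipk
/etc/init.d/firewall restart    # applies the new firewall rule without a full reboot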
Joe AE6XE
OK - it's been a while ;-P
What I was looking for was the "iperfspeed" part of the package. I installed that and it works (with the exception that the archived results are missing the actual bit-rate result; the printout of the last test conducted does show all the info).
At 10 MHz BW signalling, TxMbps bit rates show around 70 - with MIMO I was expecting to see more like 140. Maybe that is not possible with "rabbit ears"?
(Are these things really MIMO running under AREDN?)
Ken
Some more checking of other units I have shows that 140 is the kind of TX rate you would expect with 20 MHz BW and MIMO. For 10 MHz, 70 is what you should expect.
Ken
Yes, there are always 64 carrier waves in the selected bandwidth. The symbol length becomes 2x at 1/2 the bandwidth, so only half the max rate is possible: it takes twice as long to send the same number of symbols or bits. With the longer symbol length, fading is better mitigated at even longer distances. Accordingly, the max ack timeout value is doubled going from 20MHz to 10MHz too.
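A quick worked example, assuming the 802.11n 2-stream MCS15 short-guard-interval rate of 144.4 Mbps at 20MHz:
144.4 Mbps x (10 MHz / 20 MHz) = 72.2 Mbps max PHY rate at 10MHz
which lines up with the ~70 TxMbps Ken observed.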
One explanation for the speeds observed is that the advertised 300 Mbps might be based on a 40 MHz channel width (not available in AREDN).
so (in round numbers)
150 TxMbps max at 20 MHz
75 at 10 MHz
37 at 5 MHz
The mesh protocol itself does not affect these numbers. Instead it affects the throughput that can be had between any pair of nodes (iperf results).
This is common to all devices when comparing OEM "transparent bridge" mode to mesh mode.
There is some benefit to retaining the OEM software when you are using only two nodes in a point-to-point backbone link (on 3.4 GHz, for example), because given strong signals you could run 40 MHz BW in AP/station bridge mode and achieve far more throughput.
Ken
ref: 802.11n spec rate table https://en.wikipedia.org/wiki/IEEE_802.11n-2009
300Mbps is achieved only at 40MHz channel width with the short guard interval (SGI) of 400ns -- the MCS15 rate. Best to check the node's rate table to see exactly what MCS rate, guard interval, etc. the device is using to communicate with another node.
/sys/kernel/debug/ieee80211/phy0/netdev:wlan0/stations/<mac address of remote station>/rc_stats
(It will be phy1 and wlan1 on an hAP ac lite or AR750.)
Example table (no traffic going, so it doesn't show anything interesting for the moment). Look for where there is an "A" in column 4 -- this is the rate in use. If using a 10MHz channel width, refer back to the spec and halve the value shown for the respective MCS rate at the SGI or LGI rate.
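A quick sketch to pull out just the in-use rows (substitute the remote station's MAC, as in the path above):
grep ' A ' /sys/kernel/debug/ieee80211/phy0/netdev:wlan0/stations/<mac address of remote station>/rc_stats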
A 40MHz channel width could be put into AREDN. There hasn't been a compelling reason to do so, since we generally see higher data throughput on longer-distance links with 10MHz channel widths -- that seems to be the sweet spot. We've put this 40MHz setting manually into the config files before, thus we can test it out to compare. If it proves beneficial, let's get a github issue to put it on the radar as a UI option.
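For anyone who wants to experiment, a hypothetical sketch of that manual change on an OpenWrt-based node (untested as written here; AREDN may regenerate /etc/config/wireless, so treat it as experimental):
uci set wireless.radio0.htmode='HT40'    # radio0 is an assumption; check /etc/config/wireless for the section name
uci commit wireless
wifi    # reload the wireless configuration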
"long distance" may be a relative term. The settings for ack timeouts at the hardware chip level only go out to ~46km for 20MHz channel width (and 40MHz is 2 paired 20MHz channels, but further spreading the same power across). This would be the hard limit in the technology for 20MHz. I've never seen a 20MHz channel out perform a 10MHz channel (measured iperf data through put) at distances longer than ~8 miles. But then, in a metropolitan area, this may be dealing with a lot more noise. Half the channel width means half the noise the same power is spread across.
Note, we did a comparison of the out-of-box TDMA (AirOS) on a ~40 mile 3GHz link versus AREDN CSMA here in SoCal. This was about a year ago, and what we found, for the same hardware and environment with only firmware differences, was that AREDN CSMA had about 70% of the iperf performance of AirOS TDMA. However, this was back when AirOS was using auto-distance and AREDN was set to a static distance (and we did not further tune this static setting). Recently, we compared the new auto-distance setting in AREDN to a static setting on a ~40 mile 5GHz AREDN link. We were seeing ~50% iperf data throughput improvements. The auto-distance setting floated upwards during heavy load, then settled back down under light load. This suggests that CSMA and TDMA on a p2p link can achieve the same performance -- when both are using auto-distance timing. Further testing is needed.
Joe AE6XE
I would note that airOS has something called a "long-distance PtP (point-to-point)" mode to deal with the distance limitations. So, if you are willing to run a proprietary protocol, there is no hard limit on the distance-bandwidth parameter. Mother Nature may impose a harder limit :-)
Quoting Joe: "I've never seen a 20MHz channel outperform a 10MHz channel (measured iperf data throughput) at distances longer than ~8 miles."
Hi, Joe:
MIMO, AREDN firmware, suburban areas, distance=13.5 miles.
Theorem:
A link at 20 MHz that achieves MCS12 or better is faster than MCS15 (or any MCS) at 10 MHz bandwidth.
-----
You specified "(measured iperf data throughput)".
Here are the 'iperf' tests:
@20 MHz BW:
Time Server Client Result
5/11/21 8:16 AM nc8q-centerville-huberheights nc8q-huberheights-centerville 22.6 Mbits/sec
5/11/21 8:16 AM nc8q-centerville-huberheights nc8q-huberheights-centerville 20.5 Mbits/sec
5/11/21 8:03 AM nc8q-huberheights-centerville nc8q-centerville-huberheights 27.7 Mbits/sec
5/11/21 8:48 AM nc8q-huberheights-centerville nc8q-centerville-huberheights 32.0 Mbits/sec
-----
@10 MHz BW
Time Server Client Result
5/11/21 8:34 AM nc8q-huberheights-centerville nc8q-centerville-huberheights 11.9 Mbits/sec
5/11/21 8:34 AM nc8q-centerville-huberheights nc8q-huberheights-centerville 17.8 Mbits/sec
5/11/21 8:34 AM nc8q-centerville-huberheights nc8q-huberheights-centerville 17.3 Mbits/sec
5/11/21 8:42 AM nc8q-huberheights-centerville nc8q-centerville-huberheights 16.9 Mbits/sec
-----
Here are the rc_stats from each end at 20 MHz BW:
root@NC8Q-HuberHeights-Centerville:~# cat /sys/kernel/debug/ieee80211/phy0/netdev:wlan0/stations/48\:8f\:5a\:25\:2e\:51/rc_stats
best ____________rate__________ ________statistics________ _____last____ ______sum-of________
mode guard # rate [name idx airtime max_tp] [avg(tp) avg(prob) sd(prob)] [retry|suc|att] [#success | #attempts]
HT20 LGI 1 MCS0 0 1477 4.8 4.8 100.0 0.0 3 0 0 3 3
HT20 LGI 1 MCS1 1 738 9.7 9.7 100.0 0.0 0 0 0 1 1
HT20 LGI 1 MCS2 2 492 14.6 14.6 100.0 0.0 0 0 0 1 1
HT20 LGI 1 MCS3 3 369 17.0 17.0 100.0 0.0 5 0 0 37 37
HT20 LGI 1 MCS4 4 246 24.4 24.4 100.0 0.0 5 0 0 1 1
HT20 LGI 1 MCS5 5 185 29.2 29.2 98.5 0.0 5 0 0 92 103
HT20 LGI 1 MCS6 6 164 31.7 31.7 96.7 0.0 5 0 0 968 1022
HT20 LGI 1 MCS7 7 148 34.1 34.1 96.6 0.0 5 0 0 2406 4025
HT20 LGI 2 MCS8 10 738 9.7 0.0 0.0 0.0 0 0 0 0 0
HT20 LGI 2 MCS9 11 369 17.0 17.0 100.0 0.0 0 0 0 1 1
HT20 LGI 2 MCS10 12 246 24.4 24.4 100.0 0.0 0 0 0 1 1
HT20 LGI 2 MCS11 13 185 29.2 29.2 95.1 0.0 5 0 0 181 196
HT20 LGI 2 C MCS12 14 123 36.6 36.6 100.0 0.0 6 0 0 221031 222411
HT20 LGI 2 MCS13 15 92 43.9 29.2 60.0 0.0 6 0 0 407070 432047
HT20 LGI 2 MCS14 16 82 46.3 24.4 47.9 0.0 6 0 0 45786 63797
HT20 LGI 2 MCS15 17 74 48.8 0.0 0.0 0.0 6 0 0 1679 16916
HT20 SGI 1 MCS0 30 1329 4.8 0.0 0.0 0.0 0 0 0 0 0
HT20 SGI 1 MCS1 31 665 9.7 9.7 100.0 0.0 0 0 0 1 1
HT20 SGI 1 MCS2 32 443 14.6 14.6 100.0 0.0 0 0 0 1 1
HT20 SGI 1 MCS3 33 332 19.5 19.5 100.0 0.0 0 0 0 1 1
HT20 SGI 1 MCS4 34 222 26.8 26.8 95.1 0.0 5 0 0 7 10
HT20 SGI 1 MCS5 35 166 31.7 31.7 98.2 0.0 5 0 0 634 676
HT20 SGI 1 MCS6 36 148 34.1 34.1 100.0 0.0 5 0 0 8245 8502
HT20 SGI 1 DP MCS7 37 133 36.6 36.6 91.9 0.0 6 0 0 11827 16082
HT20 SGI 2 MCS8 40 665 9.7 0.0 0.0 0.0 0 0 0 0 0
HT20 SGI 2 MCS9 41 332 19.5 19.5 100.0 0.0 0 0 0 1 1
HT20 SGI 2 MCS10 42 222 26.8 26.8 97.8 0.0 5 0 0 7 8
HT20 SGI 2 MCS11 43 166 31.7 31.7 100.0 0.0 5 0 0 765 795
HT20 SGI 2 B MCS12 44 111 39.0 39.0 100.0 0.0 6 0 0 1086121 1093394
HT20 SGI 2 A MCS13 45 83 46.3 46.3 100.0 0.0 6 1 1 606872 644130
HT20 SGI 2 MCS14 46 74 48.8 26.8 49.6 0.0 6 0 0 58876 78814
HT20 SGI 2 MCS15 47 67 51.2 0.0 0.0 0.0 6 0 0 1951 17328
Total packet count:: ideal 2353363 lookaround 101282
Average # of aggregated frames per A-MPDU: 1.0
root@NC8Q-HuberHeights-Centerville:~#
-----
root@NC8Q-Centerville-HuberHeights:~# cat /sys/kernel/debug/ieee80211/phy0/netdev:wlan0/stations/48\:8f\:5a\:25\:30\:26/rc_stats
best ____________rate__________ ________statistics________ _____last____ ______sum-of________
mode guard # rate [name idx airtime max_tp] [avg(tp) avg(prob) sd(prob)] [retry|suc|att] [#success | #attempts]
HT20 LGI 1 MCS0 0 1477 4.8 0.0 0.0 0.0 1 0 0 0 0
HT20 LGI 1 MCS1 1 738 9.7 0.0 0.0 0.0 0 0 0 0 0
HT20 LGI 1 MCS2 2 492 14.6 14.6 100.0 0.0 0 0 0 1 1
HT20 LGI 1 MCS3 3 369 17.0 17.0 100.0 0.0 0 0 0 1 1
HT20 LGI 1 MCS4 4 246 24.4 24.4 100.0 0.0 0 0 0 1 1
HT20 LGI 1 MCS5 5 185 29.2 29.2 95.1 0.0 5 0 0 4603 4781
HT20 LGI 1 MCS6 6 164 31.7 31.7 96.7 0.0 5 0 0 31598 33151
HT20 LGI 1 C P MCS7 7 148 34.1 34.1 91.4 0.0 5 0 0 10060 17110
HT20 LGI 2 MCS8 10 738 9.7 9.7 100.0 0.0 4 0 0 1 1
HT20 LGI 2 MCS9 11 369 17.0 17.0 100.0 0.0 5 0 0 1 1
HT20 LGI 2 MCS10 12 246 24.4 24.4 100.0 0.0 0 0 0 1 1
HT20 LGI 2 MCS11 13 185 29.2 29.2 100.0 0.0 5 0 0 5524 5694
HT20 LGI 2 B MCS12 14 123 36.6 36.6 100.0 0.0 6 0 0 489296 503999
HT20 LGI 2 A MCS13 15 92 43.9 43.9 100.0 0.0 6 1 1 307701 341927
HT20 LGI 2 MCS14 16 82 46.3 14.6 30.9 0.0 6 0 0 12143 25769
HT20 LGI 2 MCS15 17 74 48.8 0.0 0.0 0.0 6 0 0 414 12440
HT20 SGI 1 MCS0 30 1329 4.8 0.0 0.0 0.0 0 0 0 0 0
HT20 SGI 1 MCS1 31 665 9.7 9.7 100.0 0.0 0 0 0 1 1
HT20 SGI 1 MCS2 32 443 14.6 14.6 100.0 0.0 0 0 0 1 1
HT20 SGI 1 MCS3 33 332 19.5 19.5 100.0 0.0 0 0 0 1 1
HT20 SGI 1 MCS4 34 222 26.8 26.8 100.0 0.0 5 0 0 157 159
HT20 SGI 1 MCS5 35 166 31.7 31.7 100.0 0.0 5 0 0 51080 52596
HT20 SGI 1 MCS6 36 148 34.1 31.7 84.4 0.0 5 0 0 111698 116836
HT20 SGI 1 D MCS7 37 133 36.6 34.1 84.4 0.0 6 0 0 17765 27602
HT20 SGI 2 MCS8 40 665 9.7 9.7 100.0 0.0 0 0 0 1 1
HT20 SGI 2 MCS9 41 332 19.5 19.5 100.0 0.0 5 0 0 6 6
HT20 SGI 2 MCS10 42 222 26.8 26.8 97.2 0.0 5 0 0 41 45
HT20 SGI 2 MCS11 43 166 31.7 31.7 96.7 0.0 5 0 0 46941 48338
HT20 SGI 2 MCS12 44 111 39.0 34.1 76.6 0.0 6 0 0 1006179 1037126
HT20 SGI 2 MCS13 45 83 46.3 34.1 65.9 0.0 6 0 0 421879 470027
HT20 SGI 2 MCS14 46 74 48.8 19.5 36.8 0.0 6 0 0 15585 30377
HT20 SGI 2 MCS15 47 67 51.2 0.0 0.0 0.0 6 0 0 446 12418
Total packet count:: ideal 2419373 lookaround 113821
Average # of aggregated frames per A-MPDU: 1.0
root@NC8Q-Centerville-HuberHeights:~#
Ken,
In my lab I normally get over 100Mbps with Ubiquiti & MikroTik gear set up so that one antenna is vertical and the other is oriented horizontally. When I put both antennas vertical it drops to around 80Mbps.
With these GL boxes (AR300M16) I'm only getting 25Mbps at best when I have the antennas at 90 degrees for MIMO (H-pol/V-pol). I only get 15-18Mbps when I put both antennas vertical. So I think they do MIMO, but why are they so slow? I'm going to look at the board under my soldering microscope and make sure the internal antennas built into the board aren't also transmitting (I can't see how they would've messed that up).
Joe,
Can you think of any reason why these things run slow?
-Damon K9CQB
Put a dummy load on one port and an antenna on the other port of one of the devices. You will see that the unit with the missing antenna reports 1/2 the data rate of the unit with two antennas. That, I think, proves that you were using MIMO.
Ken
If the antenna orientation is already optimized to reduce interference between the polarities, then a couple of thoughts:
1) the low cost is achieved by sacrificing quality of parts and manufacturing -- a poor front-end band-pass filter, etc.
2) lower the power from max -- you could be overdriving the PA, producing poor signal quality
Joe AE6XE
Joe and Ken,
I set up some stuff here at home and I have some conclusions.
1. I believe I was overdriving the front end of these devices. Because they are lower quality than the MikroTik and Ubiquiti, I believe they are more susceptible to noise. So I lowered the power to 3dBm (it won't actually go lower than that, even though it says it does). This increased my throughput from 25Mbps to 39Mbps. The devices are 10ft away on the same table with no obstructions and I'm using iperfSpeed for my throughput measurements.
2. I believe it does MIMO. I terminated the 2nd antenna on each device and they dropped from 39Mbps to 21Mbps. That's decent proof of 2x2 MIMO.
3. I had it on '0' distance to get the auto-distance function. Not being satisfied, I dropped it to '1' km. That gave me nearly 50Mbps. So without auto-distance I was getting much faster speeds. That may be because I'm skipping those computations, but I'm not sure. I realized that for these little mobile devices I will probably always leave them on '0' for auto-distance... that is, until I get to a point where I'm stationary for a while and setting up a link to a node that is a fixed distance away. Then I may set the distance to make the link more efficient.
By the way, even though the devices say they go to 23dBm, I actually noticed zero difference in power between 22 and 23dBm; I think the GUI says 23 but really reverts back to 22dBm. Am I smoking crack on this one?
-Damon K9CQB
Ken and I compared notes offline, and we found that my GL-AR300M16 has max xmit at 23dBm while his is 22dBm. Do an "iw phy phy0 info" to see the capabilities by channel. What does yours show? I speculate that on more recently manufactured devices, GL.iNet lowered the max power. Mine shows:
My units are limited to 22 dBm internally, like yours. Apparently Joe has one that goes to 23.
In any case, I am seeing that the power output is close to 22 dBm peak at each antenna. So the total power is more like 24-25 dBm peak when you ask for 22.
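That also tracks with a back-of-the-envelope sum of the two chains:
22 dBm + 10*log10(2) dB ~ 25 dBm total
(a rough figure; the real total depends on how the two chains' signals combine).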
I was also using a manual distance setting ... and reduced power for bench tests (I think if you really saturate the receiver it probably generates internal distortion and causes signal degradation).
Damon - if you want to see what is going on try running
iw wlan0 station dump
The reported info changes dynamically, so do this while you are running "iperf" or something else that pumps a lot of data.
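A sketch for watching it live (field names per stock iw output; run iperf in another window):
while true; do
  iw wlan0 station dump | grep -E 'Station|tx bitrate|expected throughput'
  sleep 1
done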
With two units across the room from each other and running 10 dBm I see that they are running at MCS 14 or 15: that is MIMO with a combined bit rate of 130 or 144 Mbps and an "expected throughput" of 44 Mbps (reported throughput varies from test to test).
These numbers are reported based on 20 MHz BW but, of course, I was running 10 MHz BW so actual is:
65 Mbps signaling rate and 22 Mbps throughput.
These latter two numbers match the TxBPS rate on the mesh status page and the iperf throughput, respectively, so that seems to all hang together.
I will eventually take these things out into the field and see what the range is.....
= = =
The newer version of iperf reports retries for each 1-second interval. I was seeing 50-150 retries over 10 seconds even though there were no other nodes on frequency. One supposes that the alternative of running a slower MCS is not as good as running a higher MCS with retries.
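For reference, a typical client invocation that shows those per-interval retries in iperf3's Retr column (assumes "iperf3 -s" is running on the far node):
iperf3 -c <far-node-address> -t 10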
Ken
Great information. I am looking forward to getting started and getting our city connected.
Hi Joe,
I have a GL-AR300M router with the additional 128MB NAND flash.
You wrote elsewhere that this model isn't supported by OpenWrt.
While looking at the advanced information (see attached file), I see that it is based on OpenWrt.
Can it be adapted to AREDN?
Thanks
Yoram
Yoram,
I have 3x AR300Ms lying around from another project. It's unfortunate I can't convert them to AREDN... almost.
I will find another use for them. We are actually very lucky that the AR300M16 is the one that is AREDN compatible, because it's much cheaper than the AR300M.
Here's the AR300M16 with external antennas for only $28:
https://www.amazon.com/dp/B07794JRC5/
This is a huge win for us: a device with external antenna ports, 2x2 MIMO, dual LAN ports, a USB port, 128MB memory, 5V power, and a tiny size, for only $28.
-Damon K9CQB
The GL-iNet vendor has custom changes to create images supporting this device. These changes are not yet incorporated back upstream into the OpenWrt codebase that AREDN is based on. Thus, while it could technically be done, it is difficult to justify the effort over working on the AREDN-specific features we also all want. With a little patience, we will inherit support for this device in a future OpenWrt upgrade and not have to spend the extra time to get it working.
Of course anyone with ability and interest could make it happen at anytime, given the open source model.
Joe AE6XE
OK Thanks
I got this device about two years ago.
I guess that if the changes didn't make it into OpenWrt by now, they will not bother doing it at this point.
Yoram 4Z5YR
OpenWrt is in a transition to port devices from the ar71xx code architecture to the ath79 architecture. This modernizes the Linux kernel and moves driver definitions from compiled 'c' code to a config-file definition called a device tree. As new devices are added, they're all going onto this ath79 architecture. AREDN isn't ready to move to this architecture, because most devices are not yet working there. I noticed a GL-iNet developer submitted code for the gl-ar750 device in the ath79 arch. All their devices will get into the upstream OpenWrt code, but the AREDN timeline to move to this new arch is more like a 2020+ timeframe.
Joe AE6XE
I've encountered a problem saving settings and rebooting after flashing the firmware. It worked OK while the call sign was left as NOCALL and a new password was inserted. However, after changing the call sign, the system didn't recover from the reboot. What can be done, since I see above that it really works for others? (And I note Joe's remark on his priorities -- you are doing a great job, and we all appreciate it.)
Ben 4X1IL
Ben, this is not typical; something else might be going on. Once booted with the .elf image, did the sysupgrade firmware image load OK? Then after the reboot, is that when the save-settings behavior occurs? Sometimes it's easy to change settings after loading the .elf, but this only saves to RAM, which is lost after a reboot; the device then boots RouterOS from flash again, back on 192.168.88.x, so it appears unresponsive.
Joe AE6XE
Problem solved. The first try was done with a small screen running XP, but my eyesight is not what it was when I got my call back in 1954. So I switched to a Win10 portable computer with a 15.6" screen, with no luck. What really solved the problem was pulling the RJ-45 plug, in order to force the DHCP setting.
73s
Ben 4X1IL
I just updated to the most recent nightly build (1307-5244948). Upon reinstalling the tunnel client, I can't connect to my servers. I was connected prior to the update.
Is something broken with this build?
Rusty
Yes, 1307 broke tunnel compatibility with prior builds and releases. The fix is submitted, but the build did not complete last night. Look for the subsequent build, hopefully out tomorrow, to resolve this.
Joe AE6XE
Thanks Joe.