Posted on behalf of Doug Reed, N0NAS.
I can understand the advantage of using a Mesh network on a random
omni network for local coverage. But I've never understood what
advantage it gives us over long range point to point links. Has anyone
seen mention of this issue or can explain why it is not better to
stick with the original TDMA protocol for the long links? I'm not
looking to argue, I'm looking for an authoritative technical answer,
something better than "because we want Mesh."
The standard WiFi protocol is intended for a busy channel with hidden
stations all competing to talk to the AP. There are collisions and the
need to retransmit damaged data. Mesh adds the option to hop data over
a changing path to get to a destination. If one hop fails, another hop
path is automatically negotiated. And there is a slight privacy
advantage in that a mesh network is not using the standard
infrastructure protocol and a non-technical user might never know how
to access the network. However, obscurity is not a very good
replacement for WPA2-AES.....
But a long range point-to-point link will seldom have more than two
end points. You might have a third or fourth station on the link if
you are using some form of path diversity for reliability but I think
that would be unusual for a ham network. Since we're not going to have
random added stations on the link, what advantage is there to Mesh
software over that link? Why not stay with the TDMA software? As an
added advantage, if I keep my link on standard WiFi channels I can
keep WPA2-AES turned on for link security because it is Part 15, not
Part 97.
If I understand right, the TDMA protocol gives every station on the link a
definite time slot to transmit and there is no random activity to
cause collisions. It is kind of like a round-robin QSO on any
repeater. Each new station is added to the list and talks when their
turn comes around. If you only have two stations, the timing gets very
simple. Another advantage is that the timing of VOIP packets sent over the network is much improved with TDMA, since latency is reduced. And I'm pretty sure the TDMA protocol will not show up on the
usual AP scan for activity, or at least it will not be a default
protocol for most users. More security by obscurity, but I still prefer
WPA2-AES.....
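The round-robin idea described above can be sketched in a few lines. This is a toy illustration only (function and station names are mine, and this is not how any vendor's TDMA actually works internally):

```python
# Toy round-robin TDMA scheduler: each station gets a fixed slot in turn,
# so there is no contention and no collisions -- purely illustrative.

def tdma_schedule(stations, num_frames):
    """Return (frame, slot, station) assignments in strict rotation."""
    order = []
    for frame in range(num_frames):
        for slot, station in enumerate(stations):
            order.append((frame, slot, station))
    return order

# With only two stations the rotation is trivial: A, B, A, B, ...
sched = tdma_schedule(["A", "B"], num_frames=2)
print([s for (_, _, s) in sched])  # -> ['A', 'B', 'A', 'B']
```

Adding a third station just lengthens the rotation, exactly like adding a station to a round-robin net.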
I think I'll stop this email right here. The above covers the basics of my question and why I think that links should be TDMA. Does anyone have
any better technical info to explain why Mesh would be better over a
long link?
73, Doug Reed, N0NAS.
I think you nailed it Doug. I don't think anyone is seriously disputing that. The only trick that remains is getting OLSR across it. And from what we've heard in adjacent threads (one being http://www.aredn.org/content/are-m3-nanostation-and-m3-nanobridge-same-radio), bridging the AirOS nodes at layer-2 or tunneling at layer-3 are the choices.
Andre
There are many link protocols that could be used:
1) 802.11 adhoc mode <- what we are using under AREDN 'mesh network'
2) 802.11 infrastructure mode <- typical AP (traffic cop) with clients
3) airOS TDMA <- what comes out-of-box on the Ubiquiti equipment
4) dtdlink (a cat5 cable)
5) ...
It is known that TDMA for a P2P link would achieve higher effective throughput numbers than 802.11 adhoc or infrastructure modes if everything else were equal. There are also forum threads suggesting that 802.11 infrastructure mode for a 'cell site' would be more efficient than the 802.11 adhoc mode used in AREDN today. These are modes to consider in the future for AREDN, to specialize the protocols depending on these topology issues.
The AREDN challenge is how to adopt more efficient protocols while at the same time trying to KISS (keep it simple...). One could build out an RF network with all these protocols to have the most efficient network around, but then there'd only be a very small number of these networks--because few people have the interest and deep IT skill set to make it happen and keep it working.
What AREDN is doing is packaging all the complexity so that many groups can create RF ham radio networks and focus on being experts in providing emcomm services. You can take, for example, a VOIP phone paired with a mesh node, power these devices up anywhere on anyone's AREDN network, and the phone works--just call its known IP address wherever it is.
Maybe one day we'll have a setup option to specify which of several topologies an AREDN device will be deployed in, and automatically configure the most efficient protocol. AREDN firmware could keep the routing, the security, the hostname resolution, and many other things all in sync and working. But if we did this, one couldn't just power up a node and show up at an incident expecting to deploy a network that will work. There would have to be a design and configuration process first. There are always tradeoffs to consider.
Joe AE6XE
Regards,
Andrew KC0RKH
I am in agreement with Doug. I found a TDMA implementation for OpenWrt or DD-WRT a while back; I will have to look for it again.
I am new to the AREDN world, but I have been in the ISP business for 16 years and a partner in a wireless ISP for 4, where my acting duty is Network Engineer. If I were to lay out a network, I would select TDMA for the point-to-point links and mesh nodes for broadcasting from the tower. I would likely give each city its own SSID for the mesh.
K5MOB
Question is, what's the best bet for backhaul? The 3 gig in AREDN mode, or set up using the native OS in transparent bridge mode? We then have the question of how we should set up the DHCP servers/clients on both ends if in bridge mode. Looking for thoughts and feedback.
Midlothian has 5 devices running now. Waxahachie will have at least the same 5 devices. Then add in the two 3 gig connections.
Longer term, we plan on adding another 3 gig link to Ennis (about 17 miles), then another 3 gig link down to Italy (about 17 miles). Each of those locations will have 5 gig and 2 gig fill-in just like the original locations.
Thoughts? Ideas? I can set it up one way, run it a while, then go redo it another way if this is an unknown.
Ray.
ECARC
Be careful using the standard AirOS on a Part 97 AREDN network without first providing a means of adhering to the 10-minute ID requirement. The AREDN software does that for you, but AirOS does not. Actually, the AirOS AP sends beacons that include the SSID (if you use this for your call sign). However, the "client" AirOS station will never send a beacon, so it will never identify. You would need to write a script of some sort to do this and remain compliant.
I'll leave the collocation issues for others with more expertise in this area to comment.
Andre, K6AH
As a community, I don't think we have any data to know how beneficial the Ubiquiti AirMAX (TDMA) solution is compared to the standard CSMA used in AREDN. That said, we do expect TDMA to be more efficient than CSMA for P2P links. Does anyone have any references?
Consider:
1) Has anyone found the Ubiquiti specification of their TDMA implementation? There isn't a known standard to implement. I believe this is proprietary and unpublished. I just did a patent search and didn't get any hits. Is the reported efficiency more than "it's better", or are there some test numbers out there?
2) TDMA and CSMA are not mutually exclusive. It is possible that TDMA concepts are used in combination with CSMA to contend with other signals the P2P link has to deal with (the chipset's CSMA is built into hardware and is still present, but is highly controllable from the firmware).
3) In theory, TDMA techniques should provide more benefit as the distance increases. I published, in another forum post, results found in an IEEE paper on the measured throughput decrease as distance increases using CSMA in 802.11. So does an 8 mile link only yield a 2% performance increase using TDMA over CSMA, while a 50 mile link yields a 20% increase?
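Some back-of-envelope arithmetic behind the distance effect in (3): CSMA/CA pauses for an ACK after every frame, so dead air per exchange grows with the propagation delay of the path. The figures below are speed-of-light only, not measurements:

```python
# Round-trip propagation delay vs. link distance -- the raw physics
# underlying why CSMA timing overhead grows on long paths.

C = 299_792_458        # speed of light, m/s
MILE = 1609.344        # meters per mile

def round_trip_us(miles):
    """Round-trip propagation delay in microseconds for a link of `miles`."""
    return 2 * miles * MILE / C * 1e6

for d in (8, 50):
    print(f"{d:>2} mi link: {round_trip_us(d):.1f} us round trip")
```

An 8 mile link is roughly 86 microseconds round trip versus roughly 537 at 50 miles; compared against 802.11 inter-frame spacings tuned for short indoor links (on the order of tens of microseconds), the proportional overhead grows with distance, which is consistent with TDMA helping more on longer paths.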
Long story short, some caution may be in order not to drink Ubiquiti's Kool-Aid too quickly. I've seen several posts from folks with WISP experience. Does anyone have a live link using AirOS AirMAX technology who can post the performance (like an iperf measurement, so we can compare apples to apples)? I don't want to talk myself or anyone else out of deploying TDMA, rather to get some data so we better understand.
Joe AE6XE
I like your setup! The 4 x 90-degree 5 GHz sectors could each be on their own channel to maximize throughput. Is this what you are planning?
The omni could be used for mobile operations throughout the whole area and maintain a good quality link if line of sight is kept the whole time, with the 5 GHz devices handling all the fixed-location QTH, EOC, etc. links. I'd call this the Cadillac solution.
If everyone in view of the tower site has something like a nanobridge M5 device, I'd be wondering when you guys would start holding net check-ins with ~10-way full duplex HD video conf calls while everyone on weather watch broadcast the visual conditions live as the storm front passed through.
Joe AE6XE
Joe, I thought that was the answer, lol. This whole project is still in the learning stage.
Andre, yes, I was going to use our call sign as the SSID. With it being 3 gig, and no other allocation in the US on 3 gig, I was not too worried about the FCC timing our IDs. (I did not say that, lol.) Plus it was not going to be on in the AirOS mode long. Just testing...
Joe, explain "own channel" for better throughput?
We do have camera out on our mesh we use for storm watch. We will have 2 new ones on this tower. Trying to get folks in the "field" to put up a 5 gig nanostation with a camera. Some do, others do not.
Ray.
WR5AY
With 4 x Rocket + 90 deg sector panel devices on the same tower, if all are configured and sharing the same 5 GHz channel, then the total capacity to relay data through the site is limited (how much data can we pass in, e.g., 10 MHz of bandwidth?). If each Rocket/sector and quadrant of coverage in the area were configured on a unique channel, then the total capacity to relay data approaches 4 times the capacity (how much data can we pass in 40 MHz of bandwidth with 4 x 10 MHz channels?).
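The capacity argument above is simple multiplication; a sketch, using a purely assumed per-channel figure (30 Mbps is illustrative, not a measured AREDN number):

```python
# Back-of-envelope tower-site capacity: shared channel vs. one channel
# per sector.  PER_CHANNEL_MBPS is an assumption for illustration only.

PER_CHANNEL_MBPS = 30

def site_capacity(sectors, shared_channel):
    """Total relay capacity of a tower site, in Mbps."""
    if shared_channel:
        # All sectors contend for the same airtime on one channel.
        return PER_CHANNEL_MBPS
    # Each sector gets its own clean channel; capacity adds up.
    return sectors * PER_CHANNEL_MBPS

print(site_capacity(4, shared_channel=True))    # -> 30
print(site_capacity(4, shared_channel=False))   # -> 120
```

Whatever the true per-channel throughput is, the ratio is the point: four unique channels approach four times the shared-channel capacity, at the cost of four times the spectrum.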
Certainly, using more bandwidth means it has to be available and coordinated with the local band plan, and compromises may need to be made.
In my area there are line-of-sight towers 40+ miles apart that contend for the bandwidth--weak signals are still heard. Throughput capacity is also improved by coordinating and alternating the 5 GHz channels in use between towers so that they have unique frequency space and coverage areas.
I call this the Cadillac solution, because we don't always have the luxury to consume all this bandwidth.
Joe AE6XE
Ray, I don't think you'd find anyone on the team knowingly out of compliance with Part 97. You not only plan to do that, you freely announce it on a public forum. My advice to you is to get creative... Set up a text file on the Station end consisting of "<call sign> for ID". Copy that file back to the Access Point node every few minutes and you're fully compliant.
It's just too easy to do it right. The last thing we all want is the FCC watching us all the time for having a callous disregard for the regulations.
Andre, K6AH
When you say copy that file to the AP, are you talking about an attached computer doing it? My attempts to add something to the OEM program load were met with complaints about a "custom script", which seems to be a no-no in recent firmware loads.
Well, I stand by my point... if you are testing or operating under Part 97, then follow the regulations of your license grant... or test under Part 15, where you may be able to avoid the requirement.
Andre
One thing suggested elsewhere is to make the MAC address translate to your call sign. Since the first digit of the MAC address should not be touched, a 4 or 5 character call sign will work, but not a 6 character one.
For example:
24 4b 45 32 4e 24
is
$KE2N$
so you could do this
but it is not permanent. I tried an approach that Google found for me where you put it into a "prestart" file - and that is where I ran into the error message.
I believe that firmware revisions after that worm problem (with the OEM firmware) no longer permit any sort of user changes to be put into the system. But perhaps someone here knows differently.
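For what it's worth, the hex mapping in the KE2N example above is just ASCII. A small sketch (function names and the '$' padding convention are taken from the example, not from any standard):

```python
# Encode a short call sign into the 6 bytes of a MAC address and back,
# per the KE2N example: pad with '$' to fill all six octets.

def callsign_to_mac(call, pad="$"):
    """Center-pad a call sign to 6 chars and render as MAC-style hex."""
    s = call.upper().center(6, pad)[:6]
    return " ".join(f"{ord(c):02x}" for c in s)

def mac_to_callsign(mac):
    """Decode the hex bytes back to their ASCII characters."""
    return "".join(chr(int(b, 16)) for b in mac.split())

mac = callsign_to_mac("KE2N")
print(mac)                    # -> 24 4b 45 32 4e 24
print(mac_to_callsign(mac))   # -> $KE2N$
```

Note this sketch ignores the caveat about the first octet; a real deployment would need to keep the leading byte a valid unicast value, which is part of why longer call signs don't fit.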
echo "ID: $(uname -n)" | socat - udp4:172.255.255.255:4919,broadcast,so-bindtodevice=wlan0
(or similar)
Stick it in a CRON job and go.
(That's what AREDN is doing on the wlan0 interface)
We might do the 4 different channels on the 5 gig sectors in the long term. That is something we can change on the fly. It would just need to be coordinated with folks running the 5 gig in the field as to which direction they were from the tower. Not hard, just one more step.
But installing it is the first step. Time to climb a tower. Wheee.
We also found the 2.4 gig 10 dBi dual polarity omni setup on the Waxy tower sees the RocketM2 sector in Midlo. The sector is pointed South East, but we did not think they would connect. But there you go.
We have a few other cleanup things to do on this tower: the 2M antenna showed as shorted after install, the 900 MHz antenna needed to be connected, and our PTZ camera needs to be mounted and connected.
Then we will work on Ennis and Italy, trying to get better prepared for the springtime.
The Midlo tower has only our 2.4 and 5 gig stuff on it that could be anywhere close in frequency. Our 2 meter and 440 are a bit out there as far as interference or the noise floor are concerned.
W5RAY and AD5DJ did a great job on this install, and I along with others supported as a ground crew.
Keith
W5AGM
It looks like you may have the luxury of 20 MHz channel widths; thus non-overlapping, non-Part-15-competing channel slots would be, e.g., 170, 174, 178, 182, based on local band plan coordination. If you find the TxMbps does not go above ~65 Mbps on any given 20 MHz channel, then going to a 10 MHz channel adds 3 dB to SNR and might be solid and max out at 65 Mbps doing the same in half the bandwidth consumption. It would be good data to know any effects you find from adjacent channels, the distance the 4 nodes are apart on the tower, competing channels across towers, and RF shielding conditions with your usage over time.
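The 3 dB figure above follows from thermal noise scaling with bandwidth (N = kTB): halving the channel width halves the noise power. A quick check, assuming the usual -174 dBm/Hz floor at 290 K:

```python
import math

# Thermal noise floor scales with bandwidth, so halving the channel
# width buys ~3 dB of SNR (assuming signal power is unchanged).

def noise_floor_dbm(bw_hz):
    """Thermal noise floor in dBm at 290 K for the given bandwidth."""
    return -174 + 10 * math.log10(bw_hz)

snr_gain = noise_floor_dbm(20e6) - noise_floor_dbm(10e6)
print(f"{snr_gain:.2f} dB")   # -> 3.01 dB
```

The tradeoff is that the narrower channel also halves the available symbol rate, so peak throughput drops along with the noise.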
If the tower sites are ~10 miles apart, I suspect the challenge will be how to keep the sector coverage areas isolated so they do not compete with each other. You may need to err on the side of pointing the sector antennas down-tilt toward the ground to contain the coverage areas.
Joe AE6XE