We're already doing that, improving with every version. And 5-6GHz is inherently more range limited to start with.
Is there a reason to only use the best way, and not multiple ways working together? There's even a synergy that more intelligent power adjustment and beamforming is most effective when it gets new frequencies free of legacy equipment.
6GHz doesn't go through walls so it's pretty difficult for any low-energy user to disrupt others with it. It's better if we can move towards it for this reason.
Can be good for high density environments such as student dorms, hotels and hospitals. They can put an AP in each room (or in the wired hotel TV) and not have to worry about crosstalk or turning the signal down to get a high quality connection.
Or if they build one AP wiring point on the ceiling per room for new homes / small offices or rely on the other bands for backhaul in a mesh network.
Home WiFi is what it is, but this is a big selling point for things like offices. The fewer WiFi stations that can "hear" each other, the better for all of them.
The way this is attempted with 2.4/5GHz deployments is to put an AP in every room and turn the power way down, but this only helps a little bit.
Apart from the environmental externalities, the problem with induced demand and cars is that cars take up too much physical space on the road for the number of people they tend to carry. Induced demand in any reasonably populated area (like a city) will therefore always produce excessive congestion, and you can basically never build enough lanes to fix that (especially because intersections are often a bigger problem).
Whereas with more space-efficient transit, like trains, trams (streetcars) and buses, you can often increase capacity on existing lines or build more capacity for reasonable cost and space, so induced demand isn't as much a problem.
The whole concept of induced demand is really a misunderstanding of the problem.
Congestion indicates transportation capacity is below demand. But once the medium is congested, you can't measure what the demand is; only how long it's congested.
You can reduce congestion by increasing capacity, or decreasing demand. If you have a pandemic and encourage people to stay at home, you'll find that the transportation networks that are still operating have plenty of capacity. If you add another lane to the 405 in LA, you'll find the duration of congestion has a step change reduction, but as population continues to grow, congestion duration will also continue to grow. If you add mass transit capacity, highway congestion duration will likely be reduced too.
When it's a real situation, the question is usually what intervention allows for the most additional transportation capacity given realistic funds availability. Somewhere like the LA area usually takes an all options approach: there is investment in mass transit as well as highways that service mass transit, individual transportation and commercial shipping. Although intentionally reducing population could work too :p
> Congestion indicates transportation capacity is below demand. But once the medium is congested, you can't measure what the demand is; only how long it's congested.
But congestion is also part of what controls the demand. It's not a "true" demand that you're just not seeing because of congestion. Once the congestion is solved, the demand will increase as that changes the attractiveness of driving.
Since driving then for some people will outcompete bus, cycling, trains etc., they will then shift over to driving even though they had no demand for it earlier. It will then reach a new equilibrium, not far off the old congestion.
Eh, I think that’s a misstatement of market dynamics. Yes, congestion is stating that demand is exceeding capacity. However…
Congestion is a pushback signal that reduces actual demand, by reducing quality/utility of the actual ‘product’.
Congestion is demand being throttled through the mechanism of reduced quality/availability, instead of some other mechanism like increased pricing (tolls) or a reduction in customers (population decreases or exclusionary rules), through the natural usage of the system. Due to induced demand, there is almost always more 'low quality' demand than 'high quality' demand. It's a type of tragedy of the commons.
In any ‘free’ system, it almost always ends up this way (see crazy long wait times in nationalized health systems!) precisely because the other options are excluded. Just like on free ways.
That's not bad though, by the preferences of the people driving the cars.
I've done a couple very informal low sample count surveys, and as far as I can tell the result of pricing in the externalities and building what mass transit is realistic (not much, in certain places) would lead to at least a short-term reduction in satisfaction.
> pricing in externalities .. would lead to ... reduction in satisfaction [for those paying]
I agree with everything you said and would add
> That's not bad though
Of course people who aren't paying something would rather not pay it. In the case of externalizing those costs, they're getting an effective subsidy from society at large for something they disproportionately participate in!
I think the complaint about induced demand for roads is that it induces demand in those who have the means and motive to drive. The land those roads are on (and the environment the cars may pollute) is also demanded for other uses like green space or railway or housing.
It's a rivalrous finite resource and the part of society that doesn't benefit from more roads may be the larger part.
People's preferences are heavily influenced by what is available. Ask the average person in a car-centric American city with no (or barely any) public transit, and of course they prefer to drive. Ask that same question of someone who lives in a city with great public transit (e.g., pick any major city in East Asia, or many cities in Europe) and congested roads, and their answer will be very different.
Cars are a bad metaphor for radio physics. Car congestion is a bad metaphor for RF channel collision for a number of reasons, including TDM and the non-fatal quality of what we call a collision in radio space.
I just used a WiFi analyzer and in my 70-unit apartment building I see 31 APs on 2.4GHz fighting over 3 channels, 40 APs fighting over 5GHz, and exactly ZERO devices using 6GHz, which is VASTLY larger than the other two bands.
The 2.4GHz band is tiny and just pure noise at this point, and the 5GHz band can be pretty congested in urban areas. 6GHz gives WiFi a huge amount of extra bandwidth. So much that you could use only WiFi to network a dense office of PCs pretty effectively, if you used sophisticated smart WiFi access points that effectively allocated the 6GHz channels.
It's only a matter of time until 6GHz networks become widespread enough that there's similar congestion there as currently on 5GHz. I remember when available 5GHz spectrum was oh so much wider and oh so empty as well; now you can have 160MHz channels there :)
Then why not build solutions to congestion if that's the problem? If you had all APs adjust their power based on the other networks in range, then you could dynamically reduce power output to reduce congestion. If your AP knows you're in a congested area, then it could adjust its behavior accordingly - reduce power, select the least used channel, etc.
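To make that concrete, here's a toy sketch of the "pick the least-used channel, back off power" idea. Everything here is invented for illustration — the scan-result format, the -60 dBm threshold, and the two power levels; real APs use far richer logic (802.11k/v neighbor reports, DFS, etc.):

```python
from collections import Counter

def pick_channel(scan: list[tuple[int, int]], channels=(1, 6, 11)) -> int:
    """Pick the channel with the fewest neighboring APs.
    `scan` is a (hypothetical) list of (channel, rssi_dbm) survey results."""
    counts = Counter(ch for ch, _ in scan)
    return min(channels, key=lambda ch: counts[ch])

def pick_tx_power_dbm(scan: list[tuple[int, int]],
                      quiet_dbm: int = 20, busy_dbm: int = 10) -> int:
    """Back off transmit power when strong neighbors (> -60 dBm) are heard."""
    strong = [rssi for _, rssi in scan if rssi > -60]
    return busy_dbm if strong else quiet_dbm

# A congested survey: two APs on ch 1, two on ch 6, one on ch 11.
survey = [(1, -42), (1, -55), (6, -70), (6, -80), (11, -75)]
print(pick_channel(survey), pick_tx_power_dbm(survey))
```

The interesting policy question (raised in the reply below) is whose utility this optimizes: backing off power helps the neighborhood but shrinks your own coverage.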
> If you had all APs adjust their power based on the other networks in range, then you could dynamically reduce power output to reduce congestion.
Reducing power also reduces the range. It would be very annoying if my neighbor putting a new AP next to the shared wall meant my AP reduced its range to only half of the room, instead of just sharing the bandwidth while keeping the same range like it currently does.
6GHz also has a lot of loss in typical building materials, which is relatively good in that there is a lot less inter-building noise. It does require more repeaters, though, for any given network.
Would you prefer hardwiring to "mesh" as well? Obviously hardwire will always have better performance but the convenience and value should also be a consideration.
Not who you are replying to, but more radio transmission and typically less reliability (you’re either using 2x the channels, or multiplexing on the same channel).
In my experience, it can work but is often not nearly as good as you’d expect.
Hardwired at the APs are way more reliable, and faster.
Personally, I was looking at replacing laptop WiFi cards with BE200s and a new WiFi 7 router to get at least 2Gbps for local network file shares sans cables. People also seem to get 4Gbps with MLO enabled, but the 6GHz routers are very expensive right now.
To saturate 2.5Gbps in a single direction with WiFi 6/7 modulations and 2x2 MIMO (most phones, tablets, and laptops have 2 antennas), you need a total channel width of about 200MHz, and that's for devices in the same room as the AP. If you want to saturate it with devices in the next room, you need closer to 240-300MHz: for example, MLO with 160MHz on 6GHz, 80MHz on 5GHz, and 20MHz on 2.4GHz, totaling 260MHz.
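A rough version of that arithmetic, treating the per-stream spectral efficiency (~6.1 bits/s/Hz of usable goodput at short range, less through a wall) as an assumed ballpark figure rather than a spec number:

```python
def required_width_mhz(target_mbps: float, streams: int = 2,
                       eff_bits_per_hz: float = 6.1) -> float:
    """Channel width needed to carry target_mbps of goodput, assuming
    a (ballpark, assumed) usable efficiency per spatial stream."""
    return target_mbps / (streams * eff_bits_per_hz)

# Saturating a 2.5 Gbps wired uplink with a 2x2 client in the same room:
print(f"{required_width_mhz(2500):.0f} MHz")
# Next room: efficiency drops, so more total width is needed:
print(f"{required_width_mhz(2500, eff_bits_per_hz=4.5):.0f} MHz")
```

The first case lands near the ~200MHz figure above; the second near the 240-300MHz range.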
I tested with a U7 Pro Max (MLO enabled on EA firmware) and a QC WiFi 7 card, and it topped out at about 800-900Mbps within the same room. Impressive for WiFi still, but reading the WiFi speed numbers on the box and expecting them to actually be achievable hasn't been realistic for a decade now.
Why not? There isn't really a negative externality to it (other than not being able to use the band for something else .. which it isn't being used for at the moment).
For a visual representation of the channels/frequencies, see perhaps:
* https://www.wi-fi.org/beacon/chuck-lukaszewski/wi-fi-6e-the-...
* https://www.juniper.net/us/en/research-topics/what-is-wi-fi-...
* https://www.ekahau.com/blog/channel-planning-best-practices-...
5925-6425 MHz is UNII 5, and is the most available globally. 6425-7125 MHz is UNII 6, 7, and 8:
* https://en.wikipedia.org/wiki/Unlicensed_National_Informatio...
It's always amazing to me how reluctant everyone is to grant ISM bandwidth. If you watch a spectrum analyzer you'll immediately notice that the ISM bands are packed and licensed space is entirely empty. With devices now being much more dynamic, much more spatially multiplexed, and more directional, I have a hard time accepting arguments that anything from DC to daylight should still be licensed, outside of a few very small niches (GPS and airband, mostly). We should be gradually deprecating equipment that requires absolute band exclusion to operate, because just slicing up spectrum is by far the least efficient way of utilizing a finite resource.
It's also worth pointing out that we're about to lose about half of one of our ISM bands (915MHz: LoRa, Meshtastic, ELRS, various smart-home protocols, etc.) due to the lovely folks at NextNav wanting dedicated bandwidth for a paywalled GPS alternative. It's by far the best ISM band in the US for long-range unlicensed communications, sitting in a sweet spot of path loss, penetration, and bandwidth.
In case you're confused like me, here ISM refers to Industrial, Scientific, Medical[0]. I was trying to figure out why it was relevant to transmit these signals in the interstellar medium.
[0] https://en.wikipedia.org/wiki/ISM_radio_band
Related to your second point: "FCC seek comments on NextNav petition for rulemaking on lower 900MHz ISM band" (https://news.ycombinator.com/item?id=41226802). No, we are not about to lose the band; the FCC is still deliberating, and the rest of the industry is pushing back.
I agree that there need to be more ISM bands, but I don't agree that licensing bands is niche at all.
The bands owned by cellular providers are not entirely empty at all; in fact, they're very well utilised. It is much easier to make efficient use of spectrum when you can plan it properly and when there is only one system making allocation decisions (e.g. TDMA/FDMA). Collision-based systems (like WiFi) have an efficiency cost.
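The textbook illustration of that efficiency cost is slotted ALOHA — a far cruder scheme than WiFi's CSMA/CA, used here only as an illustrative lower bound: uncoordinated transmitters peak at 1/e ≈ 37% channel utilization, whereas a centrally scheduled TDMA/FDMA system can approach 100%:

```python
import math

def slotted_aloha_throughput(offered_load: float) -> float:
    """Expected successful transmissions per slot when stations transmit
    independently with total offered load G (classic slotted ALOHA model)."""
    return offered_load * math.exp(-offered_load)

# Utilization peaks at G = 1 and reaches only 1/e of the channel,
# versus ~1.0 for a perfectly scheduled system:
print(f"peak utilization: {slotted_aloha_throughput(1.0):.3f}")
```

CSMA/CA does much better than this in practice, but the qualitative gap between contention and central scheduling remains.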
I was trying to keep the point simple, but realistically I'm not advocating for complete chaos; there will probably still have to be allocations, i.e. "please don't key up your Baofeng in the middle of a cell channel". Equipment will probably always be band-specific to an extent; the bands are just widening with technological progress.
The issue is that spectrum auctions are probably the least efficient and equitable way of distributing allocations available. There isn't a standard unit conversion between utility in dollars and bandwidth in Hz. There are individuals with enough money that they could buy their own national spectrum allocations just for their personal use alone.
That said, there are ways to maintain efficiency. TDMA can still be implemented in a distributed fashion using GPS or device collaboration, we just didn't do it with wifi because it would have been cost prohibitive 20 years ago.
For the sake of argument, a naive approach would be to generate a UTM grid square based on a fraction of some maximum reasonable wifi footprint, then the system in that grid would get a frequency allocation, and everything within that grid would time sync and TDMA on that frequency. Neighboring cells would get a different frequency and devices 2-3 cells over could start reusing frequencies. Obviously this doesn't work in an apartment high-rise but that's fixable.
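That naive scheme can be sketched in a few lines. Everything here is hypothetical — the 250m cell size, the 3-channel reuse pattern, and the slot parameters are made-up illustrations: a device's grid cell determines its frequency, and a GPS-disciplined clock determines its time slot, with no central coordinator.

```python
# Hypothetical parameters, chosen purely for illustration.
CELL_SIZE_M = 250          # a fraction of a max reasonable WiFi footprint
NUM_CHANNELS = 3           # neighboring cells get different channels
SLOTS_PER_FRAME = 10
SLOT_MS = 10

def cell_of(easting_m: float, northing_m: float) -> tuple[int, int]:
    """Quantize UTM-style coordinates into a grid cell."""
    return int(easting_m // CELL_SIZE_M), int(northing_m // CELL_SIZE_M)

def channel_of(cell: tuple[int, int]) -> int:
    """Color the grid so N/S/E/W neighbors never share a channel, and
    cells ~3 apart reuse frequencies. (Some diagonal neighbors collide;
    covering diagonals too would need a 4th channel.)"""
    x, y = cell
    return (x + 2 * y) % NUM_CHANNELS

def slot_is_active(device_id: int, gps_time_ms: int) -> bool:
    """True when this device's TDMA slot is live, using GPS time as the
    shared clock so every device in the cell stays in sync."""
    frame_pos = (gps_time_ms // SLOT_MS) % SLOTS_PER_FRAME
    return frame_pos == device_id % SLOTS_PER_FRAME
```

A real design would need dynamic slot allocation (fixed slots waste airtime on idle devices) and a fallback where GPS is unavailable, which is exactly the apartment high-rise problem.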
> amazing to me how reluctant everyone is to grant ISM bandwidth
> dedicated bandwidth for a paywalled GPS alternative.
I think you've answered your own question here; especially if I look at the huge prices paid in spectrum auctions in the UK and Europe some years ago. Spectrum is an asset, so there's strong pressure to turn it into property that can be charged for rather than let the public use it.
If only people wouldn't then turn around and tell us how amazing spectrum auctions are, what a novel great application of economics. In practice it has all the hallmarks of a failed market; most bits travel over WiFi, money ends up in slush funds, coverage is hopeless, large parts of spectrum are allocated to governmental incumbents who want to keep running 50s tech.
You don't buy exclusivity for emergency services? What if a local congestion event degrades signal to the point that things stop working? This also demands that all of the volunteer fire and emergency and rural fire and emergency services recapitalise their radio investment. I would worry that the exact thing causing the congestion is a mass "get online and check on aunty Jennie" flash-mob effect, right when emergency comms are most needful.
I am probably putting up some FUD argument here, but I think this should be a contestable line of reasoning. I'd like to hear from some hams, or other spectrum users, before I go with this. (I am neither, cell and WiFi aside.)
I think that reserving bands for meteorology, radio navigation, radiolocation, critical services (maritime mobile, time signals), science (space research), amateur radio, and microwaves is a very good idea. But if you were to redesign the allocations from scratch with modern technology, you could probably clear up a whole lot of spectrum.
On the other hand, how bad is congestion in practice anyways? Sure it's congested, but rarely to the point where it's unusable.
I don't think we need to clear out all of the spectrum from 100kHz to 100GHz. If we doubled unlicensed spectrum around 2.4GHz, 5GHz, and 6GHz, we'd solve 90% of our congestion issues at the cost of barely any spectrum. Overhauling the entire electromagnetic spectrum allocation would cost millions of dollars in replaced devices; just throwing a bit more at the WiFi bands would cheaply solve most issues.
Define ‘unusable’?
If you assume it’s a smooth degradation where it goes from 50MB/s to 100kb/s, yeah, that’s rare. It is often ‘nothing at all’ for a second or two.
And that is unusable for a lot of things. Or at least infuriating.
We just got a whole bunch of new radios for fire brigade in our state. Every radio has a SIM and fails over to the public cell network if the primary (licensed) network is unavailable.
Which ironically is one of the first networks to fail when we have widespread storms etc.
Could the idea be to use satellite cellular as that becomes more commonplace?
The cell network is much more reliable than typical emergency comms, especially when you factor in ‘radio shadows’, etc.
Or at least will be out at different times/ways.
Oh, unless there is a major disaster, but those are rare.
A lot of money goes into building the cell network.
I don't know. Seems like where I live the cell network is about as reliable as power. It's why I switched to Starlink, as at least that stays up/comes back quickly during storms.
> get online and check on aunty jennie
I wonder if it's still an issue: checking on aunt Jennie with text or audio is nowhere near comparable, in terms of data, to checking 2K video news or scrolling TikTok for FOMO. Yes, there were cutoffs during mass events in the 2010s; our 2G/3G infra couldn't keep up with people calling each other, and sure, the network is faster now. However, the next WTC will be another level if we all live-video check on aunty Jennie. It seems way too few people understand (as others said) that bandwidth is not an infinite resource.
And don't get me started on ads bloating most websites, rendering them unusable on a low-speed network. I'm lucky to live in Europe, where the cookie banner is a neat warning sign of a cumbersome website.
We used to check the news with cable TV channels and radio (grandpa grumble) and there was no "network problem".
Was waiting for this response...
It's a long game. Yeah, we can't just say "fuck it, everyone go nuts" tomorrow. There are satellite downlinks that can't be readily changed, equipment capex that needs to depreciate, etc. What we should be doing is adding and expanding ISM space as aggressively as we reasonably can, and refusing to issue new exclusive licenses without an extremely good reason (and probably exclusively for public/civil purposes).
Do we really need to allow Industrial use of RF across the whole spectrum though?
Maybe we need a new definition of public use at “reasonable EIRP” without the industrial heating and whatever other weird uses of RF are out there.
The caveat should be (as it's always been) only radiate as much as you actually need to, and not having/buying sufficient shielding is not an acceptable "need to" argument. Microwaves share spectrum with WiFi but unless something is horribly wrong turning on a microwave doesn't jam every wifi access point within a mile.
I agree though, reasonable EIRP is something we need to talk about. For instance, there should probably be a logarithmic formula for an EIRP alternative that allows more effective power in beamforming systems, up to a sane limit. In other words, don't handicap people trying to be responsible with where they send signals, but that doesn't mean you can build an absolute laser of an antenna array and start screaming at satellites coming over the hill behind your receiver.
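For example (the numbers and the cap formula here are invented to illustrate the idea, not any actual regulation): EIRP is just conducted power plus antenna gain in dB, and a beamforming-friendly rule could let the cap grow logarithmically as the beam narrows, with a hard ceiling.

```python
import math

def eirp_dbm(tx_power_dbm: float, antenna_gain_dbi: float) -> float:
    """Effective isotropic radiated power: conducted power plus antenna gain."""
    return tx_power_dbm + antenna_gain_dbi

def hypothetical_cap_dbm(beamwidth_deg: float,
                         base_cap_dbm: float = 36.0) -> float:
    """Made-up rule: allow extra EIRP as the beam narrows below 360 degrees,
    but only logarithmically and only up to a hard limit, so a pencil beam
    can't become 'an absolute laser'."""
    bonus = 10 * math.log10(360.0 / beamwidth_deg)
    return base_cap_dbm + min(bonus, 12.0)

# A 30 dBm radio on a 6 dBi antenna: 36 dBm EIRP -- right at the (assumed)
# omni cap, but well under the allowance for a 15-degree beam.
print(eirp_dbm(30, 6), hypothetical_cap_dbm(15))
```

The 36 dBm base happens to match the familiar US 2.4GHz point-to-multipoint limit, but the bonus curve and the 12 dB ceiling are purely illustrative.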
This I can get behind.
I have some doubts about the efficacy of spectrum allocated per device class/manufacturer. This in itself is not a congestion management mechanism, although some vendors probably treat it as such. The majority of applications probably benefits from more available spectrum and cheaper devices based on COTS components.
Spectrum dedicated to individual users in specific areas is a different matter. I expect that there will always be use cases for this, for backhaul, backup links etc.
I saw "ELRS" and panicked for a bit but remembered it also has a 2.4ghz option
There's also the 'fuck the FCC' option since you're going to be breaking FAA's line of sight regs with any long to medium range flights anyway..
No decision has been made in the NextNav case, and there’s a lot of significant opposition. What’s with the needless fearmongering?
Fear mongering? It’s not like this topic is going to cause a mass panic or anything.
I develop products for the 915 MHz band and became very anxious that I missed FCC's decision (none has been made yet). I agree with GP.
Going through their “economic advantage of 6GHz WiFi” document. It looks like there is a “fight” with telecom guys trying to license 6GHz for 5G and others fighting for it to be unlicensed.
Interesting to see that countries are tilting towards unlicensing even with strong lobbying of telecoms
FYI:
WiFi 7 modems (some of them AP-capable) are now available for around 15-30€. They support all bands (2.4, 5, and 6GHz).
Have a look at the Mediatek (MT7925E) and Intel (BE200-BE202) cards. There's also the pricier Qualcomm QCNCM865.
The Mediatek and Qualcomm cards also work with AMD mainboards. The Intel cards are not compatible with AMD mainboards, as they use proprietary stuff.
At least the Mediatek works fine with Linux (currently running on Tumbleweed) and AMD.
>Wifi-7 modems (some of them are AP-capable) are now available for around 15-30€
Just to add additional context: there are whole loads of components in an AP or router, from RF filters to the antenna itself, RAM, etc., in case anyone got the wrong estimate of BOM cost from that figure. That's ignoring any R&D / NRE or capex involved, and it all excludes the cost of software.
If you are using the naked modem chip of course.
But by buying one of these m.2 cards this stuff is already done. Connect it to the PC and U.fl antenna and start playing around with them.
Maybe I should have clarified that these mentioned chips are already placed onto M.2 cards you can buy from Ali/Amazon&Co.
Also on Tumbleweed, and a bit on Void and others... Can you confirm that the Mediatek, specifically the MT7925E, is supported in Linux? And if not too much to ask, anything by Atheros?
Edit: Qualcomm = Atheros, sorry.
https://www.dm5tt.de/2024/10/17/Wifi-7-and-AMD/
It needs an M.2<->PCIe adapter card.
Yes. Works.
Thanks. My slots are M.2. The trick is getting those ridiculous NGFF/MHF4 antenna connectors to snap together without breaking.
I’ve been on a project for 2+ years to support automatic frequency selection for this. I’m excited to start seeing so much chatter about it.
Related discussion from 2020:
Wi-Fi Alliance Brings Wi-Fi 6 into 6 GHz
https://news.ycombinator.com/item?id=21965774
I'm turning into a luddite - why do we really need this? Just slightly faster transfers?
Keep in mind that, unlike cabled Ethernet, WiFi is a shared medium. Faster transfers also means your device uses less airtime for the same transfer speed. This is especially relevant in dense urban areas, where it's not unusual to see tens or even hundreds of WiFi networks sharing the same few channels.
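A rough back-of-the-envelope sketch of that airtime argument (the payload size, link rates, and 65% efficiency factor here are illustrative assumptions, not measurements):

```python
# Airtime needed to move the same payload at different link rates.
# On a shared medium, a faster link frees the channel sooner for neighbors.

def airtime_seconds(payload_bytes: float, phy_rate_mbps: float,
                    efficiency: float = 0.65) -> float:
    """Rough airtime for a transfer; 'efficiency' folds in MAC/protocol overhead."""
    return (payload_bytes * 8) / (phy_rate_mbps * 1e6 * efficiency)

# A hypothetical 100 MB transfer:
slow = airtime_seconds(100e6, 150)    # older Wi-Fi 4-class link
fast = airtime_seconds(100e6, 2400)   # Wi-Fi 6/7-class 160 MHz 2x2 link
print(f"{slow:.1f}s vs {fast:.1f}s of occupied airtime")
```

The absolute numbers are hand-wavy, but the ratio is the point: a 16x faster link occupies the shared channel 16x less for the same transfer.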
Is it also good for power for small devices? Can power down for longer between sending.
Eh, seems like the wrong solution if that's the real problem. Building in some signal strength/topology intelligence to weaken the signal would be a better approach.
Congestion, mostly. Really it's a problem at conventions or sports games where every human and probably each pet brings one or two WiFi transceivers. Best way to alleviate that is more bandwidth.
"Best way to alleviate that is more bandwidth."
I disagree. The best way to fix that is smaller segmentation for less overlap/congestion. Eg use more APs, make the coverage area for each smaller, intelligently adjust power on the devices so they aren't blasting farther than they need to.
We're already doing that, improving with every version. And 5-6GHz is inherently more range limited to start with.
Is there a reason to only use the best way, and not multiple ways working together? There's even a synergy that more intelligent power adjustment and beamforming is most effective when it gets new frequencies free of legacy equipment.
if we just build one more frequency band lane...
6GHz doesn't go through walls so it's pretty difficult for any low-energy user to disrupt others with it. It's better if we can move towards it for this reason.
Can be good for high density environments such as student dorms, hotels and hospitals. They can put an AP in each room (or in the wired hotel TV) and not have to worry about crosstalk or turning the signal down to get a high quality connection.
Or if they build one AP wiring point on the ceiling per room for new homes / small offices or rely on the other bands for backhaul in a mesh network.
> 6GHz doesn't go through walls
That will make it useless for a home network. If I need to mount wired access points in every room, I can just use the wired network I already have.
Oh, it goes through American home walls. Those aren't real after all.
But how are you going to use Ethernet for your phone and smart lights?
Home Wifi is what it is, but this is a big selling point for things like offices. The less Wifi stations that can "hear" each other, the better for all of them.
The way this is attempted with 2.4/5GHz deployments is to put an AP in every room and turn the power way down, but this only helps a little bit.
As far as going through walls goes, I don't think there's a big difference between 5GHz and 6GHz.
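The free-space numbers back this up (this only covers free-space loss, not material absorption, which can differ more between the bands):

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) - 147.55."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

# Same 10 m path at 5 GHz vs 6 GHz:
delta = fspl_db(10, 6e9) - fspl_db(10, 5e9)
print(f"{delta:.2f} dB extra free-space loss at 6 GHz")  # ~1.58 dB
```

Note the distance cancels out: moving from 5GHz to 6GHz costs a fixed 20*log10(6/5) ≈ 1.6 dB regardless of range, which is small compared to what a single wall typically adds.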
And run those Ethernet cables directly to your phones and tablets.
Induced demand is good. People just complain about it with cars due to what they claim are excessive environmentally detrimental externalities.
Apart from the environmental externalities, the problem with induced demand and cars is that cars take up too much physical space on the road for the number of people they tend to carry, so induced demand in any reasonably populated area (like a city) will always produce excessive congestion, and you can basically never build enough lanes to fix that (especially because intersections are often a bigger problem).
Whereas with more space-efficient transit, like trains, trams (streetcars) and buses, you can often increase capacity on existing lines or build more capacity for reasonable cost and space, so induced demand isn't as much a problem.
The whole concept of induced demand is really a misunderstanding of the problem.
Congestion indicates transportation capacity is below demand. But once the medium is congested, you can't measure what the demand is; only how long it's congested.
You can reduce congestion by increasing capacity, or decreasing demand. If you have a pandemic and encourage people to stay at home, you'll find that the transportation networks that are still operating have plenty of capacity. If you add another lane to the 405 in LA, you'll find the duration of congestion has a step change reduction, but as population continues to grow, congestion duration will also continue to grow. If you add mass transit capacity, highway congestion duration will likely be reduced too.
When it's a real situation, the question is usually what intervention allows for the most additional transportation capacity given realistic funds availability. Somewhere like the LA area usually takes an all options approach: there is investment in mass transit as well as highways that service mass transit, individual transportation and commercial shipping. Although intentionally reducing population could work too :p
> Congestion indicates transportation capacity is below demand. But once the medium is congested, you can't measure what the demand is; only how long it's congested.
But congestion is also part of what controls the demand. It's not a "true" demand that you're just not seeing because of congestion. Once the congestion is solved, the demand will increase as that changes the attractiveness of driving.
Since driving then for some people will outcompete bus, cycling, trains etc., they will then shift over to driving even though they had no demand for it earlier. It will then reach a new equilibrium, not far off the old congestion.
Eh, I think that’s a misstatement of market dynamics. Yes, congestion is stating that demand is exceeding capacity. However…
Congestion is a pushback signal that reduces actual demand, by reducing quality/utility of the actual ‘product’.
Congestion is demand being throttled through the mechanism of reduction in quality/availability, instead of some other mechanism, like increases in pricing (tolls), or reduction in customers (aka population decreases or exclusionary rules) through the natural usage of the system. Due to induced demand. Because there is almost always more ‘low quality’ demand, than ‘high quality’ demand. It’s a type of tragedy of the commons.
In any ‘free’ system, it almost always ends up this way (see crazy long wait times in nationalized health systems!) precisely because the other options are excluded. Just like on free ways.
That's not bad though, by the preferences of the people driving the cars.
I've done a couple very informal low sample count surveys, and as far as I can tell the result of pricing in the externalities and building what mass transit is realistic (not much, in certain places) would lead to at least a short-term reduction in satisfaction.
To be clear, this is what happens when you pander to car commuters and build out car infrastructure instead of livable cities: https://ggwash.org/view/4315/houstons-cars-vs-rotterdams-bom...
Google some more before/after car infrastructure photos: https://www.google.com/search?q=before+after+car+infrastruct...
So you see it’s NOT just about the environment. It’s about cities being pleasant or dreadful.
> pricing in externalities .. would lead to ... reduction in satisfaction [for those paying]
I agree with everything you said and would add
> That's not bad though
Of course people who aren't paying something would rather not pay it. In the case of externalizing those costs, they're getting an effective subsidy from society at large for something they disproportionately participate in!
I think the complaint about induced demand for roads is that it induces demand in those who have the means and motive to drive. The land those roads are on (and the environment the cars may pollute) is also demanded for other uses like green space or railway or housing.
It's a rivalrous finite resource and the part of society that doesn't benefit from more roads may be the larger part.
People's preferences are heavily influenced by what is available. Ask the average person in a car-centric American city with no (or barely any) public transit, and of course they prefer to drive. Ask that same question of someone who lives in a city with great public transit (e.g., pick any major city in East Asia, or many cities in Europe) and congested roads, and their answer will be very different.
Cars are a bad metaphor for radio physics. Car congestion is a bad metaphor for RF channel collision because of a number of things including TDM and the non fatal quality of what we call a collision in radio space.
Induced demand is not inherently good or bad.
I just used a WiFi analyzer and in my 70 unit apartment building see 31 APs on 2.4GHz fighting over 3 channels, 40 APs fighting over 5GHz, and exactly ZERO devices using 6GHz, which is VASTLY larger than the other two bands.
The 2.4GHz band is tiny and also is just pure noise at this point and the 5Ghz band can be pretty congested in urban areas. 6GHz gives WiFi a huge amount of extra bandwidth. So much that you could use only WiFi to network a dense office of PCs pretty effectively if you used sophisticated smart WiFi access points that effectively allocated the 6GHz channels.
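The kind of census described above is easy to reproduce: tally scan results per band and per channel. The scan data below is made up for illustration; real input would come from your analyzer or `iw dev wlan0 scan` output.

```python
from collections import Counter

# Hypothetical scan results: (SSID, band, channel) tuples as a WiFi
# analyzer might report them in a dense apartment building.
scan = [
    ("apt-101", "2.4GHz", 1), ("apt-102", "2.4GHz", 6), ("apt-103", "2.4GHz", 6),
    ("apt-104", "2.4GHz", 11), ("apt-201", "5GHz", 36), ("apt-202", "5GHz", 36),
    ("apt-203", "5GHz", 149),
    # ...and, typically, no 6GHz entries at all yet
]

per_band = Counter(band for _, band, _ in scan)          # crowding per band
per_channel = Counter((band, ch) for _, band, ch in scan)  # busiest channels
print(per_band)
print(per_channel.most_common(3))
```

Sorting `per_channel` is also a reasonable first cut at picking the least-contended channel for your own AP.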
It's only a matter of time before 6GHz networks become widespread enough that there's similar congestion there to what we currently have on 5GHz. I remember when the available 5GHz spectrum was oh so much wider and oh so empty as well; now you can have 160MHz channels there :)
As others have said: the higher the frequency, the less it penetrates walls, so congestion will be less of a problem.
Then why not build solutions to congestion if that's the problem? If you had all APs adjust their power based on the other networks in range, then you could dynamically reduce power output to reduce congestion. If your AP knows you're in a congested area, then it could adjust its behavior accordingly - reduce power, select the least used channel, etc.
> If you had all APs adjust their power based on the other networks in range, then you could dynamically reduce power output to reduce congestion.
Reducing power also reduces the range. It would be very annoying if my neighbor putting a new AP next to the shared wall meant my AP reduced its range to only half of the room, instead of just sharing the bandwidth while keeping the same range like it currently does.
That could also be due to the high attenuation at 6GHz. Unless your neighbor has a device transmitting on it, it's way less likely you'd see it.
I mean, that sounds like it'd be even better for high-density environments like apartments.
It is - but much worse for doing a census of networks within x feet of where you are standing.
The census is the number of networks close enough to potentially interfere with my 6GHz AP.
there are some cool use cases in the enterprise space: https://youtu.be/7p_AHPIVo_0?si=cgPQWPSwpN-yML-5
main benefit for home use is that 6 ghz is uncongested, which is very very nice if living in a dense neighborhood.
and of course lightning fast wireless file transfers are possible on the LAN now, which could be beneficial to certain people.
6GHz also has a lot of loss in typical building materials, which is relatively good in that there is a lot less inter-building noise. It does require more repeaters for any given network, though.
To anyone reading: please don't use repeaters. If you can, hard-wire all your wifi APs.
What's the reason not to?
Would you prefer hardwiring to "mesh" as well? Obviously hardwire will always have better performance but the convenience and value should also be a consideration.
Not who you are replying to, but more radio transmission and typically less reliability (you’re either using 2x the channels, or multiplexing on the same channel).
In my experience, it can work but is often not nearly as good as you’d expect.
Hardwired at the APs are way more reliable, and faster.
I think on net it works out - surely the average number of access points per network is less than 1.1
Is there a downside you have in mind?
personally I was looking at replacing laptop wifi cards with BE200s and a new wifi 7 router to get at least 2Gbps for local network file shares sans cables. People also seem to get 4Gbps with MLO enabled. but the 6GHz routers are very expensive right now
The Unifi U7 Pro is $189 with 2x2 WiFi 7 6Ghz. Seemed reasonably priced to me.
Beware that their (Unifi U7) MLO implementation is still in Alpha and Early Access. 2.5GE at the Ubiquiti Wifi-7 APs is a bad joke.
TP Omada at least has 10GE support.
The UniFi E7 also has a 10GE port.
Seeing as the majority of Wi-Fi devices will get absolutely nowhere near the theoretical maximum though, even 2.5GE is unlikely to be saturated.
thanks, could definitely consider doing an access point instead of full router replacement
seems like this one doesn't currently have MLO but might come in a future firmware update. Though then limited by single 2.5Gbps ethernet port
To saturate 2.5Gbps in a single direction with WiFi 6/7 modulations and 2x2 MIMO (most phones, tablets, and laptops have 2 antennas), you need a total channel width of about 200MHz, and that's for devices in the same room as the AP. If you want to saturate it with devices in the next room, you need closer to 240-300MHz: for example, MLO with 160MHz on 6GHz, 80MHz on 5GHz, and 20MHz on 2.4GHz, totaling 260MHz.
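A quick sanity check of that claim, assuming 802.11ax top-end modulation (MCS 11: 1024-QAM, rate 5/6) and the per-width data-subcarrier counts from the 11ax spec:

```python
# Approximate 802.11ax PHY rate:
#   subcarriers * bits/symbol * coding rate * streams / symbol duration
# Symbol duration assumes 12.8 us + 0.8 us guard interval.

DATA_SUBCARRIERS = {20e6: 234, 40e6: 468, 80e6: 980, 160e6: 1960}

def phy_rate_mbps(width_hz: float, streams: int = 2,
                  bits_per_symbol: int = 10, coding: float = 5 / 6,
                  symbol_s: float = 13.6e-6) -> float:
    sc = DATA_SUBCARRIERS[width_hz]
    return sc * bits_per_symbol * coding * streams / symbol_s / 1e6

print(round(phy_rate_mbps(160e6)))  # ~2402 Mbps PHY for 2x2 at 160 MHz
```

At perhaps 60-70% real-world efficiency, that 2402 Mbps PHY rate at 160MHz yields roughly 1.5-1.7 Gbps of goodput, so needing ~200MHz+ of total width to carry 2.5Gbps is consistent.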
You're going to be hard pressed to saturate that "single" 2.5G Ethernet port.
Wifi 7 + MLO = 4Gbps >> 2.5Gbps
Those numbers aren't close to being realistic.
I tested out with a U7 Pro Max (MLO enabled on EA firmware) and a QC WiFi7 card, and it topped out at about 800-900Mbps within the same room. Impressive for WiFi still, but reading the WiFi speed numbers on the box and expecting them to actually be achievable hasn't been realistic for a decade now.
i believe the “Pro Max” variant of this access point has MLO in the early access branch. I have not tested it out myself yet.
hopefully they roll it out sometime this year!
Tangential: BE200s remain Intel only.
There's a Mediatek and Qualcomm card for Wifi-7 available.
I'm running the Mediatek MT7925E on an AMD system with the OpenSUSE Tumbleweed distribution without problems.
Why not? There isn't really a negative externality to it (other than not being able to use the band for something else .. which it isn't being used for at the moment).