Extreme is Bringing Purple Rain from the Cloud

During Networking Field Day 21, Aerohive, I mean Extreme, presented on their new “Cloud Driven End to End Enterprise” using ExtremeCloud IQ, formerly HiveManager. After Extreme's acquisition of Aerohive there had been lots of speculation in the wireless community about what was going to happen with the product. The most obvious conjecture was that Extreme made the purchase for the cloud technology Aerohive already had, but how would they fold it into the mix with their other offerings?

Abby Strong (@wifi_princess on Twitter) started us off with a quick introduction to The New Extreme and the vision of the company. As Abby started us down the path we got some quick stats on technology users around the world, including 5.1 billion mobile users and USD $2 trillion being spent on digital transformation, which was explained further. Digital transformation is one of the hot marketing buzzwords in the industry at the moment, but what is it exactly? According to Abby, “Digital Transformation is the idea of technology and policy coming together to create a new experience.” This is what Extreme has been focusing on, but how? Extreme is doing this via their Autonomous Network, using automation, insights, infrastructure and an ecosystem, all wrapped in machine learning, AI and security.


The concept behind this is to use the insights and information Extreme has gathered, look at issues that arise in the network, and recommend a fix, whether that is a possible driver issue, a code upgrade to resolve a network problem, and so on. This is a really cool take on automation and insights, which is where most companies in the industry are trying to get. From what was shown at NFD20 in February and then again at NFD21, I think they are almost there with their expanded portfolio of solutions in applications, switching, routing and wireless, plus an open ecosystem and open source. Check out more on those solutions and more about Extreme at https://www.extremenetworks.com/products/.

Next, Extreme brought us into their 3rd generation cloud solution, ExtremeCloud IQ, and showed their roadmap toward the 4th generation cloud.

The ExtremeCloud IQ Architecture was presented by Shyam Pullela and Gregor Vučajnk (@GregorVucajnk on Twitter) with a demo of the system.

The architecture is still the previous Aerohive design; however, without ever having really dug into the product before, I was impressed with how they have done the back-end cloud. Currently Extreme is using AWS to host their infrastructure, but we were assured it is not dependent on AWS and could be run on any cloud provider. The setup is interesting, as they have multiple regional data centers connecting back to a global data center. This provides resiliency built into the system, the ability to run in any country in which a public cloud can run, and the ability to collect analytics and ML/AI data globally and not just from regional areas. With this architecture, ExtremeCloud IQ can also be run in different formats, public cloud, private cloud and on-prem, to provide customers with flexibility. From a basic cloud architecture standpoint, there is nothing crazy or exotic in what Extreme is doing with the setup. The key is the scalability that has been designed into the system. Using a simple architecture makes it easy for Extreme to just add compute power on the back end to scale it for large organizations.



With these regional data centers in use, ExtremeCloud IQ is processing data to the tune of 3.389 PETABYTES per day, across an astounding number of devices and clients, to feed the ML/AI decision-making the infrastructure is handling. These stats were mind-blowing to me and really show the power of what Extreme has been building, especially around the Aerohive acquisition.



All of this data gets fed into the cloud dashboard, as we see with the majority of other vendors. The client analytics are very reminiscent of the dashboards we see from Cisco, Aruba, Mist, etc.; there is nothing too different in this regard, with the exception of only getting 30 days of data, with no longer retention options available at this point in time. This is not a major hit against the technology, only that there is no way to correlate data over longer than a one-month period.

One of the differences I see in the system is the lower number of false-positive issues flagged by the system. The ML built into ExtremeCloud IQ is able to recognize anomalies without presenting each one as a possible bad user session. This is something that can cause headaches, especially in a wireless system with users entering and leaving areas while applications are running. I will get deeper into these capabilities in an upcoming post.

The team on camera also did not back down from some interesting and hard questions surrounding product roadmaps, where things stand today, and announcements that were made within 24 hours of the presentation being delivered.

All in all, the solutions and products I am seeing from Extreme are very positive. They seem to have begun the integration of Aerohive nicely, and I am excited to see where they go with the big purple cloud.

 

WiFi6 Ratification: Not So Fast My Friend

There has been a lot of publicity lately about WiFi6, and even more visibility when the WiFi6 certification was announced on September 16. So now we officially have WiFi6 and we can move on, right? NOT SO FAST.

Over the past few weeks I seem to be having the same conversation around this announcement, both in person and in Slack rooms and the like. There is a perception that once this announcement was made it was a done deal and we now have 802.11ax as a ratified amendment. This is most certainly NOT the case. The announcement made in September was about the WiFi Alliance certification, not ratification. Well, those are the same thing, I can hear some of you saying. They are not, and this is where the marketing and big-money companies come into play.

The WiFi Alliance is a group of companies that pay for the privilege, ranging from USD $5,150/year up to USD $20,000/year to be a contributor, according to the WiFi Alliance membership page (https://www.wi-fi.org/membership). According to the Who We Are page:

[Screenshot from the WiFi Alliance “Who We Are” page]

 

Basically, the WiFi Alliance is a group of companies, including Apple, Cisco, Intel, Qualcomm and others, that pay to work together on collaboration within the industry, test equipment in labs to verify devices function ‘properly’, advocate for spectrum usage, and so on. In other words, a WiFi marketing organization for how devices connect and function. But this makes it a standard, right?

Just as in wired networking and many other industries, the IEEE is the standards body that develops, writes and ultimately ratifies the standards for wireless networking in its working groups. The 802.11 working group within the IEEE is responsible for publishing the standard, not the WiFi Alliance. This is where the confusion comes in for most people.

The working group puts together the draft of the new technology, then publishes this draft. For 802.11ax this draft was not fully completed and approved by the working group until February 25, 2019, according to the IEEE website (http://www.ieee802.org/11/Reports/802.11_Timelines.htm). And per the working group timeline, we still will not have ratification until at least September of 2020, with final approvals not coming until November of 2020.

[Screenshot of the IEEE 802.11 working group timeline]

So as we hear in the media and online that WiFi6 is here and certification is complete, let’s not lose sight of what that actually means. Is WiFi6 here? Yes, it is. Devices are beginning to be released at a quicker pace, especially now that certification is complete. Wireless vendors have been pushing these new APs for a while now and an install base is beginning to form, but nothing too pervasive at this time. Within the wireless community the sentiment is that there are not going to be any large changes, if any at all, before ratification takes place. However, we just need to be careful about going around spreading the word that the WiFi6 standard is published and ratified. There is still another year of work before that is reality.

What is the Perfect Wireless Design?

Perfection is something we always hear a lot about but know is almost impossible to achieve. The perfect game in baseball, an undefeated season, completing Super Mario Bros. with a single life. It is hard to get there, but a few have over the years. But what makes the perfect wireless design, and how do you go about achieving it?

Wireless designs and deployments are as varied as the engineers that do them. Those of us who have been doing this for 20 years or more are definitely set in our ways and have our little tricks and trade secrets for how we approach configurations and the like. We all have our ways we stick with: RRM configs for Cisco, antenna combinations for stadiums, making pretty designs in Ekahau. All of this adds to our diversity as individuals. This has never been more apparent than sitting in a room with more than a dozen of the brightest at Ekahau Masters while having 30-minute debates over simple things. But that is what makes our industry and community so special. We can have people from 3 different manufacturers, people from competing service organizations and just strong personalities in general all come together, disagree vehemently with one another, and then have a drink afterwards and laugh until we cry. If all of this is the case and this group of people cannot even agree, how can we actually put a box around what a perfect design is?

I think our friend Sam Clements puts it best with the most well-known quote in the industry: “It Depends”. A perfect design depends on so much. Yes, the RF and physics are important, but what about the other issues we are trying to solve for? Did we capture the customer’s requirements and actually listen to what their problem is and what their version of success looks like? Did we make the least-capable device work properly?

If you keep up with the community I am sure you have heard Keith Parsons tell you at some point or another that if you meet the customer requirements then it is a success. You do not have to deploy the latest and greatest of everything all the time to make this true. Just because a customer comes to you and says they need to have an ax network, do they really? Our job is to help them understand what is out there and how wireless actually works, then listen to what their problems are and advise them on how to deploy a system to address those problems.

I know this sounds like blasphemy, but think how many times you have seen something on BadFi or in life in general and said you could have done that so much better. But do you know the requirements or constraints the customer put on the engineer? There have definitely been times I have installed something in a way I was not happy about, but I had limitations put on me by the customer around aesthetics, etc. and had to do the best I could. The same goes for designs and configurations; I may look at a config someone else did and say, “What the hell were they thinking?” But I was not in the meeting with the customer to gather the requirements for the network and to hear what problem they were looking to solve. When I meet with a customer during a kickoff for a remediation or a new network I always ask the same questions, and I may repeat a couple, because during the course of those meetings you may get different answers from the customer, or new things may come up that were not apparent to them or to you at the beginning. This is where we start to design the perfect wireless network.

There is a lot of discussion these days about what number CWNE someone is, or which version of the IE you are studying for. I am all for certifications, but don’t make the mistake of putting your knowledge and understanding ahead of the customer’s needs and what they are actually looking to do. In my opinion, when you put those needs first, no matter how ugly that baby may look to others, you have created the perfect wireless network. Because it was built for that customer and that customer alone.

Ekahau’s ECSE Advanced Class – Why you need this

Recently I attended Ekahau’s ECSE Advanced course. I had heard about the class through WLPC and knew the class had changed since its inception to include some really cool stuff so I jumped at the opportunity to attend.

During day 1 of the 3-day course it became apparent this was not going to be a standard class on surveys and wireless. After introductions and housekeeping we jumped right into the content. It was very refreshing to see that we would be covering more processes and workflows than the actual software and how to use it. The curriculum was very timely as well, since in my day job I am working to build a team of engineers doing designs and surveys. Workflows and processes are always the hardest things to deal with and get in place when building a team. As we continued the discussions of how and why certain workflows should occur within a team of engineers and surveyors, you could see lights going on in the attendees’ heads, and the discussion began to pick up with lots of ideas, information and thoughts around the subject. Things definitely started clicking for me on how the team should be set up for management and project sharing.

We then continued with a discussion around the Foundations of Success for a wireless project. Most of us that have been doing this for any length of time already have our idea of success and what determines it. But the discussion around this subject and the content was very thought-provoking. Success has 4 foundations that equate to a repeatable process which, when followed, will provide the same outcome each time, which is exactly what we are looking for with our projects.

The discussion then moved to how to work in teams with Ekahau files and manage project files successfully. This is not as easy as it sounds when you have teams of engineers and surveyors out on multiple complicated sites, splitting the workload and then needing to bring it all back together. This is where a majority of teams begin to struggle. We then did an exercise within our lab groups to show how this works and the importance of following the workflows and lifecycle laid out at the beginning of the project. Things can quickly go off the rails, as we found out.

The second day of class we began discussing Ekahau Connect and how the tools we have in Ekahau Pro help with teams of multiple users, as well as the cool new tools Ekahau has added for the Sidekick, like Packet Capture and Cloud Sync. We began leaning on our wireless skills and knowledge for the labs we did at this point. We did a couple of surveys to capture data, then did some spectrum analysis to get used to the RTFM interface. Once this was complete it was time to really use the RTFM and find hidden interferers in and around the classroom space. This is always a challenge and definitely helped remind me how important it is to go back to your roots in RF.

Finally we discussed and did some labs around attenuation testing and mapping. This is becoming a more integral part of wireless surveys in many different forms. When used correctly, the information gathered from attenuation readings can help build out an information database for your team as well as cut down on the time on-site APoS surveys take, while still providing just as much data.

The class then finished on the final day with discussions around file manipulation, scripting and report templates. These three topics can really help shape how a wireless team uses the data from surveys and can really set the team apart from others. The scripting and file manipulation are still new to me so I will not comment too much on them, but the report template aspect of Ekahau is one of the most important pieces of the software. For years we have written reports with a standard template and then copied and pasted screenshots and data sets from either AirMagnet or Ekahau for presentation to a customer. Inevitably a reference to a previous customer or project would get lost in the shuffle and lots of late-night quick editing would need to occur. With the way Ekahau handles report templates, teams can save literal hours and even days in reporting. Beware before starting down this road: the templates are written in JSON, so either some knowledge of it is needed or some strong Google-fu. When starting for the first time the templates seem overwhelming, but as you get into them, understand how things work and use the Ekahau site for reference and examples, it comes quickly. Which is needed for the final exam of the course.

The course finishes with using data from the project we worked with during the week to build a report based off an example report. The example is what the final report should look like, and we needed to build the code and formatting within the JSON template. This proved a little overwhelming for some, just because JSON may have been new and they had not dealt with the templates previously. It was a little bit of a challenge, but again it was good as it provided different perspectives on reports and some ideas on formatting that I had not thought of previously, including using the Notes and Pictures features within Ekahau.

After the final, my head was full of ideas, thoughts, questions and excitement which is exactly what a course should do for us. The ECSE Advanced is more than worth the time and cost, especially if you manage or work with a team of multiple engineers and surveyors. The training arm of Ekahau has again scored big with this course in my opinion.

Ekahau Pro in the Field


 

In the wireless field Ekahau has started to become the standard for wireless site surveys and predictive designs. Earlier this year the latest version of their software, Ekahau Site Survey, was released with a cool new facelift, cloud sync functionality, new functions to use with the Sidekick and a rebranding to Ekahau Pro.

I personally have been hesitant to use it for a few reasons, but mainly because I am not one to go all-in on new software that I have not personally put through its paces before turning it over to a whole team as a ‘corporate standard’. Especially when there was such an overhaul as there was with Ekahau Pro. I had the opinion the software was kind of rushed to market and still had some issues that needed to be worked out before turning it over to the larger team. Most have been fixed, as Ekahau, as they always have, is listening to the users and professionals and working to bring us one of the best packages on the market.

Recently I attended the Ekahau ECSE Advanced course (this will be covered in another post) and got my hands truly dirty in the software and all the other tools Ekahau Pro has brought us. This helped calm some of my misgivings about issues with the software, as well as really helped me understand some workflows, team concepts and the basic awesomeness Ekahau has provided to the industry. After the course I needed to perform an outdoor survey that came out to about 2.8 million square feet, so I figured this would be a great time to really put the iPad app and some of the great features of the software to the test.

I started with really sitting down and working through what my workflow should look like. This is something that I had somewhat done in the past, but not to the point of actually writing out from project inception to reporting how the flow should look. Without this workflow I now realize Ekahau is just data collection software. Once you get a solid workflow in place and really use it, the software really stretches its legs.

I began by setting up my project as I normally would. I then decided to try out the iPad survey instead of dragging my laptop all over this outdoor survey in the heat. I got my Sidekick set up on my bag and got the iPad app running. I then had to make a decision on how to get the project to the iPad. I was on-site to tune the WiFi and really get it working better, so I decided to transfer the project to the Sidekick to move it to the iPad. Cloud sync is not a feasible solution for me and my team at this point, as there is no file structure to keep projects separated by customer, survey, etc. With hundreds of surveys and multiple surveyors this gets out of hand and unusable very quickly. I am confident Ekahau is working on a solution and am excited for it so I can really start using this cool feature.

Having the project on the internal drive of the Sidekick was super useful. It gave me a central drive to use for both the iPad and the laptop as I needed to edit, etc. It also gives me a built-in backup of the project in case I do something stupid, like that would ever happen, and delete the wrong file or have some sort of corruption and lose hours of data that might not be able to be replicated. Having the iPad connected to the Sidekick via a USB cable makes transferring the file very quick and simple. The connection from the Sidekick to the iPad can be somewhat challenging depending on what generation of iPad you are using. The Sidekick is a Micro USB connection and the iPad has either a Lightning connector or a USB-C connection. I have had an issue finding a Micro USB to Lightning cable that works without adapters and the like for the iPad, and the ones I purchased did not hold up well in the field during surveying. Now came the fun part: surveying.

Surveying with the iPad was a welcome change, but not without its own challenges. The years of holding a laptop in one hand and clicking while trying to read a map are coming to an end. The iPad was obviously much lighter than a laptop of any kind, and clicking with the Apple Pencil was nice and easy as opposed to using a touchpad or some such thing on a laptop and mis-clicking, right-clicking by accident, etc. The main issue I had with the iPad and the Pencil was the heel of my hand accidentally tapping and placing a data collection point that I was constantly having to remove. This taught me another trick I should have adopted years ago: clicking more often so I can easily execute an undo without having to re-walk all the real estate already covered.

I then decided to use the Notes function within Ekahau for the installation. This feature has been expanded nicely in Ekahau Pro to allow notes and pictures to all be together, along with a running history of notes showing who added each note and when. This helps when multiple people are using a survey file and the notes are being pulled out into a report after the survey. I was using this feature in particular to capture pictures of the AP installation along with location, serial number and MAC address to output an as-built type table at the end of my report. The feature is very cool with the iPad, as you can use the internal camera, then use the Pencil to do markups right on the note and then type out any other notes needed for installation or information. The only drawback was that I was using this during my validation survey, and when I wanted to take a photo or place a note I had to stop the survey and then restart it after the note was captured. It was a pain the first few times, but you get used to it quickly and just work with it.

I had one other issue during this survey that was no fault of Ekahau: the iPad began overheating very quickly in the heat of the day. I made it about 30 minutes out of the gate before the iPad totally overheated and shut down for an hour or so to cool down. Once I began working in the evening and early in the morning I had no other issues with the iPad.


All in all, after delaying using the iPad and Ekahau Pro for a few months, I am very happy I decided to put it through its paces and was very pleased with the final outcome and flow of work. As explained in the ECSE Advanced course, the workflow is the most important part of using the software. The ease of surveying with the iPad was very welcome, and the ability to hold the survey file on the Sidekick and move back and forth between the iPad and the laptop for further analysis was very exciting. Ekahau has yet again brought us what we have been asking for and is set up very well for the future.

 

Security is the New Standard

Everywhere we look today we hear about hacking of servers or email systems, credit card systems being compromised and public Wi-Fi as a ‘use at your own risk’ service. With all of the big bads out there, security should be the new standard within wireless.

Security is more than a buzzword

There are so many buzzwords in the industry at this point, with 5G, WiFi6, OFDMA, WPA3 and so on; security should not be considered just another one. For years wireless security was nothing more than a rotating passphrase, if someone remembered to change it. WEP finally got hacked, which gave way to WPA and then WPA2. But for the most part all devices were still using a passphrase that was proudly displayed on a white board, sandwich board or the like. When wireless was a ‘nice to have’ commodity this was just fine. With wireless now becoming the primary medium for access, security is a must. Data moving back and forth between private and public clouds requires better security than a passphrase. Certificates and centralized authentication, authorization and accounting have become a must. Centralizing these functions into a single system makes securing and monitoring devices within these data-sensitive networks manageable.

How can this go further within the network?

Taking security to the next level

Basic monitoring of security within the network, user logins, MAC authentications, machine authentications, failures, etc., is great for keeping up with what is happening or troubleshooting when a user is having an issue. But with the risks in today’s networks, both wired and wireless, a deeper level of understanding and monitoring is needed.

This is where a User and Entity Behavioral Analytics (UEBA) system comes into play.

The basics of a UEBA seem simple, but it is a very complicated process. Multiple feeds provided by sources such as packet capture and analysis, SIEM input, NAC devices, DNS flows, AD flows, etc. all come into the system and are correlated against rules set up by the security administrators. As this traffic comes in and is analyzed per user, a score is assigned to that user based on where they are going on the Internet, traffic coming in from and going out to ‘dangerous’ locations (i.e. Russia or China), infected emails that were opened, etc. This score is then updated over time. Once customized thresholds configured by the administrators are met or exceeded, different actions can be taken on that device: it can be disconnected from the network, quarantined on the network, or an alert can be sent to an administrator.
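
To make that flow a little more concrete, here is a minimal sketch in Python of the scoring-and-threshold idea. The event names, weights and thresholds are all made up for the example; a real UEBA product correlates far more feeds and uses ML models rather than a static table.

# Toy illustration of the UEBA scoring idea described above. The event
# names, weights and thresholds are invented for this example; a real
# UEBA product correlates many feeds and uses ML, not a static table.

RULE_WEIGHTS = {
    "traffic_to_flagged_geo": 25,   # flows to/from locations the admin marked risky
    "malicious_email_opened": 40,
    "failed_auth_burst": 15,
    "dns_to_known_bad_domain": 30,
}

THRESHOLDS = [
    (90, "disconnect"),   # kick the device off the network
    (60, "quarantine"),   # move it to a restricted role/VLAN
    (30, "alert"),        # just notify an administrator
]

def score_user(events):
    """Sum the weights of every rule-matching event seen for a user."""
    return sum(RULE_WEIGHTS.get(e, 0) for e in events)

def action_for(score):
    for threshold, action in THRESHOLDS:
        if score >= threshold:
            return action
    return "none"

if __name__ == "__main__":
    events_seen = ["failed_auth_burst", "dns_to_known_bad_domain",
                   "traffic_to_flagged_geo"]
    s = score_user(events_seen)
    print(f"score={s}, action={action_for(s)}")   # score=70, action=quarantine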

Total Package

Designing and deploying networks with complete 360º security visibility is no longer an option but a must. With data flowing in and out of private and public clouds, into and out of Internet-based applications, and the pervasiveness of wireless as a primary access medium, there has never been a more important time to make security a standard and not an afterthought.

WiFi 6: Why We Need It and What It Isn’t

Wireless networks have been around for a long time. We all know the history of the industry: it started as a nice-to-have feature that let us work without a cable. Today wireless has become the primary medium for connectivity in most industries and most households. As that shift has occurred, wireless technology has had to try to keep up. The latest phase of this race is the 802.11ax, or WiFi6, amendment.

Why do we need WiFi6?

By now everyone has heard that 5G is coming and about the crazy fast speeds it will bring on the cellular side. We will look at that more in another post. But WiFi is fighting the same issues as cellular in today’s world. We are oversubscribed on WiFi, speeds suffer because of older technology, wireless is the primary connection method of almost every device in the world, and IoT is coming. Enter WiFi6.

To be upfront as we begin this, ratification of the 802.11ax standard looks to be at least a year away, with most stating a date of September 2020 before this will happen. Even without full ratification, manufacturers are starting to put out access points and a few clients are starting to trickle into the market.

So with ratification still a year away, why do we need to worry about WiFi6 now? WiFi6 is more about capacity than speed. As more and more devices access the wireless network, bottlenecks begin appearing. The way WiFi6 handles this is a trick taken from the cellular industry: OFDMA (Orthogonal Frequency Division Multiple Access). The easiest way to explain it is that today we are taking a highway that has 8 lanes and funneling it down to a one-lane road. A huge bottleneck occurs and all traffic grinds to a halt, like the 405 in California. With WiFi6 and OFDMA, those 8 lanes stay 8 lanes and traffic can flow freely. With these extra ‘lanes’, capacity is increased, and that is the key part of WiFi6. There is a great white paper on the traffic-lane analogy, with well-done diagrams and more information, here (https://www.arubanetworks.com/assets/so/SO_80211ax.pdf).
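
To put some rough numbers behind the lane analogy, here is a back-of-the-envelope toy model in Python. It is my own simplification, not anything taken from the amendment, and the overhead, rate and frame size are assumed values; the point it illustrates is that with lots of small frames the per-transmission overhead dominates, so serving clients in parallel on resource units wins even though each client individually gets a slower ‘lane’.

# Toy airtime model (my own simplification, not from the 802.11ax amendment).
# Idea: every transmission pays a fixed overhead (preamble, contention, ACK).
# Pre-WiFi6, small frames from N clients go out one after another, paying that
# overhead N times. With OFDMA, the channel is split into resource units so the
# N clients can be served in one transmission, paying the overhead once.

OVERHEAD_US = 100.0        # assumed fixed cost per transmission, microseconds
PHY_RATE_MBPS = 400.0      # assumed full-channel data rate
FRAME_BYTES = 500          # small frame, e.g. a VoIP or IoT packet
CLIENTS = 8

frame_time_us = FRAME_BYTES * 8 / PHY_RATE_MBPS   # one frame at the full rate

# Serial (pre-OFDMA): one client at a time, overhead paid per client.
serial_us = CLIENTS * (OVERHEAD_US + frame_time_us)

# OFDMA-style: all clients share the channel at once. Each gets roughly 1/N of
# the subcarriers, so each frame takes ~N times longer, but they run in
# parallel and the overhead is paid only once.
ofdma_us = OVERHEAD_US + frame_time_us * CLIENTS

print(f"serial: {serial_us:.0f} us, ofdma: {ofdma_us:.0f} us")
# With these numbers: serial ~880 us vs ofdma ~180 us. The saving is almost
# entirely the 7 extra per-transmission overheads we no longer pay.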

We have all heard about the speeds and how fast we can now send and receive traffic on WiFi6, but capacity is the key to the system. More capacity equals more opportunities for devices to be serviced on the network, especially for time-sensitive data like voice and video over WiFi. As we move to mobility-first workplaces and stop pulling cables to desks, wireless becomes more and more important. Wireless design is ever more complex now, and we have to use the spectrum smarter to allow more of these devices to function, and function well.

As stated previously, the key to the new 802.11ax amendment is not all about speed; it is about capacity. We need to be looking at how we handle this time-sensitive data, not just how we push it faster. With WiFi6, yes, the speed is there if you have the right client, but how do we service that least-capable device and make it function as if it were a WiFi6 device? Capacity is the key as we continue to add more devices: IoT, wireless-first deployments, nurse call devices. WiFi6 is the key to solving this issue and granting the capacity we so badly need.

Auto-Channel Timing and the Issues it can Cause

All wireless network vendors have Auto-RF management of some manner: RRM for Cisco, AirMatch (formerly ARM) for Aruba, etc. Most of the industry uses these features in about 95% of installs to handle power level changes and channel changes based on interference or utilization. But something I have noticed time and again is the number of installs that leave these Auto-RF algorithms running on their default values.

So the question is, why do we care about this?

When using this for control of power I typically do not see a big issue in using default values for the timing of the algorithm. However, for channel assignment I have seen lots of problems over the years with default values and the issues they can cause clients.

What is auto-channel management?

Simply put, auto-channel management is exactly what it says: centralized automatic management of the channels being used in the network by an RF or mobility master. Each manufacturer has their own way of managing and handling these changes, but the concept behind it is universal. We will look into each manufacturer’s way of doing it in another blog; this one is simply about how it generally works.

During normal operation of the wireless network, access points collect data about the RF environment, whether from dedicated sensors, off-channel scanning, or the RSSI values clients are being seen at, as well as neighbor messages from surrounding APs in the same RF group or neighborhood. This data includes client load and interference seen from radar, microwaves, Bluetooth or other networks in the surrounding area.

All of this data gets sent back to the RF Master, typically the wireless controller on the network or a master controller that is handling these duties. This master then takes all of this data to make the calculations for the APs in the network for an optimized channel plan to help mitigate interference as much as possible.
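
To make the idea concrete, here is a toy channel planner in Python. It is purely my own illustration of the concept, not how Cisco, Aruba or anyone else actually implements their calculations; it just shows a master handing each AP the channel least used by the neighbors it reports hearing.

# Toy channel planner (my own illustration, not any vendor's algorithm).
# Each AP reports which other APs it can hear; the "master" walks the list
# and gives each AP the channel least used by its already-assigned neighbors.

CHANNELS = [1, 6, 11]   # 2.4 GHz non-overlapping channels

# neighbor reports: AP -> set of APs it hears over the air
neighbors = {
    "ap1": {"ap2", "ap3"},
    "ap2": {"ap1", "ap3"},
    "ap3": {"ap1", "ap2", "ap4"},
    "ap4": {"ap3"},
}

def plan(neighbors):
    assignment = {}
    for ap in sorted(neighbors):                       # deterministic order
        used = [assignment[n] for n in neighbors[ap] if n in assignment]
        # pick the channel used least by this AP's neighbors
        assignment[ap] = min(CHANNELS, key=lambda ch: used.count(ch))
    return assignment

print(plan(neighbors))
# {'ap1': 1, 'ap2': 6, 'ap3': 11, 'ap4': 1}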

Once this data is compiled on the master, the changes are sent back to the network based on anchor times and interval settings. Cisco does this by default every 10 minutes starting at midnight. Aruba sends this to the Mobility Master at 5 am local time by default. A common misconception I have run into over the years is that just because RRM runs every 10 minutes, the channels must be changing every 10 minutes; that is not necessarily the case.
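
If the anchor-plus-interval idea is new, this little Python sketch shows how the run schedule falls out of just those two numbers. The function is my own illustration, not vendor code, with the 10-minutes-from-midnight default mentioned above used as the example input.

# Quick illustration of anchor time + interval scheduling (the defaults come
# from the paragraph above; the logic is my own sketch, not vendor code).
from datetime import datetime, timedelta

def next_dca_run(now, anchor_hour=0, interval_minutes=10):
    """Next run time for a DCA process anchored at anchor_hour with a fixed interval."""
    anchor = now.replace(hour=anchor_hour, minute=0, second=0, microsecond=0)
    if anchor > now:
        anchor -= timedelta(days=1)        # today's anchor hasn't happened yet, use yesterday's
    elapsed = (now - anchor).total_seconds()
    interval = interval_minutes * 60
    next_run_index = int(elapsed // interval) + 1
    return anchor + timedelta(seconds=next_run_index * interval)

now = datetime(2019, 11, 8, 9, 27, 0)
print(next_dca_run(now))                       # 2019-11-08 09:30:00 (10-min interval from midnight)
print(next_dca_run(now, interval_minutes=720)) # 2019-11-08 12:00:00 (12-hour interval)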

 

Why is this an issue for clients?

With the addition of 802.11h, the management frame information elements now include Element ID 37 for Channel Switch Announcements, as shown below from the IEEE.

             
Element ID (1 octet) | Length (1 octet) | Channel Switch Mode (1 octet) | New Regulatory Class (1 octet) | New Channel Number (1 octet) | Channel Switch Count (1 octet)

The Channel Switch Announcement is sent from an AP that has been marked as needing to change channel by the AutoRF calculations. The important parts of the element are the Channel Switch Mode, New Channel Number and Channel Switch Count.

The Channel Switch Mode informs the clients of the AP that is scheduled to change channels that a change is going to occur. If this value is set to 1, the clients should cease transmitting data to the AP until the change has occurred, which will cause a disruption in communication for a short period until the change is complete. If the value is set to 0, there are no restrictions on the clients transmitting during the channel change.

The New Channel Number is pretty basic: this is the new channel the AP will be on after the channel change is complete.

The Channel Switch Count is basically the countdown timer for the channel switch. If the count is set to 0, the channel change could occur at any time. If it is some other number, that is the number of beacon intervals remaining before the change occurs.
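
To make the element a little more concrete, here is a minimal Python sketch that packs and parses the basic Channel Switch Announcement element (Element ID 37) using the three fields described above. It is for illustration only; note that the regulatory class field shown in the table comes from the extended variant of the announcement, so it is not included in this basic example.

# Minimal sketch of building the basic CSA element (ID 37) from 802.11h.
# For illustration only; the fields are the three discussed above.
import struct

CSA_ELEMENT_ID = 37

def build_csa(mode, new_channel, count):
    # Element ID (1), Length (1, always 3 for this element),
    # Channel Switch Mode (1), New Channel Number (1), Channel Switch Count (1)
    return struct.pack("BBBBB", CSA_ELEMENT_ID, 3, mode, new_channel, count)

# Mode 1 = clients should stop transmitting until the switch completes,
# move to channel 6, switch in 5 beacon intervals.
element = build_csa(mode=1, new_channel=6, count=5)
print(element.hex())        # 2503010605

def parse_csa(data):
    eid, length, mode, channel, count = struct.unpack("BBBBB", data[:5])
    assert eid == CSA_ELEMENT_ID and length == 3
    return {"mode": mode, "new_channel": channel, "count": count}

print(parse_csa(element))   # {'mode': 1, 'new_channel': 6, 'count': 5}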

So with this very basic overview, why does it matter to a client?

In wireless networking a client’s channel is based on the AP it is connected to. If the client is connected to an AP on channel 11, the client will communicate on channel 11. But again, why does this matter?

When an AP changes channels based on RRM calculations, every client associated to that AP must change as well. So if our AP that was on channel 11 changes to channel 6, every client associated to that AP needs to change to channel 6 based on the Channel Switch Announcement and the values within that element. Depending on the Channel Switch Count, if a client is downloading a file, making a video call, or just doing basic online tasks from their computer, there will be a disruption to that client. It could be very brief, but it depends on how long it takes the client to reassociate or roam to the AP’s new channel. With time-sensitive applications this can look like jitter or lag, or even just slowness on the network. This can equate to the dreaded, “The network sucks right now”.

Back to the opening: if the default timing of, say, 10 minutes is used, there is a possibility that a network seeing interference from surrounding wireless networks, high channel overlap, low RSSI values, etc. could change channels on an AP that frequently. So clients connected to these APs are changing channels every 10 minutes as well, which could be mistaken for a small service disruption or just poor network quality. This topic will be looked at in depth in a coming post.

In the next post we look at some other issues this constant changing of channels can produce as well as how a couple of different manufacturers handle AutoRF within their products.

 

Networking Field Day 20 Recap – Juniper is Hedging Their Bets

During Networking Field Day 20, which just wrapped on February 15, 2019, there was a most unexpected presentation from Juniper around automation and some of the things they are doing to hedge their bets on where the industry is moving over the next several years.

 

Mike Bushong (@mbushong) took the stage first for the team and laid out Juniper’s vision of where the industry is headed, and gave warning to some of us old guys: either evolve your skills and be ready to leave the CLI behind, or you will be left behind. Automation is not a buzzword in our industry any longer; it is the here and now. If you are just now looking into automation, looking to understand or learn automation, or even just trying to understand what automation means, you are already behind. As Mike points out in his Networking Field Day 20 talk, Juniper has led the way in automation for quite some time, but we are now at a tipping point where CLIs are going to be a thing of the past very soon. Mike also made something very clear during his intro that had a few of us in the room scratching our heads: the tools Juniper is putting out are open to the public, not all are Juniper specific, and they are getting no monetary value back from them; it is for a greater cause for us all, a fundamental shift in the industry that needs to take place. And I truly must agree with Mike on this, we as engineers have to start getting better at this or we will be left behind.

Next, Raunak Tibrewal (@raunaktib) took the mic and introduced us to Juniper’s new EngNet site and portal. EngNet is built around 3 bases that help an engineer prepare for and learn automation: Learn, Build, Explore. One of the things I was impressed with is that this was built with community in mind. Juniper has a dedicated Slack channel for community support as well as J-Net, making this a very collaborative and open learning experience. I connected to the EngNet site and was pleasantly surprised with the content and how it was laid out, and really shocked at the amount of content available. Right up front you can sign up for the Slack channel, and as you continue down there is a nice roadmap to get you going no matter what your level might be. Obviously a lot of the content is around Junos OS, but there are some vendor-agnostic lessons as well. I think two of the coolest features are the Automation Exchange, which has readily available Ansible playbooks, NAPALM scripts and other goodies, all sortable and searchable by Type, Market Segment, Network Role or Operational Process. The final piece that brings this all together is the Learn area, in which you can follow Assisted Learning via different options or follow the Self-Learning track. Most everything within EngNet is free, but there are some items that come with a free trial for 60 days or so and then will need to be paid for. All in all this is a great place to start if you are looking to get into Junos OS or just to learn through some open labs and even just see what others have done for automation.

The final presentation came from Matt Oswalt (@mierdin), who unveiled the Juniper NRE Labs platform. Matt started by building off what we had already heard from Mike: automation today in our industry is not a production-side problem but a consumption-side problem. The tools are there, the technology is there, but the people are not consuming them. To try and help solve this consumption problem, Juniper has released NRE Labs, which is a “Community Platform for learning and teaching automation and Network Reliability Engineering”. Basically they have put out a totally free (you do not even need to supply an email address), browser-based platform to learn vendor-neutral automation using tools such as YAML, Python, REST APIs, Git, Linux and so on. It starts with fundamentals if you are just getting your feet wet in automation or coding. Then there are tools available to try out, like Salt or NAPALM or Ansible. All of this runs natively in the browser with no need to download anything. The lessons are customizable based on your current strengths and weaknesses, which then bases the tasks on your current knowledge level and provides a roadmap for learning with links. One of the coolest things Matt and the team have done is to enable the use of Jupyter notebooks in the learning. Basically this gives you a Python interpreter running in real time so you can see the output in your browser window as you run the code. There is so much that has been done by the team on this. I would suggest going to check it out and seeing for yourself the greatness that is there.

What Juniper has been working on to enable users to actually consume the tools and automation that are out in the industry is really amazing, especially the fact that, in the case of NRE Labs, they are not looking to monetize it. This is huge in my opinion and in the long run could actually help Juniper, based on their product set and the strong reliance on automation within their products.

 

Check out the presentations from Networking Field Day 20 here.

Cisco RRM Restart

Recently, when working with Cisco wireless networks, I have been working to get Dynamic Channel Assignment (DCA) tuned in and to really understand much more about it. Some of the important things to make sure you are setting correctly include the Anchor Time, the DCA Interval (please don’t use the default; there is a blog post coming about that), etc.

One thing that became an option via the CLI in the 7.3 code train was the ability to restart the RRM DCA process on the RF Group Leader. Why is this important, I can hear some of you saying, or why would I want to do this? Here are a couple of examples of why.

If a controller enters or leaves an RF Group, or if the RF Group Leader leaves and comes back online, as in a reboot, DCA will automatically enter startup mode to reconfigure DCA regardless of the settings that have been changed on the controller, i.e. not using the default 10-minute interval. But is there a need to do this manually? Yes.

As you add new APs into the network it is a good idea, and a Cisco recommendation, to initialize DCA startup mode. The reasoning behind this is that as APs are added, DCA needs to rerun its calculations and provide a more optimized channel plan based on the newly added APs and what the other APs are seeing over the air. When this command is run, it should be done from the RF Group Leader and will only affect the RF Group Leader.

The command should be run on both 2.4 GHz and 5 GHz radios:

2.4 GHz: config 802.11b channel global restart

5 GHz: config 802.11a channel global restart