Managed Switch/Routing Ethernet Infrastructure for The Gravesend Inn, Part I

Note: I'm working here through new material to extend the networking coverage in the new edition of my book. If you're not already comfortable with general network concepts like IP addresses, you might want to review chapters 18-21 of the current edition.

Our annual Gravesend Inn haunted attraction is a pretty good reflection of the progress of networking technology in the field of entertainment control. The first edition of the show, way back in 1999, was built around a hardware-based Conductor show control system, and all the sensors were connected using contact closures and RS-232-based input/output (I/O) (see photo at right). In 2002, I shifted over from Conductor to Medialon Manager (which we still use today), with all the I/O still handled via serial connections.

2004 was the first year that IP addresses started appearing on my I/O paperwork, because that year I started switching all the I/O over to network-based systems. Over the years, as we added more and more, the network grew into a bunch of isolated, physically separated, "unmanaged" Ethernet switches: one for show control, one for lighting, one for sound, etc.

Last fall, we moved all the video playback and surveillance systems onto the network, and upgraded from 8 analog cameras to 16 PoE (Power over Ethernet) cameras. This approach had a huge advantage: we could run one Cat 5 cable to a camera position, and that single cable would supply power and carry video and audio, replacing three separate cables. We could have just added two more physically separated networks (one PoE switch for the surveillance system, and another for video playback through Watchout), but at this point, it was getting a bit crazy. Plus, I had long studied Virtual Local Area Networks (VLANs) but had never actually implemented one, and being in an academic environment, I feel like we have an obligation to push the envelope a bit. So I got approval for a set of managed, Layer 3 (more on that in Part II) Cisco SGE2000P 24-port Gigabit switches, and integrated the entire network into one physical network with seven VLANs.

What is a VLAN? Before answering that, let's go back in time a bit and delve a little deeper into networking. In the old days (say, six or seven years ago), we would build Ethernet networks for control systems using simple Ethernet hubs. These devices simply took every incoming Ethernet frame coming from a computer (or "host") connected on one interface of the hub, and forwarded it out to every other interface on the hub (see figure at right), thereby "broadcasting" all the information to every single connected device (if you know the OSI layers, this was all operating at Layer 1, physical). If you wanted to make the network bigger, you would just plug a second hub into the first one, and your "broadcast domain" would expand out to all the additional devices connected to the second hub. But you couldn't grow forever: the "5-4-3" rule limited you to five network segments connected by four repeaters, with hosts on only three of those segments.

When I first started working with Ethernet back in the mid-1990s at Production Arts Lighting, our networks were small enough that we didn't have to deal with this much. Many of our networks were built using a single four-port Ethernet "concentrator" (basically a hub with some ability to shut down an interface if there's a problem on a connected computer), which cost something like $1500 (today the same thing would be more like $10). Over the years, designers of larger networks had to carefully abide by the 5-4-3 rule and other design constraints. PLASA (then ESTA) generated a recommended practice with some guidelines for these networks, and this now-obsolete document makes for some interesting historical reading.
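The hub behavior described above can be sketched in a few lines: every frame that arrives on one interface is simply repeated out all the others, with no addressing involved. Here's a minimal Python model (the class and port names are made up for illustration; this isn't real networking code):

```python
class Hub:
    """Layer 1 repeater: every frame goes out every port except the one it came in on."""

    def __init__(self, num_ports):
        self.num_ports = num_ports

    def forward(self, ingress_port, frame):
        # A hub never looks at the frame's addresses; it just repeats
        # the signal out all other ports, "broadcasting" everything.
        return [p for p in range(self.num_ports) if p != ingress_port]


hub = Hub(4)
# A frame arriving on port 0 is blasted out ports 1, 2, and 3,
# even if only the host on port 2 actually wants it.
print(hub.forward(0, b"some frame"))  # → [1, 2, 3]
```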

By the early 2000s, Ethernet switches started to become affordable for our industry, and they are so cheap today that it's hardly worth buying a hub. A switch looks physically just like a hub--it's just a box with a bunch of Ethernet connections. But inside, it operates very differently: it's smart enough to learn the addresses of the equipment connected to each interface, and it forwards an Ethernet frame only to the interface where the intended gear is connected. This solves a bunch of problems (collisions, etc.--that's all covered in my book), and it can manage traffic and use the network bandwidth much more effectively than a system built out of hubs. A simple switch that just automatically learns the connected devices is called an "unmanaged" switch, operating at OSI Layer 2 (data link); high-performance unmanaged switches are very affordable these days, and work great for many entertainment control applications. For example, the GrandMA2 lighting console we use on the Gravesend Inn needs a network connection to its processing units; if that's all we need to do, an unmanaged switch works just fine (although it's worth investing in a good-quality, "non-blocking" switch with a quality power supply).
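The switch's "learning" can be modeled the same way as the hub: it notes which interface each source address arrived on, forwards frames for a known destination out only that one interface, and floods hub-style when the destination is still unknown. Again, this is a simplified sketch with hypothetical names, not a real switching implementation:

```python
class Switch:
    """Layer 2 switch sketch: learns source MACs, forwards by destination MAC."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port it was last seen on

    def forward(self, ingress_port, src_mac, dst_mac):
        self.mac_table[src_mac] = ingress_port  # learn where the sender lives
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]    # known destination: one port only
        # Unknown destination (or a broadcast): flood like a hub would.
        return [p for p in range(self.num_ports) if p != ingress_port]


sw = Switch(4)
sw.forward(0, "aa:aa", "bb:bb")         # bb:bb unknown yet, so the frame floods
sw.forward(2, "bb:bb", "aa:aa")         # now the switch learns bb:bb is on port 2
print(sw.forward(0, "aa:aa", "bb:bb"))  # → [2]
```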

In simple terms, each switch you connect to a network extends the "broadcast domain" throughout the network until it reaches a router, which is used to route traffic between separate networks (separate broadcast domains). In a well-designed system, this really doesn't present much of a problem for network performance, because we have lots of bandwidth now, and most of the broadcast traffic you're likely to see these days is low-bandwidth stuff like ARP requests (an address housekeeping message, covered in my book). And even if you are using something broadcast-heavy like ArtNet, this really shouldn't put much of a burden on modern, high-speed networks.
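The router boundary is easy to see with Python's standard `ipaddress` module: two hosts on the same subnet share a broadcast domain, while a host on a different subnet (behind a router) does not hear those broadcasts. The subnet numbers and device names here are illustrative assumptions, not the show's actual addressing:

```python
import ipaddress

lighting = ipaddress.ip_network("192.168.1.0/24")  # illustrative subnets,
sound    = ipaddress.ip_network("192.168.2.0/24")  # not the show's real plan

console = ipaddress.ip_address("192.168.1.10")
dimmer  = ipaddress.ip_address("192.168.1.20")
mixer   = ipaddress.ip_address("192.168.2.10")

# An ARP request from the console floods its broadcast domain: the dimmer
# hears it; the mixer, on another network behind a router, does not.
print(console in lighting, dimmer in lighting)  # → True True
print(mixer in lighting)                        # → False
print(lighting.broadcast_address)               # → 192.168.1.255
```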

OK, we’re back up to 2012, and I started out asking the question: What is a VLAN, anyway?  It’s pretty much what it sounds like--a virtual Local Area Network, where all connected devices in the VLAN share the same broadcast domain. But the difference here is that, using VLANs, traffic from multiple broadcast domains (multiple virtual LANs) can be segregated, while still running on the same physical switch hardware. This gives a huge advantage in the physical architecture of the network, allowing a single physical network to serve a number of purposes.  
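Under the hood, VLAN separation between switches is typically done with IEEE 802.1Q tagging: on the trunk links between switches, each Ethernet frame carries a 4-byte tag containing a 12-bit VLAN ID, so traffic from one virtual LAN never leaks into another. Here's a sketch of building that tag (the VLAN number used is a made-up example, not one of the show's actual VLANs):

```python
import struct

def dot1q_tag(vlan_id, priority=0):
    """Build the 4-byte 802.1Q tag inserted into an Ethernet frame on a trunk.

    TPID 0x8100 identifies the frame as VLAN-tagged; the TCI packs a
    3-bit priority, a 1-bit DEI (left at 0 here), and the 12-bit VLAN ID.
    """
    if not 1 <= vlan_id <= 4094:
        raise ValueError("VLAN IDs are 12 bits; 1-4094 are usable")
    tpid = 0x8100
    tci = (priority << 13) | vlan_id
    return struct.pack("!HH", tpid, tci)  # big-endian, as on the wire


# A frame tagged for hypothetical VLAN 30, default priority:
print(dot1q_tag(30).hex())  # → '8100001e'
```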

For example, on the Gravesend Inn, I use four switches looped together using Cisco's proprietary "stacking" system. Each switch gives up two of its 24 ports to the stacking loop, so the stack essentially makes four separate 22-interface switches into one giant 88-interface switch:

And with more than 50 separate devices plugged into those four switches, I was actually able to implement this:

Each of the VLAN “clouds” on this diagram is, effectively, a separate network, but the entire network shares a common physical infrastructure. This requires more cable management (we're good at that in our industry), but makes things a lot more straightforward.
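To make the "separate networks on shared hardware" idea concrete, here's one way a seven-VLAN addressing plan could be carved up with Python's standard `ipaddress` module. The VLAN names echo the systems mentioned above, but the address ranges are purely illustrative assumptions, not the show's real plan:

```python
import ipaddress

# Hypothetical plan: carve a private /16 into /24 subnets, one per VLAN.
site = ipaddress.ip_network("192.168.0.0/16")
vlans = ["show control", "lighting", "sound", "video playback",
         "surveillance", "PoE cameras", "management"]

subnets = site.subnets(new_prefix=24)           # generator of /24 blocks
plan = {name: next(subnets) for name in vlans}  # one subnet per VLAN

for name, net in plan.items():
    print(f"{name:15s} {net}  ({net.num_addresses - 2} usable hosts)")
```

Each VLAN gets its own broadcast domain and its own subnet; the Layer 3 switch (more on that in Part II) routes between them where needed.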

OK, now that we've got the background out of the way, in Part II we'll talk more about the specific issues of the Gravesend Inn network.
