Managed Switch/Routing Ethernet Infrastructure for The Gravesend Inn, Part II

Note: I'm working here through new material to extend the networking coverage in the new edition of my book. If you're not already comfortable with general network concepts like IP addresses, subnets, and so on, you might want to review chapters 18-21 of the current edition.

In Part I of this series, I talked about the evolution of the network for the Gravesend Inn, and gave a little background on VLANs and managed switches.  Here, I’ll talk more about the specifics of the implementation of the Gravesend Inn network.

By 2011, I had moved almost every control system fully onto our integrated, managed, VLAN'd network (the one last holdout is a MIDI feed to the Yamaha PM5D mixer). Let's take a look again at the network diagram:

Let’s go through each VLAN, starting with lighting.

Lighting designer John Robinson used our MA Lighting GrandMA2 lighting control system, with a "full size" console as the main system, a "Light" system as backup, an MA Network Processing Unit (NPU) for distributing data, and two wireless access points for programming, one upstairs and one in the basement. All the Ethernet interfaces used by these devices (on two separate switches) are assigned to VLAN 2, and work on a subnet of 192.168.2.0 with a subnet mask of 255.255.255.0 (/24). Here's a Wireshark screen capture of one lighting control message going from show control to the GrandMA2, captured while the show was running:

You can see the blue highlighted text on the right, “goto cue 7 executor 1.22” (more on GrandMA control over IP in this detailed entry). 
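To make that concrete, here's a minimal Python sketch of sending a command-line string like that one to the console over TCP. It assumes the grandMA2's telnet remote is enabled; the port number (30000), the console address, and the login step are all assumptions on my part, so check them against your own console's configuration.

```python
# Minimal sketch: send a grandMA2 command-line string over TCP.
# Assumes the console's telnet remote is enabled; the port (30000),
# address, and login below are assumptions -- verify on your console.
import socket

CONSOLE_IP = "192.168.2.10"   # hypothetical console address on VLAN 2
PORT = 30000                  # assumed telnet-remote port

with socket.create_connection((CONSOLE_IP, PORT), timeout=2.0) as s:
    s.sendall(b"login administrator admin\r\n")  # assumed default login
    s.sendall(b"goto cue 7 executor 1.22\r\n")   # the command from the capture above
```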

Sound Designer Bruce Ellman used Stage Research's SFX sound effects playback system (with some input from me--SFX's main competitor, QLab, as of this writing, does not have an adequate network control scheme for our purposes), running a Dante Virtual Soundcard driver, which sent all the audio for the show over Ethernet to the Yamaha PM5D mixer for the main system and a Yamaha DM1000 mixer in the basement, which then fed the various amps and self-powered speakers via analog lines. All these devices are in the subnet 192.168.7.0/24 and, originally, were on the main Cisco network. We had a bit of difficulty getting the Dante system to communicate properly (buried configuration issues which we eventually worked out), so I moved these few devices off onto a simple unmanaged switch. In the end, with the time pressures of opening, this network stayed separate (this year I plan to bring it back onto the main network).

With SFX on a physically separated network, how did we talk from show control to the SFX system? Take a look at the diagram again and you will see two lines leading out from the SFX system: one goes to the Audinate streaming network; the other goes to the Show Control VLAN (1). To make this work, we equipped the SFX machine with two physical Ethernet interfaces; one resides on the sound streaming subnet, the other on the show control VLAN in the 192.168.1.0/24 main show control subnet. Windows routes messages from SFX based on the appropriate IP address: audio samples flow out on the sound subnet; control messages come in (and are responded to) on the main show control network interface. Here's a Wireshark capture of an actual sound cue trigger from the show:

(More details on controlling SFX using TCP/IP here.)
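For flavor, here's a hypothetical sketch of what a show-control-side TCP trigger to SFX might look like in Python. The address, port number, and command syntax are placeholders of my own invention, not SFX's actual protocol--see the entry linked above and the SFX documentation for the real syntax.

```python
# Hypothetical sketch of a TCP cue trigger to SFX. The address, port,
# and command string are placeholders, not SFX's real protocol.
import socket

SFX_IP = "192.168.1.112"   # hypothetical SFX address on the show control VLAN
PORT = 5678                # placeholder port

with socket.create_connection((SFX_IP, PORT), timeout=2.0) as s:
    s.sendall(b"PLAY CUE 12\r\n")  # placeholder command string
    print(s.recv(1024))            # the acknowledgment returns on the same interface
```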

When we started working on the Gravesend Inn last year, we only owned Watchout version 3 (we had version 5 on order but it didn't arrive, of course, until right before the show started). WO 3 was only designed to run a single video timeline, and last year we used two WO systems in two different--and unrelated--areas of the attraction. WO uses a proprietary protocol to synchronize multiple video displays on the network, and not surprisingly, in an experiment, I found that two WO 3 systems on the same broadcast domain did not play well together. The solution? Two VLANs. VLAN 3 and subnet 192.168.3.0/24 were for a single "dining chamber" WO machine; VLAN 4 and subnet 192.168.4.0/24 were for our new, three-screen bay window effect in our new "Conservatory" area. Both of these systems were controlled directly by Medialon Manager (more details below).
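Since Medialon is just opening a TCP connection to each Watchout production machine, the control traffic looks something like the sketch below. Port 3040 and the authenticate/run commands reflect my reading of Dataton's control protocol; treat them as assumptions and confirm against the Watchout manual for your version.

```python
# Sketch of triggering a Watchout timeline over TCP, roughly what
# Medialon Manager does. Port 3040 and the command strings are
# assumptions -- confirm against the Watchout manual.
import socket

WO_IP = "192.168.4.112"   # the conservatory Watchout machine on VLAN 4

with socket.create_connection((WO_IP, 3040), timeout=2.0) as s:
    s.sendall(b"authenticate 1\r")  # assumed: required before other commands
    s.sendall(b"run\r")             # assumed: start the main timeline
```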

Next up is the video surveillance system, which, with 16 IP cameras, generates an enormous amount of traffic. (See my series of blog entries on the crazy PoE problems I had with Netgear switches and PoE cameras here.) The cameras, of course, were spread all over the attraction, and connected to whichever of the four switches was closest. The server, though, transmits streaming video to two "search client" PCs (used by student operators to do instant replay and recall) located in the sound booth next to the commercial-grade network DVR. To keep the search clients' traffic separate from the streams from the 16 cameras, the solution was, again, two VLANs: VLAN 5, subnet 192.168.5.0/24, for the display and search systems; VLAN 6, subnet 192.168.6.0/24, for the cameras and their streams to the DVR server. The server comes with two Ethernet interfaces, so two cables connect that one physical computer to the same switch; each of the Ethernet interfaces is, of course, assigned to the appropriate VLAN/IP.

OK, I've skimmed over one important VLAN: VLAN 1, which contains the Medialon Manager show control system running on a PC, several input boxes that read in sensors from all over the attraction, and Programmable Logic Controller (PLC) boxes that control the animated effects throughout. In order to do its job, the show control system also needs to communicate with the other control systems--lighting, sound, video--which all reside on different VLANs. How can they communicate? In this case we need inter-VLAN routing (remember, a router connects separate networks together).

To see why, let's say we try to send a "play" command from the show control system at 192.168.1.111 to the Watchout machine at 192.168.4.112. These systems are on different subnets (192.168.1.0/24 and 192.168.4.0/24) and therefore will not be able to connect, even if we replaced the entire switch system with a bunch of hubs. And even if we widened the subnet masks to /16, which would put both machines in the same 192.168.0.0/16 subnet, it still wouldn't work, because each VLAN is a separate broadcast domain, and the broadcast ARP requests that are issued as part of the communications process would not be able to reach across the VLANs.
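You can check that subnet logic for yourself with a few lines of Python's standard ipaddress module (the variable names are just for illustration):

```python
# Worked example of the subnet test described above.
import ipaddress

show_control = ipaddress.ip_address("192.168.1.111")
watchout     = ipaddress.ip_address("192.168.4.112")

# With /24 masks, the two hosts are in different subnets:
net24 = ipaddress.ip_network("192.168.1.0/24")
print(show_control in net24)  # True
print(watchout in net24)      # False -- no direct delivery possible

# Widen the mask to /16 and they land in the same subnet numerically...
net16 = ipaddress.ip_network("192.168.0.0/16")
print(show_control in net16 and watchout in net16)  # True
# ...but the ARP broadcasts that direct delivery depends on still
# can't cross from VLAN 1 to VLAN 4, so it doesn't help.
```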

The Cisco SGE-2000P switches I selected have Layer 3 capabilities, meaning that they understand not only the raw Ethernet (MAC) addresses of each frame, but also the associated IP addresses. So the "stacked" system as a whole can act (when the feature is enabled) not only as a switching infrastructure, but also as a router that passes traffic back and forth between the VLANs. And with the Cisco management interface, this is pretty easy to use--you just enable the feature, and when the switch sees the IP subnet assigned to a particular VLAN, it creates a routing table entry so it can forward packets to the correct destination on each VLAN. But for that to work properly, we have to make sure one final network setting is configured properly: the "default gateway". In smaller, closed networks, this is a setting that can often be ignored, but we need it here. Why?
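Conceptually, the routing table the stack builds looks something like this toy Python sketch--an illustration of the idea, not Cisco's implementation:

```python
# Toy model of the inter-VLAN routing table: destination subnet -> VLAN.
import ipaddress

routing_table = {
    ipaddress.ip_network("192.168.1.0/24"): "VLAN 1 (show control)",
    ipaddress.ip_network("192.168.2.0/24"): "VLAN 2 (lighting)",
    ipaddress.ip_network("192.168.3.0/24"): "VLAN 3 (Watchout, dining chamber)",
    ipaddress.ip_network("192.168.4.0/24"): "VLAN 4 (Watchout, conservatory)",
}

def route(dest_ip: str) -> str:
    """Return the VLAN interface that reaches dest_ip."""
    dest = ipaddress.ip_address(dest_ip)
    for network, vlan in routing_table.items():
        if dest in network:
            return vlan
    raise LookupError(f"no route to {dest_ip}")

print(route("192.168.4.112"))  # -> VLAN 4 (Watchout, conservatory)
```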

A default gateway, for our purposes, is simply the configured IP address to which a host will forward any packet for which it can't otherwise find a route. So, let's configure 192.168.1.1 as the default gateway on the show control machine (and turn on the routing in the switches). Now what happens when we try to send our play command from show control to Watchout? The PC running Medialon Manager realizes that the IP address of the Watchout machine, 192.168.4.112, is not in its subnet, so it sends the packet to its configured default gateway of 192.168.1.1. The switch system then accepts the packet, figures out that it has a route to the 192.168.4.0 subnet on VLAN 4, and forwards the message on to the PC running Watchout.
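The decision the show control PC is making can be sketched in a few lines--deliver directly if the destination is on the local subnet, otherwise hand the packet to the default gateway (addresses from the example above):

```python
# Host-side next-hop decision, sketched in Python.
import ipaddress

local_net = ipaddress.ip_network("192.168.1.0/24")   # VLAN 1
default_gateway = ipaddress.ip_address("192.168.1.1")

def next_hop(dest_ip: str) -> str:
    dest = ipaddress.ip_address(dest_ip)
    if dest in local_net:
        return f"deliver directly to {dest}"         # ARP for it on this VLAN
    return f"forward to gateway {default_gateway}"   # let the switch route it

print(next_hop("192.168.1.50"))   # deliver directly to 192.168.1.50
print(next_hop("192.168.4.112"))  # forward to gateway 192.168.1.1
```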

The messages we are sending to Watchout use TCP, so the Watchout machine now needs to respond back to the show control system at 192.168.1.111 to acknowledge receipt of the message. How does this response, targeted back to 192.168.1.111 in VLAN 1, but issued on the 192.168.4.0 network in VLAN 4, make it back? Via VLAN 4's default gateway, of course, which I set to 192.168.4.1. I'm sure this is getting confusing, so let's put it into a graphic:

That all sounds like (and is) an enormous amount of work for the network, but these switches are made for this, and it all happens lightning fast. In the Gravesend Inn, for example, I was getting back 100-frame-per-second timecode updates from both Watchout systems, on two separate VLANs, with all this routing going on, and I never saw a glitch.

OK, what about that second connection in the diagram from the show control computer to the GrandMA2 lighting control console? Now that we've covered default gateways, we can go over that. The problem is that, as far as John R and I could figure out, you can't configure the GrandMA with a default gateway (something I hope they will fix in a future software release). This meant that I couldn't use the Cisco inter-VLAN routing to reach the lighting console for control cues, because, as in my example above, while I could get a packet to the console, the console then didn't know where to send any returning acknowledgement, since the show controller was on a different VLAN/subnet. The solution? A second Ethernet card in the show control PC (just like the SFX machine) and two IP addresses: one on VLAN 1, show control, and the other on VLAN 2, lighting.
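If you ever need to force traffic out a particular card rather than relying on the operating system's subnet-based choice, you can bind the socket to that card's address before connecting. A minimal sketch, with the same assumed addresses and port as the grandMA2 example above:

```python
# Pin a connection to the lighting-side NIC by binding to its address.
# The addresses and port here are assumptions, as in the earlier sketch.
import socket

LIGHTING_NIC_IP = "192.168.2.111"  # hypothetical: this PC's address on VLAN 2
CONSOLE_IP = "192.168.2.10"        # hypothetical grandMA2 address
PORT = 30000                       # assumed telnet-remote port

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((LIGHTING_NIC_IP, 0))       # source the traffic from the VLAN 2 card
s.connect((CONSOLE_IP, PORT))
s.sendall(b"goto cue 7 executor 1.22\r\n")
s.close()
```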

The system was rock solid for the run of the show, and the couple of glitches we did have were easy to track down because the problems were logged right in the switch system. And, in many ways, this Gigabit network is basically just idling through our show. I captured 4:29 of show control VLAN 1 data, which came to over 120,000 packets. That may sound like a lot, but it averaged out to 0.261 Mbit/sec, which is about 0.026% of the capacity of just that one VLAN!
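The arithmetic behind that utilization figure, using the capture numbers above:

```python
# Measured average rate vs. Gigabit Ethernet capacity.
link_mbps = 1000.0      # Gigabit Ethernet
measured_mbps = 0.261   # average rate from the 4:29 capture
print(f"{measured_mbps / link_mbps:.4%}")  # -> 0.0261%
```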

In the end, this integrated network approach worked really well for us, although if you're not interested in pushing things like this, it would certainly be just as effective to build the system using several separate, unmanaged switches (not hubs), one for each network. That would mean, however, that you'd have to put multiple Ethernet adaptors into the show control machine, and manage all those issues. Me, I'll stick with the VLAN'd system--it's just more fun!

And, if I've given you just enough information to confuse you, all of this will be covered in the forthcoming update to my book.
