We've been doing the Gravesend Inn haunted hotel since 1999, which means 2018 is our 20th year! Some years ago, I started a quest to get all the control systems for the entire attraction onto a network, and I completed that in 2011 (more here). And as the attraction has grown in popularity (we have been getting nearly 5000 attendees a year for the last few), in recent years I've been on a new mission to streamline, simplify and completely document all the control systems to maximize reliability (and so that I don't have to be in the building to babysit the systems every minute that the show is open). Last year, I moved all the video playback to Brightsign players, and that really went great (you can read about it here).
One of the oldest systems we currently use is our "TCP I/O" boxes, which date back to 2004 and are based on Advantech ADAM 6050 Ethernet I/O units. These units are cheap and, after I navigated a documentation nightmare, the units we built have been working well for 13 years now, with only a couple failing over that time. But in the last couple of years we've had a few intermittent problems, where a box wouldn't boot up properly or other similar issues. So I've been planning to move to a new system, and given our educational mission, I wanted to go with what is most widely used in our industry; these days it seems to me that the most widely used I/O systems are based on Beckhoff technology. The big theme parks use a ton of Beckhoff equipment, as do other major players like Tait, Hudson, etc. In addition, our original boxes were 12VDC, which worked well with the initial alarm-grade motion detectors we used, but my survey of the industry showed that 24VDC is much more commonly used. With all that in mind, four years ago, as funding allowed, I started purchasing Beckhoff parts and worked with students through the years, culminating in the completion of three boxes this past spring. Over the summer break I finally got a couple of days to finish configuring and documenting the boxes, and as usual, before I put anything into the Gravesend Inn haunted house attraction, I stress test it, typically for several days or a week--I want to find any problems now and not in October.
For the system, I worked with Brian Buck, our Beckhoff salesman, and selected a BK9050 bus coupler, which connects the I/O bus to the Ethernet network and speaks MODBUS TCP; and a KL1809 "HD Bus Terminal", which provides 16 channels of 24VDC digital input.
This Beckhoff system is reasonably priced (by industrial standards) and very flexible. A wide range of I/O is available and can be mixed in the same system; Beckhoff is also very aware of our market and even has a section on their website for stage and show applications.
Many years ago we used XLR cables in our I/O systems, but at the time we had a limited stock of XLR, so when we built our TCP I/O boxes I moved to Cat 5 for the sensor connections. I chose Cat 5 because it is cheap and can easily carry these small currents over the distances we need.
This worked fine, but it inevitably confused the students. Students often assume that a cable dictates what is carried over it (rather than the other way around), so many thought the simple contact-closure sensors we use were speaking Ethernet, and this led to patching problems. Fortunately, since Ethernet is transformer isolated, the systems wouldn't be damaged if incorrectly patched, but it still led to confusion. So, moving to the new I/O system also gave us an opportunity to change connectors. After a lot of thought and research, we decided to go with "M12" connectors, which are commonly found on industrial sensors.
These connectors aren't used elsewhere in our shows, so cross-patching won't be a problem, and they are very robust once connected. Most factory automation systems, in which these sensors are typically used, are not set up and struck every year the way our show is, so the key to making this approach work was finding field-terminatable connectors like these, which let us build our own extension cables.
Setting IP Addresses
Beckhoff has a very clever method to change IP addresses using ARP commands (see my book for more information about ARP and associated network topics). To start this process, you first factory reset the bus coupler by powering off, turning all the DIP switches on and removing any I/O and terminating only with the KL9010 end block:
You then power off the unit again, connect your I/O, turn all the DIP switches off and power up again; this resets the bus coupler to its default IP address of 172.16.17.0.
To change the IP using the ARP method, you first run the arp -a command to find the MAC address of the unit, then delete its ARP table entry, then create a new entry with the target IP, and finally issue a special ping command to set the address.
The manual, in section 4.4.3, says, "It is, however, only possible to alter addresses within the same network class"; I assume the reason for this is that in the final ping command you have to ping outside your subnet, and that won't be possible without a router. But I figured out that if you have a computer with two Ethernet jacks you can set one to a class B address and the other to class C, and then connect cables from both interfaces to a switch connected to the Beckhoff. (Note: I assume this would also work if you changed the IP address of your machine mid-configuration, but I didn't try this.)
Update July 9: Jim Janninck clued me in to the fact that you can assign multiple IP addresses to the same interface! Very cool. Use the "Advanced" button on the IPv4 interface screen.
It will show up in ipconfig like this:
Here's a log of the commands, which are detailed in the Beckhoff manual (substitute your desired target IP for the 192.168.1.xxx address; some MAC and other addresses are munged for security reasons):
First, let's run IPCONFIG to check the two IP addresses I configured into my PC:
Ethernet adapter LAN 2 PCI:
Connection-specific DNS Suffix . :
IPv4 Address. . . . . . . . . . . : 192.168.1.123
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . :
Ethernet adapter LAN 1 Integrated:
Connection-specific DNS Suffix . :
IPv4 Address. . . . . . . . . . . : 172.16.17.123
Subnet Mask . . . . . . . . . . . : 255.255.0.0
Default Gateway . . . . . . . . . :
Now let's ping the newly factory reset bus coupler to get it in the ARP table:
Pinging 172.16.17.0 with 32 bytes of data:
Reply from 172.16.17.0: bytes=32 time=3ms TTL=60 ...
Now let's run arp -a to get the MAC address of the bus coupler:
Interface: 172.16.17.123 --- 0xb
Internet Address Physical Address Type
172.16.17.0 00-01-05-xx-xx-xx dynamic
Interface: 192.168.1.123 --- 0xd
Internet Address Physical Address Type
Now, let's delete the ARP table entry for the bus coupler:
C:\Windows\System32>arp -d 172.16.17.0
Now let's create a new ARP table entry
C:\Windows\System32>arp -s 192.168.1.xxx 00-01-05-xx-xx-xx
Let's check to make sure it's there (note that it associated with the 192.168 interface):
Interface: 172.16.17.123 --- 0xb
Internet Address Physical Address Type
Interface: 192.168.1.123 --- 0xd
Internet Address Physical Address Type
192.168.1.xxx 00-01-05-xx-xx-xx static
Now let's run the special PING command that sets the address. This is where you need the second interface (or you need to change your IP address to something in the new subnet range before you try this):
C:\Windows\System32>ping -l 123 192.168.1.xxx
Pinging 192.168.1.xxx with 123 bytes of data:
Reply from 192.168.1.xxx: bytes=123 time=622ms TTL=60
Reply from 192.168.1.xxx: bytes=123 time=2ms TTL=60
Now let's run arp -a to check to make sure everything came through OK:
Interface: 172.16.17.123 --- 0xb
Internet Address Physical Address Type
Interface: 192.168.1.123 --- 0xd
Internet Address Physical Address Type
192.168.1.xxx 00-01-05-xx-xx-xx dynamic
And that's it!
An alternative to the ARP-based method is the BootP method, where a BootP server is used to assign addresses based on MAC addresses. To enable the BootP method, you turn DIP switches 1-9 on and 10 off. Once you configure the address, if you leave the DIP switches in this setting, "The address assigned by the BootP server is stored, and the BootP service will not be restarted after the next cold start. The address can be cleared again by reactivating the manufacturers' settings ..." Beckhoff supplies a BootP server called "TCBootP Server".
The Watchdog Timer

These Beckhoff modules have a watchdog timer for enhanced safety on outputs. From page 53 of the manual:
The watchdog is active under the factory settings. After the first write telegram the watchdog timer is initiated, and is triggered each time a telegram is received from this device. Other devices have no effect on the watchdog. ... The watchdog can be deactivated by writing a zero to offset 0x1120. The watchdog register can only be written if the watchdog is not active. The data in this register is retained.
Note that it says the watchdog timer is activated by the first write "telegram" (a MODBUS TCP Write Coil command or similar), so if you only ever do read operations (which is all we do on these input boxes), the watchdog timer will never be started. Once the timer is started by a write command, if no MODBUS TCP command (read or write) is received within the timeout (typically 1 second), the unit will go into a fault state (the red watchdog error LED will light) and the outputs will shut off. You then have to go through a reset procedure to activate the unit again.
To disable the watchdog timer, you have to (as detailed above) write a zero to offset 0x1120. Through experimentation I found that you have to do this before any write (output) commands are issued, but only after the unit has completely booted up. Watch for the activity LED on the Ethernet port to light up before issuing the command.
Note: If you trigger the watchdog timer (red light on), then issue a reset, and then write a zero to the watchdog register, the unit will function with no watchdog functionality but the Watchdog error light will remain red.
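For reference, here's roughly what that disable-watchdog write looks like on the wire. This is a minimal Python sketch using raw sockets rather than any particular MODBUS library, and the IP address in the usage example below is just an illustrative placeholder, not one of our actual box addresses:

```python
import socket
import struct

def disable_watchdog_frame(transaction_id=1, unit_id=1):
    """Build the MODBUS TCP 'Write Single Register' (function 0x06)
    frame that writes a zero to register 0x1120, disabling the watchdog.
    (0x1120 is the raw protocol address; a client that numbers registers
    from 1, like Medialon, shows this as 0x1121 / 4385.)"""
    pdu = struct.pack('>BHH', 0x06, 0x1120, 0x0000)
    # MBAP header: transaction ID, protocol ID (0), length, unit ID
    mbap = struct.pack('>HHHB', transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

def disable_watchdog(host, port=502):
    """Send the frame; per the notes above, do this after the coupler has
    completely booted but before any write (output) commands are issued."""
    with socket.create_connection((host, port), timeout=2.0) as s:
        s.sendall(disable_watchdog_frame())
        return s.recv(256)  # a successful write echoes the request back
```

Calling something like disable_watchdog("192.168.1.50") (a hypothetical box address) before your control system issues its first output write should keep the watchdog from ever arming.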
Connecting to Barco Medialon Manager
The Gravesend Inn has been run by Barco/Medialon Manager show control software since 2002, and the way I always incorporate any new technology into the network is to first write a small, standalone Manager program for the new device only, test that, and then import it into the larger Gravesend Inn program. Since this box is input only, and we are not doing any write operations, we can disregard the watchdog timer. In Manager you have to configure both the I/O resource and a device pulling from that resource.
You then assign the resources to a Medialon I/O device:
Reading and Writing
Note: The Register Numbers in Medialon are "off by one", meaning they are one higher than the Register Numbers in the Beckhoff. For example, Register 4385 (0x1121) in Medialon addresses Register 4384 (0x1120) in the Beckhoff.
The inputs on the device are read automatically by the I/O resource if configured as shown above.
We're not using outputs (and if you do, you have to manage the watchdog as described above), but for posterity, here's the information: outputs are written using a "Write Coil" command starting at address 1.
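For completeness, here's a sketch of what such a Write Coil command looks like at the frame level, in the same raw-socket Python style as above (function 0x05, "Write Single Coil"; the coil address here is the raw protocol address, so check your client's off-by-one numbering convention against the note above):

```python
import struct

def write_coil_frame(coil_address, on, transaction_id=1, unit_id=1):
    """Build a MODBUS TCP 'Write Single Coil' (function 0x05) frame.
    The protocol represents ON as 0xFF00 and OFF as 0x0000."""
    value = 0xFF00 if on else 0x0000
    pdu = struct.pack('>BHH', 0x05, coil_address, value)
    # MBAP header: transaction ID, protocol ID (0), length, unit ID
    mbap = struct.pack('>HHHB', transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu
```

Remember that the first such write starts the watchdog timer, so after sending one you must either keep polling within the timeout or have disabled the watchdog first.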
I ended up with a simple little polling program:
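The actual program is a Medialon Manager screen, but the logic is simple enough to sketch in Python for anyone working outside Manager. This is an illustrative raw-socket version, not our production code; the host, poll interval, and channel count are assumptions to adapt to your own setup:

```python
import socket
import struct
import time

def read_inputs_frame(start=0, count=16, transaction_id=1, unit_id=1):
    """Build a MODBUS TCP 'Read Discrete Inputs' (function 0x02) request."""
    pdu = struct.pack('>BHH', 0x02, start, count)
    # MBAP header: transaction ID, protocol ID (0), length, unit ID
    mbap = struct.pack('>HHHB', transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

def parse_inputs(response, count=16):
    """Unpack the input states from a Read Discrete Inputs reply.
    Bits are packed LSB-first: input 0 is bit 0 of the first data byte."""
    byte_count = response[8]           # byte count follows the 7-byte MBAP header and function code
    data = response[9:9 + byte_count]
    return [bool((data[i // 8] >> (i % 8)) & 1) for i in range(count)]

def poll(host, port=502, interval=0.1):
    """Poll all 16 KL1809 inputs forever; read-only traffic like this
    never starts the watchdog timer."""
    with socket.create_connection((host, port), timeout=2.0) as s:
        while True:
            s.sendall(read_inputs_frame())
            print(parse_inputs(s.recv(256)))  # 16 booleans, one per channel
            time.sleep(interval)
```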
The boxes are undergoing stress testing now, and if we get funding to replace our cables, I plan to start transitioning the Gravesend Inn sensor systems over this year to the new boxes.
Several excellent students (many now alumni) worked with me on this project over the last four years. Michael Sauder, now at Radio City, did the initial testing on and configuration of the Beckhoff units from Medialon manager. Tim Durland, now at Smart Monkeys, expanded on Michael's work and figured out the watchdog timer issues. Woody Woytovich, now in his Local 1 IATSE internship, did the box layout and schematic and constructed the first one. Dominique Hunter, Bailin He, Neil Carman, and Daniel Santamaria all worked on constructing and wiring the last two boxes this spring. Thanks as always to Eric Cantrell of Barco/Medialon for helping us with protocol details on this project and Jim Janninck of Timberspring for advice on box construction and initial testing.
Note: Updated July 9, 2018 with updated ARP-based IP and Medialon Configuration; thanks to Eric and Jim for the updated info.
It's become an annual tradition: since 2009 I've been writing about networking from a live sound perspective after returning from Infocomm (and this year after storm chasing as well). You can see last year's entry here. In this admittedly jargon-filled post I'm assuming you're familiar with the A/V networking landscape; if not, much of the terminology I use here was defined in previous write-ups.
This year, I would describe the state of the A/V networking world as pretty much the same as last year, with some interesting developments on the horizon. The lion's share of the products at the show running audio over Ethernet are doing so with Audinate's Dante; a few (important) companies are running AVB.
Update July 6, 2018: Roland Hemming's annual audio networking survey data has been released; it's an interesting read and supports my anecdotal observations above.
I unfortunately missed Audinate's day-long networking event this year, so I may have overlooked some new products. But seeing new Dante products at Infocomm is such a common occurrence now that I didn't really bother trying to document them all. The AVNU (AVB/TSN) Alliance did not have a booth as far as I know, but Audinate was there with a Dante booth and there were numerous "Dante spoken here" signs throughout the hall.
One trend I saw is that there are now quite a few small Dante I/O boxes on the market, including the original ones from Amphenol, along with offerings from Audinate itself and also Neutrik. All the smaller boxes I've seen deal only with line input and don't have head amp control or phantom power, so if you want to use a mic input you need something like a Yamaha Rio. Yamaha showed their new V2 series of boxes, but I completely forgot to get over there to look. To control the head amp/phantom power on these you still need to use a modern Yamaha console or their R Remote software. Update 12:11pm: See the comment from Uwe Weissbach below--apparently the new Rios can be controlled from the front panel, which I think is great.
One cool new Dante product I saw was a USB Dante interface from RME that supports redundant Dante operation. This is a great benefit over Audinate's Dante Virtual Soundcard, and it also offers more flexible clock options (I'll be looking into these for our haunted house to increase our redundancy). They are also making an AVB interface.
Cisco's Catalyst AVB switches were announced in 2016 (see my writeup here) and are now finally AVNU certified (and Cisco has a section on their website devoted to AVB). This is a long overdue development for AVB in large installations where the users can have the IT support to manage these high-level switches. Personally, though, for most typical live sound applications I recommend simpler switches; I personally use the Cisco small-business switches and that's also what Yamaha typically recommends. I asked the Cisco rep if AVB was coming to that line of switches but he didn't know.
Presonus also had an AVB switch on display, but, like the MOTU AVB switch which came out a few years ago, it's not AVNU certified. The Presonus sales rep on the stand didn't know what AVNU certification was (here's a writeup and photos of my visit to the AVNU testing lab in 2013), so it doesn't seem to be a priority for them, and they don't have any products on the certification website in any case. In 2015, a rep from MOTU said on the theatre sound mailing list, in response to a thread I started, "AVnu certification is a priority for us but our interfaces and switch are not certified nor have we applied for certification yet. There were a couple of corner-case AVB features that kept us from starting that process when we released our first AVB interfaces last year. Once those are ironed out and solid, we plan to get certified." That was three years ago, and they are still not listed on the AVNU site, so it seems that certification is no longer a priority for MOTU either. So what's the value of the certification if these well-known companies are out there selling AVB products that are not certified? Since last year's Infocomm, in addition to the Cisco switch, there are only a few new products on the AVNU certification list: the L-Acoustics LA4X amp/controller and P1 measurement platform/AVB processor, and the S3-24P PoE switch from Control4, which describes itself as a "leader in the smart home market" (and I don't think they were at Infocomm, as far as I could tell).
Connecting Dante and AVB and AES67
There seemed to be more connecting and interfacing boxes available, and AES67 functionality was advertised all over the floor (I'm still looking for some non-Dante gear that speaks AES67 so I can learn it--please contact me if you have anything I could borrow!). Point Source Audio had the AuviTran AVBx3 Audio Toolbox on their stand, with both AVB and Dante cards. Presonus was rumored to have a similar product, but the sales rep there said it wasn't ready in time for the show.
To Watch For the Future: AVB/TSN and Milan
The biggest A/V networking news at Infocomm was the announcement of "Milan", a new effort to help with AVB interoperability. They had a rollout event at Infocomm, but I was not invited--I only found out about it at all because I was hanging out talking to friends on the d&b stand. My friend and audio networking expert Roland Hemming, whom I missed at Infocomm, did make it into the meeting and has written up his insights, which you should read here. "It was a nice presentation from the Avnu team", he said in his writeup, "but gave the impression that the rest of us are crawling around in the dirt, barely able to connect anything presumably due to our lack of opposable thumbs." That's the tone I felt too, reading through the AVNU Milan web page here and the whitepaper, which you can download here if you register.
What is Milan?
The AVNU Milan website describes it this way:
AVB is an open standard that each manufacturer can use in their own implementation, but device interoperability isn’t guaranteed without certification. Avnu Alliance compliance testing and certification is ideal for network infrastructure switches and ensures interoperability at the network layer, but doesn’t outline specification requirements for the application layer such as media formats, media clocking, and etc. It doesn’t assure interoperability amongst Pro AV end devices. Milan does.
Effective interoperability was a goal of AVB from the beginning. The first thing I wrote on AVB, in 2009, had a quote from a (now gone) roll-out article written by one of the key developers of AVB, saying that a key goal of the effort was to "...enable the construction of highly interoperable Ethernet networks capable of streaming audio and video with perfect QoS." So the very existence of this Milan effort, for me, points up the shortcomings of AVB, and one of the reasons I think AVB hasn't gained broader acceptance: nothing in the existing standard offers the user anything like the plug-and-play patching convenience we already have today with Dante (of course, I've written a lot about that already). Even in Milan, from what I understand, the way the patching will be done--which is what the users actually interact with--is still left up to individual manufacturers (or maybe a third party?) to handle. For competitive reasons there's not much incentive for clean, unified, plug-and-play multi-manufacturer signal routing that is consistent for the user, and I doubt it will happen any time soon in AVB systems, if ever.
What Will Milan Offer?
In the section titled benefits for "AV System End Users" (who are listed fourth after Manufacturers, AV Managers, and IT Managers), the Milan white paper gives us the following:
* Milan fulfills expectations for real plug-and-play network setup and functionality. Network structures don’t require setup or complicated switch configuration tasks.
* Networks as signal and control transport structures becomes easy, fast to set up and reliable. Users can concentrate on their creative tasks.
Neither of these things is true today in AVB systems, where real, practical multi-manufacturer interoperability exists only in limited ways. But every time I fire up Dante Controller I'm impressed at how fast it discovers everything in a truly plug-and-play way. Most Dante networks require little or no switch configuration, and I would say Dante already allows users to concentrate on their creative tasks.
The whitepaper goes on to lay out a list of promises, and while the word "Dante" doesn't appear in the document, it's clearly targeted at some perceived version of the Dante world, with references to not depending on a "single company", etc. And there are some implications in the paper that, if interpreted as veiled references to Dante, are not accurate (like the implication that you need to configure switches and QoS for Dante, which you only have to worry about in the largest systems--think airport-sized installations).
In terms of technical details, the white paper offers the following specifications:
Media Clocking Specification
Stream Format Specification
AVDECC Specification for Endpoints
This is pretty low level, basic stuff; Roland has more detail in his writeup. And of course none of this changes the fact that to run AVB you still need to buy AVB-capable switches (certified or not). In terms of actual benefits for the user, the most interesting thing is the AVDECC implementation (which I had to look up). Regarding this, the whitepaper says, "... Milan defines a profile for professional audio devices with a small subset of the standard, and tries to remove all ambiguities from this subset in order to achieve basic inter-operability at the Control layer."
"Control" of the discovered devices is promised, and, being the controlgeek blog, this is something that caught my eye. As we've seen many times, real, multi-manufacturer control that is successful in the market is something that is rare in a competitive industry, and I've documented this in my book since the 1990s and here on the blog (I have examples under the heading "limitations of standards" extracted here). But specifically for the audio market, we can look back to the development in the late 1990s of AES24 and its eventual market failure and withdrawal in 2004. AES24's legacy lives on in OCA/AES70, but for similar reasons that's only achieved limited success in the market, as I've detailed starting in 2012. Again, OCA and AES24 were developed by very smart and capable people (and friends of mine), but market adoption has been limited for the usual competitive reasons: many audio manufacturers put EQ in their products, but it's very difficult to get them all to agree as to how that EQ should be controlled. And it was amplifier manufacturers, back in the days where you would buy them separately from speakers, who all offered incompatible, non interoperable control solutions and did not support the initial standardization effort. Sound like a familiar situation? Easy exchanging of digital audio streams, on the other hand, is something that's in everyone's interest. (And ironically, Audinate is actually in a position where it could dictate some basic control functionality, like at least head amp gain and phantom power status).
Who Developed Milan and Why?
This part of the whitepaper really felt condescending to me, and seems to have been written by a bunch of very smart people who haven't really done their market research to see what people are actually doing today in the field (or the text was fluffed up by a very competitive marketing person):
Milan is the result of 18 months of close collaboration amongst direct competitors including AudioScience, Avid, Biamp, d&b audiotechnik, L-Acoustics, Luminex and Meyer Sound. Milan was created by the technical experts designing the systems and driving product roadmaps to impress upon other manufacturers the importance of this technical transition for the future of their business.
Market leaders decided long ago that AVB is a technically superior network technology that guarantees deterministic delivery of audio, video and data, and offers a sustainable standard technology that is not limited by one company’s vision and its future development and support decisions for its technology.
Today, major manufacturers in the Pro AV space have taken the lead with the first tangible solution to promise deterministic, reliable and future-proof delivery of networked media.
The Milan initiative is a long-term approach to bringing about change across the Pro Av market, and product certification will guarantee fool-proof interoperability of deterministic networked Pro AV devices.
This all seems to reflect the viewpoint of many of my manufacturer friends who are not on the Dante bandwagon. They understandably don't want to base their products around a core technology from another company (Audinate), and many of them seem to think that Audinate will eventually get bought up and could stop development of Dante (which is what happened to CobraNet), or go in another direction, or something. And they think AVB will be there waiting to take over. These are a lot of brilliant people, but I think they are wrong.
Where Will it Go?
At my school, we bought a Yamaha CL5 mixing console. I picked that board because it's widely used in the NYC event market, many of our graduates will encounter them in the field, and they are well made and affordable. Given the reality of budgets, we will likely be using this console for the next 10-15 years. That means we're running Dante for the next 10-15 years, even if Yamaha and Audinate both went out of business. Many rental shops around here and touring sound companies are in the same boat. And with so much Dante product in the world (and the rumor that Yamaha owns a stake in Audinate), if Dante development stopped, it's likely a consortium of manufacturers would continue it anyway.
But in the end this Milan effort seems to be led primarily by a group of high-end loudspeaker manufacturers (all of which I'm a fan of), and I'm not really sure how--other than guaranteeing that there will be a group to continue some kind of development--Milan will really benefit users. If you're buying d&b speakers, you're going to have to buy a d&b controller/amp anyway. If you're running all self-powered Meyer speakers, you could use anyone's speaker controller, but why would you when Meyer already makes great controllers optimized for their systems and integrated with their modelling software? And L-Acoustics has been moving in recent years towards their multichannel, control-intensive L-ISA system, which is dependent on their own DSP. So while I applaud and wish well any sort of forward movement towards interoperability, and I hope Milan succeeds, I don't think it's the game changer they seem to be advertising.
And I don't think I'm alone in thinking this way. I was emailing with a friend who works on major Broadway productions about this new effort, and he already knew about it and said, "It seems to me, as an end-user, unless you can get DiGiCo, Yamaha, Studer, and the console big-boys on board, any discussion of audio transport is missing a crucial component. If they can get DiGiCo to come out with a Milan card, that could be cool." But that wouldn't mean he no longer needs Dante. "Right now, DMI-Dante [interface cards are] how we interface with our audio networks, and it works seamlessly at the other endpoints (speakers, monitoring, other consoles, computers, and remotes)."
And Roland Hemming summarized thusly, "Having met with most of the Milan creators, I think, or at least I hope, they know this will not be the audio networking protocol to take over the world. Milan offers AVB features that frankly should have been there in the first place. It's a welcome addition to help AVB offer a robust, high-performance local audio network."
Two Networks for the Future
It seems to me that (again, in the live sound world) we will remain for the foreseeable future a two-network world (plus MADI, etc., of course), and while this is not ideal, that's the way these things always seem to work out. On many systems, Dante will connect the console, microphones, stage boxes, wireless mic receivers, recording rigs, in-ear monitors, measurement systems, and associated gear, and often the loudspeakers, with companies like Martin offering Dante directly into their boxes. But if the speaker system is Meyer, d&b, or L-Acoustics, then it will be running AVB, probably on a completely separate network with separate AVB switches, and the two networks will be connected with some kind of rudimentary, small channel-count interface, a converter box, or--if we're really lucky--an AES67-based network interchange.
But while Audinate has embraced AES67 and incorporated it into Dante, companies like Meyer have not, and don't seem to have any interest in doing so. I talked to my friends at Meyer about a huge and very cool AVB-based system they built for Metallica, and asked them how they got audio into their speaker system from the console. The answer? AES3, the two-channel, point-to-point digital-audio-over-XLR standard first developed in 1985. This is better than analog, of course, but we should be doing better in the 21st century. In reality, though, this is an OK (if limiting for the future) solution, since in a concert system typically only a handful of the many channels in the overall system are sent from the mixer to the speaker system, and this console/speaker dividing line also reflects a common division of labor anyway, with the (human) mixer responsible for most of the signal path up to the console outputs and the system tech handling the speaker systems and alignment. This is good enough.
And, as I've written for many years, "good enough" is where technology typically settles, until there is a compelling reason to move forward. Milan, at least as currently outlined, is a useful, incremental, and overdue improvement to AVB, but not compelling (or complete) enough to replace Dante in the market any time soon--certainly not within the time horizon of someone who has to buy or spec a system today. For the future, who knows, but of course, Audinate is not sitting still:
SDVOE Over Dante?
The world of video over Ethernet is still a bit like the wild west. One really interesting proof of concept on the show floor was Software Defined Video Over Ethernet (SDVoE) implemented by Audinate into Dante. They were doing presentations and had a working proof-of-concept system; I saw it patch video right through Dante Controller, which is very cool.
At least for an end user like me, seeing things like this operate is when it really seems real. We've been seeing SDVoE development boards and so on for years, but when you see Dante Controller patch video, even in prototype form, it seems like a real thing. We'll see what comes in the next year.
More photos from Infocomm here.
It's been a tough year for storm chasing generally, with a near-record-low tornado count (see graphic). But I got lucky and had a great two-part trip, which really made up for the last couple of years, when I busted on nearly every trip (see here and here and here). I also broke my tornado dry spell, seeing my first since 2015, and in notoriously difficult Iowa at that. As usual, I had to wait for my classes to end, and then planned to chase before and after the Infocomm show in Vegas, where I also taught a very fun networking workshop for Cirque du Soleil.
The forecast was showing a big setup in Nebraska, so I pushed my flight up a day, flew into Kansas City on Thursday, May 31, and made it to Lincoln, NE that night. Along the way, I discovered that my rental car had been left in a weird mode where its speed was limited to 80 (not good when the Nebraska speed limit is 75) and certain radio stations were blocked (on return, Hertz said this was not a new policy, but that a previous renter had messed with the settings; this goes on the rental car checklist). I randomly picked a hotel in Lincoln that turned out to also be the home base for the Grainex weather science project, so I saw all the trucks and their operations center. I headed out from Lincoln the next day and, as forecast, there was a decent system, eventually leading to a small tornado in Ord. But I was too gun-shy to get close in the hail once I realized my rental car had a sunroof. Still, it was a good day of chasing, and I did get out in front of a beautiful storm right at sunset (click on any photo to see a larger version):
I followed the storm after dark, and it put on a great cloud-to-cloud lightning show.
The storm lined out and followed me to my hotel in Grand Island, so I watched it come in there:
Unfortunately after this day, the weather completely died out, and my flight to Vegas wasn't until Monday June 4 and it was too expensive to change it. But late on Saturday morning, I was sitting in the lobby of the Hampton Inn trying to figure out what to do for the day and--luckily for me--well-known storm chasers Daniel Shaw and Jeremy Holmes walked in. I had met Daniel last year randomly in a Hampton Inn lobby in West Texas, and met Jeremy for the first time here. We talked storms for a while, then had a nice lunch where we continued to talk...storms. Daniel and Jeremy were heading way north for the next setup, but I had to get to KC for my flight to Vegas by Monday, so I had a leisurely drive back and found really nice BBQ along the way.
I had just been in Kansas City back in August for the eclipse, and on this trip I caught up with a long-time friend from boarding school and her husband for dinner, which was nice. On Monday I then headed off to Vegas which was an excellent trip on its own. Here's the map of this leg of the trip:
On this trip I experimented with doing Facebook live transmissions from the field, and it seemed to work out well. Here's a consolidated video of all the broadcasts, which is nice to be able to look back on--chasing is so intense, and you have to continually make so many decisions, that I often don't remember exactly where I was or what exactly happened when until I go back and look through the GPS tracks and so on.
I came back from Vegas to KC on Friday June 8th, and headed north to Des Moines, Iowa that night for a setup in a state that is notoriously difficult for chasers. On Saturday the 9th I headed north and saw one storm up near Forest City that got funnel reports before it even went severe-warned. I got up there, found the base of the storm, and almost immediately saw a tornado! My first in Iowa:
I stayed in front of the storm and tracked it southeast toward Mason City:
Mason City had already been hit with pretty major flooding in previous days, and the ground was saturated. I ended up north of the tornado-warned area, so I gassed up and sat there for a little while, thinking the storm would move on; it didn't. Instead, the whole town flooded, and when I tried to get out and back on the storm, I got blocked in nearly every direction, and finally ended up going through some very deep water. I'm pretty careful in flash-flooding situations, but in this case I was fooled; fortunately the Jeep Grand Cherokee powered right through it:
With all this delay I ended up stuck in the storm and had to core punch out through it, which was no fun.
But I did finally get in front of the now-linear storm before sunset, and ended up in Waterloo for the night.
People are generally nice and often come up and talk storms when they see you're a chaser:
The next day, Sunday June 10, was a down day weather-wise, so I watched the Formula 1 race at a sports bar and then took a leisurely drive south, ending up in Omaha to be in position for a setup the next day in Nebraska. The weather on Monday June 11 got a late start, and although I was in a good position, all the storms were heading right back into the Omaha metro area around rush hour, which I wanted to avoid, so I ended up behind the line and the tornado warnings.
My car even told me about the storms:
But it was still pretty spectacular back there:
The line was just moving too fast and in the rental car I didn't want to core punch through the hail, so I couldn't get in front of the line. And the models were showing it developing to the south, so I dropped south to Kansas (where I wanted to be for the next day anyway) and ended up in Manhattan around sunset:
And then a severe-warned storm fired up just as I was coming into town:
The next day I dropped south again. It's really pretty country around there.
From around Greensburg, Kansas I saw--from about 50 miles away--the anvil of an impressive storm to the southwest that hadn't shown up on radar yet. So I headed toward it, and it became a stationary monster storm. The storm just sat in one place and pumped out rain and hail:
I punched through it and wandered back and forth around Buffalo, Oklahoma for a while; the storm was powerful but not particularly photogenic, so I pretty much gave up on it and started heading back up to Kansas. But then, in the rear-view mirror, I could see that the storm was intensifying and pumping out unbelievable amounts of cloud-to-ground lightning. I turned around and immediately got stuck behind a pre-fab home being moved slowly on several trucks in the strong inflow winds, but when I got past them I found a good spot west of Buffalo, and it was one of the most amazing lightning shows I've ever seen. I set up the camera and actually got back in the car because I was afraid a bolt might get too close.
As the storm intensified it also developed south, so I moved south with it:
As the sun set, I headed south to Woodward, OK, where I got a room, and stopped on the way to shoot this wind farm:
The next day, Wednesday June 13, not much was forecast to happen, but a strong setup for North Dakota on the 14th had been indicated for some days. My flight was on Friday, and there was no way I could make it back in time from North Dakota, so I agonized about it, then pushed my flight back a day to Saturday and drove north. I stopped in Greensburg, KS, a town completely leveled by a massive tornado in 2007:
Had a nice lunch:
Helped several turtles out of the road:
I made it 500 miles north that night to Winner, South Dakota, up a long string of beautiful two-lane roads (I cannot wait for self-driving cars!)
The next morning I woke up and immediately started heading north, trying to get to Minot, ND before the storms fired. The Storm Prediction Center ended up issuing a moderate risk for the area right near the Canadian border, and then a tornado watch with an expectation of long-tracked supercells (every chaser's target). In the end, morning convection undercut the setup (at least on the US side of the border), but there were still some interesting skies and several isolated supercells:
I also found the limit of the four wheel drive on this dirt (this was much scarier than it looks here, especially with no audio):
I eventually got a room in Bismarck for the night, and the storms did fire up again around sunset on my way down there:
One of the things about chasing is that you end up in some pretty remote areas and see some interesting stuff. While doing a Facebook live shot, I noticed a fenced-in area with a big power feed and a radio tower right by my road. Sure enough, it was a missile silo! They were all over the place, interspersed with all the oil and gas drilling in the area. Also, blasting down a dirt road off a dirt road, I turned a corner and passed a truck simply marked "Security Forces," with two guys in uniform inside, which turned around and started following me. They must have been baffled by all the chasers in the area with their cameras and computers and antennas; I'm sure we were carefully monitored. The truck turned off somewhere behind me, and I passed by this, a "Missile Alert Facility" (you can read the sign by zooming in; more on Wikipedia here, and more on the whole system here). And they recently lost a box of grenades and a machine gun.
The next day, Friday June 15, I drove 770 miles back from Bismarck to Kansas City, then flew home the next day and made it in time to see the Loser's Lounge. Here's my approximately 3,000-mile GPS track and FB live compilation:
A special thanks to George Sabbi, who was my remote chase partner from his base in NJ for most of the chase days!
Some years ago, after my Mom died at the age of 63, I decided that I wasn't going to have a bucket list--I was going to, as my always conservative "nose to the grindstone" father then said, "do it now". For my whole life, I've always loved seeing the power of nature in action, whether it's the ocean or a tornado or a whale breaching or a stunning view. I chased my first hurricane in 1985, and for 10 years now, I've been going to the plains to chase these monsters whenever possible.
I typically chase alone, not out of desire but simply because it's a hard sell to try and convince someone to take a risk and spend a lot of money, drive for thousands of miles, inevitably eat a gas station cheeseburger at midnight, and--in the end--possibly see nothing. Or, see one of the most amazing and truly awesome sights you will ever see and then maybe get struck by lightning. For me, I enjoy the whole process, the sights, the decisions, and the constant engagement. When I'm on a severe storm I'm almost always in a state of "flow" and rarely notice what time it is, except to check how long until sunset. And the electronic connections of Facebook and seeing other chasers out there keep loneliness at bay.
I grew up a country boy but have been in the biggest of big cities for almost 30 years now; periodically I need some time in wide open country like the plains (which feels like a more expansive version of the land where I grew up in rural Maryland) to clear my head.
There's a big setup in the plains today, and while I'd love to be out there, I'm content for now to chase whatever happens locally for the rest of the season. But I know, come about January, that pull from the plains will start again and I'll be out there again next year...
I was happy to be able to go on my second “NASA Social” trip, this time at Wallops Island Mid-Atlantic Regional Spaceport (MARS), which has special significance for me. I grew up on the eastern shore of Maryland, about 100 miles north of Wallops, and as a kid I somehow got a special tour there. I also spent a week each summer in Ocean City, right up the coastline. So that area has always been of interest to me, and while I had visited the facility during an open house, I had never seen a launch there. I had been accepted for launches there twice before; the first time the launch got rescheduled to a time I couldn’t get out of work, and the second time got messed up by an administrative issue. But this trip lined up on a weekend where I didn’t have class obligations, right at the end of my crazy semester, so I jumped at the chance to do it.
NASA Social is a very cool program run by NASA which gives enthusiasts with an online presence almost the same access as fully credentialed media (my last NASA Social trip was to a SpaceX launch in Florida, which was amazing and enlightening; I wrote about it here). It’s a brilliant idea by NASA, since it spreads the word via people really excited about the whole process, and also fills in a bunch of gaps in media coverage--the only traditional TV media we saw on this trip, for example, was a crew of reporters from Ukraine.
This launch mission was designated Cygnus CRS OA-9E, where Cygnus is the name of the spacecraft and CRS is Commercial Resupply Services, which brings cargo and scientific experiments to the International Space Station (ISS). The Cygnus spacecraft was carried on an Orbital ATK Antares rocket, whose first-stage Ukrainian rocket engines are brought overseas by ship and then by truck to the facility for assembly. The launch was initially scheduled for very early (5am or so) Sunday morning May 20, so I drove down Friday, thinking I’d have a nice leisurely drive back to NYC on Sunday. The launch got pushed to 4:44am on Monday May 21, so I stayed another day, and I’m glad I did.
This launch carried cargo to the ISS, and also several scientific satellites which would be deployed after separating from the space station, and then the Cygnus was sent off to burn up in the atmosphere along with a lot of ISS waste.
We first met on Saturday, and the people in the NASA Social group--about 40, if I remember right--were pretty amazing, everyone from teachers to Washington DC security contractors, an interesting mix of extroverts and introverts. All of them had a passion for one aspect or another of this launch, and in these tumultuous times it's heartening to see such amazing, rational, passionate people both working on the mission and taking their own time to observe and communicate about the project.
We then got to watch the press conference detailing all the science that would be carried into space on the rocket; Spaceflight Now has a good overview of the science missions here, and a NASA video of the fascinating presentations is here.
On our rescheduled Sunday, we got to go out to the launch site, which was really cool:
I'm glad I brought my big telephoto. This thing is impressive:
We then got to tour the amazing horizontal integration facility (HIF), where the rockets are assembled.
One thing that was interesting to me: even with all this high-tech equipment, and scale aside, this place used a lot of the same kinds of techniques and tools that we use in building entertainment systems.
Next up was a press conference for the mission:
Of course this is the control geek blog, so visiting the range control center was pretty cool:
We got a surprise visit from newly Trump-appointed NASA administrator Jim Bridenstine, formerly an Oklahoma congressman with no science background. Fortunately, it's good to hear that since taking over NASA he has changed his tune on climate change.
Next up was the Balloon Research Development Lab:
Here they make and test balloons for research missions all over the world. They also make sounding (not orbital) rockets in a very cool, high tech machine shop:
They make a lot of cool parts here, but not the nose cones, since those are apparently a specialty:
The passion of the people who work here was infectious:
They also do electronics assembly and testing here, and again the similarity to the stuff we make to go on tour is interesting:
We then got to hear about the HaloSat mission, and then hear from astronaut Kay Hire, who had spent time on the ISS. I tried to go to sleep early (not my nature) to be up for the 2:15am bus call, but in the end I got maybe an hour of sleep. But seeing the launch from just two miles away was worth it, especially since thunderstorms cleared the area just in time:
On my last trip, one of the things that was so amazing was the sound. So on this trip I brought my little pro stereo recorder and synced it up with video from my GoPro. You should listen to this on headphones, or better yet loud with a subwoofer (keep in mind the audio is about 10 seconds behind the picture due to the distance):
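That roughly ten-second lag checks out against the speed of sound. Here's a quick back-of-the-envelope sketch; the 343 m/s figure (speed of sound in air at around 20°C) and the exact two-mile distance are my assumptions, not measured values:

```python
# Estimate how far behind the picture the audio should be at ~2 miles.
SPEED_OF_SOUND_M_S = 343.0   # approximate speed of sound in air at ~20C
METERS_PER_MILE = 1609.344

distance_m = 2 * METERS_PER_MILE          # ~3219 m from the pad
delay_s = distance_m / SPEED_OF_SOUND_M_S # light/video is effectively instant
print(f"audio lags video by about {delay_s:.1f} seconds")  # about 9.4 s
```

So the sound arriving about ten seconds after the picture is exactly what you'd expect from that distance.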
Even though we were all just observers, everyone felt invested in this launch, and it's a pretty profound experience. Everyone fell silent once the thing roared off over the horizon, and the rocket performed flawlessly. Cygnus docked with the ISS a few days later, and then re-entered and burned up over the ocean along with a lot of ISS waste (the rocket we saw is now at the bottom of the Atlantic somewhere). And what's really amazing to me is that the second stage stayed up there for more than 14 days. Update July 15: The capsule has now departed the space station.
I got back to my car at about 5:15am and drove directly home to make an 11am doctor's appointment and then to teach my 2pm class. The class I was teaching was the final sound system setup, so I made sure to add a subwoofer to the system and played my raw audio file for the students.
In the end, it was an inspiring trip; this mission directly supports the quest to answer basic questions about the universe, and the entire intent of this program is to better the world and our experiences on it. This is the kind of thing I'm proud to spend my tax money on and I encourage more of it.
Click on any of the photos above to enlarge them; many more photos here.