An experience report from measuring uptime. Many software people believe measuring uptime to be a useful tool to support or assess improvement in software reliability. My experience is different.
I joined a team at an internet-scale company whose job was to manage an incident chat bot and related incident database. The main job of the database was to track uptime for teams and products. It’s the best calculator of uptime I’ve seen in my career, and better than most I’ve heard of from other engineers. It is the kind of tool most software companies think they want.
But I’m telling you this story in the month of October as a horror story and a cautionary tale. The cliché summary: be careful what you wish for. The conclusion up front:
measuring uptime is deceptively expensive and inaccurate
reporting lapses in uptime leads to counterproductive behavior
using lapses in uptime to trigger mechanical consequences destroys morale
The chat bot’s main job was to support incident response. It had a bunch of features. But for this story we’ll focus on how it helped with calculating uptimes. The bot would record the start time and end time of any incident along with any time severity changed over the duration of the incident.
After an incident, teams were expected to estimate the customer impact for each of the major products and for each of the severity timespans over the duration of the incident.
One of the expected outcomes from an incident retro was to identify which team owned the impact for the incident.
From this data, we would generate reports of uptime expressed as a number of nines, adjusted by the percentage of customer impact. So 99.5, 99.7, 99.8, whatever was happening for a specific group.
These were broken down by both team and product and grouped over the past three months, alongside the past 30 day rolling window. The cells were colored green, yellow, or red according to team-specific or product-specific objectives for uptime. Reports were delivered in a weekly email to pretty well everybody in the engineering organization.
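A minimal sketch of that calculation (hypothetical Ruby, greatly simplified; the real system also handled severity changes mid-incident, per-product breakdowns, and team-specific objectives):

```ruby
# Hypothetical simplification of the uptime report calculation.
# Each incident contributes downtime weighted by its estimated
# customer-impact fraction (0.0 to 1.0).
Incident = Struct.new(:duration_minutes, :impact_fraction)

def adjusted_uptime_percent(incidents, window_minutes)
  weighted_downtime = incidents.sum { |i| i.duration_minutes * i.impact_fraction }
  (1.0 - weighted_downtime / window_minutes) * 100
end

window = 30 * 24 * 60.0 # a 30-day rolling window, in minutes
incidents = [
  Incident.new(90, 0.5), # 90 minutes affecting roughly half of customers
  Incident.new(20, 1.0), # a 20-minute full outage
]
puts format('%.2f%%', adjusted_uptime_percent(incidents, window)) # => 99.85%
```

The arithmetic is trivial; every input to it — duration, impact fraction, even which team an incident belongs to — was a negotiated judgment call in practice, which is much of the point of this report.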
These tools were built alongside a deep investment in nurturing a world-class incident response culture. For example, a self-guided training module was required as part of onboarding every engineer to teach them how to use the chat bot, how to run an incident, and how to know when to escalate. There were a lot of beneficial returns on the investment of developing that kind of culture. It is tracking uptime that I hope to discourage.
These tools had been under development by a team of about four engineers for four years at the time I joined the company. This level of investment doesn’t seem particularly outlandish: with about 500 developers in the organization, a back-of-the-envelope estimate of 1% of engineering effort is maybe even inexpensive.
One hidden expense was that longer incidents made data collection and data entry more expensive. They included ups and downs in severity; symptoms would cascade from one product area to another. Where those cascades started or ended was hard to identify and didn’t correlate cleanly with the changes in severity. Each change in severity and cascade of impact surfaced ambiguous boundaries for estimating impact. The more complex incidents involved many teams and many products, which further multiplied the ambiguity, difficulty, and cost of estimating impact.
Another unexpected outcome grew out of the ownership of incidents. Ownership was meant as a kind of accountability. But many retros would fixate on “who owned the impact?” or reassessing the impact instead of surfacing the things that would actually improve our incident response and service to customers: discovering mechanisms of failure, communication breakdowns, or places where the existing architecture wasn't keeping up with the customer growth.
Probably the most popular feature request that we got on the team was to allow incidents to share ownership between the teams involved. This was also the hardest thing for us to implement: it would have required a significant amount of change in the database schema and related calculations, and would have doubled (or more) the complexity of an already difficult and costly UX.
Best of Intentions
So time passed. We had a pretty rough couple of months over one August and September. It was in October (oh hey! an anniversary!) when leadership implemented a new policy: a kind of targeted code freeze. If teams entered the red, they were expected to stop feature development, and develop a plan that focused on reliability engineering. The plan had to be signed off by their VP and would include specific exit criteria that would enable them to resume work on their existing roadmap.
As teams encountered the new policy, it became universally hated. This memory is particularly acute for me because not long after the venom started flowing, I wrote an impassioned defense of the new process. Teams have an accumulation of technical debt. We know there are areas that get neglected. And the purpose of the policy was to create organizational cover, to buy time for teams to be able to invest in cleaning up some of that neglect.
What I learned in the ensuing backlash from my blog post is that leadership were not universally aligned on the new policy. In some parts of the company the pressure to keep to roadmaps was higher than the pressure to preserve reliability. Few leaders seemed to adjust their schedules when their teams entered the code freeze; many kept to their expected deadlines. A few former colleagues remember it this way:
One thing that I witnessed during this time frame was managers wrangling with each other over who would “own” the incident and be forced into [the code freeze]. Rather than doing what was best globally, they were both trying to optimize locally for their team. And, it led to misleading ownership that was assigned not for good reason, but so that managers could save their own SLAs and push things on to other teams who hadn’t used up their budgets yet. So, in essence, the game became “how to not be forced into [code freeze]” rather than “how to most effectively fix our overall system.”
For these teams, the result was perhaps the worst of policy outcomes. The teams already most exhausted from recent incidents were now facing double the demands on their time. Instead of creating cover, the policy was doubling the workload on teams already collapsing under overload.
I should add that other former colleagues remember some mixed or positive outcomes from the policy—not uniformly terrible.
I remember feeling pretty defensive (which is, like, the least useful emotion to have ever) and yes, it became more about “getting my team out of [code freeze]” in addition to fixing the underlying problems. Because it felt like the focus was more on “Here are the hoops the team needs to jump through to get out of [code freeze]” rather than (but, to be fair, in addition to) “here’s how we get better as a company”. We ... really didn’t need that split focus, IMO. We didn’t need hoops to jump through, or “reliability training wheels”. We had enough engineering excellence gravity that was already pulling us toward Doing the Right Thing. [Code freeze] was just noise on our end. Needless friction.
While preparing this report, I got feedback from one of the former VPs, who had put a ton of their own time into ensuring incident data was filled out thoroughly, even with very good automation around collecting that data.
I’ll reiterate: it was deceptively expensive to get good data into the system. Teams who were already displaying internal motivation to balance their reliability engineering with feature development were the ones making the extra effort to provide better data. But as cited in the earlier quote, they were also the teams who were least in need of “reliability training wheels.”
I further learned that the report itself had a subtle effect of shaming teams by publicly drawing attention to their red cells. This had the effect of suppressing the reported severity of incidents: low-severity incidents could skip the extra data entry and escape the accounting visibility.
These features had the combined effect of converting the reliability work into a kind of punishment.
But there’s more. As I looked more closely at the data that was in our database relative to the incidents that I witnessed, I recognized that every piece of data we had was being negotiated during the incidents.
They weren’t crisp measurable points. They were all judgment calls. Every one of them.
What’s more, our team was involved in existing company processes around customer root cause analysis documents, which further negotiated the customer impact reported to customers. When a customer demanded a report, say after a bad month, our job was to identify the incidents over the span of that report that would have affected the customer, based on the data we had: which products were affected and which products that customer was paying for.
So a great deal of effort went into cleaning the data, and into double-checking customer impact with teams who had perhaps not finished their data entry, to ensure the report covered only the incidents that could actually have affected that customer.
And I don’t want to suggest that the work we were doing for the RCAs was in any way deceptive. I think it was appropriate. But what I do want to make clear is that it was very expensive.
Only my teammates and I could actually see how much it was costing the company to collect the data. It was spread thinly across every single team, hidden in ordinary day-to-day work. The resulting numbers were based on judgment calls. For the many teams whose roadmaps and schedules remained unchanged even under the code freeze, all these very expensive-to-collect numbers failed to reduce technical debt or otherwise improve reliability.
The costs to morale across the company were substantial. And all of it further undermined the quality of what the company learned from incidents because so many were too busy fighting over who owned the impact.
Light strikes the retina and signals fire along the optic nerves, through the optic chiasm, through the optic tracts, and into the left and right thalamus en route to the visual cortex at the back of the brain. Before the signals reach the visual cortex, they must first pass through the limbic sections of the brain, that is, the emotional center.
By the time our brain has started to gather the shape and color and symmetry of whatever we see, long before we have words for what our eyes have met, we already have an emotional reaction. The language cortex and prefrontal cortex are almost literally the last to find out what's going on.
Riot or revolution?
When we see violence in the street, the word that appears in our mind reveals our emotional position to that violence. If we see a "riot", our heart is with the establishment. If we see a "revolution", our heart is with the protestors.
The same general principle applies for all of our senses. We are emotional creatures first and only occasionally have fits of reason. There really is no such thing as "being reasonable." We rationalize our emotional state, but we are not actually rational.
Today's date, numerically encoded according to US conventions: 3/16/15
The time this article posted (Mountain Daylight Time): 07:02
Woo Hoo! π day all over again.
If you casually ignore the first two digits of the Year of Our Lord. And ignore that I actually faked the publication date and time. And ignore that the Year of Our Lord is at best an approximation. And ignore that there is no 0 between 1BC and 1AD on The Number line of Our Lord. And if you ignore that at least 2/3rds of the world disagrees about the "Our Lord" part. In general you kinda have to overlook that everything about this exciting temporal milestone is layer upon layer of arbitrary human convention.
I mean, except for the ratio of a circle's circumference to its diameter and the corresponding conversion from a base ten representation of that ratio to the base eleven representation.
But by all means, don't let any of that stop you from celebrating this momentous occasion with a slice of pie.
I'm actually more excited about τ in base eight day: 6.2207732504205. If you arbitrarily choose a point close to the international date line, you could almost celebrate THAT day on the solstice. Tau Day.
I rushed out to catch the bus for fear of missing it. There was only
one other person waiting. I needn't have worried. There's a whole
story in there about unnecessary fear. But that's not today's story.
I recognized the woman waiting at the bus stop. I've seen her fairly
often on the bus. Our schedules are similar.
We're waiting together for the bus. Just the two of us. It's dark
out. She's looking at her phone.
I take out my phone too. I put my phone away. Feels awkward.
Then I see a couple men walking along the sidewalk in our direction.
The one looks her up and down.
And then. As he's passing us.
His. Eyes. Locked. On. Her. Face.
Too aggressive, I thought. The moment passed with them as they
proceeded along, yet her gaze seemed to follow him.
Or she might have been looking down the street to see if the bus was
That felt creepy.
I should ask her if that was creepy to break the tension.
That's what micro-aggression looks like, right? Would it help if I
What if he'd actually stopped walking to talk to her? I think the
unspoken social contract calls for me to intervene. Nevermind social
contract, my gut was already preparing to step in if things escalated.
"Move along," I imagined saying to him.
"What? Is she your girlfriend?" He asked knowing the answer.
I imagined her awkward body language at me picking a fight with a
stranger to protect her from the escalating microaggression. Was that
fear that things would get out of hand? Or was it relief to not be
standing alone at the bus stop?
I should ask her if that was creepy to break the tension.
This time she was a skilled martial artist. Her body language was
angry at me for assuming she needed my protection.
I should ask her if that was creepy to break the tension.
But how is my impulse to talk to her any different than his stare?
Would that break the tension or just pile on? Am I just looking for
an excuse to talk to a beautiful woman? Am I competing for her favor?
This time things escalate. He's armed with a knife. I wake up
briefly in the emergency room. Images of my young children playing at
home, then interrupted by the sound in my wife's voice as she gets The
Call. The joy on their faces melts to puzzled, worried looks as I
fade to black.
This time I'm waiting at the bus stop with a man. The gut check is
completely different. He's got this. It would be insulting to step
in. None of my business, anyway.
This time I know she's transgender. This is an unexpected variation.
She's beautiful. Did he know her before the operation? There's no
hello nor even a nod nor raised eyebrow. Still my gut steps in to
defend. "Move along."
This time she's ugly. How does this one play out? Does he even
pause in his step? Was it about her beauty? Or was it the sense of
power? As he gets further away I notice a subtle weave in his path.
It's dark, but way too early to already be drunk. Was this stare the
best he could do for a power trip? An angry reaction to being out on a
Friday night with a friend instead of a date. If he does stare and
then stop, does my gut step in? Or am I only interested in competing
for the favor of a beautiful woman?
Why am I still thinking about this? I thought.
Should I ask her if that was creepy?
Was she creeped out by standing alone in the dark at the bus stop with
me? I was watching him, not her. Could her face have been pleading
to him for protection from me?
What images do you think of when I ask you about the invisible part of design... the lines and shapes and proportions that make your design hang together in a coherent way?
I'm writing an article to teach computer geeks about design and to explain why CSS sucks as a language for designers. I need some visual support to explain the invisible. Words aren't going to cut it. Although beautiful images about typeface design would work nicely. :-)
I've got a few examples here: two from architecture, one a study for a figure drawing. These are in the right direction, but I'd love images from many other design disciplines.
Overheard: "I'm just a web designer. I don't program or anything."
Here a web designer adopts the cultural bias which values programming above design. But the bias cuts both ways. Designers are not to be trusted with code and coders are not to be trusted with design.
HTML and CSS are unfortunate consequences of this bias. In an ideal world, HTML would be purely semantic and the look-and-feel would be done completely with the CSS. Except that world doesn't really exist, and HTML gets littered with extra <div>s to prop up the design needs. And CSS gets littered with duplication of paddings and margins (at the very least) to adjust and control the positions of elements on the page.
And so we have grown templating languages on the server side to try to manage the deficiencies in HTML and CSS in various ways. The menagerie of HTML templating languages is beyond imagination. For CSS we now have Sass (with its SCSS syntax) and LESS: basically templating languages for CSS.
What the server-side languages have in common is introducing Turing completeness for languages that are not themselves Turing complete. When one language doesn't do what you want, invent another language which can be compiled into the lesser language. This is how C begat C++ begat Java and C# which... never mind, I've gone too far already.
You can see Conway's Law at work here. The programmers and designers are on separate teams and speak different languages. So architectural interfaces are created between the teams. Code goes on this side. Design goes on that side. Over time the architectural boundary between the teams accumulates a lot of kludge on either side to accommodate the inability for the teams to really communicate. And that boundary becomes a point of friction that slows down development and growth.
CSS is especially unfortunate. It is intended for design and it completely misses the mark right from the outset. Seriously. The heart of CSS from a design point of view is the box model. Let me say that again just so you really get the complete and painful irony. The language designed for designers jams all web design into a BOX model. Designers by nature want to think non-linearly and outside-the-box and the language they've been given confines them to a hierarchical tree of boxes. Seriously. So it's hobbled as a programming language and it's a cruel form of torture as a design language.
Allow me to introduce you to the Framework Adoption Antipattern. And
with it I will share some software history that you youngin's might do
well to learn.
The software industry is built around cycles of new adoption. The
churn creates an artificial pressure to keep up with the latest and
greatest. There's always a new hotness. There's also the illusion
that this time around maybe we'll get started on the right foot.
Maybe this new hotness will not lead us into a tangled mess.
For those of us who've been to more than one rodeo, it's depressing
to watch history repeat itself in the new hotness, just like it did in
the old and busted. The next wave is super tempting. Get ahead of
the crowd and you can become the hot shot writing books or speaking at
conferences. The early adopters always seem like the coolest kids on
the block. Added bonus, you can ditch the tangled mess you're in and
start fresh. But every revolution becomes the new
establishment. Which is why we keep going in circles.
Advice for a young programmer
I know how this sounds to you. I'm just old and crotchety. I don't
get it. I'm part of the establishment. This is a new world.
What you're building is really going to change everything.
You're right. You are going to change everything. But you will also
learn the truth in the cliche: the more things change the more they
stay the same. In five or ten years you will look back at what you've
created and see some depressingly familiar tangles. And there will be
another new hotness. Your once revolutionary new hotness will grow up
to become the new old and busted.
This story is for the long term. As an industry we still don't know
how to teach what we do. The only way to learn these lessons is to
join a revolution and experience the transformation to establishment.
This advice-disguised-as-a-story is for programmers starting their
careers.
Historical background on the path to MVC architecture for web apps
Sherman, set the WAYBAC machine to 1995. It was a momentous year.
Three items launched with particular fanfare: the Internet was
commercialized, Sun released Java, and Netscape released JavaScript
(in a browser first shipped one year earlier). The first public
announcements of PHP and
ruby were also in 1995. And the first working draft of XML was in
1996. All of these things in their respective communities were the
new hotness. All of them are now the establishment. I'll also mention
that Design Patterns was published in late 1994 'cos it comes into the
story later.
At the time enterprise computing was dominated by two-tier,
client-server architecture: a fat Windows client connecting to a fat
database. Over the next few years web applications would be dominated
by Perl CGI and Cold Fusion and its copycats: ASP, JSP, and PHP. Sun,
IBM, Oracle, WebLogic, BEA and others jumped on the new three-tier
architecture. They were selling java middleware in hopes of
breaking Microsoft's grip on desktop computing. Instead of a fat
Windows client, businesses could use the web browser that's installed
with the OS and move their applications onto expensive servers.
By the turn of the century, Internet Explorer had nearly won the
browser wars and Netscape had been bought by AOL. On the
server side, Sun and friends were facing backlash against Enterprise
Java Beans (EJBs) and Microsoft started its push to move the ASP
community to .NET. Sun began evangelizing the Model 2 architecture as
the new hotness: separate the display of content from the business
logic. It was a fashionable pitch at the time:
CSS was promising similar benefits of separating design from content.
Sun's Model 2 marketing and MVC
It was right at the turn of the century when our cultural wires got
crossed and we started using the MVC pattern to describe web
architecture. MVC was a profound innovation in object-oriented user
interface design from Smalltalk-80. That dash eighty refers to 1980
so we're clear that the pattern was already twenty years old at the
time. In fact, MVC is used in Design Patterns as an example to help
explain what a design pattern is. This was a rare moment when the new
hotness was consciously applying lessons from software history.
In the final days of 1999, JavaWorld published an article describing
the JavaServer Pages Model 2 architecture.
In May of 2000, Craig McClanahan from Sun donated a reference
implementation of Sun's Model 2 architecture to the Apache Software
Foundation. Struts would become the de-facto standard for java web
frameworks. No question it was a terrific improvement to apply the
MVC pattern to web apps in contrast to the Cold-Fusion-JSP-ASP-PHP
tag-soup model 1. And yet, and yet....
In Sun's marketing and the hype around Struts, Model 2 was described
as an architecture. In every explanation the MVC pattern was used to
explain the architecture. And so the Model 2 architecture, the MVC
pattern, and the Struts framework were all conceptually muddled in the
minds of a generation of web developers.
And then Rails was the new hotness
Another half-decade later when Rails burst onto the scene, MVC was
taken for granted as the de facto best practice for web application
architecture. A new generation of programmers were introduced to web
applications and MVC-as-architecture and The Rails Way at the same
time.
What's wrong with MVC and Model 2 for web applications?
MVC originally lived in a Smalltalk image, which is sorta like that
virtual machine you have up in the cloud running your modern web
applications. Only it was a lot less complicated. Importantly the M
and the V and the C were all living in the same image. When messages
were passed between the different components in an MVC pattern in
Smalltalk, the messages didn't have far to go.
Model 2 by contrast was full of network latency because it grew up
when Sun was trying to sell hardware into the enterprise. There were
browsers on the desktop and there was middleware running java on
expensive hardware, and then a database (probably Oracle) running on
another bit of expensive hardware.
Web frameworks have been contending with two key pieces of friction
for the past decade or so. On the client side there's the
statelessness of HTTP and on the back end there's the
object-relational mapping to get data back and forth from a pile of
object-oriented business logic on the server into a pile of relational
algebra in the database.
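That back-end friction can be sketched in a few lines (a hypothetical Ruby illustration, not modeled on any particular ORM): objects on one side, rows of column values on the other, and glue code shuttling data between them.

```ruby
# Hypothetical sketch of the object-relational shuttling described above.
# The object-oriented side:
class User
  attr_reader :name, :email
  def initialize(name, email)
    @name = name
    @email = email
  end
end

# The relational side: a row is an ordered tuple matching a column list.
USER_COLUMNS = %i[name email].freeze

def to_row(user)
  USER_COLUMNS.map { |column| user.public_send(column) }
end

def from_row(row)
  User.new(*row)
end

row = to_row(User.new('Ada', 'ada@example.com')) # ["Ada", "ada@example.com"]
user = from_row(row)
puts user.email
```

Real ORMs pile identity maps, lazy loading, dirty tracking, and SQL generation on top of this shuttling, which is exactly where the friction accumulates.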
MVC in Smalltalk suffered from neither of those key problems. Data
were persisted within the image right alongside the class
definitions, and the View and Controller were in direct and very
close communication with the Model.
Ever since the Model 2 architecture co-opted MVC, Model has come to
mean some object-relational mapping, View is something from the
menagerie of templating languages, and the Controller... Ahh the
Controller as a term is meaningless. No, it's worse than that.
Controller is actively destructive. I know exactly
what a Controller is, and so do you. But my Controller is different
from your Controller. We're using the same word to describe
completely different things. The only common ground we have is that
we know there's something between the Model and the View.
MVC is not an architecture and neither is your framework
MVC is a pattern. It's beautiful and full of wisdom. It's an
exceptionally good example to teach the principle of separating
concerns. But the co-opting of MVC into an architectural framework
effectively blinded us to the principles and left us with software
dogma. And such powerful dogma that the Rails revolutionaries
embraced the dogma wholesale even as their rhetoric railed against
excessive ceremony and dogma in the java community.
If you looked at a typical rails app you'd think that MVC and
ActiveRecord were the only design patterns you need. And as
applications have grown from simple foundations in Rails into
enterprise-sized beasts, we hear about developers reaching for Plain
Old Ruby Objects to speed up their test suites. There's buzz about
refactoring away from fat controllers and fat models. Rails apps have
become much of what the framework originally opposed.
What's more insidious is the pervasive use of object inheritance in
web frameworks. Design Patterns has been in print for almost two
decades and itself summarized wisdom from the previous two decades of
object-oriented design. A core principle espoused therein is to
prefer composition to inheritance and yet frameworks continue to
recommend their developers inherit. This and a database schema is all
you need: class Post < ActiveRecord::Base; end
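For contrast, a composition-leaning version of that one-liner might look like this (a hypothetical sketch, not idiomatic Rails): the domain object stays plain, and persistence is a collaborator it is composed with rather than a base class.

```ruby
# Hypothetical sketch: composing persistence instead of inheriting it.
# Post knows nothing about storage.
class Post
  attr_reader :title, :body
  def initialize(title, body)
    @title = title
    @body = body
  end
end

# A separate store handles persistence; a database-backed implementation
# could be swapped in without touching Post at all.
class InMemoryPostStore
  def initialize
    @rows = []
  end

  def save(post)
    @rows << { title: post.title, body: post.body }
    post
  end

  def count
    @rows.size
  end
end

store = InMemoryPostStore.new
store.save(Post.new('Hello', 'world'))
puts store.count # => 1
```

The trade-off is more ceremony up front, which is precisely what `ActiveRecord::Base` optimizes away; the pattern literature argues the flexibility is usually worth it as applications grow.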
Yep, Rails apps are a tangled mess. Let's switch to the New Hotness.
Backbone.js, Ember.js, Node.js, and Meteor are a few examples.
There's buzz around various functional languages: Scala, Erlang,
Haskell, and Clojure, for example. But really, why bother with the
web anymore when there's Android and iOS? As always there are a lot
of options in the category of new hotness.
Nevermind me and my war stories. Just make the switch and start
speaking at conferences. You can re-invent Rails in a new language as a fast path
to join the next generation of thought leaders.
For a few weeks I've been experimenting with some new (to me) HTML5 APIs for multitouch events, device orientation, and device motion. I'm planning to work these into my turtle graphics implementations, but needed to understand what information these sensors provide.
I'm trying to fix a bug in the popular history of computing.
We know Alan Turing and his role deciphering Enigma codes in WWII. But can you name anyone who secured the Allied communications? Why were the Axis unable to decipher our codes? The heroes who secured Allied communications were bound to secrecy while the history of computing was being written. Ironic: their success in keeping secrets has kept their role secret too.
Over the holidays in 2001 I met Sarah's extended family for the first time. I was introduced as a computer programmer to her grandfather, Ralph Miller. "What do you know about the Internet?" he asked like the opening question in an oral exam. Over the next couple hours I remember feeling like we were modems trying various ways to handshake. He was speaking the telecom jargon of an electrical engineer who'd been retired for 20 years. I was speaking with the limited telecom knowledge left over from configuring Ascend and Cisco routers with frame relay and ISDN lines five years earlier. It was hard to find common ground.
"Grandpa's telling Eric how he invented the Internet."
"Oh good. Maybe he'll be able to explain it to the rest of us."
In retrospect, I'm profoundly lucky to have had regular conversations over the past decade with one of the pioneers of the digital age. He didn't invent the Internet. It was more fundamental than that. Ralph worked at Bell Labs on the team which created the X System, as it was called at Bell Labs, known as SIGSALY when it was in service. The National Security Agency has heralded it as the start of the digital revolution. But it and its engineers need a promotion in the popular history.
In that first conversation with Ralph, there was one point that sticks in my mind. It was one of the few places where we had common language. He said "they brought in a hot-shot kid from MIT to try to break the code. What was his name?" It was a name I'd heard before. "Shannon. Claude Shannon." After a moment of reflection he added, "He never could break it."
I think what struck me most was his tone of voice. He completely lacked the sense of reverence I'd always heard from people talking about Claude Shannon. Here was my fiancée's grandpa describing one of the demigods of computing as a bright kid, a math wiz who'd nevertheless been beaten by a math problem. There are other names Ralph reveres: R. C. Mathes, R. K. Potter, and H. W. Dudley. But Claude Shannon was just a youngster in Ralph's eyes, given too much credit as an individual for work that was created by a very high-performing team.
The cypher used in SIGSALY was a one-time pad. Shannon ended up writing a proof that the one-time pad is unbreakable. Part of the reason Shannon's initial publications on cryptography and information theory were so complete is because he'd been involved in analyzing the most ground-breaking secret communications system of the day -- a system that would remain a tightly guarded military secret for another thirty years. The implementation and essential innovation came first. The groundbreaking theory came second. But the popular history of computing was written while the implementation was still under wraps.
On Friday, Talk of the Nation interviewed Jon Gertner about his new book The Idea Factory: Bell Labs and the Great Age of American Innovation. The chapter on Shannon is a perfect example of popular history missing this key part of the story. Although the rest of the world was taken by surprise by the insights in "Communication Theory of Secrecy Systems" and "A Mathematical Theory of Communication", neither Ralph nor his colleagues were. For them, Shannon had captured the common knowledge among the engineers involved in the project.
There are a few details which Gertner gets wrong about Pulse Code Modulation (PCM), by the way. On page 127 he writes "Shannon wasn't interested in helping with the complex implementation of PCM -- that was a job for the development engineers at Bell Labs, and would end up taking them more than a decade." On the contrary, a patent for PCM was filed in 1943 by Ralph and his assistant Badgley. From the outset of the project Bell Labs were looking for a way to combine the Vernam cypher, the one-time pad which had been devised for telegraph encryption, with H. W. Dudley's vocoder which could compress and synthesize speech. PCM was the inflection point. Analog to digital. Once the signal was digital it could be combined with a random key, the essential ingredient for unbreakable encryption. It's not so much that Shannon wasn't interested. That work had already been done by mere "development engineers".
Here's the other interesting part. Ralph turned 105 in March. I'll have a chance to visit with him again this summer. Got any questions for him? Imagine you could talk to someone like Turing or von Neumann or Shannon. What would you want to know?
Here's a list of inventors and their patents related to SIGSALY. These are some of the unsung heroes.