Identifying a problem does not fix it
Take a second and think about that simple statement. Identifying a problem does not fix it. Duh, right? Anyone with a brain in their head can understand that pretty easily. It’s not a difficult concept to grasp. But it is profound in the realm of cybersecurity. Because if you really take a minute and think about what we spend most of our time doing in this space, we identify a lot of problems, but we don’t necessarily fix them. Let me explain a bit more here.
In cyberspace we have about every kind of technology that can be thought of to find and identify problems. We have a multi-billion-dollar market built solely around the identification and analysis of threats (ever heard of a SIEM?). We have collective intelligence operations in the public space that rival those of many nation-state governments, and we have a variety of tools and technologies available to find compromised passwords and excessive access. We can basically find anything and everything that is “wrong” with our systems. Seriously, we can.
If we have all that capability and we can collectively identify the problem, then why aren’t we better at this? If a SIEM is a multi-million-dollar solution whose sole reason for existence is to find and identify problems, then shouldn’t all of those companies that have been breached have been made aware of the problem before it became an issue? If we can identify the problems, and we have all this capability to “know” what is taking place and to “see” where the threat will likely operate, then why do we still have breaches?
The answer is that we know where the problems are; fixing them is not always easy.
With all of this capability and all this amazing AI-powered super technology, we can figure out what is wrong. We can. But the real issue is that even when we do, most of the systems in which we identify a problem will be actively in use, and are likely part of something critical to the business. So we will know where the issue is, but because of the need, or perceived need, to keep that system up and “not mess with it,” we will leave that issue right where it is and try to “get back to it later.” This plays right into the hands of the bad guys: the hackers, the nation-states. They know we do this; we have done it for years. They know that as long as a system is critical or needed for business, regardless of whether it really is, that system will stay up and online. They prey on this fact.
Cybersecurity is an engineering discipline, if you ask me. Just like any other engineering discipline or practice, there are “physics” that one must deal with. Consider aerospace engineering, specifically the airline industry. They too are really good at knowing where there are problems. The airline industry spends billions of dollars and thousands of man-hours working to identify problems and keep its customers as safe as possible. But it also operates under the reality that knowing of a problem and fixing a problem are not always on the same timeline. Consider the recent issues that have been noted around the Boeing 777 models powered by Pratt & Whitney PW4000-112 engines.
The most recent engine failure, from metal fatigue in the fan blades of that engine, is not “new” or unknown to Boeing. As a matter of fact, another Boeing plane running Pratt & Whitney engines also dropped engine parts after a midair explosion over the Netherlands on the very same day as the 777 issue in Colorado. That incident involved a Boeing 747 cargo plane powered by Pratt & Whitney PW4000 engines, which are actually a smaller version of those on the United Airlines Boeing 777 involved in the Denver incident. There was also a United Boeing 777 from San Francisco that lost its engine cover and began to shake about an hour from Honolulu in 2018. The plane was able to land safely. In that incident as well, investigators said a broken fan blade caused the failure. Alarmingly, this past December, two fan blades broke off in flight on a Japan Airlines 777-200 with a Pratt & Whitney PW4000-112 engine on a flight from Naha to Tokyo. To put it bluntly, the airline industry knows that there is a problem with these engines; it has “identified a fail point.” But it wasn’t until a video went viral online, showing one of those engines engulfed in flames as the plane it was attached to tried to return to the airport, that the industry and the company finally reacted.
While what we do in cybersecurity is not as immediately pressing as an engine failing on a passenger jet (well, for now at least), we do often act the same way. We know there are problems, we see them, and we can usually understand the impact they might have. We identify the problems well, but just like with the 777 engines, we don’t act until the consequences are potentially cataclysmic.
Identifying a problem, especially after the components are broken and “on fire,” does not fix it. It just says there is a problem, that we know about it, and that we have a new blinking light on our SIEM. Fixing a problem like that requires action, and it means the business might be impacted negatively for a period of time while a fix is applied. But isn’t that better than waiting and crossing your fingers in the hope that things don’t “explode”?