Say yes to cyberimmunity and no to fear

To build a safer future, we need to stop fearing and start immunizing.

I’ve been in the cybersecurity industry for more than 15 years. During that time, together with other infosec veterans, I experienced the rise of FUD (fear, uncertainty, and doubt) hype firsthand. I have to admit, it worked. Neuromarketing science got that one right. Fear really did help sell security products. Like any strong medicine, however, FUD had a side effect. Not just one, actually; it had many.

We as an industry cannot escape FUD because we’re addicted to it. For us, FUD manifests itself in some of our customers demanding proof that what we’re telling them about is not just another potential breach but a real danger. Unfortunately, the best proof that a danger is real comes when something bad happens. And that’s why the media got addicted to FUD as well. The more millions of dollars — or euros, or whatever other currency — someone loses, the better the story.

Now, enter the regulators, with their tendency to overreact and to impose strict compliance regulations and fines. That effectively puts security researchers, product developers, marketers, media, and regulators into a strategic trap that game theory calls the prisoner’s dilemma: a situation in which every player sticks with a collectively suboptimal strategy because deviating unilaterally would leave them worse off. In the case of the infosec industry, that suboptimal strategy means generating even more FUD.
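To make the game-theory framing concrete, here is a purely illustrative sketch in Python. The strategy names and payoff numbers below are invented for the example rather than drawn from any real market data; they only demonstrate the structure of the trap: producing more FUD is each player’s best response, even though mutual restraint would leave everyone better off.

```python
# Purely illustrative: a toy payoff matrix showing why the FUD trap has the
# structure of a prisoner's dilemma. The payoff numbers are invented for this
# example and do not come from real market data.

# Two players (say, competing vendors) each pick a strategy:
# "restraint" (measured messaging) or "fud" (fear-driven marketing).
# Payoffs are (player A, player B); higher is better.
PAYOFFS = {
    ("restraint", "restraint"): (3, 3),  # a healthier market built on trust
    ("restraint", "fud"):       (0, 5),  # the restrained vendor loses attention
    ("fud",       "restraint"): (5, 0),
    ("fud",       "fud"):       (1, 1),  # everyone drowns in fear-mongering
}

def best_response(opponent_strategy: str) -> str:
    """Return the strategy that maximizes my payoff against a fixed opponent."""
    return max(
        ("restraint", "fud"),
        key=lambda mine: PAYOFFS[(mine, opponent_strategy)][0],
    )

if __name__ == "__main__":
    for opponent in ("restraint", "fud"):
        print(f"If the other side plays {opponent!r}, "
              f"my best response is {best_response(opponent)!r}")
    # Both lines print 'fud': more FUD is individually rational,
    # yet (fud, fud) pays (1, 1), worse for both than (3, 3).
```

Run it and both best responses come out as “fud,” which is exactly why escaping the trap means changing the game itself rather than blaming the individual players.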

To break out of this trap, we need to understand one thing: the future cannot be built on the basis of fear.

The future I’m talking about is not distant; it’s already here. Robots are already driving trucks and roaming around Mars. They write music and create new recipes for food. This future is far from perfect from many perspectives, including that of cybersecurity, but we’re here to empower it, not to hinder it.

Eugene Kaspersky recently said that he believes “the concept of cybersecurity will soon become obsolete, and cyberimmunity will take its place.” That may sound bizarre, but there is a deeper meaning behind it that is worth explaining. Let me dig a little further into the concept of cyberimmunity.

Cyberimmunity is a great term for explaining our vision of a safer future. In real life, an organism’s immune system is never perfect, and viruses or other harmful microbiological agents still find ways to fool it, or even to attack the immune system itself. However, immune systems share a very important trait: they learn and adapt. They can be “educated” about possible dangers through vaccination. In times of peril, we can assist them with ready-made antibodies.

In cybersecurity, we used to deal mostly with the latter. When our customers’ IT systems succumbed to infection, we had to be ready with solutions. But that’s also when the addiction to FUD started, with security vendors providing ready relief for painful diseases. That “superpower” feeling proved addictive to infosec vendors. We were like, “Yes, it’s time for hardcore antibiotics, because, trust us, the problem really is that serious.” But hardcore antibiotics make sense only when the infection has already clawed its way in, and that, we can all agree, is far from an ideal scenario. In our cybersecurity metaphor, it would’ve been better if the immune system had stopped the infection before it took hold.

Today, IT systems have become highly heterogeneous, and they cannot be viewed outside the context of the humans who operate the devices and interact with them. The demand for “educating the immune system” has grown so great that we are seeing a trend toward prioritizing services over even the product, which used to come first. (The “product” nowadays is in many cases a customized solution, adapted to the specifics of the IT system it’s designed to fit into.)

This understanding didn’t come all at once. And just as with vaccination, it’s not a one-shot approach but rather a series of doses, all aimed at the same goal: stronger cyberimmunity for a safer future.

First and foremost, a safer future can be built only on a safe foundation. We believe this is possible when every system is designed from the start with security in mind. Real applications in the telecommunications and automotive industries are already putting this approach to the test. Carmakers are especially keen on safety, which makes our mission of “building a safer world” all the more critical there: in the automotive world, security really means safety.

As with biological vaccination, we expect the cyberimmunity concept to be met with skepticism. The very first question I’d expect to hear is: “Can we really trust the vaccine and its vendor?” Trust in cybersecurity is of paramount importance, and we believe that simply giving our word is not enough. If a cybersecurity firm’s clients want to verify the security and integrity of its software, they have every right to demand proof in the form of source code. We make that available, and all clients need is a pair of attentive eyes and a PC to analyze how things work. We do require a sanitized PC for that code review, however, to ensure that observers can’t tamper with the code themselves. And just as you might seek consultations from several doctors, it also makes sense to have a trusted third party review the code. With IT solutions, that outside reviewer could be representatives of a Big Four auditing firm who can explain what those bits and bytes actually mean for your business.

Another important component is the immune system’s ability to withstand attacks against itself. Cybersecurity software is still software, and it can have flaws of its own. The best way to uncover those flaws is to expose the software to white-hat hackers, the ones who find vulnerabilities and report them back to vendors. The idea of offering a prize for finding a bug in software, first introduced in 1983, was absolutely brilliant, as it greatly reduced the financial incentive for black-hat hackers (who exploit the flaws they find or sell them to other cybercriminals). However, white hats demand guarantees that the company they investigate won’t turn on them and prosecute them.

Where there’s demand, there’s supply, so recently we’ve seen proposals for agreements between researchers and companies under which the former can safely try to crack the latter’s systems without fear of being accused of any crime, as long as they follow the rules. I believe that moving in this direction is a step toward a safer future, one with less fear-mongering than the past, but the journey is going to take some time.
