SECURITY AND THE LIFE CYCLE OF IT SOLUTIONS

Although the importance of cybersecurity is talked about more and more, it is worth noting that the high-profile incidents that reach public awareness through the media are still just the tip of the iceberg. Cyberattacks show a steady growth trend and have become a daily part of the increasingly demanding work of IT systems managers.

From the point of view of the end user, cybersecurity means using strong passwords and two-factor authentication, having basic knowledge of cyber hygiene, being attentive to where and how you share your data, knowing how to differentiate between secure and insecure devices and environments, and so on. That is already quite a lot, but it makes up only a small (albeit very important) part of cybersecurity as a whole. At the other end of the spectrum, we find various service providers, such as IT companies dedicated to infrastructure solutions. The precise responsibilities of an infrastructure provider depend on their exact field, but in the case of cloud solutions, they are responsible not only for the security of the rooms and the hardware, but also for the virtualisation layer, the internal networks, ensuring the necessary computational power, and so on.
Between those two ends of the spectrum lies the most important actor: the company that “owns” a specific IT service, whether that is the company’s website, e-mail service, online store, CRM solution, or something else. The decisions made by these companies determine how much the security efforts made at either end of the spectrum even matter.
Infrastructure providers are in the best position here, since they see the broadest picture and encounter different attacks and other security issues on a daily basis. At the same time, this has created the misunderstanding that if an infrastructure provider contributes enough and creates a secure environment, then everything is fine and all systems are automatically secure. In reality, that is unfortunately not the case, because with any service the responsibility is shared – the infrastructure provider and the end user can each protect only themselves or the service they provide, and only up to a certain extent. This means, for example, that there is little use in the end user behaving securely if the e-service they are using has not ensured the security of its own IT systems.

Misunderstandings and hopes

Every day, at least 300,000 new malware samples (viruses, ransomware, spyware, trojans, etc.) are created, 30,000 websites are hacked, a new cyberattack is launched every 39 seconds, new security issues are discovered in systems that are already in production, and attack vectors keep getting smarter. Amidst all that, finding and fixing a vulnerability takes 314 days on average – roughly seven months to find it and another four to patch it. With smarter attack vectors, attackers can increasingly automate their work – the most common vulnerabilities are no longer exploited by people but by bots built to scan huge numbers of IT systems 24/7 and to exploit any vulnerability they find immediately. Most of the time, the victims are not even aware of it right away, since the freshly compromised system is put up for sale on the black market, added to a huge botnet, or both. It is therefore not surprising that some vulnerabilities go unpatched for years.
These thought-provoking statistics do not concern only big, wealthy companies (in 2019, 47% of all cyberattacks were aimed at small companies), and the trend is rising. Why is that the case, even though cybersecurity is talked about more and more and new cautionary examples appear all the time? In our experience, one of the most common causes of cyber incidents is a false understanding of the life cycle of IT solutions. Too often, we see companies setting up new solutions or ordering the development of one, and as soon as the solution has been deployed, the work simply stops. But that is precisely the point at which the work on security should start!
Sometimes the life cycle of a solution gets stuck because of ignorance, at other times because of the desire to keep costs down. In the latter case, people often fail to account for the costs that will be incurred if the worst should happen. Unfortunately, those are always bigger than any investment in security would ever have been. And generally speaking, the question is not whether something unexpected will happen at all – the daily news coverage makes that much evident. The real question is when it will happen.

Behind the scenes work and constant maintenance

IT solutions should be treated as a continuous process that is not over until the system has fulfilled its purpose and is shut down. During that whole time, it must be ensured that the software, the servers that host it, and the surrounding infrastructure are all kept up to date and managed according to the latest best practices. While this is not costly or complicated, it does require an understanding of the environment that people and IT systems operate in and of how everything around us can affect us.
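To make “kept up to date” a little more concrete, here is a minimal sketch of an automated check for pending operating system updates. It assumes a Debian/Ubuntu host with the standard apt tooling; the alerting (here, just printing) is a placeholder – in a real setup a check like this would run on a schedule and feed into whatever monitoring is already in place.

```python
# Minimal sketch: report packages with pending updates on a
# Debian/Ubuntu host. Assumes the standard `apt` CLI is available.
import subprocess

def pending_updates() -> list[str]:
    """Return the names of packages that have newer versions available."""
    out = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True, text=True, check=True,
    ).stdout
    # The first line of `apt list` output is a "Listing..." header; skip it.
    return [line.split("/")[0] for line in out.splitlines()[1:] if line.strip()]

if __name__ == "__main__":
    packages = pending_updates()
    if packages:
        print(f"{len(packages)} package(s) need updating: {', '.join(packages)}")
    else:
        print("All system packages are up to date.")
```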
Risk management means that various methods must be considered, monitored, and implemented, starting with the infrastructure, which is an inseparable part of any IT system. Before implementing one, you should always consider which infrastructure option carries the lowest risk. It is clear that keeping up with security threats requires investment in hardware and software as well as in systems management. This means the first thing to find out is whether an infrastructure provider is even capable of making such investments and whether they can prove it. For example, if an organisation is choosing between setting up its own server (or a whole server park) and buying the service from an experienced and proven provider, there is no doubt that it cannot compete with the investment volumes of huge global providers (e.g. Microsoft invests a billion dollars annually in its cloud technology security) or with the strict security audits they undergo regularly.
Once an infrastructure has been chosen, the next step is preparing it. Although the exact list of tasks depends on the infrastructure, it should cover not only the technical work but also the processes: how the infrastructure and the solution it will host will be monitored and managed in the future, and who will be responsible for it. As with any other kind of security, it is important to know all the links in the chain and to ensure that they work together.
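To make the monitoring part of that process less abstract, here is a minimal sketch of a recurring availability check – the kind of thing that should be agreed on before a solution goes live. The URL, timing thresholds, and alert hook are made-up placeholders for illustration.

```python
# Minimal sketch of a recurring availability check.
# SERVICE_URL and the thresholds below are illustrative placeholders.
import time
import urllib.request
from urllib.error import HTTPError, URLError

SERVICE_URL = "https://example.com/health"  # hypothetical health endpoint
CHECK_INTERVAL = 60                         # seconds between checks

def alert(message: str) -> None:
    # In a real setup this would notify whoever is agreed to be responsible.
    print(f"ALERT: {message}")

def check_once(url: str) -> None:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10):
            elapsed = time.monotonic() - start
            if elapsed > 2.0:
                alert(f"{url} is slow: answered in {elapsed:.1f}s")
    except HTTPError as exc:
        alert(f"{url} returned HTTP {exc.code}")
    except URLError as exc:
        alert(f"{url} is unreachable: {exc.reason}")

if __name__ == "__main__":
    while True:
        check_once(SERVICE_URL)
        time.sleep(CHECK_INTERVAL)
```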
Along with the infrastructure, you also need to inspect the software. Both packaged off-the-shelf solutions and unique solutions built for a company deserve sharp attention to detail. Of course, software is always chosen based on business needs, but that should never mean compromising on security. I recommend always doing your homework and comparing different software not only in terms of the functionality offered, but also from the point of view of security. For example, it is worth looking at how active the technical support is, how (and how quickly) the creator of the software has reacted to cyber incidents in the past, and whether the software has undergone a security audit by a trustworthy third party. In the case of free software, you should also look into how active the community developing it and its leaders are.
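Part of that homework can be automated. As one possible approach, here is a rough sketch that pulls a few community-activity signals for a free-software project from GitHub’s public REST API; the repository name is a hypothetical placeholder.

```python
# Rough sketch: fetch basic activity signals for an open-source
# project via GitHub's public REST API (unauthenticated, rate-limited).
import json
import urllib.request

def repo_activity(owner: str, repo: str) -> dict:
    url = f"https://api.github.com/repos/{owner}/{repo}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    return {
        "last_push": data["pushed_at"],          # when code last changed
        "open_issues": data["open_issues_count"],
        "stars": data["stargazers_count"],
        "archived": data["archived"],            # archived = no longer maintained
    }

if __name__ == "__main__":
    # Hypothetical candidate component under evaluation.
    print(repo_activity("someorg", "somelib"))
```

Signals like these are only proxies, of course – a quiet repository can still be well maintained – but a project that has been archived or untouched for years is a clear warning sign.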
In the case of custom-made software, cooperation with the development partner should not end at handover. Instead, it should continue in the form of regular maintenance that covers both updating the software’s components and being ready to update the whole application when the infrastructure is updated. This is necessary because nearly all modern applications use dozens or even hundreds of third-party components which, while helping to create well-functioning software at a reasonable price, bring increased security risks with them – 98% of the vulnerabilities in WordPress, the world’s most popular content management system, come from its plugins. In other words, every third-party component used in a solution must be monitored constantly, and the company must be ready to react promptly when necessary.
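As an illustration of what that constant monitoring can look like, here is a minimal sketch that asks the public OSV.dev vulnerability database whether a pinned dependency has known advisories. The package name and version are illustrative; in practice you would run this (or an equivalent audit tool) over the project’s full dependency lockfile as part of regular maintenance.

```python
# Minimal sketch: query the public OSV.dev API for known
# vulnerabilities affecting one pinned third-party component.
import json
import urllib.request

OSV_URL = "https://api.osv.dev/v1/query"

def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list[str]:
    query = json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }).encode()
    req = urllib.request.Request(
        OSV_URL, data=query, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        result = json.load(resp)
    # An empty response body means no known advisories for this version.
    return [vuln["id"] for vuln in result.get("vulns", [])]

if __name__ == "__main__":
    # Hypothetical pinned dependency taken from a project's lockfile.
    ids = known_vulnerabilities("requests", "2.19.0")
    print(f"Known advisories: {', '.join(ids) if ids else 'none'}")
```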

Everything else is just as important

A whole range of other critically important components lies between the two big blocks of infrastructure and software, and these also need constant management: network layers, operating systems, and the various supporting services and applications that a specific IT solution needs in order to work. Unfortunately, this part is often left unattended, even though this was precisely what allowed the WannaCry ransomware to spread so easily in 2017, crippling work in large corporations, hospitals, state institutions, and elsewhere. The damage it caused has been estimated at hundreds of millions, if not billions, of dollars.
To ensure security, the changing attack vectors must be monitored and preventative defence measures implemented against them. Luckily, these activities can be partly automated, but the human factor remains very important – someone must stay up to date on what is currently being done and what is planned, monitor the infrastructure, services, and applications, and carry out activities that reduce security risks.
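As a small example of “partly automated”, the sketch below scans an SSH authentication log for repeated failed logins and flags source addresses worth blocking. The log path, pattern, and threshold are assumptions for a typical Linux host; dedicated tools such as fail2ban do this properly and continuously.

```python
# Small illustration of partly automated defence: flag IP addresses
# with repeated failed SSH logins in a typical Linux auth log.
import re
from collections import Counter

LOG_FILE = "/var/log/auth.log"   # typical Debian/Ubuntu location (assumption)
THRESHOLD = 5                    # failed attempts before an IP is flagged
FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

def suspicious_ips(path: str) -> list[tuple[str, int]]:
    counts = Counter()
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = FAILED.search(line)
            if match:
                counts[match.group(1)] += 1
    return [(ip, n) for ip, n in counts.most_common() if n >= THRESHOLD]

if __name__ == "__main__":
    for ip, n in suspicious_ips(LOG_FILE):
        print(f"{ip}: {n} failed SSH logins - candidate for blocking")
```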
In the case of simpler solutions, the client can do this themselves. For example, WordPress, Magento, and other free applications with no special functionality are usually hosted on shared virtual servers (where the operating system layer is also managed by the service provider), and it is the service provider who must put in the effort to secure the systems “below” the software. The client, in turn, is responsible for securing the application. The least that should be done in such cases is to log in to the management interface on a regular basis and make sure that everything has been updated.
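That bare-minimum duty can even be nudged along from the outside. The sketch below checks what core version a WordPress site publicly advertises via its default “generator” meta tag; the site URL and expected version are placeholders, and since many sites sensibly remove that tag, a missing match means “unknown”, not “fine”.

```python
# Hedged sketch: check the WordPress version a site advertises in its
# default "generator" meta tag. Many hardened sites strip this tag.
import re
import urllib.request

SITE = "https://example.com"   # placeholder site URL
EXPECTED = "6.5"               # hypothetical current major version

def reported_wp_version(url: str) -> str | None:
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    match = re.search(r'<meta name="generator" content="WordPress ([\d.]+)"', html)
    return match.group(1) if match else None

if __name__ == "__main__":
    version = reported_wp_version(SITE)
    if version is None:
        print("Version not advertised - check the admin dashboard instead.")
    elif not version.startswith(EXPECTED):
        print(f"Site reports WordPress {version} - updates may be pending.")
    else:
        print(f"Site reports WordPress {version}.")
```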
In the case of more complex solutions, it should be agreed who is responsible for regularly monitoring and managing the solution and for being ready to react to incidents 24/7. Of course, some companies do have the capacity to handle this themselves, but they are the exception, not the rule. Building that kind of capability and then keeping the ball rolling is generally not an economically sound decision. In such cases, there is also often the temptation to ignore the risks and hope that “surely, it will not happen to me”. Instead of relying on luck, companies should take these risks seriously and find a partner who will take over management of the solution and help prevent and reduce risks.

Klemens Arro
CEO
ADM Cloudtech

Published: https://digipro.geenius.ee/rubriik/uudis/klemens-arro-turvahoiatus-suhtumine-et-vaevalt-et-see-minuga-juhtub-maksab-katte/