
Visibility In The Era Of Multi-Cloud


By Nikola Kukoljac, Head of Solutions Architecture


For years, industry professionals across our region built environments like fortresses, with layers of security walls, moats, and techniques to protect the most valuable assets. We, the security teams, were comfortable in our data centers. We were equipped with a deep understanding of the underlying technology and had experience, extensive documentation, and books to guide us through best practices. We understood the risks, we gained more visibility with every passing year, and we had time to develop incident response policies and procedures in case of need. The pandemic significantly changed the way UAE- and GCC-based organizations perceive cloud and cloud-native technology. For us in the region, 2020 and 2021 were the “cloud years” – the years when we adopted cloud overnight.

There is no doubt that legacy monolithic applications can run on cloud infrastructure. Nevertheless, organizations across the region have recognized that this approach cannot take advantage of the agility and flexibility of cloud resources and services. It is not surprising then that, according to various market research reports, in order to increase application resiliency and scalability and to optimize cost, almost three quarters (75%) of all enterprises have a multi-cloud strategy, and at least half (50%) of those plan to use public and private cloud environments simultaneously.

As customer cloud environments rapidly expand, solutions inevitably generate a significant volume and diversity of data that requires consistent monitoring. Unfortunately, due to the fast adoption of cloud technologies in our region, many organizations underestimated the effort required to strategize, plan, and implement monitoring across multi-cloud environments – both from a technology perspective and from the perspective of developing internal skillsets. As we have learned over the years, the incident response process requires more than a notification that an incident occurred. It also requires the ability to work intelligently with data in order to perform a successful forensic investigation: which systems and users were involved in the incident, when did it happen, what were its magnitude and duration, and what impact did it leave on the organization?

Let’s face it – achieving visibility over cloud environments is complex enough even when there is only one cloud. In hybrid or multi-cloud environments, organizations can typically only view each cloud’s workloads separately, each with its own way of representing logs and alerts. Ideally, we would view all cloud environments from a single pane of glass. In reality, many organizations end up with multiple monitoring solutions, each with its own log storage, analytics, and dashboards. The diversity of these tools and the lack of a unified view of events make it harder to identify attackers and lead to significantly higher incident response times compared to traditional on-premises environments.
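To illustrate the unification problem, the sketch below normalizes audit events from two per-cloud feeds into one common schema and a single timeline. The field names loosely follow AWS CloudTrail and Azure Activity Log conventions, but the exact mappings are illustrative assumptions, not a reference implementation.

```python
# Minimal sketch: normalizing audit events from two cloud providers into one
# common schema, so a single query layer or dashboard can cover both.
# Field names approximate AWS CloudTrail and Azure Activity Log records;
# treat the mappings as assumptions for illustration.

COMMON_FIELDS = ("timestamp", "actor", "action", "source_ip", "provider")

def normalize_aws(event: dict) -> dict:
    """Map a CloudTrail-style record onto the common schema."""
    return {
        "timestamp": event["eventTime"],
        "actor": event.get("userIdentity", {}).get("arn", "unknown"),
        "action": event["eventName"],
        "source_ip": event.get("sourceIPAddress"),
        "provider": "aws",
    }

def normalize_azure(event: dict) -> dict:
    """Map an Activity Log-style record onto the common schema."""
    return {
        "timestamp": event["eventTimestamp"],
        "actor": event.get("caller", "unknown"),
        "action": event["operationName"],
        "source_ip": event.get("callerIpAddress"),
        "provider": "azure",
    }

def unify(aws_events, azure_events):
    """Merge per-cloud logs into one timeline, ordered by timestamp."""
    merged = [normalize_aws(e) for e in aws_events] + \
             [normalize_azure(e) for e in azure_events]
    return sorted(merged, key=lambda e: e["timestamp"])
```

In practice this normalization layer is what a SIEM or cloud-native monitoring pipeline provides; the point is that without a common schema, cross-cloud correlation during an investigation has to be done by hand.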

At the same time, the adoption of serverless technologies has introduced an additional layer of complexity: instances and microservices can be spun up and down dynamically, often across multiple cloud providers and hybrid environments, making the correlation of data during incident response more challenging. The same serverless technologies (FaaS and containers, for example) have changed the way malicious threat actors strategize attacks against organizations. Put simply, due to the temporary nature and short lifespan of serverless environments, attackers find it difficult to persist and hide inside systems, leading them to change their tactics. A good example of a new methodology is the “Groundhog Day” attack – a short-lived attack that, for instance, exfiltrates only a few credit card numbers from a database every time a function is executed.

Only a handful of industry professionals can claim that their organizations have achieved a satisfactory level of visibility across cloud environments and the readiness to identify security threats in a timely manner. With this in mind, it comes as no surprise that digital transformation and cloud security teams have identified visibility across applications and data traffic in public clouds as their number one priority, or that many organizations are choosing to offload these challenges to trusted security providers. Help AG continuously invests significant effort in upskilling its existing engineers, consultants, and analysts, as well as in identifying and attracting new talent that understands cloud-native technologies. Understanding logs and alerts, knowing how public/private cloud and container networking operate, how functions are written and executed, where logs and alerts are stored, and which APIs and integrations to use for extracting them – these are the prerequisites for identifying an appropriate monitoring strategy and the right solutions. The same knowledge is required for creating monitoring use cases, implementing technology integrations, building incident response playbooks, and identifying technologies and methods for validating the effectiveness of the detection and protection technologies and processes in place.
