Why do enterprises need a Security Operations Center?

June 14, 2019

Gone are the days when security experts would identify a breach and then take appropriate action to quarantine it. Nowadays, a futuristic approach is required: identify the threat before it occurs, or as soon as possible afterwards. Automation is also advancing so that breaches can be mitigated then and there, without impacting your database or causing real loss of data.

In earlier times, the firewall came to market, followed by IDEA (International Data Encryption Algorithm), which evolved into IBS (Internet Business Suite). With the rise of endpoint security, customers went ahead and invested in antivirus, which grew into comprehensive endpoint security solutions. But there are multiple OEMs (Original Equipment Manufacturers) addressing these specific product lines, whether you talk about network security, application security or, for that matter, endpoint security.

We spoke to Jyoti Prakash, Country Director – Enterprise Security Business, Micro Focus India, to understand the current threat scenarios and the solutions to fight them.

Security Operations Center Roles and Responsibilities

Every vendor builds its own platform with its own console, but the problem that emerged later was that attacks were no longer focused on one specific product line; they combined techniques across technologies into hybrid attacks. For example, if your old system runs on Microsoft and you have invested in a firewall but not in an IPS, a vulnerability may still be open in your unpatched Microsoft software.

Could an injection be done in a way that the breach cannot be detected? That is where the need for a SOC (Security Operations Center) arose: each product in isolation has its own rules and its own isolated dashboard, and no single dashboard connects the product lines.

Later, the industry realised that breaches are not happening because of a gap in one product; they happen because hackers have become more sophisticated and combine vulnerabilities across two technologies in hybrid attacks. Therefore, unless you have a SOC in place that understands and collects logs into one single dashboard, you will never be able to identify whether a breach is happening and, if it is, what its root cause is.

The SOC was built with the objective of consolidating logs in one single dashboard and then correlating them.

A log comes from the firewall, another from endpoint security. Is there a connecting link between the two logs? Is there a similar pattern being followed by one hacker to exploit a vulnerability in your firewall or IPS? Is there a vulnerability at your endpoint?

Hackers try multiple channels to breach your setup. So a SOC has a single correlation engine, along with a single dashboard, which collects logs from many technologies and gives a holistic, real-time view to the end customer. Any organisation that has invested in a SOC will be able to identify whether a data breach is happening or is about to happen. This is why the SOC came into the market, and over time it has become much more intelligent.
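The correlation idea described above can be sketched simply: collect normalized events from several product lines and raise an alert when the same source IP shows up across products within a short time window. This is a minimal illustration; the field names, event values and the 300-second window are invented for the example, not any specific SIEM's schema.

```python
from collections import defaultdict

# Hypothetical normalized events; in a real SOC these would be parsed
# from firewall, IPS and endpoint logs before reaching the dashboard.
events = [
    {"source": "firewall", "src_ip": "203.0.113.7", "ts": 100, "action": "blocked"},
    {"source": "endpoint", "src_ip": "203.0.113.7", "ts": 160, "action": "malware_detected"},
    {"source": "firewall", "src_ip": "198.51.100.2", "ts": 200, "action": "allowed"},
]

def correlate(events, window=300):
    """Flag source IPs seen by more than one product line within `window` seconds."""
    seen = defaultdict(list)          # ip -> [(ts, product), ...]
    alerts = []
    for e in sorted(events, key=lambda e: e["ts"]):
        for ts, product in seen[e["src_ip"]]:
            if product != e["source"] and e["ts"] - ts <= window:
                alerts.append((e["src_ip"], product, e["source"]))
        seen[e["src_ip"]].append((e["ts"], e["source"]))
    return alerts

print(correlate(events))  # the hybrid pattern on 203.0.113.7 triggers an alert
```

The single IP appearing in both firewall and endpoint logs is exactly the "hybrid" pattern that no isolated per-product dashboard would surface.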

Organisations today not only talk about identifying the breach but also about how fast it can be identified. They also discuss how to automate the system so that breaches can be mitigated then and there, without really impacting your database or causing real loss of data. A SOC also helps investigations by uncovering the root cause of a data breach and the financial value of the data that has been compromised. So it has become much more comprehensive, intelligent and sophisticated.

Transforming legacy IT infrastructure to secure the future

COBOL is a legacy platform that has been in the market for a long time, more than 40 years. There are a lot of mainframe customers who might be looking for a different product that can run in parallel with COBOL. The advantage is that if a customer has invested in one of our technologies, we will support that particular product line for as long as the customer wants to use it.

Secondly, we also create scenarios where our legacy product can co-exist with the new product or technology the customer invests in. Hence, we take a lot of pride in conveying this message to the market: we co-exist. The product line we have at Micro Focus is a complete portfolio of Novell, NetIQ, Borland and Serena, along with products like Fortify, Vertica, Mercury, Attachmate, ArcSight and Voltage that we acquired in the past. With this combination, we offer two really important things.

First, anybody who has invested in any of our legacy versions or platforms will always keep getting support for that platform. Second, we make sure that we launch new versions and keep our features updated. So the legacy platform and the new product technologies and versions we launch co-exist, and that is where the value is derived. The customer's investment is secured from day one and throughout their usage of the product line.

Why should a SOC be deployed?

There are four factors that a company will look into before thinking of investing in a SOC.

The first thing you need to understand, from a customer's perspective, is whether the customer has experienced any breach in the past. Second, if a breach happened, did they manage to identify the reason behind it? Third, if data was compromised or lost, what was the financial impact attached to it? And the fourth is obviously compliance, and the cost of being non-compliant.

When you deploy a security product that works as the core platform and backbone of a SOC, and over a period of time you manage to integrate all your critical devices and security devices so that their data flows into the platform, then, provided the platform gives you the right kind of input in real time because you have deployed the right policies, you will be able to identify any breach that happens. And before the data loss happens, you will be able to trigger that alert.

In terms of the identity of an individual: are the people who access the data able to log in or not? Do they create a log within the system or not? If somebody tries to access data he or she is not authorized to, is the solution good enough to flag it as an anomaly, so that action can be taken? These are all metrics-based, I would say. The risk score that this dashboard shares with you can help fine-tune your policy, and you can stop a potential breach that is bound to happen in future.

It is a journey that you need to take, and even after you invest in a SOC, it is much more important to figure out whether your SOC is functioning efficiently or not. The integration of your SOC with multiple products coming from multiple sources, the real-time analysis you get, the visibility you have, the compliance level and the policies you enforce: these are the trigger points that define the efficiency of a SOC and will hence give the exact calculation.

There are a few parts of the SOC offering that can give this exact calculation.

First, if the cost of a breach of specific data is X, and that data is not compromised because the breach was mitigated on time, then X is the money you have saved.

The second portion is the cost of non-compliance with any guidance you need to follow; it is clearly defined in all the regulations that you read. So, if you are compliant, and remain compliant on a quarterly basis, what is the money you have saved?

The third portion is definitely in terms of how efficient you are becoming: making sure that your own environment is monitored, tracked and secured, and that your data is protected.

I think there are a lot of parameters which will help you understand that.
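The three savings components above can be combined into a simple back-of-the-envelope value calculation. All the figures below are invented placeholders for illustration, not industry benchmarks.

```python
def soc_value(breach_cost_avoided, fines_avoided,
              analyst_hours_saved, hourly_rate, soc_annual_cost):
    """Rough annual SOC value: avoided breach cost + avoided non-compliance
    fines + efficiency gains, minus what the SOC itself costs to run."""
    efficiency_gain = analyst_hours_saved * hourly_rate
    gross_savings = breach_cost_avoided + fines_avoided + efficiency_gain
    return gross_savings - soc_annual_cost

# Placeholder figures only.
value = soc_value(
    breach_cost_avoided=500_000,   # portion 1: breach mitigated on time
    fines_avoided=120_000,         # portion 2: staying compliant
    analyst_hours_saved=2_000,     # portion 3: operational efficiency
    hourly_rate=40,
    soc_annual_cost=350_000,
)
print(value)  # 350000
```

A positive result means the three savings portions together outweigh the running cost of the SOC for that year.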

84% of breaches have happened in the application layer. I think there is a huge gap in SOC building owing to the disconnection of logs from the application. So we see a lot of engagement from the customer's side, where they build a Secure DevOps practice but also monitor applications, the data flow, and the actions on that data within the application through their SOC. That is the value that comes from any organisation's SOC investment.

How to identify internal and external threats?

There is always a combination of a few things. We need to understand when somebody is trying to access data that is within your setup, or might be in the cloud. Now, being a mobile user, I can use my laptop to access data while sitting in the office, and access some of the data while travelling as well. Whether the data sits in the cloud or anywhere else, it doesn't matter.

Now, the most important thing we need to understand here is how you establish the identity of that individual. When I say identity, a mobile device can be an identity; a laptop user can be an identity; in fact, so can any user trying to access data from a public network. As long as you can make sure, with certain technologies and tools, that the identity of the individual is established and that the individual is authorized to access the data, you can grant access. The whole concept of BYOD works from that.

So, in the IAM (Identity and Access Management) solutions that we have, we clearly define who that individual is. We can also plug in two-factor or multifactor authentication, to make sure that the person using the username and password is also carrying a second device, or whatever factors you want. We have this feature, called multifactor authentication, by which we authenticate that the right individual is granted access to the data.
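A common form of the second factor described above is a time-based one-time password (TOTP, RFC 6238) generated by a device the user carries. This is a minimal standard-library sketch of how a server-side check might work; it is an illustration of the general technique, not Micro Focus's implementation.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1, 30-second steps)."""
    counter = struct.pack(">Q", at // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # RFC 4226 dynamic truncation
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def verify_second_factor(secret: bytes, submitted: str, now=None) -> bool:
    """Accept the current 30-second window plus one adjacent window for clock drift."""
    now = int(time.time()) if now is None else now
    return any(hmac.compare_digest(totp(secret, now + drift * 30), submitted)
               for drift in (-1, 0, 1))

secret = b"12345678901234567890"      # RFC 6238 test-vector secret
code = totp(secret, at=59)            # code for the RFC's T=59 test time
print(code, verify_second_factor(secret, code, now=59))
```

Only after both the password and a code like this succeed would the policy engine go on to decide which data the now-established identity may touch.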

Secondly, once we have established the identity of an individual, we talk about identity management: managing which data this person is authorised to access. If you have defined your policies correctly, and the identity of the individual is established, that individual will be able to access the data; whether access is read, write or modify is all part of the policy that you establish.
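The read/write/modify policy described above can be sketched as a simple mapping from an established identity to the operations it may perform on each resource. The roles, resources and operations here are made-up examples, not a real product's policy language.

```python
# Hypothetical policy: identity -> resource -> operations granted.
POLICY = {
    "analyst": {"incident_db": {"read"}},
    "admin":   {"incident_db": {"read", "write", "modify"}},
}

def is_authorized(identity: str, resource: str, operation: str) -> bool:
    """True only if the established identity's policy grants this operation."""
    return operation in POLICY.get(identity, {}).get(resource, set())

print(is_authorized("analyst", "incident_db", "read"))   # True
print(is_authorized("analyst", "incident_db", "write"))  # False
print(is_authorized("admin", "incident_db", "modify"))   # True
```

The key point from the interview survives even in this toy form: the decision is made per operation, so the same identity can be allowed to read a record while being denied the right to change it.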

Thirdly, and much more important: can you send that data out of your network? If I am working on data and my identity has been established, am I authorised to use that data? Can I send this data only to two individuals within the organisation? Am I authorised to send it to another forum outside my network? Again, you can enforce this by choosing from multiple tools that come from Micro Focus; these are the things you can do and the policies you can impose.

Now, consider the worst-case scenario: something goes wrong because of malware on my device, an implant has happened, and the data has moved to a destination where it was not supposed to go. In that case, we have a solution from Voltage which makes sure that even when data is breached or has gone into the wrong hands, that individual will not be able to read it. So basically, we manage the complete lifecycle of the flow of data, whether the data is at rest, in use or in transit, and we make sure that data integrity is maintained and the data is accessed only when the right identity is established. Otherwise, if the data gets compromised, it will be garbage to that individual.
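The principle, exfiltrated ciphertext is garbage without the key, can be illustrated with a deliberately minimal toy. This XOR scheme is not real cryptography and is not how Voltage works (Voltage is known for format-preserving encryption); it only demonstrates that protected bytes are unreadable until the key holder reverses the transformation.

```python
import secrets

def protect(data: bytes, key: bytes) -> bytes:
    """Toy XOR transform for illustration only; real products use vetted
    algorithms such as AES or format-preserving encryption."""
    return bytes(b ^ k for b, k in zip(data, key))

record = b"salary=90000"
key = secrets.token_bytes(len(record))  # held only by authorized systems

ciphertext = protect(record, key)          # what an attacker would exfiltrate
recovered = protect(ciphertext, key)       # XOR is its own inverse
print(recovered == record)                 # the right identity gets the data back
```

Without the key, the exfiltrated bytes carry no usable information, which is exactly the "garbage to that individual" outcome the interview describes.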
