Cloud Crunch


S4E4: Cyber Security Risk Mitigation and Compliance Dashboard

ABOUT THIS EPISODE

In today’s episode, we discuss the world of cybersecurity analytics and the "Cyber Security Risk Mitigation and Compliance Dashboard." We are joined by our lead host and Director of Marketing, Michael Elliott, and co-host Fred Bliss, CTO of all things data at 2nd Watch. Our honored guest is Joey Brown, Senior Consultant at 2nd Watch.

Involve, Solve, Evolve. Welcome to Cloud Crunch, the podcast for any large enterprise planning on moving to, or in the midst of moving to, the cloud, hosted by the cloud computing experts from 2nd Watch: Michael Elliott, Executive Director of Marketing, and Fred Bliss, CTO of all things data at 2nd Watch. And now, here are your hosts of Cloud Crunch.

Welcome back to a new season of Cloud Crunch. In this season, we're going to focus on AWS re:Invent, the biggest cloud conference in the world, kicking off in November. Our intent is to give you, the viewer, the opportunity to immerse yourself in how cloud has evolved since last year, on topics like preparing and building a center of excellence, extracting data insights with what many refer to as data analytics, managing a cloud-native environment, and data center evacuation. Joining me today are Fred Bliss, CTO of Data Insights, and Joey Brown, Senior Consultant, both here at 2nd Watch. Welcome to Cloud Crunch, Fred and Joey.

Thanks, Michael, happy to be here. Thanks for having us.

Alright, great. So today we're going to focus on cybersecurity analytics, and I have to admit it's not a subject I'm strong at, so I'm happy to have both of you here to discuss where this is evolving. Because in my background, cybersecurity has always drawn extreme interest, but not a lot of money actually gets put toward doing anything about it until there's an issue. So this should be a really interesting discussion. To start off: what are we talking about here, from a cybersecurity perspective, that hasn't already been talked about by every CISO out there?

What I'm seeing now is that, as part of every data project, we'd always have security as part of it, governance as part of it. For highly regulated industries there would be a component of risk and compliance, but it was mostly checking the box. And that's changing.
And a big component of that is we've got a lot of multinational, global organizations. If you look around the world, at countries like India for example: we've already seen the effects of GDPR over the last couple of years, and in India now there's a lot of regulation changing almost by the day, with data residency and data sovereignty laws. You've got AI ethics and transparency regulations coming out, executive orders coming out. Data privacy regulations are absolutely changing at the state and federal level. And now what we're seeing is organizations actually empowering and putting money behind some of these data privacy teams. So the same data that's being used for security purposes, that might live in a single-purpose SIEM tool, now serves a lot of other purposes, depending on which hat you're wearing. Joey, I'll let you take it from what you're seeing on your side.

Yeah, I think on our side we see a lot of requests like: we want to get a lot of security analytics, we want to get the dashboards, we want to get visualization of things. And sometimes it seems like they want to skip some of the detective controls. Like, can we just get the dashboard without actually collecting any of the logs? They get the cart before the horse some of the time. So you really want to make sure you cover your bases. If you're collecting all of the data that you can possibly get, everywhere, then you're in a position where you can even switch between different ways to do analytics. You can have different dashboards, you can try different products, because the logs are all there. But before you start debating QuickSight versus Tableau versus some other dashboard, you've got to get the data first, right? And in some of the areas of security and compliance, we often talk about them as if they're one thing, and then we tell our clients: you should really think about them separately. Compliance isn't really security. But some of those tools, like you're saying, Fred, are really starting to bleed together. AWS has AWS Audit Manager now, and it'll check a lot of your audit logs and your IAM logs and things like that, and it'll set you up for those compliance reports that you need.
An auditor shows up and says, we want to see these resources and we want to see the history of these resources, and they're already there. So some of the security logging things start to bleed into the compliance side.

Yeah, the whole question of "what tool should I use for dashboards" is really the wrong question, right? It's all about what data we should collect and what signals we care about. Because it's not just dashboards; there are a million different products that you can leverage the same data for. Are you seeing customers that have more than one place where they're collecting security logs?

Honestly, not as often as I would like. I would like it if people had more dashboards, more tools, rather than somebody saying, well, we picked OpenSearch, or we picked this tool, so let's go with that. I wish people had more flexibility: they'd have all the logs in one place and could try different products, instead of being stuck on "we picked this one and it doesn't have this feature." But that's where we're going. All the logs should be in one place, and they should be able to be consumed by whatever tools people want.
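The "one central store, many consumers" idea Joey describes can be sketched as a thin normalization layer: events from different sources get mapped into one common record shape so any downstream tool can read the same data. This is only an illustrative sketch; the schema, field names, and sample events below are hypothetical, not tied to any specific product.

```python
# Illustrative: normalize events from different log sources into one
# shared schema, appended to a single central store. A real pipeline
# would write to S3, a log index, or a warehouse instead of a list.
import json

def normalize_cloudtrail(e):
    """Map a simplified CloudTrail-style event to the common schema."""
    return {"source": "cloudtrail", "time": e["eventTime"],
            "actor": e["userIdentity"]["arn"], "action": e["eventName"]}

def normalize_flow_log(e):
    """Map a simplified VPC-flow-style record to the common schema."""
    return {"source": "vpc-flow", "time": e["start"],
            "actor": e["srcaddr"], "action": f"connect:{e['dstport']}"}

central_store = []  # stand-in for the central log repository
central_store.append(normalize_cloudtrail({
    "eventTime": "2023-11-01T12:00:00Z",
    "userIdentity": {"arn": "arn:aws:iam::111122223333:user/analyst"},
    "eventName": "GetObject"}))
central_store.append(normalize_flow_log({
    "start": "2023-11-01T12:00:05Z", "srcaddr": "10.0.1.9", "dstport": 443}))

# Any dashboard or analytics product can now consume the same records.
print(json.dumps(central_store, indent=2))
```

Because every consumer reads the same normalized records, swapping one dashboard product for another doesn't require re-collecting anything.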

And I think we've heard a lot, and we've recorded in prior episodes, Fred, the "we picked the tool, now how do we apply it to all these things" story. But I want to go back: what is all the data that you can collect and bring into a central repository?

Well, it depends on the cloud and the use cases of each client. There's a lot of inbound and outbound data, and it's not just all the application logging and inbound logs and network logs and things like that. If you can get them all in one place, you can start looking at authentication versus authorization. You can see failed authentication attempts, or you can see authenticated people trying to access things that they're not necessarily authorized to do. Another thing that people don't bring in as often as they should is DNS lookups: resources they have in an environment that are trying to make calls outside. If you see that you have some machine making rogue calls to some DNS server, some bitcoin wallet or something like that, that's pretty suspicious. So it's not just about network firewall logs and WAF logs, and you want to look at your CDNs and see why you're suddenly getting a lot of traffic from India or something like that.

Yeah. And from a data and analytics standpoint, what we see business users and a lot of these compliance and risk teams care about is: who's accessing specifically what data, and why? Think about healthcare and HIPAA. You absolutely need to make sure that the right people are accessing the right data. If you need to go investigate and look into a couple of different records for a healthcare organization, you need a reason why. You need traceability of what that looked like at a point in time.
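The rogue DNS calls Joey mentions can be caught with a simple detective control once the lookups are collected centrally: flag any queried domain that falls outside an expected allowlist. A minimal sketch, assuming a made-up "timestamp source_ip domain" log line format and an illustrative allowlist:

```python
# Hypothetical sketch: flag suspicious outbound DNS lookups from resolver
# logs. The log format, allowlist, and sample lines are all illustrative.
ALLOWED_SUFFIXES = (".example-corp.com", ".amazonaws.com", ".microsoft.com")

def suspicious_lookups(dns_log_lines):
    """Return (source_ip, domain) pairs for lookups outside the allowlist."""
    flagged = []
    for line in dns_log_lines:
        parts = line.split()          # assume "timestamp source_ip domain"
        if len(parts) != 3:
            continue                  # skip malformed lines
        _, src_ip, domain = parts
        if not domain.endswith(ALLOWED_SUFFIXES):
            flagged.append((src_ip, domain))
    return flagged

logs = [
    "2023-11-01T12:00:00Z 10.0.1.5 api.example-corp.com",
    "2023-11-01T12:00:02Z 10.0.1.9 mining-pool.badcoin.io",  # rogue call
]
print(suspicious_lookups(logs))
```

In practice the allowlist would come from known-good destinations, and the flagged pairs would feed an alerting pipeline rather than a print statement.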
And so that means not just knowing who accessed what, but what the data structures looked like, because you might be looking back a year or two years at a point in time of what those data structures looked like, and what they looked like two years ago might not be what they look like today. So when you're collecting all this data, and Joey, I'm sure you see this as well, it's not about what tools you use at the end. It's how you start to collect and model this data so that you can look at all of it at a point in time and see, these are all the DNS records coming in. It's a lot of data, right? But I think we're at a point where the technology can handle that.

Well, an older technology that everybody loved when it was all the rage, Elasticsearch, now moving more toward the OpenSearch version, that's what it promises. You get all your logs there, and you say, I just want to look at this time slice. We had a security incident, we had some blip at the time; let's look at all of the logs from that time and see what was going on. And then you'll find out: are we looking at an outage here? Is it a security incident? When you can really get it down to a specific time slice and start looking at all the logs there... Nobody wants to look at network flow logs. Well, nobody I want to hang out with. But when you have it in Elasticsearch and you can narrow it down to one tiny time slice, you can see: here's the network traffic, everything that's happening in this one or two minutes. That can really shorten the cycle for your response time and your detective controls, all your incident response plans and things like that.

Joey, let me ask you a quick question. Security is not a new topic, at least from a security perspective. So what's different with this capability that doesn't already exist?

And maybe to add on to that: are you seeing multiple different teams managing multiple systems? Are there different security response teams depending on which line of business or application or cloud?

Sort of. You'll have people who say, here's our incident response plan, and they're thinking about it. They go through a lot of exercises where we've got a VM somewhere, and our incident response plan is: okay, we're going to isolate this VM. And they go through drills like that. Okay, we find out that there's something funny going on with this VM, it's making strange calls. We're going to isolate it, we're going to investigate it, we're going to find out what the root cause was, we're going to try to put it in a quarantine environment. They're used to going through those exercises with a VM. But when it's data in your cloud storage, it's in S3 or it's in Cloud Storage, and there's an exfiltration event, now there's nothing to isolate.
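The time-slice approach Joey describes, narrowing an Elasticsearch or OpenSearch index down to the one- or two-minute incident window, boils down to a range query on the event timestamp. A sketch of building that query body; the index, field names (`@timestamp`, `source.ip`), and timestamps are illustrative assumptions:

```python
# Sketch of a time-sliced query body of the kind you would send to an
# Elasticsearch/OpenSearch index of security logs. Field names assume a
# common "@timestamp" mapping; adjust to your actual index mapping.

def incident_window_query(start_iso, end_iso, source_ip=None):
    """Build a query body for all log events inside one time slice."""
    must = [{"range": {"@timestamp": {"gte": start_iso, "lte": end_iso}}}]
    if source_ip:
        # Optionally narrow further to one suspicious machine.
        must.append({"term": {"source.ip": source_ip}})
    return {"query": {"bool": {"must": must}},
            "sort": [{"@timestamp": "asc"}]}

body = incident_window_query("2023-11-01T12:00:00Z", "2023-11-01T12:02:00Z",
                             source_ip="10.0.1.9")
# With an OpenSearch/Elasticsearch client this would be something like
# client.search(index="vpc-flow-logs", body=body).
print(body)
```

Sorting ascending by timestamp returns the window as a timeline, which is what makes the outage-versus-incident judgment call quick.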
If you're in VM mode and you're not thinking across services, you'll need to work out how to apply those response plans to different things. And then you'll have a different team, or maybe the same team, doing a different exercise for something like disaster recovery: for disaster recovery, here's my set of things, or we go through this checklist. So even when it's the same team, they sort of think about these as two totally different activities. But those drills should be opportunities to check whether you've captured all the logs. Not just "here's how we would isolate a VM" or "here's how we would look at some data exfiltration event," but: where would our logs be for these events, and how would we be able to do security analytics on a drill?

I think that's spot on. And what I'm seeing and hearing from C-suite and individual-contributor-level folks at enterprises is that you've got a lot of these single-purpose SIEM tools, and legacy ones, that just don't work for modern security teams. Joey, think about a big enterprise customer: you've got Azure with Sentinel, you've got AWS with their solution, you've got GCP now, with their acquisition of Mandiant, and their solution. All of that data is being ingested into those systems. But now you need to ingest the data from all those systems into your system, right?

Well, yeah. If you ask Azure Sentinel, they'll say: no, ingest it all over here, we can take all of it. When we look at Azure Sentinel, they're like, here's how you ship your AWS logs, here's how you ship CloudTrail to Sentinel, here's how you ship everything; give it all to us. So it's easy for us to say, hey, you should just record everything, get all your logs everywhere. But when you've got your Azure AD over here, and you've got your AWS environment over there, and you've got a data team on GCP, yeah, you do get into a situation where, well, now we kind of do have to pick one central repository for logs, maybe. And unfortunately we do see a lot of companies balk at the cost of log storage. You can get into dozens of dollars that they didn't plan for, or sometimes hundreds of dollars that they didn't plan to spend that quarter. But in the grand scheme of their cloud spend, log storage should be the least of their concerns. It compresses, and you can figure out your lifecycle for that data: if we're talking about security incidents, yeah, you might need to hang onto it for years, but you only need to keep it pretty hot for thirty days. So don't skimp on the log storage, please.

And Joey, just to be clear: when you say hundreds of dollars, there are a few more zeros after that?

Well, not necessarily. You'll have a conversation with people:
"You've got to turn on your flow logs in this VPC or something, so we can get inbound and outbound accepts and rejects and everything for the network." "No, no, no, that's a sandbox. That's a dev environment. We don't want to pay an extra hundred dollars; it's just a dev environment." The people who take over your account don't care what kind of environment it is. They want to spin up bitcoin miners. They don't say, "this is a dev account, let's not spin up bitcoin miners here." So you want all of that security logging in all of your environments, not just the production ones.

I think that's a great point. The cost of storage is, I think, kind of a leftover fear from bad experiences of getting hit with huge spending bills, because you had geo-replicated data, the same data copied multiple times, bad governance policies, and you weren't really looking into it. But you're right: at the end of the day, if you've got it in columnar storage and you've got it well modeled, it compresses and it becomes a tiny fraction of the cost. And on the cost of shipping logs back and forth between clouds, I heard an interesting anecdote: if you're using Sentinel, or you're using AWS's solution, and you're just shipping logs back and forth, the egress costs of doing that have got to significantly outweigh the cost of storing it within your own data lake and data warehouse solution for this purpose, right?

Yeah, and that's why a lot of the time the ingestion is free. But you've got to pay the other guy to ship it.

Yeah. So I think they become really good ingestion tools. But now you've got this wealth of data that can be used again, not just by the IT teams to unify this data and remove some of that alert fatigue and those reactive responses, but combined with all the different regulatory data that you need to keep track of. It's changing quickly, right?

Yeah. And a lot of times you'll find that a cloud provider's solution for analytics, say on GCP, is "just get everything into BigQuery." Just let everything into BigQuery. That starts sounding pretty expensive, but maybe you're just not familiar with how easy it is to expire data in BigQuery. Whereas on Redshift, "just pull your logs into Redshift" sounds insane; it's really expensive. So it's not necessarily the tool you would use.

And that's where I want to start wrapping up. It's taking all of this data, and that's what I want to understand, Fred and Joey. So it's bringing all this data into some type of repository that may or may not be on one of the cloud providers. Help me understand that just a little bit.

Yeah. At the end of the day, it's ETL, and it's the same thing we do day in and day out on all of our analytics projects.
Instead of bringing in data from the ERP to understand what our sales pipeline looked like last month versus this month, it's now bringing in all this telemetry data Joey is talking about, all this log data: AWS access logs, and, if you've got a data warehouse, or, as I often see in enterprises, multiple data warehouses, who touched what. There's a significant human capital cost to digging into audit logs and all these different things when incidents happen, but teams need to be able to produce who touched what, when, and what they were using. All of that infrastructure that gets built, that's all data and telemetry data that can be brought into one place. Email logs, Office 365 logs, Okta logs; you could go on and on, right?

Yeah. I think the biggest eye-opener for me, and what I tell people now: I was trying out a new Lake Formation blueprint thing in my sandbox in AWS, my 2nd Watch sandbox, and it was going to pull in some CloudTrail logs and do some analytics on them, because I don't have a ton of data to play with when I'm doing data lake type activities, but I do have a bunch of CloudTrail logs in my sandbox. Well, then I forgot about it for a week or two, and somebody at 2nd Watch alerted me to what my sandbox had spent in just a few days. So companies should think about the amount of data they already have when they ask, how do we get it all into one place? Use it as a data lake experiment. Some of the security data is sensitive? Great, now you have a perfect playground to start sanitizing data, or giving this auditor access to this data while not giving this person access to some kind of PII, people's addresses, things like that. So companies do have the data, and they do want those data lake tools; they want to get started. Start with your own security data, get it all in one place, and do it as a data lake exercise and a security exercise at the same time. Analytics is analytics, right?

I think that's a good point. And Michael, to add onto what Joey said, which was spot on: you're not going to have every single use case and every single signal that you need right on day one, but you've got all these different tools, and they're going to come and go. That's your data. Even just your Microsoft Defender or your McAfee logs: if you're migrating from, let's say, Office 365 to Google Workspace, get all that Office 365 telemetry data out, and then, once you figure out how you want to use it, model it into different data marts or different data warehouses, and from there you can put the dashboards on it. At that point it doesn't matter what dashboard tool you're using, because your data is modeled. You can use QuickSight, you can use whatever, you can use all of them, right?

I want to thank you, Fred and Joey, for joining us to discuss cybersecurity analytics: kind of a tool that really doesn't exist out there, and something that we are working on. It's not necessarily a tool that we're going to build.
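The "who touched what, when" question Fred keeps coming back to is, as he says, just ETL: flattening audit events into rows a warehouse or dashboard can query. A minimal sketch, using an event shape simplified from CloudTrail's actual schema; the field names and sample event are illustrative:

```python
# Minimal ETL sketch: flatten simplified CloudTrail-style audit events
# into "who touched what, when" rows ready for a warehouse table.

def to_access_rows(events):
    """Turn raw audit events into flat access-history rows."""
    rows = []
    for e in events:
        rows.append({
            "when": e.get("eventTime"),
            "who": e.get("userIdentity", {}).get("arn", "unknown"),
            "action": e.get("eventName"),
            "what": e.get("requestParameters", {}).get("bucketName", ""),
        })
    return rows

events = [{
    "eventTime": "2023-11-01T12:00:00Z",
    "eventName": "GetObject",
    "userIdentity": {"arn": "arn:aws:iam::111122223333:user/analyst"},
    "requestParameters": {"bucketName": "patient-records"},
}]
print(to_access_rows(events))
```

Once the rows are modeled like this, the point-in-time traceability question ("who read this record two years ago, and with what access?") becomes an ordinary query rather than a manual dig through raw logs.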
It's a capability that we can build. So there's a lot of opportunity to really evolve your security posture and how you look at things. Any final words for the audience, Fred or Joey?

Keep in mind that it's your data. Treat it just like you would treat application data or data lake data that you already want to do analytics on. Don't think of it as anything other than, like you said, ETL: it's a data pipeline. It gets processed at the end, it gets analyzed at the end. The analytics and the dashboards all happen at the end. Get the collection right up front, figure out how to store it as cheaply as possible, do whatever ETL you need on it, and then do the analytics.

Yeah. And for business teams that are spinning up new governance, privacy, and compliance teams, or even these capabilities within existing lines of business: don't reinvent the wheel. Don't go down the same path of buying very single-purpose-built tools that claim they can do everything. This is your data.

There are a million different machine learning and AI use cases you can pursue once you have all this data. Be creative, and look at it this way: you've got a ton of data and a million different use cases that you can finally start unlocking.

Awesome. Well, Fred, thank you for joining us, Joey as well, and to the audience, thank you for listening to our show. This podcast is intended to add value to any large enterprise that is planning on moving to, or is currently focused on leveraging the value of, the cloud. Send your comments or suggestions to cloudcrunch@2ndwatch.com. You've been listening to Cloud Crunch with Michael Elliott and Fred Bliss. For more information, check out the blog at 2ndwatch.com/cloud-blog, or reach out to 2nd Watch on Twitter and LinkedIn.
