Cloud Crunch

Episode · 6 months ago

S2E10: 5 Strategies to Maximize Your Cloud’s Value: Strategy 1 - Create Competitive Advantage from your Data

ABOUT THIS EPISODE

AWS data expert Saunak Chandra joins today's episode to break down the first of five strategies for maximizing your cloud's value: creating competitive advantage from your data. We look at tactics including Amazon Redshift, the RA3 node type, best practices for performance, data warehouses, and varying data structures.

...solve, evolve. Welcome to Cloud Crunch, the podcast for any large enterprise planning on moving to, or in the midst of moving to, the cloud, hosted by the cloud computing experts from 2nd Watch: Ian Willoughby, chief architect of Cloud Solutions, and Skip Berry, executive director of Cloud Enablement. And now, here are your hosts of Cloud Crunch.

Welcome back to Cloud Crunch, season two. Today I have a couple of guests with me: Rob Whelan with 2nd Watch, and AWS data expert Saunak Chandra. Welcome, guys. Very excited to have both of you here. Last week we gave you the CliffsNotes versions of five strategies you should consider to maximize the value of being in the cloud. In the next few episodes, we're going to examine each of these strategies in more detail, starting with creating competitive advantage from your data. To add to our discussion today, we have a very special guest, Saunak Chandra from AWS, and I want to give a little bit of his background for the audience, so I'm going to read my little canned speech here. Saunak is a senior solutions architect specializing in data and analytics at AWS. He has over 15 years of experience designing and building scalable and secure data lake and data warehouse solutions. Saunak helps customers build their data strategy, from proof of concept to final architecture design, using big data, AI, and ML technologies. He is an advocate of data lakes and machine learning and has written several blogs and GitHub code samples in the data space. Welcome again to the show. We want to get into this today, and I'm really excited about both of you being here. Rob is the practice manager of our data analytics and big data practice, so this is going to be great. You both have tremendous experience in this area, and this is going to be really fun. So why don't we go ahead and open it up? Rob, I think you've got some questions, so let's just jump right in.

Sounds good, and thank you. Saunak, it's so great to be with you again. We've worked together on so many projects, and you've always been so accessible and so knowledgeable, but let's get right to it. When it comes to Redshift, we want to use it to analyze large amounts of data, whether we're just querying it or visualizing it in a dashboard. So what are some tips you have for us to minimize the time between loading data into the Redshift cluster and visualizing the data?

First of all, thanks, Rob, thanks, Ian, thanks for inviting me. It's a pleasure to talk to you and share some of the best practices and experience I've gathered working with a lot of customers and partners on data warehouse best practices, Redshift specifically, but really any AWS data technologies in general.

So when it comes to data visualization and data warehousing using Amazon Redshift, the very first and most common data ingestion point is Amazon S3. That's the de facto standard for data ingestion, or the landing zone if you will, for data coming in from your CSV or Excel files, or maybe data coming from traditional RDBMS databases. So S3 is the first landing zone and ingestion point, and the reason S3 is so prominent for data loading is that it improves your object transfer from S3 into Amazon Redshift, improves your S3 read throughput, and maximizes your parallelism, which Redshift is very good at. It also improves your processing, especially from the data ingestion perspective, if you can spread out the data in Amazon S3 across multiple objects. When we talk about multiple objects, you can think of it as uploading multiple files, whether CSV or any other kind of text-delimited file, or maybe JSON files, as well as more advanced columnar-format files like Parquet or ORC. So think of it as uploading all of these files, with a similar schema structure, into a folder, if you will. In Amazon S3 we call that folder a prefix. You upload all of these files under an S3 prefix and then run a Glue crawler, which is part of the AWS Glue service, to recognize the format of the data. Instead of you telling it what the data structure looks like, what the different columns and data types are, you let AWS Glue discover your data, and that makes the next step, which is the data loading, much more simplified. So that's something we really, really stress with our customers and partners: hey, spread your data across as many files as possible instead of creating one single large file, which can become a bottleneck when loading your data and, as I said, may not utilize Redshift's parallel processing.

So what happens if you try to load it into Redshift without having crawled it first?

If you don't crawl it with the AWS Glue crawler, you could spend a lot of time, especially if the data or the file is coming from a provider and you're not aware of its structure, and it would be really painful if the file structure is not something you can read, which is essentially any kind of columnar format, files that are not character encoded, such as Parquet or ORC. There's no way for you to understand the structure just by inspecting it. And if it is provided by a third party, you don't know what the columns are or what the different data types are, and that's when the AWS Glue crawler comes in really handy, letting the crawler discover the data structure behind the file. So if you don't have the Glue crawler, you need to know the column types, the columns, the schema of your data, and you need to manually create the table in Amazon Redshift and then load the data from those files into Amazon Redshift.
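
To make that concrete, here is a minimal sketch of the pattern: split the data into several similarly structured files under one S3 prefix and COPY from the prefix so the Redshift slices can load in parallel. The bucket, prefix, table, IAM role, and cluster names below are all hypothetical placeholders.

    # Hypothetical names throughout; adjust for your own account and cluster.
    import boto3

    s3 = boto3.client("s3")
    bucket = "my-ingest-bucket"
    prefix = "sales/2020/10/"

    # Upload several similarly structured files instead of one large file so
    # the COPY below can read them in parallel.
    for part in ["part-000.csv", "part-001.csv", "part-002.csv"]:
        s3.upload_file(part, bucket, prefix + part)

    # COPY loads every object under the prefix into the target table.
    copy_sql = f"""
        COPY sales_staging
        FROM 's3://{bucket}/{prefix}'
        IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
        FORMAT AS CSV IGNOREHEADER 1;
    """

    # Submitted here through the Redshift Data API; a JDBC/ODBC client works too.
    boto3.client("redshift-data").execute_statement(
        ClusterIdentifier="my-cluster", Database="dev", DbUser="awsuser", Sql=copy_sql
    )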

That's great. Now you've really honed in on two services before you even get to Redshift. One is going to be S3, and it's amazing to me how much it keeps improving every year. I mean, it's an object store; I would always think, how does it get any better? But it seems to keep getting faster, with more capabilities, those types of things, so I'm very excited about that, and I cannot wait to learn what's coming out at re:Invent on that one as well; every year there's a little bit of a surprise. And the other is Glue. So let's talk about a couple of things. One is how hard these services are to use, because what I hear is that you really want to split these files up, you want a prefix. Is that difficult? Is there a strategy? Where can people go for best practices with that? And secondly, how hard is Glue to learn? I guess this is just the crawler side, because there's a lot of capability inside of Glue as well, and some pricing models associated with that.

That's a good question. In terms of ease of use, everybody's familiar with Amazon S3, right? You can log in to the AWS Management Console and upload a file; it was the first service AWS launched, and it's object storage. So you don't have to really think about structuring your folders. You just upload the file and it will be placed somewhere in Amazon S3, and you can download the file, and you can even launch a static website using Amazon S3. But to this point, in this regard, it has been really easy to upload files into S3. You just need AWS Management Console access, or you can upload the files through the CLI or the API; there are a lot of options out there if you want to do it programmatically. And in terms of the AWS Glue crawler, one of the benefits of the crawler is that it can recognize the format, and it can recognize formats of various different types. For example, if it is a CSV file, it will understand out of the box that it is a CSV file, and it will treat the first row, which for a CSV file is typically the header row, as the header, read that header row, and recognize the column names from there. If it is any other format, such as a text-delimited file, say tab-delimited, it will also recognize that out of the box, and for more structured or semi-structured data such as Parquet, ORC, or JSON, it can do that job seamlessly, so it's really easy. As a customer, as a user, you just log in to AWS Glue, create a Glue crawler specifying the location of the S3 objects, and off you go. You provide the name of the database it registers into, run the crawler, and it recognizes the structure and schema and registers your table in the AWS Glue Data Catalog. And once it is in the Glue Data Catalog, you can bring in a whole host of services and use your SQL skills to query that data, whether it's Amazon Athena with the AWS Glue catalog, Redshift Spectrum, EMR, or QuickSight.
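
If it helps to see the crawler step spelled out, this is a minimal sketch of creating and starting a Glue crawler against that prefix with boto3; the crawler name, IAM role, database, and path are hypothetical.

    import boto3

    glue = boto3.client("glue")

    # Point the crawler at the prefix that holds the similarly structured files.
    glue.create_crawler(
        Name="sales-crawler",
        Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",
        DatabaseName="sales_db",
        Targets={"S3Targets": [{"Path": "s3://my-ingest-bucket/sales/2020/10/"}]},
    )
    glue.start_crawler(Name="sales-crawler")

    # When the crawler finishes, the discovered table lands in the Glue Data
    # Catalog, where Athena, Redshift Spectrum, EMR, and QuickSight can see it.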

It's great because you get a lot of flexibility there. So there are different node types and things along those lines, just different ways to launch a Redshift cluster. Obviously there are several node types, but we're going to hone in on RA3 here. What are some of the recommended patterns for using the RA3 node type compared to other cloud data warehouses?

Sure. I think we need to understand why RA3 came about in the first place. One of the things our product and Redshift service teams recognized, talking to different customers, is that a lot of customers trying to modernize their existing data warehouse host a large volume of data, usually in the form of fact tables, that does not get queried that often but constitutes a huge portion of the storage in their data warehouse, somewhere between 60 to 90% of the volume of data. One thing Redshift already provided is Spectrum: if you have a lot of this historical data that you don't query frequently, you just offload that data into Spectrum. But there is some overhead associated with maintaining and managing Spectrum. For example, you need to create an external schema, you need to offload the data into S3, you need to partition it, and you need to create a UNION ALL view so your end users or business users can query the data seamlessly, whether it's the historical data or the more current data, the hot data as you call it. So that's where RA3 comes into the picture: our service team and our product team thought outside the box and came up with a novel solution, which is a new instance type called RA3.
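
As a rough illustration of the Spectrum pattern being described, this sketch creates an external schema over offloaded historical data and a UNION ALL view across hot and cold data; the schema, table, role, and cluster names are hypothetical.

    import boto3

    rsd = boto3.client("redshift-data")

    statements = [
        # External schema over the Glue Data Catalog database that holds the
        # offloaded, partitioned historical data sitting in S3.
        """
        CREATE EXTERNAL SCHEMA IF NOT EXISTS sales_history
        FROM DATA CATALOG DATABASE 'sales_db'
        IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftSpectrumRole';
        """,
        # One view gives users a seamless picture of hot (local) + cold (S3) data.
        """
        CREATE OR REPLACE VIEW sales_all AS
            SELECT * FROM public.sales_current
            UNION ALL
            SELECT * FROM sales_history.sales_archive
        WITH NO SCHEMA BINDING;
        """,
    ]
    for sql in statements:
        rsd.execute_statement(
            ClusterIdentifier="my-cluster", Database="dev", DbUser="awsuser", Sql=sql
        )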

Great. So once you're using these nodes and you've got a lot of people using your Redshift cluster, my next question is around concurrency: how do you keep things efficient? Basically, when we talk to customers, it's pretty common for them to ask how to reduce query wait times, and they sense that there's this collision between multiple parties using the cluster. We always tell them to start by taking a look at their workload management queues. We generally say, hey, you should have at least three or four different ones, and at least one reserved for an admin. But do you have any other thoughts beyond workload management, any more advanced ways to manage concurrency?

Yeah, so let me get to that point, but first let me finish my thought on the RA3 node type. I think we just covered why RA3 came out in the first place, so what are the different patterns we have been seeing customers use, especially the ones who had already been using Amazon Redshift on one of the legacy instance types, which are DS2 and DC2? For the customers who were already running on DS2 instance types, it's been a natural choice to move to RA3 because there's nothing to lose. RA3 runs on SSD, while DS2, the previous generation, was hard-disk based, so obviously you get much higher throughput. RA3 also comes with 64 terabytes of Redshift managed storage per node, which is quite a large amount of storage. The other benefit with RA3 is that the customer does not need to pay for the full storage that comes with each node, the 64 terabytes per node; they are only billed for the amount of storage they actually occupy. So let's say the customer has 20 terabytes of active data and a two-node RA3 cluster: they're not paying for 128 terabytes of storage, they're paying only for the 20 terabytes. The benefit this brings is that if performance hits a bottleneck, for example they have reached the cap of CPU utilization, they can add additional nodes without paying for additional storage; that's billed completely separately. So that's the one big difference from the legacy instance types, the DS2 and DC2. When it comes to DC2, for customers who have already been running on that instance type with very steady and safe utilization, probably under 50% CPU utilization most of the time, what we have found is that it's best for them to move to RA3 as a cost-saving measure. Obviously DC2 is SSD-backed and RA3 is SSD-backed as well, so there's not really much of a performance benefit in moving from DC2 to RA3, but if your CPU utilization is below 50%, there's a good possibility you can make some cost savings by moving to RA3.

However, if your CPU utilization is on the higher side, close to, let's say, 80 or 90% most of the time, it does not make sense for you to move to RA3, and you would need to consider some other factors there.

Now, coming back to the original question on auto WLM, or workload management best practices for performance under heavy concurrency: what we have seen is that auto WLM is suitable for customers who are new users of Amazon Redshift. They haven't used Amazon Redshift before, maybe they haven't used any kind of data warehouse before, and they have very little knowledge of their workload types, whether it's ETL heavy or mostly analytical queries like BI visualizations and such. After a certain number of days of running with auto WLM, they create priority queues, and priority queues are a way of making some kinds of workloads get prioritized ahead of other workloads. Because in auto WLM you have just one flat queue, and you do not define any slots on your own; instead, AWS, or Amazon Redshift, creates those slots for you. After running their data warehouse for a certain time, customers tend to set certain priorities, because they see there are concurrency issues across different workload types, whether it's ETL or BI, and certain processes, such as BI queries, start to struggle because there are a lot of ETL jobs running. At that point they set up a priority queue, and these priority queues, as the name suggests, mean certain queues get more priority over other queues. In this particular case, if BI queries are struggling because of heavy ETL processes, then as and when the BI queries come in, they will get prioritized. And the biggest lever Redshift has for efficiently using the cluster under heavy concurrency is concurrency scaling, a feature we launched around, I think, 2019, that has been used very effectively by a lot of our customers. The benefit of concurrency scaling is that it's really applicable when you have a very spiky workload, but the spike does not stay for a long period of time; maybe it's one hour or a couple of hours a week or a month. It has been really effective for customers with heavy concurrent workloads.

So how does concurrency scaling work? Is it just an option you opt in to and it sort of works for you? What's happening behind the scenes?

Sure. Concurrency scaling creates a brand new cluster behind the scenes, without you having to know that a cluster needs to be launched.
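
For reference, a queue layout like the one described, ETL in a low-priority queue and BI in a high-priority queue that may use concurrency scaling, is expressed through the wlm_json_configuration parameter on the cluster's parameter group. The sketch below is an assumption-laden outline: the queue properties, user groups, and parameter group name are placeholders, and the exact JSON schema should be checked against the WLM documentation.

    import json
    import boto3

    # Hypothetical auto WLM layout: ETL users in a low-priority queue, BI users
    # in a high-priority queue that is allowed to use concurrency scaling.
    wlm_queues = [
        {"user_group": ["etl"], "priority": "low",
         "queue_type": "auto", "auto_wlm": True, "concurrency_scaling": "off"},
        {"user_group": ["bi"], "priority": "high",
         "queue_type": "auto", "auto_wlm": True, "concurrency_scaling": "auto"},
        {"short_query_queue": True},
    ]

    boto3.client("redshift").modify_cluster_parameter_group(
        ParameterGroupName="my-wlm-params",
        Parameters=[{
            "ParameterName": "wlm_json_configuration",
            "ParameterValue": json.dumps(wlm_queues),
            "ApplyType": "dynamic",
        }],
    )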

Right. So you set up concurrency scaling on your main cluster as part of the WLM setup, and Redshift will determine when to launch this parallel transient cluster, as we call it, which is the concurrency scaling cluster. Before launching the cluster, it will take a quick snapshot of your current workload, and it will launch the concurrency scaling cluster for the tables needed by the queries that are waiting for a queue slot. Those tables get loaded into the concurrency scaling cluster, and your queries get executed on that concurrency scaling cluster instead of the main cluster, so it's completely transparent. As the end user, you would not see when a concurrency scaling cluster is created; you just run your query. But as an administrator, or if you have access to your AWS Management Console, there are plenty of metrics you can watch on the Amazon Redshift console, and those are also available as CloudWatch metrics, so you can create your own dashboards or events, set up a monitoring job to do auditing, or set up alarms so that if there is some usage spike happening, you know you may need to look at it.

Right. So for handling concurrency, the recommendation is to be sure to have workload management queues set up that make sense for your usage patterns, and also look at concurrency scaling. Which is actually interesting, in that the concurrency benefit is per queue, right? It's not across the entire cluster, is it? It's per queue?

Yeah. So right now concurrency scaling is applicable to read-only queries; it's not applicable for ETL queries. That means if you're running any ETL queries, obviously those will not be eligible for concurrency scaling. At the same time, if you're running read-only queries that Redshift would need to rewrite behind the scenes into a certain form that would otherwise execute much faster, for example if the query requires creating a temp table, those are also not going to be eligible for concurrency scaling. In other words, if you have multiple queues set up, those are quite independent of concurrency scaling. You have an option for each queue: you can set a queue as eligible for concurrency scaling or not, and that's an administrator-level setting you configure in the WLM. So once a queue has been set up as eligible for concurrency scaling, and a query in that queue is eligible, then it will leverage concurrency scaling.
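
Those metrics are easy to pull programmatically as well. This is a minimal sketch of reading concurrency scaling activity for a cluster from CloudWatch; the cluster identifier is a placeholder.

    from datetime import datetime, timedelta
    import boto3

    cw = boto3.client("cloudwatch")

    # How many concurrency scaling clusters were active over the last day.
    resp = cw.get_metric_statistics(
        Namespace="AWS/Redshift",
        MetricName="ConcurrencyScalingActiveClusters",
        Dimensions=[{"Name": "ClusterIdentifier", "Value": "my-cluster"}],
        StartTime=datetime.utcnow() - timedelta(hours=24),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Maximum"],
    )
    for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], point["Maximum"])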

We have seen that most concurrency scaling queries are BI queries, read-only queries, mostly simple SELECT statements. If it is a very complicated SELECT involving the creation of temp tables just to simplify the query processing, it will not be eligible for concurrency scaling, but most of the time concurrency scaling kicks in for BI queries. And we have seen that almost 97 to 98% of customers do not pay any additional fees for concurrency scaling, because for every day your main cluster runs, you earn one hour of concurrency scaling credit, free of cost.

That's pretty amazing. I started using Redshift a little over five years ago, and it is absolutely amazing how much it's progressed, particularly on the workload management side. I was talking to a friend of mine at a customer, and one of the nice things about all this setup, too, is that now you can put people who don't yet understand how to write good queries into a particular queue, and it protects everybody else. We don't talk about that feature a whole lot, but you can put the people who don't really understand how to query data over there, and they kind of isolate themselves and don't impact the whole organization, at least until they learn how to do it much better. So thank you for sharing all that. Now, you've got a bunch of different things going on here: when we talk about Postgres, Athena, and other cloud providers' data warehouses, they all support semi-structured data, things like structs, maps, and arrays. What is AWS's recommended pattern for using Redshift with such data structures?

Redshift does support semi-structured data such as structs and maps, but as you mentioned, only through external tables. Before going into that, let me point out that local tables also support JSON-formatted semi-structured data as a source, and that support comes through the COPY command, which is the common way to load data into a Redshift local table, as opposed to an external table, where the data pretty much stays in S3 and you don't need to load it. With the COPY command you load the data into Amazon Redshift's local storage, whether that's a hard drive or an SSD drive. You can also have a JSON column in a table. For example, if you have ten different columns, timestamp, source, and so on, one particular column can contain a JSON-structured data set; say it's a sparse kind of metric and you capture it as JSON, that structure can be part of your table column as well. And that particular column containing the JSON data can be parsed using JSON functions like JSON_EXTRACT_ARRAY_ELEMENT_TEXT or JSON_EXTRACT_PATH_TEXT.
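
As a quick illustration of those functions on a hypothetical telemetry table with a JSON payload column:

    import boto3

    # Hypothetical table (telemetry_events) with a VARCHAR column holding JSON.
    sql = """
        SELECT event_time,
               JSON_EXTRACT_PATH_TEXT(payload, 'device', 'id')      AS device_id,
               JSON_EXTRACT_ARRAY_ELEMENT_TEXT(
                   JSON_EXTRACT_PATH_TEXT(payload, 'readings'), 0)  AS first_reading
        FROM telemetry_events
        WHERE event_time > DATEADD(day, -1, GETDATE());
    """
    boto3.client("redshift-data").execute_statement(
        ClusterIdentifier="my-cluster", Database="dev", DbUser="awsuser", Sql=sql
    )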

Now let's talk about the semi-structured data that applies to data generated by telemetry systems, such as server logs; gaming platforms and applications sometimes generate that kind of data too, and clickstreams are another very common source. You get that data in enormous volume at a sporadic frequency, and it is most commonly represented in JSON format, which consists of the structures you mentioned: struct, array, and map kinds of data. Redshift supports ingestion of this kind of data via a Spectrum table. When it is accessed through Spectrum and an external table, the data can take different forms. If it is in struct form, Redshift supports accessing those struct columns using dot notation. For example, say you have a customer table, a customer has many different orders placed by that customer, and you want to represent all of those orders in a nested structure, where some orders might have certain columns and some orders might not. It's very common to represent those orders under the parent customer in a nested structure. Some customers have no orders, some customers have one or two orders, and some customers have hundreds of orders; that is well suited to being represented in JSON as an array, right? And the way an external table, or Spectrum, can query that data is very simple. You do not have to flatten those structures ahead of time; instead, you create a customer table in an external schema and start querying the data using regular SQL syntax. It's very simple: all the substructure behind the main JSON structure is accessed through dot notation. In your SELECT query, use the table name in the FROM clause, give any substructure within that JSON an alias, and then access the substructure with dot notation. It's as simple as that. If you have a more complex structure like arrays, as I mentioned, where there are multiple orders per customer, and you want to capture customers whether they have one order or many, you can use joins. For example, if you want to capture all the orders for customers who have orders, you can use an inner join against the JSON data the same way the dot notation works, and if you want to capture all the customers even if they don't have any orders, you can use a left join. It's a very common use case for join principles, right? So all those kinds of joins, left joins, inner joins, outer joins, can be applied to this nested and varied kind of data. So yes, we do see customers using external tables, especially with data coming from telemetry systems or gaming systems, to access that data.
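
Here is a minimal sketch of that dot-and-alias style against a hypothetical Spectrum table of customers with a nested orders array; the schema, table, and column names are made up for illustration.

    import boto3

    # Inner-join style: only customers that have at least one order.
    inner_sql = """
        SELECT c.name.given, c.name.family, o.price
        FROM   spectrum_schema.customers c, c.orders o;
    """

    # Left-join style: keep customers even when the orders array is empty.
    left_sql = """
        SELECT c.name.given, COUNT(o.price) AS order_count
        FROM   spectrum_schema.customers c LEFT JOIN c.orders o ON TRUE
        GROUP  BY c.name.given;
    """

    client = boto3.client("redshift-data")
    for sql in (inner_sql, left_sql):
        client.execute_statement(
            ClusterIdentifier="my-cluster", Database="dev", DbUser="awsuser", Sql=sql
        )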

That's great. So let's explore some technical tips to better understand the nuances of locking, blocking, and deadlock operations in Redshift, which can have performance ramifications. What can you tell us at a high level? What can we do to make sure we don't get into any bad situations with those things?

Sure. The good news is that Redshift has been very friendly in terms of locking. If you compare it with any other RDBMS, where table locks or row locks are very common, Redshift gives you a lot of flexibility. It does apply locks behind the scenes, without letting you know that locking is happening, because it's so fast on immutable data storage, or block storage. Amazon Redshift allows tables to be read while they're incrementally being loaded or modified; queries simply see the latest committed version, which we call a snapshot of the data, rather than waiting for the next version to be committed. Some applications require not only concurrent querying and loading, but also the ability to write to multiple tables, or the same table, concurrently, and the mechanism by which Redshift allows concurrent writes to a single table is called serializable isolation, which essentially preserves the illusion that a transaction running against a table is the only transaction running against that table. It can also lead to periodic deadlock situations for concurrent write transactions. Whenever a transaction involves updates to more than one table, there's always the possibility of concurrently running transactions becoming deadlocked when they both try to write to the same table or the same set of tables, because a transaction releases all of its table locks at once when it either commits or rolls back; it does not relinquish locks one at a time. For example, suppose there are two transactions, T1 and T2, that start at roughly the same time. If T1 starts writing to table A and T2 starts writing to table B, both transactions can proceed without conflict. However, if T1 finishes writing to table A and needs to start writing to table B, it will not be able to proceed, because T2 still holds the lock on B; it is still writing to table B. Conversely, if T2 finishes writing to table B and needs to start writing to table A, it will not be able to proceed either, because T1 still holds the lock on A. Since neither transaction can release its locks until all of its write operations are committed, neither transaction can proceed. That's a very common kind of deadlock situation, and it's one of the worst practices you can fall into in your ETL processing.

So how do we avoid this kind of deadlock? You need to schedule concurrent write operations very carefully. You should always update tables in the same order within your transactions, and if you are specifying locks, lock tables in the same order before you perform any DML operations. There are also several ways to find where locking happens at the transaction level: which tables are creating locks, and which tables are being blocked, that is, where a lock has been applied on the table. You can identify those locks and the type of lock per session from the system tables; the tables to query are SVV_TRANSACTIONS and PG_LOCKS. From these tables you can find the lock mode, the blocking PID, the table ID, and whether the lock was granted. If the granted column is false, it means a transaction in another session is holding the lock. And if that's the case, you may not have any other choice, because everything is deadlocked; you may end up terminating the session, and to terminate the session you use PG_TERMINATE_BACKEND with the PID. That's unfortunate, if everything gets blocked and nobody's moving, but that's the ultimate step you may need to take.
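
A minimal sketch of that last step, finding who holds a lock and, as a last resort, terminating the blocking session (the PID shown is a placeholder):

    import boto3

    # Which transactions hold or are waiting on locks, and on which tables.
    find_locks_sql = """
        SELECT txn_owner, pid, relation, lock_mode, granted
        FROM   svv_transactions
        ORDER  BY txn_start;
    """

    # If granted is false for a session, another session is holding the lock.
    # As a last resort, terminate the blocking session by its process id.
    terminate_sql = "SELECT PG_TERMINATE_BACKEND(12345);"

    client = boto3.client("redshift-data")
    client.execute_statement(
        ClusterIdentifier="my-cluster", Database="dev", DbUser="awsuser",
        Sql=find_locks_sql,
    )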

Great, great. Well, we've covered a lot today, and I do encourage our listeners: if you looked at Redshift in the past, maybe two or three years ago, look at it again. It has really evolved, and it is solving a lot of data warehousing situations. It is an extremely vibrant product, I would say, particularly the way it auto-tunes on the back end with workload management and all those types of things. It just keeps getting smarter and easier to use. So before we go, I want to ask both Rob and Saunak: what is the best way for somebody who's interested in learning more about Redshift to get started?

I'll take that one. To get started with Redshift, you need two things: you need a data set, and you need a goal for what you want to accomplish with that data set. From there it's very simple. And with that data set, really, the bigger the better, and in terms of your analysis, just be open-minded as to what you can find in it. I think if you just have a data set but no goal, you're not going to get across the finish line, and if you have a goal but no data set, obviously you can't get going. So those two things together with Redshift will be a fantastic way to get started.

Sure, and yeah, I agree that you have to have the use case, right? And with Redshift being the fastest data warehouse product on the cloud, you can run your data analysis much faster, whether you have tens of gigabytes of data or hundreds of terabytes or even petabyte-scale data. You can use an RA3 cluster node type if you have big data sets. If you just want to experiment, you can get started by launching a DC2 or DS2 instance type; we don't encourage using DS2 anymore, but you can get started with DC2 and start loading the data from S3 right away, or through the thousands of other ETL tools out there if you are bringing data from RDBMS databases. And you can bring in QuickSight if you want to quickly analyze the data. QuickSight has a data discovery, or Redshift discovery, option: if the cluster is in the same account, it will quickly identify your Redshift cluster and start importing your data sets from the tables you want to load or analyze. And if you want to use Jupyter notebooks (a lot of ML users nowadays are very familiar with visualizing and analyzing data in Jupyter notebooks, because there are so many different packages you can use for visuals), we now have a Redshift Data API, which is a very flexible way to plug your Jupyter notebook into a Redshift cluster. You don't need any driver, you don't need any networking setup like security groups, and you don't even have to manage any credentials. You can unload the data into S3 and load it into a pandas DataFrame, or you can load it directly into a pandas DataFrame from your Redshift cluster. It's very flexible.
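
For anyone who wants to try that notebook path, here's a minimal sketch of pulling a query result into pandas through the Redshift Data API; the cluster, database, user, and table names are placeholders.

    import time
    import boto3
    import pandas as pd

    client = boto3.client("redshift-data")

    run = client.execute_statement(
        ClusterIdentifier="my-cluster", Database="dev", DbUser="awsuser",
        Sql="SELECT product, SUM(price) AS revenue FROM sales GROUP BY product;",
    )

    # The Data API is asynchronous, so poll until the statement completes.
    while client.describe_statement(Id=run["Id"])["Status"] not in ("FINISHED", "FAILED", "ABORTED"):
        time.sleep(1)

    result = client.get_statement_result(Id=run["Id"])
    columns = [col["name"] for col in result["ColumnMetadata"]]
    rows = [[list(cell.values())[0] for cell in record] for record in result["Records"]]
    df = pd.DataFrame(rows, columns=columns)
    print(df.head())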

Great. Well, I want to thank you for your time, Saunak, and thanks again for joining us, Rob. It's always good to see your smiling faces. Next week we'll look at the second strategy for increasing your cloud's value: increasing application development with DevOps. Thanks again for joining us, and if you have any feedback or comments, we welcome them; please email us at cloudcrunch@2ndwatch.com. Talk to you soon.

You've been listening to Cloud Crunch with Ian Willoughby and Skip Berry. For more information, check out the blog at 2ndwatch.com/company/blog, or reach out to 2nd Watch on Twitter.
