Art of the Possible: Leveraging Machine Learning to Improve Forecasting and Governance



Good morning, everybody, and welcome to day two, our final day of the AWS Public Sector Summit. Thanks for kicking it off with us this morning. Just a few reminders: the emergency exit for this room is in the back, and please don't use the elevators in the event of an emergency. Please turn off your cell phones. We are recording all of these sessions; the slides and the recordings will be available online about two to four weeks from now. At this point I'll turn it over to our panel from ECS, who will tell us about leveraging machine learning for forecasting and governance.

Thank you so much, and good morning, everyone. I appreciate you coming out for the early 9:00 a.m. session. For those of you who were here yesterday, this really feels like one of the best summits I've ever been to. Lots and lots of our customers are asking about machine learning, and ECS is one of the few companies in the world to hold the AWS Machine Learning Competency, so we thought it would be an interesting topic. Our panel is going to walk through a bit of background and then practical use cases, which we think are always good. Our overall agenda: a very brief ECS introduction; some common challenges we're hearing from customers; a look at the AWS machine learning services, which we think are the most innovative platform for machine learning; some solutions we've developed; the value and benefits; and overall lessons learned. We want to make it very practical, and we'll leave a little bit of time for Q&A, so bring any questions you might have on machine learning; this panel is one of the most experienced in the country, and we want to give you that opportunity.

As far as speakers: Ross Sereno, our director of cloud transformation, will talk about some of the use cases; Karen will talk about the AWS machine learning platform; Imran is going to walk through the common challenges we're seeing; and I'm going to give a little overview of who ECS is.

At a high level, ECS is a leading provider focused on the AWS platform, as well as on cybersecurity and software engineering. We're out of the DC area, so if you're local, the DC metro is where we're focused, but we also work nationally. We're around 750 million dollars in revenue, so big enough to handle almost any challenge you might bring us, with around 2,500 employees, and we're one of the top workplaces in the U.S.
So if you happen to be looking for employment opportunities, please also come see us. In terms of our AWS expertise, we are a Premier Consulting Partner, which is the highest level of partnership with AWS, and in addition to machine learning, managed services are a very big part of what we do. What we find is that once customers develop these great solutions that we can assist with, a lot of the time they're looking for companies to manage them, and that's an area we've put a lot of energy into over the last five years, so we're an audited MSP, and we also hold the DevOps competency and the Microsoft Workloads competency. We'll walk through some of that a little later, but the point is that we've been doing AWS since roughly 2010, and we've watched it develop and watched customers increase their usage. Across the different challenges you might be facing, ECS is one of the leading providers in the country. We'll focus today on machine learning, but in general, across all of the workloads you might be looking to move to the AWS cloud, ECS has one of the most experienced teams in the country. The machine learning competency was something that came out last year, and we were very focused on it; we had been doing a lot of machine learning for the Defense Department, which has some pretty interesting use cases around taking video data and turning it into intelligence. If you think about the amount of drone video out there, that's where we initially focused, and since then we've been doing a variety of workloads for customers across the country. With that, I'll turn it over to Imran, who's going to walk through some of the challenges we've been seeing with our customers.

Thank you, Jon. Good morning, everyone; can everybody hear me in the back? Okay, great. Most of the people here have at least one AWS account, and as part of that you've had the pleasure of looking at your bill. In our experience with the many customers we've been talking to, some of the questions that come up again and again are: why is my bill so high, and can I get some details behind my bill? If you've looked at the detailed billing file, as AWS has moved from hourly unit charges to per-minute and now per-second billing, the number of lines in that file just keeps increasing. So what are the details behind it? Are there any patterns I can see that will help me improve my cost? Am I overspending or underspending? All of these questions really boil down to: how am I managing the cost of my AWS or cloud environment? The cloud gives me an opportunity to innovate; it's an elastic environment, and I can provision things in seconds, so the cost can get pretty big pretty quickly. How do I manage that? Those are some of the biggest challenges around cost. Then, once I've started utilizing those assets: are these assets necessary? Am I overspending? Where can I find out whether my cost is manageable or, even more importantly, predictable from a budgeting standpoint? And even as I'm using these assets, are they compliant with my organizational policies, from a security standpoint and a governance standpoint? Is my data safe and secure? Are there any vulnerabilities in the assets I'm using?
Are there any gaps that would pose a risk to the data I'm storing in the cloud? For these cost-related questions and the other governance-related questions, is there an easy button? This is where ECS comes in. With our experience across many customers over the last seven or eight years working in multiple cloud environments, we have come up with a strategy to help solve the cost-related and other issues that customers keep bringing to us. What ECS has developed is what we call a common cloud framework: a set of policies, tools, and automated scripts that help solve our customers' problems as they adopt the cloud, as they operationalize their cloud, and as they want to optimize their cloud usage.

We've broken this common cloud framework down along five different axes, because those are the categories customers are most interested in. As we were discussing this morning, we talked to one of our very large telecommunications customers yesterday, and they said, "Yes, we have somebody managing our accounts for us, and we have our DevOps under control, but operationally we're not sure how we're going to manage our data or how we're going to do our patching." So this framework not only helps us optimize our customers' environments, it's a starting point for identifying what kinds of issues they're running into, and it helps us customize and focus the platform to each organization's and customer's needs.

For example, in terms of account management: how do you want to manage your accounts? Some people like to break their accounts down by environment, some by department, some by workload. Depending on your organizational policies, we provide automated tools for establishing, deploying, and monitoring those accounts, leveraging AWS best practices and services like AWS Organizations so that organizational policies flow down from top-level accounts to sub-accounts. Similarly, for migration, we've developed tools that discover your on-premises environment, help you build a secure, NIST 800-53 compliant infrastructure, and then migrate those on-premises workloads into your compliant AWS environment in an automated, structured manner.

Since our discussion today is focused on cost, I want to spend a little more time on our cost control approach. As you adopt the cloud and migrate your workloads, you start to see what kinds of assets you're using: how you're using compute, how you're using storage, how you're using the elastic services alongside them. That gives us insight into your usage patterns and into how you want to govern your whole environment, and that insight plus the governance model is what lets us optimize your cloud savings. We can identify things like underutilized EC2 instances, or instances that are much bigger than your needs. Have you purchased Reserved Instance capacity, but your coverage on those Reserved Instances is much lower than it should be? Are you leaving money on the table?
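The talk doesn't name a specific tool for that coverage check, but as one hedged illustration, those numbers can be pulled programmatically from the Cost Explorer API. A minimal sketch with boto3; the time window is a placeholder, and the API returns the figures as strings:

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

# Placeholder window: the first quarter of 2019, summarized by month.
resp = ce.get_reservation_coverage(
    TimePeriod={"Start": "2019-01-01", "End": "2019-04-01"},
    Granularity="MONTHLY",
)

for period in resp["CoveragesByTime"]:
    hours = period["Total"]["CoverageHours"]
    print(
        period["TimePeriod"]["Start"],
        hours["CoverageHoursPercentage"] + "% of running hours covered by RIs",
    )
```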
Can you leverage lifecycle policies on storage, so you're not paying for the top tier of storage but instead moving that data into Glacier, and things like that? What all of this does is enhance our forecasting and prediction. By taking the insights from your cloud usage, we can identify the opportunities, whether that's Reserved Instances, underutilized RIs, or unused storage, like the backups and snapshots you're taking. Are you using some kind of lifecycle mechanism so that you're not paying for snapshots you're never going to use? That helps us forecast and predict your usage and optimize your cloud environment.

It also helps us, leveraging both AWS's native machine learning tools and services and our common cloud framework, to identify any anomalies you may have in your cloud. Have you started oversized instances in a development environment that you don't need? Has somebody broken into one of your accounts and spun up instances in every region, right up to the soft limits? We have come across that: one of our customers had an account broken into, and somebody started the largest instances they could in every region. Using our common cloud framework we were able to identify it very quickly and help clean up that environment. Similarly, all of this helps with the predictability of your critical workloads, and that's where the real cost savings are, because that's where most of your cloud spend is.

So how do we use the common cloud framework along with the AWS native tools? I'm going to invite Karen to talk about the AWS tools first, and then Ross is going to give you some of the specifics, some case studies on cost savings using both the common cloud framework and the AWS native tools.

Thank you, Imran. Can you hear me in the back? Awesome. I was explaining to some folks yesterday that my voice has a hard time projecting, so please raise your hand and let me know at any point if what I'm saying is not making sense or if you can't hear me. I'm here today from Professional Services at AWS, part of the worldwide public sector federal civilian team, and my role generally is to work with customers to solve problems around AI/ML, data analytics, and anything to do with their cloud migration approach. What I'm going to do in the next five to ten minutes is talk to you about machine learning at Amazon, what our services are, and what you can expect from them, and then I'm going to hand it over to Ross to talk about how ECS is using these services for the common use cases they're seeing through their common cloud framework and approach.

So what is our mission at AWS? We've been using machine learning for some time now at Amazon; we have a great deal of experience and expertise there, and we're putting it into our machine learning services. The basic idea behind all of this is to make machine learning accessible and simple enough for anybody to start using it, developing their own applications and services and integrating them into their existing applications as well.
Putting machine learning in the hands of every developer: that's our mission. This is the Amazon machine learning stack, three layers: AI services, ML services, and ML frameworks and infrastructure. The bottom layer is the one we expect the expert machine learning practitioners to need, the people who want to build and train their models completely on their own; we provide the infrastructure, different EC2 instances, to help them in that journey. The middle layer, the ML services layer, is the one that makes life easier for those data scientists and ML practitioners, because it gives them the right tools: it makes building, training, and deploying models much easier, and it makes it easy to scale their algorithms. The AI services, the topmost layer, is the layer where the least amount of experience is needed. Our purpose there is to make sure that any developer can start creating AI capabilities and integrating them into their existing applications. As of April 2019 we have 11 services in that bucket, and we call it the AI services bucket because AI, to a lot of people, means cognition; this is the closest to human cognition that we can get. We have vision, speech, language, chatbots, forecasting, and recommendations available in this stack that anybody can start using out of the box today.

One of the services I want to talk a little more about is Amazon Comprehend, and the reason I picked it is that Ross is going to talk later about how ECS is using Comprehend for a couple of use cases, like sentiment analysis. Comprehend is a natural language processing service that helps you find insights in your text. It finds sentiment: whether the text is positive, negative, mixed, or neutral. It does entity extraction: places, people, contacts, and locations pulled out of your data. For language detection, if I'm not wrong, we support around 100 languages that we can detect in a text. It finds key phrases: what are the key phrases cropping up, what is the structure behind the text you're reading. And it does topic modeling, which is something very close to me because I've been working on it a lot with my customers: topic modeling finds the relevant topics and terms in a body of text. When you have medical data, when you have documents at petabyte scale that you would otherwise have to go through manually, that's when you realize that finding common topics becomes a pain; when you have news articles, finding the themes across them is something topic modeling really helps with. This is a place where we're seeing a lot of traction, because customers using Comprehend are saving not only time but cost and resources.

The middle layer, the machine learning services layer, is Amazon SageMaker. Who here has not heard of Amazon SageMaker? Okay. Amazon SageMaker is a fully managed machine learning service from AWS that makes building, training, and deploying models easier, faster, and more scalable, and under that umbrella we have Ground Truth, Neo, and all of the new features that came out last year. With Ground Truth you can start labeling your data at a much faster pace, and again there's a lot of cost reduction in using Ground Truth to label your training set.
So what does SageMaker consist of? It helps us build our models faster and more easily, but it also provides performance that isn't available elsewhere, and the reason I bring that up is that out of the box we get a set of algorithms that are up to ten times faster than on other platforms, because they're already tuned to work with the best infrastructure and they can scale based on the needs of that particular model training. Training your model is one-click training and tuning; the experienced machine learning practitioners who have been doing this know how time consuming it can be, because it's an iterative process and you keep going back and forth before you actually get to the point where you're ready to deploy your model. Training with SageMaker is easy and scalable because the infrastructure underneath SageMaker grows and shrinks based on your needs, and you pay only for what you use. Deployment is one-click deployment to your production systems, which is much easier to manage, and we use Jupyter notebooks within SageMaker to create our models. With that, I'm going to hand it over to Ross to talk about how ECS is using these services within their portfolio.

Thank you, Karen. Hello everybody, Ross Sereno, director of cloud transformation with ECS. Thanks for the great intro; now I want to dive into what we're really doing to solve some of these challenges. We wanted to frame this around the fact that we see the same thing everywhere we go: it doesn't matter whether it's government or commercial, customers see the same challenges everywhere. The platform we've been talking about really started five or six years ago, and the first build of it was actually a .NET desktop application for our billing team to download detailed billing records from S3, because at that time it was too complex to walk our accounting team through logging into the AWS console, going to S3, and pulling files. That was the genesis of this platform. Back then, when we were working with the APIs, one of the questions that came up was: why do we even have a database? We were a .NET development shop, used to Entity Framework and SQL Server, very used to building systems with databases, and one of the first questions I asked was: why is S3 not just our database? There were lots of answers back then as to why not, but now it is.

What we've done is take the power of AWS that we've been outlining and start talking about data in a very loose sense: data no longer means records in a database to us, it means files sitting in S3. We've really followed Amazon's lead on this, and I would say it started with CloudTrail. Once you've moved to the cloud, you have data "for free," so to speak, that Amazon is generating for you, and I bring up CloudTrail first because if you've ever seen CloudTrail logs in S3, they're almost incomprehensible to open up and digest. Yes, there's a CloudTrail console, but what if you need more history? That's where Amazon Athena comes in. So when we say we're data warehousing into S3 and analyzing with Amazon Athena, I like to tie in the use case of querying your CloudTrail logs with Athena. If you've never done it, it's very easy: there's a blog post that walks you through exactly how to run a CREATE EXTERNAL TABLE statement in Athena, and then you can query your full history of CloudTrail logs, and it works even when you aggregate multiple accounts, just by the nature of how Amazon stores the data in S3.
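To make that concrete, here is a minimal, hedged sketch of the same idea using boto3's Athena client. The table definition is abbreviated (the blog post mentioned above has the full CloudTrail column list), and the bucket names are placeholders:

```python
import time
import boto3

athena = boto3.client("athena")

def run(sql, database="default", output="s3://my-athena-results/"):  # placeholder bucket
    """Start an Athena query and wait for it to finish."""
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output},
    )["QueryExecutionId"]
    while True:
        status = athena.get_query_execution(QueryExecutionId=qid)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            return qid, state
        time.sleep(2)

# Abbreviated table definition; the full CloudTrail column list is in the AWS
# documentation.  The bucket name is a placeholder.
run("""
CREATE EXTERNAL TABLE IF NOT EXISTS cloudtrail_logs (
    eventtime STRING,
    eventname STRING,
    eventsource STRING,
    awsregion STRING,
    sourceipaddress STRING,
    useridentity STRUCT<type:STRING, arn:STRING>
)
ROW FORMAT SERDE 'com.amazon.emr.hive.serde.CloudTrailSerde'
STORED AS INPUTFORMAT 'com.amazon.emr.cloudtrail.CloudTrailInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION 's3://my-cloudtrail-bucket/AWSLogs/'
""")

# Example question against the full history: who is signing in to the console, and how often?
qid, state = run("""
SELECT useridentity.arn AS principal, count(*) AS logins
FROM cloudtrail_logs
WHERE eventname = 'ConsoleLogin'
GROUP BY useridentity.arn
ORDER BY logins DESC
""")
print(state, qid)
```

Because CloudTrail writes one prefix per account and region under AWSLogs/, pointing LOCATION at the top of that hierarchy is what makes the multi-account aggregation work with no extra effort.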
The other data you get for free includes things like ALB logs, NLB logs, and VPC Flow Logs. It's all just objects in S3: it can be a CSV file, it can be a log you read with regular expressions, but we've really started leveraging that as our platform, and we've become less focused on writing .NET code and more focused on answering the questions our customers ask us.

To give you a visual of what I'm describing: we have various data sources, and to keep it simple we call them CSV files in this architecture. When we gather analytics about an environment, we pull that information from the AWS APIs, above and beyond the data I was just referring to, and we data warehouse all of it into S3. No matter what format it's in, as long as we can create a schema for it in Athena, we can prepare that data for machine learning. The number one challenge we have when we work with customers on ML use cases is working through their data: what is your data, where is your data, how is your data collected, and at what rate is new data coming in? From there you open up a whole new area of exploration, depending on whether streams are necessary for processing or whether processing can be done in batch, but it all starts with understanding and visualizing your data.

Here's an example of visualizing data. This is the AWS bill for one of our customers that experiences what you might call seasonality: for certain use cases, certain days of the week or certain months have increased load compared to other times, which makes costs very variable. This particular customer is in the education space, and during examination time costs go up. When you try to digest what's happening, it gets very hard, because this is a worldwide application (the Australian summer is different from the U.S. summer) and everything is in flux because new customers are coming on and old customers are leaving. The statistical model shown here is a visualization we had done in Excel, trying to do curve fitting in Excel. You can certainly build more advanced statistical models than Excel offers, but the point is that when you rely on simple statistical averages, you end up with some kind of curve that doesn't follow your data, whereas when you use learning algorithms, the model follows your data. The two lines you see are a prediction on the same data set from Amazon SageMaker, with the actual spend in blue and our prediction in orange. We visualize the data this way to say: yes, you have a machine learning problem. We try to educate people about what we actually want to tackle, because our customer's question was "what will my annual spend be?", and to calculate annual spend we realized we had to go very deep down the rabbit hole. By developing models like this, you get much more insight into what all of this looks like. And now you see services coming out of Amazon like Amazon Forecast, where the same kind of analytics we were running on SageMaker is going to be available to everybody as a service.
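The talk doesn't say which algorithm sat under that prediction; as one hedged example of how a seasonal, daily spend series can be modeled on SageMaker, here is a sketch using the built-in DeepAR forecasting algorithm via the SageMaker Python SDK. The role ARN, bucket paths, and hyperparameter values are placeholders, not the values used for this customer:

```python
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role

# Built-in DeepAR container image for the current region.
image = image_uris.retrieve("forecasting-deepar", session.boto_region_name)

estimator = Estimator(
    image_uri=image,
    role=role,
    instance_count=1,
    instance_type="ml.c5.2xlarge",
    output_path="s3://my-ml-bucket/deepar/output/",  # placeholder bucket
    sagemaker_session=session,
)

# Daily spend series ("D"), predicting roughly a month ahead from a 90-day context.
estimator.set_hyperparameters(
    time_freq="D",
    prediction_length=30,
    context_length=90,
    epochs=100,
)

# Training data is JSON Lines, one series per account or product, e.g.
# {"start": "2018-01-01", "target": [123.4, 130.1, ...]}
estimator.fit({"train": "s3://my-ml-bucket/deepar/train/"})

# Deploy a real-time endpoint to query the forecast.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```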
What we do at ECS when Amazon develops these new capabilities is that we don't try to compete with them or fight them; we absorb those benefits and those new technologies into our approach, because our approach is customer centric, built around the questions and needs our customers have.

Now I'll get a little deeper into some of the things we've built for customers, and the first challenge is predictive spend. I want to show you the visualization first so you can understand what this customer was trying to do. This customer has over 80 AWS accounts, various products with different seasonality, and they wanted analysis both for the entire portfolio and at the lowest level, based on how the customer structures itself. When Imran talked about the common cloud framework earlier and said that when we discuss account management we really want to understand what you're doing with your accounts, this is why: we can tie that structure into the models and say, for example, "your finance department is spending this much," whether that maps to ten AWS accounts or one. It doesn't matter, because we abstract it to keep things easy, going back to the theme of "where is the easy button?"

To do this we leveraged the technology I've been describing, Amazon SageMaker and Athena analyzing the detailed billing data, to support an advanced forecasting and prediction model for these monthly costs. The benefit we realized by running this analysis is that we purchased over ten million dollars in Reserved Instances on EC2, because we were confident, based on these workloads, that the investment would be worth it. That involved our analytics as well as working directly with the teams that own the applications running on this infrastructure, to validate those large purchases. We see over 25 percent better accuracy on our forecasting and prediction since moving to an ML-based model, compared to the statistical curve we were trying to follow before.

We bring all of this together to give executive-level insights, letting leadership plan how to spend 75 million dollars over five years, as well as detailed analysis for individual developers, so we can tell them, "you had a really big spike at this time, and typically when you spike like that you come back down, but this time you didn't come down as much as you usually do. Why?" With some of our customers we found that things didn't get cleaned up; the automation happened to fail. It was supposed to clean up, but it didn't. When we talk about anomaly detection, that's the kind of anomaly we're referring to. These machine learning use cases don't have to be overly complex; this is your day-to-day business, and we're just using algorithms to better answer your questions. The final thing we did with this customer relates to the fact that Reserved Instances are purchased per region: we developed a comprehensive strategy for how they can forecast and map across their entire portfolio by region, and also do global prediction for an individual product.

With that, we find that as you expand your scope in AWS, it becomes more and more important to have a very clear governance model in place up front. The reason is that with good governance in place, you know what normal looks like. If at some point things were normal, that can be your baseline, and once you start deviating from normal is when anomaly detection comes in.
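The detection logic itself isn't spelled out in the talk; as a hedged, minimal illustration of that "baseline plus deviation" idea, this sketch flags days whose cost moves too far from a trailing baseline. The window and threshold are arbitrary placeholders, and the daily series could come from the detailed billing data warehoused in S3:

```python
from statistics import mean, stdev

def flag_spend_anomalies(daily_costs, window=30, threshold=3.0):
    """Flag days whose cost deviates from a trailing baseline.

    daily_costs: list of (date, cost) tuples in date order.
    window:      how many prior days form the "normal" baseline.
    threshold:   how many standard deviations away counts as anomalous.
    """
    anomalies = []
    for i in range(window, len(daily_costs)):
        baseline = [cost for _, cost in daily_costs[i - window:i]]
        mu, sigma = mean(baseline), stdev(baseline)
        date, cost = daily_costs[i]
        if sigma > 0 and abs(cost - mu) > threshold * sigma:
            anomalies.append((date, cost, round(mu, 2)))
    return anomalies

# Example: a sudden jump stands out against the trailing month of spend.
series = [("2019-03-%02d" % d, 100.0 + d % 5) for d in range(1, 31)]
series.append(("2019-03-31", 400.0))
print(flag_spend_anomalies(series))
```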
This is somewhat related to the previous use case, but the question our customer asked us was: the one million dollars we just spent on RDS, was that a good investment? They weren't sure how to think about the cloud. This customer was used to buying hardware, used to making capital investments and amortizing the cost of that hardware over time. When we shifted them to AWS, EC2 made sense, but when you look at something like RDS and say "I just spent a million dollars on databases, did I get a great deal or did I get ripped off?", from a CFO's perspective all they see is the million dollars. What we were able to do through our analysis is verify that, based on their small-scale workloads compared to their large-scale ones, and compared to our other customers, they were not overspending on RDS; they were spending about what they should have. That let us establish a baseline, so now, for every other purchase we make, we check against these things to make sure it's a valid purchase. We also look at other metrics like utilization and coverage, track them constantly, compare them against our predictions and models, and course-correct as necessary. But the only way we can course-correct based on that analysis is if the clear governance structure was set up at the beginning, because when you look at something like purchasing RIs or adopting new services, you have to really understand what you're working with and be very deliberate about the changes you're making.

Two more use cases I want to get through. We've talked a lot about financials, but we also want to highlight that you can look at more than just time series data. A lot of our early models were based on time series, and anything that's a time series we have an approach to model, learn, and predict. But to take it beyond time series data like billing: everybody has web server logs, whether you get them from ALB logs or from Apache or IIS. One of the questions we hear more and more is: was there malicious activity being attempted against my web server? There are a lot of advanced products that try to answer that question, but in the most basic sense, if you just look at your logs you might find something. If anybody sees "( UNION SELECT" in their logs, you probably want to block that traffic. And if you've never looked at your logs closely enough to say "these URLs look very different from every other request," you might find some surprises, because people who click through your UI follow a very predictable pattern in your logs, while people who are not actually using your user interface but are manipulating URLs will stick out like a sore thumb once you start visualizing or modeling your data. Above and beyond using SageMaker and Athena, you can simply look at this data, flag it, and train on it.
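As described in the Q&A at the end, the model behind this was supervised, trained on requests classified as good or bad against a known baseline. The labeling step can start as simply as a pattern scan over the request URLs; here is a hedged sketch that assumes combined-format access log lines and a placeholder file name, with patterns that are only a starting point rather than a complete ruleset:

```python
import re

# A few obvious injection / traversal signatures.  A starting point for labeling,
# not a complete ruleset; matching is case-insensitive and allows common encodings.
SUSPICIOUS = re.compile(
    r"(union(\s|\+|%20)+select|\.\./\.\./|<script|/etc/passwd)",
    re.IGNORECASE,
)

# Pull the request URL ("GET /path HTTP/1.1") out of a combined-format log entry.
REQUEST = re.compile(r'"(?:GET|POST|PUT|DELETE|HEAD) (\S+) HTTP')

def label_log_lines(path):
    """Yield (label, url) pairs: 1 = suspicious, 0 = looks like normal UI traffic."""
    with open(path, encoding="utf-8", errors="replace") as handle:
        for line in handle:
            match = REQUEST.search(line)
            if not match:
                continue
            url = match.group(1)
            yield (1 if SUSPICIOUS.search(url) else 0, url)

# The labeled pairs can then feed a supervised classifier (for example one of the
# SageMaker built-in algorithms) once the URLs are turned into features.
if __name__ == "__main__":
    for label, url in label_log_lines("access.log"):  # placeholder file name
        if label:
            print("FLAG", url)
```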
From there you start getting reports that these URLs and headers don't match what we're used to seeing, and then you can look at the IPs the requests came from, and it's, oh, this is a foreign nation state, we should probably block them. If you have something like AWS WAF in place, you can take that intelligence and turn it into rules so that your server never even sees traffic that matches those patterns, and you're not blocking legitimate traffic, because legitimate patterns are already incorporated into the model we're looking at. And, sorry, the benefit we achieved there was over 90 percent detection of that anomalous activity; like I said, some of those requests very much stick out like a sore thumb.

That was a lot about SageMaker, about training, and about some of the analytics we do. But when we look at services like Comprehend, and I want to highlight Comprehend in particular, there's no reason everybody shouldn't just use this service. Besides the customer use case I'm about to dive into, I actually used Comprehend earlier this year for performance reviews. I took the emails I had exchanged with my colleagues and processed them through Comprehend to see what we had been doing in the first quarter, because three weeks into the second quarter I couldn't remember the first quarter. Through one of the topic modeling exercises, I found we had worked on an RFI I had completely forgotten about, and I wrote up a recommendation and a kudos for one of my colleagues for supporting that RFI. When I mentioned it to him, he said, "I forgot about that RFI." It was genuinely joyful getting that kind of insight, because I had wanted to read all of my old emails but didn't have time. Isn't there something that can read all of this for me? That's Comprehend.

What we did with our customer was around support cases. The customer asked us, to oversimplify what they were after: what is it in my support case data that I should care about? What are all of these support cases being opened for? The analysis covered both AWS support cases and internal support cases that lived in ServiceNow. One of the challenges was getting all of that data prepped so it could be analyzed by Comprehend, but we followed our standard process for preparing data, and once Athena could see our data, we knew Comprehend could work with it too. What we did through this Comprehend analysis was topic-model all of the discussions going through the support cases, and we learned that two teams were having the same issue with RDS and neither knew they were working on the same problem. It's very interesting when you start seeing this: as long as you understand the data you're feeding into these algorithms, they will give you very useful insight just out of the box, just from the report Comprehend natively generates.

The other thing we did was look at sentiment in the support cases, because one of our customers was concerned that teams were potentially being less than professional, and we were curious whether that had actually trickled into the support cases or whether there had just been a few heated meetings. When we ran the sentiment analysis, we found that while everything was overwhelmingly positive, you can also start to see neutral sentiment, which isn't necessarily negative, but certain groups, when frustrated, will shift from positive sentiment to neutral sentiment in how they ask for support.
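A minimal sketch of the sentiment piece with boto3's Comprehend client; the case text here is inlined as a placeholder, whereas the pipeline described above fed prepared case bodies from S3, and the topic modeling side runs as a separate asynchronous Comprehend job:

```python
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

# Placeholder support-case text; in the pipeline described above this would be
# the prepared case bodies pulled from S3.
cases = [
    "The RDS failover worked exactly as expected, thanks for the quick help.",
    "Still waiting on a response, the database has been degraded since Monday.",
]

for text in cases:
    result = comprehend.detect_sentiment(Text=text, LanguageCode="en")
    scores = {k: round(v, 2) for k, v in result["SentimentScore"].items()}
    print(result["Sentiment"], scores)

# Topic modeling over a large corpus runs as a separate asynchronous job that
# reads from and writes to S3, e.g. comprehend.start_topics_detection_job(...).
```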
If you start looking at that, you can find areas of concern within your organization just based on the type of language people are using, no matter what they're talking about. But you have to understand what data you have and how to get it into a format these algorithms can read, and then you get to do the fun part, because then you're at the inference layer and you're just extracting insights from your data. As Amazon releases more and more features around this, you're only going to have easier services for finding those insights.

Just to recap, we wanted to highlight some of the values and benefits we've been discussing. Move beyond statistical analysis: algorithms are your friend, they are world class today, and they're very easy to use. Use a process where you continuously optimize, so you're always getting better; we find that doing this, with the governance model up front, really empowers everything that comes downstream. Serverless is also your friend: where possible, don't run servers, because if you don't run servers you don't have to maintain them, and especially for managing your data warehouse, having a completely serverless data warehouse is very beneficial. Even if you already have another data warehouse, still put all of your data in S3, because you might want to ask a question of it later, and when you start weaving all of these AWS services together you really get the best of everything.

Finally, we wanted to share some lessons learned. Data prep is essential: bad data gives bad results. If all of your data looks the same because of timestamps, your algorithm is going to say all your data looks the same because of timestamps, so you'll need a way to trim that out (there's a small sketch of that kind of trimming after this recap). We have techniques and approaches we've developed that we load back into our common cloud framework, so each customer we engage with gets the benefit of the lessons learned from every other customer we've worked with. ML is a tool, it's an analysis, but you have to understand your data and what you're looking at to truly make decisions based on what you see. Large data is very good, and large data that you're confident in is even better when you're doing learning; with AWS, this is no longer an ML challenge, it's a data challenge, so keep coming back to that data theme: where is your data, how can you access it, and how can you feed it to these services and technologies? The same challenges we had in traditional development exist in ML: a lot of ML is Python based, Python 2 died, what, a couple of years ago, and it's still in use in production today because so many things depend on it, even though everybody says use Python 3; the same developer challenges that have existed forever. And when you start looking at petabyte scale, think of something like Amazon Athena, because S3 will scale effectively without limit, and Athena, if configured correctly, will only scan the data that's relevant to the query you're running. So as these things grow together, it doesn't matter how high you scale; you have a model in place that can absorb the benefits coming out in the future.
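Picking up the data prep lesson above, here is a hedged sketch of trimming timestamps and ticket numbers out of text before topic modeling so the model doesn't latch onto them. The patterns are illustrative assumptions, not the framework's actual cleaning rules:

```python
import re

# Illustrative patterns only; real cleaning rules depend on what your data looks like.
TIMESTAMP = re.compile(
    r"\b\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}(:\d{2}(\.\d+)?)?(Z|[+-]\d{2}:?\d{2})?\b"
)
TICKET_ID = re.compile(r"\b(?:INC|CASE|REQ)\d{6,}\b", re.IGNORECASE)

def trim_for_topic_modeling(text):
    """Remove timestamps and ticket numbers that would otherwise dominate the topics."""
    text = TIMESTAMP.sub(" ", text)
    text = TICKET_ID.sub(" ", text)
    return re.sub(r"\s+", " ", text).strip()

print(trim_for_topic_modeling(
    "INC0012345 opened 2019-04-02T09:15:00Z: RDS failover took longer than expected."
))
# -> "opened : RDS failover took longer than expected."
```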
Finally, don't feel like you have to do deep learning right out of the gate. Do shallow learning, do something simple, just so you can start understanding what inference looks like, because even if you don't run your own models, when you start looking at inference from other models you need to know what they're actually saying, or at least have the language to ask somebody else to build something that gives you what you're looking for. Or, if you're evaluating a COTS product: once you understand the pieces that go into these algorithms, then when you look at something like an F5 appliance running machine learning on application traffic, it will mean something. It's not a magic black box; these are industry-standard algorithms that everybody is using, and they're available to everybody with the click of a button on AWS now, and ECS is happy to help our customers through all of this.

With that, thank you very much. We're also on the expo floor at booth 724 if you want to keep following up with us; we'll be down in the expo hall the rest of the day. And with that, I'd like to open it up for questions.

[Q&A] On the web log analysis, let's go back to that. We were able to detect and filter out over 90 percent of that anomalous traffic, so we saw pretty high accuracy with the model we built. It was a supervised training model, so our next step is to look at unsupervised training across this data. We worked from an initial analysis of a known baseline of data, did the classification of good data versus bad data, and then put together a system to feed updates back into that model; that's how we got that accuracy. We are looking at deep learning models that wouldn't require supervised learning for this, and I mention the F5 because the F5 does that, and we're trying to figure out exactly how, since the documentation doesn't exactly say.

We have not transitioned to Amazon Forecast yet, so we're excited to start using Forecast as it comes out. We've been building all of this ourselves, and that's how it's gone over the last five or six years: we figure something out ourselves, Amazon turns out to be working on the same problem, they solve it at a mega global scale, and we end up using their solution. But to solve our customers' needs we have our internal tools that we can use as stopgaps before new features come out.

All right, thank you very much. A round of applause, please, for our speakers.
