This course takes an introductory look at the SageMaker platform, specifically within the context of preparing data and building and deploying machine learning models.
During this course, you'll gain a practical understanding of the steps required to build and deploy these models, along with how SageMaker can simplify the process by handling much of the heavy lifting on both the model management and data manipulation sides, as well as providing other general quality-of-life tools.
If you have any feedback relating to this course, feel free to contact us at support@cloudacademy.com.
Learning Objectives
- Obtain a foundational understanding of SageMaker
- Prepare data for use in a machine learning project
- Build, train, and deploy a machine learning model using SageMaker
Intended Audience
This course is ideal for data scientists, data engineers, or anyone who wants to get started with Amazon SageMaker.
Prerequisites
To get the most out of this course, you should have some experience with data engineering and machine learning concepts, as well as familiarity with the AWS platform.
And finally, after all of this effort, after all of this training, creation, and labeling, we're actually able to start using the model, AKA running inference against it. Of course, SageMaker has a section for this. At its core, what you're going to do is create an endpoint through which you can call the model. But let's check that out by clicking into the inference section of SageMaker.
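If you're working from a notebook, here's a minimal sketch of what standing up that endpoint might look like with the SageMaker Python SDK. It assumes an `estimator` object from a completed training job earlier in the notebook; the instance type, endpoint name, and sample payload are illustrative placeholders.

```python
# Hedged sketch: deploying a trained estimator as a real-time endpoint.
# `estimator` is assumed to come from a completed SageMaker training job.
predictor = estimator.deploy(
    initial_instance_count=1,           # number of instances behind the endpoint
    instance_type="ml.m5.large",        # instance size serving the model
    endpoint_name="my-model-endpoint",  # hypothetical endpoint name
)

# The returned predictor can then be called directly from the notebook
# (sample_payload is a placeholder for data in the format your model expects).
result = predictor.predict(sample_payload)
```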
Once again, this is, of course, accessible through the hub, AKA the dashboard. To dive into this, the way Amazon encourages you to interact with your SageMaker model is basically through an HTTP/HTTPS endpoint. You're able to make calls to it, send it data, and get a response. Of course, coming out of the training job, and especially coming out of the notebook, you're also able to take the Python objects, or rather the framework's objects, and import those into other machine learning frameworks.
So don't think you need to use SageMaker's endpoint system in order to use your models. A model could actually be coded directly into your existing applications, especially after an export. But an endpoint is a really strong way of decoupling your machine learning from your production instance, in that your application developers and coders are able to just make a REST call and get an inference result.
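As a hedged sketch of what that call might look like from application code, here is how a deployed endpoint could be invoked with boto3's SageMaker runtime client. The endpoint name, content type, and payload shape are placeholders; adjust them to whatever your model's inference container actually expects.

```python
import json
import boto3

# Hedged sketch: invoking a deployed SageMaker endpoint from application code.
runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="my-model-endpoint",               # hypothetical endpoint name
    ContentType="application/json",                 # match your container's format
    Body=json.dumps({"features": [0.5, 1.2, 3.4]}),  # placeholder payload
)

# The prediction comes back in the response body as bytes.
prediction = response["Body"].read().decode("utf-8")
print(prediction)
```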
So basically, when designing your deployment, think about whether you want it to be an endpoint, completely detached and segregated like a microservice, or whether you want to export the model and have it act more like a library within an application.
Assuming you want to go with an endpoint, the configuration can be set within Amazon. There aren't too many options, but you can configure how the endpoint will behave, and more importantly, you need to decide whether you're going to capture data. As a data scientist or machine learning creator, you're pretty much always going to want to enable this: you'll want to capture both the incoming prediction requests and the responses you gave them. Depending on the volume, you can of course increase the sampling percentage to make sure everything gets stored, or capture just a percentage of it. This allows you to tell how your consumers are interacting with your model in the real world.
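If you're deploying from the SageMaker Python SDK, a minimal sketch of enabling data capture might look like the following. The S3 destination is a placeholder, and `estimator` is again assumed to be a trained estimator from earlier in the notebook.

```python
from sagemaker.model_monitor import DataCaptureConfig

# Hedged sketch: capturing requests and responses for a real-time endpoint.
capture_config = DataCaptureConfig(
    enable_capture=True,
    sampling_percentage=100,                       # store every request/response
    destination_s3_uri="s3://my-bucket/capture",   # hypothetical bucket/prefix
    capture_options=["REQUEST", "RESPONSE"],       # capture both directions
)

predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    data_capture_config=capture_config,
)
```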
But there's a huge caveat, and I'm speaking from personal experience here. Some applications are extremely sensitive, and you're not allowed to store either the incoming data or the outgoing predictions that SageMaker lets you capture. Maybe you're only allowed to store the prediction result, so you can test for prediction skew over time. But assuming that, privacy-, security-, and permission-wise, you're allowed to capture both the incoming and outgoing data, I really encourage you to do so, at least at first, because it allows you to debug your model and, very importantly, look for edge conditions you might not have seen in testing.
Calculated Systems was founded by experts in Hadoop, Google Cloud, and AWS. Calculated Systems enables code-free capture, mapping, and transformation of data in the cloud based on Apache NiFi, an open source project originally developed within the NSA. Calculated Systems accelerates time to market for new innovations while maintaining data integrity. With cloud automation tools, deep industry expertise, and experience productionalizing workloads, development cycles are cut down to a fraction of their normal time. The ability to quickly develop large-scale data ingestion and processing decreases the risk companies face in long development cycles. Calculated Systems is one of the industry leaders in Big Data transformation and education of these complex technologies.