Solution Architect Professional for AWS - Domain Three - Deployment Management

Meeting Requirements

Overview

Difficulty: Advanced
Duration: 1h 44m
Students: 1243

Description

Phase 1 of our solution requires that we remove as much of the manual process as possible without impacting customers. There are several ways we can go about this, so let's examine one possible solution. The current process involves a lot of emails that require a person, especially the CSR, to act on them. Can we improve this? We could automate the start of the workflow.

To accomplish this, we could create a cron job that connects to the CSR's inbox to retrieve new messages. Upon retrieving a message, it could trigger a function that would extract the plans, the message, and the sender's email address, and insert them into a print job table with a status indicating the job is pending account verification. Once extracted, the function would look up the sender's email address in the account table. If no account exists, the CSR is notified, and he or she will create a new spreadsheet with the account information and save it to shared storage. A function will then be invoked, which would extract the account information from the spreadsheet and insert it into the account table. If an account exists but needs updated information, the CSR is notified, and he or she will update the existing spreadsheet and save it back to shared storage. The function would then update the account information stored in the database. In both cases, once the data is updated, we would change all the jobs associated with the customer from the pending account verification status to the awaiting quote status. If there is no account exception when the account is checked, the job will immediately be updated to the awaiting quote status. So far, so good! Minimal impact to both the customer and the CSR.
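
As a rough illustration of this first step, the sketch below shows what the cron-driven poller on the EC2 instance might look like. It assumes an IMAP-accessible CSR mailbox and a Lambda function (here called start-print-job-workflow) as the entry point of the workflow; the host, credentials, and function name are placeholders rather than part of the scenario.

```python
# Hypothetical cron-driven poller, run every few minutes from crontab on the EC2 instance.
# Assumes an IMAP-accessible CSR mailbox and a Lambda function named "start-print-job-workflow".
import imaplib
import json
import boto3

IMAP_HOST = "mail.example.com"                  # assumption: the CSR mailbox host
IMAP_USER = "csr@example.com"                   # assumption
IMAP_PASSWORD = "change-me"                     # assumption: fetch from a secrets store in practice
WORKFLOW_FUNCTION = "start-print-job-workflow"  # assumption: the first Lambda in the chain

lambda_client = boto3.client("lambda")

def poll_inbox():
    """Fetch unread messages and hand each one to the workflow Lambda."""
    mail = imaplib.IMAP4_SSL(IMAP_HOST)
    mail.login(IMAP_USER, IMAP_PASSWORD)
    mail.select("INBOX")
    _, data = mail.search(None, "UNSEEN")
    for num in data[0].split():
        _, msg_data = mail.fetch(num, "(RFC822)")
        raw_message = msg_data[0][1].decode("utf-8", errors="replace")
        # Invoke the Lambda asynchronously so the cron job stays lightweight.
        lambda_client.invoke(
            FunctionName=WORKFLOW_FUNCTION,
            InvocationType="Event",
            Payload=json.dumps({"raw_email": raw_message}),
        )
        mail.store(num, "+FLAGS", "\\Seen")  # mark the message as processed
    mail.logout()

if __name__ == "__main__":
    poll_inbox()
```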

Next in the workflow, we need to send the job to the print operator for a quote, based on the plan and the quantities required. The job changing to the awaiting quote status will trigger the process to notify the print operator, who should receive the plans and any messages sent with them. After reviewing the plan, the print operator will complete a quote and questions spreadsheet. If the quote cannot be completed because there are questions that must be answered, the quote field in the spreadsheet is left blank and the questions are entered in the questions section. If the print job cannot be fulfilled, the rejected section is completed with the reason why. If the quote can be completed, the quote information is filled in. Finally, the print operator saves the spreadsheet to shared storage.

Like our other steps, saving to shared storage will trigger a function. The function will extract the data and either insert it into or update it in our tables. Then it will take one of several actions, depending on what is contained in the print job spreadsheet. If there are questions, the print job status is changed to awaiting answers, and the CSR is notified to speak with the customer. The CSR will update the spreadsheet with the answers and save it to shared storage. Again, a function will be triggered to insert the answers into the database, change the print job status to awaiting quote, and notify the print operator to provide that quote. If the print job cannot be completed, the print job status is updated to canceled, and the CSR is notified to contact the customer. In the last scenario, the function will add the quote to the print job table, change the status to awaiting customer approval, and notify the CSR to get approval from the customer.

After the customer responds, the CSR updates the print job spreadsheet and saves it back to shared storage. Yet another function will parse the print job spreadsheet and change the status of the print job based on the approval status. If the approval status is set to not approved, the workflow ends. If a new quote is requested, the previous workflow is triggered, which sends the print job back to the operator in the awaiting quote status with the additional customer comments. If the customer approves the quote, the print job status is changed to approved, and the print operator is notified to begin the print job.
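
To make the shared-storage step concrete, here is a minimal sketch of an S3-triggered function, assuming the quote and questions spreadsheet is saved as a simple CSV, the print job table is a DynamoDB table named PrintJobs, and notifications go to a single SNS topic. The table name, topic ARN, and column names are illustrative assumptions, not details from the scenario.

```python
# Minimal sketch of the Lambda function triggered by the S3 put event on shared storage.
import csv
import io
import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")
sns = boto3.client("sns")

PRINT_JOB_TABLE = "PrintJobs"                                            # assumption
NOTIFY_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:print-workflow"   # assumption

def handler(event, context):
    table = dynamodb.Table(PRINT_JOB_TABLE)
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
        row = next(csv.DictReader(io.StringIO(body)))  # one job per spreadsheet, by assumption

        # Decide the next status based on what the print operator filled in.
        if row.get("rejected_reason"):
            status, message = "canceled", "Job canceled: contact the customer."
        elif row.get("questions"):
            status, message = "awaiting answers", "Questions pending: speak with the customer."
        else:
            status, message = "awaiting customer approval", "Quote ready: get customer approval."

        table.update_item(
            Key={"job_id": row["job_id"]},
            UpdateExpression="SET #s = :s, quote = :q",
            ExpressionAttributeNames={"#s": "status"},
            ExpressionAttributeValues={":s": status, ":q": row.get("quote", "")},
        )
        sns.publish(TopicArn=NOTIFY_TOPIC_ARN, Subject=f"Print job {row['job_id']}", Message=message)
```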

On completion of the print job, the print operator will update the print job spreadsheet to indicate that the job is completed and save it to shared storage. Another function is triggered to update the print job table with the awaiting delivery status, and the courier is notified of a pickup. The courier picks up the print job, delivers it to the customer, and updates the print job spreadsheet to indicate the print job was delivered. When the spreadsheet is saved, the print job table is updated with the delivered status. If for some reason the customer decides to reject the delivery, the spreadsheet is updated to the rejected delivery status and a reason is included. When it is saved to shared storage, the print job table will be updated with the status and reason, and the CSR is notified about the issue in order to resolve the customer complaint. This may trigger the print job to be performed again, or the print job may be canceled altogether. Either way, the previous workflow steps are followed.

How would we implement a solution like this? For starters, there's no managed AWS email-receiving endpoint that triggers the start of the workflow. To get this functionality, we might have to use an EC2 instance for the cron job that regularly checks the mailbox for new messages. When a message arrives, the job could then invoke a Lambda function. For the shared storage, we would use S3 with events that execute Lambda functions. For database storage, we have many managed options; our best option is probably DynamoDB, based on the update stream functionality that would let us trigger Lambda functions when data is inserted or updated. Lastly, we can use Simple Notification Service to send notifications to the CSR, print operator, and courier. The combination of these AWS services helps us build a workflow system that can be used in future phases. For example, if a web application inserts data into the print job table, our update streams will execute just as they did previously, using a proven workflow. By using DynamoDB early in our design, we also get the benefit of storing customer history that can be used later.

How does this solution comply with the constraints we were presented with by Acme Printing? If we recall from earlier, the constraints were minimal cost, little or no downtime, and best security practices.
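
The update-stream piece might look something like the following: a function attached to the DynamoDB stream on the print job table that publishes an SNS notification whenever a job's status changes. The attribute names and per-audience topic ARNs are assumptions made purely for illustration.

```python
# Sketch of a Lambda function subscribed to the DynamoDB stream on the print job table.
import boto3

sns = boto3.client("sns")

# Assumption: one SNS topic per audience, keyed by the status that concerns them.
TOPIC_FOR_STATUS = {
    "awaiting quote": "arn:aws:sns:us-east-1:123456789012:print-operator",
    "awaiting customer approval": "arn:aws:sns:us-east-1:123456789012:csr",
    "awaiting delivery": "arn:aws:sns:us-east-1:123456789012:courier",
}

def handler(event, context):
    for record in event["Records"]:
        if record["eventName"] not in ("INSERT", "MODIFY"):
            continue
        new_image = record["dynamodb"].get("NewImage", {})
        status = new_image.get("status", {}).get("S")
        job_id = new_image.get("job_id", {}).get("S")
        topic = TOPIC_FOR_STATUS.get(status)
        if topic:
            # Notify the person responsible for the next step in the workflow.
            sns.publish(
                TopicArn=topic,
                Subject=f"Print job {job_id}: {status}",
                Message=f"Print job {job_id} is now '{status}'. Please take the next action.",
            )
```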

Cost-wise, we're using one EC2 instance. Since the customer has not created an AWS account previously, we can take advantage of the free tier by running a micro instance that falls under the free tier. The EC2 instance doesn't require much compute power, because all it is doing is running a cron job that in turn triggers a Lambda function. The execution of Lambda functions is also covered under the free tier up to a certain point; after that, the cost is fractions of a penny per execution. Amazon Simple Notification Service is also covered under the free tier and is low cost. We can get around data transfer costs by linking to the spreadsheets in Amazon S3 instead of filling up the message with the data. S3 and DynamoDB storage is inexpensive.

Our solution also needs to be designed to experience little or no downtime. The managed services have high availability as part of their SLAs. The weakest link in our chain is our EC2 instance. To build around this, we can create an Amazon Machine Image from our instance and configure a launch configuration to use it. Then we can create an Auto Scaling group with the minimum, maximum, and desired capacity all set to one, covering all Availability Zones in our selected region (a rough sketch of this setup follows at the end of this section). Optionally, we could purchase reserved instances, which not only lower the total cost but also mean that our EC2 instance will not have any issues if AWS has capacity constraints.

When it comes to security, we rely heavily on Identity and Access Management to implement roles and policies. Lambda functions will only be granted access to the resources they need, such as the update stream, the DynamoDB tables, and the S3 bucket. User access to S3 will be limited to our internal roles for the CSR, print operator, and courier. The EC2 instance will run under a role that is allowed to invoke only the first Lambda function, which triggers the rest of the workflow. Amazon Simple Notification Service topics will be limited to those roles as well.

So as we can see, we can easily meet all of the constraints. This is just one possible solution to the problem. Other solutions might utilize Simple Workflow or Simple Queue Service instead of how we have proposed to implement the workflow. That solution would require additional work to be performed on our EC2 instance, possibly making our choice of a micro instance inadequate, which would then increase the cost. It also pushes the burden of the work away from the managed services into something we have to manage on our own, which is probably not ideal for the long term. Now that we have Phase 1 completed, we can present these options to the customer, get their feedback, and decide which design we're going forward with.
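
Returning to the little-or-no-downtime point above, the following sketch shows the self-healing single-instance setup: a launch configuration built from our AMI and an Auto Scaling group with minimum, maximum, and desired capacity all set to one, spanning every Availability Zone in the region. The AMI ID, instance profile, and resource names are placeholders, not values from the scenario.

```python
# Sketch of the self-healing single-instance setup for the mail-polling EC2 instance.
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

AMI_ID = "ami-0123456789abcdef0"        # assumption: the image created from our EC2 instance
INSTANCE_PROFILE = "mail-poller-role"   # assumption: role allowed to invoke only the first Lambda

# Cover every available Availability Zone in the selected region.
zones = [z["ZoneName"] for z in ec2.describe_availability_zones()["AvailabilityZones"]
         if z["State"] == "available"]

autoscaling.create_launch_configuration(
    LaunchConfigurationName="mail-poller-lc",
    ImageId=AMI_ID,
    InstanceType="t2.micro",            # free-tier-eligible micro instance
    IamInstanceProfile=INSTANCE_PROFILE,
)

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="mail-poller-asg",
    LaunchConfigurationName="mail-poller-lc",
    MinSize=1,
    MaxSize=1,
    DesiredCapacity=1,
    AvailabilityZones=zones,            # a failed instance is replaced, possibly in another AZ
)
```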

About the Author

Students: 58016
Courses: 94
Learning paths: 36

Andrew is an AWS certified professional who is passionate about helping others learn how to use and gain benefit from AWS technologies. Andrew has worked for AWS and for AWS technology partners Ooyala and Adobe. His favorite Amazon leadership principle is "Customer Obsession", as everything AWS starts with the customer. Outside of work, his passions are cycling and surfing, and having a laugh about the lessons learnt trying to launch two daughters and a few start-ups.