DEMO: The Lifecycle of a Bucket in OSS
Alibaba Object Storage Service
This course is an introduction to the fundamental aspects of Alibaba’s Object Storage Service (OSS). It starts off by explaining the features and advantages of the service, before moving on to the concepts of OSS and security. You will then watch two demos that use real-life examples from the Alibaba Cloud platform to guide you through storage buckets and object operations.
If you have any feedback about this course, please contact us at email@example.com.
- Become familiar with buckets, regions, objects, and object lifecycle management in OSS
- Understand the advantages and billing models of OSS products
- Learn about the management, use, and operation of OSS buckets and objects
- Those who are starting out on their journey into Alibaba Cloud and who want to learn more about OSS
- Security engineers who secure and safeguard data within Alibaba Cloud
- Beginners who want to get certified in Alibaba Cloud
To get the most from this course, you should already have some basic knowledge of cloud computing. If you would like to brush up on your cloud knowledge before taking this course, please consider taking our What is Cloud Computing? course.
Hello, and welcome to this demonstration on the Object Storage Service. In this demonstration, we will look at the lifecycle of a bucket in OSS. The lifecycle consists of creating a bucket, modifying its settings, and when no longer required, deleting the bucket. So let's get started.
First, I'm gonna open the OSS panel from the menu list. If the Object Storage Service is not linked to the menu list, you can add it by opening the Products list at the top, scrolling down, finding the Object Storage Service under Storage & CDN, and clicking on the star, which will link it into the menu system. So I'm gonna go from there and open up the Storage Service.
We're now on the Overview page of the Object Storage Service, and you can see at the top of the pane, we have the Basic Statistics, where we can see Storage Used, Traffic This Month, and Requests This Month. It is worth pointing out the note at the top: this page is not updated in real time, so there is a lag. In the middle and at the bottom of the page, we basically have overview links into the help systems for things like Basic Settings, Transfer Acceleration, and so on. And on the right-hand side, you'll see we have the option to create a bucket and view a list of buckets.
Underneath the Bucket Management section, we then have Frequently Accessed buttons, where we can get to things like AccessKeys and the RAM Console, and the Learning Path goes out to the document site. Also at the top, under Getting Started with OSS, when I hover over it, we have links for Upload objects, Create a bucket, Quick start, and Billing items and methods.
Okay, these all link out to the document site. Underneath the Overview on the left-hand side, we then have the Buckets list. If I click on it, you'll see the existing buckets that you have at the moment, and again the option to create a bucket. So let's go ahead and create our first bucket. You can see, in the Create Bucket pane that's come up, the first thing we need to do is set the name of the bucket. The name has to be globally unique. It can only contain lowercase letters, digits, and hyphens, and must be between three and 63 characters in length.
Once the bucket is created, the name cannot be changed. If I type in the word test, for example, you'll see that it comes up and says that this name already exists. So if I stick a couple of digits on there, we get a green tick, and we're good to go. Next, we need to select the region. Like the name, the region can't be changed after the bucket is created. You can see all the different regions in the drop-down list. I'll leave it on the default setting. It's worth noting that if you require an ECS instance to access the contents of a bucket, then the bucket has to be in the same region as the ECS instance.
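Aside from global uniqueness, which only the service itself can check, the naming rules stated above can be validated locally. Here's a minimal sketch in Python; `is_valid_bucket_name` is a hypothetical helper, not part of any Alibaba SDK, and it models only the rules mentioned in the demo (the live service applies some additional restrictions, such as on leading and trailing hyphens):

```python
import re

# Bucket names may contain only lowercase letters, digits, and hyphens,
# and must be between 3 and 63 characters long.
BUCKET_NAME_RE = re.compile(r"^[a-z0-9-]{3,63}$")

def is_valid_bucket_name(name: str) -> bool:
    """Return True if the name satisfies the demo's local naming rules."""
    return bool(BUCKET_NAME_RE.match(name))

print(is_valid_bucket_name("test42"))  # True
print(is_valid_bucket_name("Test"))    # False: uppercase not allowed
print(is_valid_bucket_name("ab"))      # False: too short
```

Uniqueness still has to be checked by the console or API at creation time, as we saw with the name "test" being rejected.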
Next, we need to choose the Storage Class. You see the default is set to Standard. The Storage Class defines the billing level for storing objects in the bucket. Standard is suitable for frequently accessed data. The storage cost is higher than the other classes, but there is no charge for retrieving data from the bucket. If we select Infrequent Access, this has lower storage costs than Standard and is more suitable for less frequently accessed data. However, it has a minimum storage period of 30 days. Extra costs are incurred if objects are deleted before 30 days, and retrieving data of this storage class does incur costs.
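The 30-day minimum storage period works like a billing floor: an object deleted early is still charged as if it had been stored for the full minimum. A rough model in Python; this is illustrative only, since real OSS billing also meters retrieval traffic and per-request fees:

```python
def billable_days(storage_days: int, minimum_days: int) -> int:
    """Objects removed before the minimum storage period are still
    billed for the full minimum (30 days for Infrequent Access)."""
    return max(storage_days, minimum_days)

# An Infrequent Access object kept only 10 days is billed for 30:
print(billable_days(10, 30))  # 30
# Past the minimum, you pay for the actual storage duration:
print(billable_days(45, 30))  # 45
```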
We also have Archive. Archive is suitable for long-term storage, at least six months. Archive data has to be restored before it can be read, and it can take up to a minute to restore a file. And like Infrequent Access, retrieving data with this storage class also incurs costs. There's a new storage class in preview called Cold Archive. As you can see, it's not listed here. At the time of this recording, it's only available in the Australia region.
Cold Archive is suitable for storing extremely cold data with ultra-long lifecycles, at least a year. Now, depending on the region that's been selected, Zone-redundant Storage, or ZRS, can optionally be switched on. Currently, this is only available in four regions: three in China and the fourth in Singapore. By default, Locally Redundant Storage, or LRS, is enabled, and this stores three copies of your data on different devices in one data center. To support ZRS, a region requires a minimum of three data centers, and when enabled, OSS backs up your data to three different data centers within the same region. I'm gonna build this bucket in the UK, and London only has two data centers, so you'll see that when I change the selected region, the option for ZRS disappears.
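The rule just described, that ZRS needs at least three data centers in a region, amounts to a simple availability check. In this sketch the region identifiers and the Singapore data-center count are assumptions for illustration; only London's two data centers come from the demo:

```python
# Hypothetical region metadata: region -> number of data centers.
# London (2) is from the demo; the other count is assumed.
DATA_CENTERS = {
    "uk-london": 2,
    "ap-southeast-1": 3,  # Singapore (assumed to support ZRS)
}

def redundancy_options(region: str) -> list:
    """LRS is always available; ZRS requires three or more data centers."""
    options = ["LRS"]
    if DATA_CENTERS.get(region, 0) >= 3:
        options.append("ZRS")
    return options

print(redundancy_options("uk-london"))       # ['LRS']
print(redundancy_options("ap-southeast-1"))  # ['LRS', 'ZRS']
```

This mirrors what the console does: the ZRS toggle simply disappears when the selected region can't support it.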
So next, optionally, we can choose to turn on Versioning. When Versioning is enabled, an object that is overwritten or deleted is saved as a previous version of the object. Versioning allows us to restore objects in the bucket to any previous point in time, and it protects the data from being accidentally overwritten or deleted.
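The behavior described above can be modeled with a toy in-memory store where an overwrite appends a new version instead of destroying the old one. This is a hypothetical sketch of the concept, not the OSS API:

```python
class VersionedBucket:
    """Toy model of OSS versioning: overwrites keep previous versions,
    so an object can be restored to any earlier state."""

    def __init__(self):
        self._versions = {}  # key -> list of historical values

    def put(self, key, value):
        # An overwrite does not destroy data; it adds a new version.
        self._versions.setdefault(key, []).append(value)

    def get(self, key, version=-1):
        """Latest version by default; older versions by index."""
        return self._versions[key][version]

b = VersionedBucket()
b.put("report.txt", "v1 contents")
b.put("report.txt", "v2 contents")  # overwrite preserves v1
print(b.get("report.txt"))          # v2 contents
print(b.get("report.txt", 0))       # v1 contents, still restorable
```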
Next, we have to set the Access Control List. The default setting is Private. With Private, only the bucket owner can perform read and write operations on objects in the bucket; other users cannot access them. With Public Read, only the bucket owner can perform write operations on objects in the bucket, while any other users, including anonymous users, can perform only read operations. And by selecting Public Read/Write, all users, including anonymous users, can perform read and write operations on objects in the bucket. An important point to be aware of, though: by setting Public Read, all internet users can access objects in the bucket, and with Public Read/Write, they can also write data to it. This may cause unexpected access to the data in the bucket and also an increase in your costs.
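The three ACLs map cleanly onto read/write permission sets. A small Python model; the function name is hypothetical, and the ACL string identifiers here are just labels for the three console options:

```python
def acl_permissions(acl: str, is_owner: bool) -> set:
    """Map the three OSS bucket ACLs to permissions for a given user."""
    if is_owner:
        return {"read", "write"}  # the owner always has full access
    return {
        "private": set(),                       # others: no access
        "public-read": {"read"},                # others: read only
        "public-read-write": {"read", "write"}, # others: full access
    }[acl]

print(acl_permissions("private", is_owner=False))      # set()
print(acl_permissions("public-read", is_owner=False))  # {'read'}
```

The risk flagged in the demo is visible here: anything other than `private` grants permissions to every anonymous internet user.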
Optionally, we can turn on Server-side Encryption. By default, it's disabled, and the options are AES-256 or KMS, the Key Management Service. Now, KMS has to be activated before it can be used. The next option we have is Real-time Log Query. When enabled, you can query and analyze records of access to objects in the bucket in real time by using the OSS console. The first seven days are free; after that, you will incur additional costs. And the last thing that can potentially be set is a Scheduled Backup.
Now, you'll see that at the moment I don't have access to Scheduled Backup. If I change the region to US and then scroll back down, you'll see at the bottom that I now have the option to enable a Scheduled Backup. After Scheduled Backup is enabled, OSS creates a backup plan that backs up data once a day and retains the backup files for one week. I'll go and put that back to London. So that's all the details filled in. All we have to do now is click OK to create the bucket.
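The backup plan described, one backup a day retained for one week, amounts to pruning anything older than the retention window. A hypothetical sketch of that retention logic:

```python
from datetime import date, timedelta

def prune_backups(backup_dates, today, retention_days=7):
    """Keep only backups taken within the retention window
    (the demo's plan: one backup per day, retained for a week)."""
    cutoff = today - timedelta(days=retention_days)
    return [d for d in backup_dates if d > cutoff]

today = date(2021, 6, 10)
# Ten consecutive daily backups, newest first:
backups = [today - timedelta(days=n) for n in range(10)]
print(len(prune_backups(backups, today)))  # 7
```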
Now the bucket's been created, and we are currently on the Overview section. From here, we can see the Basic Statistics. We can view the endpoints and domain names that allow us to access the bucket either through the internal network or over the internet, and at the bottom of the page, we can access most of the Basic Settings, where we can then reconfigure some of the settings.
As this is a fundamental course, I will give a high-level overview of the bucket settings. From the menu list on the left-hand side, scrolling up to the top, we can access Files. This primarily allows us to upload files directly from the portal; I'll be covering this in the next demonstration. From Access Control, you'll see we can configure the Access Control List and change the bucket's setting to either Private, Public Read, or Public Read/Write. With Bucket Policy, you can authorize access to the whole bucket, or to individual files within it, for Resource Access Management users, other accounts, or anonymous users.
With Basic Settings, you can configure Server-side Encryption; you can set up Static Pages and use the bucket to host a static website; and you can configure Lifecycle policies for objects in the bucket, either deleting objects or changing their storage class after a number of days or on a set expiration date. We can also set Bucket Tagging. Tags are key-value pairs that can be added for things like auditing, and up to 20 tags can be added to a bucket.
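A lifecycle rule of the kind just described boils down to comparing an object's age against the rule's thresholds. A simplified model in Python; the function and parameter names are assumptions for illustration, not the OSS rule schema, and the fixed-expiration-date variant is omitted for brevity:

```python
from datetime import date

def lifecycle_action(created, today, transition_after_days=None,
                     expire_after_days=None):
    """Evaluate a simple lifecycle rule for one object: delete it or
    change its storage class once it is old enough."""
    age = (today - created).days
    if expire_after_days is not None and age >= expire_after_days:
        return "delete"
    if transition_after_days is not None and age >= transition_after_days:
        return "transition"  # e.g. Standard -> Infrequent Access
    return "keep"

created = date(2021, 1, 1)
# 19 days old: nothing happens yet.
print(lifecycle_action(created, date(2021, 1, 20), 30, 90))  # keep
# 45 days old: past the transition threshold, not yet expired.
print(lifecycle_action(created, date(2021, 2, 15), 30, 90))  # transition
# 104 days old: past the expiration threshold.
print(lifecycle_action(created, date(2021, 4, 15), 30, 90))  # delete
```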
We can configure Back-to-Origin. This allows you to configure rules to redirect requests for objects if they do not exist in the bucket when called. Two modes can be used, mirroring or redirection. Event Notification can be used to configure rules that trigger notifications when specific operations are performed on specific objects. And Pay by Requester can be selected so that charges for access to objects in a bucket are paid by the requester.
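The two Back-to-Origin modes can be sketched as a request handler: on a miss, mirroring fetches the object from the origin and stores a copy, while redirection just points the requester elsewhere. The helper name and the origin URL are hypothetical:

```python
def handle_request(bucket: dict, origin: dict, key: str, mode: str):
    """Back-to-Origin sketch: serve a hit, or handle a missing object
    by mirroring from the origin or redirecting the requester."""
    if key in bucket:
        return ("hit", bucket[key])
    if mode == "mirror":
        bucket[key] = origin[key]  # copy stored; next request is a hit
        return ("mirror", bucket[key])
    return ("redirect", "https://origin.example.com/" + key)

bucket, origin = {}, {"logo.png": b"image bytes"}
print(handle_request(bucket, origin, "logo.png", "mirror")[0])  # mirror
print(handle_request(bucket, origin, "logo.png", "mirror")[0])  # hit
```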
By default, the data owner is charged for storage requests. A Retention Policy can be configured to prevent an object from being deleted or overwritten for a specific period of time. And the last item on the list is the Delete Bucket option. A bucket cannot be deleted if it has objects in it, and after it's deleted, it cannot be recovered. Under Redundancy for Fault Tolerance, you can enable Cross-Region Replication, which allows you to synchronize objects in your bucket with a bucket in another region, and you can also configure Versioning. If you're gonna use Cross-Region Replication, however, both buckets must have Versioning in the same state, either both enabled or both disabled.
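Both rules just mentioned, the retention period and the Cross-Region Replication versioning constraint, are simple predicates. A hypothetical sketch; the function names are mine, not the OSS API:

```python
from datetime import date

def can_modify(object_created, today, retention_days: int) -> bool:
    """Under a retention policy, an object cannot be deleted or
    overwritten until its retention period has elapsed."""
    return (today - object_created).days >= retention_days

def crr_allowed(src_versioning: bool, dst_versioning: bool) -> bool:
    """Cross-Region Replication requires both buckets to have the same
    Versioning state: both enabled or both disabled."""
    return src_versioning == dst_versioning

print(can_modify(date(2021, 1, 1), date(2021, 1, 15), 30))  # False
print(can_modify(date(2021, 1, 1), date(2021, 3, 1), 30))   # True
print(crr_allowed(True, False))                             # False
```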
The Transmission settings allow you to bind a custom Domain Name to the bucket. You can also turn on Transfer Acceleration, which provides faster access to the bucket contents but does incur extra costs. Logging allows you to configure the generation of log files, which are placed in a bucket as objects on an hourly basis. You can also configure Real-time Log Query from here.
Data Processing allows you to create rules for images, for things such as adding a watermark, creating a thumbnail, or adding an effect like blurring, sharpening, or rotating. And lastly, Data Statistics shows graphs based on bucket usage, traffic, and requests. Then, when a bucket is no longer required, we can delete it from Basic Settings. As I've created this bucket and haven't put any objects in it, I can go straight down and click on Delete, where I then get the option to delete the bucket, and as I said before, once you've deleted the bucket and clicked OK, it cannot be recovered.
Okay, so if I go back and now have a look at the Buckets list, we'll see that there's no buckets left. Okay, that concludes this demonstration on the lifecycle of a bucket in OSS. In the next demonstration, we'll look at the lifecycle of objects in a bucket.
David has been a trainer with QA for over 12 years and has been training cloud technologies since 2017. Currently certified in Microsoft and Alibaba cloud technologies, David has previously been a system and network administrator, amongst other roles.
Currently, he is a Principal Technology Learning Specialist (Cloud) at QA. He loves nothing more than teaching cloud-based courses and also has a passion for teaching PowerShell scripting.
Outside of work, his main love is flying radio-controlled airplanes and teaching people to fly them.