
          Object Storage

          Manage Bucket

          Create Bucket

          1. Find and click the "Create Bucket" button in the navigation bar on the left side of the console, then follow the prompts in the pop-up box to create a bucket (a programmatic sketch using the SDK follows the notes below).

          Note:

          • A bucket has a region attribute and can be located in only one region. Because the name and region of a bucket cannot be changed after creation, it is recommended to choose a region close to your business so that uploads, downloads, and general access are faster.
          • Each user is allowed to create at most 100 buckets.
          • CDN acceleration is enabled by default when a bucket is created. To disable it, set "CDN Official Acceleration" to OFF when creating the bucket.
          • If you subscribe to the multi-region service, select the region for the bucket as needed. For more information, please refer to Region Selection Instructions.
          • When creating a bucket, you can select its default storage type. If an object uploaded through the API, CLI, or SDK does not specify a storage type, it uses the bucket's default storage type. Objects uploaded through the console use the standard storage type by default. When the storage type of an object differs from that of the bucket, the object's storage type prevails. Storage types include standard storage, low-frequency (infrequent access) storage, cold storage, and archive storage. For specific usage scenarios and performance, please see Hierarchical Storage.
          • The name of each bucket is globally unique. You can use a prefix, such as the name of your organization, to keep names unique. Once created, the bucket name cannot be changed.
          • You can set the corresponding privileges when creating a bucket: private, public-read, and public-read-write. To set advanced privileges, create the bucket first and then configure them on the bucket's details page. Please refer to Update Bucket Privilege for details.
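
          For automated workflows, a bucket can also be created with the BOS Python SDK (bce-python-sdk). The following is a minimal sketch; the endpoint, access keys, and bucket name are placeholders that you need to replace with your own values.

```python
# Minimal sketch: create a bucket with the BOS Python SDK (bce-python-sdk).
# Replace the endpoint, access key ID/secret, and bucket name with your own values.
from baidubce.auth.bce_credentials import BceCredentials
from baidubce.bce_client_configuration import BceClientConfiguration
from baidubce.services.bos.bos_client import BosClient

config = BceClientConfiguration(
    credentials=BceCredentials("your-access-key-id", "your-secret-access-key"),
    endpoint="https://bj.bcebos.com",  # region endpoint; the region cannot be changed later
)
client = BosClient(config)

bucket_name = "your-org-prefix-example-bucket"  # bucket names are globally unique
if not client.does_bucket_exist(bucket_name):
    client.create_bucket(bucket_name)
```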

          Bucket Overview

          Click a bucket to enter its management page, and select "Bucket Overview" in the navigation bar on the right to view the bucket's information, usage, and monitoring data.


          The bucket overview page mainly includes three kinds of data:

          • Bucket usage data: the bucket's storage usage, external network outbound traffic, CDN origin traffic, and API request count for the current month. This usage information is mainly for billing reference.
          • Bucket information: the bucket's region, official domain name, privilege, creation time, and whether CDN acceleration is enabled, helping users understand the bucket's attributes.
          • Monitoring data: capacity, external network traffic, and API request data for the latest 30 days, used for observing capacity and traffic trends and helping users locate anomalies.

          The monitoring metrics provided for a bucket are defined as follows:

          • Read amount: traffic flowing from the bucket to the external network, in bytes. One data point is collected per minute and represents the total outbound traffic for that minute.
          • Write amount: traffic written to the bucket, in bytes. One data point is collected per minute and represents the total write traffic for that minute.
          • Read frequency: the number of read requests to the bucket. One data point is collected per minute and represents the total number of read requests for that minute. All HTTP GET/HEAD/OPTIONS requests to the bucket are counted as read requests.
          • Write frequency: the number of write requests to the bucket. One data point is collected per minute and represents the total number of write requests for that minute. All HTTP requests other than read requests (e.g. PUT/POST/DELETE) to the bucket are counted as write requests.
          • CDN Origin traffic: traffic read from BOS as the CDN origin, in bytes. One data point is collected per minute and represents the total CDN origin traffic for that minute.

          Update Bucket Privilege

          Privileges Description

          To guarantee the security of your data stored in BOS, BOS provides rich multi-level privilege management capabilities. The BOS privilege system is divided into the following three levels:

          • Bucket standard privilege
          • Coarse-grained custom privilege
          • Fine-grained custom privilege

          Bucket Standard Privilege Definition

          • Private: the bucket owner gets FULL_CONTROL; other users have no privileges.
          • Public-read: the bucket owner gets FULL_CONTROL; other users get the READ privilege.
          • Public-read-write: the bucket owner gets FULL_CONTROL; other users get the READ and WRITE privileges.
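
          The standard privileges can also be applied programmatically. The following is a minimal sketch with the BOS Python SDK; it assumes a configured BosClient named client, as in the bucket-creation sketch above.

```python
# Minimal sketch: apply a standard (canned) bucket privilege with the BOS Python SDK.
# Assumes `client` is a configured BosClient, as in the bucket-creation sketch above.
from baidubce.services.bos import canned_acl

bucket_name = "your-org-prefix-example-bucket"

# Choose one of: canned_acl.PRIVATE, canned_acl.PUBLIC_READ, canned_acl.PUBLIC_READ_WRITE
client.set_canned_acl(bucket_name, canned_acl.PUBLIC_READ)

# Read back the current ACL to verify the change.
print(client.get_bucket_acl(bucket_name))
```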

          Definition of Customized Coarse-Grained Privilege

          If the standard bucket privileges above are not sufficient, you can define customized coarse-grained privileges as needed. You can grant READ, LIST, WRITE, MODIFY, and FULL_CONTROL privileges to specified users, and restrict a grant to specific resources, IP addresses, and a Referer whitelist.

          The customized coarse-grained privileges supported by BOS are described as follows:

          Privilege name Operation within privilege
          READ Allows reading objects and related information in the bucket. Please refer to Operations Included in READ Privilege for the detailed operation privileges.
          LIST Allows listing the objects of the specified bucket and obtaining all uncompleted multipart uploads. Please refer to Operations Included in LIST Privilege for the detailed operation privileges.
          WRITE Allows creating, overwriting, and deleting objects in the bucket. Please refer to Operations Included in WRITE Privilege for the detailed operation privileges.
          MODIFY Allows data write operations only, not delete operations. Please refer to Operations Included in MODIFY Privilege for the detailed operation privileges.
          FULL_CONTROL Includes all the above privileges plus other operation privileges. Please refer to Operations Included in FULL_CONTROL Privilege for the detailed operation privileges.
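
          A coarse-grained grant can also be applied through the SDK. The following minimal sketch assumes a configured BosClient named client; the user IDs are placeholders, and the grant structure (grantee plus permission list) follows the set_bucket_acl usage of the BOS Python SDK.

```python
# Minimal sketch: grant coarse-grained privileges to specific users via set_bucket_acl.
# Assumes `client` is a configured BosClient; the user IDs below are placeholders.
bucket_name = "your-org-prefix-example-bucket"

acl = [
    {
        # Users (by ID) receiving this grant.
        "grantee": [{"id": "user-id-1"}, {"id": "user-id-2"}],
        # Coarse-grained privileges: READ, LIST, WRITE, MODIFY or FULL_CONTROL.
        "permission": ["READ", "LIST"],
    }
]
client.set_bucket_acl(bucket_name, acl)
```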

          Definition of Customized Fine-Grained Privilege

          If the customized coarse-grained privileges cannot meet your fine-grained authorization needs, you can use the customized fine-grained privileges provided by BOS.

          The customized fine-grained privileges supported by BOS are described as follows:

          Fine-grained privilege name Operation within privilege
          GetBucket Allows acquiring bucket content and related information, e.g. listing the objects of the bucket and listing all uncompleted multipart uploads of the three-step upload. Please refer to Operations Included in LIST Privilege for the detailed operation privileges.
          GetObject Allows acquiring object content and performing the related operations on object meta information.
          PutObject Allows operations related to object uploading, such as PutObject, PostObject, AppendObject, FetchObject, CopyObject, three-step upload, and three-step copy.
          DeleteObject Allows deleting objects, one by one or in batches.
          RenameObject Allows renaming objects.
          ListParts Allows listing all successfully uploaded parts of the UploadId specified during a three-step upload, to check its status.
          PutBucketAcl Allows setting the bucket ACL.
          GetBucketAcl Allows acquiring the bucket ACL.
          PutObjectAcl Allows adding a new object ACL.
          GetObjectAcl Allows acquiring an object ACL.
          PutBucketCors Allows setting or deleting cross-origin resource sharing (CORS) rules on the specified bucket.
          GetBucketCors Allows acquiring the cross-origin resource sharing (CORS) rules of the specified bucket.
          GetBucketStyle Allows accessing or listing the bucket style rules.
          PutBucketStyle Allows creating or deleting bucket style rules.
          GetBucketMirroring Allows accessing the bucket's mirror back-to-origin configuration.
          PutBucketMirroring Allows creating or deleting the bucket's mirror back-to-origin configuration.
          GetCopyRightProtection Allows accessing the bucket's original (copyright) protection configuration.
          PutCopyRightProtection Allows enabling or disabling the bucket's original protection function.

          Operation Steps

          1.Click "Configuration management" in the right of each bucket, enter "Configuration management" page and set the read and write privilege of the specified bucket.

          2. The bucket can be set to private, public-read, or public-read-write. For more advanced needs, select customized privileges and click "Add Customized Authorization".

          3. Fill in the corresponding authorization options in the pop-up bucket privilege setting list (a sketch of the equivalent ACL grant follows the notes below).


          Note:

          • User ID authorization: the user IDs to be authorized. Multiple IDs can be authorized, one per line. * represents all users, and at most one * is supported among the authorized users. You can also fill in verifiedUsers, which authorizes all Baidu AI Cloud users.
          • Customized coarse-grained privilege: includes the READ, LIST, WRITE, MODIFY, and FULL_CONTROL privileges. Please refer to Privileges Description.
          • Customized fine-grained privilege: click "Advanced Setup" to unfold the fine-grained privilege list and select one or more options as needed to form a new customized privilege.
          • Resource: specifies the scope of resources the privilege applies to. "Include" corresponds to resource and means the privilege is set for resources within the specified scope; leaving the field empty is equivalent to specifying the bucket name. A resource must start with the bucket name, and if it contains a slash it cannot end with the slash. Multiple resources can be listed, one per line; each line supports at most one wildcard *, which must appear at the end. Examples: mybucket, mybucket/*, mybucket/myfolder/object*. "Exclude" corresponds to notResource and means the privilege is set for objects outside the specified scope. If "Exclude" is selected but notResource is left empty, notResource is treated as unset and the default applies: the resource covers the bucket and all of its objects.
          • Referer: sets the Referer whitelist. Referers are separated by line breaks, and each Referer supports at most one wildcard *. Only the http and https protocols are supported. You can also choose whether an empty Referer is allowed.
          • IP address: specifies the list of IP addresses covered by the privilege, using CIDR notation. Multiple IP addresses can be listed, one per line; each line supports at most one wildcard *, which must appear at the end. Examples: 192.168.0.1/24, 192.168.0.100, 192.168.*, 192.168.1.*.
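
          For reference, the form fields above roughly correspond to an ACL grant that can also be applied with set_bucket_acl in the BOS Python SDK. The sketch below is illustrative only: the grantee ID, resources, and whitelist values are placeholders, and the condition field names (ipAddress, referer/stringLike) reflect my reading of the BOS ACL format rather than a verified reference.

```python
# Illustrative sketch of a customized authorization expressed as an ACL grant.
# Assumes `client` is a configured BosClient. Field names under "condition"
# reflect the BOS ACL format as understood here and should be verified against
# the official API reference before use.
bucket_name = "mybucket"

acl = [
    {
        "grantee": [{"id": "user-id-1"}],           # authorized user IDs
        "permission": ["READ", "LIST"],              # coarse- or fine-grained privilege names
        "resource": ["mybucket/myfolder/*"],         # must start with the bucket name
        "condition": {
            "ipAddress": ["192.168.0.0/24"],         # CIDR-style IP whitelist
            "referer": {"stringLike": ["http://*.example.com/*"]},  # Referer whitelist
        },
    }
]
client.set_bucket_acl(bucket_name, acl)
```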

          After authorization, you can see the generated privilege record in the customization section of "Bucket Privilege Configuration", and adjust the customized privilege with the "Modify" and "Delete" buttons.


          Delete Bucket

          1.Click "Delete Bucket" to delete the specified bucket.


          Note: A bucket can be deleted only when it is empty, i.e., it contains no objects and no unfinished three-step upload parts; otherwise a corresponding prompt appears. To delete a non-empty bucket directly, you can use the force delete command of the CLI tools: bce bos rb bos:/<bucket-name> --force.
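
          If you prefer the SDK over the CLI, a bucket can be emptied and then deleted along the lines of the following minimal sketch. It assumes a configured BosClient named client and the SDK's list_all_objects helper; aborting unfinished multipart uploads is also required but not shown here.

```python
# Minimal sketch: empty a bucket and then delete it with the BOS Python SDK.
# Assumes `client` is a configured BosClient and that the SDK's list_all_objects
# helper is available. Unfinished multipart uploads must also be aborted before
# deletion; that cleanup is omitted here.
bucket_name = "your-org-prefix-example-bucket"

for obj in client.list_all_objects(bucket_name):
    client.delete_object(bucket_name, obj.key)

client.delete_bucket(bucket_name)
```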

          Set Referer Whitelist

          Application Scenarios

          To prevent hotlinking of data stored on BOS, BOS supports hotlink protection based on the HTTP Referer header. You can set a whitelist for the Referer field in the BOS console. After the whitelist is set, only requests whose Referer is in the whitelist can access the data stored in the bucket; requests outside the whitelist are refused. However, requests with an empty Referer are allowed by default and are not restricted by the whitelist.

          Rules

          • The whitelist can only be set through advanced bucket privileges.
          • The Referer whitelist applies to all API requests that access BOS.
          • The Referer whitelist is not case-sensitive, uses line breaks as separators, and supports the wildcard (*). Each Referer supports only one wildcard.
          • The Referer whitelist follows the exact match principle; e.g. http://www.baidu.com/abc/ and http://www.baidu.com/abc are treated as different entries.
          • The Referer whitelist supports checking both the http and https protocols, and automatically appends "/" to a host that does not end with "/".
          • When "allow Referer to be empty" is selected, HTTP requests whose Referer is in the whitelist and requests with an empty Referer are both allowed. When "do not allow Referer to be empty" is selected, only HTTP requests whose Referer is in the whitelist are allowed; requests with an empty Referer are refused.

          Please refer to Bucket Privilege Control for more information about bucket privilege control.
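
          To make the matching rules above concrete, the following standalone sketch mimics how such a whitelist check could behave (case-insensitive match, at most one * wildcard, optional empty Referer). It is an illustration of the rules only, not BOS's actual implementation.

```python
# Illustrative only: a toy Referer whitelist check mirroring the rules above.
# This is not BOS code; it just demonstrates matching with one "*" wildcard
# and the "allow empty Referer" switch.
import fnmatch


def referer_allowed(referer, whitelist, allow_empty=True):
    """Return True if the request's Referer passes the whitelist."""
    if not referer:
        return allow_empty  # empty Referer handled by the console switch
    return any(
        fnmatch.fnmatchcase(referer.lower(), pattern.lower())  # case-insensitive
        for pattern in whitelist
    )


whitelist = ["http://www.example.com/*", "https://img.example.com/"]
print(referer_allowed("http://www.example.com/page.html", whitelist))  # True
print(referer_allowed("http://evil.example.org/", whitelist))          # False
print(referer_allowed("", whitelist, allow_empty=False))               # False
```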

          Operation Steps

          1.Click the "Basic Configuration" of each bucket and select "Bucket Privilege Setting".

          2.Choose "Customize" and "Add Privilege" to set the Referer whitelist.


          3.Click "Confirm" and finish setting.

          4. In the whitelist, click "Modify" or "Delete" to modify or delete the whitelist.

          Set Server Encryption

          To better guarantee data security, BOS supports server-side encryption for data on the server. During uploading, users can enable server-side encryption for the uploaded data by carrying the corresponding encryption parameters. To improve usability, BOS also provides a server-side encryption switch in the bucket settings. When it is enabled, newly uploaded data in the bucket is encrypted by default, without affecting existing data.

          BOS currently supports two encryption methods: BOS escrow key and KMS service escrow key.

          When the BOS escrow key is selected, all data uploaded to the bucket is uniformly encrypted with the BOS escrow key. When users retrieve the data, the BOS service decrypts it automatically without any additional user operations.

          Click "Configuration management" and enter "Server Encryption Configuration" page for starting.


          When users select the KMS escrow key method, they can encrypt the uploaded data with keys they manage themselves.

          Operation Steps

          1. Activate the Baidu AI Cloud Key Management Service (KMS) and create a self-managed KMS key.


          2. Log in to the BOS console, select "KMS Service Escrow Key" in the bucket's server encryption configuration options, and choose the key already created in the Baidu AI Cloud Key Management Service from the drop-down list. Afterwards, data uploaded to the bucket is encrypted and protected with the customer key chosen by the user.

          When users read the data, the BOS service requests decryption from the KMS service and returns the plaintext data to the user.

          Set Cross-origin Access

          BOS provides the cross-origin resource sharing (CORS) setting defined in HTML5 to help users realize cross-origin access.

          Set CORS Rules

          1.Click "Basic Configuration" and enter "cross-origin Access CORS Configuration" page.

          2.Click "Modify Configuration" and set CORS rules in the sheet popped up.


          3.Click "OK" to save the rule.

          Parameter Description

          Name Description
          Origins Specifies the allowed origins of cross-origin requests. Multiple Origins can be listed, one per line, each with at most one "*" symbol.
          Methods Specifies the allowed cross-origin request methods.
          Headers Specifies the allowed cross-origin request headers. Multiple Headers can be listed, one per line, each with at most one "*" symbol.
          ExposeHeaders Specifies the response headers that applications (such as a JavaScript XMLHttpRequest object) are allowed to access. Multiple ExposeHeaders can be listed, one per line; the "*" symbol is not allowed.
          maxAgeSeconds Specifies how long the browser may cache the result of a preflight (OPTIONS) request for the specific resource.

          Please refer to Cross-origin Resource Sharing for more introduction about CORS.
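
          The same rule can be configured programmatically. The sketch below assumes the BOS Python SDK exposes a put_bucket_cors method and uses field names that mirror the parameters above; treat the exact method name and field names as assumptions to verify against the SDK and API reference.

```python
# Sketch (assumed API): configure a CORS rule with the BOS Python SDK.
# `client` is a configured BosClient; field names mirror the parameter table above
# and should be checked against the official SDK/API reference.
bucket_name = "mybucket"

cors_configuration = [
    {
        "allowedOrigins": ["https://www.example.com"],  # Origins
        "allowedMethods": ["GET", "HEAD"],               # Methods
        "allowedHeaders": ["*"],                         # Headers
        "allowedExposeHeaders": ["ETag"],                # ExposeHeaders (no "*")
        "maxAgeSeconds": 3600,                           # preflight cache time
    }
]
client.put_bucket_cors(bucket_name, cors_configuration)
```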

          Modify CORS Rules

          Click "Edit" in the right of the list and modify CORS rules after adding CORS rules,

          Delete CORS Rules

          1. After adding CORS rules, click "Delete" on the right of the list to delete a single CORS rule.
          2. Click "Delete all rules" to delete all configured CORS rules.

          Set Data Replication

          BOS provides a data replication function. You can establish an automatic synchronization relationship between two different buckets, and BOS will asynchronously replicate data from the source bucket to the target bucket.

          1. Click the bucket name to enter the bucket management page, click "Basic Configuration", and find "Data Synchronization Configuration".
          2. Click the "Start Configuration" button to begin the data synchronization configuration. Select the target region, target bucket, data synchronization scope, object storage type, data synchronization strategy, and whether historical data is synchronized, and then save.


          Note:

          • You can specify the data to be synchronized from the source bucket by defining a file name prefix, or synchronize all data in the source bucket.
          • The storage type in the target bucket can be the same as that of the source object, or another storage type can be selected as needed.
          • Copying low-frequency (infrequent access) objects triggers a retrieval fee.
          • After a rule is added successfully, you can view, edit, and delete the synchronization strategy in the current bucket.
          • After historical file replication starts, all stored objects are copied to the target bucket. The scope of historical file replication does not follow the data synchronization scope setting.
          • The two buckets in a replication pair can be in different regions or in the same region. Only data synchronization between buckets in different regions incurs traffic charges; synchronization between buckets in the same region does not.
          • Data replication supports bidirectional synchronization between buckets. Suppose there are three buckets named A, B, and C. It is supported to take A as the source bucket of B and B as the source bucket of C, or to take A as the source bucket of B and B as the source bucket of A. However, it is not supported to use A as the source bucket of both B and C, or to use C as the target bucket of both A and B.

          Configuring the Forward-to-origin Mirroring

          Introduction

          Forward-to-origin mirroring is mainly used to migrate hot data to BOS without stopping service.

          After a bucket is configured with forward-to-origin mirroring, when a user accesses an object in BOS and BOS finds that the object does not exist, BOS requests the object from the forward-to-origin address, stores the data returned by the source station into BOS, and at the same time returns it to the requesting user.


          Application scenarios

          Forward-to-origin mirroring is mainly used for seamless data migration, i.e., migrating data from a source station to BOS without stopping service. A typical scenario: the source station has a batch of cold data and is constantly generating new hot data.

          1. First, migrate the cold data to BOS through the BOS CLI or other migration tools (if there is a large amount of data, you can submit a work order), then configure mirroring back to the source station.
          2. Switch the domain name directly to BOS. Although some of the newly generated hot data has not yet been migrated to BOS, users can still access it normally from BOS (BOS reads the data from the source station and stores it into BOS at the same time), so the data is deposited into BOS after each access.
          3. After the domain name is switched, the source station generates no new data. At this point, you can scan the source station again and import the remaining data into BOS. Once done, you can turn off the forward-to-origin mirroring setting.

          Feature

          Forward-to-origin mirroring is a bucket-level setting. When the function is enabled for a bucket and a user's GetObject request to BOS returns 404, mirroring back to the source is triggered. The headers and query string carried in the GetObject request are not sent to the source station. If the response from the source station contains any of the following headers (Content-Type, Content-Encoding, Content-Disposition, Cache-Control, Expires, Content-Language), BOS keeps them as meta information of the object and also returns them to the user.

          The forward-to-origin address supports the HTTP/HTTPS protocol, so you can use a domain name or an IP address, and ports are supported. If the forward-to-origin address has no protocol, HTTP is used by default.

          Note:

          1. Currently, files fetched by forward-to-origin mirroring are stored as standard storage by default.
          2. Currently, forward-to-origin mirroring is not supported for the GetService request related to the image service.
          3. When forward-to-origin mirroring is performed, BOS does not carry the query string of the original request.

          Opening steps

          1. Click the bucket name to enter the bucket management page, then click "Basic Configuration" to enter the "Forward-to-origin Mirroring Configuration" page. By default, the forward-to-origin mirroring function is disabled.
          2. Switch "Unlock Forward-to-origin Mirroring" to the on state and specify the forward-to-origin address. After setting, click the Save button.


          Set Access Log

          You can enable the BOS log function when you need to track access requests to BOS. Logging can be used for access statistics and security audits. Each access log records detailed information about a single access request, including the requester, bucket name, request time, and request operation. For the description of the access log format, see Set Access Log. When the access log function is enabled for a bucket, a log file is generated for the bucket's access requests every hour according to fixed naming rules and written into a user-specified bucket.

          1. Click the bucket name to enter the bucket management page, then click "Basic Configuration" to enter the "Log Management Configuration" page. The log function is disabled by default.
          2. Switch "Log Unlocking" to the on state, and specify the bucket and log prefix for log storage. After setting, click the Save button.


          Note:

          • The destination bucket for log storage and the source bucket must be in the same region.
          • There is no additional charge for the log function; only the storage cost of the log files is charged. Writing the log files does not incur data transfer charges, but access to the generated log files is billed like any other data transfer.
          • The log prefix can contain letters, numbers, underscores, hyphens, and slashes; it must start with a letter and be 1-64 characters long.
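
          Logging can also be configured via the SDK. The sketch below assumes the BOS Python SDK exposes put_bucket_logging(source_bucket, target_bucket, target_prefix); verify the exact method name and signature against the SDK reference.

```python
# Sketch (assumed API): enable access logging for a bucket with the BOS Python SDK.
# `client` is a configured BosClient; verify the method signature in the SDK docs.
source_bucket = "mybucket"
target_bucket = "my-log-bucket"   # must be in the same region as the source bucket
target_prefix = "mybucket-logs/"  # 1-64 characters, starting with a letter

client.put_bucket_logging(source_bucket, target_bucket, target_prefix)
```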

          Manage Lifecycle

          BOS supports lifecycle management of files based on rules defined at the bucket level. Lifecycle management supports three functions: converting the storage type of "cold" data, deleting data that is no longer needed, and clearing outdated three-step upload data. For a detailed description of the lifecycle, see Lifecycle Management. Lifecycle management is disabled by default.

          1. Click the bucket name to enter the bucket management page, then click "Basic Configuration" to enter the "Life-cycle Configuration" page.
          2. Click to modify the configuration. A rule can take effect on the entire bucket or on a prefix, and different lifecycle management actions can be chosen according to the needs of the scenario.


          After the rule is successfully added, you can see the corresponding policy scope and action in the list, and you can also edit and delete it.

          Note:

          • When selecting the "effective for prefix" strategy, the prefix does not need to include the bucket name; it can directly be "myPrefix/*". Otherwise the policy will not take effect.
          • Lifecycle policies do not take effect for archive storage files that have not been retrieved.
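
          A lifecycle rule can also be set through the SDK. The sketch below assumes the BOS Python SDK exposes put_bucket_lifecycle and uses a rule structure based on my reading of the BOS lifecycle API (resource, a condition on last-modified time, and an action); note that the API-style resource includes the bucket name, unlike the console prefix input described above. Check the exact field and action names against the Lifecycle Management documentation before use.

```python
# Sketch (assumed API and rule fields): set a lifecycle rule with the BOS Python SDK.
# `client` is a configured BosClient; verify field/action names against the
# Lifecycle Management documentation before use.
bucket_name = "mybucket"

rules = [
    {
        "id": "delete-old-logs",
        "status": "enabled",
        "resource": [bucket_name + "/logs/*"],  # API-style resource (bucket name + prefix)
        "condition": {
            # objects whose last modification is older than 180 days
            "time": {"dateGreaterThan": "$(lastModified)+P180D"},
        },
        "action": {"name": "DeleteObject"},
    }
]
client.put_bucket_lifecycle(bucket_name, rules)
```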

          Static Website Hosting

          Basic Concept

          BOS allows users to host a static website on a bucket for lightweight website operation and maintenance. After the setting takes effect, you can access the hosted website by directly accessing the bucket's domain name.

          A static website contains only static resources such as HTML and JPG files and excludes server-side scripts such as PHP, JSP, or ASP.NET. A statically hosted website does not support server-side scripting.

          To deploy and manage a dynamic website, you can use Baidu Cloud Compute (BCC) or Cloud Container Engine (CCE).

          Use Mode

          Step 1: To host a static website, upload the two core resources, the "Index Page" and the "404 Page", to the bucket:

          Index Page Settings

          A standard website usually consists of several index pages corresponding to the homepage of the website and submodules.

          BOS returns the index page when the user does not request a specific page, e.g. when visiting the root directory such as www.example.com in the browser address bar, or a directory ending with "/" such as www.example.com/folderA/.

          The web administrator can upload static resources to the root directory and subdirectories of the bucket so that they are presented when the index page is visited.

          At present, BOS supports index pages in html format.

          404 Page Settings

          When a common 404 error occurs while visiting a static website, the website usually provides a designed error page to give visitors a better experience.

          For a static website hosted on BOS, the web administrator can upload files in html, jpg, png, bmp, and webp formats to the bucket's root directory as the 404 page. When the data expected by a visitor cannot be found, BOS shows this 404 page by default.

          Note:

          • Buckets intended for static website hosting need to be set to the public-read privilege so that anonymous users can access them. Therefore, it is recommended not to upload confidential data to such a bucket.
          • Do not set the meta "Content-Disposition:attachment" on the 404 page.

          Step 2: After the administrator uploads the resources corresponding to the index page and the 404 page to the bucket, you can set the index page and the 404 page through BOS MMC.

          Fill in the resource names of the index page and the 404 page that you uploaded to the bucket, and click the "OK" button to make the settings take effect.


          Note:

          • You can name the index page arbitrarily according to your site management habits, such as "index.html" or "admin.html". However, the index page resources at all levels must use the same name.
          • Please note the file formats supported by the index page and the 404 page.
          • At least one of the index page and the 404 page must be filled in with a name, otherwise the settings will not take effect.
          • If the hosting page is set to the archive storage type, the configuration will not take effect.
          • If the resource matching the configured page does not exist in the bucket, the request is treated as a 404. If the resource corresponding to the 404 page does not exist, the original 404 error page is returned.
          • When static website hosting and mirroring back-to-source are both enabled for the bucket and the resource expected by the user does not exist, BOS first tries to mirror back to the source. If there is still no corresponding resource, it follows the static hosting logic and returns the 404 page.

          Example

          Suppose the user creates a bucket named website in the bj region and places the following files in it:

          • index.html in the bucket root directory
          • 404.html in the bucket root directory
          • index.html in the level-2 directory website/car
          • apollo.jpg in the level-2 directory website/car

          If the user enables the static website hosting function for the website bucket, visits behave as follows:

          • When visiting www.website.bj.bcebos.com, the page shown is index.html.
          • When visiting www.website.bj.bcebos.com/car, the page shown is index.html.
          • When visiting www.website.bj.bcebos.com/car/apollo.jpg, the page shown is apollo.jpg.
          • When visiting www.website.bj.bcebos.com/car/dazhong.jpg, the page shown is 404.html since the resource does not exist.
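
          To reproduce the layout used in this example, the pages can be uploaded with the BOS Python SDK as in the following minimal sketch (assuming a configured BosClient named client and local files with the same names).

```python
# Minimal sketch: upload the files used in the static-website example above.
# Assumes `client` is a configured BosClient and that the local files exist.
bucket_name = "website"

client.put_object_from_file(bucket_name, "index.html", "index.html")
client.put_object_from_file(bucket_name, "404.html", "404.html")
client.put_object_from_file(bucket_name, "car/index.html", "car/index.html")
client.put_object_from_file(bucket_name, "car/apollo.jpg", "car/apollo.jpg")
```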

          The static website hosting function can also be set through the API. For details, please refer to the corresponding interface description.

          Recycle Bin

          Overview

          To improve the reliability of BOS data, users can configure the recycle bin function so that deleted data is retained in the recycle bin and can be retrieved later.

          Operation Steps

          1. In the "Bucket Setting" of the corresponding bucket, unlocking the recycle bin function and configuring the retention location of the Trash file.


          Note: After the recycle bin function is enabled, files deleted in the bucket remain in the recycle bin storage path, which is bucket/.trash/ by default. Users can change this storage path, but they must follow the path naming rules. Only the bucket owner and users with the FULL_CONTROL privilege can configure the recycle bin.

          2. When a file in the bucket is deleted, it no longer appears in the console's "Object Management", but appears in the console's "Recycle Bin" with the same file name.

          Note: Objects of the archive type, whether retrieved or not, do not enter the recycle bin after deletion.

          3. Users can manage the files in the recycle bin as needed.
          • When a user performs a "restore" operation on a file, the file reappears in its path before deletion and disappears from the recycle bin.
          • When a user performs a "complete deletion" operation on a file, the file is erased permanently. Please proceed with caution.

          Tag Management

          Overview

          Baidu AI Cloud provides tag management. By adding tags to each cloud resource, resources can be quickly classified, identified, and managed.

          • Tags: each tag consists of a key and a value, and each tag (key + value) is unique.
          • Single and batch: tags can be set for an individual resource, and can also be created for cloud resources in batches.

          Limit

          Each user can create up to 200 tags.

          Add Tag

          Users can add tags to buckets according to the needs of projects and scenarios, which facilitates classification and identification management of the buckets.

          Operation Steps

          1. Log in to the management console and select "Product Services > Object Storage BOS" to enter the bucket management page.
          2. Check one or more instances and click "Edit Tags".


          Note: In batch tag editing you can only add tags; you cannot manage existing tags there. Note that tag keys are case sensitive.

          3. In the pop-up dialog box, enter the custom tag key and tag value. Each bucket can have multiple tag keys, but each key must be unique; the value can be left empty.
          4. Click "OK" to finish creating the tags.

          Note:

          • When creating bucket tags in batches, you cannot view the tags that have already been added to an individual instance.
          • To modify a bucket tag, you need to unbind the tag and then set it again.

          Unbind the Tags

          If a bucket no longer needs tags, you can unbind them.

          Operation Steps

          1. Log in to the management console, select "Product Services > Object Storage BOS", and enter the bucket management page.
          2. Check a bucket and click "Edit Tags" in the action bar.
          3. In the pop-up dialog box, remove the tag to unbind it.

          Resource Billing

          Resource billing is a function that records consumption-related data over the lifecycle of each resource, so that users can view the consumption data of resources and products at the resource level.

          Note: Resource billing is not yet supported for BOS buckets; stay tuned.

          Operation Steps

          1. Log in to the management console and select "Product Services > Object Storage BOS".
          2. Click "Tag Management" in the left navigation bar to enter the tag list page.
          3. Check a tag and click "View Bill" in the action bar.
          4. Enter the resource billing page and select the month for which you want to view the bill; the consumption information of the resources under the current tag is displayed. You can also search by tag with combined filter criteria, select consumption details for other tags by tag key and value, and download the related information.
          5. Click "View Details" to view the instance ID, product name, and bill amount.