Upload files
In BOS, the fundamental data unit for user operations is an object. There is no limit to the number of objects in a bucket, but a single object can store up to 5 TB of data. An object consists of a key, metadata, and data. The key is the object's name; metadata is the user's description of the object, made up of a series of name-value pairs; and data is the object's content.
The BOS Python SDK provides a rich set of file upload APIs, and files can be uploaded in the following ways:
- Simple upload
- Append upload
- Multipart upload
- Resumable upload
- Obtain upload progress
The naming rules for objects are as follows:
- Use UTF-8 for encoding.
- The length must range from 1 to 1023 bytes.
- The first character cannot be '/', and the '@' character is not allowed, as '@' is reserved for use in image processing APIs.
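As an illustrative sketch (the helper function below is not part of the SDK), an object key can be checked against these rules before uploading:

```python
# Illustrative helper (not part of the SDK) that checks an object key against the rules above.
def is_valid_object_key(key):
    encoded = key.encode('utf-8')            # keys are encoded in UTF-8
    if not 1 <= len(encoded) <= 1023:        # length must be 1 to 1023 bytes
        return False
    if key.startswith('/'):                  # the first character cannot be '/'
        return False
    if '@' in key:                           # '@' is reserved for image processing APIs
        return False
    return True

print(is_valid_object_key("photos/2024/cat.jpg"))  # True
print(is_valid_object_key("/invalid-key"))         # False
```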
Simple upload
In simple upload scenarios, BOS supports uploading an object from a specified file, a data stream, or a string. Please refer to the following code:
- The following code can be used for object upload:
data = open(file_name, 'rb')
# Upload an object as a data stream. Users need to calculate the data length content_length by themselves.
# Users also need to calculate content_md5 by themselves: apply the MD5 algorithm to the data to obtain 128-bit binary data, then Base64-encode it.
bos_client.put_object(bucket_name, object_key, data, content_length, content_md5)
# Upload an object from a string
bos_client.put_object_from_string(bucket_name, object_key, string)
# Upload an object from a file
bos_client.put_object_from_file(bucket_name, object_key, file_name)
Among these, data is a stream object, and different source types require different handling: for example, a string is wrapped with StringIO, while a file is opened with open(). To simplify this, BOS provides wrapper APIs that let users upload quickly.
Objects are uploaded to BOS as files. APIs related to put_object support uploading objects of up to 5 GB in size. Once the put_object, put_object_from_string, or put_object_from_file requests are successfully processed, BOS returns the object's ETag in the header as the file identifier.
These APIs all have optional parameters:
| Parameters | Description |
|---|---|
| content_type | The type of the uploaded file or string |
| content_md5 | File data verification: After setting, BOS will enable file content MD5 verification, compare the MD5 you provide with the MD5 of the file, and throw an error if they are inconsistent |
| content_length | Define the file length. put_object_from_string() does not take this parameter |
| content_sha256 | Used for file verification |
| user_metadata | User-defined metadata |
| storage_class | Set file storage class |
| user_headers | User-defined header |
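As a brief sketch (the file name and parameter values below are placeholders), the optional parameters above are passed as keyword arguments, and the ETag mentioned earlier can be read from the response metadata, assuming it is exposed as response.metadata.etag in the same way as for upload_part_from_file later in this document:

```python
from baidubce.services.bos import storage_class

# A brief sketch: the optional parameters above are passed as keyword arguments.
# "my-file.txt" and the metadata values are placeholders.
response = bos_client.put_object_from_file(bucket_name,
                                           object_key,
                                           "my-file.txt",
                                           content_type="text/plain",
                                           user_metadata={"name": "my-data"},
                                           storage_class=storage_class.STANDARD)
# BOS returns the object's ETag in the response headers as the file identifier
print(response.metadata.etag)
```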
The calculation method for content_md5 involves applying the MD5 algorithm to the data to generate 128-bit binary data, which is then encoded using Base64. An example is as follows:
import io
import hashlib
import base64

file_name = "your_file"
buf_size = 8192
with open(file_name, 'rb') as fp:
    md5 = hashlib.md5()
    while True:
        bytes_to_read = buf_size
        buf = fp.read(bytes_to_read)
        if not buf:
            break
        md5.update(buf)
content_md5 = base64.standard_b64encode(md5.digest())
Set file meta information
Object metadata refers to the attributes of files provided by users when uploading to BOS. It is mainly divided into two categories: standard HTTP attribute settings (HTTP headers) and user-defined metadata.
Set the HTTP header of an object
The BOS Python SDK essentially calls the backend HTTP API, so users can customize the object's HTTP headers when uploading files. Commonly used HTTP headers are described as follows:
| Name | Description | Default value |
|---|---|---|
| Cache-Control | Specify the caching behavior of the web page when the object is downloaded | None |
| Content-Encoding | What kind of content encoding conversion has been performed by the message body | None |
| Content-Disposition | Indicate how the MIME user agent should display the attached file (open inline or download) and the file name | None |
| Expires | Cache expiration time | None |
Reference code is as follows:
- Upload an object with a specific header from a string or a file:
user_headers = {"header_key": "header_value"}
# Upload an object with a specific header from a string
bos_client.put_object_from_string(bucket=bucket_name,
                                  key=object_key,
                                  data=string,
                                  user_headers=user_headers)
# Upload an object with a specific header from a file
bos_client.put_object_from_file(bucket=bucket_name,
                                key=object_key,
                                file_name=file,
                                user_headers=user_headers)
User-defined meta information
BOS supports user-defined metadata for describing objects. Example usage is shown in the following code:
#User-defined metadata
user_metadata = {"name": "my-data"}
#Upload an object with user-defined meta from a string
bos_client.put_object_from_string(bucket=bucket_name,
                                  key=object_key,
                                  data=string,
                                  user_metadata=user_metadata)
#Upload an object with user-defined meta from a file
bos_client.put_object_from_file(bucket=bucket_name,
                                key=object_key,
                                file_name=file,
                                user_metadata=user_metadata)
Note:
- In the above code, the user has defined a metadata entry with key "name" and value "my-data".
- When the object is downloaded, this metadata is also returned.
- An object can carry multiple such key-value pairs, but the total size of all user metadata must not exceed 2 KB.
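As a brief sketch (assuming the SDK's get_object_meta_data API is used here; the exact shape of the returned metadata may differ), the metadata set above can be read back after upload:

```python
# A brief sketch, assuming get_object_meta_data is used to read metadata back;
# user-defined entries are returned as x-bce-meta-* response headers.
response = bos_client.get_object_meta_data(bucket_name, object_key)
print(response.metadata)
```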
Set the copy attributes of an object
BOS offers a copy_object API to copy an existing object to another object. During this process, it evaluates the source object's ETag or modification status to decide whether to proceed with the copy. Detailed parameter explanations are as follows:
| Name | Types | Description | Whether required |
|---|---|---|---|
| x-bce-copy-source-if-match | String | If the ETag value of the source object matches the ETag value provided by the user, the copy operation is performed; otherwise, it fails. | No |
| x-bce-copy-source-if-none-match | String | If the ETag value of the source object does not match the ETag value provided by the user, the copy operation is performed; otherwise, it fails. | No |
| x-bce-copy-source-if-unmodified-since | String | If the source object has not been modified since x-bce-copy-source-if-unmodified-since, the copy operation will proceed; otherwise, it will fail. | No |
| x-bce-copy-source-if-modified-since | String | If the source object has been modified since x-bce-copy-source-if-modified-since, the copy operation will proceed; otherwise, it will fail. | No |
The corresponding example code:
copy_object_user_headers = {"copy_header_key": "copy_header_value"}
bos_client.copy_object(source_bucket_name=bucket_name,
                       source_key=object_name,
                       target_bucket_name=bucket_name,
                       target_key=object_name,
                       user_metadata=user_metadata,
                       user_headers=user_headers,
                       copy_object_user_headers=copy_object_user_headers)
Set storage class when uploading an object
BOS supports standard storage, infrequent access storage, cold storage and archive storage. Uploading an object and storing it as a certain storage class (standard storage by default) is achieved by specifying the StorageClass. The parameters corresponding to the four storage classes are as follows:
| Storage class | Parameters |
|---|---|
| Standard storage | STANDARD |
| Infrequent access storage | STANDARD_IA |
| Cold storage | COLD |
| Archive storage | ARCHIVE |
Taking cold storage and archive storage as examples, the code is as follows:
from baidubce.services.bos import storage_class
#Upload a cold storage class object from a file
bos_client.put_object_from_file(bucket=bucket_name,
                                key=object_key,
                                file_name=file,
                                storage_class=storage_class.COLD)
#Upload a cold storage class object from a string
bos_client.put_object_from_string(bucket=bucket_name,
                                  key=object_key,
                                  data=string,
                                  storage_class=storage_class.COLD)
#Upload an archive storage class object from a file
bos_client.put_object_from_file(bucket=bucket_name,
                                key=object_key,
                                file_name=file,
                                storage_class=storage_class.ARCHIVE)
Append upload
Objects created using the simple upload methods described above are all of the normal type and do not support append writes. This is inconvenient in scenarios where data must be appended continuously, such as log files, video surveillance, and live video streaming, because the whole object would otherwise have to be overwritten each time new data arrives.
To meet user needs, Baidu AI Cloud Object Storage (BOS) supports AppendObject, allowing files to be uploaded in an append-write manner. Objects created through the AppendObject operation are classified as Appendable Objects and can have data appended to them. The size limit for AppendObject is 0–5 GB. Note that the archive storage class does not support append uploads.
Example code for uploading via AppendObject is as follows:
#Upload an appendable object. Here "content_md5(data)" indicates that the user needs to calculate the MD5 value of the uploaded data by themselves.
#The calculation method is to apply the MD5 algorithm to the data to obtain 128-bit binary data, and then Base64-encode it. An example is shown in the "Simple upload" section above.
#Likewise, "content_length(data)" indicates that the user needs to calculate the length of the uploaded data by themselves.
response = bos_client.append_object(bucket_name=bucket_name,
                                    key=object_key,
                                    data=data,
                                    content_md5=content_md5(data),
                                    content_length=content_length(data))
#Get the position for the next append write
next_offset = response.metadata.bce_next_append_offset
bos_client.append_object(bucket_name=bucket_name,
                         key=object_key,
                         data=next_data,
                         content_md5=content_md5(next_data),
                         content_length=content_length(next_data),
                         offset=next_offset)
#Upload an appendable object from a string
from baidubce.services.bos import storage_class
bos_client.append_object_from_string(bucket_name=bucket_name,
                                     key=object_key,
                                     data=string,
                                     offset=offset,
                                     storage_class=storage_class.STANDARD,
                                     user_headers=user_headers)
Multipart upload
Apart from uploading files to BOS using the putObject API, BOS also provides another upload method: Multipart Upload. This mode can be used in various scenarios, including but not limited to the following:
- When resumable uploads are required.
- When uploading files larger than 5GB.
- When the connection to the BOS server is frequently interrupted due to unstable network conditions.
- When streaming uploads are required.
- When the file size cannot be determined before uploading.
Here, we will explain the implementation of Multipart Upload step by step.
Initialize Multipart Upload
BOS uses the initiate_multipart_upload method to initialize a multipart upload event:
upload_id = bos_client.initiate_multipart_upload(bucket_name, object_key).upload_id
This method returns an InitMultipartUploadResponse object, which contains the uploadId parameter to identify the current upload event.
Initialization of multipart upload with specific headers
bos_client.initiate_multipart_upload(bucket_name=bucket,
                                     key=object_key,
                                     user_headers=user_headers)
Headers that can be set include: "Cache-Control," "Content-Encoding," "Content-Disposition," and "Expires." The get-object and get-object-meta APIs will return these four headers as set.
Initialization of multipart upload for infrequent access, cold storage and archive storage
Initializing a multipart upload for infrequent access storage requires specifying storage_class; cold storage and archive storage work in the same way. Please refer to the following code:
from baidubce.services.bos import storage_class
bos_client.initiate_multipart_upload(bucket_name=bucket,
                                     key=object_key,
                                     storage_class=storage_class.STANDARD_IA)
Upload parts
After initialization, perform multipart upload:
import os

left_size = os.path.getsize(file_name)
# left_size is the amount of data that still has to be uploaded
# Set the starting offset position of the part
offset = 0
part_number = 1
part_list = []
while left_size > 0:
    # Set each part to 5MB
    part_size = 5 * 1024 * 1024
    if left_size < part_size:
        part_size = left_size
    response = bos_client.upload_part_from_file(
        bucket_name, object_key, upload_id, part_number, part_size, file_name, offset)
    left_size -= part_size
    offset += part_size
    part_list.append({
        "partNumber": part_number,
        "eTag": response.metadata.etag
    })
    part_number += 1
Note:
- The offset parameter is specified in bytes and represents the starting position of the part within the file.
- The size parameter, defined in bytes, specifies the size of each part. Except for the last part, every part must be at least 5 MB. However, the Upload Part API does not validate part sizes immediately; validation occurs only when complete_multipart_upload() is called.
- To avoid errors during network transmission, it's recommended to use the Content-MD5 value returned by BOS for each part after an Upload Part request to verify the correctness of the uploaded data. Once all parts are combined into one object, it no longer includes the MD5 value.
- The part number range is 1–10,000. If this range is exceeded, BOS returns an InvalidArgument error code.
- For each uploaded part, the stream must be positioned at the beginning of the respective part.
- After each part upload, BOS's response includes an etag and partNumber, which are essential for completing the multipart upload. These must be saved, typically in a list, to ensure smooth processing in subsequent steps.
Complete multipart upload
bos_client.complete_multipart_upload(bucket_name, object_key, upload_id, part_list)
In this context, the part_list type is a list where each element is a dictionary. Each dictionary contains two keys: one for partNumber and another for eTag.
An example is as follows:
[{'partNumber': 1, 'eTag': 'f1c9645dbc14efddc7d8a322685f26eb'}, {'partNumber': 2, 'eTag': 'f1c9645dbc14efddc7d8a322685f26eb'}, {'partNumber': 3, 'eTag': '93b885adfe0da089cdf634904fd59f71'}]
The fields available in the response object returned by this method are as follows:
| Parameters | Description |
|---|---|
| bucket | Bucket name |
| key | Object name |
| e_tag | ETag of each multipart uploaded |
| location | Object URL |
Note: The ETag contained in this object corresponds to the ETag of each part during the multipart upload. Once BOS receives the Part list submitted by the user, it verifies the validity of each part one by one. After validation, BOS assembles the parts into a complete object.
Cancel multipart upload event
Users can cancel multipart uploads using the abort_multipart_upload method.
bos_client.abort_multipart_upload(bucket_name, object_key, upload_id=upload_id)
Get unfinished multipart upload events
Users can obtain the unfinished multipart upload events in the bucket by the following two methods:
Method 1:
response = bos_client.list_multipart_uploads(bucket_name)
for item in response.uploads:
    print(item.upload_id)
The list_multipart_uploads method returns at most 1,000 multipart uploads per request and supports filtering by prefix and delimiter.
The parameters available for the list_multipart_uploads method also include:
| Name | Types | Description | Whether required |
|---|---|---|---|
| delimiter | String | Delimiter; primarily used to implement the logic of the list folder | No |
| key_marker | String | After the objects are sorted lexicographically, results are returned starting from the entry after key_marker | No |
| max_uploads | Int | Maximum number of multipart uploads returned by this request, defaulting to 1,000, maximum 1,000 | No |
| prefix | String | Key prefix; only keys beginning with this prefix are returned | No |
The fields available in the response object returned by the list_multipart_uploads method are as follows:
| Parameters | Description |
|---|---|
| bucket | Bucket name |
| key_marker | The key from which this listing of multipart uploads starts |
| next_key_marker | This item is returned only when delimiter is specified and IsTruncated is true, serving as the marker value for next query |
| is_truncated | Indicates whether the results are truncated; false means all results were returned this time, true means they were not |
| prefix | The matched objects from prefix to the first Delimiter character are returned as a group of elements |
| common_prefixes | This item is returned only when a delimiter is specified |
| delimiter | Query terminator |
| max_uploads | Maximum number of uploads returned by this request |
| uploads | Container for all unfinished multipart upload events |
| +owner | Information about the user who owns the corresponding bucket |
| +id | User ID of bucket owner |
| +display_name | Name of bucket owner |
| +key | The name of the object to which the part belongs |
| +upload_id | Multipart upload ID |
| +initiated | Starting time for multipart upload |
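When a listing is truncated, next_key_marker can be passed back as key_marker to retrieve the next page. A minimal pagination sketch under the assumptions in the table above (a delimiter is specified so that next_key_marker is returned; the prefix, delimiter, and page size are illustrative):

```python
# A minimal pagination sketch; the prefix, delimiter and max_uploads values are illustrative.
response = bos_client.list_multipart_uploads(bucket_name,
                                             prefix="logs/",
                                             delimiter="/",
                                             max_uploads=100)
while True:
    for item in response.uploads:
        print(item.key, item.upload_id)
    if not response.is_truncated:
        break
    # next_key_marker is returned when the listing is truncated and a delimiter is specified
    response = bos_client.list_multipart_uploads(bucket_name,
                                                 prefix="logs/",
                                                 delimiter="/",
                                                 max_uploads=100,
                                                 key_marker=response.next_key_marker)
```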
Method 2:
uploads = bos_client.list_all_multipart_uploads(bucket_name)
for item in uploads:
    print(item.upload_id)
The list_all_multipart_uploads method returns a generator over uploads and is not limited to 1,000 results per request; it returns all results.
Get information about all uploaded parts
Users can use the following two methods to obtain all uploaded parts in an upload event:
Method 1:
response = bos_client.list_parts(bucket_name, object_key, upload_id)
for item in response.parts:
    print(item.part_number)
Note:
- BOS arranges parts in ascending order of PartNumber.
- Due to potential network transmission errors, it is not recommended to finalize the Part list for CompleteMultipartUpload based on the ListParts results.
The parameters available for the list_parts method also include:
| Name | Types | Description | Whether required |
|---|---|---|---|
| max_parts | Int | Maximum number of parts BOS can return in a single response, defaulting to 1,000, maximum 1,000 | No |
| part_number_marker | Int | Parts are sorted by partNumber; results for this request start from the part after this partNumber | No |
The fields available in the response object returned by the list_parts method are as follows:
| Parameters | Description |
|---|---|
| bucket | Bucket name |
| key | Object name |
| initiated | Starting time for this multipart upload |
| max_parts | Maximum number of parts returned by this request |
| is_truncated | Indicates whether the results are truncated; false means all results were returned this time, true means they were not |
| storage_class | The storage class of the Object is currently divided into standard class STANDARD, infrequent access class STANDARD_IA, cold storage class COLD and archive class ARCHIVE |
| part_number_marker | Part start marker position |
| parts | Part list, list type |
| +part_number | Part number |
| +last_modified | The last modification time of this part |
| +e_tag | ETag of the uploaded part |
| +size | The size of the part content (in bytes) |
| upload_id | ID of this multipart upload |
| owner | Information about the user who owns the corresponding bucket |
| +id | User ID of bucket owner |
| +display_name | The name of the bucket owner |
| next_part_number_marker | partNumber of the last record returned upon this request, which could be used as the part_number_marker for the next request |
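Similarly, next_part_number_marker can be fed back as part_number_marker to page through a large part list. A minimal sketch (the page size is illustrative):

```python
# A minimal pagination sketch for list_parts; the max_parts value is illustrative.
marker = 0
while True:
    response = bos_client.list_parts(bucket_name, object_key, upload_id,
                                     max_parts=100, part_number_marker=marker)
    for part in response.parts:
        print(part.part_number, part.e_tag)
    if not response.is_truncated:
        break
    marker = response.next_part_number_marker
```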
Method 2:
parts = bos_client.list_all_parts(bucket_name, object_key, upload_id=upload_id)
for item in parts:
    print(item.part_number)
The list_all_parts method returns a generator for parts and is not limited by the maximum of 1,000 results returned at a time; it will return all results.
Get the storage class of the object for multipart upload
response = bos_client.list_parts(bucket_name=bucket,
                                 key=object_key,
                                 upload_id=upload_id)
print(response.storage_class)
Encapsulate multipart upload
In the Python SDK, BOS provides the put_super_object_from_file() API, which encapsulates the three steps of multipart upload: initiate_multipart_upload, upload_part_from_file, and complete_multipart_upload. Users can simply call this API to complete a multipart upload.
import multiprocessing
file_name = "/path/to/file.zip"
result = bos_client.put_super_obejct_from_file(bucket_name, key, file_name,
                                               chunk_size=5, thread_num=multiprocessing.cpu_count())
if result:
    print("Upload success!")
Other parameters available for this method include:
| Name | Types | Description | Whether required |
|---|---|---|---|
| chunk_size | int | The part size is specified in MB, with a default of 5 MB. | No |
| thread_num | int | Number of threads in the thread pool for multipart upload: The default is equal to the number of CPU cores | No |
If uploading a large file takes too long, and the user wishes to terminate the multipart upload operation, they can call the cancel() method in UploadTaskHandle to cancel the operation. An example is as follows:
import threading
import time
import multiprocessing
from baidubce.services.bos.bos_client import UploadTaskHandle
file_name = "/path/to/file.zip"
uploadTaskHandle = UploadTaskHandle()
t = threading.Thread(target=bos_client.put_super_obejct_from_file, args=(bucket_name, key, file_name),
                     kwargs={
                         "chunk_size": 5,
                         "thread_num": multiprocessing.cpu_count(),
                         "uploadTaskHandle": uploadTaskHandle
                     })
t.start()
time.sleep(2)
uploadTaskHandle.cancel()
t.join()
Resumable upload
When users upload a large file to BOS, the entire upload fails if the network is unstable or the program crashes, and the parts uploaded before the failure become invalid, so the user has to start over. This wastes resources, and in an unstable network the upload often cannot be completed even after multiple retries. For these scenarios, BOS provides resumable upload capabilities:
- In a generally stable network, it is recommended to use three-step multipart upload, dividing the object into parts of about 1 MB; refer to [Multipart Upload](#Multipart upload).
- If the network condition is very poor, it is recommended to use the appendObject method for resumable upload, appending a small amount of data (256 KB) each time; refer to [Append Upload](#Append upload) and the sketch after the tips below.
Tips
- Resumable upload is an encapsulation and enhancement of multipart upload, implemented using multipart upload;
- For large files or poor network environments, it is recommended to use multipart upload;
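As an illustration of the append-based approach, here is a minimal sketch that appends a local file in 256 KB chunks and resumes from the last confirmed offset after a failure; the chunk size, the helper function, and the bare retry loop are assumptions for illustration, not part of the SDK:

```python
import base64
import hashlib

CHUNK_SIZE = 256 * 1024  # append small chunks, as suggested for poor networks

def append_upload(bos_client, bucket_name, object_key, file_name):
    """Minimal resumable upload sketch built on append_object (illustrative only)."""
    offset = 0
    with open(file_name, 'rb') as fp:
        while True:
            data = fp.read(CHUNK_SIZE)
            if not data:
                break
            content_md5 = base64.standard_b64encode(hashlib.md5(data).digest())
            try:
                response = bos_client.append_object(bucket_name=bucket_name,
                                                    key=object_key,
                                                    data=data,
                                                    content_md5=content_md5,
                                                    content_length=len(data),
                                                    offset=offset)
                # BOS returns the position for the next append write
                offset = int(response.metadata.bce_next_append_offset)
            except Exception:
                # On failure, rewind to the last confirmed offset and retry the same chunk
                fp.seek(offset)
```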
Fetch and upload
The code below is designed to fetch resources from a given URL and store them in a specified bucket. The requester must have write permission for the bucket to perform this action. Only one object can be fetched at a time, and the user has the option to customize the object's name. By default, the fetching process is synchronous. For more information, refer to the FetchObject API.
from baidubce.services.bos.bos_client import FETCH_MODE_ASYNC
from baidubce.services.bos import storage_class
fetch_url = "<YOUR_URL>"
# Synchronous fetching by default
bos_client.fetch_object(bucket_name, object_key, fetch_url)
# Asynchronous fetching
response = bos_client.fetch_object(bucket_name,
                                   object_key,
                                   fetch_url,
                                   fetch_mode=FETCH_MODE_ASYNC,
                                   storage_class=storage_class.COLD)
print("jobId:{}, return code:{}, return message:{}".format(response.job_id,
                                                           response.code,
                                                           response.message))
Obtain upload progress
The Python SDK provides real-time upload progress updates during the upload process. It currently supports simple uploads, append uploads, and multipart uploads. To enable this feature, add the progress_callback parameter to the corresponding API and use a progress bar callback function, or opt for the default progress bar callback function available in the tool class.
The example of the callback function is as follows:
import sys

def percentage(consumed_bytes, total_bytes):
    """Progress bar callback function that calculates the current completion percentage

    :param consumed_bytes: The amount of data uploaded/downloaded
    :param total_bytes: Total amount of data
    """
    if total_bytes:
        rate = int(100 * (float(consumed_bytes) / float(total_bytes)))
        print('\r{0}% '.format(rate))
        sys.stdout.flush()

# progress_callback is an optional parameter used to implement the progress bar function.
bos_client.put_object(bucket_name, object_key, data, content_length, content_md5, progress_callback=percentage)
It is recommended to use the default progress bar callback function (utils.default_progress_callback) in the tool class, which currently supports percentage and progress bar display. The example is as follows:
# Import the SDK tool class package
from baidubce import utils
# progress_callback is an optional parameter used to implement the progress bar function.
bos_client.put_object(bucket_name, object_key, data, content_length, content_md5, progress_callback=utils.default_progress_callback)
- put_object example code
# Import the SDK tool class package
from baidubce import utils
# progress_callback is an optional parameter used to implement the progress bar function.
data = open(file_name, 'rb')
bos_client.put_object(bucket_name, object_key, data, content_length, content_md5,
                      progress_callback=utils.default_progress_callback)
#Upload object from a string
bos_client.put_object_from_string(bucket_name, object_key, string,
                                  progress_callback=utils.default_progress_callback)
#Upload object from a file
bos_client.put_object_from_file(bucket_name, object_key, file_name,
                                progress_callback=utils.default_progress_callback)
- append_object example code
# Import the SDK tool class package
from baidubce import utils
# progress_callback is an optional parameter used to implement the progress bar function.
#Upload an appendable object.
response = bos_client.append_object(bucket_name=bucket_name,
                                    key=object_key,
                                    data=data,
                                    content_md5=content_md5(data),
                                    content_length=content_length(data),
                                    progress_callback=utils.default_progress_callback)
#Upload an appendable object from a string
result = bos_client.append_object_from_string(bucket_name=bucket_name,
                                              key=object_key,
                                              data=string,
                                              progress_callback=utils.default_progress_callback)
- Example code of upload_part_from_file
# Import the SDK tool class package
from baidubce import utils
# progress_callback is an optional parameter used to implement the progress bar function.
bos_client.upload_part_from_file(
    bucket_name, key, upload_id, part_number, part_size, file_name, offset, progress_callback=utils.default_progress_callback)
Support single-link rate limit
Baidu AI Cloud Object Storage (BOS) provides a public network bandwidth limit of 10 Gbit/s per bucket and an intranet bandwidth limit of 50 Gbit/s per bucket. When the upload or download usage reaches the bandwidth limit, the error code RequestRateLimitExceeded will be returned. To ensure normal service usage, BOS supports traffic control during uploads and downloads to prevent high traffic services from affecting other applications.
Example of upload-type request APIs
The rate limit value ranges from 819200 to 838860800 bit/s, that is, 100 KB/s to 100 MB/s, and must be a number. BOS limits the rate of the request according to the specified value. If the rate limit value is outside this range or otherwise invalid, error code 400 is returned.
import os

traffic_limit_speed = 819200 * 5
# put a file as object
_create_file(file_name, 5 * 1024 * 1024)
bos_client.put_object_from_file(bucket_name, key, file_name, traffic_limit=traffic_limit_speed)

# multi-upload operation samples
# put a super file to object
_create_file(file_name, 10 * 1024 * 1024)
# SuperFile step 1: init multi-upload
upload_id = bos_client.initiate_multipart_upload(bucket_name, key).upload_id
# SuperFile step 2: upload file part by part
left_size = os.path.getsize(file_name)
offset = 0
part_number = 1
part_list = []
while left_size > 0:
    part_size = 5 * 1024 * 1024
    if left_size < part_size:
        part_size = left_size
    response = bos_client.upload_part_from_file(bucket_name, key, upload_id, part_number, part_size, file_name, offset, traffic_limit=traffic_limit_speed)
    left_size -= part_size
    offset += part_size
    # you should store every part number and etag to invoke complete multi-upload
    part_list.append({
        "partNumber": part_number,
        "eTag": response.metadata.etag
    })
    part_number += 1

# copy an object
bos_client.copy_object(source_bucket, source_key, target_bucket, target_key, traffic_limit=traffic_limit_speed)
# append object
bos_client.append_object(bucket_name, key, traffic_limit=traffic_limit_speed)
# upload part copy
bos_client.upload_part_copy(source_bucket, source_key, target_bucket, target_key, upload_id, part_number,
                            part_size, offset, traffic_limit=traffic_limit_speed)
