File Management
Upload Files
In BOS, the basic unit of user data is the object. An object consists of a Key, Meta, and Data: the Key is the object's name, Meta is the user's description of the object (a series of name-value pairs), and Data is the object's content.
The BOS GO SDK provides a rich set of file upload interfaces. Files can be uploaded in the following ways:
- Simple Upload
- Append Upload
- Capture upload
- Multipart upload
- Automatic three-step upload
Simple Upload
With simple upload, BOS supports uploading an object from a local file, a string, a byte array, or a data stream, as shown in the following code:
// import "github.com/baidubce/bce-sdk-go/bce"
// Upload from local file
etag, err := bosClient.PutObjectFromFile(BucketName, ObjectName, fileName, nil)
// Upload from string
str := "test put object"
etag, err := bosClient.PutObjectFromString(BucketName, ObjectName, str, nil)
// Upload from byte array
byteArr := []byte("test put object")
etag, err := bosClient.PutObjectFromBytes(BucketName, ObjectName, byteArr, nil)
// Upload from data stream
bodyStream, err := bce.NewBodyFromFile(fileName)
etag, err := bosClient.PutObject(BucketName, ObjectName, bodyStream, nil)
// Provide necessary parameters to upload from data stream using the basic interface
bodyStream, err := bce.NewBodyFromFile(fileName)
etag, err := bosClient.BasicPutObject(BucketName, ObjectName, bodyStream)
Objects are uploaded to BOS as files, and the simple upload interfaces above support objects no larger than 5GB. After a successful request, BOS returns the ETag of the object in the response header as the file's identifier.
Set object metadata
Object metadata describes the attributes of a file uploaded to BOS. It falls into two types: standard HTTP headers and custom meta information.
Set the HTTP header of object
The BOS GO SDK essentially calls the backend HTTP API, so users can customize an object's HTTP headers when uploading files. Common HTTP headers are described below:
Name | Description | Default value |
---|---|---|
Content-MD5 | File data verification. Once set, BOS enables MD5 verification of the file content, comparing the MD5 you provide with the MD5 of the file; an error is returned if they differ. | None |
Content-Type | The MIME type of the file, which defines the file type and web page encoding and determines in what form and encoding the browser reads the file. If not specified, BOS generates it automatically from the file extension; if there is no extension, the default value is used. | application/octet-stream |
Content-Disposition | Instructs the MIME user agent how to handle the attached file (display, open, or download) and what file name to use. | None |
Content-Length | Length of the uploaded content. If it is less than the length of the stream/file, the upload is truncated to this length; otherwise, the actual length is used. | Length of stream/file |
Expires | Cache expiration time | None |
Cache-Control | Specifies the caching behavior of the web page when the object is downloaded | None |
The reference codes are as follows:
// import "github.com/baidubce/bce-sdk-go/services/bos/api"
args := new(api.PutObjectArgs)
// Set MIME type of upload contents
args.ContentType = "text/javascript"
// Set length of upload contents
args.ContentLength = 1024
// Set cache expiration time
args.Expires = "Mon, 19 Mar 2018 11:55:32 GMT"
// Set cache behavior
args.CacheControl = "max-age=3600"
etag, err := bosClient.PutObject(BucketName, ObjectName, bodyStream, args)
Note: When a user uploads an object, the SDK sets ContentLength and ContentMD5 automatically to ensure data validity. If the user sets ContentLength manually, it must be greater than or equal to 0 and less than or equal to the actual object size; a value smaller than the actual size uploads only the truncated content, and an error is reported if the value is negative or larger than the actual size.
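As a local illustration of the truncation rule in the note above, the sketch below keeps only the first ContentLength bytes of a body and rejects invalid lengths; `truncateBody` is a hypothetical helper for illustration, not an SDK call:

```go
package main

import "fmt"

// truncateBody mimics how a user-supplied ContentLength smaller than the
// actual body keeps only the first contentLength bytes, and how a negative
// or too-large value is rejected. Purely local; no SDK call is made.
func truncateBody(body []byte, contentLength int64) ([]byte, error) {
	if contentLength < 0 || contentLength > int64(len(body)) {
		return nil, fmt.Errorf("invalid ContentLength %d for a %d-byte body",
			contentLength, len(body))
	}
	return body[:contentLength], nil
}

func main() {
	kept, err := truncateBody([]byte("test put object"), 4)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(kept)) // prints "test"
}
```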
Custom meta data
BOS allows custom metadata to be attached to an object as its description, as shown in the following code:
// import "github.com/baidubce/bce-sdk-go/services/bos/api"
args := new(api.PutObjectArgs)
// Set custom meta data
args.UserMeta = map[string]string{
"name1": "my-metadata1",
"name2": "my-metadata2",
}
etag, err := bosClient.PutObject(BucketName, ObjectName, bodyStream, args)
Tips
- In the code above, two custom metadata entries named "name1" and "name2" are set, with values "my-metadata1" and "my-metadata2" respectively.
- When users download this object, they get the metadata along with it.
- An object can carry multiple such entries, but the total size of the user meta must stay below 2KB.
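To stay within the 2KB limit mentioned above, the metadata size can be pre-checked before uploading. The sketch below assumes the limit is measured as the sum of key and value byte lengths; `validateUserMeta` is a hypothetical helper, not an SDK function:

```go
package main

import "fmt"

// validateUserMeta checks the documented constraint that the total size of
// custom user metadata stays below 2KB. Counting key plus value bytes is an
// assumption made for this illustration.
func validateUserMeta(meta map[string]string) error {
	total := 0
	for k, v := range meta {
		total += len(k) + len(v)
	}
	if total >= 2*1024 {
		return fmt.Errorf("user meta is %d bytes; it must stay below 2KB", total)
	}
	return nil
}

func main() {
	meta := map[string]string{"name1": "my-metadata1", "name2": "my-metadata2"}
	fmt.Println(validateUserMeta(meta)) // prints "<nil>"
}
```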
Set storage type when uploading object
BOS supports standard, infrequent access, and cold storage. The storage type of an uploaded object is set via StorageClass; the three storage types correspond to the following parameters:
Storage type | Parameter |
---|---|
Standard storage | STANDARD |
Infrequent access storage | STANDARD_IA |
Cold storage | COLD |
Taking infrequent access storage as an example, the code is as follows:
// import "github.com/baidubce/bce-sdk-go/services/bos/api"
args := new(api.PutObjectArgs)
args.StorageClass = api.STORAGE_CLASS_STANDARD_IA
etag, err := bosClient.PutObject(BucketName, ObjectName, bodyStream, args)
Append Upload
The simple upload methods introduced above create objects of the Normal type, which cannot be appended to. This is inconvenient in scenarios where data is appended frequently, such as logs, video surveillance, and live streaming.
For this reason, Baidu AI Cloud BOS supports AppendObject, i.e. uploading a file by appending. An object created by the AppendObject operation is of the Appendable type, and data can be appended to it. The size of an appendable object ranges from 0 to 5GB. When the network condition is poor, it is recommended to use AppendObject and append a small amount of data each time (e.g. 256KB).
Sample code for uploading through AppendObject is as follows:
// import "github.com/baidubce/bce-sdk-go/services/bos/api"
args := new(api.AppendObjectArgs)
// 1.Original interface upload: set infrequent access storage and the append offset
args.StorageClass = api.STORAGE_CLASS_STANDARD_IA
args.Offset = 1024
res, err := bosClient.AppendObject(BucketName, ObjectName, bodyStream, args)
// 2.Simple encapsulated interface, only supports setting the offset
res, err := bosClient.SimpleAppendObject(BucketName, ObjectName, bodyStream, offset)
// 3.Encapsulated interface uploading from a string, only supports setting the offset
res, err := bosClient.SimpleAppendObjectFromString(BucketName, ObjectName, "abc", offset)
// 4.Encapsulated interface uploading from a given file name, only supports setting the offset
res, err := bosClient.SimpleAppendObjectFromFile(BucketName, ObjectName, "<path-to-local-file>", offset)
fmt.Println(res.ETag) // Print ETag.
fmt.Println(res.ContentMD5) // Print ContentMD5
fmt.Println(res.NextAppendOffset) // Print NextAppendOffset.
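The recommendation above (append small chunks, e.g. 256KB, on poor networks) can be sketched locally. `appendChunks` below is a hypothetical helper that only computes the offset each append would start at, mirroring how NextAppendOffset advances after each call; no SDK calls are made:

```go
package main

import "fmt"

const chunkSize = 256 * 1024 // 256KB per append, as the text recommends

// appendChunks splits data into chunkSize pieces and returns the offset each
// append would start at. After a real SimpleAppendObject call, the next
// offset would come back in NextAppendOffset.
func appendChunks(data []byte) []int64 {
	var offsets []int64
	for off := int64(0); off < int64(len(data)); off += chunkSize {
		offsets = append(offsets, off)
	}
	return offsets
}

func main() {
	data := make([]byte, 600*1024) // 600KB of data -> three appends
	fmt.Println(appendChunks(data)) // prints "[0 262144 524288]"
}
```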
Capture Upload
BOS can automatically fetch content from a user-provided URL and save it as an object with a specified name in a specified bucket.
// import "github.com/baidubce/bce-sdk-go/services/bos/api"
args := new(api.FetchObjectArgs)
// 1.Original interface fetch, set asynchronous fetch mode
args.FetchMode = api.FETCH_MODE_ASYNC
res, err := bosClient.FetchObject(bucket, object, url, args)
// 2.Basic fetch interface, uses the default fetch mode
res, err := bosClient.BasicFetchObject(bucket, object, url)
// 3.Easy-to-use interface, specify optional parameters directly
res, err := bosClient.SimpleFetchObject(bucket, object, url,
	api.FETCH_MODE_ASYNC, api.STORAGE_CLASS_STANDARD_IA)
fmt.Println(res.ETag) // Print ETag.
Multipart Upload
In addition to uploading files to BOS via the simple upload and append upload modes, BOS provides another upload mode: Multipart Upload. You can use Multipart Upload in application scenarios such as (but not limited to) the following:
- Breakpoint upload support is required.
- The file to upload is larger than 5 GB.
- The network conditions are poor, and the connection with BOS servers is often disconnected.
- The file needs to be uploaded as a stream.
- The size of the uploaded file cannot be determined before uploading it.
BOS GO SDK provides the control parameters of operation by part:
- MultipartSize: The size of each part, 10MB by default and no less than 5MB.
- MaxParallel: Concurrency of operation by part, 10 by default.
The following sample code sets the part size to 20MB, with a concurrency of 100:
// import "github.com/baidubce/bce-sdk-go/services/bos"
client, _ := bos.NewClient("<your-ak>", "<your-sk>", "<endpoint>")
client.MultipartSize = 20 * (1 << 20)
client.MaxParallel = 100
In addition to the parameters above, each part size is aligned to 1MB, and the maximum number of parts is limited to 10,000; if the part size is so small that the number of parts would exceed this upper limit, the part size is adjusted automatically.
Next, the implementation of Multipart Upload is described step by step. Suppose you have a file with the local path /path/to/file.zip; because of its large size, upload it to BOS through Multipart Upload.
Initialize Multipart Upload
The BasicInitiateMultipartUpload method initializes a basic multipart upload event:
res, err := bosClient.BasicInitiateMultipartUpload(BucketName, ObjectKey)
fmt.Println(res.UploadId) // Print the UploadId obtained from initializing the multipart upload
The returned result contains the UploadId, the unique identifier of this multipart upload event, which is used in the subsequent operations.
Initialize a multipart upload for an infrequent access storage object
The InitiateMultipartUpload interface provided by the BOS GO SDK can set other parameters of the multipart upload; the following code initializes a multipart upload event with infrequent access storage.
// import "github.com/baidubce/bce-sdk-go/services/bos/api"
args := new(api.InitiateMultipartUploadArgs)
args.StorageClass = api.STORAGE_CLASS_STANDARD_IA
res, err := bosClient.InitiateMultipartUpload(BucketName, ObjectKey, contentType, args)
fmt.Println(res.UploadId) // Print the UploadId obtained from initializing the multipart upload
Initialize a multipart upload for a cold storage object
Initialize a Multipart Upload event of cold storage:
// import "github.com/baidubce/bce-sdk-go/services/bos/api"
args := new(api.InitiateMultipartUploadArgs)
args.StorageClass = api.STORAGE_CLASS_COLD
res, err := bosClient.InitiateMultipartUpload(BucketName, ObjectKey, contentType, args)
fmt.Println(res.UploadId) // Print the UploadId obtained from initializing the multipart upload
Upload in parts
Next, upload the file in parts.
// import "github.com/baidubce/bce-sdk-go/bce"
// import "github.com/baidubce/bce-sdk-go/services/bos"
// import "github.com/baidubce/bce-sdk-go/services/bos/api"
// import "os"
file, _ := os.Open("/path/to/file.zip")
// The part size is aligned by MULTIPART_ALIGN=1MB.
partSize := (bosClient.MultipartSize +
bos.MULTIPART_ALIGN - 1) / bos.MULTIPART_ALIGN * bos.MULTIPART_ALIGN
// Obtain the file size, and calculate number of parts, with maximum number of part MAX_PART_NUMBER=10000
fileInfo, _ := file.Stat()
fileSize := fileInfo.Size()
partNum := (fileSize + partSize - 1) / partSize
if partNum > bos.MAX_PART_NUMBER { // When the maximum number of parts is exceeded, the part size needs to be adjusted.
partSize = (fileSize + bos.MAX_PART_NUMBER - 1) / bos.MAX_PART_NUMBER
partSize = (partSize + bos.MULTIPART_ALIGN - 1) / bos.MULTIPART_ALIGN * bos.MULTIPART_ALIGN
partNum = (fileSize + partSize - 1) / partSize
}
// Create the list of ETag and PartNumber after uploading of each part
partEtags := make([]api.UploadInfoType, 0, partNum)
// Upload part by part
for i := int64(1); i <= partNum; i++ {
// Calculate offset and the current upload size uploadSize
uploadSize := partSize
offset := partSize * (i - 1)
left := fileSize - offset
if left < partSize {
uploadSize = left
}
// Create a file flow of specified offset and size
partBody, _ := bce.NewBodyFromSectionFile(file, offset, uploadSize)
// Upload the current part
etag, err := bosClient.BasicUploadPart(BucketName, ObjectKey, uploadId, int(i), partBody)
// Save the serial number and ETag returned after successful upload of the current part
partEtags = append(partEtags, api.UploadInfoType{PartNumber: int(i), ETag: etag})
}
The core of the code above is calling the BasicUploadPart method to upload each part, but note the following:
- The BasicUploadPart method requires that every part except the last be at least 5MB. However, the interface does not check the size of the uploaded part; the size is verified only when the multipart upload is completed.
- To guard against errors during network transmission, it is recommended to verify each part after BasicUploadPart using the Content-MD5 value that BOS returns for it. Once all parts are combined into one object, the object no longer contains an MD5 value.
- Part numbers range from 1 to 10,000. If this range is exceeded, BOS returns an InvalidArgument error code.
- Each time a part is uploaded, the result returned by BOS contains a PartETag object, a combination of the ETag and the PartNumber of the uploaded part. It is needed later to complete the Multipart Upload, so it must be saved. In general, these PartETag objects are collected in a list.
Complete Multipart Upload
Complete the Multipart Upload as shown in the following code:
// import "github.com/baidubce/bce-sdk-go/services/bos/api"
completeArgs := api.CompleteMultipartUploadArgs{Parts: partEtags}
res, _ := bosClient.CompleteMultipartUploadFromStruct(
BucketName, ObjectKey, uploadId, completeArgs, nil)
// Output contents of result object
fmt.Println(res.Location)
fmt.Println(res.Bucket)
fmt.Println(res.Key)
fmt.Println(res.ETag)
The partEtags in the code above is the list of PartETag objects saved in the previous step. After receiving the list of parts submitted by the user, BOS verifies the validity of each part one by one. Once all parts are verified, BOS combines them into a complete object.
Cancel Multipart Upload
You can use the AbortMultipartUpload method to cancel a Multipart Upload:
bosClient.AbortMultipartUpload(BucketName, ObjectKey, uploadId)
Get Unfinished Multipart Upload
You can use the ListMultipartUploads method to get the unfinished Multipart Upload events in a bucket.
// List all unfinished multipart uploads under the bucket
res, err := bosClient.BasicListMultipartUploads(BucketName)
// Output status information of returned result
fmt.Println(res.Bucket)
fmt.Println(res.Delimiter)
fmt.Println(res.Prefix)
fmt.Println(res.IsTruncated)
fmt.Println(res.KeyMarker)
fmt.Println(res.NextKeyMarker)
fmt.Println(res.MaxUploads)
// Traverse the list of all unfinished multipart uploads
for _, multipartUpload := range res.Uploads {
fmt.Println("Key:", multipartUpload.Key, ", UploadId:", multipartUpload.UploadId)
}
Note: 1. By default, if there are more than 1,000 multipart upload events, only 1,000 are returned; IsTruncated in the result is True, and NextKeyMarker is returned as the starting point of the next read. 2. To retrieve more multipart upload events, use the KeyMarker parameter to read in batches.
Get All Uploaded Part Information
You can use the ListParts method to get all the uploaded parts of an upload event.
// List the parts uploaded successfully so far with the basic interface
res, err := bosClient.BasicListParts(BucketName, ObjectKey, uploadId)
// Provide parameters with the original interface, listing at most 100 successfully uploaded parts
args := new(api.ListPartsArgs)
args.MaxParts = 100
res, err := bosClient.ListParts(BucketName, ObjectKey, uploadId, args)
// Print the returned status result
fmt.Println(res.Bucket)
fmt.Println(res.Key)
fmt.Println(res.UploadId)
fmt.Println(res.Initiated)
fmt.Println(res.StorageClass)
fmt.Println(res.PartNumberMarker)
fmt.Println(res.NextPartNumberMarker)
fmt.Println(res.MaxParts)
fmt.Println(res.IsTruncated)
// Print part information
for _, part := range res.Parts {
fmt.Println("PartNumber:", part.PartNumber, ", Size:", part.Size,
", ETag:", part.ETag, ", LastModified:", part.LastModified)
}
Note: 1. By default, if an upload event has more than 1,000 parts, only 1,000 are returned; IsTruncated in the result is True, and NextPartNumberMarker is returned as the starting point of the next read. 2. To retrieve more parts, use the PartNumberMarker parameter to read in batches.
The example above calls the APIs sequentially rather than concurrently; to speed uploads up, users would need to implement the concurrent part upload themselves. For convenience, the BOS Client encapsulates concurrent multipart upload in the UploadSuperFile interface:
- Interface: UploadSuperFile(bucket, object, fileName, storageClass string) error
- Parameters:
  - bucket: name of the bucket to upload to
  - object: name of the uploaded object
  - fileName: name of the local file
  - storageClass: storage type of the uploaded object, standard storage by default
- Return value:
  - error: error during the upload; nil if the upload succeeds.
You only need to provide bucket, object, and fileName to perform a concurrent multipart upload; you can also specify the storageClass of the uploaded object.
Automatic Three-step Upload
For objects exceeding 5GB, an encapsulated three-step upload is provided, see the following code:
// import "github.com/baidubce/bce-sdk-go/bce"
// Upload from local file
res, err := bosClient.ParallelUpload(BucketName, ObjectName, fileName, "", nil)
The response and error corresponding to the request are returned; the args parameter carries the same information as in the simple upload above.
Download File
BOS GO SDK provides rich file download interfaces, and users can download files from BOS as follows:
- Simple streaming download
- Download to local file
- Range download
Simple Streaming Download
You can read object in a stream through the following codes:
// Provide bucket and object to obtain an object directly
res, err := bosClient.BasicGetObject(BucketName, ObjectName)
// Get ObjectMeta
meta := res.ObjectMeta
// Obtain the object read stream (io.ReadCloser)
stream := res.Body
// Make sure the object read stream is closed
defer stream.Close()
// Call the Read method of the stream to process the object
...
Note: 1. The result returned by the interface above contains various information about the object, including its bucket, its name, its metadata, and a read stream. 2. The object's metadata can be obtained via the ObjectMeta field of the result object; it contains the ETag, the HTTP headers defined when the object was uploaded, and the custom metadata. 3. The read stream of the object can be obtained via the Body field of the result object; by operating on the read stream, the object's contents can be read into a file or memory, or processed in other ways.
Download to Local File
You can download object to the specified file through the following code:
err := bosClient.BasicGetObjectToFile(BucketName, ObjectName, "path-to-local-file")
Range Download
For finer control, the download range and returned headers can be specified to retrieve an object more precisely. If the specified download range is 0-100, bytes 0 through 100 (inclusive) are returned, 101 bytes in total, i.e. [0, 100].
// Specify the range and the returned headers
responseHeaders := map[string]string{"ContentType": "image/gif"}
rangeStart := int64(1024)
rangeEnd := int64(2048)
res, err := bosClient.GetObject(BucketName, ObjectName, responseHeaders, rangeStart, rangeEnd)
// Specify only the start of the range
res, err := bosClient.GetObject(BucketName, ObjectName, responseHeaders, rangeStart)
// Do not specify a range
res, err := bosClient.GetObject(BucketName, ObjectName, responseHeaders)
// Do not specify optional returned headers
res, err := bosClient.GetObject(BucketName, ObjectName, nil)
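The inclusive-range semantics described above can be checked locally; `rangeBytes` is a hypothetical helper that mimics how a [start, end] range selects bytes, not an SDK call:

```go
package main

import "fmt"

// rangeBytes returns the bytes an inclusive [start, end] range selects,
// matching the semantics described in the text: range 0-100 returns 101
// bytes. The end is clamped to the last byte, as an HTTP range would be.
func rangeBytes(data []byte, start, end int64) []byte {
	if end >= int64(len(data)) {
		end = int64(len(data)) - 1
	}
	return data[start : end+1]
}

func main() {
	data := make([]byte, 4096)
	fmt.Println(len(rangeBytes(data, 0, 100)))     // prints "101"
	fmt.Println(len(rangeBytes(data, 1024, 2048))) // prints "1025"
}
```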
Based on the range download interface, users can implement segmented download and resumable transfer. For convenience, the BOS GO SDK encapsulates the concurrent download interface DownloadSuperFile:
- Interface: DownloadSuperFile(bucket, object, fileName string) error
- Parameters:
  - bucket: name of the bucket to download from
  - object: name of the object to download
  - fileName: local file name to save the object as
- Return value:
  - error: error during the download; nil if the download succeeds.
This interface uses the concurrency control parameters to perform concurrent range downloads and writes the object directly to the specified file.
Other Methods
Get storage type of object
The storage class attribute of an object is one of STANDARD (standard storage), STANDARD_IA (infrequent access storage), and COLD (cold storage); it can be retrieved via the following code:
res, err := bosClient.GetObjectMeta(BucketName, ObjectName)
fmt.Println(res.StorageClass)
Obtain object metadata only
With GetObjectMeta, you can obtain only the object's metadata, not the object body itself, as shown in the following code:
res, err := bosClient.GetObjectMeta(BucketName, ObjectName)
fmt.Printf("Metadata: %+v\n", res)
Get File Download URL
The user can get the URL of the specified object by the following code:
// 1.Original interface, set bucket, object name, expiration time, request method, request header and request parameter
url := bosClient.GeneratePresignedUrl(BucketName, ObjectName,
expirationInSeconds, method, headers, params)
// 2.Basic interface, `GET` method by default, and it is only needed to set the expiration time.
url := bosClient.BasicGeneratePresignedUrl(BucketName, ObjectName, expirationInSeconds)
Note:
- Before calling this function, the user needs to manually set the endpoint to the domain name of the region. Baidu AI Cloud currently provides multi-region support; please refer to Region Selection Instructions. Currently, "North China-Beijing", "South China-Guangzhou" and "East China-Suzhou" are supported. Beijing: http://bj.bcebos.com; Guangzhou: http://gz.bcebos.com; Suzhou: http://su.bcebos.com.
- expirationInSeconds is the validity period of the generated URL, counted from the current time. It is optional and defaults to 1,800 seconds. To generate a URL that never expires, set expirationInSeconds to -1; no other negative value is allowed.
- If the target file is publicly readable, its URL can be obtained by simple splicing: http://{$BucketName}.{$region}.bcebos.com/{$ObjectName}.
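The splicing rule for publicly readable objects can be captured in a small helper; `publicURL` below is an illustration of the pattern quoted above, not part of the SDK:

```go
package main

import "fmt"

// publicURL splices the download URL of a publicly readable object following
// the rule http://{BucketName}.{region}.bcebos.com/{ObjectName}.
func publicURL(bucketName, region, objectName string) string {
	return fmt.Sprintf("http://%s.%s.bcebos.com/%s", bucketName, region, objectName)
}

func main() {
	fmt.Println(publicURL("my-bucket", "bj", "docs/readme.txt"))
	// prints "http://my-bucket.bj.bcebos.com/docs/readme.txt"
}
```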
Enumerate Files in Storage Space
The BOS GO SDK supports listing objects in the following two ways:
- Simple enumeration
- Complex enumeration by parameters
In addition, you can simulate folders while listing files.
Simple Enumeration
When users want to list files simply and quickly, they can call the ListObjects method, which returns a ListObjectsResult object containing the result of the request. Descriptive information for each object can be obtained from the Contents field of ListObjectsResult.
listobjectResult, err := bosClient.ListObjects(BucketName, nil)
// Print the status result of the current ListObjects request
fmt.Println("Name:", listobjectResult.Name)
fmt.Println("Prefix:", listobjectResult.Prefix)
fmt.Println("Delimiter:", listobjectResult.Delimiter)
fmt.Println("Marker:", listobjectResult.Marker)
fmt.Println("NextMarker:", listobjectResult.NextMarker)
fmt.Println("MaxKeys:", listobjectResult.MaxKeys)
fmt.Println("IsTruncated:", listobjectResult.IsTruncated)
// Print the specific result of Contents field
for _, obj := range listobjectResult.Contents {
fmt.Println("Key:", obj.Key, ", ETag:", obj.ETag, ", Size:", obj.Size,
", LastModified:", obj.LastModified, ", StorageClass:", obj.StorageClass)
}
Note: 1. By default, if the bucket contains more than 1,000 objects, only 1,000 are returned; IsTruncated in the result is True, and NextMarker is returned as the starting point of the next read. 2. To retrieve more objects, use the marker parameter to read in batches.
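The marker-based pagination described in the note can be sketched as a loop. So that the example runs without credentials, `listPage` below is a local stand-in for the ListObjects call; the struct fields mirror the ListObjectsResult fields that drive pagination:

```go
package main

import "fmt"

// objectPage mirrors the ListObjectsResult fields used for pagination:
// Contents (keys), IsTruncated, and NextMarker.
type objectPage struct {
	Contents    []string
	IsTruncated bool
	NextMarker  string
}

// listPage is a local stand-in for bosClient.ListObjects: it returns at most
// maxKeys keys sorted after marker, the way the service does.
func listPage(all []string, marker string, maxKeys int) objectPage {
	var keys []string
	for _, k := range all {
		if k > marker {
			keys = append(keys, k)
		}
		if len(keys) == maxKeys {
			break
		}
	}
	truncated := len(keys) == maxKeys && keys[len(keys)-1] < all[len(all)-1]
	p := objectPage{Contents: keys, IsTruncated: truncated}
	if truncated {
		p.NextMarker = keys[len(keys)-1]
	}
	return p
}

func main() {
	all := []string{"a/1", "a/2", "b/1", "b/2", "c/1"} // already sorted
	var keys []string
	marker := ""
	for { // keep reading until IsTruncated is false
		p := listPage(all, marker, 2)
		keys = append(keys, p.Contents...)
		if !p.IsTruncated {
			break
		}
		marker = p.NextMarker
	}
	fmt.Println(keys) // prints "[a/1 a/2 b/1 b/2 c/1]"
}
```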
Complex Enumeration by Parameters
In addition to simple listing, users can implement various query functions by setting ListObjectsArgs parameters. The parameters that can be set in ListObjectsArgs are as follows:
Parameter | Function |
---|---|
Prefix | Only object keys with the specified prefix are returned. |
Delimiter | A character used to group object keys. Keys that share the same string between the specified Prefix and the first occurrence of the Delimiter are grouped into a single CommonPrefixes element. |
Marker | Results start from the first key sorted alphabetically after the marker. |
MaxKeys | Limits the maximum number of objects returned; defaults to 1,000 if not set, and cannot exceed 1,000. |
Note: 1. If an object is named with the Prefix, when only the Prefix is used for the query, the returned keys still include the object named with the Prefix; see Recursively list all files in the directory. 2. If an object is named with the Prefix, when the combination of Prefix and Delimiter is used for the query, the returned Keys contain Null and the key names do not contain the Prefix; see View files and subdirectories under the directory.
Next, several examples illustrate enumeration with parameters:
// import "github.com/baidubce/bce-sdk-go/services/bos/api"
args := new(api.ListObjectsArgs)
// Specify the maximum number of keys returned as 500
args.MaxKeys = 500
// Specify a prefix that returned keys must match
args.Prefix = "my-prefix/"
// Specify a delimiter to simulate folders
args.Delimiter = "/"
// Return results sorted after the specified object
args.Marker = "bucket/object-0"
listobjectResult, err := bosClient.ListObjects(BucketName, args)
Simulate Folder Feature
There is no concept of a folder in BOS storage; everything is stored as an object. However, users often need folders to manage files. BOS therefore provides the ability to create simulated folders, which essentially means creating an object of size 0. This object can be uploaded and downloaded, but the console displays any object whose name ends with "/" as a folder.
You can simulate the folder function through the combination of Delimiter and Prefix parameters. The combination of Delimiter and Prefix works like this:
If Prefix is set to a folder name, the files beginning with that Prefix are listed, i.e. all files and subfolders (directories) under the folder recursively; the file names appear in Contents. If Delimiter is additionally set to "/", the return value lists only the files and subfolders (directories) directly under that folder: the names of the subdirectories are returned in the CommonPrefixes section, and the files and folders nested under those subdirectories are not shown.
Here are some application modes:
List all files in the bucket
To obtain all the files under a bucket, refer to the following code:
// import "github.com/baidubce/bce-sdk-go/services/bos/api"
args := new(api.ListObjectsArgs)
args.Delimiter = "/"
listobjectResult, err := bosClient.ListObjects(BucketName, args)
Recursively list all files in the directory
Set the Prefix parameter to list all the files under a directory:
// import "github.com/baidubce/bce-sdk-go/services/bos/api"
args := new(api.ListObjectsArgs)
args.Prefix = "fun/"
args.MaxKeys = 1000
listobjectResult, err := bosClient.ListObjects(BucketName, args)
fmt.Println("objects:")
for _, obj := range listobjectResult.Contents {
fmt.Println(obj.Key)
}
Output:
objects:
fun/
fun/movie/001.avi
fun/movie/007.avi
fun/test.jpg
View files and subdirectories under the directory
The files and subdirectories under a directory can be listed with the combination of Prefix and Delimiter:
// import "github.com/baidubce/bce-sdk-go/services/bos/api"
args := new(api.ListObjectsArgs)
args.Delimiter = "/"
args.Prefix = "fun/"
listobjectResult, err := bosClient.ListObjects(BucketName, args)
// Traverse all objects (files directly under the directory)
fmt.Println("objects:")
for _, obj := range listobjectResult.Contents {
fmt.Println(obj.Key)
}
// Traverse all CommonPrefixes (subdirectories)
fmt.Println("CommonPrefixes:")
for _, obj := range listobjectResult.CommonPrefixes {
fmt.Println(obj.Prefix)
}
Output:
objects:
fun/
fun/test.jpg
CommonPrefixes:
fun/movie/
In the returned result, the Contents list gives the files under the fun directory, and the CommonPrefixes list gives all the subfolders under the fun directory. The fun/movie/001.avi and fun/movie/007.avi files are not listed because they belong to the movie subdirectory of the fun folder.
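The grouping behaviour explained above can be reproduced locally. The sketch below derives Contents and CommonPrefixes from a key list with Prefix "fun/" and Delimiter "/"; `group` is a hypothetical helper for illustration, not an SDK function:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// group reproduces the Prefix/Delimiter behaviour locally: keys whose
// remainder after the prefix contains the delimiter are rolled up into
// CommonPrefixes; the rest are returned as Contents.
func group(keys []string, prefix, delimiter string) (contents, commonPrefixes []string) {
	seen := map[string]bool{}
	for _, k := range keys {
		if !strings.HasPrefix(k, prefix) {
			continue
		}
		rest := k[len(prefix):]
		if i := strings.Index(rest, delimiter); i >= 0 {
			cp := prefix + rest[:i+1] // keep the delimiter, e.g. "fun/movie/"
			if !seen[cp] {
				seen[cp] = true
				commonPrefixes = append(commonPrefixes, cp)
			}
		} else {
			contents = append(contents, k)
		}
	}
	sort.Strings(commonPrefixes)
	return contents, commonPrefixes
}

func main() {
	keys := []string{"fun/", "fun/movie/001.avi", "fun/movie/007.avi", "fun/test.jpg"}
	contents, cps := group(keys, "fun/", "/")
	fmt.Println(contents) // prints "[fun/ fun/test.jpg]"
	fmt.Println(cps)      // prints "[fun/movie/]"
}
```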
List the Storage Attributes of Objects in Bucket
After uploading, if you need to view the storage class attribute of all objects in the specified bucket, you can use the following code:
listobjectResult, err := bosClient.ListObjects(BucketName, args)
for _, obj := range listobjectResult.Contents {
fmt.Println("Key:", obj.Key)
fmt.Println("LastModified:", obj.LastModified)
fmt.Println("ETag:", obj.ETag)
fmt.Println("Size:", obj.Size)
fmt.Println("StorageClass:", obj.StorageClass)
fmt.Println("Owner:", obj.Owner.Id, obj.Owner.DisplayName)
}
Privilege Control
Set the Access Privilege of Object
Currently, BOS supports two ways to set an ACL. The first is to use a canned ACL: during PutObjectAcl, the access permission is set via the "x-bce-acl" or "x-bce-grant-{permission}" header field; the permissions currently available are private and public-read, and the two types of headers cannot appear in the same request. The second is to upload an ACL file. For details, please see Set Object Privilege Control.
Set Canned ACL
A canned ACL is a predefined access permission that users can select and set on an object; three interfaces are supported:
// 1.Set with x-bce-acl Header
err := bosClient.PutObjectAclFromCanned(bucket, object, cannedAcl) //Values of cannedAcl can be: private, public-read
// 2.Set with x-bce-grant-{privilege} Header
err1 := bosClient.PutObjectAclGrantRead(bucket, object, userId)
err2 := bosClient.PutObjectAclGrantFullControl(bucket, object, userId)
// userId identifies the authorized user; these methods take variadic parameters, so multiple user IDs can be passed
Set Custom ACL
You can set custom access permissions for an object in a bucket by referring to the following code; four different ways are shown:
// import "github.com/baidubce/bce-sdk-go/bce"
// import "github.com/baidubce/bce-sdk-go/services/bos/api"
// 1.Upload ACL file stream directly
aclBodyStream, _ := bce.NewBodyFromFile("<path-to-acl-file>")
err := bosClient.PutObjectAcl(bucket, object, aclBodyStream)
// 2.Use an ACL json string directly (the user ID and the granted permission are specified in the JSON)
aclString := `{
"accessControlList":[
{
"grantee":[{
"id":"e13b12d0131b4c8bae959df4969387b8"
}],
"privilege":["FULL_CONTROL"]
}
]
}`
err := bosClient.PutObjectAclFromString(bucket, object, aclString)
// 3.Use ACL file
err := bosClient.PutObjectAclFromFile(bucket, object, "<acl-file-name>")
// 4.Use an ACL struct object
grantUser1 := api.GranteeType{"<user-id-1>"}
grantUser2 := api.GranteeType{"<user-id-2>"}
grant1 := api.GrantType{
Grantee: []api.GranteeType{grantUser1},
Permission: []string{"FULL_CONTROL"},
}
grant2 := api.GrantType{
Grantee: []api.GranteeType{grantUser2},
Permission: []string{"READ"},
}
grantArr := make([]api.GrantType, 0)
grantArr = append(grantArr, grant1)
grantArr = append(grantArr, grant2)
args := &api.PutObjectAclArgs{grantArr}
err := bosClient.PutObjectAclFromStruct(BucketName, object, args)
Get the Access Permission of Object
The following code obtains the access permission of an object:
result, err := bosClient.GetObjectAcl(BucketName, object)
The fields of the returned result object contain the details of the access permission, defined as follows:
type GetObjectAclResult struct {
AccessControlList []struct{
Grantee []struct{
Id string
}
Permission []string
}
}
Delete the Access Permission of Object
For an object with an access privilege set, this interface can be called to delete it:
err := bosClient.DeleteObjectAcl(BucketName, object)
Delete File
Delete a single file
You can refer to the following code to delete an object:
// Specify the name of object to be deleted and name of bucket
err := bosClient.DeleteObject(BucketName, ObjectName)
Delete multiple files
You can also delete multiple files under the same bucket in a single call, with the following parameters:
Parameter name | Description | Parent node |
---|---|---|
objects | Container holding the information of the objects to be deleted; contains one or more key elements. | - |
+key | Name of an object to be deleted | objects |
The specific examples are as follows:
// import "github.com/baidubce/bce-sdk-go/services/bos/api"
// 1.Original interface: provide the object list as a stream
res, err := bosClient.DeleteMultipleObjects(bucket, objectListStream)
// 2.Delete with a json string
objectList := `{
"objects":[
{"key": "aaa"},
{"key": "bbb"}
]
}`
res, err := bosClient.DeleteMultipleObjectsFromString(bucket, objectList)
// 3.Delete with a list of object structs
deleteObjectList := make([]api.DeleteObjectArgs, 0)
deleteObjectList = append(deleteObjectList, api.DeleteObjectArgs{"aaa"})
deleteObjectList = append(deleteObjectList, api.DeleteObjectArgs{"bbb"})
multiDeleteObj := &api.DeleteMultipleObjectsArgs{deleteObjectList}
res, err := bosClient.DeleteMultipleObjectsFromStruct(bucket, multiDeleteObj)
// 4.Delete with a plain list of object names
deleteObjects := []string{"aaa", "bbb"}
res, err := bosClient.DeleteMultipleObjectsFromKeyList(bucket, deleteObjects)
Note:
When multiple objects are deleted at one time, the returned result contains the names of the objects that were not deleted successfully. When only some objects are deleted successfully, `res` contains the list of names that failed to delete and `err` is `nil`; when all objects are deleted successfully, `err` is `io.EOF` and `res` is `nil`.
Check if the File Exists
You can check whether a file exists through the following operations:
// import "github.com/baidubce/bce-sdk-go/bce"
_, err := bosClient.GetObjectMeta(BucketName, ObjectName)
if realErr, ok := err.(*bce.BceServiceError); ok {
    if realErr.StatusCode == 404 {
        fmt.Println("object does not exist")
        return
    }
}
fmt.Println("object exists")
Get and Update File Meta-information
Object metadata is the attribute description of files uploaded by users to BOS. It includes two types: HTTP standard attribute (HTTP Headers) and custom meta-information.
Get File Meta-information
With GetObjectMeta, you obtain only the object metadata, not the object entity, as shown in the following code:
res, err := bosClient.GetObjectMeta(BucketName, ObjectName)
fmt.Printf("Metadata: %+v\n", res)
Modify Object Metadata
BOS modifies an object's metadata by copying the object onto itself: set the destination bucket to the source bucket, set the destination object to the source object, and supply the new metadata, which is applied during the copy. If no metadata is set, an error is reported. In this mode, the copy mode must be "replace" (the default is "copy"). The following is an example:
// import "github.com/baidubce/bce-sdk-go/services/bos/api"
args := new(api.CopyObjectArgs)
// The copy mode must be set to "replace"; the default "copy" cannot modify metadata.
args.MetadataDirective = "replace"
// Set the metadata parameter values; see the official website for the specific fields.
args.LastModified = "Wed, 29 Nov 2017 13:18:08 GMT"
args.ContentType = "text/json"
// Modify metadata with the CopyObject interface; the source object is the same as the destination object.
res, err := bosClient.CopyObject(bucket, object, bucket, object, args)
Copy File
Copy a single file
You can copy an object through the CopyObject function, as shown in the following code:
// 1.Original interface; copy parameters can be set
res, err := bosClient.CopyObject(BucketName, ObjectName, srcBucket, srcObject, nil)
// 2.Omit the copy parameters and use the defaults
res, err := bosClient.BasicCopyObject(BucketName, ObjectName, srcBucket, srcObject)
fmt.Println("ETag:", res.ETag, "LastModified:", res.LastModified)
// 3.Automatic three-step copy; copy parameters can be set
res, err := bosClient.ParallelCopy(srcBucket, srcObject, BucketName, ObjectName, nil)
The result object returned by the interfaces above contains the ETag and the last modification time (LastModified) of the new object.
Set Copy Parameter to Copy Object
// import "github.com/baidubce/bce-sdk-go/services/bos/api"
args := new(api.CopyObjectArgs)
// Set custom metadata.
args.UserMeta = map[string]string{"<user-meta-key>": "<user-meta-value>"}
res, err := bosClient.CopyObject(BucketName, ObjectName, srcBucket, srcObject, args)
fmt.Println("ETag:", res.ETag, "LastModified:", res.LastModified)
Set the Copy attribute of object
When executing a copy, users can check the ETag or modification time of the source object and decide whether to perform the copy according to the result. The parameters are detailed below:
Name | Type | Description | Required or not |
---|---|---|---|
x-bce-copy-source-if-match | String | If ETag value of source object is equal to ETag provided by the user, copy operation is performed, otherwise the copy fails. | No |
x-bce-copy-source-if-none-match | String | If the ETag value of the source object is not equal to the ETag provided by the user, the copy operation is performed, otherwise the copy fails. | No |
x-bce-copy-source-if-unmodified-since | String | If source object is not modified after x-bce-copy-source-if-unmodified-since, copy operation is performed, otherwise copy fails. | No |
x-bce-copy-source-if-modified-since | String | If source object is modified after x-bce-copy-source-if-modified-since, copy operation is performed, otherwise copy fails. | No |
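The two ETag conditions behave like standard HTTP preconditions. As a local sketch of the server-side decision (the function and parameter names are illustrative, not part of the SDK; empty strings mean the header is not set):

```go
package main

import "fmt"

// shouldCopy mirrors the ETag preconditions in the table above.
func shouldCopy(srcETag, ifMatch, ifNoneMatch string) bool {
	if ifMatch != "" && srcETag != ifMatch {
		return false // if-match requires equal ETags
	}
	if ifNoneMatch != "" && srcETag == ifNoneMatch {
		return false // if-none-match requires different ETags
	}
	return true
}

func main() {
	fmt.Println(shouldCopy("abc", "abc", "")) // true: ETag matches if-match
	fmt.Println(shouldCopy("abc", "", "abc")) // false: ETag equals if-none-match
}
```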
Corresponding sample code:
// import "github.com/baidubce/bce-sdk-go/services/bos/api"
args := new(api.CopyObjectArgs)
// Set custom Metadata.
args.UserMeta = map[string]string{"<user-meta-key>": "<user-meta-value>"}
// Set copy-source-if-match
args.IfMatch = "111111111183bf192b57a4afc76fa632"
// Set copy-source-if-none-match
args.IfNoneMatch = "111111111183bf192b57a4afc76fa632"
// Set copy-source-if-modified-since
args.IfModifiedSince = "Fri, 16 Mar 2018 17:07:21 GMT"
// Set copy-source-if-unmodified-since
args.IfUnmodifiedSince = "Fri, 16 Mar 2018 17:07:21 GMT"
res, err := bosClient.CopyObject(BucketName, ObjectName, srcBucket, srcObject, args)
fmt.Println("ETag:", res.ETag, "LastModified:", res.LastModified)
Multipart Copy
In addition to copying files through the CopyObject interface, BOS provides another copy mode, ParallelCopy. You can use ParallelCopy in application scenarios such as (but not limited to) the following:
- Resumable (breakpoint) copy is required.
- The file to copy is larger than 5 GB.
- The network conditions are poor, and the connection with BOS servers is often disconnected.
Sample code:
// Automatic three-step copy; copy parameters can be set
res, err := bosClient.ParallelCopy(srcBucket, srcObject, bucketName, objectName, nil)
fmt.Println("ETag:", res.ETag, "LastModified:", res.LastModified)
Synchronous Copy
The current CopyObject interface of BOS is implemented synchronously: the BOS server returns success only after the copy completes. Synchronous copy lets users determine the copy status directly, but the copy time perceived by users is longer and proportional to the file size.
Synchronous copy is more in line with industry conventions and improves compatibility with other platforms. It also simplifies the business logic of the BOS server and improves service efficiency.
Archive Type
Archive Upload
During upload, set StorageClass to api.STORAGE_CLASS_ARCHIVE:
// import "github.com/baidubce/bce-sdk-go/services/bos/api"
args := api.PutObjectArgs{
    StorageClass: api.STORAGE_CLASS_ARCHIVE,
}
etag, err := bosClient.PutObjectFromFile(bucket, object, filename, &args)
Thaw
Use the newly encapsulated API RestoreObject:
// import "github.com/baidubce/bce-sdk-go/services/bos/api"
err := bosClient.RestoreObject(bucket, object, restoreDays, api.RESTORE_TIER_STANDARD)
Select Files
The SelectObject interface lets users run SQL statements against the content of objects in a specified format (CSV/JSON) stored in BOS. It screens, analyzes and filters the object content through the SQL structured query language and returns only the required content to the user. Sample code:
Select CSV
// Select parameters for CSV files
csvArgs := &api.SelectObjectArgs{
SelectType: "csv",
SelectRequest: &api.SelectObjectRequest{
// base64-encoded SQL: select * from BosObject where cast(_1 AS int) * cast(_2 AS int) > cast(_3 AS float) + 1
Expression: "c2VsZWN0ICogZnJvbSBCb3NPYmplY3Qgd2hlcmUgY2FzdChfMSBBUyBpbnQpICogY2FzdChfMiBBUyBpbnQpID4gY2FzdChfMyBBUyBmbG9hdCkgKyAx",
ExpressionType: "SQL",
InputSerialization: &api.SelectObjectInput{
CompressionType: "NONE",
CsvParams: map[string]string{
"fileHeaderInfo": "IGNORE",
"recordDelimiter": "Cg==",
"fieldDelimiter": "LA==",
"quoteCharacter": "Ig==",
"commentCharacter": "Iw==",
},
},
OutputSerialization: &api.SelectObjectOutput{
OutputHeader: false,
CsvParams: map[string]string{
"quoteFields": "ALWAYS",
"recordDelimiter": "Cg==",
"fieldDelimiter": "LA==",
"quoteCharacter": "Ig==",
},
},
RequestProgress: &api.SelectObjectProgress{
Enabled: true,
},
},
}
csvRes, err := bosClient.SelectObject(bucket, csvObject, csvArgs)
if err != nil {
fmt.Println(err)
return
}
Select JSON
// Select parameters for JSON files
jsonArgs := &api.SelectObjectArgs{
SelectType: "json",
SelectRequest: &api.SelectObjectRequest{
// base64-encoded SQL: select * from BosObject.projects[*].project_name
Expression: "c2VsZWN0ICogZnJvbSBCb3NPYmplY3QucHJvamVjdHNbKl0ucHJvamVjdF9uYW1l",
ExpressionType: "SQL",
InputSerialization: &api.SelectObjectInput{
CompressionType: "NONE",
JsonParams: map[string]string{
"type": "LINES",
},
},
OutputSerialization: &api.SelectObjectOutput{
JsonParams: map[string]string{
"recordDelimiter": "Cg==",
},
},
RequestProgress: &api.SelectObjectProgress{
Enabled: true,
},
},
}
jsonRes, err := bosClient.SelectObject(bucket, jsonObject, jsonArgs)
if err != nil {
fmt.Println(err)
return
}
Parsing the Results
The response returned by SelectObject uses a fixed-structure encoding. Refer to the SelectObject code example for the specific parsing method.