Upload Object
In BOS, the fundamental data unit for user operations is the object. Although there is no limit to the number of objects in a bucket, each object can store a maximum of 5 TB of data.
An object consists of three components: a key, metadata, and data. Specifically:
- Key: the name of the object;
- Meta: the user's description of the object, consisting of a series of Name-Value pairs;
- Data: the content of the object.
The BOS JavaScript SDK provides a rich set of file upload APIs, and files can be uploaded in the following ways:
- Simple upload
- Append upload
- Multipart upload
- Resumable upload
The naming rules for objects are as follows (a small validation sketch follows the list):
- Use UTF-8 for encoding.
- The length must range from 1 to 1023 bytes.
- The first character cannot be '/', and the '@' character is not allowed, as '@' is reserved for use in image processing APIs.
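To make these rules concrete, here is a minimal sketch of a client-side key check before uploading. The `isValidObjectKey` helper is hypothetical, not part of the SDK; `Buffer.byteLength` assumes a Node.js environment (in a browser, `new TextEncoder().encode(key).length` gives the same byte count).

```javascript
// Hypothetical helper: pre-validates an object key against the naming rules above
function isValidObjectKey(key) {
    const byteLength = Buffer.byteLength(key, 'utf8'); // length is measured in UTF-8 bytes
    if (byteLength < 1 || byteLength > 1023) {
        return false; // must be 1 to 1023 bytes
    }
    if (key.startsWith('/')) {
        return false; // the first character cannot be '/'
    }
    if (key.includes('@')) {
        return false; // '@' is reserved for image processing APIs
    }
    return true;
}

console.log(isValidObjectKey('photos/2024/cat.jpg')); // true
console.log(isValidObjectKey('/photos/cat.jpg'));     // false: leading '/'
console.log(isValidObjectKey('cat@2x.jpg'));          // false: contains '@'
```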
Simple upload
In simple upload scenarios, the JS SDK supports uploading objects in the form of data streams, strings, specified files (only supported in the Node.js environment), and blob objects (only supported in the browser environment). They correspond to the putObject, putObjectFromString, putObjectFromFile, and putObjectFromBlob methods respectively.
Basic workflow
- Create a BosClient instance.
- Call the appropriate putObject() variant.
Example code
```javascript
function done(response) {
    // Upload completed
}
function fail(fail) {
    // Upload failed
}

// Upload in buffer form
var buffer = new Buffer('hello world');
client.putObject(bucket, object, buffer)
    .then(done)
    .catch(fail);

// Upload in string form
client.putObjectFromString(bucket, object, 'hello world')
    .then(done)
    .catch(fail);

// Upload in file form, only supported in the Node.js environment
client.putObjectFromFile(bucket, object, <path-to-file>)
    .then(done)
    .catch(fail);

// Upload in blob object form, only supported in the browser environment
client.putObjectFromBlob(bucket, object, <blob object>)
    .then(done)
    .catch(fail);
```
> **Note:** Objects are uploaded to BOS as files. The putObject function supports uploading objects no larger than 5 GB. After a putObject request succeeds, BOS returns the ETag of the object in the response header as the file identifier.
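For example, the ETag can be read once the promise resolves. A minimal sketch, assuming the same `response.http_headers` shape used by the multipart examples later in this section:

```javascript
client.putObjectFromString(bucket, object, 'hello world')
    .then(function (response) {
        // The ETag returned by BOS identifies the uploaded file
        console.log('ETag: ' + response.http_headers.etag);
    })
    .catch(fail);
```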
Append upload
Objects created with the simple upload methods above are standard objects and do not support append writes; the whole object must be re-uploaded to add data. This is inconvenient in scenarios where data grows continuously, such as log files, video surveillance, and live video streaming.
To address this, Baidu AI Cloud Object Storage (BOS) specifically supports the AppendObject method, which allows files to be uploaded in an append-write fashion. Objects created through the AppendObject operation are categorized as Appendable Objects, enabling data to be appended to them. The size limit for AppendObject files is 0–5 GB.
```javascript
let bucketName = "yourbucket";
let appendKey = "appendObjectKey";

// When uploading for the first time, set offset to null
client.appendObjectFromString(bucketName, appendKey, "firstContent", null)
    .then(function (response) {
        // Get the next offset from the response header
        var offset = +response.http_headers['x-bce-next-append-offset'];
        // For the second append, pass in the offset obtained above
        client.appendObjectFromString(bucketName, appendKey, "appendContent", offset);
    });
```
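Building on this, a log-style writer only needs to thread the `x-bce-next-append-offset` value from one call into the next. A minimal sketch of sequential appends, not official SDK sample code:

```javascript
// Appends each chunk in order, passing the offset returned by the previous call
function appendAll(client, bucketName, key, chunks) {
    return chunks.reduce(function (prev, chunk) {
        return prev.then(function (offset) {
            return client.appendObjectFromString(bucketName, key, chunk, offset)
                .then(function (response) {
                    // The header tells us where the next append must start
                    return +response.http_headers['x-bce-next-append-offset'];
                });
        });
    }, Promise.resolve(null)); // the first append uses a null offset
}

appendAll(client, 'yourbucket', 'app.log', ['line1\n', 'line2\n', 'line3\n'])
    .then(function (finalOffset) {
        console.log('appended up to byte ' + finalOffset);
    });
```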
Multipart upload
Besides the putObject() method, BOS supports another upload mode called Multipart Upload. Typical scenarios for it include:
- When resumable uploads are required.
- When uploading files larger than 5GB.
- When the connection to the BOS server is frequently interrupted due to unstable network conditions.
- When streaming uploads are needed.
- When the file size cannot be determined before uploading.
Multipart upload is slightly more complex than direct upload and proceeds in three stages:
- Initiate upload (initiateMultipartUpload)
- Upload parts (uploadPartFromBlob in the browser, uploadPartFromFile in Node.js)
- Complete upload (completeMultipartUpload)
Browser-side code example
Divide the file into parts
```javascript
let options = {
    'Content-Type': 'application/json', // Add http header
    'Cache-Control': 'public, max-age=31536000', // Specify cache directives
    'Content-Disposition': 'attachment; filename="example.jpg"', // Indicate how the response content should be displayed
    'x-bce-meta-foo1': 'bar1', // Add custom meta information
    'x-bce-meta-foo2': 'bar2',
    'x-bce-meta-foo3': 'bar3'
};

let PART_SIZE = 5 * 1024 * 1024; // Specify the part size

function getTasks(file, uploadId, bucketName, key) {
    let leftSize = file.size;
    let offset = 0;
    let partNumber = 1;
    let tasks = [];
    while (leftSize > 0) {
        let partSize = Math.min(leftSize, PART_SIZE);
        tasks.push({
            file: file,
            uploadId: uploadId,
            bucketName: bucketName,
            key: key,
            partNumber: partNumber,
            partSize: partSize,
            start: offset,
            stop: offset + partSize - 1
        });
        leftSize -= partSize;
        offset += partSize;
        partNumber += 1;
    }
    return tasks;
}
```
Handle the upload logic for each part
```javascript
function uploadPartFile(state, client) {
    return function (task, callback) {
        // Slice the corresponding byte range out of the file
        let blob = task.file.slice(task.start, task.stop + 1);
        client.uploadPartFromBlob(task.bucketName, task.key, task.uploadId, task.partNumber, task.partSize, blob)
            .then(function (res) {
                ++state.loaded;
                callback(null, res);
            })
            .catch(function (err) {
                callback(err);
            });
    };
}
```
Initialize the uploadId, upload the parts, and complete the upload
```javascript
let uploadId;
client.initiateMultipartUpload(bucket, key, options)
    .then(function (response) {
        // Start the upload and get the server-generated uploadId
        uploadId = response.body.uploadId;
        let deferred = sdk.Q.defer();
        let tasks = getTasks(blob, uploadId, bucket, key);
        let state = {
            lengthComputable: true,
            loaded: 0,
            total: tasks.length
        };
        // To manage multipart uploads, the async library (https://github.com/caolan/async) is used for asynchronous processing
        let THREADS = 2; // Number of parts uploaded simultaneously
        async.mapLimit(tasks, THREADS, uploadPartFile(state, client), function (err, results) {
            if (err) {
                deferred.reject(err);
            } else {
                deferred.resolve(results);
            }
        });
        return deferred.promise;
    })
    .then(function (allResponse) {
        let partList = [];
        allResponse.forEach(function (response, index) {
            // Generate the part list
            partList.push({
                partNumber: index + 1,
                eTag: response.http_headers.etag
            });
        });
        // Complete the upload
        return client.completeMultipartUpload(bucket, key, uploadId, partList);
    })
    .then(function (res) {
        // Upload completed
    })
    .catch(function (err) {
        // Upload failed, add your code
        console.error(err);
    });
```
Node.js-side code example
Divide the file into parts, initialize the uploadId, and upload the parts
```javascript
// Prerequisites: the SDK, fs, and the async library (https://github.com/caolan/async),
// plus an initialized BosClient instance `client`
const sdk = require('@baiducloud/sdk');
const fs = require('fs');
const async = require('async');

let options = {
    'Content-Type': 'application/json', // Add http header
    'Cache-Control': 'public, max-age=31536000', // Specify cache directives
    'Content-Disposition': 'attachment; filename="example.jpg"', // Indicate how the response content should be displayed
    'x-bce-meta-foo1': 'bar1', // Add custom meta information
    'x-bce-meta-foo2': 'bar2',
    'x-bce-meta-foo3': 'bar3'
};

let PART_SIZE = 5 * 1024 * 1024; // Specify the part size
let uploadId;
client.initiateMultipartUpload(bucket, key, options)
    .then(function (response) {
        // Start the upload and get the server-generated uploadId
        uploadId = response.body.uploadId;
        let deferred = sdk.Q.defer();
        let blob = {
            // Use the fs module to get the file size
            size: fs.statSync(localFileName).size,
            filename: localFileName
        };
        let tasks = getTasks(blob, uploadId, bucket, key);
        let state = {
            lengthComputable: true,
            loaded: 0,
            total: tasks.length
        };
        let THREADS = 2; // Number of parts uploaded simultaneously
        async.mapLimit(tasks, THREADS, uploadPartFile(state, client), function (err, results) {
            if (err) {
                deferred.reject(err);
            } else {
                deferred.resolve(results);
            }
        });
        return deferred.promise;
    })
    .then(function (allResponse) {
        let partList = [];
        allResponse.forEach(function (response, index) {
            // Generate the part list
            partList.push({
                partNumber: index + 1,
                eTag: response.http_headers.etag
            });
        });
        // Complete the upload
        return client.completeMultipartUpload(bucket, key, uploadId, partList);
    })
    .then(function (res) {
        // Upload completed
    })
    .catch(function (err) {
        // Upload failed, add your code
        console.error(err);
    });

function getTasks(file, uploadId, bucketName, key) {
    let leftSize = file.size;
    let offset = 0;
    let partNumber = 1;
    let tasks = [];
    while (leftSize > 0) {
        let partSize = Math.min(leftSize, PART_SIZE);
        tasks.push({
            file: file.filename,
            uploadId: uploadId,
            bucketName: bucketName,
            key: key,
            partNumber: partNumber,
            partSize: partSize,
            start: offset,
            stop: offset + partSize - 1
        });
        leftSize -= partSize;
        offset += partSize;
        partNumber += 1;
    }
    return tasks;
}

function uploadPartFile(state, client) {
    return function (task, callback) {
        return client.uploadPartFromFile(task.bucketName, task.key, task.uploadId, task.partNumber, task.partSize, task.file, task.start)
            .then(function (res) {
                ++state.loaded;
                callback(null, res);
            })
            .catch(function (err) {
                callback(err);
            });
    };
}
```
Cancel a multipart upload
Users can cancel a multipart upload with the abortMultipartUpload method.
```javascript
client.abortMultipartUpload(<bucketName>, <objectKey>, <uploadId>);
```
Retrieve unfinished multipart uploads
Users can use the listMultipartUploads method to retrieve the multipart uploads in progress within a bucket.
```javascript
client.listMultipartUploads(<bucketName>)
    .then(function (response) {
        // Traverse all unfinished uploads
        for (var i = 0; i < response.body.multipartUploads.length; i++) {
            console.log(response.body.multipartUploads[i].uploadId);
        }
    });
```
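Combined with abortMultipartUpload, this makes a simple cleanup routine for abandoned uploads. A minimal sketch, assuming every listed upload should be discarded and that each entry exposes a `key` field alongside the `uploadId` shown above:

```javascript
// Aborts every unfinished multipart upload in the bucket; use with care
client.listMultipartUploads(bucketName)
    .then(function (response) {
        var uploads = response.body.multipartUploads || [];
        return Promise.all(uploads.map(function (upload) {
            // Each abort needs the object key and the uploadId of the task
            return client.abortMultipartUpload(bucketName, upload.key, upload.uploadId);
        }));
    })
    .then(function () {
        console.log('All unfinished uploads aborted');
    });
```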
Get all uploaded part information
Users can use the listParts method to retrieve all parts that have already been uploaded within a multipart upload.
```javascript
client.listParts(<bucketName>, <key>, <uploadId>)
    .then(function (response) {
        // Traverse all uploaded parts
        for (var i = 0; i < response.body.parts.length; i++) {
            console.log(response.body.parts[i].partNumber);
        }
    });
```
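listParts also enables a manual resume: after an interruption, list the parts already stored for a saved uploadId and upload only the missing ones. A rough sketch, assuming the task objects produced by the getTasks helper from the multipart examples above:

```javascript
// Filters out tasks whose partNumber is already present on the server
function remainingTasks(client, bucketName, key, uploadId, allTasks) {
    return client.listParts(bucketName, key, uploadId)
        .then(function (response) {
            var done = {};
            response.body.parts.forEach(function (part) {
                done[part.partNumber] = true;
            });
            return allTasks.filter(function (task) {
                return !done[task.partNumber];
            });
        });
}

// Usage: feed the result into the same uploadPartFile pipeline, then completeMultipartUpload
```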
Resumable upload
When users upload large files to BOS, network instability or program crashes can cause the entire upload to fail, invalidating any parts uploaded before the failure. Users must restart the upload process, which wastes resources and often leads to repeated failures in unstable network conditions. To address this scenario, BOS offers resumable upload functionality, leveraging multipart upload capabilities. This method splits the file into multiple parts, uploads these parts separately, and once all parts are uploaded, combines them into a complete object.
putSuperObject
Supported in version 1.0.1-beta.2 and above.
The JavaScript SDK provides the putSuperObject method, a high-level encapsulation of the multipart upload APIs. It supports pausing, resuming, and canceling upload tasks, configuring the number of concurrently uploaded parts, and retrying on failure.
Request headers
No special request headers beyond common headers.
Initialization parameters
| Parameter name | Description | Type | Required | Default value | Example value |
|---|---|---|---|---|---|
| bucketName | Bucket name | string | Yes | - | "bucket001" |
| objectName | Object name after upload | string | Yes | - | "file001" |
| data | Upload data: when the type is string, it is treated as a file path; Buffer and Blob objects are also supported | string \| Buffer \| Blob | Yes | - | - |
| StorageClass | File storage class | "STANDARD" \| "STANDARD_IA" \| "COLD" \| "ARCHIVE" \| "MAZ_STANDARD" \| "MAZ_STANDARD_IA" | No | "STANDARD" | "STANDARD" |
| chunkSize | Size of each uploaded part, in bytes | number | No | 5 * 1024 ** 2 (5 MB) | 1048576 |
| partConcurrency | Number of parts uploaded concurrently | number | No | 5 | 5 |
| ContentLength | File size in bytes; calculated automatically if omitted | number | No | - | 1048576 |
| ContentType | File media type; generated automatically from the objectName field if omitted | string | No | - | "application/x-gtar" |
| uploadId | Multipart upload task ID; pass it in when resuming a task from a breakpoint | string | No | - | "a44cc9bab11cbd156984767aad637851" |
| onProgress | Upload progress callback function | ProgressCallback | No | - | - |
| onStateChange | Task status change callback function | StateChangeCallback | No | - | - |
ProgressCallback
```typescript
type ProgressCallback = (
    /* Current upload speed */
    speed: string,
    /* Upload progress, rounded to 4 decimal places */
    progress: number,
    /* Upload progress as a percentage */
    percent: string,
    /* Number of bytes uploaded */
    uploadedBytes: number,
    /* Total bytes of the file */
    totalBytes: number
) => void;
```
StateChangeCallback
```typescript
type StateChangeCallback = (
    /* Status */
    state: string,
    /* On failure, carries the failure message; on success, carries the address of the uploaded object */
    options: {
        message: string;
        data: Record<string, any> | null;
    }
) => void;
```
Status
"inited": Task initialization completed"running": Task queue is running"paused": Task queue paused"completed": Task completed, upload ended"cancelled": Task cancelled, upload ended"failed": Task abnormal, upload ended
Node.js-side code example
```javascript
const sdk = require('@baiducloud/sdk');
const client = new sdk.BosClient({
    endpoint: 'http://bj.bcebos.com',
    credentials: {
        ak: "<Your Access Key>",
        sk: "<Your Secret Key>"
    }
});

// Bucket name
const bucketName = "<Your Bucket Name>";
// Name of the file after upload
const objectName = 'demo.tgz';
// Local file path
const data = '/Mock/path/to/local/file/demo.tgz';

// Initialize the upload task
const SuperUploadTask = client.putSuperObject({
    bucketName,
    objectName,
    // Upload data: when the type is string, it represents the file path
    data,
    // Number of concurrently uploaded parts
    partConcurrency: 2,
    // Upload progress callback function
    onProgress: (options) => {
        const {speed, progress, percent, uploadedBytes, totalBytes} = options;
        console.log(options);
    },
    // Status change callback function
    onStateChange: (state, data) => {
        if (state === 'completed') {
            console.log('Upload successful');
        } else if (state === 'failed') {
            console.error('Upload failed, reason: ' + data.message);
        } else if (state === 'cancelled') {
            console.log('Upload task cancelled');
        } else if (state === 'inited') {
            console.log('Upload task initialization completed');
        } else if (state === 'running') {
            console.log('Upload task starts running...');
        } else if (state === 'paused') {
            console.log('Upload task paused');
        }
    }
});

// Start the upload task
const tasks = SuperUploadTask.start();
console.log('Split tasks: ', tasks);

// Pause the upload task
setTimeout(() => {
    SuperUploadTask.pause();
}, 5000);

// Resume the upload task
setTimeout(() => {
    SuperUploadTask.resume();
}, 15000);

// Cancel the upload task
setTimeout(async () => {
    const result = SuperUploadTask.cancel();
    console.log(result ? 'Task cancelled successfully' : 'Failed to cancel task');
}, 25000);
```
Browser-side code example
```html
<html>
  <head>
    <meta charset="utf-8" />
    <title>SuperUpload Test</title>
    <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/amis/6.2.2/sdk.min.css" integrity="sha512-9yikVhRqNeq1rypIzAFKR8CA2uG8V5gYppoDKK4xx+7eoLXJsEm+f9QN0++xqHvrOxxvb93uiipgYk9uEW7RlA==" crossorigin="anonymous" referrerpolicy="no-referrer" />
    <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/amis/6.2.2/cxd.min.css" integrity="sha512-9yikVhRqNeq1rypIzAFKR8CA2uG8V5gYppoDKK4xx+7eoLXJsEm+f9QN0++xqHvrOxxvb93uiipgYk9uEW7RlA==" crossorigin="anonymous" referrerpolicy="no-referrer" />
    <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/simple-notify@1.0.4/dist/simple-notify.css" />
  </head>
  <body>
    <div id="root"></div>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.7.1/jquery.min.js" integrity="sha512-v2CJ7UaYy4JwqLDIrZUI/4hqeoQieOmAZNXBeQyjo21dadnwR+8ZaIJVT8EE2iyI61OV8e6M8PP2/4hpQINQ/g==" crossorigin="anonymous" referrerpolicy="no-referrer"></script>
    <script src="https://cdn.jsdelivr.net/npm/simple-notify@1.0.4/dist/simple-notify.min.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/amis/6.2.2/sdk.min.js" integrity="sha512-BpMIHWCtAUDARuH/qnGH6eBxoOv4l0i4Y9/9f6vnGVDNnPqbWCZoPgk3/KvY6NuOb6cAPFmqVN1R2sXz7Z3VcQ==" crossorigin="anonymous" referrerpolicy="no-referrer"></script>
    <script src="https://bce.bdstatic.com/lib/@baiducloud/sdk/1.0.1-beta.2/baidubce-sdk.bundle.min.js"></script>
    <script>
      (async function () {
        // Instantiate the BOS SDK
        const client = new window.baidubce.sdk.BosClient({
          endpoint: 'http://bj.bcebos.com',
          credentials: {
            ak: "<Your Access Key>",
            sk: "<Your Secret Key>"
          }
        });
        // Bucket name
        const bucketName = '<Your Bucket Name>';
        // Upload task instance
        let SuperUpload;
        // Progress information
        let p1 = 0;

        // Message popup
        function pushNotify({status, title, text}) {
          new Notify({
            status: status || 'success',
            title: title,
            text: text,
            distance: 200,
            position: 'x-center'
          });
        }

        // Get the task instance
        function getInstance() {
          if (SuperUpload) {
            pushNotify({title: 'Get instance information', text: 'Instance information got successfully'});
          } else {
            pushNotify({status: 'error', title: 'Get instance information', text: 'Failed to get instance information'});
          }
        }

        async function start() {
          const files = $('#file')?.[0]?.files;

          if (!files || files.length === 0) {
            pushNotify({status: 'error', title: 'Upload', text: 'Please select a file first'});
            return;
          }

          const file = files[0];
          let reader = new FileReader();
          reader.readAsArrayBuffer(file);
          let blob = null;

          // Read the file as a Blob object
          reader.onload = async e => {
            if (typeof e.target.result === 'object') {
              blob = new Blob([e.target.result]);
            } else {
              blob = e.target.result;
            }

            SuperUpload = client.putSuperObject({
              bucketName,
              objectName: file.name,
              ContentLength: file.size,
              ContentType: file.type,
              data: blob,
              partConcurrency: 2,
              onProgress: options => {
                const {speed, progress, uploadedBytes, totalBytes} = options;

                console.log(options);
                amisScoped.updateProps({
                  data: {p1: progress * 100}
                });
              },
              onStateChange: (state, data) => {
                if (state === 'completed') {
                  pushNotify({title: 'Upload', text: 'Upload successful'});
                  console.log(data);
                } else if (state === 'failed') {
                  pushNotify({status: 'error', title: 'Upload failed', text: data.message});
                } else if (state === 'cancelled') {
                  pushNotify({title: 'Upload', text: 'Upload task cancelled'});
                } else if (state === 'inited') {
                  pushNotify({title: 'Upload', text: 'Upload task initialization completed'});
                } else if (state === 'running') {
                  pushNotify({title: 'Upload', text: 'Upload task starts running...'});
                } else if (state === 'paused') {
                  pushNotify({title: 'Upload', text: 'Upload task has been paused'});
                }
              }
            });

            const tasks = await SuperUpload.start();
            console.log('Slice list: ', tasks);
          };
        }

        function pause() {
          if (SuperUpload) {
            SuperUpload.pause();
          } else {
            pushNotify({status: 'error', title: 'Pause', text: 'Instance does not exist'});
          }
        }

        function resume() {
          if (SuperUpload) {
            SuperUpload.resume();
          } else {
            pushNotify({status: 'error', title: 'Resume', text: 'Instance does not exist'});
          }
        }

        async function cancel() {
          if (SuperUpload) {
            const result = await SuperUpload.cancel();

            if (result) {
              SuperUpload = undefined;
              p1 = 0;

              amisScoped.updateProps({
                data: {p1: 0}
              });
            }
          } else {
            pushNotify({status: 'error', title: 'Cancel task', text: 'Instance does not exist'});
          }
        }

        const amis = amisRequire('amis/embed');
        const amisJSON = {
          type: 'page',
          body: [
            {
              type: 'alert',
              title: 'CORS Configuration',
              body: 'If the API reports a CORS error, please enable the <a href="https://cloud.baidu.com/doc/BOS/s/Dk6kqw1g8" target="_blank">CORS configuration</a> for the corresponding bucket',
              level: 'warning',
              showIcon: true,
              className: 'mb-3'
            },
            {
              type: 'custom',
              name: 'file',
              onMount: (dom, value, onChange, props) => {
                const input = document.createElement('input');
                input.setAttribute('type', 'file');
                input.setAttribute('id', 'file');

                dom.appendChild(input);
              }
            },
            {
              type: 'progress',
              value: '${p1}',
              style: {
                marginTop: '20px',
                width: '500px'
              }
            },
            {
              type: 'flex',
              justify: 'flex-start',
              style: {
                marginTop: '20px'
              },
              items: [
                {
                  type: 'button',
                  level: 'primary',
                  label: 'Get instance',
                  style: {
                    marginRight: '10px'
                  },
                  onClick: getInstance
                },
                {
                  type: 'button',
                  level: 'primary',
                  label: 'Upload',
                  style: {
                    marginRight: '10px'
                  },
                  onClick: start
                },
                {
                  type: 'button',
                  level: 'primary',
                  label: 'Pause',
                  style: {
                    marginRight: '10px'
                  },
                  onClick: pause
                },
                {
                  type: 'button',
                  level: 'primary',
                  label: 'Resume',
                  style: {
                    marginRight: '10px'
                  },
                  onClick: resume
                },
                {
                  type: 'button',
                  level: 'danger',
                  label: 'Cancel',
                  onClick: cancel
                }
              ]
            }
          ]
        };
        const amisScoped = amis.embed('#root', amisJSON, {data: {p1}});
      })()
    </script>
  </body>
</html>
```
Set HTTP headers and custom metadata for an object
The SDK is ultimately a wrapper around the backend HTTP API, so BOS lets users customize the HTTP headers sent with an upload and attach custom metadata to the object being uploaded. Taking the putObjectFromFile() function as an example:
Example code
```javascript
let options = {
    'Content-Type': 'application/json', // Add http header
    'Cache-Control': 'public, max-age=31536000', // Specify cache directives
    'Content-Disposition': 'attachment; filename="example.jpg"', // Indicate how the response content should be displayed
    'x-bce-meta-foo1': 'bar1', // Add custom meta information
    'x-bce-meta-foo2': 'bar2',
    'x-bce-meta-foo3': 'bar3'
};
client.putObjectFromFile(bucket, object, <path-to-file>, options)
    .then(done)
    .catch(fail);
```
> **Note:** The key of custom Meta information needs to start with `x-bce-meta-`.
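To verify the result, the headers and custom metadata can be read back after the upload. A minimal sketch, assuming the SDK's getObjectMetadata method, which returns the object's headers without its data:

```javascript
// Read back the headers and custom metadata of the uploaded object
client.getObjectMetadata(bucket, object)
    .then(function (response) {
        // Custom metadata comes back as x-bce-meta-* response headers
        console.log(response.http_headers['x-bce-meta-foo1']); // 'bar1'
        console.log(response.http_headers['content-type']);    // 'application/json'
    })
    .catch(fail);
```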
Obtain upload progress
The JavaScript SDK provides real-time upload progress information, which can be accessed by listening to the 'progress' event. This feature is supported by all upload-related APIs.
Example code: taking the putObjectFromBlob API as an example
```javascript
// Upload in blob object form, only supported in the browser environment
client.putObjectFromBlob(bucket, object, <blob object>)
    .then(done)
    .catch(fail);

client.on('progress', function () {
    // do something
});
```
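A sketch of a handler that turns the event into a percentage. The event shape here is an assumption: it treats the listener as receiving an XHR-style progress event with `lengthComputable`, `loaded`, and `total` fields, which should be verified against the SDK version in use:

```javascript
client.on('progress', function (evt) {
    // Assumed XHR-style event shape: lengthComputable / loaded / total
    if (evt && evt.lengthComputable) {
        var percent = (evt.loaded / evt.total * 100).toFixed(2);
        console.log('uploaded ' + percent + '%');
    }
});
```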
Synchronous callback
The JavaScript SDK supports the BOS server-side synchronous callback API. By adding x-bce-process to the request header, or to the URL query, to specify the callback server address and related parameters, the BOS server will actively call the callback API to notify the business server after the upload completes.
Currently, only normal upload (PutObject) and complete multipart upload (CompleteMultipartUpload) are supported.
- Method 1: Use the `callback` parameter; the SDK will process the parameters and add them to the request header for you.
```javascript
let res;
try {
    res = await client.putObjectFromString('bucketName', 'fileName', 'demo-string', {
        callback: {
            urls: ["https://www.test.com/callback"],
            vars: {name: 'baidu'},
            encrypt: 'config',
            key: 'callback1'
        }
    });
    /* callback result */
    console.log(res.body.callback.result);
} catch (e) {
    /* callback error code */
    console.error(res.body.callback.code);
    /* callback error message */
    console.error(res.body.callback.message);
}
```
- Method 2: Handle the parameters yourself and add the `x-bce-process` parameter and its value to the request header.
```javascript
let res;
try {
    res = await client.putObjectFromString('bucketName', 'fileName', 'demo-string', {
        'x-bce-process': 'callback/callback,u_WyJodHRwczovL3d3dy50ZXN0LmNvbS9jYWxsYmFjayJd,m_sync,v_eyJuYW1lIjoiYmFpZHUifQ'
    });
    /* callback result */
    console.log(res.body.callback.result);
} catch (e) {
    /* callback error code */
    console.error(res.body.callback.code);
    /* callback error message */
    console.error(res.body.callback.message);
}
```
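Judging from the sample value above, the u_ and v_ segments are base64-encoded JSON: the URL list and the custom variables respectively, with m_sync selecting synchronous mode. A sketch of building the header value that way; treat the exact segment grammar as an assumption to be checked against the BOS callback documentation:

```javascript
// Builds an x-bce-process callback value like the literal one above (assumed grammar).
// Padding '=' is stripped; URL-unsafe '+' / '/' characters, if any, may also need
// the base64url replacements '-' / '_'.
function buildCallbackProcess(urls, vars) {
    const u = Buffer.from(JSON.stringify(urls)).toString('base64').replace(/=+$/, '');
    const v = Buffer.from(JSON.stringify(vars)).toString('base64').replace(/=+$/, '');
    return 'callback/callback,u_' + u + ',m_sync,v_' + v;
}

console.log(buildCallbackProcess(["https://www.test.com/callback"], {name: 'baidu'}));
// callback/callback,u_WyJodHRwczovL3d3dy50ZXN0LmNvbS9jYWxsYmFjayJd,m_sync,v_eyJuYW1lIjoiYmFpZHUifQ
```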
