Select scanning
Overview
The SelectObject API executes SQL statements against the content of objects stored in BOS in supported formats (CSV/JSON/Parquet), using SQL, a structured query language, to screen, analyze, and filter object content before returning only the data the user needs.
Without SelectObject, users who want to filter the content of objects stored in BOS must first download whole objects with the GetObject API, then analyze and filter the data locally. The SelectObject API moves this filtering into the BOS service layer, which reduces the network bandwidth usage and latency of downloading data from BOS and saves the CPU and memory otherwise spent on local filtering, ultimately lowering the cost of applications that access BOS data.
Applicable scenarios
A common use case for SelectObject is working with big data tools, replacing the traditional GetObject API for processing BOS data. It is used for tasks such as extracting specific content from log files or filtering data for analysis.
Usage requirements
To filter objects in BOS using the SelectObject API, the following constraints and detailed requirements must be met:
- Supported file types
- Only UTF-8 encoded files can be selected: CSV files compliant with RFC 4180 (including TSV and other CSV-like files), JSON files, and Parquet files;
- For CSV files: the maximum length of a single row and of a single column is 512 KB each;
- For JSON files: two types are supported, DOCUMENT and LINES. DOCUMENT: the entire file is a single JSON object. LINES: the file consists of one JSON object per line (the file as a whole is not a valid JSON object). Lines are separated by a newline delimiter; users can specify common delimiters such as \n, \r, and \r\n;
- Files in three storage classes are supported: Standard, Infrequent Access, and Cold Storage;
- Files encrypted with three server-side encryption methods are supported: SSE-BOS, SSE-KMS, and SSE-C;
- Files compressed with GZIP are supported (content is decompressed in a streaming manner before being returned). Note: raw deflate streams are not supported; GZIP files must comply with RFC 1952 (see the Gzip compression standard reference). Parquet files whose columns are compressed with GZIP or SNAPPY are also supported.
- Supported SQL syntax
- Only the SELECT statement is currently supported. SQL statements must follow the format: `Select field_list From source Where condition Limit number;`
- Supported data types: string, int (64-bit), float (64-bit), timestamp, boolean, decimal;
- Supported operations: logical operators (AND/OR/NOT), arithmetic operators (+, -, *, /, %), comparison operators (>, =, <, >=, <=, !=), matching operators (LIKE, BETWEEN ... AND, IN), and null checks (IS NULL / IS NOT NULL);
- Supported functions and keywords: aggregation functions (AVG, COUNT, MAX, MIN, SUM), the conversion function CAST, and the alias keyword AS;
- Only single-file queries are supported. Keywords such as JOIN, ORDER BY, GROUP BY, HAVING, and OFFSET are not supported.
- SQL statement restrictions
- Single SQL statement constraints: maximum length 16 KB; maximum number of columns 1,000; maximum column name length 1,024; maximum number of aggregation operations (e.g., COUNT, AVG) 100;
- Strings in SQL statements must be enclosed in single quotes; identifiers must be enclosed in double quotes. Example: `SELECT * FROM BosObject WHERE "ident" = 'str'`, where ident is an identifier in the data and str is a string value;
- The LIMIT clause takes precedence over aggregation functions. For example, `Select avg(cast(_1 as int)) from BosObject limit 100` computes the average of the first 100 rows; this differs from MySQL semantics;
- The COUNT function can only be used with `*`, i.e., `count(*)`; forms like `count(_1)` are not allowed;
- The maximum length of the JSON node data addressed by the json path after FROM is 512 KB, with a maximum depth of 10 levels;
- The array wildcard `[*]` can only be used when selecting JSON files, and only in the json path after FROM; it cannot appear in expressions after SELECT or WHERE;
- CSV-specific restriction: only BosObject is allowed after the FROM keyword;
- WHERE clause restrictions: it cannot contain aggregation conditions; only logical operations are allowed;
- A LIKE pattern supports at most 5 wildcards: `%` (matching zero or more arbitrary characters) and `_` (matching exactly one character). An IN list supports at most 1,024 constant items;
- Fields after SELECT can be column names, CSV column indices (e.g., _1, _2), or aggregation functions (e.g., `AVG(cast(_1 as int))`). A standalone `cast(_1 as int)` is not allowed, and binary expressions are not supported as fields;
- If one of the SELECT fields is `*`, no other fields are allowed; for example, `select *, _1 from s` is invalid. Aggregation functions and plain column names cannot be mixed in the SELECT field list, and all aliases in the SELECT field list must be unique;
- For JSON, a field or source of the form `key[*]` or `key[1]` is interpreted as selecting array elements under that key. A form like `key[a]` is instead parsed as the literal key `key[a]`, whose value is retrieved from the JSON;
- Case sensitivity: key matching in JSON files and SQL is case-sensitive. For example, `select s.Age` and `select s.age` are different;
- For the `BETWEEN ... AND` and `IN` keywords (range and set matching), ensure all values belong to the same data type.
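The quoting rules above (single quotes for string values, double quotes for identifiers, `\` for escaping) are easy to get wrong when building SQL strings programmatically. A minimal sketch of helper functions that apply them; `quoteIdent` and `quoteString` are illustrative names and not part of any BOS SDK:

```go
package main

import (
	"fmt"
	"strings"
)

// quoteIdent wraps an identifier in double quotes, escaping embedded double quotes.
func quoteIdent(name string) string {
	return `"` + strings.ReplaceAll(name, `"`, `\"`) + `"`
}

// quoteString wraps a string literal in single quotes, escaping embedded single quotes.
func quoteString(val string) string {
	return "'" + strings.ReplaceAll(val, "'", `\'`) + "'"
}

func main() {
	sql := fmt.Sprintf("select %s from BosObject where %s = %s limit 10",
		quoteIdent("header1"), quoteIdent("header2"), quoteString("str"))
	fmt.Println(sql)
	// select "header1" from BosObject where "header2" = 'str' limit 10
}
```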
Select data fault tolerance mechanism
(I) Handling Missing Data
- For CSV files: if a column's data is missing and that column is used in a WHERE condition, the condition is deemed unsatisfied and the row is skipped. If the missing column is used in an aggregation in SELECT (e.g., `avg(cast(_1 as int))`), aggregating a non-existent column is invalid: processing terminates immediately and a corresponding error message is returned;
- For JSON files: the same rules apply when a key is missing;
- Missing columns in CSV files and missing keys in JSON files are treated as NULL by default (e.g., `IS NULL` returns true for them).
- Additional scenarios:
- If a missing JSON key or CSV column is used in a WHERE expression (e.g., `... where _1 = ''` or `... where a.b = 'value'`), it is treated as NULL;
- If a missing JSON key or CSV column is selected directly (e.g., `select _1 from ...` or `select a.b from ...`), an empty string is returned by default for both missing CSV columns and missing JSON keys.
(II) Handling Data Type Mismatches
- For CSV files: if a column's value has an invalid type (e.g., `cast(_1 as int)` where _1 is a non-numeric string, causing the cast to fail): when the cast is used in a WHERE condition, the condition is deemed unsatisfied and the row is skipped; when the column is used in an aggregation in SELECT (e.g., `avg(cast(_1 as int))`), aggregating the invalid column is invalid, and processing terminates immediately with a corresponding error message;
- For JSON files: the same rules apply when a key's corresponding value has an invalid data type.
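The row-skipping behavior described above can be illustrated locally. This sketch (plain Go, no SDK involved) mimics how a condition like `cast(_1 as int) > 1` treats rows whose first column fails the cast: the failed cast makes the condition unsatisfied, so the row is skipped rather than raising an error:

```go
package main

import (
	"fmt"
	"strconv"
)

// filterRows keeps rows whose first column casts to int and satisfies pred,
// mimicking SelectObject's WHERE fault tolerance: a failed cast simply
// makes the condition false, so the row is skipped.
func filterRows(rows [][]string, pred func(int) bool) [][]string {
	var out [][]string
	for _, row := range rows {
		n, err := strconv.Atoi(row[0])
		if err != nil {
			continue // cast failure: condition unsatisfied, skip the row
		}
		if pred(n) {
			out = append(out, row)
		}
	}
	return out
}

func main() {
	rows := [][]string{{"1", "x"}, {"a", "y"}, {"3", "z"}}
	kept := filterRows(rows, func(n int) bool { return n > 1 })
	fmt.Println(len(kept)) // "a" fails the cast and is skipped; only {"3","z"} matches
}
```

In an aggregation context, by contrast, the same cast failure would abort processing with an error instead of skipping the row.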
Note
- SelectObject is a CPU-intensive request. Data scanning volume is calculated in 8 MB units. For specific pricing standards, see [Product Pricing - Data Processing Fees - Select Scanning Fees](BOS/Product pricing/Charge Item Introduction/Charge Item Overview.md).
- The maximum QPS allowed for SelectObject requests per bucket is 500.
- If SQL statements or file content contain special characters, use the `\` escape character. For example, `select "key }\"[" from BosObject.array[*]` selects the value at the path `key }"[` in the JSON array.
- For scenarios demanding high data precision (e.g., floating-point calculations or financial transactions), the decimal data type is recommended. Decimal supports arithmetic (+, -, *, /), comparison (>, =, <, >=, <=, !=), and matching (BETWEEN ... AND, IN) operations. Other numeric types (e.g., int, float) can also take part in operations with decimal; they are automatically converted to decimal by default for better accuracy.
- Data in CSV files is treated as string by default, and JSON has no built-in decimal data type. To process a CSV column or JSON value as decimal, use the CAST function, e.g., `cast(_1 as decimal)`, `cast(1.23 as decimal)`, `cast(key as decimal)`.
CSV Object
Selecting a CSV object typically involves using column indices or column names to retrieve specific columns, or performing aggregation on certain columns. For example, a test.csv file contains columns of various data types. Note: by default, each column in a CSV is a string, so CAST conversions are required where other types are needed. Do not leave spaces around column delimiters.
```
header1,header2,header3
1,2,3.4
a,b,c
"d","e","f"
true,false,true
2006-01-02 15:04:06,"2006-01-02 16:04:06",2006-01-02 17:04:06
```
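As a purely local illustration of the data these queries operate on (not of the server-side engine), the sample file can be parsed with Go's encoding/csv. With fileHeaderInfo set to NONE, a query like `select count(*) from BosObject` sees every line, header included:

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// The sample test.csv content from above.
const sample = `header1,header2,header3
1,2,3.4
a,b,c
"d","e","f"
true,false,true
2006-01-02 15:04:06,"2006-01-02 16:04:06",2006-01-02 17:04:06`

// countRows mimics `select count(*)` with fileHeaderInfo=NONE:
// every line, including the header, counts as one row.
func countRows(data string) int {
	r := csv.NewReader(strings.NewReader(data))
	recs, err := r.ReadAll()
	if err != nil {
		panic(err)
	}
	return len(recs)
}

func main() {
	fmt.Println(countRows(sample)) // 6
}
```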
Common SQL statements
| SQL statement | Description | Remarks |
|---|---|---|
| select * from BosObject limit 100 | Return the first 100 rows of the object | - |
| select header1,header2 from BosObject | Return columns named header1 and header2 from the object | The fileHeaderInfo parameter must be "USE" |
| select _1,_3 from BosObject where cast(_1 as int) <= cast(_3 as int) | Return integers from columns 1 and 3 where column 1 is less than or equal to column 3 | Columns represented by _1 and _3 must be integer-type to allow CAST conversion; otherwise, the row is skipped for failing the condition |
| select count(*) from BosObject | Return the total number of rows in the object | - |
| select AVG(cast(_1 AS int)), MAX(cast(_1 AS int)), MIN(cast(_1 AS int)) from BosObject | Return the average, maximum, and minimum values of the first column in the object | The first column of each row must not contain non-integer strings; otherwise, the process fails immediately |
| select SUM(cast(header1 AS float)) from BosObject WHERE cast(header1 AS float) != 1 | Return the sum of values in the header1 column where the value is not equal to 1 | The header1 column of each row must not contain non-numeric strings |
| select * from BosObject where _1 LIKE '%Fruit_' | Return rows where column _1 matches the pattern: any prefix, then "Fruit", then exactly one more character (e.g., "Fruits" matches; "Fruit" does not) | Strings after the LIKE operator must be enclosed in single quotes |
| select * from BosObject where cast(_1 AS int) % 3 = 0 | Return all rows where column _1 is divisible by 3 | _1 must be an integer string to use the % operator |
| select * from BosObject where cast(_1 AS int) between 1 and 2 | Return all rows where column _1 is in the range [1,2] | _1 must be an integer string |
| select * from BosObject where cast(_1 AS timestamp) NOT IN (cast('2006-01-02 15:04:06' as timestamp), cast('2006-01-03 15:04:06' as timestamp)) | Return all rows where column _1 is not in the IN range | _1 must be a date string |
| select * from BosObject where cast(_1 AS int) * cast(_2 AS int) > cast(_3 AS float) + 1 | Return all rows where column _1 satisfies the conditional expression result | _1, _2, _3 must be valid strings that meet CAST conditions |
| SELECT _1,_2 FROM BosObject WHERE cast(_2 as decimal) IN (cast('5.1824349494011866916128' as decimal),cast('5.00000000000001000000000' as decimal)) | Return columns 1 and 2 of all rows where column _2 equals one of the two listed decimal values | Column _2 is compared as the decimal data type |
| SELECT MAX(CAST(_3 AS DECIMAL)) FROM BosObject WHERE CAST(_3 AS DECIMAL) >= cast('559062242.92723' as float) | Return the maximum value in column _3 among rows where the value is greater than or equal to 559062242.92723 | Column _3 is compared as the decimal data type; the float operand is automatically converted to decimal |
JSON Object
Selecting a JSON object typically involves using keys to retrieve corresponding data. JSON files include two types: LINES and DOCUMENT, and their content must comply with official standards.
JSON DOCUMENT Object
```
{"name": "Smith",
"age": 16,
"weight": 65.5,
"org": null,
"projects":
  [
    {"project_name":"project1", "completed":false},
    {"project_name":"project2", "completed":true}
  ]
}
```
JSON LINES Object
```
{"name": "Smith",
"age": 16,
"org": null,
"projects":
  [
    {"project_name":"project1", "completed":false},
    {"project_name":"project2", "completed":true}
  ]
}
{"name": "charles",
"age": 17,
"org": "baidu",
"weight": 65.5,
"projects":
  [
    {"project_name":"project3", "completed":false},
    {"project_name":"project4", "completed":true}
  ]
}
```
Common SQL statements
- Basic form of a json path: `field0.field1[n].property1.attributes[*]` means: find the n-th element of the array under the field1 node (which is under the field0 node at the root of the JSON file), then select all contents of the attributes array under that element's property1.
- SQL for JSON objects supports aggregation functions, logical operations, and mathematical operations, among others. JSON values carry intrinsic data types and typically do not require CAST conversion, unless explicit parsing to decimal or another type is necessary.
| SQL statement | Description | Remarks |
|---|---|---|
| select projects from BosObject where name='Smith' | Return projects elements in the JSON file where name='Smith' | |
| select * from BosObject.projects[*].project_name | Return the project_name field of the projects node array under the root node of the JSON file | |
| select s.completed from BosObject.projects[1] s where s.project_name='project2' | Return the completed field value of the first element in the projects array of the object, where project_name = 'project2' | |
| select * from BosObject s where s.org IS NULL AND weight is null | Return records in the JSON file where both org and weight are null | A non-existent weight node is also treated as null |
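As a local illustration of JSON LINES semantics (not of the server-side engine), each line is an independent JSON object, so a query like `select name from BosObject where age > 16` conceptually decodes and filters the file line by line. The field names below come from the sample objects above:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// Two of the JSON LINES records from the sample above, flattened to one line each.
const lines = `{"name": "Smith", "age": 16, "org": null}
{"name": "charles", "age": 17, "org": "baidu"}`

// filterByAge mimics `select name from BosObject where age > min` over a
// JSON LINES object: decode each line independently and keep the matches.
func filterByAge(data string, min float64) []string {
	var names []string
	for _, line := range strings.Split(data, "\n") {
		var obj map[string]interface{}
		if err := json.Unmarshal([]byte(line), &obj); err != nil {
			continue // undecodable line: skipped, mirroring row-level fault tolerance
		}
		if age, ok := obj["age"].(float64); ok && age > min {
			names = append(names, obj["name"].(string))
		}
	}
	return names
}

func main() {
	fmt.Println(filterByAge(lines, 16)) // [charles]
}
```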
Parquet Object
Selecting data from a Parquet object usually involves using column names to retrieve specific information. Parquet is a widely used columnar storage format in the big data field. Its selection method is similar to that of JSON objects.
Error return codes
- Server-side error codes may be returned as an HTTP status code, or in the error-code field of the End Message, depending on the specific error type
| ErrorCode | Description | HTTP Status Code |
|---|---|---|
| AggregateInvalidField | Invalid use of aggregation functions in SQL statements (only numeric columns can be aggregated) | 400 |
| DecompressError | Object decompression failure | 400 |
| DataOverflowsType | Result overflow of aggregated columns (exceeding type limits) | 400 |
| FieldNotExist | The content corresponding to the field in the SQL statement does not exist in the file | 400 |
| HeaderNotExist | The header information does not exist in the CSV object | 400 |
| InappropriateJson | The content format of the JSON object is incorrect | 400 |
| InappropriateParquet | The content format of the Parquet object is incorrect | 400 |
| InvalidCompressionTypeParameter | The compressionType parameter in the SelectObject request is invalid | 400 |
| InvalidExpressionParameter | The expression parameter in the SelectObject request is invalid | 400 |
| InvalidExpressionTypeParameter | The expressionType parameter in the SelectObject request is invalid | 400 |
| InvalidFileType | Select only supports retrieving content from CSV, JSON, and Parquet objects | 400 |
| InvalidJsonTypeParameter | The json type parameter in the SelectObject request is invalid | 400 |
| InvalidQuoteFieldsParameter | The quote fields parameter in the SelectObject request is invalid | 400 |
| InvalidSelectRequestJsonBody | The JSON body in the SelectObject request is invalid | 400 |
| InvalidSqlBetweenOperator | Incorrect use of the BETWEEN operator in the SQL statement: BETWEEN and AND must be used together, and the data types on both sides of AND must be consistent | 400 |
| InvalidSqlBinaryExpr | Illegal use of binary operators: the data types of the left and right operands do not match | 400 |
| InvalidSqlFields | The field in the SELECT clause of the SQL statement is invalid, which may be caused by the presence of binary operators or other illegal operations | 400 |
| InvalidSqlFunction | Incorrect use of functions in the SQL statement: check the data types and number of function parameters | 400 |
| InvalidSqlInOperator | Incorrect use of the IN operator in the SQL statement: the data types of values inside IN must be consistent | 400 |
| InvalidSqlIsOperator | Incorrect use of the IS operator in the SQL statement: it can only be used with NULL/NOT NULL | 400 |
| InvalidSqlJsonPathDepth | The depth of the selected JSON object node is invalid (exceeds the limit of 1,024 or is less than 1) | 400 |
| InvalidSqlLikeOperator | Incorrect use of the LIKE operator in the SQL statement | 400 |
| InvalidSqlLimitValue | The value of the Limit field in the SQL statement is invalid; it must be a positive integer | 400 |
| InvalidSqlNotOperator | Incorrect use of the NOT operator in the SQL statement: it can only be used before BETWEEN/IN/LIKE to indicate negation | 400 |
| InvalidSqlSource | The source after FROM in the SQL statement is invalid; check whether the source format meets the requirements | 400 |
| RecordTooLarge | The length of a single row record in the CSV file exceeds the 512 KB limit | 400 |
| SqlFieldsNumExceedLimit | The number of fields in the SELECT clause of the SQL statement exceeds the limit | 400 |
| SqlSourceNumExceedLimit | Only one source is allowed after FROM in the SQL statement | 400 |
| SqlSyntaxError | The SQL statement is invalid (contains syntax errors) | 400 |
SDK usage examples
Currently, the [BOS Java SDK](BOS/SDK/Java-SDK/File management.md), [BOS GO SDK](BOS/SDK/GO-SDK/File management.md), and [BOS Python SDK](BOS/SDK/Python-SDK/File management/Upload files.md) all support the SelectObject API.
Java SDK example
```java
public void selectCsv(BosClient client, String bucketName, String csvObject) {
    System.out.println("------ select csv object ------");
    SelectObjectRequest request = new SelectObjectRequest(bucketName, csvObject)
            .withSelectType(Constants.SELECT_TYPE_CSV)
            .withExpression("select * from BosObject limit 3")
            .withExpressionType(SelectObjectRequest.ExpressionType.SQL)
            .withInputSerialization(new InputSerialization()
                    .withCompressionType("NONE")
                    .withFileHeaderInfo("NONE")
                    .withRecordDelimiter("\r\n")
                    .withFieldDelimiter(",")
                    .withQuoteCharacter("\"")
                    .withCommentCharacter("#"))
            .withOutputSerialization(new OutputSerialization()
                    .withOutputHeader(false)
                    .withQuoteFields("ALWAYS")
                    .withRecordDelimiter("\n")
                    .withFieldDelimiter(",")
                    .withQuoteCharacter("\""))
            .withRequestProgress(false);
    SelectObjectResponse response = client.selectObject(request);
    // Output query result
    printRecords(response.getMessages());
}

public void selectJson(BosClient client, String bucketName, String jsonObject) {
    System.out.println("------ select json object ------");
    SelectObjectRequest request = new SelectObjectRequest(bucketName, jsonObject)
            .withSelectType(Constants.SELECT_TYPE_JSON)
            .withExpression("select * from BosObject where age > 20")
            .withInputSerialization(new InputSerialization()
                    .withCompressionType("NONE")
                    .withJsonType("LINES"))
            .withOutputSerialization(new OutputSerialization()
                    .withRecordDelimiter("\n"))
            .withRequestProgress(false);
    SelectObjectResponse response = client.selectObject(request);
    // Output query result
    printRecords(response.getMessages());
}

public void selectParquet(BosClient client, String bucketName, String parquetObject) {
    System.out.println("------ select parquet object ------");
    SelectObjectRequest request = new SelectObjectRequest(bucketName, parquetObject)
            .withSelectType(Constants.SELECT_TYPE_PARQUET)
            .withExpression("select * from BosObject where age > 20")
            .withInputSerialization(new InputSerialization()
                    .withCompressionType("NONE"))
            .withOutputSerialization(new OutputSerialization()
                    .withRecordDelimiter("\n"))
            .withRequestProgress(false);
    SelectObjectResponse response = client.selectObject(request);
    // Output query result
    printRecords(response.getMessages());
}

public void printRecords(SelectObjectResponse.Messages messages) {
    if (messages == null) {
        return;
    }
    while (messages.hasNext()) {
        SelectObjectResponse.CommonMessage message = messages.next();
        if (message.Type.equals("Records")) {
            for (String record : message.getRecords()) {
                System.out.println(record);
            }
        }
    }
}
```
Golang example
```go
package main

import (
	"bufio"
	"encoding/binary"
	"fmt"
	"io"
	"strings"
)

import (
	"github.com/baidubce/bce-sdk-go/services/bos"
	"github.com/baidubce/bce-sdk-go/services/bos/api"
)

func main() {
	selectBosObject()
}

func selectBosObject() {
	// Initialize BosClient
	AK, SK := "ak", "sk"
	ENDPOINT := "bj.bcebos.com"
	bosClient, _ := bos.NewClient(AK, SK, ENDPOINT)
	// First, ensure the bucket and object exist, and the object complies with the CSV/JSON file format requirements
	bucket := "select-bucket"
	csvObject := "test.csv"
	fmt.Println("------ select csv object -------")
	csvArgs := &api.SelectObjectArgs{
		SelectType: "csv",
		SelectRequest: &api.SelectObjectRequest{
			Expression:     "c2VsZWN0ICogZnJvbSBCb3NPYmplY3Qgd2hlcmUgY2FzdChfMSBBUyBpbnQpICogY2FzdChfMiBBUyBpbnQpID4gY2FzdChfMyBBUyBmbG9hdCkgKyAx",
			ExpressionType: "SQL",
			InputSerialization: &api.SelectObjectInput{
				CompressionType: "NONE",
				CsvParams: map[string]string{
					"fileHeaderInfo":   "IGNORE",
					"recordDelimiter":  "Cg==",
					"fieldDelimiter":   "LA==",
					"quoteCharacter":   "Ig==",
					"commentCharacter": "Iw==",
				},
			},
			OutputSerialization: &api.SelectObjectOutput{
				OutputHeader: false,
				CsvParams: map[string]string{
					"quoteFields":     "ALWAYS",
					"recordDelimiter": "Cg==",
					"fieldDelimiter":  "LA==",
					"quoteCharacter":  "Ig==",
				},
			},
			RequestProgress: &api.SelectObjectProgress{
				Enabled: true,
			},
		},
	}
	csvRes, err := bosClient.SelectObject(bucket, csvObject, csvArgs)
	if err != nil {
		fmt.Println(err)
		return
	}
	parseMessages(csvRes)
	fmt.Println("------ select json object -------")
	jsonObject := "test.json"
	jsonArgs := &api.SelectObjectArgs{
		SelectType: "json",
		SelectRequest: &api.SelectObjectRequest{
			Expression:     "c2VsZWN0ICogZnJvbSBCb3NPYmplY3QucHJvamVjdHNbKl0ucHJvamVjdF9uYW1l",
			ExpressionType: "SQL",
			InputSerialization: &api.SelectObjectInput{
				CompressionType: "NONE",
				JsonParams: map[string]string{
					"type": "LINES",
				},
			},
			OutputSerialization: &api.SelectObjectOutput{
				JsonParams: map[string]string{
					"recordDelimiter": "Cg==",
				},
			},
			RequestProgress: &api.SelectObjectProgress{
				Enabled: true,
			},
		},
	}
	jsonRes, err := bosClient.SelectObject(bucket, jsonObject, jsonArgs)
	if err != nil {
		fmt.Println(err)
		return
	}
	parseMessages(jsonRes)
}

// Parse all headers and store them in a map
func parseHeaders(headers []byte) map[string]string {
	hm := make(map[string]string)
	index := 0
	for index < len(headers) {
		// headers key length
		keyLen := int(headers[index])
		index += 1
		// headers key
		key := headers[index : index+keyLen]
		index += keyLen
		// headers value length
		valLenByte := headers[index : index+2]
		valLen := int(binary.BigEndian.Uint16(valLenByte))
		index += 2
		// headers value
		val := headers[index : index+valLen]
		index += valLen
		hm[string(key)] = string(val)
	}
	return hm
}

func parseMessages(res *api.SelectObjectResult) {
	defer res.Body.Close()
	reader := bufio.NewReader(res.Body)
	for {
		// total length in prelude, 4 bytes
		p := make([]byte, 4)
		l, err := io.ReadFull(reader, p)
		if err != nil || l < 4 {
			fmt.Printf("read total length err: %+v, len: %d\n", err, l)
			break
		}
		totalLen := binary.BigEndian.Uint32(p)
		// headers length in prelude, 4 bytes
		l, err = io.ReadFull(reader, p)
		if err != nil || l < 4 {
			fmt.Printf("read headers length err: %+v, len: %d\n", err, l)
			break
		}
		headersLen := binary.BigEndian.Uint32(p)
		// headers part
		headers := make([]byte, headersLen)
		l, err = io.ReadFull(reader, headers)
		if err != nil || uint32(l) < headersLen {
			fmt.Printf("read headers data err: %+v, len: %d\n", err, l)
			break
		}
		// Get the header length, parse the header content, and determine the specific message type; stop reading if it is an end message
		// If it is a continuation message (cont msg), call the callback function to output progress information; if it is a record message (record msg), output record information
		headersMap := parseHeaders(headers)
		if headersMap["message-type"] == "Records" {
			// payload part
			payloadLen := totalLen - headersLen - 12
			payload := make([]byte, payloadLen)
			if _, err := io.ReadFull(reader, payload); err != nil {
				fmt.Printf("read payload data err: %+v\n", err)
			}
			// Set the newline character in the OutputSerialization field you use for line segmentation
			rs := strings.Split(string(payload), "\n")
			_, err = io.ReadFull(reader, p)
			crc := binary.BigEndian.Uint32(p)
			recordsMsg := &api.RecordsMessage{
				CommonMessage: api.CommonMessage{
					Prelude: api.Prelude{
						TotalLen:   totalLen,
						HeadersLen: headersLen,
					},
					Headers: headersMap,
					Crc32:   crc,
				},
				Records: rs,
			}
			fmt.Printf("RecordsMessage: %+v\n", recordsMsg)
			continue
		}
		if headersMap["message-type"] == "Cont" {
			// payload part, progress
			bs := make([]byte, 8)
			_, err = io.ReadFull(reader, bs)
			bytesScanned := binary.BigEndian.Uint64(bs)
			br := make([]byte, 8)
			_, err = io.ReadFull(reader, br)
			bytesReturned := binary.BigEndian.Uint64(br)
			_, err = io.ReadFull(reader, p)
			crc := binary.BigEndian.Uint32(p)
			contMsg := &api.ContinuationMessage{
				CommonMessage: api.CommonMessage{
					Prelude: api.Prelude{
						TotalLen:   totalLen,
						HeadersLen: headersLen,
					},
					Headers: headersMap,
					Crc32:   crc,
				},
				BytesScanned:  bytesScanned,
				BytesReturned: bytesReturned,
			}
			fmt.Printf("ContinuationMessage: %+v\n", contMsg)
			continue
		}
		if headersMap["message-type"] == "End" {
			_, err = io.ReadFull(reader, p)
			crc := binary.BigEndian.Uint32(p)
			endMsg := &api.EndMessage{
				CommonMessage: api.CommonMessage{
					Prelude: api.Prelude{
						TotalLen:   totalLen,
						HeadersLen: headersLen,
					},
					Headers: headersMap,
					Crc32:   crc,
				},
			}
			fmt.Printf("EndMessage: %+v\n", endMsg)
			break
		}
	}
}
```
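The header layout that parseHeaders decodes above — a 1-byte key length, the key bytes, a 2-byte big-endian value length, then the value bytes — can be exercised locally. This standalone sketch builds a synthetic headers block and parses it back; it mirrors the parsing logic of the example but is not itself part of the SDK:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// encodeHeader appends one header entry: 1-byte key length, key,
// 2-byte big-endian value length, value.
func encodeHeader(buf []byte, key, val string) []byte {
	buf = append(buf, byte(len(key)))
	buf = append(buf, key...)
	var l [2]byte
	binary.BigEndian.PutUint16(l[:], uint16(len(val)))
	buf = append(buf, l[:]...)
	return append(buf, val...)
}

// decodeHeaders is the inverse, matching the parseHeaders logic above.
func decodeHeaders(headers []byte) map[string]string {
	hm := make(map[string]string)
	for i := 0; i < len(headers); {
		keyLen := int(headers[i])
		i++
		key := string(headers[i : i+keyLen])
		i += keyLen
		valLen := int(binary.BigEndian.Uint16(headers[i : i+2]))
		i += 2
		hm[key] = string(headers[i : i+valLen])
		i += valLen
	}
	return hm
}

func main() {
	var buf []byte
	buf = encodeHeader(buf, "message-type", "Records")
	fmt.Println(decodeHeaders(buf)["message-type"]) // Records
}
```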
Python SDK example
Currently, the [BOS Python SDK](BOS/SDK/Python-SDK/File management/Upload files.md) also supports the SelectObject API. For specific usage, refer to the relevant content in the "Python-SDK" - "File Management" - "Select Files" chapter.
