@@ -234,3 +234,53 @@ the maximum size of the strings that will be processed is 20 characters. Since
 cx_Oracle allocates memory for each row based on this value, it is best not to
 oversize it. The first parameter of ``None`` tells cx_Oracle that its default
 processing will be sufficient.
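Since cx_Oracle sizes each row's bind buffer from this hint, one way to avoid oversizing is to measure the data before binding it. A minimal sketch (the ``rows`` list is hypothetical, and the ``setinputsizes()`` call itself needs a live cursor, so it is shown commented out):

```python
# Hypothetical rows to be bound; the second column is the string column
rows = [(1, "Abel"), (2, "Baker"), (3, "Charlie")]

# The longest string determines how much memory each row's bind buffer needs
max_name_len = max(len(name) for _, name in rows)
print(max_name_len)  # -> 7 ("Charlie")

# With an open cursor, the measured size would then be passed as the hint:
# cursor.setinputsizes(None, max_name_len)
# cursor.executemany("insert into test (id, name) values (:1, :2)", rows)
```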
+
+Loading CSV Files into Oracle Database
+======================================
+
+The :meth:`Cursor.executemany()` method and `csv module
+<https://docs.python.org/3/library/csv.html#module-csv>`__ can be used to
+efficiently load CSV (Comma Separated Values) files. For example, consider the
+file ``data.csv``::
+
+    101,Abel
+    154,Baker
+    132,Charlie
+    199,Delta
+    . . .
+
+And the schema:
+
+.. code-block:: sql
+
+    create table test (id number, name varchar2(25));
+
+Instead of looping through each line of the CSV file and inserting it
+individually, you can insert batches of records using
+:meth:`Cursor.executemany()`:
+
+.. code-block:: python
+
+    import cx_Oracle
+    import csv
+
+    . . .
+
+    # Predefine the memory areas to match the table definition
+    cursor.setinputsizes(None, 25)
+
+    # Adjust the batch size to meet your memory and performance requirements
+    batch_size = 10000
+
+    with open('data.csv', 'r') as csv_file:
+        csv_reader = csv.reader(csv_file, delimiter=',')
+        sql = "insert into test (id,name) values (:1, :2)"
+        data = []
+        for line in csv_reader:
+            data.append((line[0], line[1]))
+            if len(data) % batch_size == 0:
+                cursor.executemany(sql, data)
+                data = []
+        if data:
+            cursor.executemany(sql, data)
+        con.commit()
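The batching pattern is independent of the database calls: full batches are flushed when the row count reaches a multiple of ``batch_size``, and the trailing ``if data:`` flushes the remainder. A self-contained sketch of just that logic, using an in-memory CSV and a stand-in for ``executemany()`` (both hypothetical):

```python
import csv
import io

# Stand-in for cursor.executemany(): records the size of each flushed batch
flushed = []

def execute_batch(rows):
    flushed.append(len(rows))

# Hypothetical in-memory CSV standing in for data.csv: 25 rows of id,name
csv_text = "\n".join(f"{100 + i},name{i}" for i in range(25))

batch_size = 10
data = []
for line in csv.reader(io.StringIO(csv_text)):
    data.append((line[0], line[1]))
    # Flush whenever a full batch has accumulated
    if len(data) % batch_size == 0:
        execute_batch(data)
        data = []
# Flush the final, partial batch
if data:
    execute_batch(data)

print(flushed)  # -> [10, 10, 5]
```

Two full batches of 10 and one partial batch of 5 are flushed, so every row is sent in at most ``ceil(25 / 10) = 3`` round trips rather than 25 single-row inserts.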