sql - Alternative to MySQL LOAD LOCAL INFILE REPLACE to update a table -


If I have a file:

1  someval1
2  someval2
...

and a table in MySQL:

| id | value |
|----|-------|
|  1 | null  |
|  2 | null  |
...

and my goal is to set the value column of the table to the value column of the CSV file wherever the ids match, I can use this query (but I don't want to):

load data
  local infile 'file'
  replace
  into table mytable
  fields terminated by ' '
  lines terminated by '\n'
  (id, value);

The reason I don't want to use REPLACE is that, according to the MySQL docs, it deletes the entire row whenever there is a match on the primary key (in this case, 'id') and re-inserts the row with the information from the file. I'm assuming this means that if I add a column to the table at some point in the future, I'd have to make sure that whatever file I use with this query contains all of the column values for every row that might be updated; otherwise the non-targeted columns would be wiped out.
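To make the concern concrete, here is a minimal sketch of what the docs describe; the table, data and extra column are hypothetical, not from the question:

-- hypothetical table with an extra column the file knows nothing about
create table mytable (
  id int primary key,
  value varchar(45) null,
  extra varchar(45) null);

insert into mytable values (1, null, 'keep me');

-- replace deletes the matched row and inserts a fresh one, so any column
-- not supplied (extra, here) falls back to its default, i.e. null
replace into mytable (id, value) values (1, 'someval1');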

Isn't there a better alternative for updating a table from a local file, where one (or maybe more?) of the columns in the input file contains the primary keys that need to be matched?

One possible approach:

Use LOAD DATA INFILE to load the CSV data into a temporary table.

Then, use INSERT ... SELECT ... ON DUPLICATE KEY UPDATE to perform the update.

Edit: a worked and tested example.

The problem with LOAD DATA INFILE, as identified by the OP, is that it is optimised for speed and is pretty dumb when it comes to handling collisions in the incoming data.

It has two options: REPLACE or IGNORE. REPLACE is a real sledgehammer of a solution. When a duplicate key is found, it deletes the original row and inserts the new one. The problem here is that any data in the existing row that is not contained in the CSV file is lost.

The alternative, IGNORE, fares little better. It doesn't affect existing data, but when it's done there's no way to tell which lines in the CSV file have been applied, and which have not.
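For reference, the IGNORE variant of the question's query would look something like this (same hypothetical file and table as in the question):

-- existing rows are left untouched, but no per-line report is produced,
-- so you can't tell which csv lines were skipped
load data
  local infile 'file'
  ignore
  into table mytable
  fields terminated by ' '
  lines terminated by '\n'
  (id, value);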

LOAD DATA INFILE is great at what it's best at: rapid bulk uploading of new data. For updating existing data it may have its uses, but it's not the right tool for the job.

We could write a routine in our favourite language (or the language we use, if it's not our favourite - PHP anyone?) to parse the CSV file and apply the updates with INSERT ... ON DUPLICATE KEY UPDATE ... queries, but that introduces an additional level of scripting and denies MySQL the opportunity to optimise what it's doing. In that case, a large number of individual INSERT queries is far from efficient. We could use the INSERT syntax that allows multiple value sets, but for large imports this potentially runs up against the maximum request size limit.
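For example, a script-built statement along these lines (the values are purely illustrative) avoids one round trip per row, but the generated text still has to fit inside the server's max_allowed_packet:

-- one statement carrying many value sets; for a big csv file this text
-- can grow past the max_allowed_packet limit
insert into mytable (id, value)
values (1, 'someval1'), (2, 'someval2'), (3, 'someval3')
on duplicate key update value = values(value);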

The solution is to use a two-stage approach and let MySQL do what it does best.

Import the CSV file into a temporary table with LOAD DATA INFILE, then use INSERT ... SELECT ... ON DUPLICATE KEY UPDATE to handle the update process in a more intelligent way.

Suppose we have a table like this:

create table `test`.`bestmate` (
  `id` int not null,
  `name` varchar(45) null,
  `age` int null,
  `bestmate` varchar(45) null,
  primary key (`id`),
  unique index `id_unique` (`id` asc));

We want to import CSV data on a regular basis from another source, like this:

id, name, age
1, fred, 16
2, alan, 22
3, george, 26

Our bestmate column is updated by our users online; we don't have that information in our CSV file, and we don't want to lose it.
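For illustration, suppose the table already holds rows like these (hypothetical data, not part of the original example), with bestmate values that we need to preserve:

-- existing rows whose bestmate values must survive the import
insert into `test`.`bestmate` (`id`, `name`, `age`, `bestmate`)
values (1, 'fred', 15, 'alan'),
       (2, 'alan', 21, 'fred');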

First up, create a temporary table to hold our CSV data and import the file.

drop table if exists `csvtable`;

create temporary table `csvtable`
  (`id` int not null,
  `name` varchar(45) null,
  `age` int null);

load data local infile 'c:\\users.csv' into table `csvtable`
  fields terminated by ','
  optionally enclosed by '"'
  lines terminated by '\r\n'
  ignore 1 lines;
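One caveat: depending on the server and client configuration, LOAD DATA LOCAL INFILE may be disabled. A quick check, assuming you have the privileges to change the setting (the client connection must also permit LOCAL):

show variables like 'local_infile';

-- enable server-side support for local infile if it is off
set global local_infile = 1;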

Now we can insert or update the rows in our bestmate table in one command, leaving the other data in the table untouched.

insert into `bestmate` (`id`,`name`,`age`)
  select * from `csvtable`
  on duplicate key update
    `bestmate`.`name` = `csvtable`.`name`,
    `bestmate`.`age` = `csvtable`.`age`;
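One design note: the select * form relies on csvtable having exactly the columns id, name and age, in that order. If the staging table ever gains extra columns, an explicit column list (same logic, just spelled out) is a safer sketch:

insert into `bestmate` (`id`,`name`,`age`)
  select `id`, `name`, `age` from `csvtable`
  on duplicate key update
    `bestmate`.`name` = `csvtable`.`name`,
    `bestmate`.`age` = `csvtable`.`age`;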

Finally, tidy up after ourselves.

drop table if exists `csvtable`;
