Question on Ole db destination provider - SSIS
Hello there. I have a package that imports data from a file into a SQL Server destination. My file has duplicate values in the primary key column, which causes the package to fail, since the table will not allow insertion of duplicate primary keys. I use an OLE DB Destination task, and on this task I have set up error redirection so that any rows causing errors are redirected. I am facing a different problem due to this property: I have set my OLE DB Destination data access mode to "Table or view - fast load". If I run the package, ALL my rows fail insertion into the table and get redirected to my error output file. However, if I change the data access mode to "Table or view", the package runs and only the duplicate rows get redirected to the error file. I am not clear on why this happens; can anyone please advise? Since the input data is very large (more than 5 million rows), the "Table or view" data access mode makes the insert very slow. Please correct me if I am doing anything wrong. Regards, Prasanna
April 14th, 2011 5:52pm

In the OLE DB Destination: change AccessMode to OpenRowSet. http://www.rad.pasfu.com
April 15th, 2011 1:56am

In the OLE DB Destination: change AccessMode to OpenRowSet. Isn't that the same as selecting "Table or view" in the editor? And the problem of the slow insert with that option still remains. (The behavior is expected, by the way: with fast load, rows are committed in batches, so if any row in a batch violates the primary key, the entire batch is redirected to the error output. In "Table or view" mode rows are inserted one at a time, so only the offending row is redirected.) Prasanna, you should try to filter out the duplicate records before the OLE DB Destination, so that you can still use the fast load option (but NOT with the Sort component; that will kill your performance). Another option is to drop the indexes (which is not a bad idea anyway if you are inserting 5 million rows), remove any duplicates if necessary, and create the indexes again. MCTS, MCITP - Please mark posts as answered where appropriate.
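For reference, the "filter duplicates before the destination without sorting" idea can be sketched as a single streaming pass that remembers which keys it has already seen. This is a minimal illustration in Python (e.g. as a preprocessing step before SSIS picks up the file); the CSV layout and the key being the first column are assumptions, not details from the thread:

```python
import csv

def dedupe_by_key(in_path, out_path, key_index=0):
    """Write only the first occurrence of each primary-key value.

    Streams the file once and tracks seen keys in a set, so no
    full sort of 5 million rows is needed (unlike the SSIS Sort
    component, which blocks until all rows have arrived).
    """
    seen = set()
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.reader(src)
        writer = csv.writer(dst)
        for row in reader:
            key = row[key_index]
            if key not in seen:
                seen.add(key)
                writer.writerow(row)
```

The trade-off is memory: the set holds one entry per distinct key, which is usually acceptable for a few million short keys.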
April 15th, 2011 4:48am

Workaround: first load the file into a staging area (a temp table), then use another DFT that selects DISTINCT records from the staging table. After the load, truncate the staging table. Let us TRY this | My Blog :: http://quest4gen.blogspot.com/
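The staging pattern described above can be sketched as follows. This uses SQLite purely so the example is self-contained and runnable; in the actual setup the staging and target tables would live in SQL Server, and the table and column names here are illustrative assumptions. Note that SELECT DISTINCT only removes duplicates when entire rows are identical; if the same key can carry different payloads, a GROUP BY on the key is needed instead:

```python
import sqlite3

# Staging pattern: bulk-load everything into an unconstrained
# staging table, insert only distinct rows into the keyed target,
# then empty the staging table for the next load.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging (pk INTEGER, payload TEXT)")
conn.execute("CREATE TABLE target (pk INTEGER PRIMARY KEY, payload TEXT)")

# The fast load into staging succeeds even with duplicate keys,
# because staging has no primary key constraint.
rows = [(1, "a"), (2, "b"), (1, "a"), (3, "c")]
conn.executemany("INSERT INTO staging VALUES (?, ?)", rows)

# Move distinct rows into the constrained target table.
conn.execute("INSERT INTO target SELECT DISTINCT pk, payload FROM staging")

# Truncate staging (SQLite has no TRUNCATE; DELETE is its equivalent).
conn.execute("DELETE FROM staging")
```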
April 15th, 2011 5:07am

Workaround: first load the file into a staging area (a temp table), then use another DFT that selects DISTINCT records from the staging table; truncate the staging table after the load. Isn't that quite resource intensive for 5 million rows? MCTS, MCITP - Please mark posts as answered where appropriate.
April 15th, 2011 5:16am

Hi Prasanna, what is the new data access mode you are now using for the OLE DB destination? I could not work it out from the statement: "I am facing a different problem due to this property, i have set my ole db destination -- data access as table or view - fast load. If i run the package, all my rows fail insertion into my table and get redirected to my error output file. However if i change my data access mode -- table or view, this package runs and only the duplicate row gets redirected to the error file." Regards and good wishes, Deepak.
April 15th, 2011 5:18am

Isn't that quite resource intensive for 5 million rows? MCTS, MCITP - Please mark posts as answered where appropriate. Hi Koen, once the data is inside SQL Server, it is easy to work with. You can also avoid multiple transformations in SSIS if you use an optimized SQL query in the OLE DB source. Let us TRY this | My Blog :: http://quest4gen.blogspot.com/
April 15th, 2011 6:44am

Hi Koen, once the data is inside SQL Server, it is easy to work with. You can also avoid multiple transformations in SSIS if you use an optimized SQL query in the OLE DB source. I was thinking more about the resources used to write 5 million rows to disk in SQL Server just so you can read them out again with a DISTINCT query. MCTS, MCITP - Please mark posts as answered where appropriate.
April 15th, 2011 7:05am

Hello all, thanks for your replies. I tried a couple of the options mentioned above, but finally settled on the following: the primary key is not created on the destination up front. The data is loaded, duplicates are checked and dropped if required, and only then is the primary key created. The reasons for this were:
1. Initially I tried selecting only the unique rows from Sybase (the source), but this resulted in a double table scan, which made the data migration very slow. The query used was: select * from table1 where table1.primary_key not in (select table1.primary_key from table1 group by primary_key having count(*) > 1)
2. Also, since the primary key already existed in the destination, we had to rebuild the indexes again anyway.
So we now load into the destination without primary keys, and then create the primary key on the required tables.
April 18th, 2011 11:14am

Glad that you found a working solution, and thanks for posting back! MCTS, MCITP - Please mark posts as answered where appropriate.
April 18th, 2011 1:11pm

This topic is archived. No further replies will be accepted.
