Query To Find The Duplicate Records
SELECT ColumnName1, ColumnName2, ColumnNameN, count(*) AS UserDefinedColName
FROM TableName
GROUP BY ColumnName1, ColumnName2, ColumnNameN
HAVING count(*) > 1
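A minimal, runnable sketch of the same grouping technique, using Python's built-in sqlite3 module (the table and column names here are illustrative, not from the post):

```python
import sqlite3

# In-memory database with a hypothetical Users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (FirstName TEXT, LastName TEXT)")
conn.executemany(
    "INSERT INTO Users VALUES (?, ?)",
    [("Ann", "Lee"), ("Ann", "Lee"), ("Bob", "Ray")],
)

# Group on the columns that define a duplicate; HAVING count(*) > 1
# keeps only groups that appear more than once.
rows = conn.execute(
    """SELECT FirstName, LastName, count(*) AS DupCount
       FROM Users
       GROUP BY FirstName, LastName
       HAVING count(*) > 1"""
).fetchall()
print(rows)  # [('Ann', 'Lee', 2)]
```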
Query To Delete Duplicate Records
SELECT DISTINCT * INTO #tmpTableName FROM TableName
TRUNCATE TABLE TableName
INSERT INTO TableName SELECT * FROM #tmpTableName
DROP TABLE #tmpTableName
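The same four-step pattern can be sketched with sqlite3 (SQLite has no SELECT ... INTO or TRUNCATE, so CREATE TABLE ... AS and DELETE stand in for them; names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (FirstName TEXT, LastName TEXT)")
conn.executemany("INSERT INTO Users VALUES (?, ?)",
                 [("Ann", "Lee"), ("Ann", "Lee"), ("Bob", "Ray")])

# 1. Copy the distinct rows into a temp table
#    (stands in for SQL Server's SELECT DISTINCT * INTO #tmp).
conn.execute("CREATE TEMP TABLE tmpUsers AS SELECT DISTINCT * FROM Users")
# 2. Empty the original table (stands in for TRUNCATE TABLE).
conn.execute("DELETE FROM Users")
# 3. Copy the de-duplicated rows back.
conn.execute("INSERT INTO Users SELECT * FROM tmpUsers")
# 4. Drop the temp table.
conn.execute("DROP TABLE tmpUsers")

remaining = conn.execute("SELECT * FROM Users ORDER BY FirstName").fetchall()
print(remaining)  # [('Ann', 'Lee'), ('Bob', 'Ray')]
```

Note that in SQL Server, TRUNCATE will fail if the table is referenced by a foreign key; in that case DELETE is the safer substitute there too.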
5 comments:
Wouldn't it make _much_ more sense to use a schema that does not allow you to corrupt your database in the first place?
And if your schema deliberately allows duplicates because you need them, there is no reason to delete them.
hi joonas,
thanks for your comments. Consider this scenario: data is bulk-inserted into the table by importing a text file, and the job runs twice in a day (say, morning and evening), so the same file is imported twice and duplicate records end up in the table. I wrote this article with that scenario in mind. If you have a better way to handle this situation, please share it with us.
Perhaps I'm repeating what Joonas said, but again, such duplicates shouldn't be allowed to begin with; they should be handled when you do the bulk insert.
hi michael,
if you have any ideas on how to avoid duplicate records while doing a bulk insert, please share them with us
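One common pattern for this (a sketch of my own, not from the post: load the file into a staging table, then insert only the rows that are not already in the target, so re-running the same import is harmless). Demonstrated with sqlite3 and hypothetical names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (FirstName TEXT, LastName TEXT)")
conn.execute("INSERT INTO Users VALUES ('Ann', 'Lee')")

# Hypothetical staging step: the imported file's rows land here first.
conn.execute("CREATE TEMP TABLE Staging (FirstName TEXT, LastName TEXT)")
conn.executemany("INSERT INTO Staging VALUES (?, ?)",
                 [("Ann", "Lee"), ("Bob", "Ray")])  # 'Ann Lee' already exists

# Insert only staged rows that are not already in the target table.
conn.execute("""
    INSERT INTO Users
    SELECT s.FirstName, s.LastName
    FROM Staging s
    WHERE NOT EXISTS (SELECT 1 FROM Users u
                      WHERE u.FirstName = s.FirstName
                        AND u.LastName = s.LastName)
""")

rows = conn.execute("SELECT * FROM Users ORDER BY FirstName").fetchall()
print(rows)  # [('Ann', 'Lee'), ('Bob', 'Ray')]
```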
This will delete all records from the table MyTable that have the same value in the field dupField, keeping only the record with the lowest value in uniqueField.
DELETE T1
FROM MyTable T1, MyTable T2
WHERE T1.dupField = T2.dupField
  AND T1.uniqueField > T2.uniqueField
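That self-join delete can be verified with an equivalent correlated subquery in sqlite3 (SQLite does not support the T-SQL DELETE ... FROM join form; the delete below keeps the row with the minimum uniqueField per dupField, which is the same result):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (uniqueField INTEGER, dupField TEXT)")
conn.executemany("INSERT INTO MyTable VALUES (?, ?)",
                 [(1, "a"), (2, "a"), (3, "b"), (4, "b"), (5, "b")])

# Delete every row for which another row with the same dupField but a
# smaller uniqueField exists, i.e. keep the minimum per group.
conn.execute("""
    DELETE FROM MyTable
    WHERE uniqueField > (SELECT min(uniqueField) FROM MyTable t2
                         WHERE t2.dupField = MyTable.dupField)
""")

rows = conn.execute("SELECT * FROM MyTable ORDER BY uniqueField").fetchall()
print(rows)  # [(1, 'a'), (3, 'b')]
```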