Tuesday, March 27, 2012

Copy field data from one table to another

Let's say I have Table1 and Table2, both with exactly the same
structure and data, except that Table1.Field1 is empty while
Table2.Field1 has data in it.

How could I copy all the data from Table2.Field1 to Table1.Field1?
Is there any simpler way than looping through all the records?

Sorry, just some add-on to my previous question.
If I have this:

Table1
RecId ==> record ID
Name ==> name (empty)
Desc ==> description
... (another 23 fields to go)

Table2
RecId ==> recordID
Name ==> name (with data)
Desc ==> description
... (another 29 fields to go)

Table1.Name is empty, while Table2.Name has data.
How could I write a SQL statement to copy all the data from Table2.Name
to Table1.Name, since the table structures are different? I need to copy
the data in by matching the RecId.

I mean, copy Table2.Name to Table1.Name where their RecId is the
same.
Just as extra info, there are 500,000 records in the tables.

Thanks a lot in advance.

Peter CCH

Hi

Assuming RecId is a key that you can match the records on, and that fields
with no value are NULL:

UPDATE T
SET Name = S.Name,
    [Desc] = S.[Desc],
    ...
FROM Table1 T
JOIN Table2 S ON S.RecId = T.RecId
WHERE T.Name IS NULL
   OR T.[Desc] IS NULL

If you don't want to touch a Table1 column at all when it already has a value,
try something like:

UPDATE T
SET Name = CASE WHEN T.Name IS NULL THEN S.Name ELSE T.Name END,
    [Desc] = CASE WHEN T.[Desc] IS NULL THEN S.[Desc] ELSE T.[Desc] END,
    ...
FROM Table1 T
JOIN Table2 S ON S.RecId = T.RecId
WHERE T.Name IS NULL
   OR T.[Desc] IS NULL

John
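
A quick way to gauge the impact before running such an update (just a sketch,
reusing the same example table and column names) is to count how many rows
would actually qualify:

-- Sketch: how many rows would the update above touch?
SELECT COUNT(*)
FROM Table1 T
JOIN Table2 S ON S.RecId = T.RecId
WHERE T.Name IS NULL
   OR T.[Desc] IS NULL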

"Peter CCH" <petercch.wodoy@.gmail.com> wrote in message
news:1112424964.224162.251730@.g14g2000cwa.googlegr oups.com...
> Sorry, just some add on to my previous question.
> If I have this:
> Table1
> RecId ==> record ID
> Name ==> name (empty)
> Desc ==> description
> ... (another 23 fields to go)
> Table2
> RecId ==> recordID
> Name ==> name (with data)
> Desc ==> description
> ... (another 29 fields to go)
> The Table1.Name is empty in data, then Table2.Name have data.
> How could I write a SQL statement to copy all the data from Table2.Name
> to Table1.Name since the table structure is different. I need to copy
> the data in by mathing the RecordId.
> I mean, copy Table2.Name to Table1.Name where their recordID is the
> same.
> Just as an extra info, there're 500,000 records in the tables.
>
> Thanks a lot in advance.
>
>
Since it joins both of the tables first and only then updates the values, if
the table has 500,000 records, will the SQL statement above take a very long
time? (Assume it updates all 500,000 records.)

Peter CCH

Hi

500,000 rows would not take that long, and since the WHERE condition limits
the rows, you may see fewer being updated. You may also want to check that
you are not updating rows with the same values.

UPDATE T
SET Name = S.Name,
    [Desc] = S.[Desc],
    ...
FROM Table1 T
JOIN Table2 S ON S.RecId = T.RecId
WHERE ( T.Name IS NULL AND S.Name IS NOT NULL )
   OR ( T.[Desc] IS NULL AND S.[Desc] IS NOT NULL )

John

"Peter CCH" <petercch.wodoy@.gmail.com> wrote in message
news:1112454364.911855.240540@.o13g2000cwo.googlegr oups.com...
> Since it join both of the tables first then only update the value, if
> the table have 500,000 records, will the SQL statement above takes very
> long time? (Assume it update that 500,000 records)|||Peter CCH (petercch.wodoy@.gmail.com) writes:
> Since it join both of the tables first then only update the value, if
> the table have 500,000 records, will the SQL statement above takes very
> long time? (Assume it update that 500,000 records)

Updating 500,000 rows is usually not a snap. Exactly how long it takes depends
on your hardware, but also on the definition of the column. If the column
is fixed-length, all updates can be done in place. But if the column is a
varchar or varbinary column, many rows will grow out of their current
pages, so there will be a lot of page splits and data moved around. Again,
exactly how massive this effect will be depends on the data. If the
average length is three characters, you may have luck and most pages will
have space to spare. If the average length is 100 bytes, this is less
likely.
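
One rough way to get a feel for that (a sketch; the table and column names
are just the thread's examples) is to look at the average stored length of
the column you are about to copy:

-- Sketch: average byte length of the source values (AVG ignores NULLs)
SELECT AVG(DATALENGTH(Name)) AS AvgNameBytes
FROM Table2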

Another thing that matters is whether the data and log files have the space.
Updating half a million rows will take a toll on the log file. Exactly
how big that toll is depends on the width of the table. If each row
is 10 bytes on average, you need a lot less log than if the average row
length is 360 bytes. If the log does not have the space, it will have to
autogrow, and autogrow does not come for free. If you are running with
simple recovery, you can hold down the log explosion by doing the
update in batches of 50,000 or so.

But in short, the only way to get an answer to a question like this one is
to benchmark.

--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp
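
To make the batching idea above concrete, here is a minimal sketch of an
update done in chunks of 50,000 rows. It assumes SQL Server 2005 or later
for UPDATE TOP, reuses the thread's example names, and copies only Name;
the other columns would be handled the same way:

DECLARE @rows int
SET @rows = 1

WHILE @rows > 0
BEGIN
    -- Each pass copies up to 50,000 values that are still missing in Table1.
    -- Rows updated in earlier passes no longer match the WHERE clause,
    -- so the loop stops once nothing is left to copy.
    UPDATE TOP (50000) T
    SET Name = S.Name
    FROM Table1 T
    JOIN Table2 S ON S.RecId = T.RecId
    WHERE T.Name IS NULL
      AND S.Name IS NOT NULL

    SET @rows = @@ROWCOUNT
END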
