SQL question

Carl Sappenfield CSAPPENFIELD at kc.rr.com
Sun Aug 10 02:19:49 CDT 2003


A reorg, or whatever mimics a reorg in that DB, is surely the way to go.
After all, you're going to have to reorg the table anyway after deleting
that many rows.

----- Original Message -----
From: "Jim Herrmann" <jimh at ItDepends.com>
To: "KCLUG (E-mail)" <kclug at kclug.org>
Sent: Saturday, August 09, 2003 12:32 AM
Subject: Re: SQL question

> First off, as a DB2 DBA of 15 years, I would have used a reorg with
> discards to delete that much data.  It's waaay faster, because it
> doesn't do logging, and doesn't go through the database engine.
> However, since you didn't tell us what kind of database you are using,
> I'm not sure if you have that option.  If you don't have a utility
> option to complete this process, or you simply can't take the outage
> that a utility solution would require, and you must use SQL, then I have
> a suggestion or two.  Does this database management system handle
> transactions, meaning that it does logging?  If it does, then *never*
> try to delete that much data in one statement.  One way to handle it is
> to write a program to open a cursor with hold, then delete say 10,000
> rows, and commit.  The "with hold" means that the cursor is not closed
> at commit time.  Another way to break it up is by limiting the result
> set of your delete with the predicate (where clause).  Experiment with
> SELECT statements until you get the result sets down to a smaller number
> that your logs can handle, then use the same predicate for your delete
> statement.  Repeat with different predicates that take bite-sized
> chunks of data, and commit in between each delete.
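The chunked-delete idea above can be sketched like this. This is a minimal stand-in using Python's sqlite3 module, not the poster's actual DBMS; the table, column names, and chunk size are invented for illustration, but the pattern (delete a bounded batch, commit, repeat) applies to any transactional database:

```python
import sqlite3

# Stand-in table: "messages" with a "mark" column, mirroring the schema
# discussed later in the thread. 100,000 rows, about 1/3 of which match.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (indice INTEGER PRIMARY KEY, mark INTEGER)")
conn.executemany("INSERT INTO messages VALUES (?, ?)",
                 [(i, i % 3) for i in range(100_000)])
conn.commit()

CHUNK = 10_000
deleted = 0
while True:
    # Delete at most CHUNK matching rows, then commit, so the log
    # (journal) never has to hold the entire delete at once.
    cur = conn.execute(
        "DELETE FROM messages WHERE rowid IN ("
        "  SELECT rowid FROM messages WHERE mark = 0 LIMIT ?)",
        (CHUNK,))
    conn.commit()
    deleted += cur.rowcount
    if cur.rowcount == 0:
        break

print(deleted)  # 33334 rows removed, in four transactions instead of one
```

The `rowid IN (SELECT ... LIMIT ?)` form is used because plain `DELETE ... LIMIT` is not available in standard SQLite builds; other DBMSs have their own idioms (e.g. `DELETE TOP (n)` or `FETCH FIRST n ROWS ONLY`) for the same trick.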
>
> Utilities are your friends.  Go that route if at all possible.  If you
> want more help, just let me know what kind of DBMS you are working with,
> and I can help you work through it.  Databases are my life.  :-)
>
> Peace,
> Jim Herrmann
>
> Brian Densmore wrote:
>
> >Well, although this is not Linux related, it could easily be.
> >I have a somewhat large database (approx 180 million records)
> >that I need to delete certain "sets" from based on a foreign
> >key in a smaller table (approx 4 million records) and the
> >records I want to delete match a field in a third table
> >(very small < 100 records). I ran this delete query and it ran
> >for 18 hours and still didn't finish. We had to kill it
> >(because the server was swapping back and forth and came
> >to a standstill because we only have a 1/2 gig of RAM).
> >Anyway, I have two questions:
> >1) why should it take 18 hours to run (the delete query should
> >have deleted about 1/3 of the data in the database), and
> >2) is there any way to have a single delete query statement
> >do periodic commits (as dangerous as that is)?
> >
> >basically what I have is this:
> >
> >table bad_marks (uid,mark)
> >table messages  (indice,mark,...)
> >table msg_elements (id,indice,...)
> >
> >where bad_marks.mark 1->M/1->0 messages.mark, bad_marks.uid = PK
> >      messages.indice 1->M msg_elements.indice, messages.indice = PK,
> >      messages.mark = indexed
> >      msg_elements.id = PK, msg_elements.indice = indexed
> >and there is no referential integrity turned on (yet).
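For concreteness, the three-table delete described above might look like the following. This is a hedged sketch against sqlite3 with invented sample data; the join conditions (bad_marks.mark -> messages.mark, messages.indice -> msg_elements.indice) are one reading of the schema notation and may need adjusting:

```python
import sqlite3

# Miniature versions of the three tables, with the indexes the poster
# describes. Row counts and values here are made up for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bad_marks (uid INTEGER PRIMARY KEY, mark INTEGER);
    CREATE TABLE messages (indice INTEGER PRIMARY KEY, mark INTEGER);
    CREATE INDEX idx_messages_mark ON messages(mark);
    CREATE TABLE msg_elements (id INTEGER PRIMARY KEY, indice INTEGER);
    CREATE INDEX idx_msg_elements_indice ON msg_elements(indice);
""")
conn.execute("INSERT INTO bad_marks VALUES (1, 7)")
conn.executemany("INSERT INTO messages VALUES (?, ?)",
                 [(i, 7 if i % 2 else 5) for i in range(10)])
conn.executemany("INSERT INTO msg_elements VALUES (?, ?)",
                 [(n, n % 10) for n in range(100)])

# Delete the child rows first (no referential integrity is on, but this
# keeps the data consistent), then the parent messages.
conn.execute("""
    DELETE FROM msg_elements WHERE indice IN (
        SELECT indice FROM messages
        WHERE mark IN (SELECT mark FROM bad_marks))
""")
conn.execute("""
    DELETE FROM messages
    WHERE mark IN (SELECT mark FROM bad_marks)
""")
conn.commit()
left = conn.execute("SELECT COUNT(*) FROM msg_elements").fetchone()[0]
print(left)  # 50 element rows survive; the other 50 belonged to bad marks
```

At 180 million rows this statement would still want the chunking treatment from earlier in the thread (and indexes on messages.mark and msg_elements.indice matter a great deal, which the poster says are already in place).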
> >
> >Thanks,
> >Brian
> >
> >"Three OS's from corporate-kings in their towers of glass,
> >Seven from valley-lords where orchards used to grow,
> >Nine from dotcoms doomed to die,
> >one from the dark lord Gates on his dark throne
> >In the Land of Redmond where the Shadows lie.
> >one OS to rule them all, one OS to find them,
> >one OS to bring them all and in the darkness bind them,
> >In the Land of Redmond where the Shadows lie."    john thrum

More information about the Kclug mailing list