Deleting 100 Million Rows From A Table Weekly (SQL Server 2000)

Nov 5, 2005

Hi All

We have a table in SQL Server 2000 which has about 250 million records,
and it grows by 100 million every week. At any time the table
should contain just 13 weeks of data: when the 14th week's data needs to
be loaded, the first week's data has to be deleted.

That means deleting 100 million records every week, and since the delete
consumes a lot of transaction log space, the job fails.

Can you please help with the approaches we can take to fix
this problem?

Performance and the transaction log are the issues we are facing. We tried
deleting in steps too, but that also takes too long. What are the
different ways we can address this quickly?
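
Since this is SQL Server 2000 (no native table partitioning), one approach that avoids logging 100 million deletes altogether is a partitioned view with one member table per week; retiring a week then becomes a TRUNCATE TABLE, which logs only page deallocations. This is only a sketch - the table and column names are hypothetical stand-ins for the real schema:

-- One member table per week, each pinned to its week by a CHECK constraint:
CREATE TABLE dbo.Fact_Week01 (
    WeekNo    int          NOT NULL CHECK (WeekNo = 1),
    EventDate datetime     NOT NULL,
    Payload   varchar(100) NULL,
    CONSTRAINT PK_Fact_Week01 PRIMARY KEY (WeekNo, EventDate) -- assumes the pair is unique
)
-- ...create dbo.Fact_Week02 through dbo.Fact_Week13 the same way...
GO

-- The view presents the 13 member tables as one logical table:
CREATE VIEW dbo.Fact AS
SELECT WeekNo, EventDate, Payload FROM dbo.Fact_Week01
UNION ALL
SELECT WeekNo, EventDate, Payload FROM dbo.Fact_Week02
-- ...UNION ALL the remaining member tables...
GO

-- Retiring the oldest week now takes seconds and barely touches the log:
TRUNCATE TABLE dbo.Fact_Week01

After the truncate, the empty table can be reused for week 14 by dropping and re-creating its CHECK constraint with the new week number. If restructuring isn't an option, the fallback is deleting in small batches with frequent log backups, but that still writes every deleted row to the log.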

Please reply at the earliest.

Thanks
Harish

View 4 Replies


DB Engine :: Deleting 1 Million Records From Transaction Table Of 10 Million Data On 24/7 Environment

Jun 12, 2015

I have a requirement to delete 1 million records from a table holding 10 million rows that is queried on a 24/7 basis (we don't have a downtime window). How can I achieve that?
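
A common pattern for purging from a table that cannot go offline (a sketch only; the table name, column, and cutoff are assumptions) is to delete in small batches, so each transaction is short, locks stay at the row/page level, and concurrent queries are only ever blocked briefly:

DECLARE @cutoff datetime = DATEADD(MONTH, -12, GETDATE());  -- hypothetical purge criterion
DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    -- under ~5,000 rows per batch keeps locks from escalating to a table lock
    DELETE TOP (4000) FROM dbo.Transactions
    WHERE CreatedDate < @cutoff;

    SET @rows = @@ROWCOUNT;
    WAITFOR DELAY '00:00:01';   -- give concurrent readers/writers room to run
END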

View 13 Replies View Related

Deleting 8 Million Rows From Database

Jul 16, 2013

I am deleting 8 million rows from my database and I am wondering how to control the T-log. I have also heard something about row locks and table locks.

View 4 Replies View Related

SQL Server 2012 :: Copy A Table With 200 Million Rows To Another Table On Same Server

Aug 11, 2014

I need to use a BULK INSERT statement to copy a table with 200 million rows to another table on the same server. The table has no primary key or identity column. I need a script for the BULK INSERT.
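
BULK INSERT reads from a data file, not from another table, so the usual two-step is to export with bcp and then bulk load. A sketch, assuming hypothetical database, table, and path names (and note that for a same-server copy, SELECT ... INTO or INSERT ... SELECT with a TABLOCK hint is often simpler):

-- Step 1 (command line): export in native format
--   bcp MyDb.dbo.SourceTable out C:\temp\SourceTable.dat -n -T -S MyServer

-- Step 2 (T-SQL): load the file into the target table
BULK INSERT dbo.TargetTable
FROM 'C:\temp\SourceTable.dat'
WITH (
    DATAFILETYPE = 'native',   -- matches bcp's -n flag
    TABLOCK,                   -- allows minimal logging under SIMPLE/BULK_LOGGED recovery
    BATCHSIZE = 1000000        -- commit every million rows so the log can clear
);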

View 9 Replies View Related

Generating Records Weekly / Bi Weekly Based On The Received Date Field

Feb 18, 2014

I have a query that generates records monthly, based on the number of months I calculate between two date fields for a given requestid. How can I use the same query to generate records weekly and bi-weekly, based on the receiveddate field that I use in the subtraction when calculating the number of months?

Also, when inserting, I have been adding a month to every record since I was generating monthly; now I would have to add one week or two weeks to the receiveddate.

SET NOCOUNT ON
GO
declare @num_of_times int
declare @count int
declare @frequency varchar(10)
declare @num_of_times1 int

[Code] ....
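
The core change is to drive both the interval count and the increment from DATEADD/DATEDIFF with a WEEK datepart instead of MONTH. A sketch of the idea with hypothetical variable values (the real query's body is elided above):

DECLARE @frequency    varchar(10) = 'biweekly';   -- 'weekly' or 'biweekly'
DECLARE @receiveddate datetime    = '2014-01-06';
DECLARE @enddate      datetime    = '2014-06-30';

-- step size in weeks: 1 for weekly, 2 for bi-weekly
DECLARE @step int = CASE @frequency WHEN 'weekly' THEN 1 WHEN 'biweekly' THEN 2 END;

-- number of records to generate (approximate: DATEDIFF counts week boundaries crossed)
DECLARE @num_of_times int = DATEDIFF(WEEK, @receiveddate, @enddate) / @step;

DECLARE @count int = 1;
WHILE @count <= @num_of_times
BEGIN
    -- the generated record's date: receiveddate plus N steps of 1 or 2 weeks
    SELECT DATEADD(WEEK, @step * @count, @receiveddate) AS GeneratedDate;
    SET @count = @count + 1;
END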

View 6 Replies View Related

SQL Server 2014 :: Insert 500 Million Rows Into In-memory Table

Jul 29, 2014

I am doing performance testing of the In-Memory OLTP option in SQL Server 2014. As part of this, I want to insert 500 million rows into an in-memory enabled test table I have created.

I need a sample script to insert 500 million records into a table.
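
A common way to mass-generate rows is a cross join over a small derived numbers set. A sketch; the target table and its columns are assumptions:

-- 1,000 sequential numbers from the system catalogs
;WITH N(n) AS (
    SELECT TOP (1000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM sys.all_objects a CROSS JOIN sys.all_objects b
)
INSERT INTO dbo.InMemTest (Id, Payload)                   -- hypothetical in-memory table
SELECT a.n + (b.n - 1) * 1000 + (c.n - 1) * 1000000,      -- unique ids 1 .. 500,000,000
       'test data'
FROM N a
CROSS JOIN N b
CROSS JOIN (SELECT TOP (500) n FROM N ORDER BY n) c;      -- 1000 x 1000 x 500 = 500 million
-- in practice, run one c-slice at a time (add WHERE c.n BETWEEN @lo AND @hi):
-- memory-optimized rows and their versions live in RAM, so a single
-- 500-million-row transaction is not realistic.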

View 9 Replies View Related

Deleting The Master Table Without Deleting The Child Tables

Aug 9, 2007

Hi
I have to delete the master table's data without deleting the child table's records. Is there any solution for this? The parent table has a relationship with the child table.
regards
vinod.t.v
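
If the child rows really should survive as orphans, one way (a sketch; the table and constraint names are hypothetical) is to disable the foreign key for the delete and re-enable it afterwards without validating existing rows:

ALTER TABLE dbo.Child NOCHECK CONSTRAINT FK_Child_Master;   -- stop enforcing the FK

DELETE FROM dbo.Master
WHERE CreatedDate < '2000-01-01';    -- hypothetical delete criteria

-- re-enable without checking existing (now orphaned) child rows;
-- the constraint is marked untrusted, but future DML is enforced again
ALTER TABLE dbo.Child WITH NOCHECK CHECK CONSTRAINT FK_Child_Master;

The cleaner long-term design is to make the child's FK column nullable and SET it to NULL before deleting the parents, so no untrusted constraint is left behind.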

View 9 Replies View Related

Deleting Large Number Of Rows In SQL Server 2000

Jan 26, 2004

Hi,

I have some problems with our database which is growing too large, and was hoping someone might have some tips on what I can do!

I have about 100 clients, each logging about 10 000 rows of status logs a day. So after just a few days the db is growing very large.

At present it's manageable, since I don't need to "dig" into the logs more than a few times a day. The system itself is not affected by the size of the log or traffic on the server. But it will increase to about 500 clients in 2004, and 1000-1500 in 2005. So I really need a smarter solution than what I have today to be able to use the log efficiently.

98-99% of these rows are status-messages which are more or less garbage during normal operation. But I still need to keep them in case an error occurs, and we need to go back an hour or two (maybe a day) to see what went wrong. After 24-48 hours these 98-99% are of no use. I do however like to keep the remaining 1-2%; they are messages like startup, errors, etc. Ideally they should be logged in two separate tables by the clients, but unfortunately I cannot make the clients change their logging.

This presents problems on multiple levels. Mainly in searching, which often times out, but also with backup and storage space. At the moment I check the system for errors, and every other day I just truncate the log file. It works, but it's not exactly elegant...

The server is a 1100 MHz P3 / 512MB / Windows 2000 Server /
SQL Server 2000. Faster hardware would help, but this is more of a "bad design" problem than a "slow hardware" problem.

My log is pretty simple, as follows:

LogId - int - primary key - clustered index
ClientId - int - index asc
LogTypeId - int - index asc
LogValue - nvarchar(2500) - no index
LogTimeStamp - datetime - index asc


I have come up with 3 different solutions:

Method 1:
Simply run "Delete from db_log where logtyipeid <> stuff_I_want_to_keep".

This is the simplest and the one I prefer, but it takes too long to complete. Any tips to speed this process up?


Method 2:
Create a trigger which runs something like "Delete from db_log where logtypeid <> stuff_I_want_to_keep and date < today_minus_two_days" every hour or so. This will ensure that the db doesn't grow too large. But if I'm away from work a few days we might lose data we wanted to keep.


Method 3:

Copy what I want to keep into another table, and empty the log. Sort of like "Insert into db_log_keep stuff_to_keep; drop db_log; create table db_log;" (or TRUNCATE TABLE, which is actually fast, since it only logs page deallocations rather than individual rows)

But then I would be stuck with two log tables, "48-hour_db_log" and "db_log_keep". I could use a view to "union" them so they would appear as a single table, but that's not ideal either.

However, it seems as if this method is what will work best for my set-up, unless there are other suggestions??
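
For what it's worth, a sketch of Method 3 using the schema above (the keep-list of LogTypeIds is hypothetical); TRUNCATE TABLE is fast precisely because it deallocates whole pages instead of logging each row, and it is still transactional:

BEGIN TRAN
    INSERT INTO db_log_keep (ClientId, LogTypeId, LogValue, LogTimeStamp)
    SELECT ClientId, LogTypeId, LogValue, LogTimeStamp
    FROM db_log
    WHERE LogTypeId IN (1, 2, 3)   -- hypothetical startup/error types to keep

    TRUNCATE TABLE db_log          -- page deallocation only; seconds, not hours
COMMIT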

Method 4:

...eagerly awaiting ideas!!! :-)




(Also, whatever tips and/or links to info on maintaining VLDBs are greatly appreciated.)

Thanks in advance for your help! :-)

Nikolai

View 4 Replies View Related

Purge Records From Table In A Weekly Schedule

Jul 23, 2002

Hello all,

I hope someone can help me with a big problem... I'm using Citrix Resource Management Services with a SQL 2000 database. There are 15 Citrix servers which are all reporting to the SQL database.

The database is expanding very quickly and is becoming slower and slower.

My question is: I want to schedule a purge of old records on a Friday afternoon, like this:

WEEK 1 - MON / FRI
WEEK 2 - MON / FRI (Friday's purge records week1)
WEEK 3 - MON / FRI (Friday's purge records week2)
etc...

Is this possible? If so, how do I do it?
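
It is: wrap a date-based delete in a SQL Server Agent job scheduled for Friday afternoons. A sketch, with a hypothetical table and date column:

-- delete everything older than 7 days; schedule this as the Friday job step
DELETE FROM dbo.RMSCounterData
WHERE SampleDate < DATEADD(dd, -7, GETDATE())

If the weekly purge is itself large, combine this with the batched-delete pattern shown earlier on this page so the job doesn't hold a table lock for the whole run.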

Thank you very much for any info!!

Daan

View 1 Replies View Related

Indexing A Table With 80 Million Records

Mar 26, 2004

I have a directory database with approx. 80 million records. I am feeding the database with BULK INSERT. Indexing one of the fields took about 8 hrs. After indexing, when I run queries on the indexed field the response time is under 1 sec. However, if I run SELECT queries with LIKE on non-indexed fields it takes more than 2 mins. So I decided to index 4 other fields in the database, and it looks like the indexing process is going to run for 2 days.
I am a novice in SQL database design and I am not sure if this is the best way to index the table. I am just using CREATE INDEX. Any suggestions / advice welcome.
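
Two things worth knowing here (a sketch; the index and column names are assumptions). First, on SQL Server 2000 you can push an index build's sort into tempdb, which helps when tempdb sits on separate disks. Second, no B-tree index can help a LIKE '%term%' search with a leading wildcard, so indexing more fields won't fix those queries; Full-Text Search is the usual answer there.

-- build one index at a time, sorting in tempdb
CREATE INDEX IX_Directory_Surname
ON dbo.Directory (Surname)
WITH SORT_IN_TEMPDB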

View 5 Replies View Related

Adding New Column To The Table With 16 Million Rows

Jul 11, 2013

We have a table with 16 million records, and the table is also replicated.

We want to add a new column to this table. What is the best way to do that?
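
Assuming the column can be added as NULLable (with values backfilled later if needed), the ALTER itself is a metadata-only change and is quick even on 16 million rows; a sketch with hypothetical names:

ALTER TABLE dbo.BigTable ADD NewCol int NULL;   -- metadata-only: no row rewrite

Because the table is replicated, check that the publication is set to replicate schema changes (the default on recent versions) so the ALTER flows to subscribers; otherwise the column has to be added through the replication stored procedures.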

View 1 Replies View Related

Should I Split This 175 Million Record Table?

Jul 20, 2005

Hello,

We maintain a 175 million record database table for our customer. This is an extract of some data collected for them by a third party vendor, who sends us regular updates to that data (monthly). The original data for the table came in the form of a single, large text file, which we imported. This table contains name and address information on potential customers.

It is a maintenance nightmare for us, as prior to this the largest table we maintained was about 10 million records, with less complicated updates required.

Here is the problem:
* In order to do the searching we need to do on the table, it has 8 of its 20 columns indexed.
* It takes hours and hours to do anything to the table.
* I'd like to cut down as much as possible the time required to update the file.

We receive monthly one file containing 10 million records that are new, and can just be appended to the table (no problem, simple import into SQL Server).

We also receive monthly one file containing 10 million records that are updates of information in the table. This is the tricky one. The only way to uniquely pair up a record in the update file with a record in the full database table is by a combination of individual_id, zip, and zip_plus4. There can be multiple records in the database for any given individual, because that individual could have a history that includes multiple addresses.

How would you recommend handling this update? So far I have mostly tried a number of execution plans involving deleting out the records in the table that match those in the text file, so I can then import the text file, but the best of those plans takes well over 6 hours to run.

My latest thought: would it help in any way to partition the table into a number of smaller tables, with a view used to reference them? We have no performance issues querying the table, but I need some thoughts on how to better maintain it.

One more thing: we do have 2 copies of the table on the server at all times so that one can be actively used in production while we run updates on the other one, so I can certainly try out some suggestions over the next week.

Regards,
Warren Wright
Dallas
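
One approach worth timing against the delete-and-reimport plans (a sketch; the staging table and column names are assumptions): bulk load the monthly update file into an indexed staging table and apply it as a single joined UPDATE on the composite key, rather than deleting and re-inserting:

-- staging table bulk-loaded from the monthly update file, with an index
-- matching the composite key used for pairing:
CREATE INDEX IX_Staging_Key
ON dbo.Prospects_Staging (individual_id, zip, zip_plus4);

UPDATE t
SET    t.name_line    = s.name_line,     -- hypothetical updatable columns
       t.address_line = s.address_line
FROM   dbo.Prospects         AS t
JOIN   dbo.Prospects_Staging AS s
       ON  t.individual_id = s.individual_id
       AND t.zip           = s.zip
       AND t.zip_plus4     = s.zip_plus4;

Dropping the indexes not involved in the join before the UPDATE, and rebuilding them afterwards, often beats maintaining all eight indexes row by row through a 10-million-row change.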

View 7 Replies View Related

Deleting Records From A Table In Server Database

Aug 24, 2015

I'm trying to delete some records from some tables in a SQL Server 2008 R2 database. There's a foreign key relationship between the two tables. To make things easier here's the definition of both tables:

-- Parent table
CREATE TABLE [dbo].[PharmInvInItemPackages](
[InventoryInDetailID] [int] IDENTITY(1,1) NOT NULL,
[InventoryInID] [int] NOT NULL,
[ItemPackageID] [int] NOT NULL,

[code]....
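
Since the child table's definition isn't shown, here is the general shape (a sketch; the child table name, FK column, and criteria are hypothetical): delete the child rows first, then the parents, inside one transaction so the pair succeeds or fails together.

DECLARE @InventoryInID int = 42;    -- hypothetical target

BEGIN TRAN;
    DELETE d
    FROM dbo.PharmInvInItemDetails AS d           -- hypothetical child table
    JOIN dbo.PharmInvInItemPackages AS p
        ON d.InventoryInDetailID = p.InventoryInDetailID
    WHERE p.InventoryInID = @InventoryInID;

    DELETE FROM dbo.PharmInvInItemPackages
    WHERE InventoryInID = @InventoryInID;
COMMIT;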

View 5 Replies View Related

Is A Half A Million Record Database Table OK?

Apr 6, 2007

I don't work much with the back end of software development, so there is a lot about SQL Server I do not know.
We are building a database with about 10 tables in it. 3 of these tables will probably have a huge amount of data: each of the 3 will have about half a million records in it. Each record is about 100 characters max in length (I am counting numbers as characters and summing the individual columns/fields to come up with 100).
Will a SQL Server database table with half a million records in it be workable? We have tried to normalize the database to cut down on the size of the tables, but it all comes out to about half a million records per table.
Any help is deeply appreciated.
Bill 
 

View 1 Replies View Related

SQL 2012 :: 1.5 Million Records Into Temp Table

Sep 23, 2014

I come from a web based world where loading 1.5 million records into a temp table is suicide. I’m doing more data warehouse stuff now, and I was looking into optimizing a buddy's proc when I noticed he was loading 1.5 million records into a temp table. We had a discussion about it because, being from a web world, I was drastically against it. He on the other hand didn’t feel it was an issue, since it gets called once, maybe twice a day. The tempdb is set to autogrow and it is on a different drive than all the other databases on the box. It has one ldf and mdf. He’s creating an index on the table after load. Why shouldn’t we be loading 1.5 million records into a temp table?

View 5 Replies View Related

Solution To Store Million Of Data In A Table.

Jun 27, 2007

Dear all,

I need to design a database table which will store suppliers' demand information. One supplier will probably have 10,000 records, and there could be 10,000 suppliers. So, in total, the number of records will be 10,000 * 10,000 = 100 million, which is a very large number of records to insert into a table. How can I design a table and structure to cater for this scenario? Thanks.

Hope to hear from you..
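
100 million rows is well within SQL Server's range if the table is kept narrow and the clustered key matches the access pattern. A sketch of one workable shape (all names and columns are assumptions):

CREATE TABLE dbo.SupplierDemand (
    SupplierID int      NOT NULL,
    DemandDate datetime NOT NULL,
    ItemID     int      NOT NULL,
    Qty        int      NOT NULL,
    CONSTRAINT PK_SupplierDemand
        PRIMARY KEY CLUSTERED (SupplierID, DemandDate, ItemID)
);

Clustering on SupplierID first keeps each supplier's ~10,000 rows physically together, so per-supplier queries read a few pages instead of scanning 100 million rows.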

View 10 Replies View Related

Updating A Column In A Table That Contains 50 Million Rows

Feb 27, 2008

I'm looking for some performance assistance on updating a column value in a table that contains approximately 50 million rows. I have a permanent table in another database that has the key column and the value to be set. My query is listed below, but I'm afraid it will run quite a while. Any suggestions would be appreciated.

update a                -- update via the alias, so the join target is unambiguous
set a.column2 = b.column2
from mytable as a
join mytable1 as b
on a.column1 = b.column1



There is a one to one relationship between the two tables.
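
If this is SQL Server 2005 or later, one option (a sketch building on the query above) is to run the update in batches so each transaction, and therefore the log, stays small; the WHERE clause skips rows that are already correct, so the loop terminates:

DECLARE @rows int;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (50000) a
    SET    a.column2 = b.column2
    FROM   mytable  AS a
    JOIN   mytable1 AS b ON a.column1 = b.column1
    WHERE  a.column2 <> b.column2
        OR (a.column2 IS NULL AND b.column2 IS NOT NULL);

    SET @rows = @@ROWCOUNT;
END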

View 8 Replies View Related

Transact SQL :: Updating A Table With 45 Million Records

Jul 21, 2015

I am trying to update a large table of 45 million records, and the update is taking more than 2 days. Below is my approach:

1. The table has only one clustered index and no other indexes.
2. I am updating in batches of 20,000 records.
3. I changed the recovery model to bulk-logged, the auto-growth increment is set to 300MB, and there is enough space on my disk for the transaction log.

But still the query is running slowly.
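
One thing to check: if each 20,000-row batch is selected with TOP and no key predicate, every iteration has to re-find its rows, and later batches get progressively slower. Walking the clustered key in contiguous ranges avoids that. A sketch, assuming an integer clustered key named ID and a hypothetical column change:

DECLARE @from int = 0, @step int = 20000, @max int;
SELECT @max = MAX(ID) FROM dbo.BigTable;

WHILE @from <= @max
BEGIN
    UPDATE dbo.BigTable
    SET    SomeCol = 'new value'                 -- stand-in for the real change
    WHERE  ID > @from AND ID <= @from + @step;   -- one clustered-index range seek

    SET @from = @from + @step;
END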

View 10 Replies View Related

Rebuild Clustered Index On 500 Million Row Table???

Jan 17, 2008

My environment is SQL 2000. I have a table with 500 million rows that is constantly being updated and inserted into. I cannot take the table offline. My clustered index needs to be rebuilt due to decreased performance. How do I accomplish this?
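
On SQL Server 2000 there is no ONLINE rebuild (that arrived in 2005), but DBCC INDEXDEFRAG is an online alternative: it compacts and reorders the leaf level in many small transactions without holding long-term blocking locks. A sketch with hypothetical names:

-- defragment the clustered index (index id 1) while the table stays available
DBCC INDEXDEFRAG (MyDb, 'dbo.BigTable', 1)

-- check the result
DBCC SHOWCONTIG ('dbo.BigTable') WITH FAST

It won't be as thorough as DBCC DBREINDEX (statistics aren't rebuilt, and heavily interleaved extents stay put), so if performance is still poor, the full rebuild needs a maintenance window.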

View 7 Replies View Related

Need Suggestion On Loading A 50 Million Records Table From Oracle

Feb 16, 2006

All,

I need to load a 50 million record table monthly. Any suggestions about the best/fastest way to do it?

Thanks a lot

View 2 Replies View Related

T-SQL (SS2K8) :: Table With 3 Million Plus Records Taking Half A Minute?

Aug 6, 2015

I have a table that I need to do some computations on over all the data, but first I need to remove the duplicate records and insert the results into a destination table. Here's the example below. My table has 3.1 million rows. I have tried using DISTINCT and GROUP BY, but both ways of selecting the data take about half a minute to run. I'm wondering if there is a way to increase performance. Users are OK with this time since the process runs overnight, but improving it wouldn't hurt. I do have a clustered index on these fields, but that doesn't seem to help.

SELECT DateYear,
       DateMonth,
       Nbr,
       Nbr1,
       Nbr2,
       Datafield1,
       Datafield2,

[code].....

View 7 Replies View Related

Transact SQL :: Query To Update A Table With More Than 150 Million Rows Of Data?

Sep 17, 2015

I have been tasked with writing an update query to update a table with more than 150 million rows of data. Here are the table structures:

Source Tables :

OC
CREATE TABLE [dbo].[OC](
[OC] [nvarchar](255) NULL,
[DATE DEBUT] [date] NULL,
[DATE FIN] [date] NULL,
[Code Article] [nvarchar](255) NULL,
[INSERTION] [nvarchar](255) NULL,

[Code] ....

The update requirement is as follows:

DECLARE @Counter INT=0 --This causes the @@rowcount to be > 0
while @@rowcount>0
BEGIN
    SET rowcount 10000
    update r
    set Comp=t.Comp

[Code] ....

The update ran for more than 48 hours without completing. How can I speed it up?

View 6 Replies View Related

Retrieving Data From Table With 7 Million Entries Takes Time

Jul 25, 2007

Can anyone help me on this...
When I select data from the table using a SELECT statement it takes a huge amount of time. The table contains 7 million entries, and when I select with a criterion it takes around 45 secs. The system has 4GB RAM and a dual-processor CPU. The select statement does not contain any grouping at all.

Should it take this much time to retrieve the data?
The table does include an indexed field, so can anyone suggest the different things I can do to make the retrieval faster?

Andy

View 5 Replies View Related

SQL Server 2008 :: Deleting Data (rows) From Table To Reclaim Space?

Feb 11, 2015

I have a table of 300+GB. It holds 10 years of data. I need to delete 5 years of data and move it to another server so I can have more space.

If I delete 5 years of data, the transaction log gets huge, and the size of the database gets even bigger because the .ldf file grows! I think I can shrink the log file and the data file. Is this the best way to do it?
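
Broadly yes, with two caveats: delete in batches (with log backups between batches under the FULL recovery model) so the .ldf never has to hold all five years of deletes at once, and expect DBCC SHRINKFILE on the data file to fragment indexes. A sketch; the logical file names are hypothetical (see sys.database_files for the real ones):

BACKUP LOG MyDb TO DISK = 'E:\backup\MyDb_log.trn';  -- clear the log after the deletes

DBCC SHRINKFILE (MyDb_log, 10240);     -- shrink log to ~10 GB (target is in MB)
DBCC SHRINKFILE (MyDb_data, 204800);   -- shrink data file to ~200 GB

Rebuild the indexes after the data-file shrink, since shrinking moves pages around and leaves them heavily fragmented.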

View 8 Replies View Related

Weekly Server Crash

Nov 1, 1999

Been trying to send this all week...

-----Original Message-----
From: Driggers, John
To: 'SQL Discussions'
Sent: 10/27/99 9:10 AM
Subject: FW: Weekly server hang




I also see the message below prior to another crash... going through TechNet
now, but not seeing anything that reflects the messages below. The
results from searching on "Exception_Access_Violation" don't seem to
apply in my case. It also looks like I have at least two causes of crashes
(how can one interpret the statements below?)

Thanks, John
--------------------

99/10/24 10:38:00.06 spid10 EXCEPTION_ACCESS_VIOLATION raised,
attempting to create symptom dump
99/10/24 10:38:00.06 spid10 Initializing symptom dump and stack dump
facilities
99/10/24 10:38:02.61 spid10 ***BEGIN STACK TRACE***
99/10/24 10:38:02.61 spid10 0x00404CD9 in SQLSERVR.EXE,
rm_ods_handler() + 0x0329
99/10/24 10:38:02.64 spid10 0x00405571 in SQLSERVR.EXE, st_do_enlist()
+ 0x00C1
99/10/24 10:38:02.64 spid10 0x004071CA in SQLSERVR.EXE,
CDTCState::init() + 0x033A
99/10/24 10:38:02.65 spid10 0x005A70A3 in SQLSERVR.EXE,
lddb_fixdbosuid() + 0x0423
99/10/24 10:38:02.68 spid10 0x005A6CC2 in SQLSERVR.EXE,
lddb_fixdbosuid() + 0x0042
99/10/24 10:38:02.68 spid10 0x005963CB in SQLSERVR.EXE, textalloc() +
0x04CB
99/10/24 10:38:02.71 spid10 0x00463F4B in SQLSERVR.EXE, agghaving() +
0x004B
99/10/24 10:38:02.71 spid10 0x00409829 in SQLSERVR.EXE, opencheck() +
0x0089
99/10/24 10:38:02.71 spid10 0x00427B09 in SQLSERVR.EXE,
tbswritecheck() + 0x0969
99/10/24 10:38:02.71 spid10 0x00250FED in opends60.dll
99/10/24 10:38:02.71 spid10 0x0025055B in opends60.dll
99/10/24 10:38:02.71 spid10 0x002414D1 in opends60.dll
99/10/24 10:38:02.71 spid10 0x00241384 in opends60.dll
99/10/24 10:38:02.71 spid10 0x10219D84 in MSVCRT40.dll
99/10/24 10:38:02.71 spid10 0x77F04F3E in KERNEL32.dll
99/10/24 10:38:02.71 spid10 ***END STACK TRACE***

************************************************************************

Cindy, nothing in the NT logs but found this in the SQL logs:



99/10/25 09:25:15.45 spid71 EXCEPTION_ACCESS_VIOLATION raised,
attempting to create symptom dump
99/10/25 09:25:15.45 spid71 Initializing symptom dump and stack dump
facilities
99/10/25 09:25:20.45 spid71 ***BEGIN STACK TRACE***
99/10/25 09:25:20.46 spid71 0x00404CD9 in SQLSERVR.EXE,
rm_ods_handler() + 0x0329
99/10/25 09:25:20.52 spid71 0x005725C1 in SQLSERVR.EXE, stuff() +
0x0241
99/10/25 09:25:20.54 spid71 0x0056D35F in SQLSERVR.EXE, ncrid_update()
+ 0x057F
99/10/25 09:25:20.57 spid71 0x0051DD35 in SQLSERVR.EXE, prRESOURCE() +
0x0055
99/10/25 09:25:20.57 spid71 0x00464C65 in SQLSERVR.EXE, genbuiltin() +
0x0445
99/10/25 09:25:20.59 spid71 0x00427B09 in SQLSERVR.EXE,
tbswritecheck() + 0x0969
99/10/25 09:25:20.62 spid71 0x00250FED in opends60.dll
99/10/25 09:25:20.62 spid71 0x0025055B in opends60.dll
99/10/25 09:25:20.62 spid71 0x002414D1 in opends60.dll
99/10/25 09:25:20.62 spid71 0x00241384 in opends60.dll
99/10/25 09:25:20.62 spid71 0x10219D84 in MSVCRT40.dll
99/10/25 09:25:20.62 spid71 0x77F04F3E in KERNEL32.dll
99/10/25 09:25:20.62 spid71 ***END STACK TRACE***



This precedes my 'crashes', and it looks scary enough to do the trick!



Any idea what could be causing this exception?

Thanks, John

ps. Someone else mentioned backup software - we use BackupExec, and I
have a scheduled task that dumps one of the databases to a network drive
twice a day. But these run throughout the week, and looking over the logs I
really don't see a correlation (i.e. these same processes run on days
when no crash occurs, and run successfully later in the day on days when the
crashes do occur, some hours earlier).

-----Original Message-----
From: Gross, Cindy [mailto:CindyGross@hmhs.com]
Sent: Monday, October 25, 1999 2:14 PM
To: SQL 6.5 Discussions
Subject: RE: Weekly server hang



Did you check the SQL Server errorlog (sometimes things are written here
that don't go to the event viewer) and the NT event viewer (application
and
system)?

You could try turning on SQL Trace to see if you can capture a "bad"
query
but depending on how SQL goes down it may not be captured.

If you are auditing successful logons you could take a look to see if
there
is any pattern in who logs in just before SQL restarts.

Any chance someone is actually stopping it on purpose? Or maybe a
program
that is stopping it (maybe a backup system trying to backup the device
files
instead of the dumps)?

Cindy Gross
SQL Server MCP
Texas Health Resources
http://members.tripod.com/cindygross/sqlsrvr.htm

> -----Original Message-----
> From:Driggers, John [SMTP:John_Driggers@spspay.com]
> Sent:Monday, October 25, 1999 12:26 PM
> To: SQL 6.5 Discussions
> Subject: Weekly server hang
>
> This one is bugging the tar out of me. Running SQL 6.5 sp5a, NT4 sp4 on
> the server. Either on the weekend or Mon. mornings (happened all 3 days
> this past week) the SQL service stops on the server. This is during low
> usage times. The box is a DELL 4300 dual 450 w 512 RAM (250 dedicated to
> SQL). All other services on the server are ok, except for SQL.
>
> I'm thinking maybe a bad query hitting the server (I've seen this happen
> before) but the programmers claim there is nothing special about these
> time periods that something "unique" would be happening. After I restart
> the server it may not happen until the next week (this past weekend being
> an exception). I thought maybe I had a memory leak but running perf. monitor
> before a crash once revealed 99+% data cache, available procedure cache, low
> CPU usage, low swapping... anything else I could check?
>
> Maybe reinstalling the sp5a? Any suggestions on things to try would be
> most
> appreciated...
>
> Thanks, John
>
>

View 1 Replies View Related

AFTER INSERT Trigger Takes Forever On A Large Table (20 Million Rows)

Aug 30, 2007

I have a table that is being used to log track plays on our website.

Here's the table:


CREATE TABLE [dbo].[Music_BandTrackPlays](
[ListenDate] [datetime] NOT NULL DEFAULT (getdate()),
[TrackId] [int] NOT NULL,
[IPAddress] [varchar](20)
) ON [PRIMARY]


There's a CLUSTERED INDEX on ListenDate ASC and a NON CLUSTERED INDEX on the TrackId.

I have a TRIGGER on the Music_BandTrackPlays table that looks like the following:


CREATE TRIGGER [trig_Increment_Music_BandTrackPlays_PlayCount]
ON [dbo].[Music_BandTrackPlays] AFTER INSERT
AS
UPDATE
Music_BandTracks
SET
Music_BandTracks.PlayCount = Music_BandTracks.PlayCount + TP.PlayCount
FROM
(SELECT TrackId, COUNT(*) AS PlayCount
FROM inserted
GROUP BY TrackId) AS TP
WHERE
Music_BandTracks.TrackId = TP.TrackId


When a simple INSERT statement is done on the Music_BandTrackPlays table, it can take quite a long time. When I remove the TRIGGER the INSERTs are immediate. The execution plan for the TRIGGER shows that an 'Inserted Scan' is taking up most of the resources.

How exactly is the pseudo 'inserted' table formed?

For now, I think the easiest thing to do is update my logging page so it performs 2 queries: one to UPDATE the Music_BandTracks table and increment the counter, and one to perform the INSERT into the Music_BandTrackPlays table separately.

I'm ok with that solution but I would really like to understand why the TRIGGER is taking so long. The 'inserted' pseudo table will be 1 row 99% of the time. Does SQL Server perform a table scan on all 20 million rows in order to determine what's new and put it in the inserted pseudo table?
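
No: the inserted pseudo-table contains only the rows touched by the triggering statement (SQL Server 2000 reconstructs them from the transaction log; 2005 and later read them from the row version store in tempdb), so the 20 million base rows aren't scanned to build it. The usual culprit for a slow trigger like this is the UPDATE's join target: if Music_BandTracks has no index on TrackId, every insert scans that table. Worth ruling out (an assumption, since the post doesn't show that table's indexes):

-- make the trigger's join a seek instead of a scan (hypothetical; skip if
-- TrackId is already the primary key of Music_BandTracks)
CREATE UNIQUE INDEX IX_Music_BandTracks_TrackId
ON dbo.Music_BandTracks (TrackId);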

Thanks!

View 6 Replies View Related

SQL 2012 :: Snapshot Getting Corrupted After Insert Update Few Million Records Into A Table

Mar 12, 2015

We are facing a weird scenario in which the snapshot is getting corrupted after inserting/updating a few million records into a table.

SQL Server 2012
windows server 2008 R2
service pack 1
64-bit OS

View 1 Replies View Related

Transact SQL :: Adding A Column To A Large (100 Million Rows) Table With Default Constraint?

Apr 24, 2013

IF NOT EXISTS (SELECT TOP 1 1 FROM dbo.syscolumns WHERE id = OBJECT_ID(N'dbo.Employee') and name = 'DoNotCall')
BEGIN
ALTER TABLE [dbo].[Employee] ADD [DoNotCall] bit not null Constraint DoNot_Call_Default DEFAULT 0
IF ( @@ERROR <> 0 )
GOTO QuitWithRollback
END

It just takes a LOT of time in SQL Server Management Studio. I have to cancel the query, and cancelling takes a whole lot of time too. I am using SQL Server 2008.
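
On SQL Server 2008, adding a NOT NULL column with a default physically rewrites every one of the 100 million rows inside one transaction, which is why it runs so long and why cancelling (a full rollback) is just as slow. (SQL Server 2012 Enterprise later made this a metadata-only change.) A lighter-weight sequence, as a sketch reusing the names from the snippet above:

-- 1) add the column as NULLable: metadata only, effectively instant
ALTER TABLE dbo.Employee ADD DoNotCall bit NULL;

-- 2) backfill in small batches so locks and log stay manageable
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (50000) dbo.Employee
    SET DoNotCall = 0
    WHERE DoNotCall IS NULL;
    SET @rows = @@ROWCOUNT;
END

-- 3) attach the default and tighten to NOT NULL (a scan, but no row rewrite)
ALTER TABLE dbo.Employee
    ADD CONSTRAINT DoNot_Call_Default DEFAULT 0 FOR DoNotCall;
ALTER TABLE dbo.Employee ALTER COLUMN DoNotCall bit NOT NULL;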

View 4 Replies View Related

SQL Server 2014 :: Count Order On Weekly Basis

Mar 10, 2014

I want the count of orders from a particular table on a weekly basis, i.e. if the date given to me is 10/3/2014, then my output should be the count of orders from 10/3/2014 to 09/3/2014 (one week), then the count of orders from 2/3/2014 to 08/3/2014 (the next week back), and then from 24/2/2014 to 01/3/2014 (the week before that)...
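
A sketch of one way to do it (the table and column names are assumptions): anchor on the given date and group each order by how many whole weeks back it falls, which yields one row per trailing week.

DECLARE @anchor date = '20140310';   -- the date given

SELECT DATEDIFF(DAY, OrderDate, @anchor) / 7 AS WeeksAgo,   -- 0 = most recent week
       COUNT(*)                              AS OrderCount
FROM   dbo.Orders
WHERE  OrderDate >  DATEADD(WEEK, -12, @anchor)             -- 12 trailing weeks
  AND  OrderDate <= @anchor
GROUP BY DATEDIFF(DAY, OrderDate, @anchor) / 7
ORDER BY WeeksAgo;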

View 5 Replies View Related

SQL Server 2012 :: Query On Grouping Data On Weekly Basis

Oct 6, 2015

I have a query about grouping data on a weekly basis:

1. Weeks should run from Monday to Sunday.

2. It should not consider the current week's data (if the user runs the report on a Tuesday, it should display the data for last week).

3. I want output like below:

Week1, Week2, Week3, ..., Week12, AverageOfWeek
12,    10,    0,    ..., 0,      12
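
A sketch of the grouping step (the source table and columns are assumptions). DATEDIFF(DAY, 0, d) / 7 counts weeks from Monday 1900-01-01, so the buckets run Monday-Sunday regardless of the DATEFIRST setting, and the WHERE clause excludes the in-progress week:

SELECT DATEADD(WEEK, DATEDIFF(DAY, 0, EventDate) / 7, 0) AS WeekStartMonday,
       SUM(Amount)                                       AS WeeklyTotal
FROM   dbo.Sales
WHERE  EventDate <  DATEADD(WEEK, DATEDIFF(DAY, 0, GETDATE()) / 7, 0)      -- before this Monday
  AND  EventDate >= DATEADD(WEEK, DATEDIFF(DAY, 0, GETDATE()) / 7 - 12, 0) -- last 12 full weeks
GROUP BY DATEDIFF(DAY, 0, EventDate) / 7
ORDER BY WeekStartMonday;

PIVOT (or one CASE expression per week) then turns these rows into the Week1...Week12 columns, and the average is the sum of the twelve values over 12.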

View 1 Replies View Related

How Well SQL Server Can Support 300 Million Records...

Nov 16, 2001

How well can SQL Server support 300 million records?
Is anybody working with a big database like this? Can anyone give me some input? It's going to be about 60GB in database size.

View 1 Replies View Related

SQL Server 2014 :: Change Daily Info To Weekly Periods In Pivot Report

Sep 25, 2015

I create a report based on categories and sales of goods. Right now I have daily info about all products.

But I need to present this report based on weekly periods, in pivot format.

I am familiar with the pivot format, but the change from a daily report to a weekly report is unclear to me.

View 3 Replies View Related

Help On Updating 1.3 Million Rows On The Production Server

May 4, 2000

I need to update about 1.3 million rows in a table of mine.
I am getting the data from one of the columns of the same table and
updating the new column.
I am doing this using a cursor which I have put in a stored procedure.
As this is a production table which users might be accessing, and it is
a web-based application, I can't slow the system down.
So I am willing to run the stored procedure during off-peak hours.
However, do I need to put this in a transaction?
If I did put it in a transaction, what type of isolation level should I
opt for?
Data integrity is very important for me, and I don't mind compromising
on performance.
I am doing this because the column which holds the "short description"
entry has become too small for business purposes, and we want to increase its
length from varchar(100) to varchar(150).
As this is SQL 6.5, I can't increase the length of the column.
So I added a new column and will run the stored proc.
What precautions are to be taken?
This is on a high-priority basis and very important too.

Thanks in advance...

Stored procedure code:

USE DB_Registration_Dev
GO
IF EXISTS (SELECT NAME FROM SYSOBJECTS WHERE NAME='usp_update_product' AND TYPE='P')
DROP PROCEDURE usp_update_product
GO
CREATE PROC usp_update_product
AS
DECLARE @short_desc varchar(100)
DECLARE @prod_id int

DECLARE sdesc_curs CURSOR
FOR
SELECT [Product].[product_id] , [Product].[short_description]
FROM Product

OPEN sdesc_curs

FETCH NEXT FROM sdesc_curs
INTO @prod_id, @short_desc

WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE Product
    SET [Product].[sdesc] = @short_desc
    WHERE Product_id = @prod_id

    FETCH NEXT FROM sdesc_curs
    INTO @prod_id, @short_desc
END

PRINT ' Finished updating the table...go ahead and have fun ...! '

CLOSE sdesc_curs        -- release the cursor's resources before deallocating
DEALLOCATE sdesc_curs
GO
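
For this particular job, though, the cursor may be unnecessary: since the new column is filled from another column of the same row, a single set-based UPDATE does the whole thing in one statement and one transaction (a sketch; re-runnable because the WHERE skips rows already copied):

UPDATE Product
SET sdesc = short_description
WHERE sdesc IS NULL
   OR sdesc <> short_description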

View 1 Replies View Related






