SQL Server 2012 :: Behavioral Difference Between DELETE And TRUNCATE
Apr 4, 2014
I have an issue with a DELETE statement. In the code given below (part of the actual proc), if we use TRUNCATE to clean the temp tables, everything goes fine. But if I use DELETE in place of TRUNCATE, the system skips the IF block 'if (@script_type = 1 OR @script_type = 2)'. I am not able to understand this behavioral difference between DELETE and TRUNCATE. Recently the database has been used for replication, but that should not be a reason.
SELECT @max_rows = COUNT('X') FROM #temp_table1
SET @row_cnt = 1
WHILE @row_cnt <= @max_rows
BEGIN
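A hedged guess at the cause, since the loop body is not shown: if #temp_table1 has an IDENTITY column that the loop uses to look up row @row_cnt, DELETE and TRUNCATE genuinely behave differently here, because TRUNCATE resets the identity seed and DELETE does not. The table below is only an illustration:

-- After DELETE the seed keeps counting, so ids no longer start at 1
CREATE TABLE #t (id int IDENTITY(1,1), val varchar(10));
INSERT INTO #t (val) VALUES ('a'), ('b');

DELETE FROM #t;                      -- rows gone, seed NOT reset
INSERT INTO #t (val) VALUES ('c');
SELECT id FROM #t;                   -- returns 3, so WHERE id = 1 finds nothing

TRUNCATE TABLE #t;                   -- rows gone, seed reset
INSERT INTO #t (val) VALUES ('d');
SELECT id FROM #t;                   -- returns 1 again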
When I run it I get an error about table constraints. As you know, if we try to TRUNCATE a table that another table references through a SQL foreign key (for example, Employees_Salary carrying a foreign key to Employees), the TRUNCATE TABLE command fails. We need to deal with the dependent Employees_Salary table first, because it depends on the parent and holds records. So both tables have to be handled, and the sequence is to empty the table that carries the foreign key first, or to remove the constraint.
What I need is a list of all the database tables, ordered in the correct sequence with the constraints taken into account. Can anyone help me?
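A hedged starting point, assuming no circular references between tables: the query below walks sys.foreign_keys and ranks every table by how deep it sits in the reference chain, so tables nothing depends on come first (safe to empty) and the most-referenced parents come last.

;WITH fkeys AS
(
    SELECT DISTINCT
           f.parent_object_id     AS child_id,   -- the table holding the FK
           f.referenced_object_id AS parent_id   -- the table it references
    FROM sys.foreign_keys AS f
    WHERE f.parent_object_id <> f.referenced_object_id   -- ignore self-references
),
ranked AS
(
    -- tables that nothing references can be emptied first
    SELECT t.object_id, 0 AS lvl
    FROM sys.tables AS t
    WHERE NOT EXISTS (SELECT 1 FROM fkeys WHERE fkeys.parent_id = t.object_id)
    UNION ALL
    -- a referenced table sits one level above each table pointing at it
    SELECT fkeys.parent_id, r.lvl + 1
    FROM fkeys
    JOIN ranked AS r ON r.object_id = fkeys.child_id
)
SELECT OBJECT_SCHEMA_NAME(object_id) AS SchemaName,
       OBJECT_NAME(object_id)        AS TableName,
       MAX(lvl)                      AS EmptyOrder   -- empty in ascending order
FROM ranked
GROUP BY object_id
ORDER BY EmptyOrder, TableName;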
Is there any easy way to truncate a table which has a foreign key constraint? I want to override the default behavior, which is to disallow truncating parent tables. I want to be able to temporarily remove the constraint so I can truncate the table. How do you do this?
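A hedged sketch with placeholder names: even a disabled (NOCHECK) foreign key blocks TRUNCATE, so the constraint has to be dropped and re-created around the truncate.

ALTER TABLE dbo.Employees_Salary
    DROP CONSTRAINT FK_EmployeesSalary_Employees;

TRUNCATE TABLE dbo.Employees;        -- the parent table can now be truncated

ALTER TABLE dbo.Employees_Salary WITH CHECK
    ADD CONSTRAINT FK_EmployeesSalary_Employees
    FOREIGN KEY (EmployeeID) REFERENCES dbo.Employees (EmployeeID);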
I have a table with a list of jobs along with their start and end datetime values.
I am looking for a function which will return the time taken to process a job using a start date and an end date. If the date range covers a Saturday or Sunday I want the time to ignore the weekends.
Example
Start Date=2014-05-15 12:00:00.000 End Date=2014-05-19 13:00:00.000
Total Time should be: 2 Days, 1 Hour and 0 Minutes
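A minimal sketch of such a function. It assumes weekends are the whole Saturdays and Sundays falling strictly inside the range (a start or end that itself lands on a weekend is not trimmed), and that the session language makes DATENAME(WEEKDAY, ...) return English day names.

CREATE FUNCTION dbo.ufn_ElapsedExcludingWeekends
(
    @StartDate datetime,
    @EndDate   datetime
)
RETURNS varchar(60)
AS
BEGIN
    DECLARE @TotalMinutes int = DATEDIFF(MINUTE, @StartDate, @EndDate);

    -- subtract 24 hours for every Saturday/Sunday strictly inside the range
    DECLARE @d date = DATEADD(DAY, 1, CAST(@StartDate AS date));
    WHILE @d < CAST(@EndDate AS date)
    BEGIN
        IF DATENAME(WEEKDAY, @d) IN ('Saturday', 'Sunday')
            SET @TotalMinutes -= 1440;
        SET @d = DATEADD(DAY, 1, @d);
    END

    RETURN CAST(@TotalMinutes / 1440 AS varchar(10)) + ' Days, '
         + CAST((@TotalMinutes % 1440) / 60 AS varchar(10)) + ' Hours and '
         + CAST(@TotalMinutes % 60 AS varchar(10)) + ' Minutes';
END

With the example above it returns '2 Days, 1 Hours and 0 Minutes': the raw gap is 5820 minutes, and the Saturday and Sunday in between remove 2880 of them.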
I have a table where appdt is the first appointment date, and another record for the same customer# holds the follow-up appointment.
Each customer is uniquely identified by a customer#.
I need to find out whether the customer came back after 200 days or more, given that the first appointment date was between Jan 1 2014 and Aug 31 2014. I am only interested in the first follow-up appointment that is 30 days or more after the first one.
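A hedged sketch, assuming one row per visit in a table dbo.Appointments(custno, appdt) (the names are placeholders): find each customer's first appointment in the window, locate the first follow-up at least 30 days later, and keep only those who returned 200+ days out.

;WITH FirstAppt AS
(
    SELECT custno, MIN(appdt) AS first_dt
    FROM dbo.Appointments
    GROUP BY custno
    HAVING MIN(appdt) >= '20140101' AND MIN(appdt) < '20140901'
)
SELECT f.custno,
       f.first_dt,
       MIN(a.appdt) AS first_followup,
       DATEDIFF(DAY, f.first_dt, MIN(a.appdt)) AS days_to_return
FROM FirstAppt AS f
JOIN dbo.Appointments AS a
  ON a.custno = f.custno
 AND a.appdt >= DATEADD(DAY, 30, f.first_dt)            -- follow-ups 30+ days out only
GROUP BY f.custno, f.first_dt
HAVING DATEDIFF(DAY, f.first_dt, MIN(a.appdt)) >= 200;  -- came back 200+ days later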
I have a request where I would like to take the start date/time and end date/time and flag (with an int) which hours (24-hour clock) have values between the two dates. For example, a car comes into service on 2013-12-25 at 0800 and leaves 2013-12-25 at 1400; the difference is 6 hours, and I need my table to show that.
As I'm working away at it, I'm trying to figure out how I could use a time dimension table for this, but I don't really see much. So far I have the difference between the two times in hours (hour_diff) and the start hour (min_hour), so I would like to update the first hour (min_hour) and then update columns based on the number of hours (hour_diff); see the sketch below.
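A hedged sketch of the flagging step, assuming same-day visits in a table dbo.ServiceVisits(VisitID, start_dt, end_dt) (all names are placeholders). It builds the 24 clock hours on the fly instead of using a time dimension table; PIVOT could then turn the rows into one column per hour.

;WITH HourNumbers AS
(
    SELECT TOP (24) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS hr
    FROM sys.all_objects
)
SELECT v.VisitID,
       h.hr,
       1 AS in_service                    -- flag: in service during this hour
FROM dbo.ServiceVisits AS v
JOIN HourNumbers AS h
  ON h.hr >= DATEPART(HOUR, v.start_dt)
 AND h.hr <  DATEPART(HOUR, v.end_dt);    -- 0800-1400 flags hours 8 through 13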
INSERT INTO MAIN VALUES ('1000', '1/1/2014', 3000, 1000, 700, 1500)
INSERT INTO MAIN VALUES ('1000', '3/5/2014', 1000, 2000, 650, 200)
INSERT INTO MAIN VALUES ('1000', '5/10/2014', 500, 5000, 375, 125)
INSERT INTO MAIN VALUES ('1000', '11/20/2014', 100, 2000, 400, 300)
INSERT INTO MAIN VALUES ('1000', '8/20/2014', 100, 3500, 675, 1300)
I want to achieve something like below: subtract the '13' row from the '6' row and provide the result in another column. The '6' and '13' category codes share the same key.
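A heavily hedged sketch, since the column names are not visible in the INSERTs above: a self-join on the shared key lines the category 6 row up against the category 13 row so one amount can be subtracted from the other.

SELECT a.key_col,                       -- key_col, category_code and amount
       a.amount  AS amount_6,           -- are all assumed names
       b.amount  AS amount_13,
       a.amount - b.amount AS diff      -- flip the operands if the direction differs
FROM MAIN AS a
JOIN MAIN AS b
  ON b.key_col = a.key_col
WHERE a.category_code = 6
  AND b.category_code = 13;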
I have to truncate an Access table before I insert records into it.
I tried Delete From Table_Name and Truncate Table Table_Name using the Microsoft Jet 4.0 OLE DB Provider, but neither seems to work. I read some posts on forums, and none of their suggestions worked for truncating or deleting records from the Access table either.
Is there an easy way to truncate an Access table without using a Script Component and VB scripts? I do not know how to write VB scripts, so I'm trying to find alternatives.
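A hedged note: Jet/ACE SQL has no TRUNCATE TABLE at all, which would explain the failures. An Execute SQL Task pointed at the Access (Jet 4.0 OLE DB) connection can issue a plain DELETE instead:

DELETE FROM Table_Name;    -- Jet also accepts the form DELETE * FROM Table_Name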
I have @Year and @Month as parameters, both integers, for example @Year = 2013, @Month = 11.
SELECT T.[Year], T.[Month]
    -- select the sum for each year/month combination using a correlated subquery
    -- (each result from the main query causes another data retrieval operation to be run)
    , (SELECT SUM(Stores) FROM #ABC WHERE [Year] = T.[Year] AND [Month] = T.[Month]) AS [Sum_Stores]
    , (SELECT SUM(SalesStores)
[code]....
What I want to do is add more columns to the query to show the difference from the previous month, as shown below. Example: the Diff beside Sum_Stores shows the change in Sum_Stores from last month to this month.
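A minimal sketch, assuming #ABC holds [Year], [Month], Stores and SalesStores as the snippet suggests: SQL Server 2012's LAG() replaces the correlated subqueries and returns the previous month's sum directly.

;WITH MonthlySums AS
(
    SELECT [Year], [Month],
           SUM(Stores)      AS Sum_Stores,
           SUM(SalesStores) AS Sum_SalesStores
    FROM #ABC
    GROUP BY [Year], [Month]
)
SELECT [Year], [Month],
       Sum_Stores,
       Sum_Stores      - LAG(Sum_Stores)      OVER (ORDER BY [Year], [Month]) AS Diff_Stores,
       Sum_SalesStores,
       Sum_SalesStores - LAG(Sum_SalesStores) OVER (ORDER BY [Year], [Month]) AS Diff_SalesStores
FROM MonthlySums
ORDER BY [Year], [Month];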
Both remove rows of data from a table, so what exactly is the difference? DELETE writes every deleted row to the log file and can be rolled back; it is a DML statement. TRUNCATE logs only the page deallocations, releases the space back immediately, and is a DDL statement.
I'm using SQL Server 2012 and I need to run a query against my database that will output the difference between 2 dates (namely, DateOfArrival and DateOfDeparture) into the correct month column in the output.
Both DateOfArrival and DateOfDeparture are in the same table (let's say GuestStay). I will also need some other fields from this table and do some joins on other tables, but I will simplify things so as to solve my main problem here. Let's say the fields needed from the GuestStay table look like below:
I need my query to output in the following format:
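The expected layout was not included, but here is a hedged sketch assuming one column per month holding the number of nights that fall in it. It expands each stay into one row per night (master.dbo.spt_values type 'P' only covers numbers 0-2047, so stays must be shorter than about 5.6 years) and then counts per month.

SELECT g.GuestID,                    -- GuestID is an assumed key column
       SUM(CASE WHEN MONTH(d.dt) = 1 THEN 1 ELSE 0 END) AS Jan,
       SUM(CASE WHEN MONTH(d.dt) = 2 THEN 1 ELSE 0 END) AS Feb
       -- ... repeat for the remaining ten months
FROM GuestStay AS g
CROSS APPLY
(   -- one row per night of the stay
    SELECT DATEADD(DAY, n.number, g.DateOfArrival) AS dt
    FROM master.dbo.spt_values AS n
    WHERE n.type = 'P'
      AND n.number < DATEDIFF(DAY, g.DateOfArrival, g.DateOfDeparture)
) AS d
GROUP BY g.GuestID;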
Is there any way I can turn off DELETE in SQL Server? I want to prevent anyone from inadvertently deleting rows in tables. I thought, worst case, I could have triggers on tables to perform a rollback.
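Two hedged options, with placeholder object names: deny the permission outright, or, as the worst case describes, a trigger that rolls the delete back.

-- Option 1: take the permission away
DENY DELETE ON dbo.MyTable TO SomeRole;

-- Option 2: undo any delete that slips through
CREATE TRIGGER trg_MyTable_NoDelete ON dbo.MyTable
AFTER DELETE
AS
BEGIN
    RAISERROR('Deletes are not allowed on this table.', 16, 1);
    ROLLBACK TRANSACTION;
END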
I created a main program which launches two jobs at a time; each job does some processing, and at the end I try to delete the jobs after storing their details in a custom table I created (a cleanup sub-program).
Of the two jobs, I am able to store one job's details (job_name, job_id, start_time and end_time) in the custom table and delete that job, but the job that completes last is neither getting captured nor getting deleted from the sysjobs and sysjobhistory tables.
I included this step (which calls the cleanup sub-program to store the job details and delete the job) at the end. I can see from a debug message that the cleanup procedure is getting called, but it neither stores the details nor deletes the job.
When I execute the cleanup program separately, it does store the job details and delete the job.
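One hedged observation: the job-outcome row in sysjobhistory is only written after the job finishes, so a job that tries to log its own completion in its last step may not find its final history row yet, which would match these symptoms. A sketch of the cleanup step, with dbo.JobRunLog as an assumed name for the custom table:

DECLARE @job_name sysname = N'Job_B';                  -- placeholder job name

INSERT INTO dbo.JobRunLog (job_name, job_id, start_time, end_time)
SELECT j.name, j.job_id,
       msdb.dbo.agent_datetime(h.run_date, h.run_time) AS start_time,
       DATEADD(SECOND,
               h.run_duration / 10000 * 3600           -- HHMMSS-encoded duration
             + h.run_duration / 100 % 100 * 60
             + h.run_duration % 100,
               msdb.dbo.agent_datetime(h.run_date, h.run_time)) AS end_time
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobhistory AS h
  ON h.job_id = j.job_id
 AND h.step_id = 0                                     -- step 0 = job outcome row
WHERE j.name = @job_name;

EXEC msdb.dbo.sp_delete_job @job_name = @job_name;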
I am using SQL 2012 SE. I have 2 databases, say A and B, with the same structure and relationships. There are 65 tables in each database. A is already replicating data to database C for 35 tables. Now, every day, I need to move all data newer than getdate()-1 from A to B for all the tables, and once the move is done I need to delete that data from A; then the same thing the next day, and so on. Since this covers 65 tables, identifying the insert order is challenging. Once the insert order is identified, the delete order will be its reverse.
Is there a tool or any SP that could generate the insert-order script? Generate Scripts with data only scripts out the entire data set, and these databases are almost 400GB, with some tables over 200 million rows, so it takes forever.
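A hedged sketch of generating the per-table move statements as dynamic SQL; the date column name ModifiedDate is an assumption, and the ORDER BY would need to come from a foreign-key depth ranking (a walk of sys.foreign_keys, as sketched earlier on this page) rather than the alphabetical order shown.

SELECT 'INSERT INTO B.dbo.' + QUOTENAME(t.name)
     + ' SELECT * FROM A.dbo.' + QUOTENAME(t.name)
     + ' WHERE ModifiedDate >= DATEADD(DAY, -1, GETDATE());' AS move_stmt
FROM A.sys.tables AS t
ORDER BY t.name;    -- replace with FK-depth order; reverse it for the deletes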
I was just looking at an SSIS package someone else set up, and in one of the Execute SQL Tasks it calls a stored procedure to truncate a table. Tables are truncated in a lot of places within this SSIS package, but in each one the database, owner, and table names are hard coded.
Like this: exec dbo.uspTruncateTable 'dbname', 'dbo', 'tblname'
Of course there are different names, just changed for an example.
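A hedged alternative: with an OLE DB connection, the Execute SQL Task accepts ? placeholders, so the three hard-coded names can come from package variables instead of being repeated in every task.

exec dbo.uspTruncateTable ?, ?, ?
-- In the task's Parameter Mapping, bind User::DatabaseName, User::SchemaName
-- and User::TableName (assumed variable names) to parameters 0, 1 and 2.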
I have a table with about 466 million rows. In this table there is an int column called WeeksToRetain, as well as an EventDate column containing the date the row was inserted. I am trying to delete all the rows that should be deleted according to WeeksToRetain. For example, if the EventDate is 5/07/15 with a 1 in the WeeksToRetain column, the row should be removed by 5/14/15. I am not sure what days SQL considers the beginning and end of the week. However, the core issue I am having is the sheer mass of deletions I must do and the log growth it causes.
So I am trying to do the delete in batches. More specifically, I want to load a temporary table with a million rows, then use it to load a sub temporary table with 100,000 rows and join that to the table I want to delete from, looping through 10 times to cover the million. The Logging.EvenLog table, which is the one I'm trying to purge, has a clustered index on EventDate (ASC). I would like to run this as a scheduled job with enough time between executions for log backups to run.
DECLARE @i int
DECLARE @RowCount int
DECLARE @NextBatchDate datetime
CREATE TABLE #BatchProcess
(
    EventDate datetime,
    ApplicationID int,
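For comparison, a minimal sketch of a simpler batched purge over the same table (spelled EventLog here; the post has 'EvenLog'). Note the DATEADD on the column prevents a seek on the clustered index, so precomputing a cutoff date per WeeksToRetain value would be the next refinement.

DECLARE @BatchSize int = 100000;

WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize)
    FROM Logging.EventLog
    WHERE DATEADD(WEEK, WeeksToRetain, EventDate) < GETDATE();

    IF @@ROWCOUNT = 0 BREAK;          -- nothing left to purge

    -- pause so scheduled log backups can reclaim log space between batches
    WAITFOR DELAY '00:00:05';
END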
New monthly data is loaded, checked and finally approved after 6 or 7 iterations. Because of this iteration, the monthly data set is added, then deleted, then added, then deleted a few times. Because the table is big, this process takes time; any thoughts on how to make the delete/insert process faster? Keep in mind I cannot do much, because it is a production table being accessed by other users for other analysis.
Delete is done based on trx_date which is a year/month combo, like 201508.
The table has monthly sales by customer aggregated.
The table structure is:
CREATE TABLE [dbo].[Sales](
    [batch_key] [int] NOT NULL,
    [Company_key] [int] NOT NULL,
    [customer_key] [char](22) NOT NULL,
    [Trx_Date] [int] NOT NULL,
    [account] [nvarchar](35) NOT NULL,
The database is in simple recovery mode and is published with a transactional replication push subscription; there is just one subscriber, but the database is huge. I don't want to overwrite the schema at the subscriber either.
I had to run an ALTER DATABASE command on a published database. It created so many logs that an extra drive had to be added, along with an extra log file, to accommodate them all.
The problem is that I'd like to clear the logs from the file so I can drop the temporary log file and give the drive back, but I cannot.
I have tried DBCC SHRINKFILE with the EMPTYFILE option, but it never clears. I have also tried it with the NOTRUNCATE and TRUNCATEONLY options (mainly out of desperation).
I do not need to worry about point-in-time restore, as a full backup is taken before and after the operation, after which the database will be put back into full recovery mode.
I have looked at log_reuse_wait_desc and it says 'REPLICATION', so I now think the file cannot empty because replication is keeping one of the VLFs active. I tried dropping and recreating the subscription, hoping it might free something up, but it made no difference.
Do I have to remove replication completely to get round this? Surely not.
I have also tried putting the database back into full recovery mode and taking a full DB backup and a transaction log backup, but it made no difference, which again makes me think a portion of the log is still active because of replication. Perhaps the transactions have not gone through to the subscriber, which raises another question: why not?
I have not tried restarting SQL Server, as I'd like a way out of this without doing that; besides, I do not think it would make any difference anyway.
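A hedged diagnostic path: confirm what the log is waiting on and how many replicated transactions are pending; then, only as a last resort and only if replication will be re-initialized afterwards, mark the pending transactions as replicated.

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'YourPublishedDB';       -- placeholder database name

EXEC sp_replcounters;                 -- pending replicated transaction counts

-- Last resort, breaks the log reader's position (commented out on purpose):
-- EXEC sp_repldone @xactid = NULL, @xact_segno = NULL,
--                  @numtrans = 0, @time = 0, @reset = 1;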
I'm new to using SQL Server. I've been asked to optimize a series of scripts that query over 4 million records. I've managed to add indexes and remove a cursor, which improved performance. Now when I look at the execution plan, the only expensive statement is a DELETE from the main table: it shows a SORT that accounts for 71% of the cost. The table has 2 columns and a unique index. Here is the current index:
ALTER TABLE [dbo].[Qry] ADD CONSTRAINT [Qry_PK] PRIMARY KEY NONCLUSTERED
(
    [QryNum] ASC,
    [ID] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = ON, SORT_IN_TEMPDB = OFF,
        IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON,
        ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
Question: Will the SORT affect the overall performance? If so, is there anything I should change within the index that would speed up my query?
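One hedged possibility: since the PK above is nonclustered, [dbo].[Qry] may be a heap, and the delete then has to sort rows to maintain the index. Assuming downtime is acceptable, clustering the table on the same key means rows already arrive in key order:

ALTER TABLE [dbo].[Qry] DROP CONSTRAINT [Qry_PK];

ALTER TABLE [dbo].[Qry] ADD CONSTRAINT [Qry_PK]
    PRIMARY KEY CLUSTERED ([QryNum] ASC, [ID] ASC);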
I've got 2 tables, ResumeSkill (child table) and Skill (parent table). There were duplicates in the parent table, and after removing the foreign key constraint on the child table I deleted all the duplicate values from the parent table. But those deleted duplicates still have references in the child table, which now need to be deleted too.
ResumeSkill: Id, SkillId
Skill: SkillId, Name
I want to delete all the records from ResumeSkill that don't have a matching SkillId in the Skill table.
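Using the column names above, a minimal sketch:

DELETE rs
FROM ResumeSkill AS rs
WHERE NOT EXISTS
(
    SELECT 1
    FROM Skill AS s
    WHERE s.SkillId = rs.SkillId      -- keep rows whose skill still exists
);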
I have created a dynamic MERGE statement SCD2 stored procedure which inserts a record when there is no match, and when the BBXKey matches between source and destination, updates the old record to latest version 0 and inserts the new record with latest version 1.
I get the error below when I have more than one row with the same BBXKey in my source table. How can I handle this?
BBXKey is nothing but a value derived by combining 2 columns.
Msg 8672, Level 16, State 1, Line 6
The MERGE statement attempted to UPDATE or DELETE the same row more than once. This happens when a target row matches more than one source row. A MERGE statement cannot UPDATE/DELETE the same row of the target table multiple times. Refine the ON clause to ensure a target row matches at most one source row, or use the GROUP BY clause to group the source rows.
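A hedged sketch of the usual fix: deduplicate the source inside the MERGE with ROW_NUMBER() so each BBXKey reaches the target only once. The table names, the extra columns, and the rule for which duplicate wins (the ORDER BY) are all assumptions:

MERGE dbo.DestTable AS t
USING
(
    SELECT BBXKey, Col1, Col2
    FROM
    (
        SELECT s.BBXKey, s.Col1, s.Col2,
               ROW_NUMBER() OVER (PARTITION BY s.BBXKey
                                  ORDER BY s.LoadDate DESC) AS rn
        FROM dbo.SourceTable AS s
    ) AS d
    WHERE d.rn = 1                        -- one row per BBXKey
) AS src
ON t.BBXKey = src.BBXKey AND t.LatestVersion = 1
WHEN MATCHED THEN
    UPDATE SET t.LatestVersion = 0        -- retire the current row
WHEN NOT MATCHED THEN
    INSERT (BBXKey, Col1, Col2, LatestVersion)
    VALUES (src.BBXKey, src.Col1, src.Col2, 1);

Inserting the replacement version-1 row for matched keys still needs a second step (commonly an INSERT over the MERGE's OUTPUT clause); the point here is only that a deduplicated source removes error 8672.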
I have a master table and I need to import its rows into parent and child tables.
The master table is Flatfile_Inventory. The parent table is INVENTORY. The child tables are INVENTORY_AMOUNT, INVENTORY_DETAILS and INVENTORY_VEHICLE. Error details go to LOG_INVENTORY_ERROR.
I have 4 duplicate rows in Flatfile_Inventory which I have already inserted into the parent and child tables.
When I run the query again using the stored procedure, it reports that all 4 rows are duplicates and moves them to Log_Inventory_Error.
What I need is this: if Flatfile_Inventory holds a duplicate of a row already inserted into the parent and child tables, I must identify that row by its unique ID, delete it from both the parent and child tables, and insert the latest row from Flatfile_Inventory in its place; see the sketch below.
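A heavily hedged sketch of that replace-on-duplicate step, assuming the shared unique id column is named Inventory_ID (children first, parent last, then the normal insert re-runs):

DELETE FROM INVENTORY_AMOUNT  WHERE Inventory_ID IN (SELECT Inventory_ID FROM Flatfile_Inventory);
DELETE FROM INVENTORY_DETAILS WHERE Inventory_ID IN (SELECT Inventory_ID FROM Flatfile_Inventory);
DELETE FROM INVENTORY_VEHICLE WHERE Inventory_ID IN (SELECT Inventory_ID FROM Flatfile_Inventory);
DELETE FROM INVENTORY         WHERE Inventory_ID IN (SELECT Inventory_ID FROM Flatfile_Inventory);
-- then run the usual parent/child insert from Flatfile_Inventory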
I am using SQL Server 2012 SE. I am trying to delete rows from a couple of tables (GetPersonValue has 250 million rows and I am trying to delete 50 million; GetPerson has 35 million rows and I am trying to delete 20 million). These tables are in transactional replication. The plan is to delete data older than 400 days.
I tried moving the most recent 400 days of data to new tables, and it took about 11 hours. If I delete data in chunks of 500,000, it takes a long time to rebuild the indexes (delete plus index rebuild: 13 hours). Since I am using Standard Edition, partitioning won't work.
Find the DDL below:
GO
CREATE TABLE [dbo].[GetPerson](
    [GetPersonId] [uniqueidentifier] NOT NULL,
    [LinedActivityPersonId] [uniqueidentifier] NOT NULL,
    [CTName] [nvarchar](100) NULL,
    [SNum] [nvarchar](50) NULL,
    [PHPrimary] [nvarchar](50) NULL,
In my ETL job I would like to truncate the staging table, but before truncating it I want to make sure that all the records have been inserted into the data model. A sample is below:
create table #stg
(
    CreateID int,
    Name nvarchar(10)
)
insert into #stg
select 1, 'a' union all
select 2, 'b' union all
[Code] ....
How can I check across these tables and make sure that all values have been loaded into the data model, so that the staging table can be truncated?
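A minimal sketch, assuming the data model lands in a table dbo.DataModel keyed on the same CreateID (the name is a placeholder): truncate only when no staging row is missing from the model.

IF NOT EXISTS
(
    SELECT 1
    FROM #stg AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.DataModel AS m
                      WHERE m.CreateID = s.CreateID)
)
    TRUNCATE TABLE #stg;               -- everything accounted for
ELSE
    RAISERROR('Staging rows not yet loaded; truncate skipped.', 16, 1);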