/****** Object: StoredProcedure [dbo].[dbo.ServiceLog] Script Date: 07/18/2014 14:30:59 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROC [dbo].[ServiceLogPurge]
-- Purge records in dbo.ServiceLog older than 3 months.
-- Purge records in small portions to avoid locking production tables
-- for a long time. The process takes longer, but can co-exist with
-- normal usage of the tables.
[Code] ...
*** Getting this error below when executing the code ***
Msg 102, Level 15, State 1, Procedure ServiceLogPurge, Line 45 Incorrect syntax near 'Failed:'.
I have a table with approx 5 million rows and 36 columns. It takes approx 4 minutes to delete 1 row. The table has 3 indexes in addition to its primary key and has twelve foreign key constraints. We are still using SQL Server 7. There is a backup run every night as part of the nightly maintenance that reorgs/reindexes and checks the database integrity. Any thoughts?
I have a csv file that I need to import daily into a SQL Server 2005 table. Much of the table contents could just be overwritten with the new csv file; however, there is a set of rows within the table that need to be appended to, rather than overwritten. There is no primary key in the csv file that can be used. I'm not sure this is the best approach, but what I have been trying to do is append the entire csv file to the existing table, and then go back and delete the duplicates. When I run the DELETE, it does delete the majority of the records, but leaves a couple hundred behind. The number left behind varies with each run; I can't seem to identify a pattern here. Running the DELETE a second time does clean up the rows left behind by the first execution, and gives the result I want. Any thoughts as to why this needs to be run twice? Or is a better approach available? Here is my code -

SELECT [Pkg ID], [Elm (s)], [Type Name (s)], [End Exec Date], [End Exec Time], dupcount = COUNT(*)
INTO temppkgactions
FROM pkgactions
GROUP BY [Pkg ID], [Elm (s)], [Type Name (s)], [End Exec Date], [End Exec Time]
HAVING COUNT(*) > 1
DELETE TOP (SELECT COUNT(*) - 1 FROM dbo.temppkgactions WHERE dupcount > 1)
FROM dbo.pkgactions

DROP TABLE temppkgactions
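For comparison, here is a sketch of a single-pass alternative on SQL Server 2005 that deletes duplicates directly with ROW_NUMBER(), assuming the five listed columns fully define a duplicate:

Code:
;WITH Dupes AS
(
    SELECT ROW_NUMBER() OVER (
               PARTITION BY [Pkg ID], [Elm (s)], [Type Name (s)], [End Exec Date], [End Exec Time]
               ORDER BY [Pkg ID]) AS RN
    FROM dbo.pkgactions
)
DELETE FROM Dupes
WHERE RN > 1;   -- keeps exactly one row per duplicate group

This avoids the intermediate temppkgactions table entirely.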
What I'm trying to do is delete a user and all their related information within the other tables. I'm not wanting to delete the table, just that one user and their related information. So my primary key is UserID within the table [alumni], and my three foreign keys are CommentID, PhotoID, and AlbumID within the tables [comments], [photos], and [albums]. Here is some of the code that I have:

<asp:SqlDataSource ID="SqlDataSource2" runat="server"
    ConnectionString="<%$ ConnectionStrings:SoderquistString %>"
    DeleteCommand="DELETE FROM [alumni] WHERE [UserID] = @UserID"
    SelectCommand="SELECT [UserID], [UserName], [FirstName], [LastName], [State] FROM [alumni] WHERE ([State] = @State)">
    <DeleteParameters>
        <asp:Parameter Name="UserID" Type="Int32" />
    </DeleteParameters>
    <SelectParameters>
        <asp:ControlParameter ControlID="DropDownList1" Name="state" PropertyName="SelectedValue" Type="String" />
    </SelectParameters>
</asp:SqlDataSource>

The users are set up in a GridView. Is there some type of DELETE command that I need to write that is different from the one above? I have tried adding onto the DELETE statement:

DeleteCommand="DELETE FROM [alumni] WHERE [UserID] = @UserID DELETE FROM [photo] WHERE [UserID] = @UserID; DELETE FROM [album] WHERE [UserID] = @UserID; DELETE FROM [comment] WHERE [UserID] = @UserID;

...but that doesn't work, and doesn't look right. I would really appreciate anyone's suggestions or help that you may be able to provide. Thank you!
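One way to handle this, sketched under the assumption that each child table carries a UserID column referencing alumni.UserID, is to delete the child rows before the parent row inside a single stored procedure (the procedure name here is made up):

Code:
CREATE PROCEDURE dbo.DeleteAlumniUser
    @UserID INT
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;
        -- child rows first, so no foreign key is violated
        DELETE FROM [comments] WHERE [UserID] = @UserID;
        DELETE FROM [photos]   WHERE [UserID] = @UserID;
        DELETE FROM [albums]   WHERE [UserID] = @UserID;
        -- then the parent row
        DELETE FROM [alumni]   WHERE [UserID] = @UserID;
    COMMIT TRANSACTION;
END

The SqlDataSource could then call it with DeleteCommand="dbo.DeleteAlumniUser" and DeleteCommandType="StoredProcedure", keeping the existing UserID delete parameter. Alternatively, defining the foreign keys with ON DELETE CASCADE would let the single DELETE FROM [alumni] remove the child rows automatically.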
I have a problem deleting duplicate rows. I have an identity column in my table; if I try to use a correlated subquery with the DELETE command, it gives an error.
The other problem I have: there is a date column in my table, and I update that column with the current date and time. If I use a query to fetch the records for a particular day, it does not return any rows:
select * from rates where ch_date >='02/11/99' and ch_date<='02/11/99'
If I use CONVERT, there are other problems. Is there any way to force the date comparison to be done excluding the time?
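A common sketch for this is a half-open range on the date column, so the time portion never matters:

Code:
SELECT *
FROM rates
WHERE ch_date >= '19990211'      -- start of 11 Feb 1999
  AND ch_date <  '19990212'      -- strictly before the start of the next day

Every time value on 11 Feb 1999 falls inside the range, and ch_date itself is not converted, so an index on the column can still be used.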
This is a hypothetical problem that came up while discussing ROWID in ORACLE.
Consider a table without a primary key, unique key, or unique index. A row has been inserted into the table many times. I want to delete all but one of the duplicated rows. With any 'where' clause, all of the duplicated rows will be deleted. In ORACLE I can achieve this using ROWID as follows:
DELETE FROM Table_name
WHERE  < all column values >
  AND  ROWID <> (SELECT MAX(ROWID) FROM Table_name WHERE < all column values >)
How can this be achieved in MS SQL Server 6.5 ?
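In the absence of ROWID, one old technique on SQL Server 6.5 is to cap the number of rows a DELETE affects with SET ROWCOUNT, leaving exactly one copy behind. A sketch, assuming you know the duplicated row's column values and how many copies of it exist (three here, as an example):

Code:
SET ROWCOUNT 2                   -- number of copies minus one
DELETE FROM Table_name
WHERE  < all column values >
SET ROWCOUNT 0                   -- back to unlimited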
According to Dr. Codd's golden rules for an RDBMS, one rule is that one should be able to reach each data value in the database using a table name, a row identification value, and a column name.
Does MS SQL Server 6.5 satisfy this requirement?
Also, how many of Dr. Codd's 13 golden rules for an RDBMS does MS SQL Server 6.5 satisfy, and which does it not?
I have two tables. Table 1 contains distinct IDs (3, 4, 5). Table 2 contains multiple IDs (3,3,3,3,3,4,4,4,5,5,5). I want to be able to use a join: if an ID is found in table 2, remove all entries of it. Any ideas? Thanks...
DELETE b
FROM table1 a
JOIN table2 b ON a.id = b.id
In my database, I have a table "tbl_c_extract" that consists of 4 columns that look like the following. I'm looking at a daily batch of around 4000 records, of which 150 are likely to be duplicates.
In the example above, I need to remove 2 of the entries, leaving only the one with the maximum leave date. In this case, those without a leave date have the 2099 entry.
Using a CTE works exactly as I want it to; however, SQL Server Agent doesn't seem to like the use of the CTE.
Code:
WITH CTE (Proprietary_ID, LeaveDate, RN) AS
(
    SELECT Proprietary_ID, LeaveDate,
           ROW_NUMBER() OVER (PARTITION BY Proprietary_ID
                              ORDER BY Proprietary_ID, LeaveDate) AS RN
    FROM tbl_c_extract
)
DELETE FROM CTE WHERE RN > 1
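One sketch that usually satisfies SQL Server Agent is to wrap the statement in a stored procedure and have the job step simply EXEC it; if the WITH follows another statement in the same job-step batch, that preceding statement must be terminated with a semicolon. The procedure name below is made up, and the ORDER BY is descending so the row with the maximum LeaveDate is the one kept, per the requirement described above:

Code:
CREATE PROCEDURE dbo.Purge_tbl_c_extract_Duplicates
AS
BEGIN
    SET NOCOUNT ON;
    WITH CTE AS
    (
        SELECT ROW_NUMBER() OVER (PARTITION BY Proprietary_ID
                                  ORDER BY LeaveDate DESC) AS RN
        FROM tbl_c_extract
    )
    DELETE FROM CTE
    WHERE RN > 1;   -- everything except the latest LeaveDate per Proprietary_ID
END

The Agent job step then becomes a single line: EXEC dbo.Purge_tbl_c_extract_Duplicates;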
Hi, I have a table named "std_attn" where, through some bad coding, lots of duplicated rows have been created, and the table doesn't have any PK. So now, please tell me the way to remove the duplicates.
int1 must have the value 1, int2 must have the value 1, and int3 must have the value 0. And now we get to the difficult part: the first character in the field [Cost No] has to be different from the letter 'B'. [Cost No] has the datatype varchar(20).
I expected the code to look something like this: DELETE Table1 FROM Table 1 WHERE ([Int1] = 1) AND ([Int2] = 1) AND ([Int3] = 0) AND ((LEFT([Cost No]), 1) <> 'B')
But I get this error message: The left function requires 2 arguments
What am I doing wrong and what should the right code look like?
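The error comes from the misplaced closing parenthesis: LEFT([Cost No]) is called with one argument and the ", 1" ends up outside the function. A corrected form might look like this (the space in "FROM Table 1" would also need removing):

Code:
DELETE FROM Table1
WHERE  [Int1] = 1
  AND  [Int2] = 1
  AND  [Int3] = 0
  AND  LEFT([Cost No], 1) <> 'B'   -- both arguments now inside LEFT()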
I need to delete the duplicate rows from a table. How do I do that in SQL Server 7.0? If possible, write an example, so that it will be much more useful to me.
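Since SQL Server 7.0 has no ROW_NUMBER(), one common approach is to copy one copy of each row aside, empty the table, and copy the rows back. A sketch, with a made-up table name, assuming the whole row defines a duplicate and no foreign keys or identity column get in the way:

Code:
SELECT DISTINCT * INTO #keep FROM dbo.MyTable   -- one copy of every distinct row
TRUNCATE TABLE dbo.MyTable                      -- or DELETE FROM dbo.MyTable if TRUNCATE is blocked
INSERT INTO dbo.MyTable SELECT * FROM #keep
DROP TABLE #keep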
I have 6 read-only tables; every night all the data gets dumped and a new, updated copy of the data is copied over. The tables range from 500,000 rows to almost 4 million. I have indexes set up on the fields I use to query against. My questions are:
1. Since I dump all the data every night and replace it, do I need to rebuild the indexes every night, or is that done after the data is re-entered?
2. I want to use a fill factor on the tables since they are read-only, but will dumping the data every night and re-inputting it have adverse effects with a fill factor?
3. Should I be shrinking the database or defragging it every night because of my data dumps and reloads?
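If this is SQL Server 7.0/2000, a sketch of the kind of step that could follow the nightly reload is a rebuild that reapplies the fill factor; the table name and 90% fill factor are examples only:

Code:
-- rebuild all indexes on the table after the reload, with a 90% fill factor
DBCC DBREINDEX ('dbo.BigReadOnlyTable', '', 90)

The fill factor is only honoured when an index is created or rebuilt, so reloading data without a rebuild does not maintain it.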
How can I quickly delete thousands of rows in a table (SQL 2000) according to a query, and without blowing up the log file? For instance, executing the query:

DELETE FROM transactions WHERE transactiondatestamp < DATEADD(m, -4, GETDATE())

increases my log file to almost 6GB before the job is done and the normal size is re-obtained. In addition, it took a long time to get the job done. Unfortunately I cannot use a query with the TRUNCATE TABLE command, but that would be faster.
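A sketch of the usual workaround on SQL 2000 is to delete in batches, so each batch commits and the log space can be reused between batches (under full recovery, log backups between batches are still needed); the 5000-row batch size is an example:

Code:
SET ROWCOUNT 5000                -- cap each DELETE at 5000 rows
WHILE 1 = 1
BEGIN
    DELETE FROM transactions
    WHERE  transactiondatestamp < DATEADD(m, -4, GETDATE())

    IF @@ROWCOUNT = 0 BREAK      -- nothing left to delete
END
SET ROWCOUNT 0                   -- reset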
I have a SQL table [Keys] that has various rows such as:

[ID] [Name] [Path] [Customer]
1    Key1   Key1   InHouse
2    Key2   Key2   External
3    Key1   Key1   InHouse
4    Key1   Key1   InHouse
5    Key1   Key1   InHouse
Obviously IDs 1, 3, 4, 5 are all exactly the same and I would like to be left with only:

[ID] [Name] [Path] [Customer]
1    Key1   Key1   InHouse
2    Key2   Key2   External
I cannot create a new table/database or change the unique identifier (which is currently ID) either. I simply need a SQL script I can run to clean out the duplicates (I know how they got there and the issue has been fixed, but the database is still currently invalid due to all these duplicate entries).
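A sketch of such a cleanup script, keeping the lowest ID in each group and assuming [Name], [Path], and [Customer] together define a duplicate:

Code:
DELETE k
FROM   [Keys] k
WHERE  EXISTS (SELECT 1
               FROM   [Keys] d
               WHERE  d.[Name]     = k.[Name]
                 AND  d.[Path]     = k.[Path]
                 AND  d.[Customer] = k.[Customer]
                 AND  d.[ID]       < k.[ID])  -- a lower-ID copy exists, so this row goes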
I'm having problems deleting rows in a referenced table. Are there any tools to tell me which tables to delete from first, before deleting the rows in the table which contains the primary key?
I have a lot of tables, let's say over 300, so it's hard for me to guess which comes first... What should I keep in mind when deleting rows under referential integrity?
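There is no single built-in tool for the ordering, but the dependency information is queryable. A sketch that lists which table references which, so child tables can be cleared before their parents (works from SQL Server 2000 onward):

Code:
SELECT fk.TABLE_NAME AS ChildTable,
       pk.TABLE_NAME AS ParentTable,
       rc.CONSTRAINT_NAME
FROM   INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS rc
JOIN   INFORMATION_SCHEMA.TABLE_CONSTRAINTS fk
       ON fk.CONSTRAINT_NAME   = rc.CONSTRAINT_NAME
      AND fk.CONSTRAINT_SCHEMA = rc.CONSTRAINT_SCHEMA
JOIN   INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk
       ON pk.CONSTRAINT_NAME   = rc.UNIQUE_CONSTRAINT_NAME
      AND pk.CONSTRAINT_SCHEMA = rc.UNIQUE_CONSTRAINT_SCHEMA
ORDER  BY ParentTable, ChildTable

The general rule is to delete from the referencing (child) tables first and work up to the referenced (parent) table, or to define the foreign keys with ON DELETE CASCADE.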
I made an application that inserts some data into a MS SQL 2005 DB Express Edition. One of the columns in my database stores text. The content of that field must, beyond the existing text, append the current ID to its text string. Practically, it means that if on row nr 15 I store the text value "15text", on the next row I will store "16text". I figured out that I can get the value of the max ID by creating the following stored procedure.
Once I get the max ID value, I simply concatenate the max ID and the string like:
string.Concat(max_id.ToString(), file_name);
where "max_id is integer return value from stored procedure and "file_name" is a string to rename like "max_id+file_name"
Two problems occur!!!
Problem nr 1.
Since I insert 10 new rows each time, the values 0-9 are appended to the text, like "(0-9)text".
Problem nr 2.
If I delete some rows, the ID does not get updated. It means that after deleting all rows from the table, the next inserted item gets the last ID that existed before the delete, plus 1. The newly inserted item should get the value 1, since the table is empty after deleting all rows from it!
CREATE PROCEDURE Zakl_AddNewRow
    @zakl_nazwa  VARCHAR(25),
    @zakl_miasto VARCHAR(20),
    @zakl_ulica  VARCHAR(30)
AS
INSERT INTO Zaklady (Zakl_Nazwa, Zakl_Miasto, Zakl_Ulica)
VALUES (@zakl_nazwa, @zakl_miasto, @zakl_ulica)
I've also made a procedure which deletes a row.
Code Block
CREATE PROCEDURE Zakl_DeleteRow
    @zakl_id INT
AS
DELETE FROM Zaklady WHERE Zakl_ID = @zakl_id
So where exactly is my problem?
If I execute the procedure "Zakl_AddNewRow", I add a new row to the ZAKLADY table. The column ZAKL_ID is filled in automatically because, as you can see, it is declared as INT IDENTITY(10,10). I have added 5 rows, so the ZAKL_ID values are 10, 20, 30, 40, 50. Now I delete the row with ZAKL_ID = 30. After that I add a new row once again. The result is that its ZAKL_ID value is 60.
So the number 30 of the identity sequence is no longer assigned to any ZAKL_ID. I wanted the newly added row to have ZAKL_ID = 30, not 60; the second added row should then have ZAKL_ID = 60. Is there any possibility to do this? If yes, what should I change in the SQL code?
I've got a very filesize-restricted database. I noticed that when I insert 1000 rows my filesize jumps to 80k, but when I delete all but 50 of those rows, the filesize actually increases to 84k. How do I make sure the filesize of my database shrinks when I delete rows?
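Deleting rows frees space inside the data and log files, but the files themselves only get smaller when they are shrunk explicitly. A sketch, with made-up database and file names:

Code:
-- shrink the whole database, leaving about 10% free space
DBCC SHRINKDATABASE (MyDatabase, 10);

-- or shrink a single file to a target size in MB
DBCC SHRINKFILE (MyDatabase_Data, 1);

The small increase after the delete is most likely the transaction log growing to record the delete itself.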
I was wondering if anyone had a suggestion as to how to delete duplicate rows from a table. I have been doing this:
SELECT * INTO TempUsersNoRepeats FROM TempUsers2 UNION SELECT * FROM TempUsers3
This way I end up with a total of four tables (the fourth table being the original Users table), and I was hoping there was a way I could do this all within the original Users table and not have to create the three TempUsers tables.
Hi, new to this database and this forum as I am, I would like to ask for a couple of pointers. My SQL 2000 tables are ready and I need to schedule a daily upload of .txt files. These contain a rolling 7 days of stats. Q1: How best to schedule the automatic uploading of this data to the respective tables in SQL Server (field names are identical)? Q2: How to schedule a daily deletion of those rows which are in the tables already (each day, 6 days must be deleted and 1 kept)?
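On SQL 2000 both steps can live in one daily SQL Server Agent job. A very rough sketch of the job step's T-SQL, with made-up table, column, and file names, and a cutoff that would need adjusting to whichever day must be kept:

Code:
-- 1) delete the overlapping days that today's file will re-supply
DELETE FROM dbo.DailyStats
WHERE  StatDate >= DATEADD(d, -6, GETDATE())

-- 2) load the rolling 7-day extract from the text file
BULK INSERT dbo.DailyStats
FROM 'C:\imports\stats.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2)   -- FIRSTROW skips a header row, if any

DTS (in SQL 2000) is the other common option for the file load, with the same DELETE as a preceding step.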
Is it possible to delete multiple rows from multiple tables based on information specified? Can you write a query that would pull the information if you knew what tables it would need to look in? If anyone knows, I would greatly appreciate any help; I am not sure about this.
I've deleted about 3-4 million rows from one of my tables, as the data was old and no longer needed. The problem is that now queries are running extra slow. I am in the process of running Tara's isp_ALTER_INDEX; however, it's taking quite a long time and seems to be slowing things down even further while it's running, as expected. (It's been running 4 hours already; I have stopped it and will rerun it during a slower traffic period for the DB server.)
Just wondering if I have the right approach here or if anyone else has any suggestions.
I am running a simple merge replication in SQL Server 2000. I have one database that is the publisher, and a second database that is the subscriber. When I add a new row to the subscriber it will replicate to the publisher as expected. However, the new row at the subscriber will then be deleted without explanation. The row will remain at the publisher though.