I have a table with about half a million records, each representing a patient in my county.
Each record has a field (RRank) which ranks the patients by how "unwell" they are according to a previously applied algorithm. The most unwell patient has an RRank of 1, the next-most unwell has RRank = 2, and so on.
I have just deleted several hundred records (which relate to patients now deceased) from the table, thereby leaving gaps in the RRank sequence. I want to renumber the remaining records to get rid of the gaps.
I can see what I want to accomplish by using ROW_NUMBER, thus:
SELECT ROW_NUMBER() OVER (ORDER BY RRank) AS RecNumber, RRank
FROM RPL
ORDER BY RRank
I can see the numbers in the RecNumber column falling behind the RRank values as I scan down the results.
My question is: how do I convert this into an UPDATE statement? I had hoped that I could do something like:
UPDATE RISC_PatientList_TEMP SET RRank = ROW_NUMBER() Over (ORDER BY RRank);
but the system informs me that windowed functions can only appear in the SELECT or ORDER BY clauses, and an UPDATE has neither (I can't legally add an ORDER BY to it).
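From what I have read since, it might work to wrap the ROW_NUMBER in a CTE and update through the CTE. Is something like this sketch (untested) the right idea?

;WITH Renumbered AS
(
    -- number the surviving rows in their current RRank order
    SELECT RRank,
           ROW_NUMBER() OVER (ORDER BY RRank) AS RecNumber
    FROM RISC_PatientList_TEMP
)
UPDATE Renumbered
SET RRank = RecNumber;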
New monthly data is being loaded, checked and finally approved after 6 or 7 iterations. Because of these iterations, the monthly data set gets added, then deleted, then added again a few times. Because the table is big, this process takes time. Any thoughts on how to make the delete/insert process faster? Keep in mind I cannot do much because it is a production table and is being accessed by other users for other analysis.
The delete is done based on Trx_Date, which is a year/month combo, like 201508.
The table holds monthly sales aggregated by customer.
The table structure is:
CREATE TABLE [dbo].[Sales](
    [batch_key] [int] NOT NULL,
    [Company_key] [int] NOT NULL,
    [customer_key] [char](22) NOT NULL,
    [Trx_Date] [int] NOT NULL,
    [account] [nvarchar](35) NOT NULL,
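One idea I am weighing, to keep blocking and log use down, is to delete the month in small batches rather than one big statement. A rough sketch (the batch size and the month value are just examples):

DECLARE @BatchSize int = 50000;

WHILE 1 = 1
BEGIN
    -- delete one slice of the month being reloaded
    DELETE TOP (@BatchSize)
    FROM dbo.Sales
    WHERE Trx_Date = 201508;

    IF @@ROWCOUNT = 0
        BREAK;   -- nothing left for this month
END;

Would that help, or is there a better pattern for a table other users are querying at the same time?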
I have 2 tables, ResumeSkill (child table) and Skill (parent table). There were duplicates in the parent table, so after removing the foreign key constraint on the child table I deleted all the duplicate values from the parent table. But those deleted duplicates still have references in the child table, which now need to be deleted as well.
ResumeSkill: Id, SkillId
Skill: SkillId, Name
I want to delete all the records from ResumeSkill that don't have a matching SkillId in the Skill table.
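Is something along these lines the right way to do it? This is just a sketch of what I have in mind:

-- remove ResumeSkill rows whose SkillId no longer exists in Skill
DELETE rs
FROM ResumeSkill AS rs
WHERE NOT EXISTS (SELECT 1 FROM Skill AS s WHERE s.SkillId = rs.SkillId);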
I need to pull records from a single table such that I get a subset defined like this:
( acctcode = 'xh364' and product = 'T&E' )
And return all of the rest of the records:
( acctcode = '%' and product = '%' )
Can I do this within a WHERE clause, or will this require CASE / ELSE? There will be other specific acctcode/product rules added later. I could do this with a UNION, but I need to avoid that if possible.
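To make it concrete, this is roughly the shape I think I am after with a plain WHERE (MyTable is a placeholder name, and I may be misreading my own requirement):

SELECT *
FROM MyTable
WHERE (acctcode = 'xh364' AND product = 'T&E')   -- the specific rule
   OR acctcode <> 'xh364';                       -- everything not covered by a specific rule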
I would like to know if there is a way to find out who changed a user's roles/access WITHOUT using the audit function. For example, if a user account was created and given SA access and then changed to read only, how can I find out who made that change? I tried searching for an answer but kept getting no results. I'm thinking this may tie into the sys.sysusers view?
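The closest thing I have found so far is reading the default trace, along these lines, but I am not sure it covers every case or how far back it goes:

-- look in the default trace for role-membership changes
-- (only works while the events are still in the rolling default trace files)
DECLARE @path nvarchar(260);

SELECT @path = path
FROM sys.traces
WHERE is_default = 1;

SELECT te.name AS event_name,
       t.DatabaseName,
       t.TargetUserName,
       t.LoginName,      -- who made the change
       t.StartTime
FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
JOIN sys.trace_events AS te
    ON te.trace_event_id = t.EventClass
WHERE te.name LIKE 'Audit Add%Role%'
ORDER BY t.StartTime DESC;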
I have been using Master Data Services for a couple of months now. I can load, update, merge and soft delete data in MDS. Occasionally we even have to hard delete data from MDS. If we keep soft deleting records in an MDS table, eventually there will be a huge number of soft-deleted records. Is there an easy way to hard delete all the soft-deleted records from all MDS tables in a specific model?
I have a question to ask. I'm new to SQL Server but have been trying to learn it. I have just installed SQL Server 2000 SP4 and have a dummy Access database that needs to be migrated to SQL Server 2000 for a test. I used the DTS wizard provided by SQL Server 2000, and it was successful; no errors appeared. I have also successfully linked the tables from Access to SQL Server. Now, my question is: how do I add/edit/delete records in SQL Server via Access? I need your help or guidance on this.
Writing the query for the following: I need to collapse the continuity. If the termdate for an ID is one day less than the effdate of the next record (for the same ID), I need to collapse the records. See the example below. How should I write the query that will give me the desired output, i.e., get MIN(effdate) and MAX(termdate) whenever termdate is one day less than the effdate of the next record?
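For reference, this is the shape of query I have been trying to write. It is only a sketch: it assumes SQL Server 2012+ for LAG, and #Coverage stands in for my real table:

;WITH Flagged AS
(
    -- flag a row as the start of a new stretch unless it starts
    -- exactly one day after the previous row's termdate
    SELECT ID, effdate, termdate,
           CASE WHEN DATEADD(day, 1,
                     LAG(termdate) OVER (PARTITION BY ID ORDER BY effdate)) = effdate
                THEN 0 ELSE 1 END AS is_start
    FROM #Coverage
),
Grouped AS
(
    SELECT ID, effdate, termdate,
           SUM(is_start) OVER (PARTITION BY ID ORDER BY effdate
                               ROWS UNBOUNDED PRECEDING) AS grp
    FROM Flagged
)
SELECT ID,
       MIN(effdate)  AS effdate,
       MAX(termdate) AS termdate
FROM Grouped
GROUP BY ID, grp
ORDER BY ID, MIN(effdate);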
I have seen the threads here already about connections and SQL Server, but I too am having a connection pool problem.
I am explicitly making a connection, creating a command object, executing a data reader, then closing the command and disposing of the connection. The connection just prior to the Dispose is definitely closed.
However, when I run sp_who in SQL Server, the connection is still listed. I have a connection timeout of 30 seconds, so I am considering reducing this to 5 seconds, but I don't wish to. So when a process is called repeatedly, instead of using an existing connection in the pool, it thinks that connection is still active and therefore creates a new one. Because all of this is in an ascx, I can't code for one connection to stay open for all 30 instances of the control.
This is a problem because I am creating controls at run time and using Page.DataBind, and at that point I exceed the connection pool and the site crashes out with the "SQL Server Timeout" error.
I have been trying to solve a locking problem for the past couple of days. Please help me!
Scenario: I have an SSIS package with 2 data flow tasks. The 1st data flow task deletes records from 5 tables, and the 2nd data flow task should insert records into 1 of the five tables after the 1st data flow task succeeds. This scenario runs in a transaction.
In the scenario above, the 2nd data flow task hangs at runtime and does not complete. With the sp_who2 command I can see that there is an intent shared (LCK_M_IS) lock on the table and the status is SUSPENDED.
I don't know how to get out of this locking. Please help.
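For what it's worth, this is the diagnostic query I have been running alongside sp_who2 to see what the suspended session is waiting on (just for investigation, not part of the package):

SELECT r.session_id,
       r.blocking_session_id,
       r.status,
       r.wait_type,
       r.wait_resource,
       r.command
FROM sys.dm_exec_requests AS r
WHERE r.session_id > 50;   -- skip most system sessions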
I am using SQL Server Agent jobs that run each morning to update the records in a table to match what they should be for that day. I built and tested them using a test table called "testtable1", and that worked fine. But when I switched over to our production table, the job fails saying the table has to be declared. What would be the difference? The production table has a "@" in front of the name; is that causing issues?
USE [Live_build]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
BEGIN
    DELETE FROM @ZIPLIST
    INSERT INTO @ZIPLIST
    SELECT * FROM tblZip3DSWed;
END
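I am guessing the @ prefix means @ZIPLIST is a table variable, which would have to be declared in the same batch before it is used, something like the sketch below (the column definition is a placeholder; the real one would have to match tblZip3DSWed). Is that the right track, or should the job use a permanent table instead?

-- declare the table variable in the same batch that fills it
DECLARE @ZIPLIST TABLE (Zip3 char(3));   -- placeholder definition

INSERT INTO @ZIPLIST
SELECT * FROM tblZip3DSWed;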
Using ASP.NET 2.0 or Visual Web Developer 2005 Express Edition, I could not find a single example with the DataGrid control or FormView control that achieves the basic functionality of add/edit/delete records against a remote SQL Server 2000 database. When I searched for such examples I found many videos and samples using a local database, and in the learning path for the Visual Web Developer Express editions there are very good examples and videos using a local SQL Server 2005 database, BUT not with a remote database.
My question: is it possible to get the basic add/edit/delete functionality against a remote SQL Server 2000 database using the ASP.NET 2.0 DataGrid and FormView controls? This may look like a lazy developer question, but in my learning path I found GREAT videos for the Visual Web Developer Express edition and learned the simple way of dragging and dropping the controls, creating a local database and its connections, and with a few clicks adding/editing/deleting records using the DataGrid and FormView controls. In real life those examples are not very useful, because many of our web interfaces run against SQL Server 2000 and we need to convert them to ASP.NET. Similarly, I wanted to drag and drop a couple of controls, write less code, and accomplish the basic add/edit/delete functionality against a remote SQL Server 2000 database.
If you think it is possible, please guide me to a URL where I can learn and implement this functionality, the way it is explained in the beginners' learning videos. I am referring to http://msdn2.microsoft.com/en-us/express/aa700802.aspx.
I have these two tables, Log and CategoryLog. I need to archive records older than 13 months from both tables into two separate archive tables, and then delete the archived records from Log and CategoryLog. The problem is that only the Log table has a date column; CategoryLog does not have any date column, but the two tables are connected by a column (LogID). How do I archive the data and then delete the archived data from both tables?
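To make the question concrete, this is roughly what I have in mind; the archive table names and the LogDate column name are placeholders, and only LogID comes from the real schema:

BEGIN TRANSACTION;

-- archive Log rows older than 13 months
INSERT INTO Log_Archive
SELECT l.*
FROM [Log] AS l
WHERE l.LogDate < DATEADD(month, -13, GETDATE());

-- archive the related CategoryLog rows via LogID
INSERT INTO CategoryLog_Archive
SELECT cl.*
FROM CategoryLog AS cl
WHERE EXISTS (SELECT 1 FROM Log_Archive AS a WHERE a.LogID = cl.LogID);

-- delete the child rows first, then the parent rows
DELETE cl
FROM CategoryLog AS cl
WHERE EXISTS (SELECT 1 FROM Log_Archive AS a WHERE a.LogID = cl.LogID);

DELETE l
FROM [Log] AS l
WHERE l.LogDate < DATEADD(month, -13, GETDATE());

COMMIT TRANSACTION;

Is that a reasonable approach, or should the deletes be batched?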
Currently we have a database about 300 GB in size. Because our backup system failed some time ago, we were left with a transaction log file which grew to about 160 GB. Our backups are working again now and everything is fine. My understanding is that the transaction log file is now practically empty, but its capacity remains at 160 GB.
When you delete records, the deleted transactions get logged to the transaction log file. My understanding is that when a log backup is done, that logged work is cleared out of the log file so the space can be reused.
Could I make use of this relatively large transaction log file and start deleting records without actually adding to the transaction log file's size?
The plan is to delete records from logging tables that are not referenced by any other table, without this increasing the transaction log file. For example, over a period of a few weeks we could delete a chunk of records from a table; then, after a backup has completed, delete another chunk, and so on until the table is down to only the records we still need. Will this work?
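Before starting, I was planning to sanity-check how much of the log is actually in use and whether anything is holding it, along these lines:

-- check log size/usage and whether anything is preventing log reuse
DBCC SQLPERF(LOGSPACE);   -- Log Size (MB) and Log Space Used (%) per database

SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDatabase';   -- placeholder database name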
I have a database created on SQL Server 2000, and I have now moved it to SQL Server 2005.
Everything works fine, but there is one user which cannot be removed.
In the user properties window, the assigned schema list is empty. The user is a db_owner of the database. When I try to update the user, it asks me for the login; the login is empty, but the field is disabled.
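To double-check outside the GUI, I was going to confirm whether the user owns any schemas, since I understand that blocks DROP USER (the user name below is a placeholder):

-- list schemas owned by the problem user
SELECT s.name AS owned_schema
FROM sys.schemas AS s
JOIN sys.database_principals AS p
    ON p.principal_id = s.principal_id
WHERE p.name = N'ProblemUser';

-- if any rows come back, I believe ownership can be moved before dropping the user, e.g.:
-- ALTER AUTHORIZATION ON SCHEMA::db_owner TO dbo;
-- DROP USER ProblemUser;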
Is there any way I can turn off deletes in SQL Server? I want to prevent anyone from inadvertently deleting rows in tables. I thought, worst case, I could put triggers on the tables to perform a rollback.
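To make it concrete, these are the two options I am considering (the principal, table and trigger names are placeholders):

-- Option 1 (sketch): deny DELETE at the database level to a user or role
DENY DELETE TO [ReportingUsers];
GO

-- Option 2 (sketch): roll back any delete against one table
CREATE TRIGGER trg_NoDelete_MyTable
ON dbo.MyTable
AFTER DELETE
AS
BEGIN
    RAISERROR('Deletes are disabled on dbo.MyTable.', 16, 1);
    ROLLBACK TRANSACTION;
END;
GO

Is one of these the usual way, or is there a better option I am missing?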
I have several reports for users to view on our intranet. After installing the SQL 2005 SP2 patch, I cannot delete a user or a user's authority from a report on the Properties tab. An error message is shown in the status bar; it indicates "JavaScript Error: 'Return' statement outside of function". It seems something is wrong with the 'Delete' function in SQL 2005 after the update; the other functions work fine. Could you point out how to fix it, or whether I need to install any updates / hotfixes? Thanks a lot!
I have been working with SQL Server Express 2005 on Vista. I created a small C++ 2008 application which interfaces with SSE 2005. When running my application, it automatically created a user instance which I would like to get rid of. I just forgot at the start to set "User Instance=False" in my connection string (oops...).
I'm aware, from reading I did, that a couple of tables as well as log files, etc. are created under the directory "utilisateurs\...\AppData\Local\Microsoft\Microsoft SQL Server Data\SQLEXPRESS". SQLEXPRESS is the instance name chosen during the install of SSE 2005.
Also, using the following SQL statement:
select owning_principal_name, instance_pipe_name, heart_beat from sys.dm_os_child_instances
I got the following result:
owning_principal_name: PC-DE-STEPHANE\Stéphane
instance_pipe_name: \\.\pipe\6172F4E8-622E-4A...\tsql\query
heart_beat: dead (at this moment...)
==> How can I do a proper cleanup of all this and get rid of the user instance itself?
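What I am planning to try, unless someone warns me off, is to turn off user instances on the parent SQLEXPRESS instance and then delete the dead child instance's files under the AppData folder above:

-- sketch: stop SQL Server Express from spawning user instances
-- (run against the parent SQLEXPRESS instance)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'user instances enabled', 0;
RECONFIGURE;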
I created a main program which launches two jobs at a time; each job does some processing, and at the end I'm trying to delete those jobs after storing the job details in a custom table I created (a cleanup sub-program).
Out of the two jobs, I am able to store one job's details (job_name, job_id, start_time and end_time) in the custom table and delete that job, but the job that completes last is neither getting captured nor getting deleted from the sysjobs and sysjobhistory tables.
I included this step (which calls the cleanup sub-program to store the job details and delete the job) at the end. I can see from a debug message that the cleanup procedure is getting called, but it neither stores the details nor deletes the job.
When I execute this cleanup program separately, it does store the job details and delete the job.
I am using SQL 2012 SE. I have two databases, say A and B, with the same structure and relationships. There are 65 tables in each database. A is already replicating data to database C for 35 tables. Now, every day, I need to move data from A to B that is newer than getdate()-1 for all the tables, and once the move is done I need to delete that data from A; then the same thing the next day, and every day after that. Since this involves 65 tables, it is challenging to identify the insert order. Once the insert order is identified, the delete order will be the reverse of it.
Is there a tool or any stored procedure that could generate the insert-order script? The Generate Scripts 'data only' option scripts out the entire data set, and these databases are almost 400 GB; some tables have 200 million+ rows, so it takes forever.
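The direction I am exploring is deriving a rough insert order from the foreign keys in sys.foreign_keys, something like the sketch below; it assumes there are no circular FK chains, and the delete order would just be the reverse:

-- level tables by FK dependencies; level 0 can be inserted first
;WITH fk AS
(
    SELECT parent_object_id     AS child_id,    -- referencing table
           referenced_object_id AS parent_id    -- referenced table
    FROM sys.foreign_keys
    WHERE parent_object_id <> referenced_object_id  -- ignore self-references
),
ordered AS
(
    SELECT t.object_id, 0 AS lvl
    FROM sys.tables AS t
    WHERE NOT EXISTS (SELECT 1 FROM fk WHERE fk.child_id = t.object_id)

    UNION ALL

    SELECT fk.child_id, o.lvl + 1
    FROM ordered AS o
    JOIN fk
        ON fk.parent_id = o.object_id
)
SELECT OBJECT_SCHEMA_NAME(object_id) AS schema_name,
       OBJECT_NAME(object_id)        AS table_name,
       MAX(lvl)                      AS insert_level
FROM ordered
GROUP BY object_id
ORDER BY insert_level, table_name;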
I have an issue with a DELETE statement. In the code given below (it's part of the actual proc), if we use TRUNCATE to clean the temp tables, everything goes fine. But if I use DELETE in place of TRUNCATE, the system skips the IF block 'if (@script_type = 1 OR @script_type = 2)'. I am not able to understand this behavioral difference between DELETE and TRUNCATE. Recently the database has started being used for replication, but that should not be the reason.
SELECT @max_rows = COUNT('X') FROM #temp_table1
SET @row_cnt = 1
WHILE @row_cnt <= @max_rows
BEGIN
I have a table with about 466 million rows. In this table there is an int column called WeeksToRetain, as well as an EventDate column containing the date the row was inserted. I am trying to delete all the rows that should be deleted according to WeeksToRetain. For example, if the EventDate is 5/07/15 with a 1 in the WeeksToRetain column, the row should be removed by 5/14/15. I am not sure what days SQL considers the beginning and end of the week. However, the core issue I am having is the sheer mass of deletions I must do and the resulting log growth.
So I am trying to do the delete in batches. More specifically, I want to load a temporary table with a million rows, then use it to load a sub temporary table with 100,000 rows and join that to the table I want to delete from, looping through 10 times to get through the million. The Logging.EvenLog table, which is the table I'm trying to purge, has a clustered index on EventDate (ASC). I would like to run this in a scheduled job with enough time between executions for log backups to run.
DECLARE @i int
DECLARE @RowCount int
DECLARE @NextBatchDate datetime

CREATE TABLE #BatchProcess
(
    EventDate datetime,
    ApplicationID int,
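A simpler variation I am also considering, instead of the two levels of temp tables, is a straight batched delete driven by the retention rule. The batch size is just a guess, and the DATEADD in the WHERE will scan rather than seek on the clustered index:

DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    -- delete the next slice of expired rows
    DELETE TOP (100000)
    FROM Logging.EvenLog
    WHERE DATEADD(week, WeeksToRetain, EventDate) < GETDATE();

    SET @rows = @@ROWCOUNT;
END;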
I'm new to using SQL Server. I've been asked to optimize a series of scripts that query over 4 million records. I've managed to add indexes and remove a cursor, which increased performance. Now, when I look at the execution plan, the only costly query is a DELETE statement from the main table; it shows a SORT which costs 71%. The table has 2 columns and a unique index. Here is the current index:
ALTER TABLE [dbo].[Qry] ADD CONSTRAINT [Qry_PK] PRIMARY KEY NONCLUSTERED
(
    [QryNum] ASC,
    [ID] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = ON, SORT_IN_TEMPDB = OFF,
        IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON,
        ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
Question: Will the SORT affect the overall performance? If so, is there anything I should change within the index that would speed up my query?
I have created a dynamic MERGE statement (SCD2) stored procedure, which inserts the record if there is no match, and if the BBXkey matches from the source table to the destination table, it updates the old record to latest version 0 and inserts the new record with latest version 1.
I am getting the error below when I have more than one row with the same BBXkey in my source table. How can I handle this?
BBXkey is nothing but a key I derive by combining 2 columns.
Msg 8672, Level 16, State 1, Line 6
The MERGE statement attempted to UPDATE or DELETE the same row more than once. This happens when a target row matches more than one source row. A MERGE statement cannot UPDATE/DELETE the same row of the target table multiple times. Refine the ON clause to ensure a target row matches at most one source row, or use the GROUP BY clause to group the source rows.
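My current thinking is to de-duplicate the source on BBXkey before it reaches the MERGE, so that a target row can never match more than one source row. A sketch (the source table name, the two columns behind BBXkey, and the LoadDate tiebreaker are all placeholders):

;WITH DedupedSource AS
(
    SELECT s.*,
           ROW_NUMBER() OVER (PARTITION BY s.Col1, s.Col2      -- the 2 columns that make up BBXkey
                              ORDER BY s.LoadDate DESC) AS rn  -- most recent row wins
    FROM dbo.SourceTable AS s
)
SELECT *
FROM DedupedSource
WHERE rn = 1;   -- use this result set as the MERGE source

Is that the right way to handle it, or is there a better pattern for SCD2 merges?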
I have a master table and I need to import its rows into a parent table and child tables.
The master table is Flatfile_Inventory. The parent table is INVENTORY. The child tables are INVENTORY_AMOUNT, INVENTORY_DETAILS and INVENTORY_VEHICLE. Error details go to LOG_INVENTORY_ERROR.
I have 4 duplicate rows in Flatfile_Inventory which I have already inserted into the parent and child tables.
When I run the query again using the stored procedure, it reports that all 4 rows are duplicates and moves them to LOG_INVENTORY_ERROR.
What I need is this: if there are duplicate rows in Flatfile_Inventory when I start inserting into the parent and child tables, and an already-inserted row has the same unique ID, I must identify it and delete that row from both the parent and child tables, and then the latest row from Flatfile_Inventory must be inserted into the parent and child tables.
I am using SQL Server 2012 SE. I am trying to delete rows from a couple of tables (GetPersonValue has 250 million rows and I am trying to delete 50 million; GetPerson has 35 million rows and I am trying to delete 20 million). These tables are in transactional replication. The plan is to delete data older than 400 days.
I tried moving the last 400 days of data to new tables, and that took about 11 hours. If I delete data in chunks of 500,000, it takes a long time to rebuild the indexes (delete plus index rebuild is 13 hours). Since I am using Standard Edition, partitioning won't work.
Find the DDL below:
GO
CREATE TABLE [dbo].[GetPerson](
    [GetPersonId] [uniqueidentifier] NOT NULL,
    [LinedActivityPersonId] [uniqueidentifier] NOT NULL,
    [CTName] [nvarchar](100) NULL,
    [SNum] [nvarchar](50) NULL,
    [PHPrimary] [nvarchar](50) NULL,
I am developing a form for a mortgage company. There can be any number of borrowers on a given loan, and the business has asked that this form return only 2 borrowers at a time for a loan. For example, if there are 3 borrowers for a loan, they want the first copy of the form to print the first 2 borrowers and then another copy of the form to print the 3rd. No matter how many copies are printed, they want the borrower information to be labeled as 'Borrower1' xyz and 'Borrower2' xyz. Also, there will be a LOT more fields returned on the real form, so the sample information below is very simplified test data.
I don't want that 2nd record to return. This result is what makes me think of gaps and islands, but I don't know if the 2nd record is really an island, since (1) it's not stored this way (it's only returning this way because of the query) and (2) it's not sequential data. I tried restricting this by putting the query into a CTE and then returning only the odd-numbered records, as I have below. This runs pretty quickly when dealing with one loan, but I am concerned that the CTE will be slow when we run batches of loans.
Attempt with CTE:

--With CTE
;WITH cte AS
(
    SELECT Borrower1 = BorrowerName,
           Borrower2 = LEAD(BorrowerName) OVER (ORDER BY BorrowerOrder),
           RowNumber = ROW_NUMBER() OVER (ORDER BY BorrowerOrder)
[code]...
Is there a better, cleaner way to do this? Or is the CTE the best way to go?
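One alternative shape I have been sketching, in case it is cheaper than the CTE-plus-odd-rows approach, pairs the borrowers by their order number directly. It assumes BorrowerOrder runs 1, 2, 3, ... for a loan, and Borrowers is a placeholder for the real source (the real query would also group per loan):

SELECT (BorrowerOrder + 1) / 2 AS FormCopy,
       MAX(CASE WHEN BorrowerOrder % 2 = 1 THEN BorrowerName END) AS Borrower1,
       MAX(CASE WHEN BorrowerOrder % 2 = 0 THEN BorrowerName END) AS Borrower2
FROM Borrowers
GROUP BY (BorrowerOrder + 1) / 2
ORDER BY FormCopy;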
I have an idea of the SMK, DMK, and symmetric and asymmetric keys. I also know about TDE. But is there any way to encrypt all the records in all the columns of a table in a database? I actually need to encrypt the database. Someone thinks that when someone writes a SELECT query, they should get back encrypted records. As far as I am concerned that is not possible; I can encrypt specific columns using symmetric or other keys...
Is there any software or any tool which will provide encrypted records of a database?
With this query I get only the records I need, but I would like to output them in this way:
1 - 20
21 - 30
31 - 40
Of course, in the real environment the IDs are not consecutive; this is just one example of data.
declare @temp table (ID int)
declare @i int = 1

while (@i < 1000)
begin
    insert into @temp values (@i)
    set @i = @i + 1
end

select ID
from (
    select ID, row_number() over (order by ID) as rn
    from @temp
) q
where (rn % 20 = 0) OR (rn % 20 = 1)
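What I would eventually like to produce from that, assuming the intent really is fixed groups of 20, is one row per group formatted as 'first - last', along these lines (it has to run in the same batch as the @temp setup above):

select cast(min(ID) as varchar(10)) + ' - ' + cast(max(ID) as varchar(10)) as IDRange
from (
    select ID,
           (row_number() over (order by ID) - 1) / 20 as grp
    from @temp
) x
group by grp
order by min(ID);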