I have a table with 35,000 records in it. I want to update the value in column A for only the first 5,000 records, leaving column A unchanged for the remaining 30,000 records. What command would I use to update column A for the first 5,000 records?
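For illustration, a minimal sketch, assuming an Id key defines "first" (table and column names are placeholders):

;WITH FirstRows AS
(
    SELECT TOP (5000) ColumnA
    FROM dbo.MyTable
    ORDER BY Id          -- "first" defined by the key; adjust as needed
)
UPDATE FirstRows
SET ColumnA = 'NewValue';

Updating through the CTE limits the UPDATE to exactly the 5,000 rows the TOP ... ORDER BY selects.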
I am wondering if it's possible to lock a SQL table for a specific amount of time, say 5 minutes.
There is a particular 'Phone' table on the database that should never get locked. Yet, during the development stages we have noticed that the table gets locked at times. The issue has since been resolved to the best of our ability, but there is still a slight chance that the table can get locked, due to the multiple jobs that query it, when we go live.
If such a situation occurs, we just want to be able to flip the switch that fails the server over, so that the previously mirrored database becomes the principal.
So, I just want to recreate that situation by voluntarily locking the table.
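A minimal way to recreate the lock, assuming the table is dbo.Phone: take an exclusive table lock inside an open transaction and hold it with WAITFOR, run from its own session:

BEGIN TRAN;
-- Take an exclusive table lock; exclusive locks are held until the transaction ends
SELECT TOP (1) * FROM dbo.Phone WITH (TABLOCKX, HOLDLOCK);
WAITFOR DELAY '00:05:00';  -- hold the lock for 5 minutes
COMMIT TRAN;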
I need to show the total number of rows in specific tables.
The requirement is as follows:
As part of the planning process to expand the database that supports Northwind operations, the IT manager would like to know how many rows are currently in specific tables so that he can conduct capacity planning.
The results need two columns: TableName, containing all the tables in the database, and Rows, containing the total row count per table.
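A sketch using the catalog views (SQL Server 2005 and later; on SQL Server 2000, Northwind's era, sysindexes.rows with indid 0 or 1 plays the same role):

SELECT t.name      AS TableName,
       SUM(p.rows) AS [Rows]
FROM sys.tables t
JOIN sys.partitions p
    ON p.object_id = t.object_id
   AND p.index_id IN (0, 1)  -- heap or clustered index only, to avoid double counting
GROUP BY t.name
ORDER BY t.name;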
Can you create an UPDATE trigger and use some type of code to figure out which stored procedure just updated the current table? If not, how can I achieve what I want? I tried to run SQL Profiler, and I don't understand why I can't simply have Profiler filter events only for the specific database ID and the table's object ID I chose. What am I doing wrong with SQL Profiler? I was testing this through SQL EM. I had the filters set for a specific database ID and a specific table's object ID, yet when I open another table, SQL Profiler captures that information too. Thank you
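For what it's worth, one hedged workaround on SQL Server 2005 or later is to log the calling batch from inside the trigger via the DMVs (the audit table and all names here are hypothetical; on SQL Server 2000, DBCC INPUTBUFFER is the rough equivalent):

CREATE TABLE dbo.UpdateAudit
(
    AuditId  int IDENTITY(1,1) PRIMARY KEY,
    LoggedAt datetime NOT NULL DEFAULT GETDATE(),
    SqlText  nvarchar(max) NULL
);
GO
CREATE TRIGGER trg_MyTable_WhoUpdated ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Log the batch text of the session that fired this trigger
    INSERT INTO dbo.UpdateAudit (SqlText)
    SELECT st.text
    FROM sys.dm_exec_requests r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) st
    WHERE r.session_id = @@SPID;
END;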
A friend reminded me of a problem we tried to solve a few years ago and were unsuccessful. Below is a copy of the email he sent me. We would very much appreciate any ideas from the community. Thanks! Let's start with a simple schema where you have 4 tables:
I have a situation where deleting old records is blocking updates to the latest records on a highly transactional table, and the application is getting timeout errors.
In detail: I have one table called Tran_table1 in an OLTP database. Tran_table1 is a highly transactional table that receives inserts and updates continuously.
While archiving records more than 2 years old from Tran_table1 into Tran_table1_archive in batches (using the DELETE ... OUTPUT INTO clause), any UPDATEs on Tran_table1 get blocked, and the result is timeout errors in the application.
Are there any SQL Server hints to avoid the blocking?
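A minimal sketch of the batched archive with small batches and a pause between them, so blocked UPDATEs can get through (the CreatedDate column is an assumption):

DECLARE @rows int;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (5000) FROM dbo.Tran_table1 WITH (ROWLOCK)
    OUTPUT deleted.* INTO dbo.Tran_table1_archive
    WHERE CreatedDate < DATEADD(year, -2, GETDATE());

    SET @rows = @@ROWCOUNT;
    WAITFOR DELAY '00:00:01';  -- let waiting UPDATEs acquire their locks between batches
END;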
In one of our forthcoming projects, with ASP.NET/C#/MS SQL Server, we have to deal with a business table having about 15 million records. We want to know which methodologies we should adopt, from both a front-end and a back-end perspective, so the site gives optimised performance. Also, in place of a dedicated server, the hosting company provides MSDE (which comes with .NET). Will this create any problem for this project, which has such a huge table? Should we go for some advanced database technique, such as clustering, splitting tables, etc.?
The following are the fields that the business table contains:
ID, Category ID (which comes from a Category table, each business is under a category), BusinessName, SignupDate, Address1, Address2, Phone Number, Hours Of Operation, Years in Business, LicenseNumber, DiscountCoupon, Website
I'm having a bit of a problem putting together a query that will update records in one table from another table. I've got 2 tables; let's call them tblA and tblB. In tblA there are EID (int), QID (int) and OID (int), and in tblB, OID (int) and QID (int). There is also an input parameter @EID. What I want to have happen is that when someone inputs @EID, tblB gets updated from tblA. To give you a heads-up, there are no PKs or FKs in either of the tables. If there is an OID in tblA, it takes the QID from tblA and places it in tblB where the OID from tblA and tblB match. Hopefully this makes sense. I thought that I could do something like this:

CREATE PROCEDURE test3 (@EID int)
AS
UPDATE tblUsed
SET QuestionID = (SELECT QuestionID FROM tblExamQuestions WHERE ExamID = @EID)
WHERE OrderID = (SELECT OrderID FROM tblExamQuestions WHERE ExamID = @EID)
GO

But I get an error that says too many records are being returned by the subqueries. Winston
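One hedged fix is to replace the pair of subqueries with a single join, assuming OrderID is the matching column, so each tblUsed row picks up the QuestionID of its own matching tblExamQuestions row:

UPDATE u
SET u.QuestionID = q.QuestionID
FROM tblUsed AS u
INNER JOIN tblExamQuestions AS q
    ON q.OrderID = u.OrderID
WHERE q.ExamID = @EID;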
Hi, I have an Access application with tables linked via ODBC to MS SQL Server 2000. I'm having a weird problem, probably something I've done without being aware of it (kind of a newbie). The last 20 records (and growing) of a specific table are locked; I can't change them ("another user is editing these records ..."). I know for a fact that no one is editing those records, and yet no user can edit these last records in the MDB, including the administrator, while still being able to add new records. The administrator is able to edit the records in the ADP (SQL Server) where the tables are stored. Please help, the application is rendered inert. Thanks for reading, Oren.
I have a problem where I have to compare 2 records from the same table. This part looks easy, but the problem is that for a user there can be multiple records, and I have to compare each record with its previous instance based on the timestamp. Not only do I have to compare, I also have to perform some analysis. Below is the table script and sample output.
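Since the script isn't shown here, a purely hypothetical sketch of the previous-instance comparison, assuming a table dbo.UserEvents(UserId, EventTime, Value) and SQL Server 2012's LAG:

SELECT UserId,
       EventTime,
       Value,
       LAG(Value) OVER (PARTITION BY UserId ORDER BY EventTime) AS PrevValue  -- NULL for each user's first record
FROM dbo.UserEvents;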
Givens: All SQL Server 2008 or 2012 tools at your disposal.
Production database contains the following tables (simplified for example: constraints ignored, etc.) associated with a racing video game’s server.
-- A player of our game
-- Table greater than 10 million rows
CREATE TABLE [dbo].[User]
(
    [UserId]            [bigint]       NOT NULL
   ,[country]           [int]          NULL  -- User's home country
   ,[name]              [nvarchar](15) NULL  -- User's displayable name ('John', 'Bill')
   ,[subscriptionTier]  [int]          NULL  -- 0 == free, 1 == paid, for instance
)
Assume that rows get written into the event tables at a rate of 1,000 a minute, are never updated once written, and currently are only read on a replica/reporting server.
Question Background: Write up a single query that would return the following: a list of users whose "TotalMoneyEarned" value ever grew (between logon events) at a rate of more than 1,000 per minute (we'd consider these suspicious and flag them for later investigation).
For instance, if the sample data were:
-- example of [Events.UserLogon] data -- not the query output we want
Event 1 is okay because there’s nothing to compare it against
Event 2 is okay because the TotalMoneyEarned only grew 500 in a minute
Event 3 should be flagged, as the value grew 1500 in a minute
Event 4 is okay, as it grew 7,000 in 8 minutes (< 1000 per minute)
Query Output (your query should return data in a format like this):
User Flagged   Logon Time            Rate Since Last Logon (money/minute)
John           2010-10-16 00:21:56   1500
Dave           2010-10-16 00:30:50   3200
Bill           2010-10-16 00:35:23   1000
It is likely that you will need to create sample data for both the User and [Events.Logon] tables. We are looking for a single query that returns data like what is represented in Query Output.
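A hedged sketch of one such query, assuming the 2012 toolset (for LAG) and an [Events].[UserLogon] table with (UserId, LogonTime, TotalMoneyEarned); LogonTime is an assumed column name. A user's first logon has no prior row, so its rate is NULL and it is never flagged:

;WITH Deltas AS
(
    SELECT e.UserId,
           e.LogonTime,
           (e.TotalMoneyEarned
              - LAG(e.TotalMoneyEarned) OVER (PARTITION BY e.UserId ORDER BY e.LogonTime))
           / NULLIF(DATEDIFF(minute,
                             LAG(e.LogonTime) OVER (PARTITION BY e.UserId ORDER BY e.LogonTime),
                             e.LogonTime), 0) AS RatePerMinute  -- NULLIF guards against same-minute logons
    FROM [Events].[UserLogon] AS e
)
SELECT u.[name]        AS [User Flagged],
       d.LogonTime     AS [Logon Time],
       d.RatePerMinute AS [Rate Since Last Logon (money/minute)]
FROM Deltas d
JOIN [dbo].[User] u ON u.UserId = d.UserId
WHERE d.RatePerMinute > 1000;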
I'm trying to update a checkbox from "False" to "True" within a single table for multiple records. I can update a single record using the script below. However, I'm having trouble adding additional IDs to the statement.
(Works) - Update Name_Demo set KEY_CONTACT = 'true' where ID = 225249
(doesn't work) - Update Name_Demo set KEY_CONTACT = 'true' where ID = '225249, 210014, 216543'
It says the query executed successfully but returned no rows.
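The quoted version fails because '225249, 210014, 216543' is compared as one string against the numeric ID column, so no row matches. An IN list with separate numeric values should do it:

UPDATE Name_Demo SET KEY_CONTACT = 'true' WHERE ID IN (225249, 210014, 216543);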
I am trying to update a large table of 45 million records, and the update is taking more than 2 days. Below is my approach:
1. The table has only one clustered index and no other indexes.
2. I am updating in batches of 20,000 records.
3. I changed the recovery model to bulk-logged, the auto-growth size is set to 300 MB, and there is enough space on my disk for the transaction log.
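One hedged refinement of this batching: key each batch on a WHERE predicate so already-updated rows aren't revisited, and checkpoint between batches (names and values are placeholders). Note that BULK_LOGGED does not minimally log UPDATEs, so frequent log backups, or SIMPLE recovery where acceptable, matter more than the recovery-model switch:

WHILE 1 = 1
BEGIN
    UPDATE TOP (20000) t
    SET t.StatusCode = 2          -- hypothetical target value
    FROM dbo.BigTable t
    WHERE t.StatusCode = 1;       -- only rows still needing the change

    IF @@ROWCOUNT = 0 BREAK;

    CHECKPOINT;  -- lets the log truncate between batches under SIMPLE recovery
END;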
Hi, currently we're building a metadata-driven data warehouse in SQL Server 2000. We're investigating the possibility of updating tables with an enormous number of updates and inserts, and the use of checkpoints (for simple recovery, and BACKUP LOG for full recovery). On several websites people speak about the transaction log filling up because its growth can't keep pace with the updates. Therefore we want to create a script which flushes the dirty pages to disk. It's not quite clear to me how it works. Questions we have:

* How does the process of updating, inserting and deleting work in SQL Server 2000 with respect to the log cache, log file, buffer cache, commit, checkpoint, etc.? What happens when?
* As far as I can see now, I'm thinking of creating chunks of 1000 records with a checkpoint after each query. SQL Server autocommits by default, so it will not need an explicit COMMIT. Something like this?
* How do I create chunks of 1000 records automatically without creating an identity field or something? Is there something like SELECT NEXT 1000?

Greetz, Hennie
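On SQL Server 2000, one hedged way to get 1000-row chunks without an identity column is SET ROWCOUNT, which caps the rows touched by each statement (table and column names are placeholders):

SET ROWCOUNT 1000;  -- cap every subsequent statement at 1000 rows
WHILE 1 = 1
BEGIN
    UPDATE dbo.FactStage
    SET Processed = 1
    WHERE Processed = 0;

    IF @@ROWCOUNT = 0 BREAK;

    CHECKPOINT;  -- flush dirty pages; allows log truncation under SIMPLE recovery
END;
SET ROWCOUNT 0;  -- reset to unlimited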
I have a table which is updated daily using a MERGE statement. As records are inserted, updated and deleted, I save the OUTPUT from the MERGE statement into a history table, with a timestamp and an action$ column appended to each record.
Using this history table, I'd like to rebuild the data as of a specific past date. I was able to create a stored procedure that inspects each record in the history table and applies it to the data in a temp table. The stored procedure solution uses multiple queries to rebuild the data at a point in time. I was curious whether there is an easier and more efficient solution using a table function.
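A hedged sketch of such a function, assuming the history table carries a business key and a change timestamp alongside the action$ column (all names here are placeholders): take the latest history row per key at or before the requested date, and drop keys whose latest action was a delete:

CREATE FUNCTION dbo.fn_DataAsOf (@AsOf datetime)
RETURNS TABLE
AS
RETURN
(
    SELECT x.BusinessKey, x.Col1, x.Col2
    FROM (
        SELECT h.BusinessKey, h.Col1, h.Col2, h.[action$],
               ROW_NUMBER() OVER (PARTITION BY h.BusinessKey
                                  ORDER BY h.ChangeTimestamp DESC) AS rn
        FROM dbo.HistoryTable h
        WHERE h.ChangeTimestamp <= @AsOf
    ) x
    WHERE x.rn = 1
      AND x.[action$] <> 'DELETE'  -- a key whose latest action was DELETE no longer exists
);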
I need to update the ilocationid from Table 1 on all the Table 2 records related to Table 1, but there is no direct relation from Table 1 to Table 2; I need Table 3 to make the connection from Table 1 to Table 2.
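A sketch of the three-table UPDATE, with all join-key names assumed:

UPDATE t2
SET t2.ilocationid = t1.ilocationid
FROM Table2 t2
JOIN Table3 t3 ON t3.Table2Id = t2.Table2Id   -- bridge from Table 2 to Table 3
JOIN Table1 t1 ON t1.Table1Id = t3.Table1Id;  -- bridge from Table 3 to Table 1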
I have installed SQL Server Express Edition. I have migrated a set of tables from Oracle 10g (by using Microsoft's Migration Tool Kit). While I am trying the following simple update command, the session hangs and never finishes!
Is there any way to create indexes after the insertion of a given number of records (I don't want to create an index after every inserted record, but, for example, after the insertion of 1000 records)?
I heard it should be possible with "bulk insert" or with transactions. Is that right? I need to do this with MS SQL Server 2005 (Workgroup Edition).
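A common pattern matching this idea (file path, index and table names are assumptions): drop the index, bulk-load with a batch size of 1000 so each 1000-row batch commits separately, then build the index once at the end:

DROP INDEX IX_MyTable_Col1 ON dbo.MyTable;

BULK INSERT dbo.MyTable
FROM 'C:\data\rows.dat'
WITH (BATCHSIZE = 1000, TABLOCK);  -- commits every 1000 rows

CREATE INDEX IX_MyTable_Col1 ON dbo.MyTable (Col1);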
I used a data flow task, and when trying to transfer data from an OLE DB source (~75 lakh, i.e. 7.5 million, records) to an OLE DB destination, SSIS fails in the middle with an error saying the transaction log got filled; try again after clearing it.
My question is: what is the most efficient way to transfer more than 50 lakh (5 million) records while ensuring the transfer doesn't fail in the middle?
Hello all. I currently have a project that has a GridView on the front end. The user can select multiple items from this grid, hit a button, and all of the selected records should be updated in the process. However, this result set can have a large amount of data coming back, and I'm stumped on how to pass all of the IDs to the stored procedure. I'd rather not call the SP for each record selected, as there could be 1,000 items selected, and, well, I'd rather not call the SP 1,000 times :p. I thought of generating a comma-delimited list as I loop through the grid and using dynamic SQL, but the IDs are about 6-7 digits long, and including commas, the list would take up almost all of the maximum space in a varchar. Are there any good solutions to this problem? Passing the items as an array? Generating a data table in .NET and passing that? Any help would be appreciated. -Jaime
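One hedged option on SQL Server 2005 or later: pass the IDs as a single xml parameter, which sidesteps the varchar length worry, and join it straight into the UPDATE (procedure, table and column names are hypothetical):

CREATE PROCEDURE dbo.UpdateSelected
    @Ids xml  -- e.g. '<ids><id>225249</id><id>210014</id></ids>'
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE i
    SET i.Processed = 1  -- hypothetical update
    FROM dbo.Items i
    JOIN @Ids.nodes('/ids/id') AS x(n)
        ON i.Id = x.n.value('.', 'int');
END;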
Is there a way to update a specific cell (or cells) in an Excel file? I have a workbook with charts and graphs which use as their data source a range of cells from another sheet within the same spreadsheet. Is there a way to update a specific cell from within SQL using OPENROWSET()?
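Possibly, and hedged heavily: with the ad hoc distributed queries option enabled and the ACE (or Jet) provider installed, an UPDATE through OPENROWSET against a one-cell range has been known to work; the provider string, path, sheet and range here are all assumptions:

UPDATE OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                  'Excel 12.0;Database=C:\data\book.xlsx;HDR=NO',
                  'SELECT F1 FROM [Sheet1$B2:B2]')  -- F1 is the provider's name for the first column in the range
SET F1 = 42;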
In my data warehouse fact table I have a column (revenue) that I want to populate based on the values of a number of other columns; for simplicity, say just 2 columns, 'productid' and 'affiliateid'.
I have a revenue lookup table, with those same 2 columns and the amount. So far so simple, but rather than have one row for every possible combination, I use 0 to mean default. For instance, all the affiliates have the same revenue value apart from a couple, so instead of 200 rows identical except for the affiliateid, I have one row with a '0' for the affiliateid and 4 rows with specific affiliateIDs where it differs from the default.
To update the values, I join to the revenues table twice: once for both columns matching, and once for the default. I.e.:

UPDATE facttable
SET revenue = ISNULL(rev1.revenue, ISNULL(rev2.revenue, 0))
FROM facttable FT
LEFT OUTER JOIN revenues rev1
    ON FT.AffiliateID = rev1.AffilateID AND FT.TypeID = rev1.TypeID
LEFT OUTER JOIN revenues rev2
    ON rev2.AffilateID = 0 AND FT.TypeID = rev2.TypeID

(In fact, this is over-simplified, because there are actually 3 columns, so I have to have 8 joins like this.)
This works very well and cuts down the management of revenues significantly: there are a few hundred rows instead of the more than 100,000 there would be if I put every possible combination of values in its own row.
However, now there is a requirement to increase the granularity of the revenue allocation to 5 columns, which makes 32 joins, and there could well be more columns added later.
Has anyone come across a situation like this (and found a neater solution)?
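One hedged alternative (SQL Server 2005 and later): treat 0 as a wildcard in a single OUTER APPLY and rank candidate rows by specificity; each additional column then adds one predicate and one CASE term instead of doubling the number of joins:

UPDATE FT
SET revenue = ISNULL(r.revenue, 0)
FROM facttable FT
OUTER APPLY
(
    SELECT TOP (1) rev.revenue
    FROM revenues rev
    WHERE (rev.AffilateID = FT.AffiliateID OR rev.AffilateID = 0)
      AND (rev.TypeID     = FT.TypeID     OR rev.TypeID     = 0)
    -- prefer exact matches over wildcard (0) matches
    ORDER BY CASE WHEN rev.AffilateID = 0 THEN 0 ELSE 1 END
           + CASE WHEN rev.TypeID     = 0 THEN 0 ELSE 1 END DESC
) r;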
I want to ship 500,000 aged transactions each night to an archive table and delete them from their source table in one or more logical units of work (LUW). Each row is approx 60 bytes and there is only one non clustered index on the source table presently.
I'm trying to weigh the pros and cons of 3 alternatives. One of them would basically insert the non-aged rows into tempdb, ship the aged records, truncate the table and then insert the tempdb records back into their source all in the same LUW.
For this alternative, I'd at least like to turn off logging when the records get inserted into tempdb, as I don't see any value in logging that part of the activity. Is this possible?
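Logging can't be switched off outright, but tempdb always runs in the SIMPLE recovery model and SELECT ... INTO is minimally logged, which is close; a sketch (column name and cutoff are assumptions):

DECLARE @cutoff datetime;
SET @cutoff = DATEADD(day, -90, GETDATE());  -- hypothetical aging threshold

SELECT *
INTO tempdb.dbo.KeepRows   -- minimally logged; note tempdb objects vanish on restart
FROM dbo.SourceTable
WHERE TranDate >= @cutoff; -- the non-aged rows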
Currently I'm working on an upload module where data from an Excel file is imported into the destination tables. Data in the Excel sheet comes in phases; not all of the sheet's columns arrive in the first shot. The Excel sheet's data is dumped into temporary tables, which in turn are looped over using cursors and finally updated into the actual tables.
Now, the problem I am facing is how to update columns of the actual table with the data (i.e. non-NULL values) available in the temporary table, without tampering with the data already present in the actual table.
Ideally, what is required is to update the actual table's column values with the corresponding columns of the temporary table ONLY for non-NULL column values of the temporary table.
The temporary and destination tables have 85 columns each. I don't want to write 85 update queries.
The scenario I am facing is given below with 2 columns as an example.
1. Table 1: tbl_source (temporary table) has two columns, src_Col1 & src_Col2.
2. Table 2: tbl_destination (actual table) has two columns, dest_Col1 & dest_Col2.
Scenario 1
----------

tbl_Source sample data (after the Excel import into the temporary table):

src_Col1   src_Col2
--------   --------
50         NULL

tbl_Destination sample data:

dest_Col1   dest_Col2
---------   ---------
50          NULL

Scenario 2
----------

tbl_Source sample data:

src_Col1   src_Col2
--------   --------
NULL       100

tbl_Destination sample data:
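The usual pattern for this is a single UPDATE in which COALESCE keeps the destination value whenever the staging value is NULL; a sketch for the two sample columns (the join key is an assumption), with the same SET line repeated, or generated from INFORMATION_SCHEMA.COLUMNS, for all 85:

UPDATE d
SET d.dest_Col1 = COALESCE(s.src_Col1, d.dest_Col1),
    d.dest_Col2 = COALESCE(s.src_Col2, d.dest_Col2)
FROM tbl_destination d
JOIN tbl_source s
    ON s.RowKey = d.RowKey;  -- the join key column is hypothetical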
I want to set a column to 0 if it is set to a certain number. There are several columns to check, though, so I am wondering if I can do it all in one query, or if I have to do it in separate queries.
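It can be done in one query with a CASE per column; a sketch where the column names and the magic number 99 are placeholders:

UPDATE dbo.MyTable
SET Col1 = CASE WHEN Col1 = 99 THEN 0 ELSE Col1 END,
    Col2 = CASE WHEN Col2 = 99 THEN 0 ELSE Col2 END,
    Col3 = CASE WHEN Col3 = 99 THEN 0 ELSE Col3 END
WHERE Col1 = 99 OR Col2 = 99 OR Col3 = 99;  -- touch only rows that need it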
I have a question. I know it's possible to create a dynamic query and do an EXEC @dynamicquery. Question: is there a simpler way to update specific columns in a table at run time, without doing an if/else for each column to determine which is null and which has a value? I'm running into a design dilemma on how to go about it.
FYI: All columns in the table design are set to null by default.
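Since the columns default to NULL, one common pattern is to give every parameter a NULL default and let ISNULL keep the current value; a sketch with hypothetical names (note it can't deliberately set a column back to NULL):

CREATE PROCEDURE dbo.UpdateCustomer
    @CustomerId int,
    @Name  varchar(50) = NULL,  -- pass NULL to leave the column alone
    @Phone varchar(20) = NULL
AS
BEGIN
    UPDATE dbo.Customer
    SET Name  = ISNULL(@Name, Name),
        Phone = ISNULL(@Phone, Phone)
    WHERE CustomerId = @CustomerId;
END;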
I'm looking for a quick script that someone has already written to update statistics (not to rebuild or re-organise) on specific indices in specific databases; I guess it would loop through a table comprising a list of databases and the indices.
I know Ola has one, but I'm not looking for something that complicated. If I cannot find one, I'm going to have to write one myself; I want to try and avoid re-inventing the wheel, as tomorrow I have to do this work and it covers about 7K-plus indices in about 10+ databases.
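In case it helps as a starting point, a minimal sketch that loops a control table of database/table/index names (the control table and the dbo schema are assumptions) and issues UPDATE STATISTICS via dynamic SQL:

DECLARE @db sysname, @tbl sysname, @ix sysname, @sql nvarchar(4000);

DECLARE c CURSOR FAST_FORWARD FOR
    SELECT DatabaseName, TableName, IndexName
    FROM dbo.StatsWorklist;  -- hypothetical control table

OPEN c;
FETCH NEXT FROM c INTO @db, @tbl, @ix;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'USE ' + QUOTENAME(@db)
             + N'; UPDATE STATISTICS dbo.' + QUOTENAME(@tbl)
             + N' ' + QUOTENAME(@ix) + N';';
    EXEC sys.sp_executesql @sql;
    FETCH NEXT FROM c INTO @db, @tbl, @ix;
END;

CLOSE c;
DEALLOCATE c;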
How do I insert and update a large number of records (4 lakh, i.e. 400,000) into a destination table through Business Intelligence Development Studio, using an SSIS package? How do I achieve this? I tried a left outer join and a conditional split, but the problem is that it's not able to insert and update records simultaneously. Can anyone give me a sample? Thanks & regards, Jeyakumar.M
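A common pattern, sketched with assumed names: land the file in a staging table with the data flow, then upsert with two set-based statements in an Execute SQL task, an UPDATE for the matches followed by an INSERT for the rest:

UPDATE d
SET d.Col1 = s.Col1,
    d.Col2 = s.Col2
FROM dbo.Destination d
JOIN dbo.Staging s ON s.BusinessKey = d.BusinessKey;

INSERT INTO dbo.Destination (BusinessKey, Col1, Col2)
SELECT s.BusinessKey, s.Col1, s.Col2
FROM dbo.Staging s
WHERE NOT EXISTS (SELECT 1 FROM dbo.Destination d
                  WHERE d.BusinessKey = s.BusinessKey);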
I'm having problems constructing a query. I need to get a count of emails in my database, but only the emails that appear 2 or more times. Can anyone help?
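A sketch with GROUP BY and HAVING (table and column names are assumptions):

SELECT Email, COUNT(*) AS Occurrences
FROM dbo.Contacts
GROUP BY Email
HAVING COUNT(*) >= 2;  -- only emails appearing 2 or more times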
Scenario: it's 3 pm and a user comes to me and says she's deleted an invoice with many associated items. I know the affected tables (foreign keys), and I have last night's backup of the db. However, I don't want to restore the entire db from last night, just the deleted invoice record(s). What is the best-practice procedure for accomplishing this?
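The usual approach, sketched with assumed file, logical and table names: restore last night's backup as a differently named database on the same server, then copy just the missing rows back, parents before children:

RESTORE DATABASE MyDb_Recover
FROM DISK = 'D:\backup\MyDb.bak'
WITH MOVE 'MyDb'     TO 'D:\data\MyDb_Recover.mdf',   -- logical names are assumptions
     MOVE 'MyDb_log' TO 'D:\data\MyDb_Recover.ldf';

-- Copy the deleted invoice back (SET IDENTITY_INSERT ... ON first if the key is an identity column)
INSERT INTO MyDb.dbo.Invoice (InvoiceId, CustomerId, InvoiceDate)
SELECT InvoiceId, CustomerId, InvoiceDate
FROM MyDb_Recover.dbo.Invoice
WHERE InvoiceId = 12345;  -- the deleted invoice; value is a placeholder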
I am not sure where I should post this question, since it falls under both Report Server and T-SQL, but here goes...
I currently need to run a report containing only the records the client/user wants, chosen by clicking the check box next to each record. They can pick as many or as few of the records as they want, then run a report with only the records they indicated. I am thinking this will need some kind of T-SQL statement, either a function or a temp table, but I am not sure of even that...
if anyone has any ideas please reply...
Thanks, WoFe
EXAMPLE: instead of running a report on records 1, 2, 3, 4, 5, 6, 7, 8, 9, they would run the report on records 2, 5, 6, 9.
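For what it's worth, one common Reporting Services approach is a multi-value report parameter bound to the check-box selections, which SSRS expands into an IN list in the dataset query (names are assumptions):

SELECT r.RecordId, r.Name
FROM dbo.Records r
WHERE r.RecordId IN (@SelectedIds);  -- SSRS expands the multi-value parameter here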