One Large Update Vs. Many Small
Oct 8, 2007
Hello,
The application will add items into a "bag"; that is, items in one table will reference a record in another table. This happens gradually, with delays of seconds or minutes between new items, and there will be up to a thousand items per bag. One option is to wait until a full bag accumulates and set all the references at once using
UPDATE items SET container_ref = bag WHERE id IN [...]
The disadvantage I see in the all-at-once approach is the inability to encapsulate the functionality in a stored procedure -- the problem is passing a set of IDs. The advantage should be efficiency in terms of total SQL Server load. How much would that gain be?
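One way around the set-of-IDs problem (a hedged sketch, assuming SQL Server 2005 or later; the procedure name and XML shape are illustrative) is to pass the IDs as XML and shred them inside the procedure:

CREATE PROCEDURE dbo.FillBag
    @BagID INT,
    @ItemIDs XML -- e.g. '<i id="1"/><i id="7"/><i id="42"/>'
AS
BEGIN
    UPDATE items
    SET container_ref = @BagID
    WHERE id IN (SELECT n.value('@id', 'INT')
                 FROM @ItemIDs.nodes('/i') AS t(n)); -- shred the XML into an ID set
END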
View 3 Replies
Dec 22, 2014
When should you use a table variable and when a temp table? I told the interviewer that when the row count is small -- hundreds or perhaps a thousand -- use a table variable, otherwise use a temp table. He then asked what I meant by "less data": even a thousand rows may involve many columns and add up to a huge data set.
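For reference, the two options differ in more than capacity; a minimal sketch of both declarations (the column list is illustrative):

DECLARE @t TABLE (id INT PRIMARY KEY, name VARCHAR(50)); -- table variable: no statistics, so the optimizer guesses row counts
CREATE TABLE #t (id INT PRIMARY KEY, name VARCHAR(50));  -- temp table: has statistics and supports extra indexes

The statistics difference, rather than the raw row count, is usually what makes large table variables perform badly.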
View 3 Replies
Oct 23, 2006
Hi Experts
We are debating what is best:
1. To combine all the company's data in one large database, and use schemas and file groups to create logical and physical distribution on drives and namespaces
or
2. Distribute the data into smaller databases with related data -- e.g. products and product descriptions in one db, customers in another, and orders and order lines in a third db.
Just what are the pros and cons?
regards
Jens Chr
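For concreteness, option 1's schema-and-filegroup layout might look like this minimal sketch (the database, filegroup, and file names are illustrative assumptions):

ALTER DATABASE CompanyDB ADD FILEGROUP SalesFG;
ALTER DATABASE CompanyDB ADD FILE
    (NAME = SalesData, FILENAME = 'D:\Data\SalesData.ndf')
    TO FILEGROUP SalesFG;
GO
CREATE SCHEMA Sales;
GO
-- the table lives in the Sales namespace and on the SalesFG drive
CREATE TABLE Sales.Orders (OrderID INT PRIMARY KEY) ON SalesFG;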
View 3 Replies
Mar 23, 2008
System.OverflowException: Value was either too large or too small for an Int32. Why does this error originate in the following lines?

SqlCommand cmd = new SqlCommand("SELECT Count(*) FROM Contacts", conn);
...
DataSetContacts.ContactsRow row = ds.Contacts.NewContactsRow();
...
row["ContactNumber"] = Convert.ToInt32(txtContactNo.Text);

The ContactNumber field is SqlDbType.Int.
View 3 Replies
Aug 2, 2007
Hi

This is a question of "what does it cost me". Let's say I have an integer value that would fit into a smallint field, but the field is actually defined as int, or even larger as bigint. What would that "cost" me? How would definitions larger than I need for the values in the field affect me?

It's obvious that the volume of the database would grow, but with the resources we have nowadays disk space isn't the problem it used to be, and I/O is much faster. Many people would say "who cares" -- or IS it a problem? How does it affect the performance of retrievals? Searches? Updates and inserts? How would it affect all database access if tables point at each other with foreign keys?

Thanks!
David Greenberg
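The raw storage difference is easy to demonstrate (fixed widths per type):

SELECT DATALENGTH(CAST(1 AS smallint)) AS smallint_bytes, -- 2
       DATALENGTH(CAST(1 AS int))      AS int_bytes,      -- 4
       DATALENGTH(CAST(1 AS bigint))   AS bigint_bytes;   -- 8

Wider columns also widen any index that contains them (including foreign-key indexes), so fewer rows fit on a page and scans cost more I/O; that, rather than disk space, is usually where the cost shows up.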
View 3 Replies
Jun 14, 2001
Hi there,
Generally speaking, is it better to use a large or small stripe size for a Raid 5 array (4 drives) ? I would appreciate any specifics also.
Thanks in advance.
Charlie
View 1 Replies
Jul 18, 2006
Hi,
Could you please tell me how big SQL tables are when people refer to them as small, medium, and large? Preferably in terms of disk space or rows. (Each row in my table will contain a standard-length job advert plus 20 additional columns averaging 8 characters each.)
Thanks for your help! :-)
Stu
View 3 Replies
Jul 16, 2007
Hi ,
Is there any method by which I can divide a large flat file into a certain number of smaller files, keeping the header in each of the sub-files?
Regards,
Prash
View 4 Replies
Oct 1, 2015
I have a small number of rows in a dataset, Table 1. There is a CLOB on a large dataset, Table 2. They join on a PK. I would like to retrieve this CLOB and add it to the data flow for Table 1. In short, I want to emulate the following:
Table 1: Small table without CLOB, 10 rows.
Table 2: Large table with CLOB, 10,000,000 rows
select CLOB
from table2
where pk in (select pk from table1)
I want this to return the CLOBs for the small number of rows in Table 1. The PK is indexed, obviously, so it should be a fast lookup.
Table 1 and Table 2 live on different Oracle databases. How do I perform this operation efficiently in SSIS? It seems the Lookup and Merge Join won't do this.
View 2 Replies
Jan 24, 2001
I am trying to run a program that works at another site, but with a copy of the database (SQL 6.5) and the VB front end I keep getting a timeout error.
The VB side is trying to run an UPDATE statement on a table with just over 2,000 rows. I cannot amend the VB side because it is a compiled .exe.
Is there something I might not have done on the server that could be causing this, e.g. configuration issues or larger logs?
Please help!!
View 2 Replies
Sep 6, 2007
Which is more efficient? One large view that joins >=10 tables, or a few smaller views that join only the tables needed for individual pages?
View 1 Replies
Dec 25, 2006
I want to store a small circle in a text field. Can anyone tell me how to enter it as an ASCII code?
Thanks
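A small circle isn't in the 7-bit ASCII range, so this usually means an extended code page character or Unicode; a quick sketch (which glyph you get depends on collation and font):

SELECT CHAR(176)   AS degree_sign,   -- ° in the common Latin1 code page
       NCHAR(9675) AS white_circle;  -- ○ (U+25CB); needs an nchar/nvarchar/ntext column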
View 4 Replies
Apr 21, 2008
I've got a table that I have to update in preparation for our environment move (2000 to 2005 SP2). The developers who designed the application created a table called schemas, which holds the contents of an XML file inside an ntext field named Data. I need to parse the field and do a find/replace, changing all instances of www.site.com to www7.site.com; it appears all over the file. The problem is that the DATALENGTH() of each of the fields (there are 2 rows) is above 15,000.

Normally, I'd run something like this:

update schemas set data = replace(cast(Data as varchar(max)), 'www.site.com', 'www7.site.com') where data like '%www.site.com%'

It works great on smaller columns, but it won't work on these because they're too big (the update chops anything beyond what the cast can hold). I could do it manually, but this DB will be refreshed from production weekly, and I'd like to script as many of the environment changes as possible. Any ideas?
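One possibility (a hedged sketch, assuming the statement can run on the 2005 side, where nvarchar(max) avoids the old 8,000-byte cast ceiling):

UPDATE schemas
SET Data = CAST(REPLACE(CAST(Data AS nvarchar(max)),
                        'www.site.com', 'www7.site.com') AS ntext)
WHERE Data LIKE '%www.site.com%';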
View 1 Replies
May 1, 2001
I have a table where, when certain columns are updated, a trigger needs to fire to update a next-schedule date in that same table for that record. I can write the trigger, but my question, for performance and efficiency, is which approach is better: separate triggers for the 8 columns, or one large trigger with an IF that checks whether those columns were updated.
Thanks
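The single-trigger variant might look like this minimal sketch (the table, key, and scheduling rule are illustrative assumptions):

CREATE TRIGGER trg_UpdateSchedule ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    -- UPDATE(col) is true when the column appeared in the SET list
    IF UPDATE(Col1) OR UPDATE(Col2) OR UPDATE(Col3) -- ...through Col8
    BEGIN
        UPDATE t
        SET NextScheduleDate = DATEADD(day, 7, GETDATE())
        FROM dbo.MyTable t
        JOIN inserted i ON t.ID = i.ID;
    END
END

Note that every AFTER UPDATE trigger on a table fires on every UPDATE statement regardless of which columns changed, so eight separate triggers would all run anyway; one trigger with the IF is usually easier to maintain.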
View 1 Replies
May 26, 2004
I'm updating the name data in a large user database with the following UPDATE statement. The staging table was bulk-loaded from a flat file and contains 10 million records; the production table (Recipients) contains 15 million. The statement works correctly, but it took a full ten hours to run, which is far too long. While it was running the server was clearly 100% disk-bound; CPU activity was near zero. We've just upgraded RAM from 1 GB to 2 GB, but we expect data sizes to grow significantly and we can't keep adding RAM. Absolutely nothing else runs on this server. Any ideas how I can optimize this?
UPDATE Recipients
SET [First] = Stages.[First]
, [Last] = Stages.[Last]
FROM
Stages
INNER JOIN Recipients ON
(Stages.UserName = Recipients.UserName
AND Stages.DomainID = Recipients.DomainID)
WHERE
(CASE WHEN Stages.[First] IS NULL THEN 1 ELSE 0 END
+ CASE WHEN Stages.[Last] IS NULL THEN 1 ELSE 0 END)
<=
(CASE WHEN Recipients.[First] IS NULL THEN 1 ELSE 0 END
+ CASE WHEN Recipients.[Last] IS NULL THEN 1 ELSE 0 END)
Text execution plan. I've made small annotations with the % information from the graphical execution plan:
|--Clustered Index Update(OBJECT:([Recipients].[dbo].[Recipients].[PK_Recipients]), SET:([Recipients].[First]=[Stages].[First], [Recipients].[Last]=[Stages].[Last]))
|--Top(ROWCOUNT est 0)
|--Sort(DISTINCT ORDER BY:([Bmk1000] ASC))
14% |--Merge Join(Inner Join, MANY-TO-MANY MERGE:([Stages].[DomainID], [Stages].[UserName])=([Recipients].[DomainID], [Recipients].[UserName]), RESIDUAL:(([Recipients].[UserName]=[Stages].[UserName] AND [Recipients].[DomainID]=[Stages].[Domain
25% |--Clustered Index Scan(OBJECT:([Recipients].[dbo].[Stages].[IX_Stages]), ORDERED FORWARD)
61% |--Clustered Index Scan(OBJECT:([Recipients].[dbo].[Recipients].[PK_Recipients]), ORDERED FORWARD)
Everything I've heard on the subject suggests you change the index scans to index seeks. How do I do this?
Any other tuning advice is greatly appreciated.
Here are the exact statements I used to create the tables:
CREATE TABLE Recipients (
ID INT IDENTITY (1, 1) NOT NULL,
UserName VARCHAR (50) NOT NULL,
DomainID INT NOT NULL,
First VARCHAR (24) NULL,
Last VARCHAR (24) NULL,
StreetAddress VARCHAR (32) NULL,
City VARCHAR (24) NULL,
State VARCHAR (16) NULL,
Postal VARCHAR (10) NULL,
SourceID INT NULL,
CONSTRAINT PK_Recipients PRIMARY KEY CLUSTERED (DomainID, UserName)
)
CREATE TABLE Stages (
ID INT NULL,
UserName VARCHAR(50) NOT NULL,
DomainID INT NULL,
Domain VARCHAR(50) NOT NULL,
First VARCHAR(24) NULL,
Last VARCHAR(24) NULL,
StreetAddress VARCHAR(32) NULL,
City VARCHAR(24) NULL,
State VARCHAR(24) NULL,
Postal VARCHAR(10) NULL
)
CREATE CLUSTERED INDEX IX_Stages ON Stages (DomainID, UserName)
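One technique often suggested for disk-bound updates of this size (a hedged sketch, not from the thread: walk the clustered key in ranges so each batch is a small transaction; the step size is an assumption to tune):

DECLARE @lo INT, @hi INT, @step INT
SELECT @lo = MIN(DomainID), @hi = MAX(DomainID) FROM Recipients
SET @step = 1000
WHILE @lo <= @hi
BEGIN
    UPDATE Recipients
    SET [First] = Stages.[First], [Last] = Stages.[Last]
    FROM Stages
    INNER JOIN Recipients
        ON Stages.UserName = Recipients.UserName
       AND Stages.DomainID = Recipients.DomainID
    WHERE Recipients.DomainID BETWEEN @lo AND @lo + @step - 1
      AND (CASE WHEN Stages.[First] IS NULL THEN 1 ELSE 0 END
         + CASE WHEN Stages.[Last] IS NULL THEN 1 ELSE 0 END)
       <= (CASE WHEN Recipients.[First] IS NULL THEN 1 ELSE 0 END
         + CASE WHEN Recipients.[Last] IS NULL THEN 1 ELSE 0 END)
    SET @lo = @lo + @step
END

Since both tables are clustered on (DomainID, UserName), each batch should seek into a contiguous key range instead of sorting millions of update candidates at once.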
View 11 Replies
Mar 28, 2006
We have a simple UPDATE query, joining two tables, that takes much longer than 10 hours to run, but if we break the table in six (10 million rows in each table), it takes only fifteen minutes to run each part.
Why? And how can we tell in advance whether a query will cross the threshold into a long-running query? Or, how can we prevent it?
The system is Windows XP Pro with 4GB RAM (/3GB switch), and SQL Server Standard 2005. Log files, swap files, dbf files are on separate drives. The system is dedicated to SQL Server. No other queries are running at the same time. The database is in Simple logging mode. Each table is a few GB with 60 million rows.
An example problem query is: (updating fewer than 10 bytes)
UPDATE bigtable
SET bigtable.custage = scores.custage, bigtable.custscore = scores.custscore
FROM bigtable
JOIN scores ON bigtable.custid = scores.custid
In this case, each table has 60 million rows. 'custid' is a sequential, unique integer. SCORES table is clustered on 'custid' and is 1.5GB in size. BIGTABLE has an index on 'custid', and is 6GB in size. There is a one-to-one match between the tables on 'custid', but not enforced. The SCORES table was created by exporting a few fields (but all 60 million records) from BIGTABLE, updating the values in a separate program, then importing back in SQL Server into the SCORES table.
The first time this query was run, we stopped it after it ran 16 hours. When we broke up the bigtable into 10 million record chunks (big1, big2, big3..., big6) each update only took 15 minutes, for 90 minutes total.
* How can in we tell in advance that the full chunk would take more than a few hours?
* Why is it taking SO MUCH LONGER than in smaller chunks?
* When a query is taking that long to run, is there any way to tell where in the plan it is?
* What should we do differently?
Thanks for any help; this is a real head scratcher for us.
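The manual big1..big6 split can be automated with a range loop over the key (a sketch, not from the thread; the step size mirrors the 10-million-row chunks and is an assumption to tune):

DECLARE @lo INT, @hi INT, @step INT
SELECT @lo = MIN(custid), @hi = MAX(custid) FROM bigtable
SET @step = 10000000
WHILE @lo <= @hi
BEGIN
    UPDATE bigtable
    SET custage = scores.custage, custscore = scores.custscore
    FROM bigtable
    JOIN scores ON bigtable.custid = scores.custid
    WHERE bigtable.custid BETWEEN @lo AND @lo + @step - 1
    SET @lo = @lo + @step
END

Each iteration auto-commits, so in Simple recovery the log can be reused between batches instead of growing to hold one giant transaction -- often the difference between 15 minutes per chunk and an unbounded single run.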
View 9 Replies
Feb 8, 2004
I have to modify the structure of a table that already holds a lot of data. The log is filling up with uncommitted transactions: a lot of data is being updated in large bulk operations, not all of the transactions are committed, and the update task cannot complete.
However, there is no spare disk space left for the transaction to commit. Can anyone help?
View 2 Replies
Feb 5, 2015
Currently our database size is around 350 GB. It will grow to 1.5 TB.
We have the following options set at the database level:
Auto Create Statistics: True
Auto Update Statistics: True
Auto Update Statistics Asynchronously: False
We have a weekly UPDATE STATISTICS job that runs for a very long time. It was created through a maintenance plan using the full-scan option. Sampling was tested previously, but running with sampling instead of a full scan hurt the queries.
Is there an option to avoid the long job duration? If we don't update statistics manually, what will happen? How do you maintain statistics on large databases?
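One way to bound the job is to target only the statistics that are actually stale, and sample where a full scan is unaffordable (a hedged sketch; the table name is illustrative, and per the post sampling must be tested against your queries):

-- see when each statistic on a table was last updated
SELECT name, STATS_DATE(object_id, stats_id) AS last_updated
FROM sys.stats
WHERE object_id = OBJECT_ID('dbo.BigTable');

-- refresh a table's statistics with a fixed sampling rate
UPDATE STATISTICS dbo.BigTable WITH SAMPLE 25 PERCENT;

If the manual job never runs, auto-update statistics still refreshes a statistic after roughly 20% of the rows change -- on a huge table that threshold arrives rarely, which is exactly why large databases usually keep a manual job.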
View 9 Replies
Feb 1, 2007
I'm a newbie to replication and recently set up the following.
The Publisher and Distributor are on the same SQL 2005 server, and I have 7 subscribers (SQL 2000 servers) using push subscriptions. I'm replicating 5 SQL tables that don't change much, scheduled to run every 3 hours. In a few days, a large one-off SQL update will add an additional 10,000 rows to one of the replicated tables. I was wondering what impact this would have on the above setup, i.e. whether there are any limitations here. I'm assuming not, but thought I would check. I'm thinking it will just cause additional overhead on the server, but the update is being applied when no users will be using the database.
Any feedback greatly appreciated.
Thanks
View 1 Replies
Jul 20, 2005
Hi all,

I am doing some large updates that may update 10,000-plus rows. This works fine when I execute the SQL directly in Query Analyzer. If I set the timeout on my VB connection to 0 (zero), the connection should not time out, should it? But it does. If I set the timeout to a high value, say 1200, I get the same problem well within 1200 seconds. Also, the log fills up, even though it is set to auto-grow.

Any ideas would be appreciated.
Thanks
Greg
View 1 Replies
Dec 10, 2014
I need to update a large table, about 55 million rows, without filling the transaction log, in the shortest time as possible. The goal is to alter the table and change the data type for Text column from VARCHAR(7900) to NVARCHAR(MAX).
Since I cannot do it with a single ALTER TABLE statement (it would fill up the transaction log), I'm thinking to:
- rename column Text to Text_OLD
- add Text column of type NVARCHAR(MAX)
- copy values in batches from Text_OLD to Text
The table is defined like:
create table DATATEXT(
rID INTEGER NOT NULL,
sID INTEGER NOT NULL,
pID INTEGER NOT NULL,
cID INTEGER NOT NULL,
err TINYINT NOT NULL,
[Code] ....
I've thought about a stored procedure doing this, but how do I copy the values in batches from Text_OLD to Text?
The code I would start with (doing just this part) is the following, but maybe there are more efficient ways to do it, or at least a better way to select @startSeq in the WHILE loop (avoiding selecting a bunch of 100,000 sequences and later taking the max).
declare @startSeq timestamp
declare @lastSeq timestamp
select @lastSeq = MAX(sequence) from [DATATEXT] where [Text] is null
select @startSeq = MIN(Sequence) FROM [DATATEXT] where [Text] is null
BEGIN TRANSACTION T1
WHILE @startSeq < @lastSeq
[Code] ....
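A simpler batch shape that avoids tracking @startSeq at all (a hedged sketch, assuming SQL Server 2008 or later for the DECLARE initializer; the batch size is an assumption to tune, and each UPDATE auto-commits, so the log stays bounded in SIMPLE recovery):

DECLARE @batch INT = 100000;
WHILE 1 = 1
BEGIN
    UPDATE TOP (@batch) [DATATEXT]
    SET [Text] = Text_OLD
    WHERE [Text] IS NULL AND Text_OLD IS NOT NULL;
    IF @@ROWCOUNT = 0 BREAK; -- nothing left to copy
END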
View 1 Replies
Mar 28, 2015
Our system runs a SQL Server 2012 database with a table (table_a) of over 10M records. Every day we receive a data file from the previous system containing approximately 3M updated or new records for table_a. My job is to update table_a with the new data.
The initial solution is:
1. Create a table (table_b) whose structure is the same as table_a's.
2. Use BCP to import the updated records into table_b.
3. Remove the outdated data from table_a:
delete table_a from table_a inner join table_b on table_a.key_fields = table_b.key_fields
4. Append the updated and new data to table_a:
insert into table_a select * from table_b
In testing, this solution is very inefficient; step 3 alone takes several hours. How can I improve it?
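Two changes usually worth trying (a hedged sketch; key and column names are illustrative): index table_b on the join key before step 3, and collapse steps 3 and 4 into one MERGE so each row is touched once instead of being deleted and re-inserted:

CREATE CLUSTERED INDEX IX_table_b_key ON table_b (key_field);

MERGE table_a AS tgt
USING table_b AS src
    ON tgt.key_field = src.key_field
WHEN MATCHED THEN
    UPDATE SET tgt.col1 = src.col1, tgt.col2 = src.col2
WHEN NOT MATCHED THEN
    INSERT (key_field, col1, col2)
    VALUES (src.key_field, src.col1, src.col2);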
View 9 Replies
Jun 1, 2006
Hi ,
How do I insert and update a large number of records (4 lakh, i.e. 400,000) into a destination table through Business Intelligence Development Studio using an SSIS package? I tried a left outer join and a Conditional Split, but the problem is that it cannot insert and update records simultaneously. Can anyone give me a sample?
Thanks & Regards
Jeyakumar.M
View 3 Replies
Nov 14, 2007
I have a web form with a text field that needs to accept as much as the user decides to type and insert it into an nvarchar(max) field in the database behind it. I've tried using the new .WRITE() method in my UPDATE statement, but it cuts off the text after a while. Is there a way to insert/update this in SQL 2005 without resorting to BULK INSERT? That bloats the transaction log, and turning logging off requires a call to sp_dboption (or a straight-up ALTER DATABASE), which I'd like to avoid if I can.
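For reference, the append form of .WRITE looks like this minimal sketch (table, column, and parameter names are assumptions; a NULL offset appends the chunk to the end of the existing value):

UPDATE dbo.Docs
SET Body.WRITE(@chunk, NULL, 0)
WHERE ID = @id;

If text is being cut off, it is also worth checking that the parameter feeding the update is declared nvarchar(max) rather than a fixed nvarchar(4000), which silently truncates.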
View 6 Replies
Aug 6, 2005
How do I retrieve the last identity value that was generated in a database (SQL Server)?
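For reference, T-SQL exposes this at several scopes (standard built-ins; the table name is illustrative):

SELECT SCOPE_IDENTITY();             -- last identity generated in the current scope
SELECT @@IDENTITY;                   -- last identity in the session, including triggers
SELECT IDENT_CURRENT('dbo.MyTable'); -- last identity for a specific table, any session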
View 1 Replies
May 29, 2008
At the command prompt, how do I change from the C: drive to the D: drive?
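For reference (standard cmd.exe behavior): typing the drive letter alone switches drives, and cd /d changes drive and directory in one step (the path is illustrative):

D:
cd /d D:\some\folder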
View 2 Replies
Mar 10, 2007
My table has two columns: post (varchar) and views (decimal). The table holds a number of rows.
Sample data (left) and the result I want (right):
post   views        post   views
std    400          std    400
abc    100          abc    100
dbn    10           sdfe   75
sdfe   75           dbn    10
...    ...          ...    ...
I need one query that returns the top 10 posts, ranked by the views column.
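That maps directly onto TOP with ORDER BY (the table name is an assumption):

SELECT TOP 10 post, views
FROM dbo.Posts
ORDER BY views DESC;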
View 2 Replies
Sep 12, 2007
Hi,
I need to display only the rows starting with 'ACCT-AMOUNT'. The problem is that some records contain the lower-case form, like 'acct amount', but I want to display only the upper-case rows starting with 'ACCT-AMOUNT'.
I have used a LIKE clause, but it returns all the rows, including the lower-case ones.
Please give me a clue about this issue.
Thanks, BPG
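A plain LIKE follows the column's collation, which is usually case-insensitive; forcing a case-sensitive collation on the comparison filters as intended (a sketch; the table and column names are assumptions, and the collation shown is the common Latin default):

SELECT *
FROM dbo.MyTable
WHERE MyColumn LIKE 'ACCT-AMOUNT%' COLLATE Latin1_General_CS_AS;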
View 3 Replies
Sep 26, 2007
Hello

Probably there is a simple solution for this; hopefully someone can point me in the right direction. I have a table with a person's first name, last name, birthdate, and address. However, I want to select only one person per address, namely the eldest of all the persons living at the same address. Can anyone provide a solution?

Thanks in advance.
Duncan
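One standard shape for this (a sketch; table and column names are assumed from the description, and note that two people sharing an address and the same earliest birthdate would both be returned):

SELECT p.firstname, p.lastname, p.birthdate, p.address
FROM persons p
JOIN (SELECT address, MIN(birthdate) AS eldest_birthdate
      FROM persons
      GROUP BY address) e
  ON p.address = e.address
 AND p.birthdate = e.eldest_birthdate;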
View 2 Replies
Jul 20, 2005
Hello

I have a case where Partners are a kind of super-user and are stored in a SQL Server database. Best is, in my opinion, to put both in the same table:

table Customers:
CustomerID [primary key]
[blabla]
PartnerID

But of course I have to reference the PartnerID from another table, and I want SQL Server to maintain the integrity rules. I could split Customers and Partners into different tables, but I don't think that would be wise. Or I could just reference the CustomerID from the other table and -know- that we are talking about a partner, but then it would be possible to reference a customer that is not a partner, and I want to avoid that.

Any ideas?
Freek Versteijn
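One pattern that keeps a single Customers table yet lets the engine enforce "partners only" is a slim subtype table whose primary key is also a foreign key (a hedged sketch; the referencing table is hypothetical):

CREATE TABLE Customers (
    CustomerID INT PRIMARY KEY
    -- other customer columns
);

CREATE TABLE Partners (
    PartnerID INT PRIMARY KEY
        REFERENCES Customers (CustomerID) -- every partner is a customer
);

CREATE TABLE Deals ( -- hypothetical table that must reference partners only
    DealID INT PRIMARY KEY,
    PartnerID INT NOT NULL
        REFERENCES Partners (PartnerID)   -- cannot point at a non-partner
);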
View 4 Replies
Nov 27, 2007
Consider a recordset consisting of
Vendor, Invoice, InvAmt, Item, ItemAmt
1, 'A12345', 100.00, 'Item1', 25.00
1, 'A12345', 100.00, 'Item2', 50.00
1, 'Z22222', 200.00, 'Item1', 100.00
1, 'Z22222', 200.00, 'Item3', 50.00
2, 'A12345', 300.00, 'Item4', 250.00
I have a report that groups by vendor and then by invoice within a vendor.
I want to create a report totals that contains
Number of vendors=2
Number of invoices=3
Total Inv Amounts=600.00
Total Item Amounts=475.00
How?
Number of vendors is =count(distinct vendors) ??
Can't do that for the number of invoices, because different vendors may use duplicate invoice numbers.
How do I get the total InvAmt? Each occurrence of the InvAmt field is in a list2 as =First(InvAmt).
I really need the sum of each First(InvAmt).
Thanx Up Front.
Jerry C
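If the totals can be computed in the dataset query rather than in report expressions, one T-SQL shape is (a hedged sketch; the source table name is an assumption):

SELECT
    (SELECT COUNT(DISTINCT Vendor) FROM InvoiceItems) AS NumVendors,
    (SELECT COUNT(*) FROM (SELECT DISTINCT Vendor, Invoice
                           FROM InvoiceItems) d)      AS NumInvoices,
    (SELECT SUM(InvAmt) FROM (SELECT DISTINCT Vendor, Invoice, InvAmt
                              FROM InvoiceItems) d)   AS TotalInvAmounts,
    (SELECT SUM(ItemAmt) FROM InvoiceItems)           AS TotalItemAmounts;

Counting distinct (Vendor, Invoice) pairs keeps duplicate invoice numbers across vendors separate while summing each invoice's InvAmt exactly once; against the sample data this yields 2, 3, 600.00, and 475.00.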
View 9 Replies
Aug 17, 2007
I need to concatenate these two database fields:
PATNT_REFNO_NHS_IDENTIFIER, defined as varchar
PATNT_REFNO, defined as numeric
The output of these two columns looks like:
PATNT_REFNO_NHS_IDENTIFIER = NPA0123
PATNT_REFNO = 0125487
So I need to get a result like:
NPA01230125487
Any ideas?
regards
Niranga
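A minimal sketch (assuming the leading zero in PATNT_REFNO must be preserved, so the numeric value is padded back to seven digits; the table name and width are assumptions):

SELECT PATNT_REFNO_NHS_IDENTIFIER
       + RIGHT('0000000' + CAST(PATNT_REFNO AS varchar(7)), 7) AS combined_ref
FROM patients;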
View 1 Replies
Jun 18, 2007
I currently use JET 4.0 as the database in a small app I distribute. SQL Server Express is far too big to distribute, so I am looking to move to a better database.
Is there a version of SQL Server that gives simple CRUD support but is very small to distribute? I am also considering Firebird, as it's a full database and only 3 MB on the client as an embedded tool.
Ian
View 1 Replies