Performance Hit When Tables Are Segregated Into Separate DBs?
Jan 5, 2001
We have some tables that we have spread across two databases. The segregation isn’t essential, but the entities involved were disparate enough that we thought it made sense. However, our client app regularly & frequently requires information that can only be answered by queries to tables in both databases. It has been suggested that segregating the tables as we have introduces a performance hit. At this stage, it would be relatively easy to re-combine the tables into one DB.
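To make the setup concrete: assuming both databases live on the same SQL Server instance, the cross-database queries in question are ordinary joins using three-part names; the database, table and column names below are made up.

SELECT o.OrderID, o.OrderDate, c.CustomerName
FROM SalesDB.dbo.Orders AS o
INNER JOIN ReferenceDB.dbo.Customers AS c
    ON c.CustomerID = o.CustomerID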
I am trying to add 2 separate columns from separate tables, i.e. column1 should be added to column2 when a row is inserted, and I want to use a trigger, but I don't know the syntax to use...
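A minimal sketch of that kind of trigger, assuming a source table TableA with column1, a target table TableB with column2, and a shared key column named id; all of these names are hypothetical:

CREATE TRIGGER trg_AddColumn1 ON dbo.TableA
AFTER INSERT
AS
BEGIN
    -- Add the newly inserted column1 values onto column2 of the matching rows in TableB
    UPDATE b
    SET b.column2 = b.column2 + i.column1
    FROM dbo.TableB AS b
    INNER JOIN inserted AS i ON i.id = b.id;
END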
I am currently running a windows 2000 machine with asp, sql server, mail server, ftp server etc all on the same box. The site runs several hundred ecommerce stores. Recently the processor utilization has been spiking and I have decided to get another server and use sql server on one and asp on the other. So now I have a new windows 2003 server that I have setup all of the asp code on. Problem is that when I run the asp code from the new windows 2003 server it is extremely slow compared to the code running on the old windows 2000 server, which is where the sql server database is also located. From everything I have read the best way to optimize your site is to use 2 separate servers, one for iis/asp and one for sql server. Am I doing something wrong here or is this normal?? Could this possibly be just because the old server is still serving many requests and is pushing the requests from the new server to the back of the line? Does anyone have any ideas? The syntax I am using to open the connection string is:

db_ConnectionString = "Driver=" & db_Driver & ";Server=" & db_Server & ";UID=" & db_UIN & ";PWD=" & db_pwd & ";Database=" & db_Database & ";"
conn_store.Open db_ConnectionString

where db_server is the ip address of the windows 2000 server. Is there a better way to do it across a network?? Any help or ideas would be much appreciated.
I am new to T-SQL and triggers. Any help will be appreciated.
I am trying to change this code to insert firstname, surname (taken from the employee table on DB A) into firstname, surname on the customer table of DB B, but also to create cust_id on the customer table of DB B. Currently all rows of customer.cust_id get filled with the same data whenever new data is inserted into (firstname, surname) of the employee table.
Create trigger gen_cust_id ON employee for insert
AS
Update customer
SET cust_id = (
    SELECT Replicate('0', (4 - DATALENGTH(CONVERT(varchar(10), i.id))))
         + Convert(varchar(10), i.id)
         + Substring(i.lastname, 1, 3)
         + Substring(i.firstname, 1, 1)
    from employee C INNER JOIN inserted i on i.id = c.id)
from employee C INNER JOIN inserted i on i.id = c.id
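The UPDATE above has no condition tying each customer row to the inserted employee, so every row receives the same cust_id. A sketch of a corrected version, assuming the intent is to create one new customer row per inserted employee and that the column names below match the real tables, could look like this:

CREATE TRIGGER gen_cust_id ON employee FOR INSERT
AS
-- Insert one customer row per newly inserted employee, building cust_id
-- from the zero-padded employee id plus fragments of the name.
-- If customer lives in another database on the same server, a three-part
-- name such as DB_B.dbo.customer would be used instead.
INSERT INTO customer (cust_id, firstname, surname)
SELECT REPLICATE('0', 4 - DATALENGTH(CONVERT(varchar(10), i.id)))
       + CONVERT(varchar(10), i.id)
       + SUBSTRING(i.lastname, 1, 3)
       + SUBSTRING(i.firstname, 1, 1),
       i.firstname,
       i.lastname
FROM inserted AS i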
How would I write a single SQL statement that counts how many bookIDs are listed for each customerID and how many magazineIDs are listed for each customerID, and have it return one table that looks like this:
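A common shape for that kind of query, assuming the bookIDs and magazineIDs live in two separate link tables keyed on customerID (all table names here are made up), is a pair of correlated counts:

SELECT c.customerID,
       (SELECT COUNT(*) FROM dbo.CustomerBooks b WHERE b.customerID = c.customerID) AS BookCount,
       (SELECT COUNT(*) FROM dbo.CustomerMagazines m WHERE m.customerID = c.customerID) AS MagazineCount
FROM dbo.Customers AS c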
Hello, I am working with a database that, among other things, uses multipart keys as the unique indexes, and these are not consistent from one table where a parent record resides to another table which contains the related child records. For example, I am working with two tables right now: one that contains content, which I'll call Contents, and another which contains usage information about the contents (number of views, a rating and comments given by a customer), which I'll call ContentsUsage.

The system that manages the data for the tables has a versioning system by which, when a content item is added (first time), a "unique" id (guid) and a version number of 1 are created along with the rest of the data items in the Contents table, and likewise in the ContentsUsage table (essentially a one-to-one mapping) on the like-named fields in that table. Now, each time a given record in the Contents table is updated, a new version with the same guid is created in the Contents and ContentsUsage tables. So on one side I have:

ContentGUID > AAAA, Version > 1
ContentGUID > AAAA, Version > 2

And in the other table (ContentsUsage):

ContentGUID > AAAA, Version > 1
ContentGUID > AAAA, Version > 2

While both of these tables have a quasi-unique record id (row_id) of type char, stored as a guid, these values are obviously not the same in the two tables, and having reviewed the database columns for these tables I find that the official unique keys for the tables are different: table 1, Contents, combines ContentGUID and Version as the composite/multi-key index, while the ContentsUsage table uses the RowGUID as its unique index.

Contents: RowGUID (unique key), ContentGUID, Version, Views, Rating, Comments, ...
ContentsUsage: RowGUID, ContentGUID (unique key), Version (unique key), Description, ...

Bearing this in mind, I am of course unable to link the two tables directly by using just the ContentGUID, and have to add the Version to obtain, I believe, the actual "unique" record in question. The question is, in terms of writing queries, what would the most efficient query be? What would be the best way to join the two in a query? And are there any pitfalls with the current design that you can see in the way this database (or specifically these tables) is defined? It's something I inherited, so fire away at will on the critique. Having my druthers, I would have designed these tables using a unique key of type int that was autogenerated by the database. Any advice, thoughts or comments would be helpful. Thanks, P.
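Given that composite key, the join itself simply puts both columns in the ON clause; the column list below is illustrative only:

SELECT c.ContentGUID, c.Version, c.Views, c.Rating, u.Description
FROM dbo.Contents AS c
INNER JOIN dbo.ContentsUsage AS u
    ON u.ContentGUID = c.ContentGUID
   AND u.Version     = c.Version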
I've read that if particular tables are frequently queried together through a join then these tables should be placed on different devices on different physical disks. What does this mean exactly and how would you configure this? Is this a common practice in high-performance real-world environments (or should it be)?
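In practice this is done with filegroups whose files live on different physical disks, with each table placed on its own filegroup. A minimal sketch, in which the database name, file path and table are all hypothetical:

ALTER DATABASE Sales ADD FILEGROUP FG_Orders;
ALTER DATABASE Sales ADD FILE
    (NAME = N'Orders_Data', FILENAME = N'E:\SQLData\Orders_Data.ndf', SIZE = 500MB)
    TO FILEGROUP FG_Orders;

-- A table (or index) that is frequently joined to tables on the default
-- filegroup can then be created on the new filegroup, i.e. on the other disk
CREATE TABLE dbo.Orders (OrderID INT NOT NULL PRIMARY KEY, CustomerID INT NOT NULL) ON FG_Orders;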
I would like to split the records of one table into two or three separate tables. So far I only know how to use DTS, importing and exporting again and again, which is troublesome.
Could you give me some suggestions? For example, a cursor that writes into the new tables. I have tried SQL Server Books Online, but it did not help me solve this problem of splitting one table into two or three tables. Can you write a detailed example for me? Thanks a lot.
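A cursor usually isn't needed for this; a set-based INSERT ... SELECT per target table does the split. All table and column names below are made up:

-- Copy one subset of columns (or rows) into each new table
INSERT INTO dbo.CustomerNames (CustomerID, FirstName, LastName)
SELECT CustomerID, FirstName, LastName
FROM dbo.BigTable;

INSERT INTO dbo.CustomerAddresses (CustomerID, Street, City, PostalCode)
SELECT CustomerID, Street, City, PostalCode
FROM dbo.BigTable;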
Hi, we are building an online system for people to place ads selling various used items like cars, electronics, houses, books etc. If someone is selling a car, he can fill out headline, year, make, model, mileage, transmission, condition, color, price, description, contact etc. Similarly, if someone is selling a digital camera he will fill out headline, memory, zoom, megapixel, maker, model, color, battery, description etc.

Option 1: I can have a main table to hold the common attributes of all different types of ads (headline, images, contact, price, color, condition, description) + 1 table to store the string values of all ads (car: maker, model; house: square feet; camera: memory, megapixel; etc.) + 1 table to store the droplist select values (car: transmission, door, seat etc.; house: year_built). Pros: a single table for all ads, unique IDs for all ads, easy to extend as new attributes can be added easily. Cons: a lot of physical reads of the 2nd and 3rd tables from the join; 10 times the physical reads compared to option 2 when reading 5000 records.

Option 2: have a different set of tables for each ad type. Car will have its own main table + 1 table to store multiselect list box values. Similarly housing will have its own set of tables. Pros: 10% fewer physical reads than option 1. Cons: hard to add new attributes; we have to modify the main table by adding one column; queries will go to different tables based on the category.

Do you have any suggestions on which way to go? Thanks
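For reference, a minimal sketch of what option 1 might look like; every table and column name here is hypothetical:

-- One common Ads table plus a name/value attribute table for category-specific fields
CREATE TABLE dbo.Ads (
    AdID        INT IDENTITY PRIMARY KEY,
    Category    VARCHAR(50)   NOT NULL,   -- 'Car', 'Camera', 'House', ...
    Headline    VARCHAR(200)  NOT NULL,
    Price       MONEY         NULL,
    Color       VARCHAR(50)   NULL,
    Condition   VARCHAR(50)   NULL,
    Description VARCHAR(4000) NULL
);

CREATE TABLE dbo.AdAttributes (
    AdID           INT NOT NULL REFERENCES dbo.Ads(AdID),
    AttributeName  VARCHAR(100) NOT NULL,   -- 'Make', 'Model', 'Megapixel', ...
    AttributeValue VARCHAR(400) NULL,
    PRIMARY KEY (AdID, AttributeName)
);

-- Reading one ad pulls its category-specific attributes back out with a join
SELECT a.Headline, a.Price, attr.AttributeName, attr.AttributeValue
FROM dbo.Ads AS a
INNER JOIN dbo.AdAttributes AS attr ON attr.AdID = a.AdID
WHERE a.AdID = 12345;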
We have a large data warehouse and its size is 50 TB. The tables are placed in filegroups based on the schema: fact, dimension and raw data tables each sit on separate filegroups. I am wondering whether it makes sense to separate the large facts, which have billions of rows, so that they reside on filegroups of their own.
I am trying to tie together tables that show quantities of a product committed to an order and quantities on hand by a location.
My end result should look like the below example.
Item    Location    QtyOnHandByLocation    SumQtyCommitTotal
Prod1   NJ          10                     10
Prod1   NY          10                     0
Prod1   FL          0                      0
Prod1   PA          0                      0
So I can see I have 10 items in NJ On Hand and Committed to an order. 10 available in NY but not on an order. Then the other two locations have no quantities.
Below is the CTE but it produces inaccurate results. I've tried running it several different ways by playing with the grouping but have no luck thus far.
--create the temp tables
Create table #SalesLine (
    No           varchar(50) not null,
    LocationCode varchar(50) not null,
    QtyCommit    int not null
)

create table #ItemLedgerEntry
[code]....
I am close to the desired results but can't find a way.
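One way to get that shape is to aggregate each temp table separately and LEFT JOIN the committed side onto the on-hand side. The #ItemLedgerEntry column names below (No, LocationCode, Quantity) are guesses, since its full DDL isn't shown:

SELECT ile.No AS Item,
       ile.LocationCode AS Location,
       SUM(ile.Quantity) AS QtyOnHandByLocation,
       ISNULL(sl.SumQtyCommitTotal, 0) AS SumQtyCommitTotal
FROM #ItemLedgerEntry AS ile
LEFT JOIN (SELECT No, LocationCode, SUM(QtyCommit) AS SumQtyCommitTotal
           FROM #SalesLine
           GROUP BY No, LocationCode) AS sl
    ON sl.No = ile.No AND sl.LocationCode = ile.LocationCode
GROUP BY ile.No, ile.LocationCode, sl.SumQtyCommitTotal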
I have a report that's created each day as a flat textfile. Because I came from the Access world, I created a macro that imports it with a schema that gives meaningful names to the various columns, and then uses a query to massage some of the data for me (deletes the first blank row and does a couple of calculations). Then I use DTS to import the Access query as a table.

The textfile has a column called "File_num" and, among several others, a column called "Serial_num". (The file numbers represent shipments, and sometimes there is more than one serial number in a shipment, so there is a separate line for every serial number.) Naturally, I would like to split this info into two tables: one that does not contain the serial numbers and has a primary key on the "File_num" column, and another table that would contain just the "File_num" and "Serial_num" columns. That way I could relate them later... but most importantly, it will give me a table where I can use "File_num" as my primary key.

What would be the best way to import these two tables from one source textfile? The other thing that gives me problems is that the textfile has no column names, and the first row is always blank. I'm very new to SQL and DTS and would appreciate any direction.

Thanks,
Larry

"Forget it, Jake. It's Chinatown."
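However the file gets loaded, once the rows sit in a single staging table the split itself can be done server-side with two set-based inserts; the staging and target table names below are made up:

-- Header table: one row per shipment/file number
INSERT INTO dbo.Shipments (File_num, OtherCol1, OtherCol2)
SELECT DISTINCT File_num, OtherCol1, OtherCol2
FROM dbo.StagingImport
WHERE File_num IS NOT NULL;

-- Detail table: one row per serial number within a shipment
INSERT INTO dbo.ShipmentSerials (File_num, Serial_num)
SELECT File_num, Serial_num
FROM dbo.StagingImport
WHERE File_num IS NOT NULL;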
In database DB1, I have table DB1.dbo.Suppliers1. This table has an ID column of type INT named SUPPL1_ID
In database DB2, I have table DB2.dbo.Suppliers2. This table has an ID column of type INT named SUPPL2_ID I would like to update DB2.dbo.Suppliers2 based on values from DB1.dbo.Suppliers1 joining on SUPPL1_ID = SUPPL2_ID.
How can I do this in SSIS?
Assumptions:
linked servers are not an option, as I want the SSIS package to be portable and not dependent on server environments. TIA.
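For what it's worth, the logic the package has to implement is the same as the plain T-SQL below, which would only work if both databases were reachable from one connection; in SSIS the usual equivalents are a Lookup or Merge Join feeding an OLE DB Command or a staging-table update. Column names other than the two IDs are invented here:

UPDATE s2
SET s2.SupplierName = s1.SupplierName
FROM DB2.dbo.Suppliers2 AS s2
INNER JOIN DB1.dbo.Suppliers1 AS s1
    ON s1.SUPPL1_ID = s2.SUPPL2_ID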
The request is to merge, join, CASE, union, or... up to four unique columns, each in a separate table, into a new combined table (a matrix) of the results from those columns.
Currently we have one customer database containing various tables. As part of requirements for a new client, we need to manage their data in a totally separate database. The tables and structure are exactly the same but we would be loading data into a separate database.
I am looking for a way to combine tables with the same name in each database when I run queries, rather than having to query each database separately. Currently we actually have many queries set up in MS Access which use an ODBC link to query the data off SQL Server. I am aware it is possible to apply a UNION SELECT in Access from 2 separate ODBC connections, but this is extremely slow. So my initial question is: is there a way to provide access to the tables from both databases over the same ODBC link? If this cannot be done over ODBC I guess we can consider more "modern" methods, but ideally we want to keep this in MS Access as that is where our existing queries are based. I was hoping that some kind of view could be treated as an ODBC connection. As mentioned, ideally we want to keep the reporting queries in MS Access.
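If both databases are on the same SQL Server instance, one option is a view in one of them that UNION ALLs the two like-named tables; Access then sees the view through a single ODBC link like any other table. All names below are illustrative:

CREATE VIEW dbo.vw_Orders_AllClients
AS
SELECT 'ClientA' AS SourceDB, o.* FROM CustomerDB_A.dbo.Orders AS o
UNION ALL
SELECT 'ClientB' AS SourceDB, o.* FROM CustomerDB_B.dbo.Orders AS o;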
SET @RowCnt = 1
SET @date = CONVERT(CHAR(10), GETDATE(), 110)
SET @ArchPath = '\D$EDATAWorkFoldersSendSendData'
SELECT @TotalRows = COUNT(*) FROM table1
--select @ArchPath

WHILE (@RowCnt <= @TotalRows)
BEGIN
    -- assuming table1 carries an output_filename column alongside AccountNumber
    SELECT @AccountNumber = AccountNumber, @output_filename = output_filename
    FROM table1
    WHERE Identity_Number = @RowCnt
    --PRINT @AccountNumber --test

    SELECT @sql = N'bcp "SELECT h.HeaderText, d.RECORD FROM table2 d INNER JOIN table3 h ON d.HeaderID = h.HeaderID WHERE d.AccountNumber = ''' + @AccountNumber + '''" queryout "' + @ArchPath + @output_filename + '.txt" -T -c'
    --PRINT @sql
    EXEC master..xp_cmdshell @sql

    SELECT @RowCnt = @RowCnt + 1
END
I am planning an application where ~1000 companies will be accessing data. Should I use a key to identify the company and place all data in one table (i.e. WHERE company = 123), or should the application create company-specific tables? In other words, should I have 1000 small tables with 100 records in each, or one table with 100,000 records?
I have been researching some performance problems in a very large application and I have a couple of questions about temp tables. (SQL 7.0 SP2)
I have one large procedure that I have been using as a test case. Originally this procedure was a cursor with lots of processing steps involving writing to, reading from and deleting in temp tables inside the cursor. I remember reading that temp tables inside a cursor were a potential performance problem, so I rewrote the procedure, replacing the cursor with a While Loop.
Doing this showed no increase in performance. Since Profiler was showing .5 second duration times on statements in the procedure accessing the temp tables I tested some more. I moved all the create statements to the top of the procedure, as I know these statements after processing steps can cause recompiles to happen. Still no performance increase.
Finally I replaced all the temp tables with actual tables, just to see what would happen. With no other changes the performance increased by more than 500%.
Can someone give me some clues as to what is happening here, because if this is a symptom of something I don't understand, the potential performance problems from other places where temp tables are similarly used in the application are enormous.
I have a table, let's call it TABLE_A, that has +- 100 million rows; obviously inserts into this table take some time as it has 1 clustered and 3 non-clustered indexes.
I have another table, let's call it TABLE_B. It is identical to TABLE_A and it holds 100,000 rows that must be inserted into TABLE_A.
As you can imagine, an INSERT INTO TABLE_A SELECT * FROM TABLE_B takes a lot of time.
What is the best way to speed this up? (Dropping indexes is not an option.)
I know BULK INSERT gives the best performance, but can you bulk insert between tables? BULK INSERT reads from a flat file source.
It seems redundant to write an SSIS package to extract the data out of TABLE_B to a file simply to bulk insert it back into the database.
So in a nutshell what is the fastest way to get the rows from TABLE_B in TABLE_A?
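One approach worth trying, sketched below on the assumption of SQL Server 2005 or later and a unique key column PK present in both tables, is to keep the single INSERT ... SELECT but run it in batches so each transaction and its log impact stay small; ordering the source by TABLE_A's clustered key also tends to help:

DECLARE @BatchSize INT, @Rows INT
SET @BatchSize = 10000
SET @Rows = 1

WHILE @Rows > 0
BEGIN
    INSERT INTO TABLE_A
    SELECT TOP (@BatchSize) b.*
    FROM TABLE_B AS b
    WHERE NOT EXISTS (SELECT 1 FROM TABLE_A a WHERE a.PK = b.PK)
    ORDER BY b.PK          -- ideally matches TABLE_A's clustered key

    SET @Rows = @@ROWCOUNT
END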
Hi everyone, I need a solution for this query. It works fine for 2 tables, but when there are thousands of records in each table and the query involves more than 2 tables, the process never ends. Here is the query:

SELECT siqPid = 1007,
       t1.Gmt909Time AS GmtTime,
       (t1.engValue + t2.engValue + t3.engValue + t4.engValue) AS EngValue,
       t1.Loc1Time AS locTime,
       t1.msgId
INTO #temp5
FROM #temp1 AS t1, #temp2 AS t2, #temp3 AS t3, #temp4 AS t4
WHERE t1.Loc1Time = t2.Loc1Time
  AND t2.Loc1Time = t3.Loc1Time
  AND t3.Loc1Time = t4.Loc1Time

I was trying to do something with this query, but the engValues can't be summed up that way, and if I add that in, the query doesn't compile:

SELECT siqPid = 1007,
       t1.Gmt909Time AS GmtTime,
       t1.Loc1Time AS locTime,
       t1.msgId,
       (t1.engValue + t2.engValue + t3.engValue + t4.engValue) AS engValue
--INTO #temp5
FROM #temp1 AS t1
WHERE EXISTS (SELECT 1 FROM #temp2 AS t2
              WHERE t1.Loc1Time = t2.Loc1Time
                AND EXISTS (SELECT 1 FROM #temp3 AS t3
                            WHERE t2.Loc1Time = t3.Loc1Time
                              AND EXISTS (SELECT 1 FROM #temp4 AS t4
                                          WHERE t3.Loc1Time = t4.Loc1Time)))

I need immediate help on this; I would appreciate any input.
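A rewrite with explicit inner joins keeps the summed engValue while exposing the join keys directly, and indexing each temp table on Loc1Time before the join is often what stops the runaway runtime; this is a sketch against the temp tables shown above:

CREATE INDEX IX_temp1_Loc1Time ON #temp1 (Loc1Time)
CREATE INDEX IX_temp2_Loc1Time ON #temp2 (Loc1Time)
CREATE INDEX IX_temp3_Loc1Time ON #temp3 (Loc1Time)
CREATE INDEX IX_temp4_Loc1Time ON #temp4 (Loc1Time)

SELECT siqPid = 1007,
       t1.Gmt909Time AS GmtTime,
       (t1.engValue + t2.engValue + t3.engValue + t4.engValue) AS EngValue,
       t1.Loc1Time AS locTime,
       t1.msgId
INTO #temp5
FROM #temp1 AS t1
INNER JOIN #temp2 AS t2 ON t2.Loc1Time = t1.Loc1Time
INNER JOIN #temp3 AS t3 ON t3.Loc1Time = t1.Loc1Time
INNER JOIN #temp4 AS t4 ON t4.Loc1Time = t1.Loc1Time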
I have a db over which I have little control; most of its makeup is dictated by the vendor-supplied tools. We currently have over 700 tables and 19000 columns. Has anyone seen a problem or saturation point with these kinds of numbers? The database delivered to the clients will be from 2-50 GB depending on the site. I can probably throw hardware at problems, but if anyone has been down this road any suggestions are appreciated.
Hi gurus, I'm creating a web application where I will have a large number of tables (between 10k and 20k), this is done for the sake of scalability as tables will be moved to different database servers as the application grows and also for performance (smaller indexes). I'm worried though how having a large number of tables could affect the performance of SQL Server as the application will start on one single database server. I tried to find some resources on that on the internet but couldn't find any.
I would really appreciate if you can give me some advice and if you have any good links that would be great...
I have a database with more than 50 tables, and 25 of those tables have more than 10 lakh (one million) records each, which includes history records. I have two data files for this database under the PRIMARY filegroup. Now I want to transfer these history records to some other database. I wanted to know if this kind of activity will boost the database performance. If yes, how should I configure my new database? On what factors of partitioning will my performance improve?
I have a table with over 61 million records having a clustered index on an identity column (primary key). Simple count queries are taking minutes to execute on this table (e.g. select count(1) from table1). I have checked the statistics on the primary key, which displayed a histogram with the 39 millionth record as the RANGE_HI_KEY. I updated the statistics on this column and tried requerying, but it still took at least 5 minutes to give me the count of records in the table. Also, there were no users using the table when I queried. Inserts into this table were working fine. I have other tables in my database with 41 million records having no such issues. Can anyone point me to the problem areas in such scenarios?
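As a side note, and assuming SQL Server 2005 or later, an approximate row count can be read from metadata instead of scanning the whole clustered index:

SELECT SUM(p.row_count) AS ApproxRows
FROM sys.dm_db_partition_stats AS p
WHERE p.object_id = OBJECT_ID('dbo.table1')
  AND p.index_id IN (0, 1)   -- heap or clustered index only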
Will you recommend the usage of temporary tables in a SQL Server database? AFAIK, it boosts performance. But recently I read one article on SQL-Server-Performance.com which confused me. Any insights on this would be helpful?
After upgrading to SQL 7 (SP1), we have several SP's that have gone from taking 2-3 min to take 15-20. Each of these SP's creates at least one temp table, inserts into that table, then updates the records in that table. From our research, we can tell that the creation and inserts into the temp tables are fine. It is the updating of these tables that causes the problem. We can observe that the problem is happening by watching the processors go to and stay above 90%. If it were just a few SP's, we could easily fix it and go on, but because of 6.5's limit of 16 tables referenced in a SP, we had to use this method many times. Is there a fix out there for this or a configuration change I can make?
The following code should insert into 3 tables based on conditions. There's something screwy in my syntax and I'm pretty new at this; can anyone help with transforming this in terms of performance and being syntactically correct? Thanks a million!
IF NOT EXISTS (SELECT [Artist] FROM [integration].[dbo].[tblMusic_Artist] WHERE [Artist] = @Artist)
BEGIN
    -- New artist: insert artist, album and song
    INSERT INTO [integration].[dbo].[tblMusic_Artist] ([Artist], [Genre], [NLink])
    VALUES (@Artist, @Genre, @NLink)
    SET @NewArtistID = @@IDENTITY

    INSERT INTO [integration].[dbo].[tblMusic_Albums] ([Album])
    VALUES (@Album)
    SET @NewAlbumID = @@IDENTITY

    INSERT INTO [integration].[dbo].[tblMusic_Song] ([Song], [ArtistID], [AlbumID], [SLink])
    VALUES (@Song, @NewArtistID, @NewAlbumID, @SLink)
END
ELSE IF NOT EXISTS (SELECT [Album] FROM [integration].[dbo].[tblMusic_Albums] WHERE [Album] = @Album)
BEGIN
    -- Existing artist, new album: insert album, look up artist, insert song
    INSERT INTO [integration].[dbo].[tblMusic_Albums] ([Album])
    VALUES (@Album)
    SET @NewAlbumID = @@IDENTITY
    SET @NewArtistID = (SELECT [ID] FROM [integration].[dbo].[tblMusic_Artist] WHERE [Artist] = @Artist)

    INSERT INTO [integration].[dbo].[tblMusic_Song] ([Song], [ArtistID], [AlbumID], [SLink])
    VALUES (@Song, @NewArtistID, @NewAlbumID, @SLink)
END
ELSE
BEGIN
    -- Existing artist and album: look up both IDs and insert the song only
    SET @NewAlbumID = (SELECT [ID] FROM [integration].[dbo].[tblMusic_Albums] WHERE [Album] = @Album)
    SET @NewArtistID = (SELECT [ID] FROM [integration].[dbo].[tblMusic_Artist] WHERE [Artist] = @Artist)

    INSERT INTO [integration].[dbo].[tblMusic_Song] ([Song], [ArtistID], [AlbumID], [SLink])
    VALUES (@Song, @NewArtistID, @NewAlbumID, @SLink)
END
We have a need to retrieve Sybase data within a MS SQL Server application. We are using SQL Server's linked database feature with the Sybase 12.0 OLE DB driver. It takes 5 minutes to run a query that takes 2 seconds from isql. Any suggestions? Thanks
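One thing worth testing is a pass-through query via OPENQUERY, which sends the whole statement to Sybase instead of letting SQL Server pull rows across and filter them locally; the linked server name, table and columns here are placeholders:

-- Four-part naming: SQL Server may fetch the remote table and filter locally
SELECT * FROM SYBASE_LINK.mydb.dbo.orders WHERE order_date >= '2003-01-01'

-- Pass-through: the filtering runs on the Sybase side
SELECT *
FROM OPENQUERY(SYBASE_LINK, 'SELECT order_id, order_date, amount
                             FROM mydb.dbo.orders
                             WHERE order_date >= ''2003-01-01''')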