SQL Performance - Lots Of Little Tables Or One Big One?

May 25, 2004

I am planning an application where ~1000 companies will be accessing data. Should I use a key to identify the company and place all data in one table (i.e. WHERE company = 123), or should the application create company-specific tables? In other words, should I have 1000 small tables with 100 records each, or one table with 100,000 records?
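For reference, a minimal sketch of the single-table approach (table and column names are hypothetical): with the company key leading the clustered index, WHERE CompanyID = 123 is an index seek that touches only that company's rows, no matter how many companies share the table.

-- hypothetical schema for the single-table design
CREATE TABLE dbo.CompanyData (
    CompanyID int NOT NULL,       -- identifies which of the ~1000 companies owns the row
    RecordID  int NOT NULL,
    Payload   nvarchar(200) NULL,
    CONSTRAINT PK_CompanyData PRIMARY KEY CLUSTERED (CompanyID, RecordID)
)

-- every query filters on the leading key column
SELECT RecordID, Payload
FROM dbo.CompanyData
WHERE CompanyID = 123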

View 2 Replies



Performance Problem, Lots Of Disk Activity, Running Out Of Memory

Jul 20, 2005

Fellas!!

This is a very complicated one and it took me a few days to figure out exactly what's going on, but here's the final story:

I have a production environment running on .NET with a SQL Server (2000, SP3). The SQL Server is on a dedicated Proliant computer with 2GB RAM (the actual SQLServer.exe process has dynamic memory assignment and can reach up to 1.6GB RAM). Nothing else is running on that specific computer.

Once the SQL Server is started, it hits 300MB RAM (the minimum that was set in the configuration of the server - remember, it is dynamically acquired).

Then there is a .NET program that requests just about all the data the SQL Server contains (apart from a single table that contains roughly 1.6 million rows and another table that contains about 10000 rows which are all of type IMAGE).

Once all the data is retrieved, the RAM is at about 400MB. From there on, every update I make to the data on the server causes the RAM to go up by a bit (the updates are done in a transaction which of course is committed at the end). It seems that BLOB updates are the major problem in all of this. For some reason, uploading a blob of size 9MB causes the RAM to go up by roughly 20MB, and after commit it goes down 10MB (a total gain of roughly 10MB RAM). Eventually the SQLServer process hits its upper limit (1.6GB), and at this point it starts slowing down.

Some performance checks showed me the SQL Server has a lot of disk activity; it seems it is reading and writing pages of data from/to the HD all the time (which causes the queries to be much, much slower).

We have a development environment running the exact same code (it is the exact same in everything, except for the amount of data stored in the DB). This does not happen there at all.

I have a few questions:
1. Why is the RAM going up after BLOB updates?
2. Why is the RAM going up at all?
3. How can I tell the DB which tables should remain in RAM at all times (never swapped back to the HD)? DBCC PINTABLE does not seem to do the job.

It does not seem to have anything to do with the .NET code.

Thank you very much,
M Yamo.
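For question 3: on SQL Server 2000 the usual lever is not DBCC PINTABLE but capping the buffer pool, since the server is designed to keep caching pages until it reaches its memory ceiling. A sketch (the 1200 MB figure is an assumption; tune it to leave headroom for the OS):

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
-- cap the buffer pool so the process stops short of the 1.6GB ceiling
EXEC sp_configure 'max server memory (MB)', 1200
RECONFIGURE
GO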

View 4 Replies View Related

Deleting Duplicate Records From Lots Of Tables

Aug 29, 2006

Hi All,

So.. I'm a complete newb to SQL stuff.

I managed to find the 'Deleting Duplicate Records' article on SQLTeam.com (thanks, by the way!!). I managed to modify it for one of my tables (one of 14).


-- Add a new column

Alter table dbo.tblMyDocsSize add NewPK int NULL
go

-- populate the new Primary Key
declare @intCounter int
set @intCounter = 0
update dbo.tblMyDocsSize
SET @intCounter = NewPK = @intCounter + 1

-- ID the records to delete and get one primary key value also
-- We'll delete all but this primary key
select strComputer, strATUUser, RecCount = count(*), PKtoKeep = max(NewPK)
into #dupes
from dbo.tblMyDocsSize
group by strComputer, strATUUser
having count(*) > 1
order by count(*) desc, strComputer, strATUUser

-- delete dupes except one Primary key for each dup record
delete dbo.tblMyDocsSize
from dbo.tblMyDocsSize a join #dupes d
on d.strComputer = a.strComputer
and d.strATUUser = a.strATUUser
where a.NewPK not in (select PKtoKeep from #dupes)

-- remove the NewPK column
ALTER TABLE dbo.tblMyDocsSize DROP COLUMN NewPK
go

drop table #dupes


Now that I've got that figured out, I need to write the same thing to fix the other 13 tables (with different column info) - and I'll need to run this daily.

Basically I've put together some vbscript that gathers inventory data and drops it into an MSDE db (sorry - goin for 'free' stuff right now). Problem is it has to run daily so that I'm sure to capture computers that turned on at different times etc which ever-increases my database 'till I bounce off the 2GB limit of MSDE.

So the question is, what would be the best way to do this? Can I put the code into a stored procedure that I can execute each day?
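One way to make this schedulable, sketched under the assumption that besides the two grouping columns the table has a single data column (a hypothetical numSize) and that any surviving row per group will do, as in the script above - rebuild the table from one row per (strComputer, strATUUser):

CREATE PROCEDURE dbo.usp_DedupeMyDocsSize
AS
BEGIN
    SET NOCOUNT ON

    -- one surviving row per (strComputer, strATUUser)
    SELECT strComputer, strATUUser, MAX(numSize) AS numSize
    INTO #keep
    FROM dbo.tblMyDocsSize
    GROUP BY strComputer, strATUUser

    BEGIN TRAN
        DELETE dbo.tblMyDocsSize
        INSERT dbo.tblMyDocsSize (strComputer, strATUUser, numSize)
        SELECT strComputer, strATUUser, numSize
        FROM #keep
    COMMIT

    DROP TABLE #keep
END
GO

MSDE ships with SQL Server Agent, so a daily job can call one such procedure per table; failing that, a Windows scheduled task running osql -E -Q "EXEC dbo.usp_DedupeMyDocsSize" works too (the database name in the connection is whatever yours is).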


Thanks for your help....

View 4 Replies View Related

Temp Tables And Performance

Mar 17, 2001

I have been researching some performance problems in a very large
application and I have a couple of questions about temp tables. (SQL 7.0
SP2)

I have one large procedure that I have been using as a test case.
Originally this procedure was a cursor with lots of processing steps
involving writing to, reading from and deleting in temp tables inside the
cursor. I remember reading that temp tables inside a cursor were a
potential performance problem, so I rewrote the procedure, replacing the
cursor with a While Loop.

Doing this showed no increase in performance. Since Profiler was showing .5
second duration times on statements in the procedure accessing the temp
tables I tested some more. I moved all the create statements to the top of
the procedure, as I know these statements after processing steps can cause
recompiles to happen. Still no performance increase.

Finally I replaced all the temp tables with actual tables, just to see what
would happen. With no other changes the performance increased by more than
500%.

Can someone give me some clues as to what is happening here, because if this
is a symptom of something I don't understand, the potential performance
problems from other places where temp tables are similarly used in the
application are enormous.

Thanks.

View 1 Replies View Related

Joining Tables - Performance

Sep 9, 2003

In a simple join query, such as
SELECT *
FROM a, b
WHERE a.id = b.id

if table 'a' has 1 million records and table 'b' has a few thousand, will the order of the tables in the FROM clause make any difference?

View 1 Replies View Related

Best Performance Between 2 Tables Insert?

Nov 20, 2007

Hi There

I have a table, let's call it TABLE_A, that has +- 100 million rows; obviously inserts into this table take some time, as it has 1 clustered and 3 non-clustered indexes.

I have another table, let's call it TABLE_B; it is identical to TABLE_A and it holds 100,000 rows that must be inserted into TABLE_A.

As you can imagine, an INSERT INTO TABLE_A SELECT * FROM TABLE_B takes a lot of time.

What is the best way to speed this up? (Dropping indexes is not an option.)

I know bulk insert gives the best performance, but can you bulk insert between tables? Bulk insert reads from a flat file source.

It seems redundant to write an SSIS package to extract the data out of TABLE_B to a file simply to bulk insert it back into the database.

So in a nutshell, what is the fastest way to get the rows from TABLE_B into TABLE_A?
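For what it's worth: BULK INSERT only reads external files, so table-to-table it is INSERT ... SELECT. What often helps is committing in key-range batches so each transaction (and the log) stays small. A sketch, assuming the shared clustered key is an int column here called ID:

DECLARE @lo int, @hi int, @batch int
SELECT @lo = MIN(ID), @hi = MAX(ID) FROM TABLE_B
SET @batch = 10000

WHILE @lo <= @hi
BEGIN
    -- each pass is its own transaction, so locks and log space stay bounded
    INSERT INTO TABLE_A
    SELECT * FROM TABLE_B
    WHERE ID >= @lo AND ID < @lo + @batch

    SET @lo = @lo + @batch
END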

Thanx

View 1 Replies View Related

How To Improve Performance If Inner Join Has More Than 2 Or 3 Tables

Aug 15, 2006

Hi everyone
I need a solution for this query. It works fine for 2 tables, but when there are thousands of records in each table and the query has more than 2 tables, the process never ends.
Here is the query
(select siqPid= 1007, t1.Gmt909Time as GmtTime,(t1.engValue+t2.engValue+t3.engValue+t4.engValue) as EngValue,
 t1.Loc1Time as locTime,t1.msgId
 into #temp5
 from #temp1 as t1,#temp2 as t2,#temp3 as t3,#temp4 as t4
 where t1.Loc1Time = t2.Loc1Time and t2.Loc1Time = t3.Loc1Time and t3.Loc1Time = t4.Loc1Time)
 I was trying to do something with this query.
 
But the engValues can't be summed up, and if I add that to the query, the query isn't compiling.
(select siqPid= 1007, t1.Gmt909Time as GmtTime,
 t1.Loc1Time as locTime,t1.msgId,(t1.engValue+t2.engValue+t3.engValue+t4.engValue) as engValue
 --into #temp5
 from #temp1 as t1
 where exists
(Select 1
  from #temp2 as t2
 where t1.Loc1Time = t2.Loc1Time and
  exists
(Select 1
  from #temp3 as t3
 where t2.Loc1Time = t3.Loc1Time and
   exists
(Select 1
 from #temp4 as t4
 where t3.Loc1Time = t4.Loc1Time))))
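The EXISTS form can't compile the sum because t2-t4 are out of scope in the outer SELECT list. A sketch that keeps the original four-way join but first gives each temp table an index on the join key, which is usually what stops this kind of query from running forever:

CREATE CLUSTERED INDEX ix1 ON #temp1 (Loc1Time)
CREATE CLUSTERED INDEX ix2 ON #temp2 (Loc1Time)
CREATE CLUSTERED INDEX ix3 ON #temp3 (Loc1Time)
CREATE CLUSTERED INDEX ix4 ON #temp4 (Loc1Time)

SELECT siqPid = 1007,
       t1.Gmt909Time AS GmtTime,
       (t1.engValue + t2.engValue + t3.engValue + t4.engValue) AS EngValue,
       t1.Loc1Time AS locTime,
       t1.msgId
INTO #temp5
FROM #temp1 AS t1
JOIN #temp2 AS t2 ON t2.Loc1Time = t1.Loc1Time
JOIN #temp3 AS t3 ON t3.Loc1Time = t2.Loc1Time
JOIN #temp4 AS t4 ON t4.Loc1Time = t3.Loc1Time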
 
 
I need immediate help on this; I would appreciate any input.
 
Thanks
-Sarah

View 15 Replies View Related

Performance Hit When Tables Are Segregated Into Separate DBs?

Jan 5, 2001

We have some tables that we have spread across two databases. The segregation isn’t essential, but the entities involved were disparate enough that we thought it made sense. However, our client app regularly & frequently requires information that can only be answered by queries to tables in both databases. It has been suggested that segregating the tables as we have introduces a performance hit. At this stage, it would be relatively easy to re-combine the tables into one DB.

View 1 Replies View Related

When Or Where Do Too Many Columns Or Tables Affect Performance?

May 4, 2001

I have a db over which I have little control of its makeup because of the vendor-supplied tools. We currently have over 700 tables and 19000 columns. Has anyone seen a problem or saturation point with these kinds of numbers? The database delivered to the clients will be from 2-50 gig depending on the site. I can probably throw hardware at problems, but if anyone has been down this road, any suggestions are appreciated.

View 1 Replies View Related

Large Number Of Tables And Performance

Jan 25, 2008

Hi gurus, I'm creating a web application where I will have a large number of tables (between 10k and 20k), this is done for the sake of scalability as tables will be moved to different database servers as the application grows and also for performance (smaller indexes). I'm worried though how having a large number of tables could affect the performance of SQL Server as the application will start on one single database server. I tried to find some resources on that on the internet but couldn't find any.

I would really appreciate if you can give me some advice and if you have any good links that would be great...

View 10 Replies View Related


Tables Partitioning -- Performance Boost

Nov 13, 2007

Hi

I have a question about the partitioning a table.

I have a database with more than 50 tables, and 25 of them have more than 10 lakh (1 million) records each, including history records. I have two data files for this database under the PRIMARY filegroup. Now I want to transfer these history records to some other database.
I wanted to know whether this kind of activity will boost the database performance. If yes, how should I configure my new database?
On what factors of partitioning will my performance improve?
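Archiving usually does help here, simply because the working tables and their indexes get smaller. A per-table sketch (the archive database, table name and cutoff date are all assumptions):

BEGIN TRAN

INSERT INTO ArchiveDB.dbo.OrdersHistory
SELECT * FROM dbo.Orders
WHERE OrderDate < '20060101'   -- cutoff for "history"

DELETE dbo.Orders
WHERE OrderDate < '20060101'

COMMIT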

Thanks in advance

Regards
Arvind

View 1 Replies View Related

Performance Issues With Large Tables

Dec 5, 2007

Hi,

I have a table with over 61 million records and a clustered index on an identity column (the primary key). Simple count queries take minutes to execute on this table (e.g. select count(1) from table1). I checked the statistics on the primary key, and the histogram showed the 39 millionth record as the RANGE_HI_KEY. I updated the statistics on this column and tried requerying, but it still took at least 5 minutes to give me the count of records in the table. Also, there were no users using the table when I queried. Inserts into this table work fine. I have other tables in my database with 41 million records that have no such issues. Can anyone point me to the problem areas in such scenarios?
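A plain COUNT(*) has to scan the entire clustered index, so on 61 million rows the minutes are mostly I/O. If an approximate figure is acceptable, the metadata row count is instant - a sketch using the 2005-compatible sysindexes view:

SELECT rows
FROM sysindexes
WHERE id = OBJECT_ID('table1')   -- heap (indid 0) or clustered index (indid 1)
  AND indid < 2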


Thanks,
Harish

View 6 Replies View Related

Can Too Many Tables In The Database Affect Performance?

May 23, 2008

We have more than 2000 tables in the database. Can existence of so many tables affect the performance on SQL SERVER 2005?

View 1 Replies View Related

Temporary Tables In A Performance Perceptive

Nov 20, 2007

Would you recommend the usage of temporary tables in a SQL Server database? AFAIK, they boost performance. But recently I read an article on SQL-Server-Performance.com which confused me. Any insights on this would be helpful.

Thanks

View 3 Replies View Related

Updating Temporary Tables Causes Slow Performance

Jul 8, 1999

After upgrading to SQL 7 (SP1), we have several SPs that have gone from taking 2-3 minutes to taking 15-20. Each of these SPs creates at least one temp table, inserts into that table, then updates the records in that table. From our research, we can tell that the creation of and inserts into the temp tables are fine. It is the updating of these tables that causes the problem. We can observe the problem happening by watching the processors go to and stay above 90%. If it were just a few SPs, we could easily fix it and move on, but because of 6.5's limit of 16 tables referenced in an SP, we had to use this method many times. Is there a fix out there for this, or a configuration change I can make?

View 2 Replies View Related

Insert, Conditions, 3 Tables, Syntax, Performance

Jun 13, 2004

The following code should insert into 3 tables based on conditions. There's something screwy in my syntax, and I'm pretty new at this - can anyone help with transforming this in terms of performance and syntactic correctness? Thanks a million!


CREATE PROCEDURE [insert_vwMusic]
    (@Artist   [nvarchar](50),
     @Genre    [nvarchar](50),
     @NLink    [nvarchar](50),
     @Album    [nvarchar](50),
     @Song     [nvarchar](50),
     @ArtistID [nvarchar](50),
     @AlbumID  [nvarchar](50),
     @SLink    [nvarchar](50))

AS

DECLARE @NewArtistID varchar(50),
        @NewAlbumID  varchar(50)

IF NOT EXISTS (SELECT [Artist] FROM [integration].[dbo].[tblMusic_Artist] WHERE [Artist] = @Artist)
BEGIN
    -- new artist: insert the artist, the album, then the song
    INSERT INTO [integration].[dbo].[tblMusic_Artist] ([Artist], [Genre], [NLink])
    VALUES (@Artist, @Genre, @NLink)

    SET @NewArtistID = @@IDENTITY   -- SCOPE_IDENTITY() is safer if triggers ever get added

    INSERT INTO [integration].[dbo].[tblMusic_Albums] ([Album])
    VALUES (@Album)

    SET @NewAlbumID = @@IDENTITY

    INSERT INTO [integration].[dbo].[tblMusic_Song] ([Song], [ArtistID], [AlbumID], [SLink])
    VALUES (@Song, @NewArtistID, @NewAlbumID, @SLink)
END
ELSE
BEGIN
    IF NOT EXISTS (SELECT [Album] FROM [integration].[dbo].[tblMusic_Albums] WHERE [Album] = @Album)
    BEGIN
        -- existing artist, new album
        INSERT INTO [integration].[dbo].[tblMusic_Albums] ([Album])
        VALUES (@Album)

        SET @NewAlbumID = @@IDENTITY
        SET @NewArtistID = (SELECT [ID] FROM [integration].[dbo].[tblMusic_Artist] WHERE [Artist] = @Artist)

        INSERT INTO [integration].[dbo].[tblMusic_Song] ([Song], [ArtistID], [AlbumID], [SLink])
        VALUES (@Song, @NewArtistID, @NewAlbumID, @SLink)
    END
    ELSE
    BEGIN
        -- existing artist and album: just add the song
        SET @NewAlbumID  = (SELECT [ID] FROM [integration].[dbo].[tblMusic_Albums] WHERE [Album] = @Album)
        SET @NewArtistID = (SELECT [ID] FROM [integration].[dbo].[tblMusic_Artist] WHERE [Artist] = @Artist)

        INSERT INTO [integration].[dbo].[tblMusic_Song] ([Song], [ArtistID], [AlbumID], [SLink])
        VALUES (@Song, @NewArtistID, @NewAlbumID, @SLink)
    END
END

GO

View 5 Replies View Related

Performance Problem With Linked Sybase Tables

Jul 20, 2005

We have a need to retrieve Sybase data within a MS SQL Server application. We are using SQL Server's linked database feature with the Sybase 12.0 OLE DB driver. It takes 5 minutes to run a query that takes 2 seconds from isql. Any suggestions? Thanks

View 1 Replies View Related

Please Help! I Have Lots Of Questions.

May 6, 2007

In case some of you have read my previous posts, you may be aware that I'm writing a webboard application as a replacement for the old one. The old one currently has approximately 50000 topics in the database, each with on average 10 replies (I just checked recently; I thought it was only 7000 topics). I need to provide paging and sorting features for the topic list. But I can't just SELECT all of them and let GridView do the paging/sorting, right? I have been using stored procedures to hold my SQL statements for several projects now. I know how to deal with the paging feature (ROW_NUMBER), but the sorting requires me to change the "ORDER BY" clause.

1. Can somebody tell me how to change the ORDER BY clause in the stored procedure(s) at runtime? Or does anyone have another approach?

Currently I'm thinking about moving back from stored procedures to hard-coded SQL statements, and then modifying/generating the SQL statement for each paging/sorting combination. But I've learned that stored procedures give more performance and security.

2. Given the situation I described, is it worth moving from stored procedures to hard-coded SQL?

I'm also using a 3-tier architecture approach + OOP. But I've reached a conflict in my thoughts. You see, according to OOP, I'm supposed to create classes that reflect the actual objects in the real world, right? In my case the classes are "Board, Topic, Reply, ....". Following this and the 3-tier approach, I intend to use ObjectDataSource as a bridge between Presentation Logic and Business Logic. But I wonder what my datasource class should return.

3. Should my data source class return data objects, like the 1st approach:

[DataObject(true)]
public class TopicDataSource
{
    public static Topic[] GetTopicList() { }
}

or should it return DataSet / DataTable / DataReader, like the 2nd approach:

[DataObject(true)]
public class TopicDataSource
{
    public static DataTable GetTopicList() { }
}

Personally I think approach 1 is more OOP and allows for more extensibility, but approach 2 might be faster.

4. If I go with approach 1, how should I control which properties of my data objects are read-only after a row has been inserted/created? Can I just set my data object's property to be read-only? Or do I have to set it at page level (i.e. GridView -> Columns -> BoundField -> ReadOnly=True)? Or do I set it at the page level and write code to throw an exception in the rare case the application/user tries to change its value? Or something else?

Please help. These questions have slowed me down for days now. If there are any concepts I've misunderstood, please tell me. I'm aware that I don't know as much as some of you. I will be extremely grateful to anyone who answers any of my questions. Thanks a lot.

PS. For those who think my questions are stupid, I'm very, very sorry to bother you.
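For question 1, the ORDER BY inside ROW_NUMBER() can be switched with CASE expressions, so paging and sorting stay inside one stored procedure. A sketch with placeholder table and column names:

CREATE PROCEDURE dbo.usp_GetTopicPage
    @PageNum  int,
    @PageSize int,
    @SortBy   varchar(20)   -- 'date' or 'replies'
AS
BEGIN
    SET NOCOUNT ON;

    WITH numbered AS (
        SELECT TopicID, Title, LastPostDate, ReplyCount,
               ROW_NUMBER() OVER (
                   ORDER BY
                       CASE WHEN @SortBy = 'date'    THEN LastPostDate END DESC,
                       CASE WHEN @SortBy = 'replies' THEN ReplyCount   END DESC
               ) AS rn
        FROM dbo.Topics
    )
    SELECT TopicID, Title, LastPostDate, ReplyCount
    FROM numbered
    WHERE rn BETWEEN (@PageNum - 1) * @PageSize + 1 AND @PageNum * @PageSize
    ORDER BY rn;
END

Each CASE evaluates to NULL unless its sort mode is selected, so only the matching key actually orders the rows; this keeps the SQL static (no dynamic string building) while still allowing several sort modes.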

View 3 Replies View Related

Do Lots Of COUNTs

Sep 19, 2006

Hello :)

I seem to have somehow got myself into a situation where I'm having to run the following SELECTs, one after another, on a single ASP page. This is not tidy. How can I join them all together so I get a single recordset returned with all my stats in different columns?

SELECT COUNT(*) FROM tblQuiz WHERE [q3] = '5 years +' OR [q3] = '2 - 4 years'
SELECT COUNT(*) FROM tblQuiz WHERE [q4] <> '' AND [q4] IS NOT NULL
SELECT COUNT(*) FROM tblQuiz WHERE [q5] = 'Unhappy'
SELECT COUNT(*) FROM tblQuiz WHERE [q6] = 'Yes'
SELECT COUNT(*) FROM tblQuiz WHERE [q7] = 'Yes'
SELECT COUNT(*) FROM tblQuiz WHERE [q8] <> '' AND [q8] IS NOT NULL
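One common way is conditional aggregation - a single scan of tblQuiz returning one recordset with one column per stat (column aliases here are made up):

SELECT
    SUM(CASE WHEN [q3] IN ('5 years +', '2 - 4 years') THEN 1 ELSE 0 END) AS q3Count,
    SUM(CASE WHEN [q4] <> '' AND [q4] IS NOT NULL THEN 1 ELSE 0 END)      AS q4Count,
    SUM(CASE WHEN [q5] = 'Unhappy' THEN 1 ELSE 0 END)                     AS q5Count,
    SUM(CASE WHEN [q6] = 'Yes' THEN 1 ELSE 0 END)                         AS q6Count,
    SUM(CASE WHEN [q7] = 'Yes' THEN 1 ELSE 0 END)                         AS q7Count,
    SUM(CASE WHEN [q8] <> '' AND [q8] IS NOT NULL THEN 1 ELSE 0 END)      AS q8Count
FROM tblQuiz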

View 8 Replies View Related

Lots Of Queries For My Db

Jul 20, 2005

Hello all,

I have a database in SQL Server that should save data from a CRM-like application. The database consists of tables like products, services, customers, partners etc. The problem is that the users should be able to find these items on different properties, with or without substring matching (SQL: LIKE). Example: I want the users to be able to find a customer by providing a customerID, but also by providing a customername, zipcode or just a part of those strings.

This will result in a lot of queries. I bet there are some nice solutions to this, since I will not be the first with this situation. If anyone can help, please.

Thank you in advance.

Regards,
Freek Versteijn
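A sketch of one common answer to the question above - a single search procedure with optional parameters, where each NULL parameter drops out of the filter (column names are assumptions; note that a leading-wildcard LIKE cannot use an index):

CREATE PROCEDURE dbo.usp_FindCustomers
    @CustomerID   int          = NULL,
    @CustomerName nvarchar(50) = NULL,
    @ZipCode      varchar(10)  = NULL
AS
BEGIN
    SET NOCOUNT ON

    SELECT CustomerID, CustomerName, ZipCode
    FROM dbo.Customers
    WHERE (@CustomerID   IS NULL OR CustomerID = @CustomerID)
      AND (@CustomerName IS NULL OR CustomerName LIKE '%' + @CustomerName + '%')
      AND (@ZipCode      IS NULL OR ZipCode LIKE @ZipCode + '%')
END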

View 3 Replies View Related

Impact Of Empty Tables On Database Size And Performance

Jun 21, 2007

Hello All,

When creating my database I have modeled some of the tables after the AdventureWorks sample database.

There are some fields or entire tables in AdventureWorks that I do not see an immediate use for; however, I would hate to omit them and find out later they would have been beneficial (e.g. the territory table).

In general terms, what would the impact be on the size and performance of a database which contains tables or fields that do not contain data?



Thanks for your help!

View 1 Replies View Related

Increase Performance In Query With Big Tables (milions Of Records)

Apr 4, 2008



Hello,

I have 3 tables (A, B, C) with millions of records (A ca 5 million, B and C ca 10 million).
I have created a join between them:

SELECT <some fields from A, B, C>
FROM A AS a
JOIN B AS b
    ON a.a1 = b.a1
   AND a.a2 = b.a2
JOIN C AS c
    ON b.b1 = c.b1
   AND b.b2 = c.b2
WHERE fieldtime <= <date/time>

But it takes too much time: after two and a half hours it is still running.

Do you know how to increase the performance?
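A first thing to check (a sketch; it assumes no such indexes exist yet): composite indexes matching the join keys, so the joins don't have to scan B and C in full for every probe:

CREATE INDEX IX_B_a1_a2 ON B (a1, a2)
CREATE INDEX IX_C_b1_b2 ON C (b1, b2)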

Thanks

View 7 Replies View Related

Inconsistent Performance Results With Large Partitioned Tables.

Dec 5, 2007

I have a query that joins two large partitioned tables and depending on the values in the where clause, I can get dramatically different performance results.

The first query completed in around 7s and has 47,000 logical reads.


select mo.monitor_id, mo.site_id, mo.testtime,
       sum(mo.NumBytes), sum(mo.DNSTime), sum(mo.ConnectTime),
       sum(mo.FirstByteTime), sum(mo.ContentTime), sum(mo.RelocTime)
from monitor_raw mr (nolock), monitor_object mo (nolock)
where mr.monitor_id in (5339, 5341, 5342, 943842, 943866)
  and mr.testtime between 'Oct 31 2007 3:00:00:000PM' and 'Nov 30 2007 3:00:00:000PM'
  and mo.returncode = 200
  and mr.site_id in (101,102,105,109,110,112,115,117,119,122,126,151,132,139,129,135,121,138,143,142,159,148,128,171,176,177,178,111,113,116,118,120,127,133,131,130,174,179,185,205,200,202,203,204,210,211,208,209,212,213,216,199,214,224,225,229,230,232,235,241,245,247,250,254,261,267,264,265,266,268,269)
  and mr.escalationlevel = 0
  and mr.monitor_id = mo.monitor_id
  and mr.testtime = mo.testtime
  and mr.site_id = mo.site_id
group by mo.monitor_id, mo.site_id, mo.testtime


The second query takes 188s to complete and has 1.8m logical reads. The only difference between the two is the value of the monitor_ids in the where clause.


select mo.monitor_id, mo.site_id, mo.testtime,
       sum(mo.NumBytes), sum(mo.DNSTime), sum(mo.ConnectTime),
       sum(mo.FirstByteTime), sum(mo.ContentTime), sum(mo.RelocTime)
from monitor_raw mr (nolock), monitor_object mo (nolock)
where mr.monitor_id in (152682, 5339, 5341, 5342, 268080)
  and mr.testtime between 'Oct 31 2007 3:00:00:000PM' and 'Nov 30 2007 3:00:00:000PM'
  and mo.returncode = 200
  and mr.site_id in (101,102,105,109,110,112,115,117,119,122,126,151,132,139,129,135,121,138,143,142,159,148,128,171,176,177,178,111,113,116,118,120,127,133,131,130,174,179,185,205,200,202,203,204,210,211,208,209,212,213,216,199,214,224,225,229,230,232,235,241,245,247,250,254,261,267,264,265,266,268,269)
  and mr.escalationlevel = 0
  and mr.monitor_id = mo.monitor_id
  and mr.testtime = mo.testtime
  and mr.site_id = mo.site_id
group by mo.monitor_id, mo.site_id, mo.testtime



The two tables have clustered indexes on monitor_id, testtime and site_id. Comparing the execution plan, I can see why there is such a difference in performance. The second query performs a clustered index seek on the monitor_object table starting at the lowest monitor_id, testtime & site_id through the highest monitor_id, testtime & site_id. The first query performs a clustered index seek where the monitor_id, testtime and site_id equals the same values from the monitor_raw table.


My question is, how can I force the second query to use the same execution plan as the first so that I can get better performance?

One possible workaround that I could use is to execute five individual queries, one for each monitor_id and then union the results together but this would require significant code changes to my stored procs.
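A lighter-weight experiment than rewriting the procs (a hint, not a guaranteed fix) is to ask for the nested-loops shape the fast plan used, so monitor_object is sought once per qualifying monitor_raw row:

select mo.monitor_id, mo.site_id, mo.testtime,
       sum(mo.NumBytes), sum(mo.DNSTime), sum(mo.ConnectTime),
       sum(mo.FirstByteTime), sum(mo.ContentTime), sum(mo.RelocTime)
from monitor_raw mr (nolock), monitor_object mo (nolock)
where mr.monitor_id in (152682, 5339, 5341, 5342, 268080)
  and mr.testtime between 'Oct 31 2007 3:00:00:000PM' and 'Nov 30 2007 3:00:00:000PM'
  and mo.returncode = 200
  and mr.site_id in (101,102,105,109,110,112,115,117,119,122,126,151,132,139,129,135,121,138,143,142,159,148,128,171,176,177,178,111,113,116,118,120,127,133,131,130,174,179,185,205,200,202,203,204,210,211,208,209,212,213,216,199,214,224,225,229,230,232,235,241,245,247,250,254,261,267,264,265,266,268,269)
  and mr.escalationlevel = 0
  and mr.monitor_id = mo.monitor_id
  and mr.testtime = mo.testtime
  and mr.site_id = mo.site_id
group by mo.monitor_id, mo.site_id, mo.testtime
option (loop join)   -- forces nested loops; verify the plan and timing before adopting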

Thanks,

Tim

View 5 Replies View Related

Deleting Lots Of Records

May 21, 2004

Apparently, deleting 7,000,000 records from a table of about 20,000,000 is not advisable. We were able to take orders at 8:00AM, but not at 7:59.

So, what's the best way of going about deleting a large number of records? Pretty basic lookup table, no relationships with other tables, 12 or so address-type fields, 4 or 5 simple indexes. I can take it down for a weekend or night, if needed.

DTS the ones to keep to another table, drop the old and rename the new table?
Bulk copy out, truncate and bring back in?
DTS to text, truncate and import back?
Other ways?

Never worked with such a large table and need a little experienced guidance.
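If keeping the table online matters more than raw speed, one more option is deleting in small batches so each transaction stays short and orders keep flowing - a SQL 2000-era sketch (the table name and WHERE clause are placeholders):

SET ROWCOUNT 10000        -- cap each DELETE at 10,000 rows

WHILE 1 = 1
BEGIN
    DELETE FROM dbo.BigLookup
    WHERE IsObsolete = 1

    IF @@ROWCOUNT = 0 BREAK   -- nothing left to delete
END

SET ROWCOUNT 0            -- always reset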

Thanks for the help

View 1 Replies View Related

Does Sql Server Slow Down If It Has Lots Of Db's?

May 7, 2008

Hi,

You know how there are lots of hosted applications out there, many of them provide you with your own database (not shared).

1. If a server has 1K databases on it, will this slow down the server just due to the # of databases? (each user has their own database, but they won't be accessing it that much really).

A seperate database is required for security purposes usually.

2. Can you still open up EM with 1K+ databases?

View 3 Replies View Related

Lots Of Transactions, But No Activity

Jan 30, 2007

Hi all,

I have a strange situation. Performance monitor shows that SQLServer:Transactions Transactions value is 125, but SQL Server Profiler does not show any activity.

I ran sp_who2 and I have a bunch of processes with SUSPENDED status. Would those be counted in Performance monitor?

When does a transaction get suspended?

Thanks.

Alec

View 1 Replies View Related

Trying To Restore Having Lots Of Troubles....

Sep 20, 2007

I used MS SQL Server Management Studio Express to back up my SQL Server 2005 database called "PMDB" on a server in my office to a jump drive. Then, I removed the jump drive from the server and plugged it into my laptop. I then tried using MS SQL Server Management Studio Express to restore "PMDB" to my laptop SQL Server 2005 Express Edition (instance "Primavera") to a database called "pmdb$primavera". I've had several issues:

1. pmdb$primavera is now "locked up" "in the middle of a restore". I can't seem to unlock it, even by rebooting my laptop.
2. When I execute the restore (I've used several command sequences...), I get a message that there are additional "families" the restore is waiting for. I don't understand that.
3. I tried restoring to a completely new database (in the same instance...), but got the "family" problem. The query just hangs forever. When I cancel it, it hangs the database (see #1).
4. I now have two databases in the "Primavera" instance on my laptop in the "restoring" state. I can't get to either one of them.

I'm desperate. I have a customer presentation on Thursday where I must have the database working. Help!

Here's one of the queries I executed.... though I tweaked items here and there for 3 and 4 above.


restore database pmdb
from disk = 'f:\databasebackup\PMDB-BAK.BAK'
with recovery,
move 'pmdb_Dat' to 'C:\Program Files\MSSQL\Primavera\MSSQL.1\MSSQL\DATA\pmdb$primavera_DAT.MDF',
move 'pmdb_log' to 'C:\Program Files\MSSQL\Primavera\MSSQL.1\MSSQL\DATA\pmdb$primavera_LOG.LDF'
go



The issue seems to be #2 above each time.

View 4 Replies View Related

SS2005 Log Has Lots Of These Messages.

Nov 14, 2007

Every time a transaction log is dumped we see the following message in the log file:

BackupDiskFile::OpenMedia: Backup device '\\s-sqlbkups-1\g$\myserver\log\my_database\mydatabase_backup_200711071430.trn' failed to open. Operating system error 2 (The system cannot find the file specified.).


Source spid139
Message
Error: 18204, Severity: 16, State: 1.



And yet, the actual log dump appears fine and the file is found on the share. The dump is done with a maintenance plan.

Any ideas?

View 2 Replies View Related

Query Using Mathematical Function Of Values From 2 Tables Has A Performance Problem

Aug 2, 2007

When I am executing a query that uses a mathematical function on values from 2 tables the query takes much longer than the same query that uses values from 1 table, even though the join remains the same.

Why is this happening?
Is there a way to bypass this problem?

Long query (values from 2 tables):
SELECT
    MAX( (SIGN(attribute.keyValue - (-2027587559)) * SIGN(attribute.keyValue - (-2027587559)) - 1) * -1 * data.val ) AS maxVal
FROM
    DATA data,
    ATTR attribute,
    TREE_ELEMENT elm,
    TREE_ELEMENT subject
WHERE
    data.elmId = elm.id
    AND attribute.keyValue IN (345647222, 1569153803, 1569146115, -2027587559)
    AND subject.id = elm.subjectId
    AND subject.name = 'test'


Short query (values from 1 table):
SELECT
    MAX( (SIGN(data.keyValue - (-2027587559)) * SIGN(data.keyValue - (-2027587559)) - 1) * -1 * data.val ) AS maxVal
FROM
    DATA data,
    ATTR attribute,
    TREE_ELEMENT elm,
    TREE_ELEMENT subject
WHERE
    data.elmId = elm.id
    AND attribute.keyValue IN (345647222, 1569153803, 1569146115, -2027587559)
    AND subject.id = elm.subjectId
    AND subject.name = 'test'
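Incidentally, the SIGN arithmetic reduces to "val when keyValue equals the constant, else 0", which a CASE states directly; rewriting it this way (a sketch) may let the optimizer treat the expression more cheaply:

SELECT
    MAX(CASE WHEN attribute.keyValue = -2027587559
             THEN data.val ELSE 0 END) AS maxVal
FROM
    DATA data,
    ATTR attribute,
    TREE_ELEMENT elm,
    TREE_ELEMENT subject
WHERE
    data.elmId = elm.id
    AND attribute.keyValue IN (345647222, 1569153803, 1569146115, -2027587559)
    AND subject.id = elm.subjectId
    AND subject.name = 'test'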


Long query execution plan:
Execution Tree
--------------
Stream Aggregate ( DEFINE: ( [Expr1004]=MAX ( ( sign ( [attribute].[keyValue]--2027587559 ) *sign ( [attribute].[keyValue]--2027587559 ) -1 ) * ( -1*[data].[val] ) ) ) )
|--Nested Loops ( Inner Join )
|--Hash Match ( Inner Join, HASH: ( [elm].[id] ) = ( [data].[elmId] ) , RESIDUAL: ( [data].[elmId]=[elm].[id] ) )
| |--Nested Loops ( Inner Join, OUTER REFERENCES: ( [subject].[id] ) )
| | |--Index Seek ( OBJECT: ( [TREE_ELEMENT].[TREE_ELEMENT_NAME_IDX] AS [subject] ) ,
SEEK: ( [subject].[name]='test' ) ORDERED FORWARD )
| | |--Index Seek ( OBJECT: ( [TREE_ELEMENT].[TREE_ELEMENT_APP_ID_IDX] AS [elm] ) ,
SEEK: ( [elm].[subjectId]=[subject].[id] ) ORDERED FORWARD )
| |--Clustered Index Scan ( OBJECT: ( [DATA].[PK__DATAS_SAMPL__485B9C89] AS [data] ) )
|--Table Spool
|--Index Seek ( OBJECT: ( [ATTR].[TREE_Z_IDX] AS [attribute] ) ,
SEEK: ( [attribute].[keyValue]=-2027587559 OR [attribute].[keyValue]=345647222 OR [attribute].[keyValue]=1569146115 OR [attribute].[keyValue]=1569153803 ) ORDERED FORWARD )



Short query execution plan:
Execution Tree
--------------
Stream Aggregate ( DEFINE: ( [Expr1004]=MAX ( [partialagg1005] ) ) )
|--Nested Loops ( Inner Join )
|--Stream Aggregate ( DEFINE: ( [partialagg1005]=MAX ( ( sign ( [data].[keyValue]--2027587559 ) *sign ( [data].[keyValue]--2027587559 ) -1 ) * ( -1*[data].[val] ) ) ) )
| |--Hash Match ( Inner Join, HASH: ( [elm].[id] ) = ( [data].[elmId] ) , RESIDUAL: ( [data].[elmId]=[elm].[id] ) )
| |--Nested Loops ( Inner Join, OUTER REFERENCES: ( [subject].[id] ) )
| | |--Index Seek ( OBJECT: ( [TREE_ELEMENT].[TREE_ELEMENT_NAME_IDX] AS [subject] ) ,
SEEK: ( [subject].[name]='test' ) ORDERED FORWARD )
| | |--Index Seek ( OBJECT: ( [TREE_ELEMENT].[TREE_ELEMENT_APP_ID_IDX] AS [elm] ) ,
SEEK: ( [elm].[subjectId]=[subject].[id] ) ORDERED FORWARD )
| |--Clustered Index Scan ( OBJECT: ( [DATA].[PK__DATAS_SAMPL__485B9C89] AS [data] ) )
|--Index Seek ( OBJECT: ( [ATTR].[TREE_Z_IDX] AS [attribute] ) ,
SEEK: ( [attribute].[keyValue]=-2027587559 OR [attribute].[keyValue]=345647222 OR [attribute].[keyValue]=1569146115 OR [attribute].[keyValue]=1569153803 ) ORDERED FORWARD )

View 1 Replies View Related

Best Way To Populate A Page With Lots Of SQL Data

Sep 8, 2006

I have a page that has about 8 dropdown boxes that need to be populated from sql tables. What is the best way to populate these boxes? Here is how I have it now:

conn = New SqlConnection(ConfigurationManager.AppSettings("SQLString"))

''''''''''''' Fill in DropDownList Status '''''''''''''''''''
strSelect = "SELECT * FROM Requests_Status"
cmdSelect = New SqlCommand(strSelect, conn)
conn.Open()
dtrSearch = cmdSelect.ExecuteReader()
ddlRequestStatus.DataSource = dtrSearch
ddlRequestStatus.DataTextField = "RequestStatusName"
ddlRequestStatus.DataValueField = "RequestStatus"
ddlRequestStatus.DataBind()
ddlRequestStatus.Items.Insert(0, New ListItem("-- Select Below --", -1))
cmdSelect.Cancel()
dtrSearch.Close()
conn.Close()

''''''''''''' Fill in DropDownList Container '''''''''''''''''''
strSelect = "SELECT * FROM Containers"
cmdSelect = New SqlCommand(strSelect, conn)
conn.Open()
dtrSearch = cmdSelect.ExecuteReader()
ddlContainer.DataSource = dtrSearch
ddlContainer.DataTextField = "ContainerName"
ddlContainer.DataValueField = "ContainerID"
ddlContainer.DataBind()
ddlContainer.Items.Insert(0, New ListItem("-- Select Below --", -1))
cmdSelect.Cancel()
dtrSearch.Close()
conn.Close()
'''''''''''''''''''''''''''''

I then repeat the same commands as above for the other 6 dropdowns. This seems like a bad way to have to do all this.

Thanks
Craig

View 3 Replies View Related

Easiest Way To Update Lots Of Records

Apr 26, 2007

I have a database where several thousand records have NULL in a binary field. I want to change all the NULLs to false. I have Visual Studio 5, and the database is a SQL Server 5 database on a remote server. What is the easiest way to do this? Is there a query I can run that will set all ReNew to false where ReNew is NULL? This is a live database, so I want to get it right. I can't afford to mess it up.

Diane
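Assuming the column is a bit (where 0 is false), the query is a one-liner; wrapping it in a transaction lets you sanity-check the affected row count on a live database before committing:

BEGIN TRAN

UPDATE dbo.Members          -- table name is a placeholder
SET ReNew = 0               -- 0 = false in a bit column
WHERE ReNew IS NULL

-- check @@ROWCOUNT / spot-check the data, then:
COMMIT   -- or ROLLBACK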

View 2 Replies View Related

Lots Of Txt Files Has To Be Loaded (hints??)

Aug 18, 2006

I have a DTS package that does txt -> SQL Server.
I have 200 txt files with the same exact format.

I just want to know if I can write an SP, passing a parameter, that loads these txt files, because I don't want to create 200 packages or 200 sources to load 200 txt files.

Say:
exec SP_loadTXT txt1

Or should I use bulk insert?

Any approaches are fine. Any suggestions are fine too.
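BULK INSERT wants a literal file path, so the usual trick is building the statement dynamically inside the proc. A sketch (the folder, target table and delimiters are assumptions):

CREATE PROCEDURE SP_loadTXT
    @FileName varchar(255)
AS
BEGIN
    DECLARE @sql varchar(1000)

    -- build the BULK INSERT statement around the file name parameter
    SET @sql = 'BULK INSERT dbo.ImportTarget FROM ''C:\imports\' + @FileName + ''''
             + ' WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'')'

    EXEC (@sql)
END
GO

-- usage, once per file:
-- EXEC SP_loadTXT 'txt1.txt'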

View 14 Replies View Related






