I'm having trouble doing backups of several databases (on a single server) to one device (a disk file).
I created a script with each DUMP statement and when I run it from the query window, it works
just fine. But when I create a stored procedure out of the same script, I get errors because the
second DUMP statement is trying to access the device that is already being written to by the
first DUMP statement.
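For illustration, the script is essentially a series of DUMP statements appending to the one device, something like this (the database and device names are just examples):

DUMP DATABASE db1 TO backupdev WITH INIT      -- first dump initializes the device
DUMP DATABASE db2 TO backupdev WITH NOINIT    -- the rest append to it
DUMP DATABASE db3 TO backupdev WITH NOINIT
DUMP DATABASE db4 TO backupdev WITH NOINIT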
If I split them apart into different stored procedures, then they seem to overwrite each other and
I end up with only the last database backed up.
I'm trying to put this into a task and that is why I need to put it into stored procedures.
Is there a synchronous/asynchronous setting or parameter that I should be using? For now, I'm
just dumping each database to a separate device, but this is a little cumbersome, since I have four
databases to back up for each day of the week, which gives me a total of 28 separate
devices.
I'm sure there is a better way of doing this. Does anyone have any suggestions? Thank you in
advance.
Suppose I am appending to my transaction log dump device every half hour, thus adding 48 log dumps per day. How can I purge my transaction dump device so that it only keeps the last week's worth of these logs? I do not want to issue an INIT in the command, because this will wipe it completely. I noticed the EXPIREDATE and RETAINDAYS parameters for the DUMP command. Can I use these to selectively purge the backup device, or will they allow the device to be wiped clean as well?
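The sort of command I have in mind is along these lines (the database and device names are just examples):

-- appended every half hour; does RETAINDAYS only protect the sets from an INIT,
-- or can it actually age out the older dumps?
DUMP TRANSACTION mydb TO logdumpdev WITH NOINIT, RETAINDAYS = 7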
When you create a dump device and then add backups with the NOINIT argument, the space taken up on the device grows. Is there a way of removing individual backups from the device without (I guess) purging the whole lot by running WITH INIT? I can get a list of the backup sets by running RESTORE WITH HEADERONLY, but I can't see how to remove individual ones.
I'm trying to capture the info that gets passed back from the 'load headeronly from logbackup' command in a script that I'm writing. The MS Transact-SQL reference specifies that this info is passed back in a table, with a row for each dump on the given dump device. I'm looking for the SQL construct that will allow me to capture this info as variables in my script for processing. I've tried to create a temp table that mirrors the info returned and insert into this table by executing the load command, but syntactically it is not correct (with that syntax the compiler assumes that the load command is a stored procedure and can't locate it). Specifically, I'm trying to capture info from the DUMP TRANSACTION WITH NO_TRUNCATE command. It appears the dump info does not go to the msdb..sysbackuphistory table when the NO_TRUNCATE option is used; this dump info can only be obtained through the LOAD HEADERONLY command. Any thoughts on how to code the SQL to capture this info?
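Roughly what I'm after is something like this (the temp table is abbreviated and the column names are my guesses; it would need one column for every field the command returns, and 'logbackup' is just an example device name):

CREATE TABLE #dumpheader (
    dumpname        varchar(30)  NULL,   -- guessed columns; would have to mirror
    dumpdescription varchar(255) NULL,   -- the full LOAD HEADERONLY result set
    dumptype        smallint     NULL
)
INSERT INTO #dumpheader
EXEC ('LOAD HEADERONLY FROM logbackup')  -- this is the part I can't get to compile as a plain statement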
I issued the following statement on my database because my log file showed 100% full. However, after running this command (and running it again after expanding the size of both my database log and data devices to almost double their previous value), it still shows the log as being 100% full.
DUMP TRANSACTION <database> WITH TRUNCATE_ONLY
(I also tried the NO_LOG option.)
So can anyone tell me what's going wrong here? I should note that I am still able to enter data into the database without any errors.
Actually, my network dept. has changed the backup file server's IP address, and I'm now having problems taking backups. I have around 85 backups that run every day. What I'm doing now is running every single command to drop each backup device and then add it again, but it's taking ages to do.
Is there any simple script that just updates the device path/folder?
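Something like this rough sketch is the kind of thing I'm hoping for (the old/new server addresses and the path pattern are just placeholders):

-- generate a DROP + ADD pair for every dump device that points at the old server,
-- then run the generated statements
SELECT 'EXEC sp_dropdevice ''' + name + ''''
       + CHAR(13) + CHAR(10)
       + 'EXEC sp_addumpdevice ''disk'', ''' + name + ''', '''
       + REPLACE(phyname, '\\10.0.0.1\', '\\10.0.0.2\') + ''''
FROM master..sysdevices
WHERE phyname LIKE '\\10.0.0.1\%'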
If a SELECT is done on a column whose data type is nvarchar(16) and which contains only numerals (UPC numbers), the SELECT does not return the record.
1. Query with numerals in the nvarchar column works as long as multiple records are returned (LIKE '012%')
2. Numeric columns (INT is the only type tested) work as expected
3. String columns with alpha data work as expected
4. Problem only exists when running in the Device Emulator and/or on the actual device
5. Same test in the desktop app runs as expected
6. Windows Mobile 6, Vista Ultimate
7. Same results when connecting to the device from SSMS
8. SQL Servers comes on
Previous thread discussion of this problem (I thought that Parameters corrected problem, but not in all cases???)
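To illustrate the behaviour described above (the table and column names are just examples from my test, and the UPC value is made up):

-- on the device/emulator this returns nothing, even though the row exists
SELECT * FROM Items WHERE Upc = N'012345678905'
-- but this returns the matching rows as expected
SELECT * FROM Items WHERE Upc LIKE N'012%'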
Please provide advice on how to upgrade from v6.5 to v7.0 where the v6.5 databases are sharing the same devices. What do I need to do to put the DBs on their own device/file? Should I upgrade first, or fix this on the 6.5 side? Does SQL 7.0 rebuild the DB in its own file during the upgrade? Thank you.
I have created a single full-text index on col2 & col3. Suppose I want to search for col2='engine' and col3='toyota'; I write the query as:
SELECT TBL.col2, TBL.col3
FROM TBL
INNER JOIN CONTAINSTABLE(TBL, col2, 'engine') TBL1 ON TBL.col1 = TBL1.[key]
INNER JOIN CONTAINSTABLE(TBL, col3, 'toyota') TBL2 ON TBL.col1 = TBL2.[key]
Everything works well if the database is small, but now I have 20 million records in my database. Taking an example: there are 5 million records with col2='engine' and only 1 record with col3='toyota', and it takes a substantial time to find that 1 record.
I was thinking I could address this issue if I merged both columns into a single column, but I cannot figure out what format to save it in so that I can use a query to extract the correct information. For example, I was thinking of concatenating both fields like col4 = ABengineBA + ABBToyotaBBA, and in the search I use:

SELECT TBL.col4
FROM TBL
INNER JOIN CONTAINSTABLE(TBL, col4, ' "ABengineBA" AND "ABBToyotaBBA"') TBL1 ON TBL.col1 = TBL1.[key]

Result = 1 row
But it doesn't work in the following scenario: col4 = ABengineBA + ABBCorola ToyotaBBA

SELECT TBL.col4
FROM TBL
INNER JOIN CONTAINSTABLE(TBL, col4, ' "ABengineBA" AND "ABB*ToyotaBBA"') TBL1 ON TBL.col1 = TBL1.[key]

Result = 0 rows. Any idea how I can write the second query to get the result?
I have a database in development in SQL Server 6.5 that needs to be occasionally deleted and rebuilt from a script when table structures are changed. I found that when very complex queries were performed, the 2 MB default size of tempdb filled up and returned errors, so I went to the Enterprise Manager to expand tempdb, learned that I had to first expand a device to expand tempdb into, and foolishly chose to expand tempdb into the same device space used by my application, instead of into one of the system databases. Now when I try to delete the device in preparation for its rebuild, the Enterprise Manager responds with an error message saying the device can't be deleted because it contains system tables. Is there any way to get the expanded portion of tempdb out of my application device so that the device can be deleted, without reinstalling SQL Server?
Hi! I have a general SQL CE v3.5 design question related to table/file layout. I have a system that has multiple tables that fall into categories of data access. The 3 categories of data access are:
1 is for configuration-related data. There is one application that will read/write to the data, and a second application that will read the data on startup.
1 is for high-performance temporal storage of data. The data objects are all the same type, but they are our own custom object and not just simple types.
1 is for logging, where the data will be permanent - unless the configured size/recycling settings cause a resize or cleanup. There will be one application writing a lot [potentially] of data depending on log settings, and another application searching/reading sections of data.
When working with data and designing the layout, I like to approach things from a data-centric mindset, because this seems to result in a better performing system. That said, I am thinking about using 3 individual SDF files for the above data access scenarios - as opposed to a single SDF with multiple tables. I'm thinking this would provide better performance in SQL CE because the query engine will not have a lot of different types of queries going against the same database file. For instance, the temporal storage is basically reading/writing/deleting various amounts of data, and this is different from the logging, where the log can grow pretty large - definitely bigger than the default 128 MB. So it seems logical to manage them separately.
I would greatly appreciate any suggestions from the SQL CE experts with regard to my approach. If there are any tips/tricks with respect to different data access scenarios - taking into account performance, type of data access, etc. - I would love to take a look at that.
Greetings, I was wondering: is it better to have multiple small stored procedures or one large stored procedure? For example, 100 parameters that need to be inserted into or deleted from a table - is it better to break it up into multiple small stored procs or have 1 large stored proc? Thanks.
I'm doing a BCP of a large table (37 million rows). On a single-CPU server with SQL 7 SP3 and 512 MB of RAM, this job runs in about 3 hours. On an 8-way server with 4 GB of RAM and SQL 7 Enterprise, this job has run for 12 hours and is only a third done. The single-CPU machine is running one RAID 5 set, while the 8-way server is running 4 RAID 5 sets with the database spread out over two of them.
Is there something obvious that would make the single-CPU box run this much faster?
Basically I've been using Visual Studio 2005 for a few weeks now, moving a Pocket PC project from 2003 to 2005. Every time until today, when I hit the Start Debugging button, the project would rebuild and deploy to my Pocket PC, allowing me to debug etc., but now I get:
The remote connection to the device has been lost.
Please verify the device connection and restart debugging.
I used to get this problem in VS2003 sometimes, and just like in the numerous posts on different sites that I've looked at, the problem eventually goes away and I'm none the wiser. One guy said he found that if he went to bed, the problem was resolved when he came back!
My PDA running Windows 2003 2nd Edition is directly connected to my PC via a USB port. I've rebooted my PC and done a soft reset on the PDA but it didn't help. I'm using ActiveSync 4.1.
backup database web to disk = 'c:\inetpub\wwwroot\backup\mybakup.bak' with format
I'm getting an error like:
Server: Msg 3201, Level 16, State 1, Line 1
Cannot open backup device 'c:\inetpub\wwwroot\backup\mybakup.bak'. Device error or device off-line. See the SQL Server error log for more details.
Server: Msg 3013, Level 16, State 1, Line 1
BACKUP DATABASE is terminating abnormally.
This error is generated only when I'm trying to access folders within "wwwroot", not any other folders; the command even runs successfully for the "wwwroot" folder itself!
Hi, I am totally new to databases. As a project I have to design a database of users. They have to register first, like on any site, so I used stored procs and made entries to the database using an INSERT command; it's working for now. Next, every user will search for and add other users in the database, so every user will have a contact list, and I have no idea how to implement this. So far I created a table 'UserAccount' (sketched below) with columns UserName varchar(50), Password varchar(50), EmailID varchar(100), DateOfJoining datetime, and UserID int - UserID is unique per user, I enabled automatic increment, and it is the primary key. So now every user must have a list of other UserIDs as a contact list. Any help or ideas would be great, since I have no clue how to put multiple values for each row. I didn't even know how to search for a solution to this problem. I am sorry if this is posted somewhere else. THANK YOU! If it helps, I am using SQL Server Express Edition and I am accessing it using ASP.NET/C#.
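For reference, the table I have so far looks roughly like this:

CREATE TABLE UserAccount (
    UserID        int IDENTITY(1,1) PRIMARY KEY,  -- automatic increment, unique per user
    UserName      varchar(50),
    Password      varchar(50),
    EmailID       varchar(100),
    DateOfJoining datetime
)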
I'm having a hard time deciding what approach I should take. The scenario is this: I have developed various systems (inventory, HR, accounting, etc.). All these systems are (and should be) tightly integrated with one another. At present, for all these systems, I've used a single DB, prefixing the tables with the system's name (e.g. Inventory.Items).
My question is: did I do the right (and practical) thing? Or should I create a DB for each system to organize them? The problem with multiple DBs is that some systems use the other systems' tables. For example, if I created a separate DB for accounting, a separate DB for inventory, and another for HR, how am I going to relate inventory's and HR's accounts to the accounting DB's table? I want a single instance of each table; I don't want to create another account table for inventory or HR just so I can enforce integrity. And with different DBs, is there a performance impact?
Or is there another way? My concern is performance and manageability. Please help. Thanks!
I am looking for a simple way to pass multiple values in one single parameter to my simple stored procedure. Let's say, for example, I have a column called RoomNumber and the value data type is INT. Here is my stored procedure:

CREATE PROC ROOMVACANCY
    @RoomNumber int
AS
BEGIN
    SELECT vacancy, roomnumber
    FROM hoteldb
    WHERE Vacancy IN (@RoomNumber)
END

The roomnumber column has 100 records. I want to be able to select more than one value when I execute this stored procedure. How do I do that in a simple way?
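At the moment I can only call it with one value at a time, for example (the room number is made up):

EXEC ROOMVACANCY @RoomNumber = 101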
I have 2 systems that will send data to each other. Each system will originate a particular set of messages, there will be no overlap. This scenario has transaction data going to a reporting system and management operations going back to the transactional system. It is semi-related data.
I have two patterns in mind, a combined stream of all messages or 2 streams of segregated messages.
A. A single service on each instance would originate a set of messages, process the responses, and receive the other instance's messages. The responses and original message from the other system would mix on the same Q. The activation procs would have to handle all message types. The same infrastructure (message types, contracts, Qs, activation stored procs) would be created on both systems. Although, distinct service names and ports would be used on each instance.
B. Two services and two Qs. 1 service would originate a set of messages, process the responses. The other service would process messages from the other system. The responses and original message from the other system would be on the separate Qs. There would be 2 infrastructures created. There could be separate activation stored procs.
There is just one message type and message validation is only WELL_FORMED_XML.
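For concreteness, pattern A on one instance would look roughly like this (all object names are placeholders and the activation proc is only a stub; a distinct service name would be used on the other instance):

CREATE MESSAGE TYPE [//example/Msg] VALIDATION = WELL_FORMED_XML;
GO
CREATE CONTRACT [//example/Contract] ([//example/Msg] SENT BY ANY);
GO
-- single activation proc that would have to RECEIVE and branch on message/response type
CREATE PROCEDURE dbo.ProcessCombinedQueue
AS
BEGIN
    RETURN;  -- RECEIVE/dispatch logic omitted in this sketch
END;
GO
CREATE QUEUE dbo.CombinedQueue
    WITH ACTIVATION (
        STATUS = ON,
        PROCEDURE_NAME = dbo.ProcessCombinedQueue,
        MAX_QUEUE_READERS = 1,
        EXECUTE AS OWNER);
GO
CREATE SERVICE [//example/ServiceA] ON QUEUE dbo.CombinedQueue ([//example/Contract]);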
Which pattern is better for management and performance? Should I create 1 service or 2 on each instance? Should either way work about as well as the other? 2 services are twice as complex to set up; separation is not necessary, but I like the idea. 1 service will send many more messages (>10x) than the other. Any thoughts?
There will be one UniqueID for each row. We'll get the UniqueID, PK1, and PK2 in a file. Important: we need to generate the Sequence_Id depending on the number of Issue_dates or Issue_amounts or Issue_Categories or Issue_Rejects, as in the above table.
Can we do this without using cursors? This is going to be a one-time process.
I have a temp table variable into which I'm moving some columns, like below:
id value type1 type2
0 ab type1val1 type2val1
0 cd type1val1 type2val1
0 ef type1val1 type2val1
1 ab type1val2 type2val2
1 cd type1val2 type2val2
1 ef type1val2 type2val2

What I want to do is group these by their id and get the following output:

ab,cd,ef type1val1 type2val1
ab,cd,ef type1val2 type2val2
The grouped values need to be separated by commas.
What I'm doing currently: I'm using a temp variable to collect all these values, but I am unable to coalesce them and get the desired output.
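Roughly what I'm trying at the moment (the table variable shape matches the sample above; @vals is just my working variable):

DECLARE @tmp TABLE (id int, value varchar(10), type1 varchar(20), type2 varchar(20))
-- ... rows inserted as in the sample above ...
DECLARE @vals varchar(500)
SELECT @vals = COALESCE(@vals + ',', '') + value
FROM @tmp
WHERE id = 0
-- @vals comes out as 'ab,cd,ef' for id = 0, but I can't see how to get this
-- per id, together with the type1/type2 columns, in one grouped result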
Hi, I am wondering if anyone has any examples of how to run multiple SQL statements from a file using .NET? I want to automatically install any stored procedures or scripts from a single file on the server when my web application runs for the first time. Thanks for your help in advance!
Hi. I want to return multiple rows as a single row in different columns. The query looks like this:

Select ID, TYPE, VALUE From myTable Where filtercondition = 1

and it returns something like this:

ID TYPE VALUE
1 type1 12
1 type2 15
2 type1 16
2 type2 19

Each ID will have the same number of types, and each type for each ID might have a different value. So if there are only two types, then each ID will have two types. Now I want to write the query in such a way that it returns:

ID TYPE1 TYPE2 VALUE1 VALUE2
1 type1 type2 12 15
2 type1 type2 16 19

TYPE1, TYPE2, VALUE1, and VALUE2 are all dynamic. Can someone help me please. Thank you.
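If the types were fixed, a cross-tab along these lines would give the shape I'm after (using myTable from above), but in my case the types and values are dynamic:

SELECT ID,
       MAX(CASE WHEN TYPE = 'type1' THEN TYPE  END) AS TYPE1,
       MAX(CASE WHEN TYPE = 'type2' THEN TYPE  END) AS TYPE2,
       MAX(CASE WHEN TYPE = 'type1' THEN VALUE END) AS VALUE1,
       MAX(CASE WHEN TYPE = 'type2' THEN VALUE END) AS VALUE2
FROM myTable
WHERE filtercondition = 1
GROUP BY ID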