Taking Backup
Jul 10, 2007
Just as we export a database in DB2 with db2move databasename export -u username -p password, how do we export a database in SQL Server?
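There is no single db2move equivalent in SQL Server, but the closest built-in options are a full database backup for the whole database and bcp for exporting table data. A minimal sketch, where the database, table, server, and login names are placeholders:
-- Full backup of the whole database to a file
BACKUP DATABASE MyDatabase TO DISK = 'C:\Backups\MyDatabase.bak' WITH INIT
-- Export one table's data to a flat file from the command line:
-- bcp MyDatabase.dbo.MyTable out C:\Export\MyTable.dat -S myserver -U sqluser -P sqlpassword -n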
Hi All
I have a serious problem with our SQL Server backup that I need some help with.
Basically it has started taking ages (48 hrs +) when it should only take about 4 hrs. The database is only 380 GB, and up until Monday our backups had not been completing. When I check Activity Monitor, the BACKUP DATABASE process is suspended with a huge wait time and the wait type is ASYNC_IO_COMPLETION.
I am not sure how to solve this, but I am going to have to!
So if anyone has any ideas please help me! If you need any other info please let me know.
Thanks
Gopher
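One way to narrow this down is to watch the backup's progress and waits while it runs. A minimal sketch using sys.dm_exec_requests (SQL Server 2005 and later); the filter on the command column is an assumption about how the backup shows up on your server:
-- How far the backup has got and what it is waiting on
SELECT session_id, command, percent_complete, wait_type, wait_time, estimated_completion_time
FROM sys.dm_exec_requests
WHERE command LIKE 'BACKUP%'
If percent_complete barely moves while the wait time on ASYNC_IO_COMPLETION keeps climbing, the bottleneck is most likely the disk or network path the backup is writing to rather than SQL Server itself.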
How do I generate a SQL script (a database backup file) that includes INSERT statements containing the data from all the tables in the database?
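SQL Server does not ship a one-line command for this, but for a single small table you can build the INSERT statements with string concatenation. A minimal sketch against a hypothetical table dbo.Customers(Id INT, Name VARCHAR(50)); a full scripter would need to loop over every table and handle every data type:
-- Build one INSERT statement per row of the (hypothetical) dbo.Customers table
SELECT 'INSERT INTO dbo.Customers (Id, Name) VALUES ('
       + CAST(Id AS VARCHAR(12)) + ', '''
       + REPLACE(Name, '''', '''''') + ''');'
FROM dbo.Customers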
Hi,
Can anyone write a stored procedure for me that backs up the database to a certain location on the hard disk (for example D:MyProjectBackup)?
Thanks.
Regards
Kashif Chotu
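A minimal sketch of such a procedure; the procedure name, folder, and file-name format are placeholders, and it simply wraps BACKUP DATABASE with a date-stamped file name:
CREATE PROCEDURE dbo.usp_BackupDatabase
    @DatabaseName SYSNAME,
    @BackupFolder VARCHAR(256)      -- e.g. 'D:\Backups\' (must end with a backslash)
AS
BEGIN
    DECLARE @FileName VARCHAR(512)
    -- Build a file name like D:\Backups\MyDb_20070710.bak
    SET @FileName = @BackupFolder + @DatabaseName + '_'
                  + CONVERT(VARCHAR(8), GETDATE(), 112) + '.bak'
    BACKUP DATABASE @DatabaseName TO DISK = @FileName WITH INIT
END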
Hi,
I want to take a backup of the solution deployed on the reporting server. Do I have to back up the individual reports, or can I back up the whole solution?
Can you please tell me how to take the backup?
Thanks in advance,
Siddharth
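If you mean the reports that have been deployed to the report server, they live in the Reporting Services catalog database, so backing up that database (plus the encryption key) effectively backs up the whole solution; the Visual Studio project files are separate and just need a file copy. A hedged sketch, assuming the default catalog names ReportServer and ReportServerTempDB and placeholder paths:
-- Back up the Reporting Services catalog databases
BACKUP DATABASE ReportServer TO DISK = 'D:\Backups\ReportServer.bak' WITH INIT
BACKUP DATABASE ReportServerTempDB TO DISK = 'D:\Backups\ReportServerTempDB.bak' WITH INIT
-- Also back up the encryption key from the command line, for example:
-- rskeymgmt -e -f D:\Backups\rs_key.snk -p StrongPassword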
Hi guys.
I am having trouble with the time it takes to back up my database.
My database is around 50 GB and the backup is taking around 5 hrs.
Is there any way to reduce the 5 hr backup time to 3 hrs or less?
Thanks in advance
MAK
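If the bottleneck is the target disk, one common way to cut the elapsed time is to stripe the backup across several files on different physical drives so the writes run in parallel. A minimal sketch; the database name and paths are placeholders:
-- Stripe the backup across three files on separate drives
BACKUP DATABASE MyDatabase
TO DISK = 'D:\Backups\MyDatabase_1.bak',
   DISK = 'E:\Backups\MyDatabase_2.bak',
   DISK = 'F:\Backups\MyDatabase_3.bak'
WITH INIT, STATS = 10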
Below is a query that takes an automated backup of all user databases, which eliminates the need for manual backups. To fully automate this, just create a SQL Agent job, put this query in the job, and forget about taking any manual DB backups.
DECLARE @name VARCHAR(50) -- database name
DECLARE @path VARCHAR(256) -- path for backup files
DECLARE @fileName VARCHAR(256) -- filename for backup
DECLARE @fileDate VARCHAR(20) -- used for file name
SET @path = 'C:DB_BKPUP'
[Code] .....
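The elided part of the script is not reproduced above, but scripts of this kind typically continue with a cursor over the user databases that builds a dated file name and runs BACKUP DATABASE for each one. A hedged sketch of that pattern using the variables declared above (it assumes @path ends with a backslash and that you are on SQL Server 2005 or later for sys.databases; on SQL 2000 use master..sysdatabases):
SET @fileDate = CONVERT(VARCHAR(20), GETDATE(), 112)    -- yyyymmdd stamp for the file name

DECLARE db_cursor CURSOR FOR
    SELECT name FROM sys.databases
    WHERE name NOT IN ('master', 'model', 'msdb', 'tempdb')   -- user databases only

OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @fileName = @path + @name + '_' + @fileDate + '.BAK'
    BACKUP DATABASE @name TO DISK = @fileName
    FETCH NEXT FROM db_cursor INTO @name
END
CLOSE db_cursor
DEALLOCATE db_cursor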
I bought hosting for my site and am using SQL Server 2000 as the back end. The hosting company allows me to connect to my database through Query Analyzer but not from Enterprise Manager, and they charge me for taking a database backup on their server. So I want to know how I can take a database backup from the remote SQL Server 2000 to my local SQL Server 2000. Is there any tool or process by which I can take the backup onto my own computer's SQL Server 2000?
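Since the SQL login that Query Analyzer uses can reach the remote server, one workaround is to pull the schema and data across yourself instead of asking the host for a .bak file: re-create the tables locally (the DTS Import/Export wizard run from your own machine can also do this with the same login), then copy each table's data with bcp. A hedged sketch from the command prompt; the server, login, database, table, and path names are placeholders:
REM Export a table from the remote server to a local file
bcp MyDatabase.dbo.MyTable out C:\Export\MyTable.dat -S remotehost -U sqluser -P sqlpassword -n
REM Load the file into the same table on the local server
bcp MyDatabase.dbo.MyTable in C:\Export\MyTable.dat -S mylocalserver -U sa -P localpassword -n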
Hi All,
My full backups are taking longer than usual on Sundays.
I know this has nothing to do with the SQL Server storage engine or database engine.
I have checked that there are no jobs running at this time, and it is happening across all the servers sharing the SAN.
How can I prove that something else is responsible for this behaviour and not SQL Server?
Are there any counters (PerfMon), tools, or sniffers that can tell me what is causing this?
please help.
Thanks in advance.
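Two things usually settle this kind of argument: the PerfMon physical-disk latency counters (Avg. Disk sec/Read and Avg. Disk sec/Write on the LUNs the backup touches) captured during the Sunday window, and SQL Server's own file-level I/O stall numbers. A minimal sketch of the latter for SQL Server 2005 and later; the values are cumulative since the last restart, so sample them before and after the slow backup and compare:
-- Cumulative I/O stalls per database file; growing io_stall with ordinary read/write
-- counts points at the storage path rather than SQL Server
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms,
       vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id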
Hi all
Is it possible to take a differential backup of the master database if the recovery model is FULL?
I keep getting the stack dump errors below whenever I try to take a full or T-log backup.
2015-11-26 05:18:03.44 spid79 * Exception Code = c0000005 EXCEPTION_ACCESS_VIOLATION
2015-11-26 05:18:03.44 spid79 * Access Violation occurred reading address 00007FFFA6CF9C60
I used a debugger and got the stack trace below.
0:048> kC 1000
Call Site
sqlmin!GetObjOffsets
sqlmin!PerfmonManager::AddInstance
sqlmin!BackupPerfmonCounter::AddInstance
[Code] ....
Hi,
I have SQL Server 2005 running on Windows Server 2003. I am trying to take a backup on a network drive (a NAS box).
I have logged into the Windows Server 2003 machine as ADSadministrator, and the same for SQL Server 2005. I have given ADSadministrator full control on the network drive of the NAS box.
When I try to take a backup, I get the following error:
Operating system error = 5(Access is denied)
Any idea what is wrong?
Thanks in advance.
anirban
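Error 5 on a backup to a network location is usually about the account SQL Server itself runs under, not the account you are logged in with: the backup file is written by the SQL Server service account, so that account needs write permission on the NAS share, and the target should be addressed by UNC path rather than a mapped drive letter. A minimal sketch with placeholder names:
-- The SQL Server service account (not your own login) must have write access to this share
BACKUP DATABASE MyDatabase
TO DISK = '\\nasbox\sqlbackups\MyDatabase.bak'
WITH INIT, STATS = 10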
Hello,
I'm trying to figure out why my transaction log backup is taking up to an hour to complete. I am using the full recovery model with a full database backup every Sunday, differential backups every Tuesday and Thursday, and log backups every 5 minutes. I would have thought the log backups would execute much more quickly because I'm taking them so often.
Here is my backup statement, I'm hoping I've got a wrong option that you can point out to me:
BACKUP LOG [xxxx] TO [LogFilexxxxBackups] WITH NOINIT , NOUNLOAD , NAME = N'xxxx log backup', SKIP , STATS = 10, NOFORMAT
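The options in that statement look harmless (NOINIT just appends to the existing backup set), so the first thing to check is how much log each backup actually contains; a 5-minute log backup that takes an hour usually means a large burst of log activity, a slow backup target, or blocking. A minimal sketch reading the history from msdb, with 'xxxx' standing in for your database name:
-- Size and duration of recent log backups for one database
SELECT TOP 50
       bs.backup_start_date,
       bs.backup_finish_date,
       DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date) AS duration_sec,
       bs.backup_size / 1048576.0 AS backup_size_mb
FROM msdb.dbo.backupset AS bs
WHERE bs.database_name = 'xxxx'
  AND bs.type = 'L'             -- 'L' = transaction log backup
ORDER BY bs.backup_start_date DESC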
Hi there
I'm getting this message on my third automated transaction log backup of the day. Both databases are in full recovery mode, and both were successfully backed up at 01:00. The transaction logs backed up perfectly happily at 01:30 and 05:30, but failed at 09:30.
The only difference between the 05:30 and 09:30 backups is that the log files were shrunk at 08:15 (the databases in question are the ones that sit under ILM2007, and keeping the log files small keeps the system running better).
Is it possible that shrinking the log files causes the database to think that there hasn't been a full database backup?
Thanks
Jane
I'm taking 70-228 this Saturday. Any advice from anyone who has taken it recently?
Thanks in advance,
Joe in Florida
Hello.
I have a query that takes 1.5 seconds to execute but only 150 ms of CPU. The query is quite simple, just one WHERE clause against a clustered index.
SQL Server Execution Times:
CPU time = 156 ms, elapsed time = 1595 ms.
SELECT column1, column3, column4, ..., column10 FROM table WHERE column2 IN (37, 41, 43, 45, 49, 53, 55) ORDER BY column3 DESC
|--Sort(TOP 1000, ORDER BY:([u].[LastActivityDate] DESC))
|--Clustered Index Seek(OBJECT:([MP].[dbo].[__searchtest].[cix___searchtest_] AS [u]), SEEK:([u].[searchparamid]=37 OR [u].[searchparamid]=41 OR [u].[searchparamid]=43 OR [u].[searchparamid]=45 OR [u].[searchparamid]=49 OR [u].[searchparamid]=53 OR [u].[searchparamid]=55 OR [u].[searchparamid]=59) ORDERED FORWARD)
I have tried to rewrite the query to an INNER JOIN instead.
|--Sort(TOP 1000, ORDER BY:([u].[LastActivityDate] DESC))
|--Nested Loops(Inner Join, OUTER REFERENCES:([spal].[number]))
|--Index Seek(OBJECT:([MP].[dbo].[__search_parameters_lookup].[IX___search_parameters_lookup] AS [spal]), SEEK:([spal].[hash]=-1726604993) ORDERED FORWARD)
|--Clustered Index Seek(OBJECT:([MP].[dbo].[__searchtest].[cix___searchtest_] AS [u]), SEEK:([u].[searchparamid]=[spal].[number]) ORDERED FORWARD)
but the query still takes 1.5 seconds.
According to the execution plan, it spends 59% of the time sorting, 14% on the index seek of the __search_parameters_lookup table, and 24% on a clustered index seek of the __searchtest table.
How come it uses so little CPU but still takes 1.5 seconds? It seems to be reading from memory as well, so it shouldn't be an I/O problem.
The index I have on the table is a clustered index on (column2).
Any ideas on how I can improve this? I have tried the DTA, and also a nonclustered index on column3.
If I remove some columns from the SELECT list the query executes a lot faster:
SQL Server Execution Times:
CPU time = 32 ms, elapsed time = 32 ms.
Both the CPU and the elapsed time go down and now appear more normal.
So there seems to be a problem caused by data transfer.
I tried normalizing the table, and when I do that the query executes with 400 ms CPU and 400 ms total, returning exactly the same result. So why does it only spend 400 ms fetching the data when the tables are normalized, but 1500 ms when it is denormalized?
Any ideas?
I am running Microsoft SQL Server 2000 - 8.00.2039
I'm performing an insert, and I not only need to remove the >> characters but also need to take one field and split it into two fields. So in essence:
KAREL>>MONTES
needs to look like
Col1 Col2
Karel Montes
Thanks :)
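A minimal sketch of the split, assuming a hypothetical staging table dbo.Staging with the combined value in a column called FullName and a hypothetical target table dbo.Target; CHARINDEX finds the delimiter and LEFT/SUBSTRING carve out the two parts:
-- Split 'KAREL>>MONTES' into two columns while inserting into the target table
INSERT INTO dbo.Target (Col1, Col2)
SELECT LEFT(FullName, CHARINDEX('>>', FullName) - 1),
       SUBSTRING(FullName, CHARINDEX('>>', FullName) + 2, LEN(FullName))
FROM dbo.Staging
WHERE CHARINDEX('>>', FullName) > 0     -- skip rows without the delimiter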
I have a custom .NET application that uses SQL Server 2000. All users are complaining about performance issues and white-outs while they are using the application. I am almost certain that the SQL Server is the culprit. None of the other components involved in the application show much CPU or memory usage when I check performance in Task Manager.
On the SQL Server box I see that the sqlservr.exe process is taking about 2.8 GB of memory. Is there a way to find out which exact SQL query or process is taking so much memory? I believe there may be a bad SQL process that is stuck and consuming all the memory. Is there a way to find out?
Thanks
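On SQL Server 2000 the quickest first look is the sysprocesses table, which shows per-connection memory, CPU, and I/O so you can see whether one session stands out. A minimal sketch; bear in mind that a large working set for sqlservr.exe is normal, since SQL Server caches data pages and only releases memory under pressure:
-- Per-session resource usage on SQL Server 2000
SELECT spid,
       loginame,
       hostname,
       program_name,
       cmd,
       memusage,        -- procedure cache pages attributed to the session
       cpu,
       physical_io
FROM master..sysprocesses
ORDER BY memusage DESC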
Hi all,
I want to take a backup of my server. There are 40 databases on the server and I have to back up every database every day. Is there any way to take a backup of the whole server at one time?
Regards
I don't know what this is called in the technical world, but you know when you do a search on amazon or play.com and the search results contain the first sentence or so of the item's description rather than the whole thing? How is this done? Thanks, si!
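In the simplest case it is just a truncated version of the description column returned by the search query; full-text search can return highlighted fragments, but a plain substring is usually enough for a result list. A minimal sketch against a hypothetical dbo.Products table with a VARCHAR description column:
-- Return the first 150 characters of the description as a preview snippet
DECLARE @SearchTerm VARCHAR(100)
SET @SearchTerm = 'widget'

SELECT ProductID,
       ProductName,
       LEFT(Description, 150) + '...' AS DescriptionPreview
FROM dbo.Products
WHERE ProductName LIKE '%' + @SearchTerm + '%'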
I have the query below, which returns thousands of records. Can I make it return its result set faster without changing the structure of the database?
SELECT dbo.tblComponent.ComponentID,
       dbo.tblComponent.ComponentName,
       dbo.tblErrorLog.ShortErrorMessage,
       dbo.tblErrorLog.LongErrorMessage,
       dbo.tblErrorLog.LogDate,
       dbo.tblErrorLevel.Description,
       dbo.tblErrorLog.ErrorLogID
FROM dbo.tblErrorLevel
INNER JOIN dbo.tblErrorLog ON dbo.tblErrorLevel.ErrorLevelID = dbo.tblErrorLog.ErrorLevelID
INNER JOIN dbo.tblComponent ON dbo.tblErrorLog.ComponentID = dbo.tblComponent.ComponentID
Thanks.
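Without touching the tables themselves, the two usual levers are indexes on the join columns and not pulling every row to the client at once. A hedged sketch; the index names are made up, and whether they help depends on what indexes already exist:
-- Support the two joins with nonclustered indexes on the foreign key columns
CREATE NONCLUSTERED INDEX IX_tblErrorLog_ErrorLevelID ON dbo.tblErrorLog (ErrorLevelID)
CREATE NONCLUSTERED INDEX IX_tblErrorLog_ComponentID ON dbo.tblErrorLog (ComponentID)
-- If the application only shows recent errors, adding TOP and ORDER BY LogDate DESC
-- to the query also cuts the amount of data sent to the client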
I have a VB application which uses SQL Server as the database and Crystal Reports for reporting. We use a stored procedure to create a report and pass an ID from the VB side to run the stored procedure.
In Boston the report shows up in 4 seconds, but in California it takes 7 minutes.
We have a very good network (T3). Why is it taking more time in California?
Any ideas ?
Hi,
I am running this query and it is taking over 3 minutes.
"select * from table1 where CONVERT(varchar(10),dated,5) = '13-09-01' "
Table1 has a column called dated which is datetime datatype.
Any suggestions on how I can optimize this query? I tried a nonclustered index on the dated column and the time came down to less than 3 minutes, but it is still more than 2 minutes.
TIA.
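Wrapping the column in CONVERT prevents SQL Server from seeking on an index over dated, because the expression has to be evaluated for every row. Rewriting the filter as a range on the raw datetime column keeps it sargable; style 5 is dd-mm-yy, so '13-09-01' is 13 September 2001, and a sketch of the equivalent range (assuming that is the intended day) is:
-- Sargable range filter: an index on dated can now be used for a seek
SELECT *
FROM table1
WHERE dated >= '20010913'       -- 13 Sep 2001, 00:00
  AND dated <  '20010914'       -- up to, but not including, 14 Sep 2001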
I have been dealing with an intermittent problem for several months that manifests itself on my computer as well as a customer's computer. It is happening so often, upon booting the computer, that I just open and then minimize Task Manager so that it sits in the tool tray and the bar graph stays visible.
From time to time the processor bar graph will max out, and when I open Task Manager and sort by CPU on the Processes tab, sqlservr.exe is using 99% of the CPU.
In Enterprise Manager I have set maximum memory to 25% of the available system memory. I have tried this in both fixed mode and dynamic mode, with no change.
I was told that there was a SQL Server version that was susceptible to a worm that caused this. I have since upgraded to SQL version 8.00.194. I'm not sure of the version that I replaced, but I thought the previous version was the one that was susceptible to the worm.
Has anyone fought this battle and if so can you offer any experience or advice?
Thanks very much for your help,
Doc
Hi, I am taking a date from the user.
If I want to take only the date, or only the time, from the user and save it into the database, what should I do?
I am using MS SQL Server, whose datetime datatype stores both a date and a time.
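Before SQL Server 2008 there is no separate DATE or TIME type, so the usual approach is to store a datetime and normalize away the part you do not need. A minimal sketch of the two common idioms:
DECLARE @value DATETIME
SET @value = GETDATE()

-- Keep only the date: round the datetime down to midnight
SELECT DATEADD(DAY, DATEDIFF(DAY, 0, @value), 0) AS DateOnly

-- Keep only the time: store it as an hh:mm:ss string (style 108)
SELECT CONVERT(VARCHAR(8), @value, 108) AS TimeOnly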
Hi all,
2 weeks ago I deleted about 200 GB of data from a 300 GB+ database. It's a custom DB we want to use to test a few things. We wanted a smaller DB for our testing, and since we didn't have one we grabbed a production backup, removed sensitive data, and ran a large archiving script on it... Anyway, so far so good, but our data file was still the same size as before.
So we started a shrinkdatabase... it has been running for 2 weeks now! After about 1 week I interrupted the shrinkdatabase process and ran
dbcc shrinkdatabase('DB', truncateonly)
just to see if the data file would get reduced a bit or not. It did get reduced by about 20 GB. I assume that
dbcc shrinkdatabase('DB', 0)
had freed up enough pages at the end of the data file that a truncateonly was able to release some space... Anyway, after this we started the
dbcc shrinkdatabase('DB', truncateonly)
again... still running...
The database has never been shrunk before, and every index is highly fragmented... Is that why it's taking so long? Am I actually going to have to wait another few weeks before it finishes?
Does anyone have experience running shrink on large DBs?
thanks!
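On a file this size, DBCC SHRINKFILE in small increments is usually more manageable than one big SHRINKDATABASE: each step releases the space it has already reclaimed, and you can stop between steps. The heavy page movement is also why everything ends up fragmented and needs reindexing afterwards. A minimal sketch; the logical file name and target size are placeholders you would read from sp_helpfile:
-- Find the logical file names and current sizes
EXEC sp_helpfile

-- Shrink the data file in steps instead of all at once (target size is in MB)
DBCC SHRINKFILE (N'DB_Data', 180000)
-- ...repeat with progressively smaller targets, then rebuild or defragment the indexes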
I have the following common table expression query, which is taking around 15 hours to run. Can someone suggest what I can do to speed this thing up?
; with
a as (select proj_id, proj_start_dt,proj_end_dt, case when charindex('.', Proj_ID) > 0 then left(Proj_ID, len(Proj_ID) - charindex('.', reverse(Proj_ID))) end as Parent_Proj_ID from ods32.dbo.Proj a), --add Parent_Proj_ID column
b as (select proj_id, proj_start_dt,proj_end_dt,Parent_Proj_ID from a where PROJ_START_DT is not null and PROJ_END_DT is not null --get all valid rows
union all
select a.Proj_Id, b.PROJ_START_DT, b.PROJ_END_DT, a.Parent_Proj_ID from b inner join a on b.Proj_Id = a.Parent_Proj_ID where a.PROJ_START_DT is null or a.PROJ_END_DT is null) --get all invalid children of valid rows and give them the dates of their parents
update a set PROJ_START_DT = b.PROJ_START_DT, PROJ_END_DT = b.PROJ_END_DT
from WPData a left outer join b on a.Proj_ID = b.Proj_ID -- join up and update
thanks
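One change that often helps with this pattern is materializing the non-recursive part into an indexed temp table, so the recursive join can seek on Parent_Proj_ID instead of re-evaluating the string expressions against ods32.dbo.Proj at every level. A hedged sketch of just that change; the temp table and index names are made up, and the rest of the statement would reference #a instead of the CTE a:
-- Materialize the derived Parent_Proj_ID once, with indexes for the recursive join
SELECT proj_id, proj_start_dt, proj_end_dt,
       CASE WHEN CHARINDEX('.', Proj_ID) > 0
            THEN LEFT(Proj_ID, LEN(Proj_ID) - CHARINDEX('.', REVERSE(Proj_ID)))
       END AS Parent_Proj_ID
INTO #a
FROM ods32.dbo.Proj

CREATE INDEX IX_a_Proj ON #a (proj_id)
CREATE INDEX IX_a_Parent ON #a (Parent_Proj_ID)
-- ...then define CTE b against #a and run the same UPDATE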
I'm restoring a DB; the backup file is 7 GB. It's taking more than 10 minutes...
How do I know if the backup file is OK to restore?
=============================
http://www.sqlserverstudy.com
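A 7 GB restore taking more than 10 minutes is not unusual, but you can sanity-check the file itself before restoring. A minimal sketch; the path is a placeholder:
-- Read the backup header and verify that the backup is readable, without restoring it
RESTORE HEADERONLY FROM DISK = 'C:\Backups\MyDatabase.bak'
RESTORE VERIFYONLY FROM DISK = 'C:\Backups\MyDatabase.bak'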
What can I do?
All queries that used to work are taking forever now.
Is there a maximum size for the DB that I may have reached?
Please advise ASAP.
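Unless this is MSDE or Express (which cap the database at 2 GB or 4 GB depending on version), the full editions have no practical size limit you would hit this way; a sudden across-the-board slowdown more often means the data has grown and statistics or indexes need maintenance. A quick check, as a sketch:
-- Current size and unallocated space of the database you are connected to
EXEC sp_spaceused

-- Refresh statistics, which often helps when everything suddenly gets slow
EXEC sp_updatestats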
Hi:
I have issued the following ALTER TABLE WITH CHECK ADD CONSTRAINT on a table which has around 100K rows, and it is taking a long time (the ALTER TABLE has been running for more than 30 minutes). Is this normal, or should I kill the process?
ALTER TABLE [dbo].[tblAbsHeqAnalyticOutputSimulationPathValues]
WITH CHECK ADD CONSTRAINT [CK_tblAbsHeqAnalyticOutputSimulationPathValues_1]
CHECK ([dbo].[svfConstraintVerifyTableUniqueActiveEntryFacade]('tblAbsHeqAnalyticOutputSimulationPathValues')<=(1) AND [dbo].[svfConstraintVerifyTableUniqueActiveEntryFacade]('tblAbsHeqAnalyticOutputSimulationPathValues')>=(0))
Thanks !
Dear Experts,
I have one table named table11. This particular table has 30 columns and 40,000 rows of data.
It takes 35 seconds to run select * from table11.
It will definitely take more time if I use this table in procedures, functions, or views.
Where is the problem? Does it generally take that much time, or is there something wrong?
Guidance please...
Vinod
Even you learn 1%, Learn it with 100% confidence.