SQL 2012 :: Importing XER Format Files Through Wizard Takes A Long Time?
Aug 9, 2015
We are importing XER files through the wizard into a SQL Server database, and each import (a single project) takes 35-45 minutes. Is there any option to reduce the time, or any other import method that would give us faster results?
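For illustration, the usual faster path is a bulk load rather than the wizard. A minimal sketch, assuming the XER data (which is tab-delimited text) has first been extracted to a flat file; the table name and file path here are hypothetical:

BULK INSERT dbo.ProjectImport              -- hypothetical target table
FROM 'D:\Imports\project1.txt'             -- hypothetical flat file extracted from the XER
WITH (
    FIELDTERMINATOR = '\t',                -- XER content is tab-delimited
    ROWTERMINATOR = '\n',
    FIRSTROW = 2,                          -- skip a header row, if present
    TABLOCK                                -- allows minimal logging under simple/bulk-logged recovery
);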
View 0 Replies
Feb 13, 2001
Has anybody come across situations where queries take longer to execute the second time? The server is a dedicated SQL Server box with 1 GB of memory.
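A diagnostic sketch (not from the original post): on SQL Server 2000 and later you can compare cold-cache and warm-cache timings; the table name is a hypothetical stand-in for the slow query, and DBCC DROPCLEANBUFFERS should only be run on a test box.

DBCC DROPCLEANBUFFERS          -- flush clean data pages so the next run starts cold
GO
SET STATISTICS TIME ON
SELECT COUNT(*) FROM MyLargeTable   -- hypothetical stand-in for the query in question
SET STATISTICS TIME OFF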
Thanks in advance.
Praveena
View 2 Replies
View Related
Mar 1, 2005
Hi, we have a table with about 400,000 records in it, and it is starting to take longer and longer to add a new record. I was thinking of creating another identical table and archiving off most of the records every month (we are now adding about 4,000 records a day). Is this the best thing to do?
I don't know a lot about sql server so any help or suggestions would be great
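A minimal sketch of the monthly archive approach described above, with hypothetical table and column names (it assumes an identical archive table already exists):

-- hypothetical names; MyTable_Archive has the same structure as MyTable
INSERT INTO MyTable_Archive
SELECT * FROM MyTable
WHERE created_date < DATEADD(month, -1, GETDATE())

DELETE FROM MyTable
WHERE created_date < DATEADD(month, -1, GETDATE())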
View 4 Replies
View Related
May 5, 2006
I saw this post by dterrie in the Wishlist thread and I just wanted to second it:
"How about bringing back a simple dBase import. The SSIS guys are clearly FAR out of touch with reality if they think people who handle data no longer need to work with dbf files. I've seen alot of dumb stuff in my day, bit this is just sheer brilliance. I just love the advice of first importing into Access and then importing the Access table. Gee, why didn't I think of such a convenient solution. I could have had a V-8."
I've been struggling with this the last couple days and finally decided to import the dBase III file into Access and then import that into SQL Server 2005. Imagine my surprise when I discovered this was the current recommended method.
That's just ridiculous. Can someone tell me why they would reduce some of the functionality of SQL Server from 2000 to 2005? This was a very easy process in SQL Server 2000...
View 3 Replies
View Related
Mar 14, 2008
Hi, is there any way in SQL Server 2000 to audit or record which queries consume the most resources on the server, so I can focus on improving them?
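On SQL Server 2000 itself the usual answer is a Profiler trace, but for reference, on SQL Server 2005 and later the DMVs expose this directly; a sketch:

SELECT TOP 20
    qs.total_worker_time,                    -- cumulative CPU, in microseconds
    qs.total_logical_reads,
    qs.execution_count,
    SUBSTRING(st.text, 1, 200) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC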
Thanks
View 1 Replies
View Related
Mar 28, 2008
I could use a little help here. We have a stored procedure that runs on SQL2000 and for a large dataset only takes 1-2 minutes. On SQL2005 however, it takes around 25 minutes. Any advice or insight anyone could give would be great.
Here's the stored procedure:
CREATE PROCEDURE daa_upd_relationship_balance_hist
AS
begin tran
insert fldarts..daa_relationship_bal_hist
select <-- list snipped -->
from daa_relationship_bal drb, daa_user_review dur
where
drb.acct_no = dur.acct_no and
drb.control_2 = dur.control_2 and
drb.nb_gl_cost_ctr = dur.nb_gl_cost_ctr and
drb.nb_dda_sav_type = dur.nb_dda_sav_type and
drb.acct_no+drb.control_2+drb.nb_gl_cost_ctr+drb.nb_dda_sav_type+convert(char(10),dur.activity_date, 101)
not in
(select acct_no+control_2+nb_gl_cost_ctr+nb_dda_sav_type+convert(char(10), activity_date, 101)
from fldarts..daa_relationship_bal_hist)
if @@error = 0
commit tran
else
begin
rollback tran
print '!!!Error (daa_relationship_bal_hist) : Relationship Balance History not updated'
end
return
GO
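A commonly suggested rewrite of this pattern (a sketch, not the author's tested code): the concatenated-string NOT IN prevents index use on the history table, whereas NOT EXISTS compares the key columns individually and can use the composite primary key on daa_relationship_bal_hist. Note the original compares only the date portion of activity_date via convert(..., 101); adjust if the time component matters.

insert fldarts..daa_relationship_bal_hist
select <-- list snipped -->    -- column list was snipped in the original post
from daa_relationship_bal drb
join daa_user_review dur
  on  drb.acct_no = dur.acct_no
  and drb.control_2 = dur.control_2
  and drb.nb_gl_cost_ctr = dur.nb_gl_cost_ctr
  and drb.nb_dda_sav_type = dur.nb_dda_sav_type
where not exists (
    select 1
    from fldarts..daa_relationship_bal_hist h
    where h.acct_no = drb.acct_no
      and h.control_2 = drb.control_2
      and h.nb_gl_cost_ctr = drb.nb_gl_cost_ctr
      and h.nb_dda_sav_type = drb.nb_dda_sav_type
      and h.activity_date = dur.activity_date
)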
So we have three tables. Here's a schema for each and the indexes on them. I've omitted columns from the tables that are not utilized in this query.
daa_relationship_bal:
CREATE TABLE [daa_relationship_bal] (
[control_2] [char] (3) NOT NULL ,
[nb_gl_cost_ctr] [char] (7) NOT NULL ,
[acct_no] [char] (14) NOT NULL ,
[nb_dda_sav_type] [char] (3) NOT NULL
)
index:
idx_upd_balance_hist: nonclustered, located on PRIMARY; keys: acct_no, control_2, nb_gl_cost_ctr, nb_dda_sav_type
daa_user_review:
CREATE TABLE [daa_user_review] (
[control_2] [char] (3) NOT NULL ,
[nb_gl_cost_ctr] [char] (7) NOT NULL ,
[acct_no] [char] (14) NOT NULL ,
[nb_dda_sav_type] [char] (1) NOT NULL ,
[activity_date] [datetime] NULL
)
index:
PK_daa_user_review_1__37: nonclustered, unique, primary key, located on INDEXES; keys: control_2, nb_gl_cost_ctr, acct_no, nb_dda_sav_type
daa_relationship_bal_hist:
CREATE TABLE [daa_relationship_bal_hist] (
[control_2] [char] (3) NOT NULL ,
[nb_gl_cost_ctr] [char] (7) NOT NULL ,
[acct_no] [char] (14) NOT NULL ,
[nb_dda_sav_type] [char] (3) NOT NULL ,
[activity_date] [datetime] NOT NULL
)
index:
PK_daa_rel_bal_hist_1__37: nonclustered, unique, primary key, located on PRIMARY; keys: control_2, nb_gl_cost_ctr, acct_no, nb_dda_sav_type, activity_date
Any help on this would be great. If more information is needed, please let me know.
View 5 Replies
View Related
Jul 20, 2005
I'm running an ISP database in SQL 6.5 which has a table 'calls'. When the new month starts I create a new table with the same fields, move the data of the previous month into that table, and delete it from calls. So 'calls' holds the data of only the current month. For example, at the start of November 2003 I ran these queries:

Create Table Oct2003Calls {
................................
}

/* Now insert data of October into the new table */
INSERT Oct2003Calls
SELECT *
FROM calls
WHERE calldate < '11/1/03'

/* Finally delete October data from the calls table */
DELETE FROM calls
WHERE calldate < '11/1/03'

The problem is that while the insert query takes about 2 minutes to execute, the delete query takes over 10 minutes to affect the same number of rows. Why is that? This causes problems because user authentication stops while this query is running, which means users can't connect to the internet.
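A common mitigation (a sketch under the poster's SQL 6.5 constraints) is to delete in small batches so that blocking and log growth per statement stay bounded:

SET ROWCOUNT 10000                          -- limit each DELETE to 10,000 rows
DELETE FROM calls WHERE calldate < '11/1/03'
WHILE @@ROWCOUNT > 0
    DELETE FROM calls WHERE calldate < '11/1/03'
SET ROWCOUNT 0                              -- restore normal behaviour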
View 4 Replies
View Related
Mar 3, 2005
Hi there... I wrote a SP to check for different types of exceptions in a few database tables. When I was writing the scripts, everything seemed to execute fairly quickly and I was satisfied with the performance. When I completed the scripts and compiled them into a stored procedure and ran it (using Exec), it took a lot longer to run than I thought it would. So I went through each section of the script and ran each portion individually to see which part was taking so long.... but all the scripts ran very quickly. The individual scripts, run separately, took a combined total of 0:26 to run.... but the SP was taking 1:30 to run. (????) So then I took ALL the script contained in the SP and ran it by itself in the Query Analyzer.... it took 0:27 to run. (??????)
So basically... the script that I wrote takes 27 seconds to execute when run by itself in Query Analyzer... but when I take that very same script and turn it into a stored procedure and run it, it takes a minute and a half.
Any ideas why? I thought SPs were supposed to run faster because they're precompiled.
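One frequently cited cause is a cached plan compiled under atypical conditions, or differing SET options between Query Analyzer and the calling connection. A quick test (the procedure name here is hypothetical) is to force a fresh plan:

-- hypothetical procedure name; forces a one-off recompile for this execution
EXEC dbo.usp_CheckExceptions WITH RECOMPILE

-- or flag the procedure to be recompiled on its next execution
EXEC sp_recompile 'dbo.usp_CheckExceptions'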
WATYF
View 1 Replies
View Related
Aug 1, 2007
SQL 2005 Standard on a server of decent spec.
The database in question is only about 5 GB on a 450 GB partition.
At the beginning of the month I run:
BACKUP LOG [objectstore] TO DISK = 'D:\Backups\Prod\backup_objectstore.BAK'
WITH NOFORMAT, INIT, NAME = N'objectstore backup'
and then every 10 minutes (within working hours) for the rest of the month I run:
BACKUP LOG [objectstore] TO DISK = 'D:\Backups\Prod\backup_objectstore.BAK'
WITH NOFORMAT, NOINIT, NAME = N'objectstore backup'.
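As an aside (a sketch, not the poster's setup): a common variant writes each log backup to its own dated file, so that no single file accumulates a month of appends; the path mirrors the one above and is assumed.

DECLARE @file varchar(260)
SET @file = 'D:\Backups\Prod\objectstore_log_'
          + CONVERT(varchar(8), GETDATE(), 112) + '.BAK'   -- yyyymmdd suffix
BACKUP LOG [objectstore] TO DISK = @file
WITH NOFORMAT, NOINIT, NAME = N'objectstore backup'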
The amount of data that gets backed up is the same throughout the month, and the load on the server as a whole also stays constant; NOTHING increases during the month that would affect this server in any way. Yet at the beginning of the month the backup takes 10 seconds, and by the end it takes 5-6 minutes.
Why?
Thanks
Alastair Jones.
"A computer once beat me at chess - but it was no match for me at kick boxing" - Emo Phillips.
View 11 Replies
View Related
Jun 20, 2013
Problem Summary: Merge Statement takes several times longer to execute than equivalent Update, Insert and Delete as separate statements. Why?
I have a relatively large table (about 35,000,000 records, approximately 13 GB uncompressed and 4 GB with page compression, including indexes). A MERGE statement pretty consistently takes two or three minutes to perform an update, insert and delete. At one extreme, updating 82 (yes, 82) records took 1 minute, 45 seconds. At the other extreme, updating 100,000 records took about five minutes. When I changed the MERGE to the equivalent separate UPDATE, INSERT & DELETE statements (embedded in an explicit transaction), the entire update took only 17 seconds. The query plans for the separate UPDATE, INSERT & DELETE statements look very similar to the query plan for the combined MERGE. However, all the row count estimates for the MERGE statement are way off.
Obviously, I am going to use the separate UPDATE, INSERT & DELETE statements. The actual query plans for the four statements ( combined MERGE and the separate UPDATE, INSERT & DELETE ) are attached. SQL Code to create the source and target tables and the actual queries themselves are below. I've also included the statistics created by my test run. Nothing else was running on the server when I ran the test.
Server Configuration:
SQL Server 2008 R2 SP1, Enterprise Edition
3 x Quad-Core Xeon Processor
Max Degree of Parallelism = 8
148 GB RAM
SQL Code:
Target Table:
USE TPS;
IF OBJECT_ID('dbo.ParticipantResponse') IS NOT NULL
DROP TABLE dbo.ParticipantResponse;
[code]....
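Since the actual statements were truncated above, here is a minimal sketch of the separate-statement pattern the poster describes; the staging table and the ParticipantID/ResponseValue columns are hypothetical:

BEGIN TRANSACTION;

UPDATE t
SET t.ResponseValue = s.ResponseValue
FROM dbo.ParticipantResponse AS t
JOIN dbo.ParticipantResponse_Staging AS s   -- hypothetical staging table
    ON s.ParticipantID = t.ParticipantID;

INSERT INTO dbo.ParticipantResponse (ParticipantID, ResponseValue)
SELECT s.ParticipantID, s.ResponseValue
FROM dbo.ParticipantResponse_Staging AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.ParticipantResponse AS t
                  WHERE t.ParticipantID = s.ParticipantID);

DELETE t
FROM dbo.ParticipantResponse AS t
WHERE NOT EXISTS (SELECT 1 FROM dbo.ParticipantResponse_Staging AS s
                  WHERE s.ParticipantID = t.ParticipantID);

COMMIT TRANSACTION;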
View 9 Replies
View Related
Nov 29, 2006
(Applies to SQLServer 2005 SP1)
We have found that when using the SSIS "Import and Export Wizard" with the "Microsoft Excel" data source, there appears to be a maximum column length of 255 characters.
Even when defining the destination table columns as nvarchar(4000), the wizard fails with the errors shown below.
We have found no workaround except manually changing the input data. There don't appear to be any "Advanced" options for the Excel importer as there are for the flat-text importer. So, no question here, just posting the bug so that *next* time someone searches the web for an answer, this post comes up.
Messages:
Error 0xc020901c: Data Flow Task: There was an error with output column "English String" (18) on output "Excel Source Output" (9). The column status returned was: "Text was truncated or one or more characters had no match in the target code page.". (SQL Server Import and Export Wizard)
Error 0xc020902a: Data Flow Task: The "output column "English String" (18)" failed because truncation occurred, and the truncation row disposition on "output column "English String" (18)" specifies failure on truncation. A truncation error occurred on the specified object of the specified component. (SQL Server Import and Export Wizard)
Error 0xc0047038: Data Flow Task: The PrimeOutput method on component "Source - Sheet1$" (1) returned error code 0xC020902A. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. (SQL Server Import and Export Wizard)
Error 0xc0047021: Data Flow Task: Thread "SourceThread0" has exited with error code 0xC0047038. (SQL Server Import and Export Wizard)
Error 0xc0047039: Data Flow Task: Thread "WorkThread0" received a shutdown signal and is terminating. The user requested a shutdown, or an error in another thread is causing the pipeline to shutdown. (SQL Server Import and Export Wizard)
Error 0xc0047021: Data Flow Task: Thread "WorkThread0" has exited with error code 0xC0047039. (SQL Server Import and Export Wizard)
(Edit: After searching further, this is documented under "Excel Source" in BOL, which provides a registry-based workaround. I guess the issue is that the wizard considers truncation to be a 'fail' case, and there's no easy way to override this behaviour, specify the column types, or determine which line is in error.)
Truncated text. When the driver determines that an Excel column contains
text data, the driver selects the data type (string or memo) based on the
longest value that it samples. If the driver does not discover any values longer
than 255 characters in the rows that it samples, it treats the column as a
255-character string column instead of a memo column. Therefore, values longer
than 255 characters may be truncated. To import data from a memo column without
truncation, you must make sure that the memo column in at least one of the
sampled rows contains a value longer than 255 characters, or you must increase
the number of rows sampled by the driver to include such a row. You can increase
the number of rows sampled by increasing the value of TypeGuessRows under the
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Excel registry key.
View 21 Replies
View Related
May 23, 2014
I am using SSIS to load raw files into a database. My files have a Date column with the format
1/1/2010 12:00:00 PM.
I want to load this column in 24-hour format, i.e. 1/1/2010 24:00:00.
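In SSIS this would typically be a Derived Column expression, but the equivalent T-SQL conversion (a sketch) shows the idea: CONVERT style 120 formats a datetime with a 24-hour clock.

SELECT CONVERT(varchar(19), CAST('1/1/2010 1:30:00 PM' AS datetime), 120)
-- returns 2010-01-01 13:30:00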
View 5 Replies
View Related
Mar 10, 2015
I am looking to be able to add 15 minutes to a time value that is in character format. Here is sample data:
0530
0545
0600
0615
0630
0645
0700
0715
0730
0745
Whenever there is a leading zero, I need to preserve it as well. Here is an example of what I am looking for:
0545 + 15 = 0600
0600 + 15 = 0615
1345 + 15 = 1400
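A sketch of one way to do this in T-SQL: insert a colon, cast to datetime, add the 15 minutes, then format back with CONVERT style 108, which always zero-pads.

DECLARE @t char(4)
SET @t = '1345'
SELECT REPLACE(
    CONVERT(varchar(5),
        DATEADD(minute, 15, CAST(STUFF(@t, 3, 0, ':') AS datetime)),
        108),              -- style 108 is hh:mm:ss, truncated here to hh:mm
    ':', '')               -- strip the colon again: returns '1400'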
View 9 Replies
View Related
Oct 20, 2006
Hi All,
I have become frustrated and I am not finding the answers I expect.
Here's the gist: we support both Oracle and SQL Server for our product, and we would like to migrate clients who are willing/requesting to go from Oracle to SQL Server. Seems easy enough.
So, I create a database in SQL 2005, right-click and select "Import Data". The source is Microsoft OLE DB Provider for Oracle, and I set up my connection. So far so good.
I create my destination for SQL Native Client to the database that I plan on importing into. Still good.
Next, I select "Copy data from one or more tables or views". I move on to the next screen and select all of the objects from a schema. These are tables that relate only to our application, or in other words, nothing Oracle system-wise.
When I get to the end it progresses to about 20% and then throws this error about 300 or so times:
Could not connect source component.
Warning 0x80202066: Source - AM_ALERTS [1]: Cannot retrieve the column code page info from the OLE DB provider. If the component supports the "DefaultCodePage" property, the code page from that property will be used. Change the value of the property if the current string code page values are incorrect. If the component does not support the property, the code page from the component's locale ID will be used.
So, I'm thinking "Alright, we can search on this error and I'm sure there's an easy fix." I do some checking and indeed find out that there is a property setting called "AlwaysUseDefaultCodePage" in the OLE DB Data Source Properties. Great! I go back and look at the connection in the Import and... there's nothing with that property!
Back to the drawing board. I create a new SSIS package and figure out quickly that the AlwaysUseDefaultCodePage property is in there. I can transfer information from the Oracle source table to the SQL Server 2005 destination table, but it appears to be a one-to-one thing. Programming this, if I get it to work at all, will take me about 150 hours or so.
This makes perfect sense if all you are doing is copying a few columns or maybe one or two objects, but I am talking about 600+ objects with upwards of 2 million rows of data in each!!
This generates 2 questions:
1. If the Import Data Wizard cannot handle this operation on the fly, then why can't the AlwaysUseDefaultCodePage property be shown as part of the connection?
2. How do I create an SSIS package that will copy all of the data from Oracle to SQL Server? The destination tables have been created with the same schema and object names as the source. I don't want to create a Data Flow Task 600 times.
Help!!!
View 8 Replies
View Related
Dec 15, 2006
I am using VS2005 (VB) to develop a PPC WM5.0 program, and I am using SQL CE 3.0. My PPC hardware runs at 400 MHz.
The question is: each time the program is started, the first insert into the sdf database takes a long time. Does anyone know why, and how I can fix it?
I load the whole database into a dataset when the program starts, do all the Inserts, Updates and Deletes in this dataset, and write it back to the database after each action.
cn.Open()
sda = New SqlCeDataAdapter(SQL, cn) 'SQL = Select * From Table
scb = New SqlCeCommandBuilder(sda)
sda.Update(dataset)
cn.Close()
I checked sda.Update(); normally it takes about 0.08s to write one record back to the database. But:
1. Start the PPC Program
2. Load DB into dataset
3. Create a ONE new record in dataset
4. Fill back to DB
When I take these four steps each time, the write takes almost 1s or even more!
Actually, 0.08s is just the normal case. Sometimes it still takes over 1s to write back a dataset into which only one record was inserted while the program is running. (Even when all inserted records are exactly the same in data, just different in the integer key.)
However, when I give up the dataset and use the following code:
cn.Open()
Dim cmd As New SqlCeCommand(SQL, cn) ' the INSERT statement was built beforehand: Insert Into Table values(... all fields)
cmd.CommandType = CommandType.Text
cmd.ExecuteNonQuery()
cn.Close()
StartTime = Environment.TickCount
I found that the first insert still takes more time, but only about 0.2s, and the normal insert time is around 0.02s. It is 4 times faster!!!
View 1 Replies
View Related
Nov 12, 2007
Hi,
We need to select rows from the database that have been recently inserted/updated. We have a main primary table (COMMIT_TEST) and a second update table (COMMIT_TEST_UPDATE). The update table contains the primary key and a LAST_UPDATE field which is a datetime (to tell us when an update occurred). Triggers on the primary table are used to populate the update table.
If we insert or update the primary table in a transaction, we would expect the datetime of the insert/update to be the commit time; however, it seems that the insert/update statement is cached and getdate() is evaluated at the time of the statement instead of at the commit. This causes problems: we select rows based on LAST_UPDATE, and when a commit occurs later the earlier insert timestamp is what gets saved to the database, so we miss that update.
We would like to know if there is anyway to tell the SQL Server to not execute the function getdate() until the commit, or any other way to get the commit to create the correct timestamp.
We are using default isolation level. We have tried using getdate(), current_timestamp and even {fn Now()} with the same results. SQL Queries that reproduce the problem are provided below:
/* Different functions to get current timestamp all have been tested to produce the same results */
/*
SELECT GETDATE()
GO
SELECT CURRENT_TIMESTAMP
GO
SELECT {fn Now()}
GO
*/
/* Use these statements to delete the tables to allow recreate of the tables */
/*
DROP TABLE COMMIT_TEST
DROP TABLE COMMIT_TEST_UPDATE
*/
/* Create a primary table and an UPDATE table to store the date/time when the primary table is modified */
CREATE TABLE dbo.COMMIT_TEST (PKEY int PRIMARY KEY, timestamp) /* ROW_VERSION rowversion */
GO
CREATE TABLE dbo.COMMIT_TEST_UPDATE (PKEY int PRIMARY KEY, LAST_UPDATE datetime, timestamp ) /* ROW_VERSION rowversion */
GO
/* Use these statements to delete the triggers to allow reinsert */
/*
drop trigger LOG_COMMIT_TEST_INSERT
drop trigger LOG_COMMIT_TEST_UPDATE
drop trigger LOG_COMMIT_TEST_DELETE
*/
/* Create insert, update and delete triggers */
create trigger LOG_COMMIT_TEST_INSERT on COMMIT_TEST for INSERT as
begin
declare @time datetime
select @time = getdate()
insert into COMMIT_TEST_UPDATE (PKEY,LAST_UPDATE)
select PKEY, getdate()
from inserted
end
GO
create trigger LOG_COMMIT_TEST_UPDATE on COMMIT_TEST for UPDATE as
begin
declare @time datetime
select @time = getdate()
update COMMIT_TEST_UPDATE
set LAST_UPDATE = getdate()
from COMMIT_TEST_UPDATE, deleted, inserted
where COMMIT_TEST_UPDATE.PKEY = deleted.PKEY
end
GO
/* In our application deletes should never occur, so we don't log when rows are deleted; we just remove them from the UPDATE table */
create trigger LOG_COMMIT_TEST_DELETE on COMMIT_TEST for DELETE as
begin
if ( select count(*) from deleted ) > 0
begin
delete COMMIT_TEST_UPDATE
from COMMIT_TEST_UPDATE, deleted
where COMMIT_TEST_UPDATE.PKEY = deleted.PKEY
end
end
GO
/* Delete any previous inserted record to avoid errors when inserting */
DELETE COMMIT_TEST WHERE PKEY = 1
GO
/* What is the current date/time */
SELECT GETDATE()
GO
BEGIN TRANSACTION
GO
/* Insert a record into the primary table */
INSERT COMMIT_TEST (PKEY) VALUES (1)
GO
/* Simulate additional processing within this transaction */
WAITFOR DELAY '00:00:10'
GO
/* We expect at this point that the date is written to the database (or at least we need some way for this to happen) */
COMMIT TRANSACTION
GO
/* get the current date to show us what date/time should have been committed to the database */
SELECT GETDATE()
GO
/* Select results from the table we see that the timestamp is 10 seconds older than the commit, in other words it was evaluated at */
/* the insert statement, even though the row could not be read with a SELECT as it was uncommitted */
SELECT * FROM COMMIT_TEST
GO
SELECT * FROM COMMIT_TEST_UPDATE
Any help would be appreciated. We understand we could make changes to the application/database to approximate what we need, but all the solutions we have identified suffer from possible performance issues, or could still lead to missed deals (assuming the commit time is larger than some artificial time window).
Regards,
Mark
View 8 Replies
View Related
May 15, 2008
Hello All,
The query below takes too much time to execute:
Select
'PIT_ID' = CASE WHEN Best_BID_DATA.PIT_ID IS NOT NULL THEN Best_BID_DATA.PIT_ID ELSE Best_OFFER_DATA.PIT_ID END,
Best_Bid_Data.Bid_Customer,
Best_Bid_Data.Bid_Size,
Best_Bid_Data.Bid_Price,
Best_Bid_Data.Bid_Order_Id,
Best_Bid_Data.Bid_Order_Version,
Best_Bid_Data.Bid_ProductId,
Best_Bid_Data.Bid_TraderId,
Best_Bid_Data.Bid_BrokerId,
Best_Bid_Data.Bid_Reference,
Best_Bid_Data.Bid_Indicative,
Best_Bid_Data.Bid_Park,
Best_Offer_Data.Offer_Customer,
Best_Offer_Data.Offer_Size,
Best_Offer_Data.Offer_Price,
Best_Offer_Data.Offer_Order_Id,
Best_Offer_Data.Offer_Order_Version,
Best_Offer_Data.Offer_ProductId,
Best_Offer_Data.Offer_TraderId,
Best_Offer_Data.Offer_BrokerId,
Best_Offer_Data.Offer_Reference,
Best_Offer_Data.Offer_Indicative,
Best_Offer_Data.Offer_Park
from
(
Select PITID PIT_ID, CustomerId Bid_Customer, Size Bid_Size, Price Bid_Price, orderid Bid_Order_Id, Version Bid_Order_Version,
ProductId Bid_ProductId, TraderId Bid_TraderId, BrokerId Bid_BrokerId,
Reference Bid_Reference, Indicative Bid_Indicative, Park Bid_Park
From OrderTable C
Where
version = (select max(version) from OrderTable where orderid = c.orderid)
and BuySell = 'B'
and Status <> 'D'
and Park <> 1
and PitId in (select distinct pitid from MarketViewDef Where MktViewId = 4)
and Price =
( Select max(Price) From OrderTable cc
where version = (select max(version) from OrderTable where orderid = cc.orderid)
and PitId = c.PitId
and BuySell = 'B'
and Status <> 'D'
and Park <> 1
)
and Orderdate =
( Select min(Orderdate) From OrderTable dd
where version = (select max(version) from OrderTable where orderid = dd.orderid)
and PitId = c.PitId
and BuySell = 'B'
and Status <> 'D'
and Price = c.Price
and Park <> 1
)
and OrderId = (select top 1 OrderId from OrderTable ff
Where version = (select max(version) from OrderTable where orderid = ff.orderid)
and orderid = ff.orderid
and PitId = c.PitId
and BuySell = 'B'
and Status <> 'D'
and Price = c.Price
and Orderdate = c.Orderdate
and Park <> 1
)
) Best_Bid_Data
full outer join
(
Select PITID PIT_ID, CustomerId Offer_Customer, Size Offer_Size, Price Offer_Price, orderid Offer_Order_Id, Version Offer_Order_Version,
ProductId Offer_ProductId, TraderId Offer_TraderId, BrokerId Offer_BrokerId,
Reference Offer_Reference, Indicative Offer_Indicative, Park Offer_Park
From OrderTable C
Where
version = (select max(version) from OrderTable where orderid = c.orderid)
and BuySell = 'S'
and Status <> 'D'
and Park <> 1
and PitId in (select distinct pitid from MarketViewDef Where MktViewId = 4)
and Price =
( Select min(Price) From OrderTable cc
where version = (select max(version) from OrderTable where orderid = cc.orderid)
and PitId = c.PitId
and BuySell = 'S'
and Status <> 'D'
and Park <> 1
)
and Orderdate =
( Select min(Orderdate) From OrderTable dd
where version = (select max(version) from OrderTable where orderid = dd.orderid)
and PitId = c.PitId
and BuySell = 'S'
and Status <> 'D'
and Price = c.Price
and Park <> 1
)
and OrderId = (select top 1 OrderId from OrderTable ff
Where version = (select max(version) from OrderTable where orderid = ff.orderid)
and orderid = ff.orderid
and PitId = c.PitId
and BuySell = 'S'
and Status <> 'D'
and Price = c.Price
and Orderdate = c.Orderdate
and Park <> 1
)
) Best_Offer_Data
ON Best_Bid_Data.Pit_Id = Best_Offer_Data.Pit_Id
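One common restructuring (a sketch, assuming SQL Server 2005 or later; only the bid side is shown, the offer side is symmetric): compute the latest version per order once, then pick the best row per pit with ROW_NUMBER instead of four correlated subqueries. The OrderId ascending tie-break is an assumption, since the original uses an unordered "top 1".

;WITH LatestOrders AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY orderid ORDER BY version DESC) AS rn_ver
    FROM OrderTable
),
BestBids AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY PitId
                              ORDER BY Price DESC, Orderdate ASC, orderid ASC) AS rn_best
    FROM LatestOrders
    WHERE rn_ver = 1               -- latest version of each order only
      AND BuySell = 'B'
      AND Status <> 'D'
      AND Park <> 1
      AND PitId IN (SELECT DISTINCT pitid FROM MarketViewDef WHERE MktViewId = 4)
)
SELECT * FROM BestBids WHERE rn_best = 1   -- best bid per pit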
Can anyone please help me?
Thanks
Prashant
View 2 Replies
View Related
Apr 15, 2014
I am currently investigating a high average write time issue (145 ms) which seems to be occurring only on the tempdb data files. I have followed the recommended setup of TEMPDB in that:
1. Data files = number of physical cores
2. Data files and logfiles are on separate partitions away from the other databases.
3. Tempdb is pre-sized, and no frequent incremental file growths appear to be happening.
We have SharePoint 2012 set up on other SQL Servers with TEMPDB configured following the same guidelines, with far more SharePoint activity on similarly specified hardware, which is why this is confusing. FileIO auditing on the partitions themselves shows that the file IO is very fast on the partitions holding the tempdb data files, which leads me to believe that SharePoint may be the culprit, perhaps due to excess use of tempdb with operations taking a long time to resolve.
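A sketch of how to confirm where the write stalls are coming from, using the DMVs available on SQL Server 2008 R2:

SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_writes,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
   AND mf.file_id = vfs.file_id
ORDER BY avg_write_ms DESC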
View 3 Replies
View Related
May 16, 2007
We moved a SQL 2000 database to another platform by restoring the database. It took a lot longer than I expected. Would it take less time to restore it a second time to the same target database, since the allocations are already there?
Thanks
View 1 Replies
View Related
Sep 20, 2000
How is it possible that for a 133 MB SQL 7 database, the backup of the database itself takes 2 seconds, but the transaction log backup takes 25 minutes? We are doing log backups every 10 minutes, and appending.
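A quick check worth running (DBCC SQLPERF(LOGSPACE) is available on SQL 7) to see how large, and how full, each transaction log actually is:

DBCC SQLPERF(LOGSPACE)
-- reports log size (MB) and percent of log space used, per database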
Thanks.
a
View 2 Replies
View Related
May 23, 2007
Hello All,
I have SQL Server 2005 installed on my machine and I am firing the following query to insert 1500 records into a simple table having just one column.
Declare @i int
Set @i=0
While (@i<1500)
Begin
Insert into test2 values (@i)
Set @i=@i+1
End
Here is the table definition:
CREATE TABLE [dbo].[test2](
[c1] [int] NULL -- column name assumed; the original post omitted it
) ON [PRIMARY]
Now the problem is that on one of my servers this query takes just 500 ms to run, while on my production and other test servers it takes more than 25 seconds.
The same problem occurs with updates. I have checked the configurations of both servers and found them to be the same. Also, there are no indexes defined on either of the tables. I was wondering what the possible reason for this might be; any pointers would be really useful.
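One difference worth testing (a sketch): each single-row INSERT autocommits and forces its own transaction log flush, so the loop's elapsed time tracks the log disk's write latency. Wrapping the loop in one explicit transaction reduces this to a single flush at commit, and often explains gaps like this between servers:

Declare @i int
Set @i = 0
BEGIN TRANSACTION   -- one log flush at commit instead of 1500
While (@i < 1500)
Begin
    Insert into test2 values (@i)
    Set @i = @i + 1
End
COMMIT TRANSACTION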
Thanks in advance,
Mitesh
View 6 Replies
View Related
Oct 27, 2007
Hi all,
I have an MS Time Series model using a database of over a thousand products, each of which has hundreds of cases. It amazingly takes only a few minutes to finish processing the model, but when I click Mining Model Viewer to view the models, it takes many hours to show up. Once the window is open, I can choose models for different products almost instantly. Is this normal?
View 1 Replies
View Related
Jul 20, 2005
We are trying to import data from a SQL 7 machine into SQL 2000 using the Import Wizard in DTS. One of the tables has in excess of 80 million rows.
The first time we did this it worked fine, no problems. However, we had to recreate the database, and it has not worked since.
The error message is reported as 'the log file for 'dbname' is full'. This happens regardless of the fact that there is 100 GB free on disk, and that the database data and log files are both set to autogrow. The recovery model is set to simple.
When imported, the data (.mdf) file should be around 20 GB.
Would anyone know what is causing this, or how to get around it without going down the SQL 7 install, restore and upgrade to SQL 2000 route?
Any help would be appreciated greatly.
View 1 Replies
View Related
Dec 26, 2002
Hi Guys,
We have a 20 gig database with huge transactions. The transaction log backup is scheduled every hour
from 3.00 AM to 9.00 PM.
We take a full backup in the disk at 9.00 PM and again a full backup in the tape at 2.00 AM
It works fine during the day from 6.00 AM, completing within seconds, and the size is approx. 50 to 200 MB.
But the very first transaction log backup at 3.00 AM runs for about 3 hours, and the size is approx. 11 gig, which is almost equivalent to the full backup size. Some DTS packages run in the night, and as usual reindexing and integrity checks run, and there is no large user traffic during the night. But I have no idea why the very first transaction log backup in the morning takes longer and is this big. Is there any workaround to fix this problem?
Please advise.
Thanks,
Anu
View 7 Replies
View Related
Dec 28, 2005
When importing data from Access 2000 to SQL Server 2000, I get an error when the column's data type in Access is Memo and I try to set the column's data type in SQL Server to text or ntext. Here is the error that I am getting:
Query-based insertion or updating of BLOB values is not supported.
I have tried changing this to a varchar data type, but the fields can be very large and some are over the 8000-character limit.
Short of programmatically adding the data, is there another way to do this in SQL Server? I cannot use the Access Upsizing Wizard.
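One alternative (a sketch; the .mdb path and table name are hypothetical, and ad hoc queries against the Jet provider must be permitted on the server) is to pull the Access table server-side with OPENROWSET, letting SELECT INTO map Memo columns to ntext:

SELECT *
INTO dbo.ImportedFromAccess                      -- created automatically by SELECT INTO
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
     'C:\Data\MyDatabase.mdb'; 'Admin'; '',      -- hypothetical .mdb path
     MyMemoTable)                                -- hypothetical Access table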
Miranda
View 1 Replies
View Related
Feb 28, 2006
I have a table which contains a text field,
and basically I have to check whether a given phrase exists in that text:
select count(*) from MYtable where TxtField like '%MYPHRASE%'
There are 700,000 records in that table, and whenever I run this query it takes 9 seconds to give me the record count.
What am I doing wrong?
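The leading wildcard in LIKE '%MYPHRASE%' forces a scan of every text value, so no ordinary index can help. A sketch of the usual alternative, assuming SQL Server 2005 or later, an existing default full-text catalog, and a hypothetical unique key index PK_MYtable:

CREATE FULLTEXT INDEX ON MYtable (TxtField)
    KEY INDEX PK_MYtable;                 -- hypothetical unique index name

SELECT COUNT(*)
FROM MYtable
WHERE CONTAINS(TxtField, '"MYPHRASE"');   -- served by the full-text index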
View 3 Replies
View Related
Nov 22, 2007
Hi all,
I have a stored procedure that is called from a VB.NET application and takes an enormously long time to execute. In QA it takes only 10 seconds, but from the application it takes ages. The stored procedure is as follows:
PROCEDURE NAME IS SPTOPTWENTYUSERS
SELECT TOP 20 STRUSERNAME,SUM(INTBYTESRECVD) AS INTDOWNLOAD FROM TBLISAWEBLOGS
WHERE DTELOGDATE BETWEEN @BEGINDATE AND @ENDDATE
GROUP BY STRUSERNAME
ORDER BY INTDOWNLOAD DESC
The code that runs it is as follows:
sSQLString = "SPTOPTWENTYUSERS"
Using cnn As New SqlConnection(GetPath)
Try
Dim cmd As New SqlCommand(sSQLString, cnn)
Dim dr As SqlDataReader
With cmd
.CommandType = CommandType.StoredProcedure
.CommandTimeout = 0
.Parameters.Add("@BEGINDATE", SqlDbType.DateTime)
.Parameters.Add("@ENDDATE", SqlDbType.DateTime)
.Parameters("@BEGINDATE").Value = dtpStartDate.Value
.Parameters("@ENDDATE").Value = dtpEndDate.Value
End With
cnn.Open()
dr = cmd.ExecuteReader
' ... consume the reader here; the rest of the original snippet was truncated ...
Catch ex As Exception
' handle or log the exception (these closing lines are assumed, to complete the Try/Using blocks)
End Try
End Using
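A classic explanation is parameter sniffing on the date parameters: the cached plan suits whatever values the procedure was first compiled with, not the ones the application sends. A common workaround (a sketch of the rewritten procedure; the parameter declarations were not shown in the original post and are assumed) copies the parameters into local variables so the optimizer cannot sniff them:

ALTER PROCEDURE SPTOPTWENTYUSERS
    @BEGINDATE datetime,    -- parameter list assumed from the calling code
    @ENDDATE datetime
AS
DECLARE @B datetime, @E datetime
SELECT @B = @BEGINDATE, @E = @ENDDATE   -- local copies defeat parameter sniffing
SELECT TOP 20 STRUSERNAME, SUM(INTBYTESRECVD) AS INTDOWNLOAD
FROM TBLISAWEBLOGS
WHERE DTELOGDATE BETWEEN @B AND @E
GROUP BY STRUSERNAME
ORDER BY INTDOWNLOAD DESC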
Any help on why this happens would be much appreciated.
thanks
View 1 Replies
View Related
Jul 20, 2005
We have a job scheduled to re-index all DBs on our SQL 2000 box; normally it took 7 hours to complete, but all of a sudden it now takes more than 20 hours. What do you think is causing this problem? We have no clue.
View 2 Replies
View Related
May 16, 2008
Hi,
I have a package designed to bring data tables over to SQL Server. There are 9 data flow tasks that run in parallel, to bring 9 data tables over. In BIDS, when I execute the package, it runs in about 8 minutes. If I start the scheduled job manually, it runs for around 8 minutes too. But it runs for about 30 minutes at the scheduled time at midnight.
I wonder what I can do to speed up the scheduled job.
Thanks
View 13 Replies
View Related
Jul 11, 2015
Getting "The specified network name is no longer available" during SQL Backups dumping to Network share.
- problem started occurring on July 3, 2015
- our Ola Hallengren backup solution deployed to over 150 SQL Servers.. was running fine for almost 2 years
- Occurring in multiple SQL Server environments: W/Server 2012 Ent, W/Server 2008 Ent, SQL 2012 Ent, SQL 2008 R2 Ent
- We're utilizing latest Ola Hallengren backup solution (Jan 2015 release)
- Dumping to network share (jobs running under SQL Agent account w/ local admin & sysadmin on server, and full rights to the network share)
View 1 Replies
View Related
Nov 15, 2007
Is there a way (using an included SSIS task rather than coding a script task) to detect whether a package has run longer than a specified period of time?
So I can send an email to operators notifying them that a job is taking longer than usual.
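Not an SSIS task as such, but if the package runs from a SQL Agent job, a plain T-SQL step or separate watchdog job can detect the overrun (a sketch; the 60-minute threshold is arbitrary):

SELECT j.name,
       ja.start_execution_date,
       DATEDIFF(minute, ja.start_execution_date, GETDATE()) AS minutes_running
FROM msdb.dbo.sysjobactivity AS ja
JOIN msdb.dbo.sysjobs AS j
    ON j.job_id = ja.job_id
WHERE ja.start_execution_date IS NOT NULL
  AND ja.stop_execution_date IS NULL        -- still running
  AND DATEDIFF(minute, ja.start_execution_date, GETDATE()) > 60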
Thanks
View 12 Replies
View Related
Jan 31, 2007
Hi,
Cube processing is taking more time on a new server, while the same cubes take less time on another server.
The cubes are processed through a DTS package.
Can anybody help find the possible reasons for this?
Regards
Naseem
View 5 Replies
View Related
Jul 23, 2005
Hello, I have these tables:

CREATE TABLE [dbo].[COREAttribute] (
[oid] [uniqueidentifier] NOT NULL ,
[CLSID] [uniqueidentifier] NOT NULL
) ON [PRIMARY]

CREATE UNIQUE CLUSTERED INDEX [COREAttributeOidIndex] ON [dbo].[COREAttribute]([oid], [CLSID]) WITH FILLFACTOR = 90 ON [PRIMARY]

CREATE TABLE [dbo].[COREBstrAttribute] (
[oid] [uniqueidentifier] NOT NULL ,
[iid] [uniqueidentifier] NOT NULL ,
[dispid] [int] NOT NULL ,
[value] [nvarchar] (1024) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]

CREATE CLUSTERED INDEX [COREBstrAttributeOidIndex] ON [dbo].[COREBstrAttribute]([oid]) WITH FILLFACTOR = 90 ON [PRIMARY]

Now when I try this query, it takes 8-10 minutes:

DECLARE @t TABLE (oid uniqueidentifier PRIMARY KEY,
[Description] nvarchar(1024) NULL,
[Name] nvarchar(1024) NULL,
[UID] nvarchar(1024) NULL)

DECLARE @COREBSTRAttribute TABLE (oid uniqueidentifier, dispid int NULL, value nvarchar(1024) NULL)

INSERT INTO @COREBSTRAttribute
SELECT oid, dispid, value
FROM dbo.COREBSTRAttribute
WHERE iid = '{1449DB20-DB97-11D6-A551-00B0D021E10A}'

INSERT @t
SELECT DISTINCT c0.oid, c1.Value, c2.Value, c3.Value
FROM (
SELECT oid FROM dbo.COREAttribute
WHERE CLSID IN (
'{1449DB2B-DB97-11D6-A551-00B0D021E10A}', '{1449DB2D-DB97-11D6-A551-00B0D021E10A}', '{1449DB2F-DB97-11D6-A551-00B0D021E10A}',
'{1449DB31-DB97-11D6-A551-00B0D021E10A}', '{1449DB33-DB97-11D6-A551-00B0D021E10A}', '{1449DB35-DB97-11D6-A551-00B0D021E10A}',
'{1449DB37-DB97-11D6-A551-00B0D021E10A}', '{1449DB39-DB97-11D6-A551-00B0D021E10A}', '{1449DB3B-DB97-11D6-A551-00B0D021E10A}',
'{1449DB3D-DB97-11D6-A551-00B0D021E10A}', '{1449DB3F-DB97-11D6-A551-00B0D021E10A}', '{1449DB43-DB97-11D6-A551-00B0D021E10A}',
'{1449DB45-DB97-11D6-A551-00B0D021E10A}', '{1449DB47-DB97-11D6-A551-00B0D021E10A}', '{1449DB49-DB97-11D6-A551-00B0D021E10A}',
'{1449DB4B-DB97-11D6-A551-00B0D021E10A}', '{1449DB4D-DB97-11D6-A551-00B0D021E10A}', '{1449DB51-DB97-11D6-A551-00B0D021E10A}',
'{DAA598D9-E7B5-4155-ABB7-0C2C24466740}', '{6921DAC3-5F91-4188-95B9-0FCE04D3A04D}', '{128F17D4-2014-480A-96C6-370599F32F67}',
'{9F3A64C9-28F3-440B-B694-3E341471ED8E}', '{2E3AB438-7652-4656-9A18-4F9C1DC27E8C}', '{B69E74A7-0E48-4BA2-B4B7-5D9FFEDC2D97}',
'{2BB836D3-2DC1-4899-9406-6A495ED395C3}', '{9CFFDC3A-5DF5-4AD8-B067-6EF5A9736681}', '{E18E470B-B297-43D2-B9CD-71AF65654970}',
'{9BDCDA97-1171-409D-B3AB-71DA08B1E6D3}', '{0E91AC62-7929-4B42-B771-7A6399A9E3B0}', '{C8BAE335-CCB7-4F1D-8E9D-85C301188BE2}',
'{97E6E186-8F32-42E6-B81C-8E2E0D7C5ABA}', '{BE5B6233-D4E7-4EF6-B5FC-91EA52128723}', '{4ECDAAE1-828A-4C43-8A66-A7AB6966F368}',
'{19082B90-EF02-45CC-B037-AFD0CF91D69E}', '{6F76CEF7-EBC0-48C6-8B78-C5330324C019}', '{18492042-B22A-4370-BFA3-D0481800BBC7}',
'{A71343AD-CC09-4033-A224-D2D8C300904A}', '{EC10BD0A-FDE3-4484-BEA6-D5A2E456256C}', '{F7F8A4E1-651A-4A48-B55A-E8DA59D401B2}',
'{A923226F-B920-4CFA-9B0D-F422D1C36902}', '{A95ACA6A-16AC-47E4-A9A6-F530D50A475A}', '{C31DB61A-5221-42CF-9A73-FE76D5158647}'
)
) AS c0
LEFT JOIN @COREBSTRAttribute AS c1 ON (c0.oid = c1.oid) AND c1.dispid = 28
LEFT JOIN @COREBSTRAttribute AS c2 ON (c0.oid = c2.oid) AND c2.dispid = 112
LEFT JOIN @COREBSTRAttribute AS c3 ON (c0.oid = c3.oid) AND c3.dispid = 192

Any help is greatly appreciated.
Thanks
Sunit
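One thing worth trying (an assumption, not from the original post): the filter on iid has no supporting index, since the only index on COREBstrAttribute is the clustered one on oid, so the first INSERT scans the whole table. A nonclustered index on iid would turn that into a seek:

CREATE NONCLUSTERED INDEX IX_COREBstrAttribute_iid
    ON dbo.COREBstrAttribute (iid)    -- dispid and value are then fetched via the clustered key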
View 4 Replies
View Related