I would like to perform an
INSERT INTO LINKEDSVR.xyz.dbo.abc
SELECT ... FROM dbo.dfg
where LINKEDSVR is a linked server on another machine. Both servers are running SQL Server 2000 and have the DTC running.
When I run this batch from Query Analyzer without explicitly using transactions, it works well (it takes about 5 seconds). However, when I enclose it in
BEGIN [DISTRIBUTED] TRAN / COMMIT TRAN
the query runs forever.
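For reference, the batch has roughly this shape (a minimal sketch assembled from the fragments above; the column names and the SET XACT_ABORT line are my additions, since XACT_ABORT ON is required for data modification against a linked server inside a distributed transaction):

    SET XACT_ABORT ON
    BEGIN DISTRIBUTED TRAN

    INSERT INTO LINKEDSVR.xyz.dbo.abc (col1, col2)  -- col1/col2 are placeholder column names
    SELECT col1, col2
    FROM dbo.dfg

    COMMIT TRAN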
I also tried using the local server as a linked server (loopback), but that did not work either.
I found the following link, which seems to describe the problem we're experiencing: http://support.microsoft.com/kb/937517
The link includes the following workaround: "To prevent the SQLNCLI provider from sending an attention signal to the server, use the SQLNCLI provider to consume fully any rowsets that the OLE DB consumer creates."
How do I use the SQLNCLI provider to fully consume any rowsets?
I'm getting this error when executing the code below, going from W2K/SQL2K SP4 to XP/SQL2K SP4 over a dial-up link.
If I take away the BEGIN TRAN and COMMIT it works but, of course, if one statement fails I want a rollback. I'm executing this from a Delphi app, but I get the same from Query Analyzer.
I've tried both with and without the SET XACT_ABORT, and also tried with SET IMPLICIT_TRANSACTIONS OFF.
SET XACT_ABORT ON
BEGIN DISTRIBUTED TRAN

UPDATE OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.TRANSACTIONMAIN
SET REPFLAG = 0 WHERE REPFLAG = 1

UPDATE TSADMIN.TRANSACTIONMAIN
SET REPFLAG = 0 WHERE REPFLAG = 1 AND DONE = 1

UPDATE OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.WBENTRY
SET REPFLAG = 0 WHERE REPFLAG = 1

UPDATE TSADMIN.WBENTRY
SET REPFLAG = 0 WHERE REPFLAG = 1

UPDATE OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.FIXED
SET REPFLAG = 0 WHERE REPFLAG = 1

UPDATE TSADMIN.FIXED
SET REPFLAG = 0 WHERE REPFLAG = 1

UPDATE OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.ALTCHARGE
SET REPFLAG = 0 WHERE REPFLAG = 1

UPDATE TSADMIN.ALTCHARGE
SET REPFLAG = 0 WHERE REPFLAG = 1

UPDATE OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.TSAUDIT
SET REPFLAG = 0 WHERE REPFLAG = 1

UPDATE TSADMIN.TSAUDIT
SET REPFLAG = 0 WHERE REPFLAG = 1

COMMIT TRAN
It's got me stumped, so any ideas gratefully received. Thanks.
I have designed an SSIS package for an ETL process. In my package I have to read data from some tables and then insert it into another table with the same structure.
To read the data I have written dynamic T-SQL based on some conditions, and based on those it uses 25 different functions to populate 25 different columns. The T-SQL returns correct data and works fine in Enterprise Manager, but in my SSIS package it shows me a timeout error.
I have increased and decreased the timeout to catch the error, but it is still there. I have also tried setting the CommandTimeout property to 0.
If I use 0 for CommandTimeout then I get: Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.
and
Failed to open a fastload rowset for "[dbo].[P@@#$%$%%%]". Check that the object exists in the database.
I am getting this error:
Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.Data.OleDb.OleDbException: Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.
Does anybody have an idea?
I have a sequence container, and in it a script task that drops the existing tables. This sequence container is connected to another sequence container, and all of these are inside a For Each Loop container. When I run the package it works fine for the first loop, but it gives me an error on the second execution.
The message is like this:
Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.
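For context, the script task's drop step amounts to something like the following (a sketch; the table name is a placeholder):

    IF OBJECT_ID(N'dbo.StagingTable') IS NOT NULL  -- StagingTable is a placeholder name
        DROP TABLE dbo.StagingTable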
I am getting the error "Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction."
My transactions are done using a LINKED SERVER. When I manually call the stored procedure from Server 1 it works, but when I call it through Service Broker it doesn't work and gives me this error.
I have a stored procedure that is called from a VB.NET application and takes an enormously long time to execute. In Query Analyzer it takes only 10 seconds, but from the application it takes ages. The stored procedure is as follows:
CREATE PROCEDURE SPTOPTWENTYUSERS
    @BEGINDATE DATETIME,
    @ENDDATE DATETIME
AS
SELECT TOP 20 STRUSERNAME, SUM(INTBYTESRECVD) AS INTDOWNLOAD
FROM TBLISAWEBLOGS
WHERE DTELOGDATE BETWEEN @BEGINDATE AND @ENDDATE
GROUP BY STRUSERNAME
ORDER BY INTDOWNLOAD DESC
The code that runs it is as follows:
sSQLString = "SPTOPTWENTYUSERS"
Using cnn As New SqlConnection(GetPath)
    Try
        Dim cmd As New SqlCommand(sSQLString, cnn)
        Dim dr As SqlDataReader
        With cmd
            .CommandType = CommandType.StoredProcedure
            .CommandTimeout = 0
            .Parameters.Add("@BEGINDATE", SqlDbType.DateTime)
            .Parameters.Add("@ENDDATE", SqlDbType.DateTime)
            .Parameters("@BEGINDATE").Value = dtpStartDate.Value
            .Parameters("@ENDDATE").Value = dtpEndDate.Value
        End With
        cnn.Open()
        dr = cmd.ExecuteReader()
Any help on why this happens would be much appreciated.
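One difference worth ruling out (my note, not from the post): Query Analyzer runs with SET ARITHABORT ON while ADO.NET connections default to OFF, so the two can end up with different cached plans for the same procedure. A quick way to try to reproduce the slow plan from Query Analyzer (the dates are placeholders):

    SET ARITHABORT OFF  -- mimic the application connection; Query Analyzer defaults to ON
    EXEC SPTOPTWENTYUSERS @BEGINDATE = '20060101', @ENDDATE = '20060201'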
Recently my system encountered a problem when retrieving certain records from MSSQL. For example, I have a database which contains 1.5 million members, and a Perl script that executes queries based on certain ranges.
The schedule is like below:
1 script - 1-250k (query finishes in less than 5 mins) <interval 5 mins>
1 script - 250k-500k (query finishes in less than 5 mins) <interval 5 mins>
1 script - 500k-750k (query finishes in less than 5 mins) <interval 5 mins>
1 script - 750k-1M (query finishes in 1++ hours) <interval 5 mins>
1 script - 1M-1.25M (query finishes in 1++ hours) <interval 5 mins>
1 script - 1.25M-1.50M (query finishes in 1++ hours) END
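Each script runs a bounded range query, presumably something like this (a sketch; the post does not show the query, so the table and column names are my guesses):

    SELECT m.member_id, m.member_name  -- placeholder columns
    FROM dbo.members AS m
    WHERE m.member_id BETWEEN 750001 AND 1000000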
After the 4th query, the queries become very slow. This problem arises only on Windows 2003 with MSSQL 2005; the current server that runs smoothly is Win2K with MSSQL 2000.
Does anyone have any idea whether this problem is caused by the operating system, the database, or something else?
Hi, I've a strange problem with an INSERT query. It's taking a long time to execute. The format is like this:

INSERT INTO table1
SELECT .. FROM table2

Executing the SELECT .. FROM table2 takes 30 seconds. The result is nothing: no records are selected. When I include the INSERT part it takes 12 hours to complete:

INSERT INTO table1
SELECT .. FROM table2

There is an index on the table, and when I delete it, the problem is still there. Keh?
Greetz,
Hennie
Hi there, I've a table with 18 million records shaped like this:

Code nvarchar(80), State int, school int, class int, Term nvarchar(80)

The following query takes too long to run (more than 2 hours):

SELECT State, school, class, Term, COUNT(Term) AS freq
GROUP BY State, school, class, Term

How may I speed up the query? My PC is a PIV (3.6 GHz) Intel, Win2003 Server, 512 MB of RAM, 80 GB of HD.
Regards,
M.Mansoorizadeh
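Not from the post, but the standard first step for a pure GROUP BY/COUNT over a big table is a covering index on the grouping columns, so the aggregate can stream from the index instead of scanning and sorting the whole table (the table and index names below are placeholders):

    CREATE INDEX IX_records_state_school_class_term
    ON dbo.records (State, school, class, Term)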
It seems inserting records takes a relatively long time. My guess is it needs time to allocate disk space for the extra space needed. Assuming this is true, are there any DB settings that allow auto space allocation in bigger chunks? I am looking for something like a "DB growth factor" or "table growth factor".
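For what it's worth, the closest built-in knob is the file growth increment, which is set per database file rather than per table; a sketch (the database and logical file names are placeholders):

    ALTER DATABASE MyDb
    MODIFY FILE (NAME = MyDb_Data, FILEGROWTH = 500MB)  -- grow in 500 MB chunks instead of small increments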
Dear friends, our package normally takes 2-2:30 minutes to fetch data over VPN with some SELECT queries, but sometimes its job runs for hours and hours. What could be the exact reason behind it? I guess its queries get stuck somewhere, but not when we run it from BIDS or run the job manually. Please help. Thanks.
I tried to test Rebuildm.exe on my local server:
1. Stopped SQL
2. Renamed master.mdf to master.mdf_old
3. Tried to start SQL (it does not start, of course)
4. Ran Rebuildm.exe ... and the "Configuring SQL Server" message goes on forever (I waited 1 1/2 hours; the progress bar was running)
I have a database with 2 almost identical tables, each about 1.2 billion rows and 0.5 TB. I've truncated one of them and started SHRINK DB WITH REORGANIZE PAGES. It has already been running for 3 days. When should I start to worry? :)
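(For reference, the T-SQL equivalent of that Enterprise Manager option is roughly the following; the database name and target percentage are placeholders:)

    DBCC SHRINKDATABASE (MyDb, 10)  -- reorganize pages, then release space, leaving 10% free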
Hi there, I have an update statement to update a field of a table (~15,000,000 records). It took around 3 hours to finish 2 weeks ago. After that no one touched the server and no configuration changed. Yesterday I re-ran it, and after more than 18 hours it still had not finished!
What's wrong with it? I ran it successfully before. I have tried two times, but the result was still the same. My SQL statement is:

UPDATE a
SET a.accounting_month = b.accounting_month
FROM all_sales AS a, date_map AS b
WHERE a.sales_date >= b.start_date AND a.sales_date < b.end_date

An index on [all_sales].sales_date was built successfully. A composite index on ([date_map].start_date, [date_map].end_date) was built successfully.

My server config is:
SQL Server 2000 with Service Pack 3
Windows 2000 with Service Pack 4
DELL PowerEdge 6650 Server
Dual Xeon 1900 MHz processors
2 GB RAM
2 GB page file on drive C
2 GB page file on drive D
DELL Diagnostics on all SCSI hard disks all PASSED.

Could any experts simply give me some help? Thanks x 1,000,000,000
Hi, it's a common issue with Reporting Services: exporting a report with huge data to Excel takes a long time to render. I'm facing the same issue; exporting an SSRS 2005 report to Excel 2003 takes a very long time.
Problem 1: The time taken to generate the Excel report is pretty long (about 10 minutes), as the report runs to hundreds of pages. (The Excel file has about 30,000 rows and is ~15 MB.)
Problem 2: Once I open the Excel file and close it, the size reduces to half. This is understood because of the many character values in the report that can be represented as single-byte strings; that's probably where Excel is making a difference. Reporting Services always writes strings as 2-byte Unicode, but Excel will try to compress to single byte when possible.
I am mainly concerned about Problem 1. Any solutions would be highly appreciated. Thanks in advance.
I have a report which is fairly simple but takes a very long time.
It involves incidents being counted by category, hence it has several UNION ALLs.
Also, the report numbers are generated from 2 tables, hence within every UNION ALL there is a LEFT or an INNER JOIN.
sample code:
SELECT 1 Sort_Order, COUNT(*) AS Call_Count, 'Incident Resolved at Level 1' AS Count_Type
FROM HOUAPPS237.CallsAndIncidents.dbo.PROBSUMMARYM1 T1
INNER JOIN HOUAPPS237.CallsAndIncidents.dbo.PROBSUMMARYM2 T2
    ON T1.NUMBERPRGN = T2.NUMBERPRGN
WHERE PROBLEM_STATUS = 'closed'
  AND T2.THIRD_ASSIGNEE IS NULL
  AND T2.THIRD_ASSIGNMENT IS NULL
  AND T1.SECONDARY_ASSIGNEE IS NULL
  AND HAL_FIRST_RES = 't'
  AND DATEPART(mm, DATEADD(hour, -@offset, CAST(T1.OPEN_TIME AS DATETIME))) = @MONTH
  AND DATEPART(yy, DATEADD(hour, -@offset, CAST(T1.OPEN_TIME AS DATETIME))) = @YEAR
  AND T1.OTI_ORIGINATOR IN (SELECT Userid FROM HOUAPPS286.HALServiceDesk.dbo.ServiceCenterAgents)
UNION ALL
-- Calls resolved by L2
SELECT 2 Sort_Order, COUNT(*) AS Call_Count, 'Incidents Resolved at Level 2 or 3' AS Count_Type
FROM HOUAPPS237.CallsAndIncidents.dbo.PROBSUMMARYM1 T1
LEFT JOIN HOUAPPS237.CallsAndIncidents.dbo.PROBSUMMARYM2 T2
    ON T1.NUMBERPRGN = T2.NUMBERPRGN
WHERE (HAL_FIRST_RES <> 't' OR HAL_FIRST_RES IS NULL)
  AND PROBLEM_STATUS = 'closed'
  AND DATEPART(mm, DATEADD(hour, -@offset, CAST(T1.OPEN_TIME AS DATETIME))) = @MONTH
  AND DATEPART(yy, DATEADD(hour, -@offset, CAST(T1.OPEN_TIME AS DATETIME))) = @YEAR
  AND T1.OTI_ORIGINATOR IN (SELECT Userid FROM HOUAPPS286.HALServiceDesk.dbo.ServiceCenterAgents)
UNION ALL
Could you suggest what might be the reason why the report churns for so long?
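One thing that stands out (my observation, not from the post): wrapping OPEN_TIME in DATEPART/DATEADD means the predicate has to be evaluated for every row, so no index on OPEN_TIME can be used for a seek. A sargable alternative is to compute the month boundaries once and compare the raw column (a sketch; @start/@end are my variable names, and @offset, @MONTH, @YEAR are the report parameters above):

    DECLARE @start DATETIME, @end DATETIME
    -- first day of @MONTH/@YEAR, shifted by @offset hours to match the original DATEADD(hour, -@offset, ...) logic
    SET @start = DATEADD(hour, @offset, DATEADD(month, (@YEAR - 1900) * 12 + @MONTH - 1, 0))
    SET @end   = DATEADD(month, 1, @start)

    -- then, in each UNION ALL branch, replace the two DATEPART predicates with:
    --   AND T1.OPEN_TIME >= @start AND T1.OPEN_TIME < @end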
I have a VB.NET application using Reporting Services that has a big delay when I set the parameters with which to call the report.
I create a new Reporting.ReportViewer.
I set the ReportServerCredentials.NetworkCredentials, ReportServerUrl, ProcessingMode, ReportPath, and everything is fine.
When I call SetParameters with a very simple parameter set, I get a delay of between 0.5 and 2.5 seconds. That delay is very noticeable to the users. Below is an extract of a SQL Profiler trace against the database showing the start time, end time, event class, and SQL text. I've marked the area with the delay in red.
I have no idea what is happening at that time, but is there anything I can do to get rid of that delay?
It seems that it could be the first time my application has had to interface with Reporting Services.
I need to modify an existing table in my database from varchar(2000) to varchar(max).
This table contains 30 million plus rows and has more than 70 columns.
When I run the ALTER command it takes too long (more than 9 minutes), which is not acceptable. Is there any way to reduce this execution time?
The following is the query I am using:
ALTER TABLE Receipt ALTER COLUMN CUSTOM VARCHAR(MAX) NULL
Please let me know if you have any suggestions to improve this.
I have a table tblCustTrans which contains: custid int, transid int, startdate datetime, value int.
custid, transid, and startdate form the composite primary key.
The table contains more than 10 million records. Now I want to fetch records with:

SELECT * FROM tblCustTrans
WHERE startdate >= '10/10/2006 10:00:00' AND startdate <= '10/10/2006 11:00:00'

This statement takes more than 2 hours to fetch the data. Is there a way to fetch the records in less time?
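Since the primary key leads on custid, a range predicate on startdate alone cannot seek it. The usual fix (my suggestion, not from the post; the index name is a placeholder) is a separate index on the date column:

    CREATE NONCLUSTERED INDEX IX_tblCustTrans_startdate
    ON dbo.tblCustTrans (startdate)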
I'm trying to rebuild my master database for SQL Server 2000. The rebuild process started fine, but it has been running for almost 4 hours now. I'm performing it on a test system. I got doubtful and started the same on another test system; the issue is the same, and it has been almost 2 hours. The DB size is less than 100 MB in both cases. IS IT NORMAL? I tried the same for SQL Server 2005 and it finished in a couple of minutes. Please advise.
We've created a Data Flow task to execute several aggregations. Our task accesses the database using an OLE DB source and selects data out of our staging tables (we've analyzed the query using SQL Server Management Studio, which didn't show any issues). But when we try to run our Data Flow task using SSIS (debug mode and DTEXEC from the command line), we find that tasks seem to stop during processing.
Unfortunately we haven't found any log file entries which explain the issue to us. We use several aggregation tasks divided into 4 sequences. Unfortunately we see just one logical processor out of 4 logical processors working. It is a Windows 2003 SP2 machine with SQL 2005 SP2 on top of it.
Is there any solution to make one package use all processors for parallel execution?
So basically we experience two issues: SSIS seems to stop somewhere in the middle, and SSIS uses just one processor instead of all four.
I broke up my cube into 24 partitions. There are about 630M total fact rows in that cube.
When I open the cube to browse in BIDS or SQL Server Management Studio, it takes a very long time to open (I think 30 minutes).
Profiler does not show that it's running a query, but messages like this keep appearing throughout the time it's opening to browse:
Progress Report Begin, 14 - Query, Started reading data from the 'p0' partition.
Progress Report End, 14 - Query, Finished reading data from the 'p0' partition.
Progress Report Begin, 14 - Query, Started reading data from the 'p10' partition.
Progress Report End, 14 - Query, Finished reading data from the 'p10' partition.
The scenario: data comes from various sources and is staged into a staging database, and from there it goes into the data warehouse database. Every day the staging database is truncated and repopulated from the various sources. I have a dimension table called DimCustomers which consists of around 300,000 rows and has lots of different types of SCD columns. It takes around 4-5 hours to load data from staging into this dimension table. Currently I'm using a For Loop container which uses a stored proc to extract 15,000 rows each time and populate my dimension tables. The first couple of loops go quickly, but as the number reaches half of the count it slows down, and hence it takes around 4-5 hours to load the data.
What would be the best approach to populate this kind of dimension table?
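One detail that often explains the fast-at-first, slow-later pattern (my note, not from the post): if the extraction proc pages by skipping already-processed rows, every batch re-reads from the start, so the cost grows with each loop. Keyset paging keeps the batch cost flat; a sketch with placeholder table and column names:

    DECLARE @LastKey INT
    SET @LastKey = 0  -- highest key handled by the previous loop iteration

    SELECT TOP (15000) c.CustomerKey, c.CustomerName  -- placeholder columns
    FROM staging.Customers AS c
    WHERE c.CustomerKey > @LastKey  -- seek straight to the next batch
    ORDER BY c.CustomerKey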
I have a table with 220 lakh records (22 million), and one of the columns is full-text enabled. We have used CONTAINSTABLE() to search for data, but we were unable to get the expected results, so we did a rebuild. During the index rebuild, the population failed. I found this error in the error log, and it says to resume the population. So I want to know how long it takes to complete the resume population process.
Here are more details about the FT index table:
Row count: 22,155,112
Index space: 1,903.250 MB (1.9 GB)
Data space: 87,552.258 MB (87 GB)
SQL Server 2008 R2
And below is the query we have used:
SELECT DISTINCT TOP 50 cal.case_id, cal.cas_details
FROM g_case_action_log cal WITH (READUNCOMMITTED)
INNER JOIN CONTAINSTABLE(es.g_case_action_log, cas_details, ' ("235355" OR "<br>235355" OR "235355<br> ") ') AS key_tbl
    ON cal.log_id = key_tbl.[key]
WHERE cal.product_id = 38810
ORDER BY cal.case_id DESC
This query does not find recently inserted/updated rows; that is the actual issue we are facing.
How do we fix this, and if the population needs to be resumed, how long does the resume population take?
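(For the resume itself, the statement is the following, assuming the table from the query above; how long it takes depends mostly on how much of the population had already completed before it failed:)

    ALTER FULLTEXT INDEX ON es.g_case_action_log RESUME POPULATION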
I have SQL 2014. When I try to restore a user database using the SSMS GUI, the Restore Database pop-up box never pops up. This happens for any database on this server at any time; sometimes I get the pop-up, sometimes I don't.
So I tried clicking on Databases at the top, then Restore Database, and then selecting the DB that I need to restore from the drop-down; it then shows "creating restore plan selecting backups", but it takes forever.
We have a full backup and trn log backups every 30 mins. So is it trying to get all these backup files in the background, causing this issue? If yes, then how do I overcome this?
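One thing worth checking (my note, not from the post): the restore dialog builds its plan from the backup history in msdb, and with log backups every 30 minutes that history grows quickly. Pruning old history often speeds the dialog up considerably; the cutoff date below is just an example:

    EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = '2015-01-01'  -- delete history older than this date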
We have a proc that adds some fields to a few of our tables, and normally there are no issues. For one of our client databases this process takes anywhere from 5-10 minutes to add the fields, which causes the app to time out waiting. After poking around, looking at the proc, and trying different things, I found that it happens only for this one database and ONLY when there is data in the table. If I truncate the table and run the same procedure, everything is fine. The tables all have the same index on 4 columns, and the columns being added are not indexed because of the stupid hoops we have to jump through to pre-pivot data for our reporting package.
I have a problem with JDBC 2005 (1.1) running against SQL 2005 Express Edition (SP2). Sometimes a statement takes a long time (more than 10 seconds); sometimes the same statement takes just a few seconds. It is very unpredictable. The query we have the problem with is usually a join statement.