I have a SQL stored proc that calls a CLR function. This function does a "select ... for xml" statement, manipulates the XML a little, and returns the manipulated XML to the stored proc.
This all works fine when I call the stored proc from a query window, but when BizTalk calls the stored proc, the CLR function fails. I have a feeling this may have to do with BizTalk using MSDTC, but I am not sure.
Here's a code snippet from where CLR function fails:
SqlConnection conn = new SqlConnection("Context Connection=true");
conn.Open();
SqlCommand cmd = new SqlCommand("Select * From Items FOR XML AUTO", conn);
XmlDocument xdoc = new XmlDocument();
xdoc.Load(cmd.ExecuteXmlReader());
Under BizTalk, the last line fails with: System.InvalidOperationException "Invalid command sent to ExecuteXmlReader. The command must return an Xml result."
Now, to see why the XmlReader doesn't like the returned data, I changed the last two lines of that snippet to this:
SqlDataReader dr = cmd.ExecuteReader();
dr.Read();
Object obj = dr[0];
If I set a breakpoint after that last line, obj is of type string when I call the proc myself, but it is a byte[] under BizTalk. If I look at the bytes themselves, it's close to the expected XML... but with some non-text bytes sprinkled around. I can't seem to cast or encode the byte array into anything useful.
Anyone have any idea what is going on here? Why would the same code return different types based on a) who is calling it, or b) the type of transaction used?
I have a report where I use Globals!ReportName in the header for the report title. In development and on standalone SSRS, the value of Globals!ReportName is in mixed case and the file extension is omitted. When the report is published to a MOSS server integrated with SSRS, the value of Globals!ReportName is all lower case and the file extension is included.
Is there any reason for this change in behavior and is there a way I can put back the mixed case and omit the file extension?
My DTS package performs the following:
1. Four transformations move data from Sybase tables A, B, C, D to temp tables tmpA, tmpB, tmpC, tmpD in MSSQL.
2. Next, a task runs four stored procedures to load the four tmp* tables into the actual tables A, B, C, D (so the task is "exec spA, exec spB, exec spC, exec spD").
3. There are 9/26/280/10000 records in tables A, B, C, D respectively.
4. Each stored procedure basically checks whether each record in the tmp* table exists in the actual table, based on the primary key, and then performs an insert/update.
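For reference, each load procedure is roughly shaped like this (a sketch with hypothetical column names; the real procedures differ):

CREATE PROCEDURE spA AS
BEGIN
    -- update rows that already exist in the target
    UPDATE A
    SET    A.SomeValue = t.SomeValue
    FROM   A INNER JOIN tmpA t ON t.KeyCol = A.KeyCol

    -- insert rows that do not exist yet
    INSERT INTO A (KeyCol, SomeValue)
    SELECT t.KeyCol, t.SomeValue
    FROM   tmpA t
    WHERE  NOT EXISTS (SELECT 1 FROM A WHERE A.KeyCol = t.KeyCol)
END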
The strange thing is:
1. All 9 records in tmpA are loaded to A, but only 17 records from tmpB are loaded to B.
2. The same code, "exec spA, exec spB, exec spC, exec spD", copied into Query Analyzer, runs to completion: all 10000+ records are loaded, so there is no problem with the data.
3. If I "split" the task so that task1 loads A and task2 loads B, C, D (with task2 running after task1), again all data for A is loaded, but only 25 records from B.
4. I tried to catch @@error in the stored procedures for the insert/update statements, but there is no error. Most importantly, the stored procedures run fine in Query Analyzer.
Is there some sort of timeout or buffer issue here that is causing this strange behaviour?
I want to create a new measure that behaves based on the dimension dropped in. For example, if I add only the employee dimension, it should aggregate data from #Calls Count; but if I add the product dimension, it should display #Product Calls at the product level and #Calls Count at the employee level, as shown in the screenshot.
I created a PDA application with a database, which has a table with a uniqueidentifier field as the primary key.
When doing the bulk insert from a dataset into the SQL Mobile database, the record is inserted, but the id that was entered into the SQL Server 2005 database is not: a new id is created instead. The code is below.
conAdap = new SqlCeDataAdapter(strQuery, conSqlceConnection);
SqlCeCommandBuilder cmdBuilder = new SqlCeCommandBuilder(conAdap);
I'm hoping someone can help me out, at least by pointing me to someone I can ask, if not answering the question directly.

I have some encrypted values in a SQL Server 2000 database that I decrypt and use in a website that I just converted from .NET 1.1 to .NET 2.0. The data is pulled from the database using standard ADO, with no changes between the .NET 1.1 version and the .NET 2.0 version, yet for some data entries, when the identical value is pulled by the .NET 2.0 code, it is changed or shortened. For example, the 1.1 code traces out a value pulled from the db as:

ᒪ࢖淨�d�把���媑쬹�䜻ꖉ���

The same value pulled from the database by .NET 2.0 looks like this:

ᒪ࢖淨�d�把媑쬹�䜻ꖉ

Do you know why the database value would be interpreted differently by .NET 2.0 than by .NET 1.1? How can I bring this in sync so that both 1.1 sites and 2.0 sites can use the same data?
There is an int field in my table called "WeekNo", and when I use ORDER BY WeekNo DESC, I am getting the following result: 9 8 7 7 6 5 4 3 2 18 17 16 15 15 14 13 13 12 11 10 10 1
This does not seem right; can anyone comment on why I am getting this result?
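The output matches a string sort rather than a numeric one, which makes me wonder whether the column (or something between it and the query) is actually being treated as character data. In case it helps, this would be my workaround if so (hypothetical table name):

-- if WeekNo is really being compared as a string, forcing the cast
-- should bring back the expected numeric order
SELECT WeekNo
FROM   MyTable
ORDER BY CAST(WeekNo AS int) DESC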
Our Transactions/sec counter jumped quite a bit when we moved to SQL Server 2005. The move coincided with increased load so we didn't think anything of it until recently. Upon further review, the counter just seems too high.
There was an article in SQL Server magazine a few years ago by Brian Moran where he states, "Transactions/sec doesn't measure activity unless it's inside a transaction. Batch Requests/sec measures all batches you send to the server even if they don't participate in a transaction." He goes on to say that Transactions/sec will be skewed lower because it is a subset of Batch Requests/sec. (http://www.sqlmag.com/Article/ArticleID/26380/sql_server_26380.html)
The article was written for SQL Server 2000. We conducted tests in 2000 and found what he said to be right on the money. SELECT statements increased Batch Requests/sec, but not Transactions/sec. UPDATE/INSERT/DELETE statements increased both in lockstep. Makes perfect sense so far.
We conducted the same tests in 2005 and found a radically different story. While SELECT statements behaved the same, UPDATE/INSERT/DELETE statements showed Transactions/sec skyrocket to 2-10x more than Batch Requests/sec for the duration of the statement. In other words, a single transaction submitted by our application fires off many more transactions than the one we submitted. I was unable to pinpoint exactly what these "hidden" transactions were actually doing. Is this something that occurred in 2000 but simply wasn't reported? Or is it new behavior in 2005?
While trying to answer these questions we noticed a second strange behavior in 2005. When no queries are being executed the Transactions/sec counter still jumps every six seconds like clockwork. And these phantom transactions number in the thousands. We tried to use profiler to capture what SQL was being executed, but nothing shows up in any SQL Statement or Batch event. However, when we turned on the SQLTransaction event we found it, sort of. An object called GhostCleanupTask runs every six seconds causing thousands of transactions. We don't know exactly what it is doing, but we noticed that it ran consistently on some databases, but never on other databases. Both sets of databases are identical and in use.
So, all of this investigation leaves me with three final questions.
1. What is behind all the extra transactions caught by perfmon when I submit a single transaction?
2. What is GhostCleanupTask, and why does it take so many transactions? (And why does it only run on certain databases?)
3. If a potential customer asks for our Transactions/sec count, is it accurate to give them the big number, knowing that our application is only actually submitting a fraction of that? On the other hand, the system apparently is actually doing that many transactions. (For instance, on our production server during peak, Batch Requests/sec is about 4,000, while Transactions/sec hits 26,000.)
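For anyone digging into the same thing: in 2005 the perfmon counters are also exposed per database through a DMV, which should at least show where the phantom transactions land (a sketch):

-- per-database Transactions/sec from the perfmon DMV
-- (note: these values are cumulative; sample twice and take the
-- difference to get a rate)
SELECT instance_name AS database_name, cntr_value
FROM   sys.dm_os_performance_counters
WHERE  object_name LIKE '%:Databases%'
  AND  counter_name = 'Transactions/sec'
ORDER BY cntr_value DESC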
I've been trying to find an answer to the mystery of how date conversions differ between SQL Server and Excel. In Excel the number 37711 is displayed as the date '3/31/2003'. The same number in SQL Server yields '2003-04-02' (I used the following: select cast(37711 as datetime)).
Any idea what is going on here and how I might resolve this problem?
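From what I can tell (worth double-checking), Excel's date serials count from 1 = January 1, 1900 and also include the phantom leap day February 29, 1900, while CAST(int AS datetime) counts from 0 = January 1, 1900. The two offsets combine into a two-day difference, so this lines the values up:

-- subtract 2 to convert an Excel date serial to a SQL Server datetime
select cast(37711 - 2 as datetime)   -- 2003-03-31 00:00:00.000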
Depending on the printer, the report prints differently. Sometimes it's all messed up on certain printers; on others it prints fine. I see the problem mostly with older HP printers.
I'm looking for the minimum date of an entry into a history table. The table contains multiple entries for the customer and the item with an activation and deactivation date for each entry.
I could use the following:
select customerId, item, min(activationDate) from history group by customerId, item
Or a subquery:
select customerId, item, activationDate
from history h1
where activationDate=(select min(activationDate) from history h2 where h2.customerId=h1.customerId and h2.item=h1.item)
How are these two queries parsed differently by SQL Server?
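For completeness, there is also a derived-table formulation of the same query; I'd be curious how it compares to the other two (same history table as above):

select h.customerId, h.item, h.activationDate
from history h
inner join (select customerId, item, min(activationDate) as minDate
            from history
            group by customerId, item) m
    on  m.customerId = h.customerId
    and m.item = h.item
    and m.minDate = h.activationDate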
Same RDL, 2 different servers. I run the report on my computer and export to PDF, it prints properly. When the customer runs the report on their server (SSRS 2K5 SP1, same as mine), they get it displayed differently. The columns on the report extend to the next page and the lines are thicker.
Is this a formatting issue on the customer's PC? It uses standard fonts (Tahoma, Sans-serif).
I've written a script that should create a SPROC. It does, but I expect the SPROC to contain everything that is bold when I run this script (see below). It doesn't... when I run the script, it creates the required table, but the SPROC gets created with only the italic text. What gives? Please throw me a bone here :)

USE [myproject];
GO
IF EXISTS ( SELECT * FROM INFORMATION_SCHEMA.ROUTINES
            WHERE SPECIFIC_SCHEMA = N'dbo'
              AND SPECIFIC_NAME = N'myproject_CreateTable_SendEmail_Errors' )
    DROP PROCEDURE dbo.myproject_CreateTable_SendEmail_Errors
GO
CREATE PROCEDURE dbo.myproject_CreateTable_SendEmail_Errors
AS
GO
/****** Object: Table [dbo].[sendEmail_Errors] Script Date: 10/28/2006 05:31:30 ******/
IF EXISTS (SELECT * FROM sys.objects
           WHERE object_id = OBJECT_ID(N'[dbo].[sendEmail_Errors]') AND type in (N'U'))
    DROP TABLE [dbo].[sendEmail_Errors]
GO
/****** Object: Table [dbo].[sendEmail_Errors] Script Date: 10/28/2006 05:14:09 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[sendEmail_Errors](
    [errorID] [smallint] IDENTITY(1,1) NOT NULL,
    [sendToEmail] [nchar](256) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [timeLogged] [datetime] NOT NULL
        CONSTRAINT [DF_sendEmail_Errors_timeLogged] DEFAULT (getutcdate()),
    CONSTRAINT [PK_SendEmail_Errors] PRIMARY KEY CLUSTERED ( [errorID] ASC )
        WITH (PAD_INDEX = OFF, IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO
EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'Error count',
    @level0type=N'SCHEMA', @level0name=N'dbo',
    @level1type=N'TABLE', @level1name=N'sendEmail_Errors',
    @level2type=N'COLUMN', @level2name=N'errorID'
GO
EXEC sys.sp_addextendedproperty @name=N'MS_Description',
    @value=N'This table is used to log failed calls to send a verification email to users. No Email was sent to the user.',
    @level0type=N'SCHEMA', @level0name=N'dbo',
    @level1type=N'TABLE', @level1name=N'sendEmail_Errors'
GO
USE [myproject]
GO
/****** Object: StoredProcedure [dbo].[myproject_CreateTable_SendEmail_Errors] Script Date: 10/31/2006 01:59:19 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[myproject_CreateTable_SendEmail_Errors]
AS
I have a database on a SQL Server 2000 (sp3a) installation. For some reason it's reporting time that is 7 hours ahead of the system time.
The application is on one server; the DB is on a shared production server. The app server and the DB server report the same system time and use a network time server. All the other DBs on the shared production DB server report time correctly.
My questions:
Is there a T-SQL query to use to see what the time/timezone is for that database? Is there a T-SQL query I can use to set the db time (not the system time)? Anyone have any other suggestions as to what could be wrong?
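For what it's worth, the only time function I know of is instance-level rather than per-database, which is part of my confusion:

-- run from each database in question; I'd expect identical values
SELECT GETDATE() AS current_server_time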
Ok, I have a table with IP addresses stored in decimal format, using both positive and negative numbers. The way that they are stored is:

Positive: 1 thru 2147483647 = 0.0.0.1 - 127.255.255.255
Negative: -2147483648 thru -1 = 128.0.0.0 - 255.255.255.255

Conversion:
positive: x/2^24 . (x % 2^24)/2^16 . etc.
negative: (x+2^32)/2^24 . ((x+2^32) % 2^24)/2^16 . etc.
I have a script which works by using a UNION; the WHERE clauses of the two halves are x > 0 and x < 0.

My problem is that I need to use a 3rd party app to run the script (McAfee ePO), and McAfee does not recognize the UNION. My question is: can I achieve the same results as the script below without using UNION?
SELECT ReportFullPathNode.FullPathName,
       cast(cast(IPSubnetMask.IP_Start as bigint)/16777216 as varchar) + '.' +
       cast(cast(IPSubnetMask.IP_Start as bigint)%16777216/65536 as varchar) + '.' +
       cast(cast(IPSubnetMask.IP_Start as bigint)%16777216%65536/256 as varchar) + '.' +
       cast(cast(IPSubnetMask.IP_Start as bigint)%16777216%65536%256 as varchar),
       cast(cast(IPSubnetMask.IP_End as bigint)/16777216 as varchar) + '.' +
       cast(cast(IPSubnetMask.IP_End as bigint)%16777216/65536 as varchar) + '.' +
       cast(cast(IPSubnetMask.IP_End as bigint)%16777216%65536/256 as varchar) + '.' +
       cast(cast(IPSubnetMask.IP_End as bigint)%16777216%65536%256 as varchar),
       cast(IPSubnetMask.LeftMostBits as varchar),
       IPSubnetMask.IP_Start
FROM IPSubnetMask, ReportFullPathNode ReportFullPathNode
WHERE IPSubnetMask.IP_Start > 0
  AND IPSubnetMask.ParentID = ReportFullPathNode.LowestNodeID
UNION ALL
SELECT ReportFullPathNode.FullPathName,
       cast(cast(4294967296+IPSubnetMask.IP_Start as bigint)/16777216 as varchar) + '.' +
       cast(cast(4294967296+IPSubnetMask.IP_Start as bigint)%16777216/65536 as varchar) + '.' +
       cast(cast(4294967296+IPSubnetMask.IP_Start as bigint)%16777216%65536/256 as varchar) + '.' +
       cast(cast(4294967296+IPSubnetMask.IP_Start as bigint)%16777216%65536%256 as varchar),
       cast(cast(4294967296+IPSubnetMask.IP_End as bigint)/16777216 as varchar) + '.' +
       cast(cast(4294967296+IPSubnetMask.IP_End as bigint)%16777216/65536 as varchar) + '.' +
       cast(cast(4294967296+IPSubnetMask.IP_End as bigint)%16777216%65536/256 as varchar) + '.' +
       cast(cast(4294967296+IPSubnetMask.IP_End as bigint)%16777216%65536%256 as varchar),
       cast(IPSubnetMask.LeftMostBits as varchar),
       IPSubnetMask.IP_Start+4294967296
FROM IPSubnetMask, ReportFullPathNode ReportFullPathNode
WHERE IPSubnetMask.IP_Start < 0
  AND IPSubnetMask.ParentID = ReportFullPathNode.LowestNodeID
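One direction I have been considering (a sketch, untested against the real data; it assumes IP_Start and IP_End always share the same sign): fold the sign handling into a derived table with CASE, which removes the UNION entirely:

SELECT r.FullPathName,
       cast(s.StartU/16777216 as varchar) + '.' +
       cast(s.StartU%16777216/65536 as varchar) + '.' +
       cast(s.StartU%16777216%65536/256 as varchar) + '.' +
       cast(s.StartU%16777216%65536%256 as varchar),
       cast(s.EndU/16777216 as varchar) + '.' +
       cast(s.EndU%16777216/65536 as varchar) + '.' +
       cast(s.EndU%16777216%65536/256 as varchar) + '.' +
       cast(s.EndU%16777216%65536%256 as varchar),
       cast(s.LeftMostBits as varchar),
       s.StartU
FROM (SELECT ParentID, LeftMostBits,
             -- shift negative values into the unsigned 32-bit range
             StartU = cast(CASE WHEN IP_Start < 0 THEN 4294967296 + IP_Start ELSE IP_Start END as bigint),
             EndU   = cast(CASE WHEN IP_End   < 0 THEN 4294967296 + IP_End   ELSE IP_End   END as bigint)
      FROM IPSubnetMask) s
     INNER JOIN ReportFullPathNode r ON s.ParentID = r.LowestNodeID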
I have an update trigger on a table. When a specific column is updated, I get the rowid from 'inserted' and then pass it via service broker to another database that will fire off a maintenance routine at a later time. This whole process seems to work fine if I update a single row at a time through Query Analyzer.
During testing (of the service broker part) I found that if in Query Analyzer I run an update that updates all of the records at once, then the trigger seems to fire only once for the entire process, therefore killing the rest of my process.
I would have thought that regardless of how a record was being updated the trigger would fire atomically for each row.
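If I understand the documentation correctly (corrections welcome), triggers in SQL Server fire once per statement, not once per row, and "inserted" then contains every affected row. If that's right, I assume the fix is to treat inserted as a set, something like this sketch (hypothetical table/column names):

CREATE TRIGGER trg_MyTable_Update ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF UPDATE(WatchedColumn)
    BEGIN
        -- queue every updated row, not just one
        INSERT INTO dbo.MaintenanceQueue (rowid)
        SELECT i.rowid
        FROM inserted i;
        -- each queued rowid is then forwarded via Service Broker
    END
END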
I am trying to import some data from CSV files. When I try it using BULK INSERT, I get a conversion error. When I use the exact same format file and data file with OPENROWSET, it works fine. I would prefer to use BULK INSERT, as I can make some generic stored procedures to handle all my imports and not have to code the column names in the SQL. Any suggestions?
BULK INSERT stuff
FROM 'c:\projects\testdata\list.txt'
WITH
(FORMATFILE='c:\projects\testdata\myformat.xml')
insert into stuff (ExternalId, Description, ScheduledDate, SentDate, Name)
select *
from OPENROWSET (BULK 'c:\projects\testdata\list.txt',
                 FORMATFILE='c:\projects\testdata\myformat.xml') as t1
The destination table has more columns than the data file. The field IDs in the format file represent the ordinal position of the columns in the destination table. Column 1 in the destination table is an int identity. The conversion failure comes from trying to convert column 5 to int, which makes me think BULK INSERT is ignoring the NAME attributes in the XML and just inserting the columns into the table in order, without skipping any.
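If that theory is right, would the view trick be the standard workaround? A sketch of what I mean (assuming the remaining destination columns are nullable or have defaults):

CREATE VIEW dbo.stuff_load AS
    SELECT ExternalId, Description, ScheduledDate, SentDate, Name
    FROM dbo.stuff
GO

BULK INSERT stuff_load
FROM 'c:\projects\testdata\list.txt'
WITH
(FORMATFILE='c:\projects\testdata\myformat.xml')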
Hi!

A larger SP runs OK in the console. When called from VB.NET 1.1, results get truncated and some things don't run at all. The connection is ODBC. Small SPs run OK.

Is this default behavior or something common? Are there VB parameters to let a larger SP run without interruption?

-Bahman
SSIS is behaving differently in different environments even though the code is the same.

One thing is not working correctly: I am converting a string column to float in a Data Conversion transformation. In our local environments the package works fine, but in the production environment it does not; it is unable to convert the data and throws an error:
"The data value cannot be converted for reasons other than sign mismatch or data overflow"
The following code does not function if I use SQLOLEDB; if I omit the provider and default to the ODBC OLE DB provider, it works correctly. I assume I am coding something wrong for the SQLOLEDB provider. Any help is greatly appreciated.
VB Code
Public Function SqlExecuteResult(xSQL As String, sServer As String, sDatabase As String, sUserName As String, sPassword As String, sCaller As String, Optional bLog As Boolean = False) As Object
    Dim oDB As Object
    Dim oRS As Object
    Set oDB = CreateObject("adodb.connection")
    Set oRS = CreateObject("adodb.recordset")
    ' ... (connection open and execute logic omitted)
End Function

Private Sub Form_Load()
    Dim rs As Object
    Set rs = SqlExecuteResult("exec NextEntry 'SentMessages'", "surecomp-bob", "pmsureus33", "sa", "", "")
    MsgBox rs.fields(0)
End Sub
SQL procedure
CREATE PROCEDURE NextEntry @CounterName Varchar(20)
AS
BEGIN
    DECLARE @counter int
    SELECT @counter = counter FROM counters WHERE countername = @CounterName
    SELECT @counter = @counter + 1
    UPDATE counters SET counter = @counter WHERE countername = @CounterName
    SELECT counter FROM counters WHERE countername = @CounterName
END
GO
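One thing I want to rule out, though I have not confirmed it applies here: with ADO, the "rows affected" count from the UPDATE can come back as an extra empty recordset ahead of the real result, and providers seem to differ in how they expose it. Adding SET NOCOUNT ON at the top of the procedure would take that out of play:

CREATE PROCEDURE NextEntry @CounterName Varchar(20)
AS
BEGIN
    SET NOCOUNT ON   -- suppress per-statement row counts (extra recordsets)
    DECLARE @counter int
    SELECT @counter = counter FROM counters WHERE countername = @CounterName
    SELECT @counter = @counter + 1
    UPDATE counters SET counter = @counter WHERE countername = @CounterName
    SELECT counter FROM counters WHERE countername = @CounterName
END
GO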
If I run the same FOR XML query in a Developer Edition environment and an Enterprise Edition environment, the results are different. The query is exactly the same.
Here is the query:
DECLARE @MessageBody XML
DECLARE @AuditTable SYSNAME
DECLARE @SendTrans BIT
DECLARE @SendAudit BIT
DECLARE @RecordCount INT
DECLARE @OperationType CHAR(1)

SET @RecordCount = @@ROWCOUNT
SET @OperationType = 'U'
SET @SendTrans = 1
SET @SendAudit = 1
SET @AuditTable = 'States'

SELECT @MessageBody =
(
    SELECT *
    FROM (
        SELECT TOP 10
               'INSERTED' AS ActionType,
               @SendTrans AS SendTrans,
               @SendAudit AS SendAudit,
               COLUMNS_UPDATED() AS ColumnsUpdated,
               GETDATE() AS AuditDate,
               @AuditTable AS AuditTable,
               'test' AS UserName,
               @RecordCount AS RecordCount,
               *
        FROM l_states
    ) AuditRecord
    FOR XML AUTO, ROOT('AuditTable'), BINARY BASE64
)
SELECT @MessageBody
In my DEV env (Developer Edition), this result is produced:

<AuditTable>
  <AuditRecord ActionType="INSERTED" SendTrans="1" SendAudit="1" AuditDate="2007-06-22T15:43:12.497" AuditTable="States" UserName="test" RecordCount="1" StateAbbreviation="AK" State="Alaska" />
  <AuditRecord ActionType="INSERTED" SendTrans="1" SendAudit="1" AuditDate="2007-06-22T15:43:12.497" AuditTable="States" UserName="test" RecordCount="1" StateAbbreviation="AL" State="Alabama" />
</AuditTable>
In my Enterprise Edition env, this is the result:

<AuditTable>
  <AuditRecord ActionType="INSERTED" SendTrans="1" SendAudit="1" AuditDate="2007-06-22T15:44:48.230" AuditTable="States" UserName="test" RecordCount="1">
    <l_states StateAbbreviation="AK" State="Alaska" />
    <l_states StateAbbreviation="AL" State="Alabama" />
  </AuditRecord>
</AuditTable>
Does anyone have any idea what might be wrong? Any help is greatly appreciated. Tim
I am converting old MS Access queries to T-SQL and ran into a problem: the same update query returned different results. The idea is to subtract each of the amounts in Table2 from Table1:
Source sample tables and content:

Table1
ID  Amount
1   100

Table2
ID  Amount
1   10
1   20
1   30
In Access (original source):

UPDATE Table1 INNER JOIN Table2 ON Table1.ID = Table2.ID
SET Table1.Amount = Table1.Amount - Table2.Amount

In T-SQL (converted):

UPDATE Table1
SET Table1.Amount = Table1.Amount - Table2.Amount
FROM Table1 INNER JOIN Table2 ON Table1.ID = Table2.ID
The syntax for T-SQL differs from Access. When each query is run on its respective database, Table1.Amount in Access became 40 (100 - 10 - 20 - 30), but Table1.Amount in SQL Server became 90 (100 - 10).

It looks as if T-SQL only ran one row? Or could it be that in T-SQL, updates are written to the database in batches, which is why Table1.Amount was not updated for all update instances? Any help would be greatly appreciated.
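If I understand the T-SQL behavior correctly (please correct me if not), an UPDATE whose FROM clause joins a target row to several source rows still updates that row only once, and which source row wins is undefined; that would explain the 90. Aggregating before the join reproduces the cumulative Access result:

-- sum the per-ID amounts first so each Table1 row is updated exactly once
UPDATE Table1
SET Table1.Amount = Table1.Amount - t2.Total
FROM Table1
     INNER JOIN (SELECT ID, SUM(Amount) AS Total
                 FROM Table2
                 GROUP BY ID) t2
         ON Table1.ID = t2.ID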
I am able to get reports going with tables sized properly. They look fine on the Report Server website, and I adjust the column widths so that the headings and data look nice. When I set up a subscription to be delivered by "Report Server E-Mail," though, the table formatting gets completely distorted.
In particular, I have two tables, with some column headers being two short words (e.g. Max Height). When rendering on the site, I adjust the columns so the full column header is visible on one line. When I receive the email and read it in Outlook, the header row is now about twice as tall and everything is scrunched together. Both the headings and the data in the fields do not format the same as on the website.
The two tables tend to actually have the exact same width in the email version, although occasionally they are a little different (in the web version one is about half as wide as the other). I have tried just making the columns bigger and that has not worked. I've tried making the font sizes smaller, which didn't work. If I do that, leaving the columns the same width, the email version just gets scrunched into a smaller area with the same text-wrapping problems.
If I open the email in a browser (in a web mail interface) the report renders perfectly as on the site.
I have almost all the default settings, and haven't been messing around with page sizes and things like this (except after, to see if that would fix the problem).
Any ideas, similar experiences, or suggestions? A book I should read or any reference you could point me to in order to figure this out would be helpful. I haven't been able to understand this using either web searches or the two SQL Reporting Services books I have.
My product was developed for and works correctly on SQL Server 2000. However, when we upgraded to 2005, we found that certain system stored procedures were different, causing our product to break.
We can easily change our stored procedures to work in 2005, but we have a large client base, some of whom will be using each version. Our current solution is to check the version of SQL Server during installation and choose which script to use at that time in order to have an appropriate stored procedure for that version, but we are concerned about users who install our product with SQL Server 2000 and then upgrade to SQL Server 2005.
How can I make a stored procedure that will run differently depending on the version? I tried something like:
if (select charindex('2000', @@version)) > 0
begin
    -- SQL Server 2000
    SELECT ...
    FROM ...
    WHERE ...
end
else
begin
    -- SQL Server 2005
    SELECT ...
    FROM ...
    WHERE ...
end
Unfortunately, the system tables I'm selecting from have different structures in the different versions (one example is msdb.dbo.sysjobschedules vs. msdb.dbo.sysschedules), and even though the code never gets into the SQL Server 2000 section on 2005, SQL Server parses the whole procedure for errors before allowing it to be saved, and will not allow this.
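The direction I am leaning (a sketch with stand-in queries, not our real ones): move each version-specific statement into dynamic SQL, since the string is only parsed when that branch actually executes:

if charindex('2000', @@version) > 0
    -- 2000: job schedules live in msdb.dbo.sysjobschedules
    exec sp_executesql N'select name from msdb.dbo.sysjobschedules'
else
    -- 2005: they moved to msdb.dbo.sysschedules
    exec sp_executesql N'select name from msdb.dbo.sysschedules'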
Hi,

I've run into a curious problem with MS SQL Server 8.0. Using sp_help and SQL Query Analyzer's object browser to view the columns returned by a view, I find that sp_help is reporting stale information.

In a recent schema change, for example, someone lengthened a varchar column from 15 to 50 characters. If we use sp_help to find out about a view that depends upon this column, it still shows up as VARCHAR(15), whereas the object browser correctly reports it as VARCHAR(50).

Dropping and recreating the view fixes the problem, but we have quite a few views, and dropping and re-creating all of them any time a schema change is made is something we want to avoid. I tried using DBCC CHECKDB in hopes that it would 'refresh' SQL Server's information, but no luck.

(If you're curious as to why I don't just use the object browser instead, read the boring technical details below.)

Has anyone seen this before? Is there some other way (other than re-creating every view) to tell SQL Server to "refresh" its information?

Thanks!
-Scott

----------------------
Boring Technical Information:

The reason this is an issue for us (i.e., I can't just use the object browser instead) is that our object model classes are built using standard metadata query methods in Java that seem to be returning the same stale information that sp_help is returning. These methods are part of the standard JDK, so we can't easily fiddle with them. Anyway, as a result, our object model (at least with respect to views) may not match our current schema!
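A possible lead I still need to test, in case it helps anyone with the same symptom: sp_refreshview is supposed to rebuild a view's persisted metadata without dropping it.

EXEC sp_refreshview 'dbo.MyView'   -- hypothetical view name

-- and a sketch for sweeping all user views at once:
DECLARE @view nvarchar(517)
DECLARE view_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT QUOTENAME(USER_NAME(uid)) + '.' + QUOTENAME(name)
    FROM sysobjects
    WHERE type = 'V' AND OBJECTPROPERTY(id, 'IsMSShipped') = 0
OPEN view_cursor
FETCH NEXT FROM view_cursor INTO @view
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_refreshview @view
    FETCH NEXT FROM view_cursor INTO @view
END
CLOSE view_cursor
DEALLOCATE view_cursor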
Trying to locate information on MSDTC. Is this "needed" to run SQL Server? That is, if this part of the installation is deleted, will SQL Server still function? Also, does anyone know if this is a crucial tool needed by Veritas Volume Manager or Windows Disk Manager?
If anyone knows of a link to this information, I'd appreciate it. My searches come up with lots on information on MSDTC, but nothing that answers my specific questions.
When I do a BEGIN TRAN against a SQL Server sitting on Windows 2003 Server from a SQL Server sitting on Windows 2000 Server, the transaction hangs, and if I try to kill it, the transaction sits in a ROLLBACK state.

I tried setting the properties for MSDTC and restarted the Windows 2003 Server, but in vain.
I keep getting this error on my application server... what does it mean?

The description for Event ID ( 0 ) in Source ( ODBC ) cannot be found. The local computer may not have the necessary registry information or message DLL files to display messages from a remote computer. The following information is part of the event: Failed to enlist in DTC: SQL state 37000, native error 8501, error message [Microsoft][ODBC SQL Server Driver][SQL Server]MSDTC on server 'PROSQL' is unavailable.

Any help would be appreciated... thanks
-Jim
I'm using MSDE. On the little icon in the task tray, under current service, you have a choice of MSSQLServer, and another choice is MSDTC. What is MSDTC? And what is SQL Server Agent?
The resource MSDTC is down. Every time we restart the servers (node1 and node2) it works pretty well, but about half an hour later the problem is back. We reinstalled MSDTC; that didn't work either. MSDTC and SQL Server are on a cluster. Can anyone help me out?
I am running SQL Server 7.0 (SP4) on Windows XP Professional (SP2). Whenever I try an insert/update type of activity, the system returns the following message:
Server: Msg 8501, Level 16, State 3, Line 2 MSDTC on server 'SERVER' is unavailable.
However, all the required services, including MSDTC, are running on the system.