Transactions/sec Behaves Much Differently In 2005
Aug 6, 2007
Our Transactions/sec counter jumped quite a bit when we moved to SQL Server 2005. The move coincided with increased load so we didn't think anything of it until recently. Upon further review, the counter just seems too high.
There was an article in SQL Server magazine a few years ago by Brian Moran where he states, "Transactions/sec doesn't measure activity unless it's inside a transaction. Batch Requests/sec measures all batches you send to the server even if they don't participate in a transaction." He goes on to say that Transactions/sec will be skewed lower because it is a subset of Batch Requests/sec. (http://www.sqlmag.com/Article/ArticleID/26380/sql_server_26380.html)
The article was written for SQL Server 2000. We conducted tests in 2000 and found what he said to be right on the money. SELECT statements increased Batch Requests/sec, but not Transactions/sec. UPDATE/INSERT/DELETE statements increased both in lockstep. Makes perfect sense so far.
We conducted the same tests in 2005 and found a radically different story. While SELECT statements behaved the same, UPDATE/INSERT/DELETE statements drove Transactions/sec to 2-10x the Batch Requests/sec rate for the duration of the statement. In other words, a single transaction submitted by our application appears to fire off many more transactions than the one we submitted. I was unable to pinpoint exactly what these "hidden" transactions were actually doing. Is this something that occurred in 2000 but simply wasn't reported, or is it new behavior in 2005?
While trying to answer these questions we noticed a second strange behavior in 2005. When no queries are being executed the Transactions/sec counter still jumps every six seconds like clockwork. And these phantom transactions number in the thousands. We tried to use profiler to capture what SQL was being executed, but nothing shows up in any SQL Statement or Batch event. However, when we turned on the SQLTransaction event we found it, sort of. An object called GhostCleanupTask runs every six seconds causing thousands of transactions. We don't know exactly what it is doing, but we noticed that it ran consistently on some databases, but never on other databases. Both sets of databases are identical and in use.
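As a cross-check against perfmon, the same two counters can be sampled straight from T-SQL. This is only a rough sketch: it assumes SQL Server 2005's sys.dm_os_performance_counters DMV, and that these per-second counters expose cumulative values you have to difference yourself.

-- Sample the cumulative values, wait ten seconds, sample again, and report the deltas.
DECLARE @before TABLE (counter_name nvarchar(128), cntr_value bigint)
INSERT @before
SELECT RTRIM(counter_name), cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Batch Requests/sec'
   OR (counter_name = 'Transactions/sec' AND instance_name = '_Total')

WAITFOR DELAY '00:00:10'

SELECT RTRIM(p.counter_name) AS counter_name,
       p.cntr_value - b.cntr_value AS delta_last_10_seconds
FROM sys.dm_os_performance_counters p
JOIN @before b ON RTRIM(p.counter_name) = b.counter_name
WHERE p.counter_name = 'Batch Requests/sec'
   OR (p.counter_name = 'Transactions/sec' AND p.instance_name = '_Total')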
So, all of this investigation leaves me with three final questions.
1. What is behind all the extra transactions caught by perfmon when I submit a single transaction?
2. What is GhostCleanupTask and why does it take so many transactions? (And why does it only run on certain databases?)
3. If a potential customer asks for our Transactions/sec count, is it accurate to give them the big number, knowing that our application is only actually submitting a fraction of that? On the other hand, the system apparently is actually doing that many transactions. (For instance, on our production server during peak, Batch Requests/sec is about 4,000, while Transactions/sec hits 26,000.)
Any insight would be much appreciated.
3P
View 1 Replies
Oct 9, 2007
Hi,
SSIS is behaving differently in different environments even though the code is the same.
The specific problem is a Data Conversion transformation that converts a string column to a float. In our local environments the package works fine, but in the production environment it cannot convert the data and throws an error:
"The data value cannot be converted for reasons other than sign mismatch or data overflow"
Can anybody help me please?
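One thing worth ruling out (a hedged aside; the table and column names below are invented) is a value in the string column that can't be parsed as a float under the production server's regional settings - stray spaces, thousands separators, or comma-vs-dot decimals. A quick first-pass check against the source data:

-- Find candidate rows that will not convert cleanly to float.
-- ISNUMERIC is permissive (it accepts things like '1e4' and '$5'),
-- so treat this as a rough filter, not a guarantee.
SELECT TOP 100 SourceColumn
FROM dbo.SourceTable
WHERE LTRIM(RTRIM(SourceColumn)) <> ''
  AND ISNUMERIC(LTRIM(RTRIM(SourceColumn))) = 0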
View 5 Replies
Mar 18, 2008
Hello everyone,
I created a custom assembly using C# to transform some binary data into text, and in this assembly I used a win32 dll developed by our customer to help with the transformation.
The code I used to call the win32 dll is like below:
[DllImport("tdasuie.dll", EntryPoint = "AlrtLogConditionToText",
ExactSpelling = false, CharSet = CharSet.Auto, SetLastError = true)]
private static extern UInt32 AlrtLogConditionToText(Byte[] pbCondition, StringBuilder pszText, UInt32 dwSize);
I defined a C# method to call the above win32 method and return a string. Then in the report, I called this C# method to get the correct string.
In the report designer, the C# method in the custom assembly can return the correct string in the preview window. But after I deployed the report into the report server, the textbox will only display "#error" in the report manager web page.
Can anybody help me on it? Thanks a lot.
Danny
View 4 Replies
Jun 22, 2007
If I run the same FOR XML query in a Developer Edition environment and an Enterprise Edition environment, the results are different. The query is exactly the same.
Here is the query:
DECLARE @MessageBody XML
DECLARE @AuditTable SYSNAME
DECLARE @SendTrans BIT
DECLARE @SendAudit BIT
DECLARE @RecordCount INT
DECLARE @OperationType CHAR(1)
SET @RecordCount = @@ROWCOUNT
SET @OperationType = 'U'
SET @SendTrans = 1
SET @SendAudit = 1
SET @AuditTable = 'States'
SELECT @MessageBody = (
SELECT * FROM
(
SELECT TOP 10
'INSERTED' AS ActionType, @SendTrans AS SendTrans, @SendAudit AS SendAudit,
COLUMNS_UPDATED() AS ColumnsUpdated, GETDATE() AS AuditDate,
@AuditTable AS AuditTable, 'test' AS UserName, @RecordCount AS RecordCount, *
FROM l_states
)AuditRecord
FOR XML AUTO, ROOT('AuditTable'), BINARY BASE64)
SELECT @MessageBody
In my DEV env (Developer Edition), this result is produced:
<AuditTable>
<AuditRecord ActionType="INSERTED" SendTrans="1" SendAudit="1" AuditDate="2007-06-22T15:43:12.497" AuditTable="States" UserName="test" RecordCount="1" StateAbbreviation="AK" State="Alaska" />
<AuditRecord ActionType="INSERTED" SendTrans="1" SendAudit="1" AuditDate="2007-06-22T15:43:12.497" AuditTable="States" UserName="test" RecordCount="1" StateAbbreviation="AL" State="Alabama" />
</AuditTable>
In my Enterprise Edition env, this is the result:
<AuditTable>
<AuditRecord ActionType="INSERTED" SendTrans="1" SendAudit="1" AuditDate="2007-06-22T15:44:48.230" AuditTable="States" UserName="test" RecordCount="1">
<l_states StateAbbreviation="AK" State="Alaska" />
<l_states StateAbbreviation="AL" State="Alabama" />
</AuditRecord>
</AuditTable>
Does anyone have any idea what might be wrong? Any help is greatly appreciated.
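I can't say why the two editions shape the AUTO output differently, but one way to take the nesting out of the optimizer's hands (a sketch only, using the same variables as above and dropping the trigger-only COLUMNS_UPDATED() call) is FOR XML RAW with an explicit element name, which emits one flat element per row regardless of how the derived table is aliased:

SELECT @MessageBody = (
SELECT TOP 10
'INSERTED' AS ActionType, @SendTrans AS SendTrans, @SendAudit AS SendAudit,
GETDATE() AS AuditDate, @AuditTable AS AuditTable, 'test' AS UserName,
@RecordCount AS RecordCount, *
FROM l_states
FOR XML RAW('AuditRecord'), ROOT('AuditTable'), BINARY BASE64)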
Tim
View 1 Replies
Apr 25, 2007
I select these columns:
TABLEONE: SEC
TABLETWO: B_DATE and E_DATE (they are numbers – not dates); RATE (is a number)
I first calculate the difference between E_DATE and B_DATE and between a chosen date and B_DATE
creating the new columns GG_CED_C and GG_MAT_C.
These new columns are calculated perfectly (they compute the difference between the two dates according to the 30/360 day count convention) and don't give me any problems. They are all filled with numbers.
I’m able to use these new columns to make other calculations. For example I succeed in calculating
RATE/GG_CED_C* GG_MAT_C
but if I try to calculate
RATE/360*GG_MAT_C AS RATEO
the result is that in some records I have my new column RATEO correctly calculated, while in some other records there is no value in the column.
For example:
B_DATE E_DATE RATE GG_CED_C GG_MAT_C RATEO
20061229 20070630 3,995 180 90 0,99875
20061229 20070630 4,185 180 90
Dates are identical and so are GG_CED_C and GG_MAT_C: why don't I have the result in the second record?
SELECT
SEC, B_DATE, E_DATE, RATE, GG_CED_C, GG_MAT_C,
RATE/ 360 * GG_MAT_C AS RATEO
FROM (
SELECT
SEC, B_DATE, E_DATE, RATE,
(YEAR2 - YEAR1)*360 + (MONTH2 - MONTH1)*30 + (DAY2 - DAY1) AS GG_CED_C,
(YEARRIL - YEAR1)*360 + (MONTHRIL - MONTH1)*30 + (DAYRIL - DAY1) AS GG_MAT_C
FROM (
SELECT
A.SEC, B_DATE, E_DATE, RATE,
TRUNC(B_DATE/10000,0) AS YEAR1 ,
TRUNC(MOD(B_DATE,10000)/100,0) AS MONTH1 ,
CASE
WHEN MOD(B_DATE,1000)= 229 THEN 30
WHEN MOD(B_DATE,1000)= 228
AND MOD(TRUNC(B_DATE/10000,0),4) > 0 THEN 30
ELSE MIN(MOD(B_DATE,100),30)
END AS DAY1,
TRUNC(E_DATE/10000,0) AS YEAR2 ,
TRUNC(MOD(E_DATE,10000)/100,0) AS MONTH2 ,
CASE
WHEN MOD(E_DATE,1000)= 229 THEN 30
WHEN MOD(E_DATE,1000)= 228
AND MOD(TRUNC(E_DATE/10000,0),4) > 0 THEN 30
ELSE MIN(MOD(E_DATE,100),30)
END AS DAY2,
TRUNC(&DRIL/10000,0) AS YEARRIL ,
TRUNC(MOD(&DRIL,10000)/100,0) AS MONTHRIL ,
CASE
WHEN MOD(&DRIL,1000)= 229 THEN 30
WHEN MOD(&DRIL,1000)= 228
AND MOD(TRUNC(&DRIL/10000,0),4) > 0 THEN 30
ELSE MIN(MOD(&DRIL,100),30)
END AS DAYRIL
FROM TABLEONE A, TABLETWO C
What's happening?
After what I've seen in this query I'm wondering if SQL is reliable when it comes to calculations (after all, its main duty is to query data rather than to do arithmetic on it).
Please give an explanation to this mystery.
Thank you.
Anna - Verona (Italy)
View 10 Replies
May 4, 2008
Hi all,
I have a database consisting of 10 tables, and I want to log transactions for these tables. Whenever anything happens to these tables, such as adding, modifying or deleting a record, I want to put that information somewhere in the database so I can determine which user performed the action.
I would highly appreciate any assistance.
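One common low-tech approach is an audit trigger per table that records the action, the user, and the time. The sketch below is only illustrative - the table and column names (AuditLog, Customers, CustomerId) are invented, and you would repeat or generate the trigger for each of the 10 tables:

CREATE TABLE dbo.AuditLog (
    AuditId    int IDENTITY(1,1) PRIMARY KEY,
    TableName  sysname  NOT NULL,
    Action     char(1)  NOT NULL,                        -- I, U or D
    KeyValue   int      NULL,                            -- assumes an int key
    ChangedBy  sysname  NOT NULL DEFAULT (SUSER_SNAME()),
    ChangedAt  datetime NOT NULL DEFAULT (GETDATE())
)
GO
CREATE TRIGGER trg_Customers_Audit ON dbo.Customers
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON
    -- Rows in both inserted and deleted mean an update; only inserted means an insert;
    -- only deleted means a delete.
    INSERT dbo.AuditLog (TableName, Action, KeyValue)
    SELECT 'Customers',
           CASE WHEN EXISTS (SELECT * FROM deleted) THEN 'U' ELSE 'I' END,
           i.CustomerId
    FROM inserted i
    UNION ALL
    SELECT 'Customers', 'D', d.CustomerId
    FROM deleted d
    WHERE NOT EXISTS (SELECT * FROM inserted)
END
GO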
View 7 Replies
Jan 24, 2007
Hello guys!
I have been working with SQL Server 2005 for a few months, and I need your help with a transactions topic.
First, I hope you will be patient with my English; I will try to explain it the best I can ;-)) And now my question, and thank you in advance.
I am trying to set the transaction option of a Foreach Loop container at run time using the Expressions option. There, I have bound the TransactionOption property of this task to a variable (type int), and this variable is set at the beginning of my package by a script task (values: 0 for Not Supported, 1 for Supported, 2 for Required).
The problem is that the transaction option only picks up the variable's default value. Later, during package execution, the variable receives its new value but the transaction option does not.
In summary, my question is ... is it possible to set the transaction option during the run time?.
Thanks for your help!
Chris.
View 7 Replies
Jan 22, 2008
Hello,
I've been redirected here from the Transaction Programming forum because I have a peculiar issue with SQL 2005 running INSERT stored procs from multiple WCF services, all within a TransactionScope.
The original post is http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=2720665&SiteID=1&mode=1
The story goes: I have SRVC A which starts a TransactionScope and in turn calls SRVC B & C in sequence based on processing rules.
SRVC A is a Sequential Workflow which Starts and Completes the TransactionScope
SRVC B Creates a new Customer into the database
SRVC C Creates new Accounts for that Customer and Initialises the accounts with funds
The DB Tables underneath are Customer, Account and AccountLog
DDL
Code Block
CREATE TABLE [Member].[Customers](
[CustomerId] [int] IDENTITY(1,1) NOT NULL,
[Name] [varchar](32) NOT NULL,
[CreatedUtc] [datetime] NOT NULL ,
CONSTRAINT [PK_Customer] PRIMARY KEY CLUSTERED
(
[CustomerId] ASC
)
) ON [PRIMARY]
GO
CREATE TABLE [Bank].[Accounts](
[AccountId] [int] IDENTITY(1,1) NOT NULL,
[CustomerId] [int] NOT NULL,
[CurrentBalance] [money] NOT NULL,
[LastUpdateDate] [datetime] NULL,
[CreatedDate] [datetime] NOT NULL,
[timestamp] [timestamp] NOT NULL,
CONSTRAINT [PK_Bank_Account] PRIMARY KEY CLUSTERED
(
[AccountId] ASC
)
) ON [PRIMARY]
GO
ALTER TABLE [Bank].[Accounts] WITH CHECK ADD CONSTRAINT [FK_Account_Customer] FOREIGN KEY([CustomerId])
REFERENCES [Member].[Customers] ([CustomerId])
CREATE TABLE [Bank].[AccountLog](
[AccountLogId] [int] IDENTITY(1,1) NOT NULL,
[AccountId] [int] NOT NULL,
[Amount] [money] NOT NULL,
[UtcDate] [datetime] NOT NULL,
CONSTRAINT [PK_Bank_AccountLog] PRIMARY KEY CLUSTERED
(
[AccountLogId] ASC
)
) ON [PRIMARY]
GO
ALTER TABLE [Bank].[AccountLog] WITH CHECK ADD CONSTRAINT [FK_AccountLog_Account] FOREIGN KEY([AccountId])
REFERENCES [Bank].[Accounts] ([AccountId])
NB. I've removed most fields not essential for this example.
So from SRVC A I invoke SRVC B and the Customer is created, however when I get to SRVC C and the accounts are to be created I get a lock. Only when the Transaction aborts due to timeout, do I see in SQL Profiler that the call to the SP that created the Account is executed but eventually rolls back as it is part of the distributed transaction.
Now, if I set the isolation level in the TransactionScope to ReadUncommitted (urgh) the problem remains. When I set the isolation level to Read Uncommitted in the SP that creates the account the problem remains, but when I remove the FK constraint the problem disappears. The other curious thing is that with the Customer -> Account FK removed, when SRVC C inserts funds into the AccountLog (which also updates an aggregated total in the Account) from within the same transaction scope and with the Account -> AccountLog FK constraint in place, there is no locking, even with isolation level Serializable.
I'm quite at a loss as to what could be causing these issues. If anyone has any suggestions I would greatly appreciate any help.
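When it hangs like that, it can help to see exactly which session is blocked on what while the TransactionScope is still open. A hedged diagnostic sketch using the SQL 2005 DMVs (run from a separate connection while the insert is stuck):

-- Who is blocked, by whom, and what are they running?
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS current_statement
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.blocking_session_id <> 0

-- Locks held or requested by the sessions involved.
SELECT request_session_id, resource_type, resource_associated_entity_id,
       request_mode, request_status
FROM sys.dm_tran_locks
ORDER BY request_session_id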
Thanks
Andy
View 1 Replies
Mar 19, 2007
I have a million plus line program that uses DAO and JET/Access databases. We are modifying the code so that it will work with BOTH JET and SQL Server databases with an eye to moving to just SQL Server eventually.
Much of the code works just fine without change when using DAO and SQL server as 90+ percent of the code is accessing data READ-ONLY. (Jet's ODBC capability is used)
There are however some transactions that will not work.
When I examine the SQL trace, I notice that after the transaction has begun, the SPID has changed. This happens often when cursors are used, but it happens even when the statements issued after DBEngine.BeginTrans do not use cursors at all, as in the following code:
DBEngine.BeginTrans
strSQL = "INSERT INTO [TestTable] (MyKey,MyData, MyData2) VALUES (1,'apple','fruit')"
DB1.Execute strSQL
strSQL = "UPDATE [TestTable] SET [MyData] = 'apple2' WHERE [MyKey] = 1"
DB1.Execute strSQL
DBEngine.CommitTrans
The second DB1.Execute times out: [Microsoft][ODBC SQL Server Driver]Timeout expired
The trace shows a new SPID is opened when the UPDATE query is executed. A new SPID is spawned and the JET engine sends a SELECT to get the value for MyKey:
SELECT "dbo"."TestTable"."MyKey" FROM "dbo"."TestTable" WHERE ("MyKey" = 1 )
The second SQL statement is waiting for the first one that is being done in a transaction to be completed. The program is expecting that both of these will be being done on the SAME SPID or at least the same transaction scope.
So, since I have no way of knowing how and when a new SPID will be spawned using DAO, we are not going to use it for when we do transactions.
I have tried to use ADO (Provider=SQLOLEDB.1), but ADO has the same problems as DAO as far as a new SPID being spawned in some circumstances, but it works in more situations than DAO.
The Connection Object is not guaranteed to be attached to a specific SPID - it can change at will. So Cn.Execute done twice in succession may be done on separate SPIDs.
My question is this: Is there a way to make sure that no new SPID is spawned? Is there a way to know when a new SPID will be spawned so we can avoid this situation? Is there a better way than using ADO? Keep in mind that this program has to be able to use BOTH Access/Jet databases and SQL Server.
Note:
I have spent over 160 hours reading about Jet, the VB Guide to SQL Server, searching MSDN, Google etc. No article really does justice to this subject of transactions using DAO, ADO and SQL Server and the issue of the SPIDs (equivalent to a session and the session is the scope of the transaction). If you could be sure to stay on the same SPID, then you could just issue your own SQL Begin Transaction and control it all if you only plan on doing SQL statements without the need for cursors. A Microsoft article on this subject would be really helpful to all the programmers that are just now getting around to migrating to SQL Server.
Also, note, that if we did not do transactions, then we would not have to modify a single line of our code to make it work with SQL Server.
There is virtually no documentation for much of what I have written about in this Thread. I am very surprised as it seems this would have come up for thousands of programmers. I must be missing something, so thank you in advance for your help.
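For what it's worth, the pattern mentioned above - issuing the transaction control yourself as pass-through SQL on a single connection - would look roughly like this on the server side (a sketch only, reusing the TestTable example from earlier):

-- Sent over one connection as ordinary pass-through statements,
-- so the whole unit of work stays in one session (one SPID).
BEGIN TRANSACTION

INSERT INTO [TestTable] (MyKey, MyData, MyData2) VALUES (1, 'apple', 'fruit')

UPDATE [TestTable] SET [MyData] = 'apple2' WHERE [MyKey] = 1

-- @@ERROR only reflects the last statement; checking after each one would be more thorough.
IF @@ERROR <> 0
    ROLLBACK TRANSACTION
ELSE
    COMMIT TRANSACTION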
View 4 Replies
Nov 13, 2007
Help, I had a tran log grow to its restricted size; however, the person that created this made the max size almost equal to the set max size. Needless to say, I have no space to work with. SQL got bounced and my db went into recovery mode. After recovery was complete I tried to put my database in emergency mode; the statement executes but never sets the mode. Next I tried DBCC CHECKDB and I get Msg 7929, Level 16, State 1: Check statement aborted. Database contains deferred transactions. There is no backup for this database (dev play area). I cannot detach the db because of the same error. What next? Any help would be great.
View 7 Replies
Aug 22, 2006
Hello, I'm trying to follow some SQL statements that my system sends to SQL 2005 Express, and I don't have deep knowledge of databases. I know that there's a transaction log that keeps all statements that go into the database engine. Is that correct? If yes, is there a way to look at these archived statements? Thanks in advance. Ignacio
View 1 Replies
May 22, 2005
Hi there,
I have decided to move all my transaction handling from asp.net to stored procedures in a SQL Server 2000 database. I know the database is capable of rolling back the transactions just like myTransaction.Rollback() in asp.net. But what about exceptions? In asp.net, I am used to doing the following:
Try
    'execute commands
    myTransaction.Commit()
Catch ex As Exception
    Response.Write(ex.Message)
    myTransaction.Rollback()
End Try
Will the database inform me of any exceptions (and their messages)? Do I need to put anything explicit in my stored procedure other than rollback transaction?
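SQL Server 2000 has no TRY/CATCH inside T-SQL, so the usual pattern is to check @@ERROR after each statement, roll back yourself, and use RAISERROR so the failure surfaces to ADO.NET as a SqlException that your Catch block can see. A rough sketch (the procedure, table and column names are invented):

CREATE PROCEDURE dbo.usp_TransferExample
    @FromId int,
    @ToId   int,
    @Amount money
AS
SET NOCOUNT ON

BEGIN TRANSACTION

UPDATE dbo.Accounts SET Balance = Balance - @Amount WHERE AccountId = @FromId
IF @@ERROR <> 0 GOTO ErrHandler

UPDATE dbo.Accounts SET Balance = Balance + @Amount WHERE AccountId = @ToId
IF @@ERROR <> 0 GOTO ErrHandler

COMMIT TRANSACTION
RETURN 0

ErrHandler:
ROLLBACK TRANSACTION
-- A severity 16 error is raised back to the client and becomes a SqlException in .NET.
RAISERROR ('usp_TransferExample failed; transaction rolled back.', 16, 1)
RETURN 1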
Any help is greatly appreciated
View 3 Replies
Jun 4, 2007
Is it possible to use DTC (or cross database queries) with mirroring on SQL 2005 Service Pack 2?
Thnx,
GoranP
View 2 Replies
Apr 13, 2006
Hi,
I created a PDA application with a database, which has a table with a uniqueidentifier field and primarykey.
While doing the bulk insert from a dataset into the SQL Mobile database, it inserts the record but it does not insert the id that was entered in the SQL Server 2005 database; instead it creates a new id. The code is as below.
conAdap = new SqlCeDataAdapter(strQuery, conSqlceConnection);
SqlCeCommandBuilder cmdBuilder = new SqlCeCommandBuilder(conAdap);
conAdap.Fill(dsData);
int r =conAdap.Update(dsData);
Please help me.
Thank you,
Prashant
View 1 Replies
Feb 5, 2007
I'm hoping someone can help me out - at least by pointing me to who I can ask, if not answering the question directly.
I have some encrypted values in a SQL Server 2000 database that I unencrypt and use in a website that I just converted from .NET 1.1 to .NET 2.
The data is pulled from the database using standard ADO with no changes between the .NET 1.1 version and the .NET 2 version - yet for some data entries, when the identical value is pulled by the .NET 2 code it is changed or shortened.
For example:
1.1 code traces out a value pulled from the db as:
ᒪ࢖淨�d�把���媑쬹�䜻ꖉ���
The same value pulled from the database by .NET 2 looks like this:
ᒪ࢖淨�d�把媑쬹�䜻ꖉ
Do you know why the database value would be interpreted diferently by .NET 2 than by .NET 1.1? How can I bring this in sync so that both 1.1 sites and 2 sites can use the same data?
View 8 Replies
May 22, 2008
Hi Guys,
There is an int field in my table called "WeekNo", and when I use ORDER BY WeekNo DESC I am getting the following result.
9
8
7
7
6
5
4
3
2
18
17
16
15
15
14
13
13
12
11
10
10
1
This does not seem right; can anyone comment on why I am getting this result?
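That ordering (9, 8, ... 2, 18, 17, ... 10, 1) is the classic sign of a character column being sorted as text rather than as a number; a true int column would sort numerically. It may be worth double-checking the column's actual data type, and forcing a numeric sort as a quick test. A sketch, assuming a table name of dbo.MyTable:

-- What is the column really declared as?
SELECT DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'MyTable' AND COLUMN_NAME = 'WeekNo'

-- Force a numeric sort even if WeekNo turns out to be varchar.
SELECT WeekNo
FROM dbo.MyTable
ORDER BY CAST(WeekNo AS int) DESC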
Many Thanks
View 4 Replies
Feb 19, 2008
I've been trying to find an answer to the mystery of how date conversions differ between SQL server and Excel. In Excel the number 37711 is displayed as the date '3/31/2003'. The same number in SQL server yields '2003-04-02' (I used the following: select cast(37711 as datetime) ).
Any idea what is going on here and how I might resolve this problem?
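The gap comes from the two products counting from different starting points (plus Excel treating 1900 as a leap year), which works out to a two-day offset for modern dates. Treat this as a sketch rather than a rule, but subtracting 2 lines up the example you gave:

-- Excel serial 37711 displays as 3/31/2003; SQL Server's datetime numbering differs by two days.
SELECT CAST(37711 AS datetime)     AS sql_raw,        -- 2003-04-02
       CAST(37711 - 2 AS datetime) AS excel_aligned   -- 2003-03-31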
View 4 Replies
Apr 26, 2007
Has anyone else run into this issue?
Depending on the printer, the report prints differently. Sometimes it's all messed up on certain printers; on others it prints fine. I see the problem mostly with older HP printers.
Any ideas?
Daryl
View 2 Replies
Mar 2, 2007
I have SQL stored proc that calls a CLR function. This function does a "select ... for xml" statement, manipulates the XML a little, and returns the manipulated XML to the stored proc.
This all works fine when I call the stored proc from a query window, but when BizTalk calls the stored proc, the CLR function fails. I have a feeling this may have to do with BizTalk using MSDTC, but I am not sure.
Here's a code snippet from where CLR function fails:
SqlConnection conn = new SqlConnection("Context Connection=true");
conn.Open();
SqlCommand cmd = new SqlCommand("Select * From Items FOR XML AUTO", conn);
XmlDocument xdoc = new XmlDocument();
xdoc.Load(cmd.ExecuteXmlReader());
Under BizTalk, the last line fails with: System.InvalidOperationException "Invalid command sent to ExecuteXmlReader. The command must return an Xml result."
Now to see why XmlReader doesn't like the returned data, i changed the last 2 lines of that snippet to this:
SqlDataReader dr = cmd.ExecuteReader();
dr.Read();
Object obj = dr[0];
If I have a breakpoint after that last line, obj is of type string when I call the proc myself, but it is a byte[] under BizTalk. If I look at the bytes themselves, it's close to the expected XML, but with some non-text bytes sprinkled around. I can't seem to cast or encode the byte array into anything useful.
Anyone have any idea what is going on here? Why would the same code return different types based on a) who is calling it, or b) the type of transaction used?
Thanks,
Ron
View 1 Replies
Sep 16, 2007
I'm looking for the minimum date of an entry into a history table. The table contains multiple entries for the customer and the item with an activation and deactivation date for each entry.
I could use the following:
select customerId, item, min(activationDate) from history group by customerId, item
or a sub query
select customerId, item, activationDate
from history h1
where activationDate=(select min(activationDate) from history h2 where h2.customerId=h1.customerId and h2.item=h1.item)
How are these two queries handled differently by SQL Server? They return a different number of results.
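They aren't equivalent: the GROUP BY returns exactly one row per customerId/item, while the correlated subquery returns every history row whose activationDate ties the minimum (duplicates included). If the goal is the full rows for the earliest activation, a join against the grouped result makes that explicit; a sketch (the deactivation column name is assumed):

SELECT h.customerId, h.item, h.activationDate, h.deactivationDate
FROM history h
JOIN (
    SELECT customerId, item, MIN(activationDate) AS minActivationDate
    FROM history
    GROUP BY customerId, item
) m
  ON m.customerId = h.customerId
 AND m.item = h.item
 AND m.minActivationDate = h.activationDate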
Thanks,
karen
View 7 Replies
Mar 5, 2007
This is a good one:
Same RDL, 2 different servers. I run the report on my computer and export to PDF, it prints properly. When the customer runs the report on their server (SSRS 2K5 SP1, same as mine), they get it displayed differently. The columns on the report extend to the next page and the lines are thicker.
Is this a formatting issue on the customer's PC? It uses standard fonts (Tahoma, Sans-serif).
Any ideas?
View 3 Replies
Oct 31, 2006
I've written a script that should create a SPROC. It does, but I expect the SPROC to contain everything that is bold when I run this script (see below). It doesn't... when I run the script, it creates the required table, but the procedure itself gets created with only the italic text. What gives? Please throw me a bone here :)

USE [myproject];
GO
IF EXISTS ( SELECT * FROM INFORMATION_SCHEMA.ROUTINES WHERE SPECIFIC_SCHEMA = N'dbo' AND SPECIFIC_NAME = N'myproject_CreateTable_SendEmail_Errors' )
DROP PROCEDURE dbo.myproject_CreateTable_SendEmail_Errors
GO
CREATE PROCEDURE dbo.myproject_CreateTable_SendEmail_Errors
AS
GO
/****** Object: Table [dbo].[sendEmail_Errors] Script Date: 10/28/2006 05:31:30 ******/
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[sendEmail_Errors]') AND type in (N'U'))
DROP TABLE [dbo].[sendEmail_Errors]
GO
/****** Object: Table [dbo].[sendEmail_Errors] Script Date: 10/28/2006 05:14:09 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[sendEmail_Errors](
[errorID] [smallint] IDENTITY(1,1) NOT NULL,
[sendToEmail] [nchar](256) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[timeLogged] [datetime] NOT NULL CONSTRAINT [DF_sendEmail_Errors_timeLogged] DEFAULT (getutcdate()),
CONSTRAINT [PK_SendEmail_Errors] PRIMARY KEY CLUSTERED
(
[errorID] ASC
)WITH (PAD_INDEX = OFF, IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO
EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'Error count' , @level0type=N'SCHEMA',@level0name=N'dbo', @level1type=N'TABLE',@level1name=N'sendEmail_Errors', @level2type=N'COLUMN',@level2name=N'errorID'
GO
EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'This table is used to log failed calls to send a verification email to users. No Email was sent to the user.' , @level0type=N'SCHEMA',@level0name=N'dbo', @level1type=N'TABLE',@level1name=N'sendEmail_Errors'
GO

USE [myproject]
GO
/****** Object: StoredProcedure [dbo].[myproject_CreateTable_SendEmail_Errors] Script Date: 10/31/2006 01:59:19 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[myproject_CreateTable_SendEmail_Errors]
AS
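For what it's worth, the likely culprit is the GO immediately after CREATE PROCEDURE ... AS: GO is a batch separator, not T-SQL, so the procedure body ends right there and everything below it simply runs as ordinary batches - which is why the table still gets created even though the proc stays essentially empty. A procedure body cannot contain GO, so here is a hedged sketch of the same work actually living inside the proc:

CREATE PROCEDURE dbo.myproject_CreateTable_SendEmail_Errors
AS
BEGIN
    -- Everything here is one batch: no GO separators inside a procedure body.
    IF EXISTS (SELECT * FROM sys.objects
               WHERE object_id = OBJECT_ID(N'[dbo].[sendEmail_Errors]') AND type IN (N'U'))
        DROP TABLE [dbo].[sendEmail_Errors]

    CREATE TABLE [dbo].[sendEmail_Errors](
        [errorID] [smallint] IDENTITY(1,1) NOT NULL,
        [sendToEmail] [nchar](256) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
        [timeLogged] [datetime] NOT NULL CONSTRAINT [DF_sendEmail_Errors_timeLogged] DEFAULT (getutcdate()),
        CONSTRAINT [PK_SendEmail_Errors] PRIMARY KEY CLUSTERED ([errorID] ASC)
            WITH (PAD_INDEX = OFF, IGNORE_DUP_KEY = OFF) ON [PRIMARY]
    ) ON [PRIMARY]
END
GO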
View 1 Replies
Oct 12, 2004
Hiya,
I have a database on a SQL Server 2000 (sp3a) installation. For some reason it's reporting time that is 7 hours ahead of the system time.
The application is on one server the DB is on a shared production server. The app server and the DB server are reporting the same system time and are using a network time server. All the other db's on the shared production db server are reporting time correctly.
My questions:
Is there a T-SQL query to use to see what the time/timezone is for that database?
Is there a T-SQL query I can use to set the db time (not the system time)?
Anyone have any other suggestions as to what could be wrong?
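On the first two questions: a database doesn't carry its own clock or time zone setting - GETDATE() simply reflects the operating-system time of the host the instance runs on, so there is no per-database time to query or set. A small sketch for comparing what the instance thinks local and UTC time are:

-- Both values come from the SQL Server host's clock, not from any individual database.
SELECT GETDATE()    AS server_local_time,
       GETUTCDATE() AS server_utc_time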
Thanks in advance for any help!
'chele
View 2 Replies
Oct 11, 2005
Ok, I have a table with IP addresses stored in decimal format using both positive and negative numbers.
The way that they are stored is:
Positive
1 thru 2147483647 = 0.0.0.1 - 127.255.255.255
Negative
-2147483648 thru -1 = 128.0.0.0 - 255.255.255.255
Conversion
positive
x/2^24 . (x/2^24)/2^16 . etc . etc
negative
(x+2^32)/2^24 . ((x+2^32)/2^24)/2^16 . etc . etc
I have a script which works by using UNION and the WHERE statements are x>0 x<0
My problem is I need to use a 3rd party app to run the script (McAfee ePO). McAfee does not recognize the UNION. My question is: can I achieve the same results as the script below without using UNION?
SELECT ReportFullPathNode.FullPathName,
cast(cast(IPSubnetMask.IP_Start as bigint)/16777216 as varchar) + '.' +
cast(cast(IPSubnetMask.IP_Start as bigint)%16777216/65536 as varchar) + '.' +
cast(cast(IPSubnetMask.IP_Start as bigint)%16777216%65536/256 as varchar) + '.' +
cast(cast(IPSubnetMask.IP_Start as bigint)%16777216%65536%256 as varchar),
cast(cast(IPSubnetMask.IP_End as bigint)/16777216 as varchar) + '.' +
cast(cast(IPSubnetMask.IP_End as bigint)%16777216/65536 as varchar) + '.' +
cast(cast(IPSubnetMask.IP_End as bigint)%16777216%65536/256 as varchar) + '.' +
cast(cast(IPSubnetMask.IP_End as bigint)%16777216%65536%256 as varchar),
cast(IPSubnetMask.LeftMostBits as varchar),
IPSubnetMask.IP_Start
FROM IPSubnetMask, ReportFullPathNode ReportFullPathNode
WHERE IPSubnetMask.IP_Start>0 and IPSubnetMask.ParentID = ReportFullPathNode.LowestNodeID
UNION ALL
SELECT ReportFullPathNode.FullPathName,
cast(cast(4294967296+IPSubnetMask.IP_Start as bigint)/16777216 as varchar) + '.' +
cast(cast(4294967296+IPSubnetMask.IP_Start as bigint)%16777216/65536 as varchar) + '.' +
cast(cast(4294967296+IPSubnetMask.IP_Start as bigint)%16777216%65536/256 as varchar) + '.' +
cast(cast(4294967296+IPSubnetMask.IP_Start as bigint)%16777216%65536%256 as varchar),
cast(cast(4294967296+IPSubnetMask.IP_End as bigint)/16777216 as varchar) + '.' +
cast(cast(4294967296+IPSubnetMask.IP_End as bigint)%16777216/65536 as varchar) + '.' +
cast(cast(4294967296+IPSubnetMask.IP_End as bigint)%16777216%65536/256 as varchar) + '.' +
cast(cast(4294967296+IPSubnetMask.IP_End as bigint)%16777216%65536%256 as varchar),
cast(IPSubnetMask.LeftMostBits as varchar),
IPSubnetMask.IP_Start+4294967296
FROM IPSubnetMask, ReportFullPathNode ReportFullPathNode
WHERE IPSubnetMask.IP_Start<0 and IPSubnetMask.ParentID = ReportFullPathNode.LowestNodeID
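One way to fold the two branches into a single statement (a sketch, not tested against ePO - it assumes ePO will accept a derived table even though it rejects UNION) is to normalize the signed value once with CASE and then do the octet math a single time:

SELECT ReportFullPathNode.FullPathName,
cast(ip1.IP_Start_U/16777216 as varchar) + '.' +
cast(ip1.IP_Start_U%16777216/65536 as varchar) + '.' +
cast(ip1.IP_Start_U%16777216%65536/256 as varchar) + '.' +
cast(ip1.IP_Start_U%16777216%65536%256 as varchar),
cast(ip1.IP_End_U/16777216 as varchar) + '.' +
cast(ip1.IP_End_U%16777216/65536 as varchar) + '.' +
cast(ip1.IP_End_U%16777216%65536/256 as varchar) + '.' +
cast(ip1.IP_End_U%16777216%65536%256 as varchar),
cast(ip1.LeftMostBits as varchar),
ip1.IP_Start_U
FROM (
    -- Negative values are the upper half of the unsigned 32-bit range.
    SELECT ParentID, LeftMostBits, IP_Start,
        CASE WHEN IP_Start < 0 THEN cast(IP_Start as bigint) + 4294967296 ELSE cast(IP_Start as bigint) END AS IP_Start_U,
        CASE WHEN IP_End < 0 THEN cast(IP_End as bigint) + 4294967296 ELSE cast(IP_End as bigint) END AS IP_End_U
    FROM IPSubnetMask
) ip1, ReportFullPathNode ReportFullPathNode
WHERE ip1.IP_Start <> 0      -- the original UNION skipped IP_Start = 0; keep that behaviour
AND ip1.ParentID = ReportFullPathNode.LowestNodeID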
View 2 Replies
Aug 1, 2007
Let me set the scene:
I have an update trigger on a table. When a specific column is updated, I get the rowid from 'inserted' and then pass it via service broker to another database that will fire off a maintenance routine at a later time. This whole process seems to work fine if I update a single row at a time through Query Analyzer.
During testing (of the service broker part) I found that if in Query Analyzer I run an update that updates all of the records at once, then the trigger seems to fire only once for the entire process, therefore killing the rest of my process.
I would have thought that regardless of how a record was being updated the trigger would fire atomically for each row.
Any guidance on this would be MOST appreciated!
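That matches the documented behaviour: SQL Server DML triggers fire once per statement, not once per row, and the inserted/deleted pseudo-tables hold every affected row. The usual fix is to make the trigger body set-based so a single firing produces one message (or one queued row of work) per affected row. A rough sketch with invented table and column names:

CREATE TRIGGER trg_MyTable_QueueMaintenance ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON
    -- Only act when the column of interest was part of the update.
    IF UPDATE(WatchedColumn)
    BEGIN
        -- inserted can hold many rows; queue them all
        -- (or iterate this set to send one Service Broker message per row).
        INSERT dbo.MaintenanceQueue (RowId, QueuedAt)
        SELECT i.RowId, GETDATE()
        FROM inserted i
    END
END
GO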
View 20 Replies
Feb 22, 2008
I am trying to import some data from csv files. When I try it using bulk insert I get a conversion error. When I use the exact same format file and data file with an openrowset it works fine. I would prefer to use the BULK insert as I can make some generic stored procedures to handle all my imports and not have to code the column names in the SQL. Any suggestions?
BULK INSERT stuff
FROM 'c:\projects\testdata\list.txt'
WITH
(FORMATFILE='c:\projects\testdata\myformat.xml')
insert into stuff (ExternalId, Description, ScheduledDate, SentDate, Name)
select *
from OPENROWSET (BULK 'c:\projects\testdata\list.txt',
FORMATFILE='c:\projects\testdata\myformat.xml')
as t1
The destination table has more columns than the data file. The Field IDs represent the ordinal position of the columns in the destination table. Column 1 in the destination table is an int identity. The conversion failure is from trying to convert column 5 to int which makes me think bulk insert is ignoring the name attributes in the XML and just trying to insert the columns into the table in order without skipping.
<?xml version="1.0"?>
<BCPFORMAT xmlns="http://schemas.microsoft.com/sqlserver/2004/bulkload/format" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<RECORD>
<FIELD ID="2" xsi:type="CharTerm" TERMINATOR="\t" MAX_LENGTH="12"/>
<FIELD ID="5" xsi:type="CharTerm" TERMINATOR="\t" MAX_LENGTH="200" COLLATION="SQL_Latin1_General_CP1_CI_AS"/>
<FIELD ID="7" xsi:type="CharTerm" TERMINATOR="\t" MAX_LENGTH="24"/>
<FIELD ID="8" xsi:type="CharTerm" TERMINATOR="\t" MAX_LENGTH="24"/>
<FIELD ID="9" xsi:type="CharTerm" TERMINATOR="\r\n" MAX_LENGTH="500" COLLATION="SQL_Latin1_General_CP1_CI_AS"/>
</RECORD>
<ROW>
<COLUMN SOURCE="2" NAME="ExternalId" xsi:type="SQLINT"/>
<COLUMN SOURCE="5" NAME="Description" xsi:type="SQLVARYCHAR"/>
<COLUMN SOURCE="7" NAME="ScheduledDate" xsi:type="SQLDATETIME"/>
<COLUMN SOURCE="8" NAME="SentDate" xsi:type="SQLDATETIME"/>
<COLUMN SOURCE="9" NAME="Name" xsi:type="SQLVARYCHAR"/>
</ROW>
</BCPFORMAT>
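If memory serves, an XML format file used with BULK INSERT cannot skip columns in the target table the way OPENROWSET(BULK ...) can, which would explain the by-position behaviour you're seeing. One hedged workaround that keeps the generic BULK INSERT approach is to load through a view that exposes only the mapped columns, in data-file order (this assumes the view stays updatable and the remaining table columns take defaults or NULLs):

-- A view over just the columns named in the format file.
CREATE VIEW dbo.stuff_import AS
SELECT ExternalId, Description, ScheduledDate, SentDate, Name
FROM dbo.stuff
GO

BULK INSERT dbo.stuff_import
FROM 'c:\projects\testdata\list.txt'
WITH (FORMATFILE = 'c:\projects\testdata\myformat.xml')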
View 1 Replies
Aug 19, 2007
Hi! A larger SP runs OK in the console. When called from VB.NET 1.1, results get truncated and some things don't run at all. The connection is ODBC. Small SPs run OK. Is this default behavior or something common? Are there VB parameters to let a larger SP run without interruption? -Bahman
View 2 Replies
Jul 11, 2007
Hello
The following code does not function if I use SQLOLEDB; if I omit the provider and default to the ODBC OLE DB provider it works correctly. I assume I am coding something wrong for the SQLOLEDB provider. Any help is greatly appreciated.
VB Code
Public Function SqlExecuteResult(xSQL As String, sServer As String, sDatabase As String, sUserName As String, sPassword As String, sCaller As String, Optional bLog As Boolean = False) As Object
Dim oDB As Object
Dim oRS As Object
Set oDB = CreateObject("adodb.connection")
Set oRS = CreateObject("adodb.recordset")
oDB.open "driver={SQL Server};provider=sqloledb;server=" & sServer & ";database=" & sDatabase & ";uid=" & sUserName & ";pwd=" & sPassword & ";"
oRS.CursorLocation = adUseClient
oRS.CursorType = adOpenStatic
Set oRS.ActiveConnection = oDB
oRS.open xSQL
Set oRS.ActiveConnection = Nothing
Set SqlExecuteResult = oRS
oDB.Close
Set oDB = Nothing
End Function
Private Sub Form_Load()
Dim rs As Object
Set rs = SqlExecuteResult("exec NextEntry 'SentMessages'", "surecomp-bob", "pmsureus33", "sa", "", "")
MsgBox rs.fields(0)
End Sub
SQL procedure
CREATE PROCEDURE NextEntry @CounterName Varchar(20) AS
begin
declare @counter int
select @counter = counter from counters where countername = @counterName
select @counter = @counter + 1
update counters set counter = @counter where countername = @countername
select counter from counters where countername = @counterName
End
GO
Thanks
Bob Jenkin
View 1 Replies
Sep 19, 2007
Hello,
DECLARE @x DECIMAL
SET @x = 65.554
SELECT ROUND(@x, 1)--this returns 66
SELECT ROUND(65.554, 1) --this returns 65.600
Can someone explain to me why it is like that?
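DECLARE @x DECIMAL with no precision or scale defaults to DECIMAL(18,0), so 65.554 is rounded to 66 when it is assigned, before ROUND ever runs; the literal 65.554, by contrast, is typed as DECIMAL(5,3) and keeps its fraction. Declaring the scale explicitly makes the two behave alike; a small sketch:

DECLARE @x DECIMAL(5, 3)
SET @x = 65.554
SELECT ROUND(@x, 1)       -- 65.600
SELECT ROUND(65.554, 1)   -- 65.600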
Thank you
View 3 Replies
Dec 17, 2007
Hello,
I am converting old MS Access queries to T-SQL and ran into a problem: the same update query returns different results in each. The idea is to subtract each of the amounts in Table2 from Table1:
Source sample tables and content:
Table1
ID Amount
1 100
Table2
ID Amount
1 10
1 20
1 30
In Access (original source):
UPDATE Table1
INNER JOIN Table2
ON Table1.ID = Table2.ID
SET Table1.Amount = Table1.Amount - Table2.Amount
In T-SQL (Converted):
UPDATE Table1
SET Table1.Amount = Table1.Amount - Table2.Amount
FROM Table1 INNER JOIN Table2
ON Table1.ID = Table2.ID
Syntax for T-SQL is different from Access. When both queries are run on their respective databases, Table1.Amount in Access became 40 (100 - 10 - 20 - 30), but Table1.Amount in SQL became 90 (100 - 10).
It looks as if T-SQL only applied one row. Or could it be that in T-SQL the updates are written to the database in batches, which is why Table1.Amount was not updated for every matching row? Any help would be greatly appreciated.
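That is expected with the T-SQL UPDATE ... FROM form: when the join finds several Table2 rows for one Table1 row, only one of them is applied (and which one is not defined), whereas Access evidently applies each in turn. Aggregating Table2 first makes the intent explicit and reproduces the Access result; a sketch:

UPDATE Table1
SET Table1.Amount = Table1.Amount - t2.TotalAmount
FROM Table1
INNER JOIN (
    SELECT ID, SUM(Amount) AS TotalAmount
    FROM Table2
    GROUP BY ID
) t2 ON Table1.ID = t2.ID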
View 3 Replies
May 13, 2007
I am able to get reports going with tables sized properly. They look fine on the ReportServer website and I adjust the column widths so that the headings and data look nice. When I set up a subscription to be delivered by "Report Server E-Mail," though, the table formattings get completely distorted.
In particular, I have two tables, with some column headers being two short words (e.g. Max Height). When rendering on the site, I adjust the columns so the full column header is visible on one line. When I receive the email and read it in Outlook, the header row is now about twice as tall and everything is scrunched together. Both the headings and the data in the fields do not format the same as on the website.
The two tables tend to actually have the exact same width in the email version, although occasionally they are a little different (in the web version one is about half as wide as the other). I have tried just making the columns bigger and that has not worked. I've tried making the font sizes smaller, which didn't work. If I do that, leaving the columns the same width, the email version just gets scrunched into a smaller area with the same text-wrapping problems.
If I open the email in a browser (in a web mail interface) the report renders perfectly as on the site.
I have almost all the default settings, and haven't been messing around with page sizes and things like this (except after, to see if that would fix the problem).
Any ideas, similar experiences, or suggestions? If there is a book I should read or any reference you could point me to in order to figure this out would be helpful. I haven't been able to understand this either using web searches or the two SQL reporting services books I have.
View 4 Replies
Jun 26, 2007
I have a report where I use Globals!ReportName in the header of the report for the report title. In Development and on SSRS stand alone the value for Globals!ReportName is in mixed case and the file extension is omitted. When the report is published to a MOSS server integrated with SSRS the value for Globals!ReportName is all in lower case and the file extension is included.
Is there any reason for this change in behavior and is there a way I can put back the mixed case and omit the file extension?
View 2 Replies
Oct 16, 2007
My product was developed for and works correctly on SQL Server 2000. However, when we upgraded to 2005, we found that certain system stored procedures were different, causing our product to break.
We can easily change our stored procedures to work in 2005, but we have a large client base, some of whom will be using each version. Our current solution is to check the version of SQL Server during installation and choose which script to use at that time in order to have an appropriate stored procedure for that version, but we are concerned about users who install our product with SQL Server 2000 and then upgrade to SQL Server 2005.
How can I make a stored procedure that will run differently depending on the version? I tried something like:
if (select charindex('2000', @@version)) > 0
begin -- SQL Server 2000
SELECT
...
FROM
...
WHERE
end
else
begin -- SQL Server 2005
SELECT
...
FROM
...
WHERE
end
Unfortunately, the system tables I'm selecting from have different structures in the different versions (one example is msdb.dbo.sysjobschedules and msdb.dbo.sysschedules), and even though the code never gets into the SQL Server 2000 section on 2005, it parses the whole procedure for errors before allowing it to be saved, and will not allow this.
Any thoughts?
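One way around the parse problem is to hide the version-specific statements from the compiler with dynamic SQL (or to split them into two separate procedures and pick which to call). An EXEC string is only compiled when it runs, so the 2000-only column references are never validated on a 2005 server. A sketch - the column lists are purely illustrative:

IF CHARINDEX('2000', @@VERSION) > 0
BEGIN
    -- SQL Server 2000 branch: compiled only at execution time, so 2005 never parses it.
    EXEC('SELECT name, enabled FROM msdb.dbo.sysjobschedules')
END
ELSE
BEGIN
    -- SQL Server 2005 and later.
    EXEC('SELECT name, enabled FROM msdb.dbo.sysschedules')
END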
View 3 Replies