I've been trying to find an answer to the mystery of how date conversions differ between SQL Server and Excel. In Excel the number 37711 is displayed as the date '3/31/2003'. The same number in SQL Server yields '2003-04-02' (I used the following: select cast(37711 as datetime)).
Any idea what is going on here and how I might resolve this problem?
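For anyone hitting the same mismatch: the two-day gap comes from the different serial-date conventions. Excel's serial 1 is January 1, 1900, and Excel also counts a phantom February 29, 1900 (a deliberate Lotus 1-2-3 compatibility bug), while SQL Server's datetime zero is January 1, 1900. A minimal check, if you want the Excel interpretation on the SQL Server side:

-- Excel serials run two days ahead of SQL Server's datetime offsets:
-- one day because Excel starts at 1 rather than 0, and one day for the
-- phantom 1900-02-29 that Excel carries for Lotus 1-2-3 compatibility
select cast(37711 - 2 as datetime)   -- yields 2003-03-31, matching Excel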
I am working in SSRS 2005. I have three parameters on the report.
Parameter 1 is a date filter, which is a drop-down whose values are MTD, QTD, YTD...
The 2nd and 3rd parameters are fromdate and todate, which are datetime parameters.
When the user selects, say, MTD from parameter 1, I have a stored procedure that populates the fromdate parameter with the first date of the month and todate with today's date. The problem I am facing is that the moment these date parameters get populated, they turn into drop-downs. I want these date parameters to remain datetime so the user can still pick a date. The value I am passing to these parameters is of datetime type (Now()), yet the date parameter controls still show as drop-downs. I don't know how to handle it. Please help me if you have faced this kind of problem.
We are using a lookup transformation in SSIS 2012. The lookup transformation queries a table with two date columns. When we hover the mouse over the two columns in the 'Columns' tab of the lookup transformation editor, the two columns show as DT_WSTR instead of DT_DBDATE. This causes the SSIS package to fail due to a data type mismatch. A similar abandoned thread is available at: URL....
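If the lookup is pointed straight at the table, one thing that may be worth trying (a sketch with hypothetical column names; the real fix depends on why the metadata comes back as DT_WSTR) is to feed the lookup a SQL query with explicit casts so the two columns surface as date types:

-- hypothetical lookup query; lookup_key / date_col1 / date_col2 stand in
-- for the real column names
SELECT lookup_key,
       CAST(date_col1 AS date) AS date_col1,
       CAST(date_col2 AS date) AS date_col2
FROM dbo.LookupTable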
I downloaded this 7/6/07 and am trying to debug. I have a field that is PIC S9(5) and has 00 00 1D as its hex value. The component returns a DT_DECIMAL field value of 20201-.
I believe it thinks the 00 bytes are spaces and converts them to ASCII 20. Any ideas on what is wrong here? Is there another step I am missing after the conversion? Any help would be appreciated.
I have some huge tables (over 100GB each); these tables contain NTEXT and IMAGE columns. The application team needs to change these data types to VARBINARY(MAX). I tested the modification in our lab and noticed that the operation was almost immediate, so I think the DB engine has not converted the existing data in the column but has simply changed the definition of the column.
Or maybe NTEXT and IMAGE can be transparently converted into VARBINARY(MAX)?
Anyway, I want to be sure that the modified table is "coherent": I don't want SQL Server to tell me at some point that some data is not readable.
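For what it's worth, the usual pattern here (a sketch with hypothetical names; note that NTEXT converts to NVARCHAR(MAX) while IMAGE converts to VARBINARY(MAX)) is that the ALTER is metadata-only and existing values stay in the old text/image format until they are rewritten; an idempotent UPDATE forces the rewrite, and DBCC CHECKTABLE can confirm everything is still readable afterwards:

-- hypothetical table/column names
ALTER TABLE dbo.BigTable ALTER COLUMN BlobCol VARBINARY(MAX);   -- was IMAGE
ALTER TABLE dbo.BigTable ALTER COLUMN TextCol NVARCHAR(MAX);    -- was NTEXT
-- force existing rows to be rewritten in the new format (heavy on a 100GB table)
UPDATE dbo.BigTable SET BlobCol = BlobCol, TextCol = TextCol;
-- confirm every value is still readable
DBCC CHECKTABLE ('dbo.BigTable') WITH NO_INFOMSGS;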
I created a PDA application with a database, which has a table with a uniqueidentifier field as the primary key.
While doing the bulk insert from a dataset into the SQL Mobile database, it inserts the record but it does not insert the id that was entered into the SQL Server 2005 database; instead it creates a new id. The code is as below.
// build an adapter over the source query and let the command builder
// generate the insert commands used for the bulk insert
conAdap = new SqlCeDataAdapter(strQuery, conSqlceConnection);
SqlCeCommandBuilder cmdBuilder = new SqlCeCommandBuilder(conAdap);
I'm hoping someone can help me out - at least by pointing me to whom I can ask, if not answering the question directly. I have some encrypted values in a SQL Server 2000 database that I decrypt and use on a website that I just converted from .NET 1.1 to .NET 2. The data is pulled from the database using standard ADO with no changes between the .NET 1.1 version and the .NET 2 version - yet for some data entries, when the identical value is pulled by the .NET 2 code it is changed or shortened. For example, the 1.1 code traces out a value pulled from the db as: ᒪ࢖淨�d�把���媑쬹�䜻ꖉ��� The same value pulled from the database by .NET 2 looks like this: ᒪ࢖淨�d�把媑쬹�䜻ꖉ Do you know why the database value would be interpreted differently by .NET 2 than by .NET 1.1? How can I bring this in sync so that both 1.1 sites and 2 sites can use the same data?
There is an int field in my table called "WeekNo", and when I use ORDER BY WeekNo DESC I get the following result: 9 8 7 7 6 5 4 3 2 18 17 16 15 15 14 13 13 12 11 10 10 1
This does not seem right; can anyone comment on why I am getting this result?
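That ordering (9 before 18, 1 last) is character ordering, not numeric ordering, which usually means the column is actually char/varchar despite its name. A quick check, assuming every value parses as an integer (MyTable is a stand-in for the real table name):

-- if this returns 18, 17, 16, ... as expected, the column's data type is the culprit
SELECT WeekNo FROM MyTable ORDER BY CAST(WeekNo AS int) DESC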
Our Transactions/sec counter jumped quite a bit when we moved to SQL Server 2005. The move coincided with increased load so we didn't think anything of it until recently. Upon further review, the counter just seems too high.
There was an article in SQL Server magazine a few years ago by Brian Moran where he states, "Transactions/sec doesn't measure activity unless it's inside a transaction. Batch Requests/sec measures all batches you send to the server even if they don't participate in a transaction." He goes on to say that Transactions/sec will be skewed lower because it is a subset of Batch Requests/sec. (http://www.sqlmag.com/Article/ArticleID/26380/sql_server_26380.html)
The article was written for SQL Server 2000. We conducted tests in 2000 and found what he said to be right on the money. SELECT statements increased Batch Requests/sec, but not Transactions/sec. UPDATE/INSERT/DELETE statements increased both in lockstep. Makes perfect sense so far.
We conducted the same tests in 2005 and found a radically different story. While SELECT statements behaved the same, UPDATE/INSERT/DELETE statements showed Transactions/sec skyrocket 2-10x more than Batch Requests/sec for the duration of the statement. In other words, a single transaction submitted by our application fires off exponentially more transactions than the one we submitted. I was unable to pinpoint exactly what these "hidden" transactions were actually doing. Is this something that occurred in 2000 but simply wasn't reported? Or is it new behavior in 2005?
While trying to answer these questions we noticed a second strange behavior in 2005. When no queries are being executed the Transactions/sec counter still jumps every six seconds like clockwork. And these phantom transactions number in the thousands. We tried to use profiler to capture what SQL was being executed, but nothing shows up in any SQL Statement or Batch event. However, when we turned on the SQLTransaction event we found it, sort of. An object called GhostCleanupTask runs every six seconds causing thousands of transactions. We don't know exactly what it is doing, but we noticed that it ran consistently on some databases, but never on other databases. Both sets of databases are identical and in use.
So, all of this investigation leaves me with three final questions.
1. What is behind all the extra transactions caught by perfmon when I submit a single transaction?
2. What is GhostCleanupTask and why does it take so many transactions? (And why does it only run on certain databases?)
3. If a potential customer asks for our Transactions/sec count, is it accurate to give them the big number, knowing that our application is only actually submitting a fraction of that? On the other hand, the system apparently is actually doing that many transactions. (For instance, on our production server during peak, Batch Requests/sec is about 4,000, while Transactions/sec hits 26,000.)
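If it helps anyone reproduce this, both counters can also be read from inside the server rather than from perfmon (a sketch; these are cumulative values, so sample twice over a known interval and subtract to get a per-second rate):

-- cumulative counter values; diff two samples to compute a rate
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Transactions/sec', 'Batch Requests/sec');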
Depending on the printer, the report prints differently. Sometimes it's all messed up on certain printers; on others it prints fine. I see the problem mostly with older HP printers.
I have SQL stored proc that calls a CLR function. This function does a "select ... for xml" statement, manipulates the XML a little, and returns the manipulated XML to the stored proc.
This all works fine when I call the stored proc from a query window, but when I have BizTalk call the stored proc, the CLR function fails. I have a feeling this may have to do with BizTalk using MSDTC, but I am not sure.
Here's a code snippet from where the CLR function fails:
// open the in-process context connection, run the FOR XML query,
// and load the result into an XmlDocument
SqlConnection conn = new SqlConnection("Context Connection=true");
conn.Open();
SqlCommand cmd = new SqlCommand("Select * From Items FOR XML AUTO", conn);
XmlDocument xdoc = new XmlDocument();
xdoc.Load(cmd.ExecuteXmlReader());
Under BizTalk, the last line fails with: System.InvalidOperationException "Invalid command sent to ExecuteXmlReader. The command must return an Xml result."
Now, to see why XmlReader doesn't like the returned data, I changed the last two lines of that snippet to this:
SqlDataReader dr = cmd.ExecuteReader();
dr.Read();
Object obj = dr[0];
If I put a breakpoint after that last line, obj is of type string when I call the proc myself, but it is a byte[] under BizTalk. If I look at the bytes themselves, it's close to the expected XML... but with some non-text bytes sprinkled around. I can't seem to cast or encode the byte array into anything useful.
Anyone have any idea what is going on here? Why would the same code return different types based on a) who is calling it, or b) the type of transaction used?
I'm looking for the minimum date of an entry into a history table. The table contains multiple entries for the customer and the item with an activation and deactivation date for each entry.
I could use the following:
select customerId, item, min(activationDate) from history group by customerId, item
or a subquery
select customerId, item, activationDate
from history h1
where activationDate=(select min(activationDate) from history h2 where h2.customerId=h1.customerId and h2.item=h1.item)
How are these two queries parsed differently by SQL Server?
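One low-tech way to see how the server actually treats each of them is to compare the operator trees (run in Query Analyzer or Management Studio):

-- show the actual plan and per-operator row counts for each query
SET STATISTICS PROFILE ON;

SELECT customerId, item, MIN(activationDate)
FROM history
GROUP BY customerId, item;

SELECT customerId, item, activationDate
FROM history h1
WHERE activationDate = (SELECT MIN(activationDate)
                        FROM history h2
                        WHERE h2.customerId = h1.customerId
                          AND h2.item = h1.item);

SET STATISTICS PROFILE OFF;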
Same RDL, 2 different servers. I run the report on my computer and export to PDF, it prints properly. When the customer runs the report on their server (SSRS 2K5 SP1, same as mine), they get it displayed differently. The columns on the report extend to the next page and the lines are thicker.
Is this a formatting issue on the customer's PC? It uses standard fonts (Tahoma, Sans-serif).
I've written a script that should create a SPROC. It does, but I expect the SPROC to contain everything that is bold when I run this script (see below). It doesn't... when I run the script, it creates the required table, but the procedure itself gets created with only the italic text. What gives? Please throw me a bone here :)
USE [myproject];
GO
IF EXISTS ( SELECT * FROM INFORMATION_SCHEMA.ROUTINES WHERE SPECIFIC_SCHEMA = N'dbo' AND SPECIFIC_NAME = N'myproject_CreateTable_SendEmail_Errors' )
DROP PROCEDURE dbo.myproject_CreateTable_SendEmail_Errors
GO
CREATE PROCEDURE dbo.myproject_CreateTable_SendEmail_Errors
AS
GO
/****** Object: Table [dbo].[sendEmail_Errors] Script Date: 10/28/2006 05:31:30 ******/
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[sendEmail_Errors]') AND type in (N'U'))
DROP TABLE [dbo].[sendEmail_Errors]
GO
/****** Object: Table [dbo].[sendEmail_Errors] Script Date: 10/28/2006 05:14:09 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[sendEmail_Errors](
[errorID] [smallint] IDENTITY(1,1) NOT NULL,
[sendToEmail] [nchar](256) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[timeLogged] [datetime] NOT NULL CONSTRAINT [DF_sendEmail_Errors_timeLogged] DEFAULT (getutcdate()),
CONSTRAINT [PK_SendEmail_Errors] PRIMARY KEY CLUSTERED ( [errorID] ASC ) WITH (PAD_INDEX = OFF, IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO
EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'Error count', @level0type=N'SCHEMA', @level0name=N'dbo', @level1type=N'TABLE', @level1name=N'sendEmail_Errors', @level2type=N'COLUMN', @level2name=N'errorID'
GO
EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'This table is used to log failed calls to send a verification email to users. No Email was sent to the user.', @level0type=N'SCHEMA', @level0name=N'dbo', @level1type=N'TABLE', @level1name=N'sendEmail_Errors'
GO
USE [myproject]
GO
/****** Object: StoredProcedure [dbo].[myproject_CreateTable_SendEmail_Errors] Script Date: 10/31/2006 01:59:19 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[myproject_CreateTable_SendEmail_Errors]
AS
I have a database on a SQL Server 2000 (sp3a) installation. For some reason it's reporting time that is 7 hours ahead of the system time.
The application is on one server; the DB is on a shared production server. The app server and the DB server are reporting the same system time and are using a network time server. All the other DBs on the shared production DB server are reporting time correctly.
My questions:
Is there a T-SQL query to use to see what the time/timezone is for that database? Is there a T-SQL query I can use to set the db time (not the system time)? Anyone have any other suggestions as to what could be wrong?
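For what it's worth, time in SQL Server 2000 comes from the host operating system per instance, not per database, so there is no per-database clock to query or set. A quick sanity check to run on each instance involved:

-- both values come from the OS of the instance's host machine;
-- individual databases have no clock of their own
SELECT GETDATE() AS server_local_time, GETUTCDATE() AS server_utc_time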
Ok, I have a table with IP addresses stored in decimal format using both positive and negative numbers. The way they are stored is: positive 1 through 2147483647 = 0.0.0.1 - 127.255.255.255; negative -2147483648 through -1 = 128.0.0.0 - 255.255.255.255. Conversion: positive is x/2^24 . (x/2^24)/2^16 . etc.; negative is (x+2^32)/2^24 . ((x+2^32)/2^24)/2^16 . etc.
I have a script which works by using UNION, and the WHERE clauses are x>0 and x<0.
My problem is I need to use a 3rd party app (McAfee ePO) to run the script, and McAfee does not recognize the UNION. My question is: can I achieve the same results as the script below without using UNION?
SELECT ReportFullPathNode.FullPathName,
    cast(cast(IPSubnetMask.IP_Start as bigint)/16777216 as varchar) + '.' +
    cast(cast(IPSubnetMask.IP_Start as bigint)%16777216/65536 as varchar) + '.' +
    cast(cast(IPSubnetMask.IP_Start as bigint)%16777216%65536/256 as varchar) + '.' +
    cast(cast(IPSubnetMask.IP_Start as bigint)%16777216%65536%256 as varchar),
    cast(cast(IPSubnetMask.IP_End as bigint)/16777216 as varchar) + '.' +
    cast(cast(IPSubnetMask.IP_End as bigint)%16777216/65536 as varchar) + '.' +
    cast(cast(IPSubnetMask.IP_End as bigint)%16777216%65536/256 as varchar) + '.' +
    cast(cast(IPSubnetMask.IP_End as bigint)%16777216%65536%256 as varchar),
    cast(IPSubnetMask.LeftMostBits as varchar),
    IPSubnetMask.IP_Start
FROM IPSubnetMask, ReportFullPathNode ReportFullPathNode
WHERE IPSubnetMask.IP_Start>0 and IPSubnetMask.ParentID = ReportFullPathNode.LowestNodeID
UNION ALL
SELECT ReportFullPathNode.FullPathName,
    cast(cast(4294967296+IPSubnetMask.IP_Start as bigint)/16777216 as varchar) + '.' +
    cast(cast(4294967296+IPSubnetMask.IP_Start as bigint)%16777216/65536 as varchar) + '.' +
    cast(cast(4294967296+IPSubnetMask.IP_Start as bigint)%16777216%65536/256 as varchar) + '.' +
    cast(cast(4294967296+IPSubnetMask.IP_Start as bigint)%16777216%65536%256 as varchar),
    cast(cast(4294967296+IPSubnetMask.IP_End as bigint)/16777216 as varchar) + '.' +
    cast(cast(4294967296+IPSubnetMask.IP_End as bigint)%16777216/65536 as varchar) + '.' +
    cast(cast(4294967296+IPSubnetMask.IP_End as bigint)%16777216%65536/256 as varchar) + '.' +
    cast(cast(4294967296+IPSubnetMask.IP_End as bigint)%16777216%65536%256 as varchar),
    cast(IPSubnetMask.LeftMostBits as varchar),
    IPSubnetMask.IP_Start+4294967296
FROM IPSubnetMask, ReportFullPathNode ReportFullPathNode
WHERE IPSubnetMask.IP_Start<0 and IPSubnetMask.ParentID = ReportFullPathNode.LowestNodeID
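One way to fold the two branches into a single SELECT is to move the sign adjustment into a CASE inside a derived table, so the octet math is written once (a sketch; the adjustment is keyed off IP_Start's sign, mirroring the two original branches, and rows with IP_Start = 0 are excluded as before):

SELECT r.FullPathName,
    cast(u.StartU/16777216 as varchar) + '.' +
    cast(u.StartU%16777216/65536 as varchar) + '.' +
    cast(u.StartU%16777216%65536/256 as varchar) + '.' +
    cast(u.StartU%16777216%65536%256 as varchar),
    cast(u.EndU/16777216 as varchar) + '.' +
    cast(u.EndU%16777216/65536 as varchar) + '.' +
    cast(u.EndU%16777216%65536/256 as varchar) + '.' +
    cast(u.EndU%16777216%65536%256 as varchar),
    cast(u.LeftMostBits as varchar),
    u.StartU
FROM (SELECT ParentID, LeftMostBits,
             cast(IP_Start as bigint) + CASE WHEN IP_Start < 0 THEN 4294967296 ELSE 0 END AS StartU,
             cast(IP_End as bigint)   + CASE WHEN IP_Start < 0 THEN 4294967296 ELSE 0 END AS EndU
      FROM IPSubnetMask
      WHERE IP_Start <> 0) u
INNER JOIN ReportFullPathNode r ON u.ParentID = r.LowestNodeID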
I have an update trigger on a table. When a specific column is updated, I get the rowid from 'inserted' and then pass it via service broker to another database that will fire off a maintenance routine at a later time. This whole process seems to work fine if I update a single row at a time through Query Analyzer.
During testing (of the service broker part) I found that if in Query Analyzer I run an update that updates all of the records at once, then the trigger seems to fire only once for the entire process, therefore killing the rest of my process.
I would have thought that, regardless of how a record was being updated, the trigger would fire for each row.
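In SQL Server, triggers fire once per statement, not once per row, so a multi-row UPDATE presents all affected rows in the inserted/deleted pseudo-tables at once. A set-based sketch (with hypothetical names; the point is to capture every changed row, not just one):

-- inside the update trigger: collect ALL qualifying rows in one pass
-- (RowId / WatchedColumn / dbo.MaintenanceQueue are hypothetical names)
INSERT INTO dbo.MaintenanceQueue (RowId)
SELECT i.RowId
FROM inserted i
INNER JOIN deleted d ON i.RowId = d.RowId
WHERE i.WatchedColumn <> d.WatchedColumn   -- only rows where the column changed

Each queued RowId can then be forwarded over Service Broker, either by looping over the captured set or by an activation procedure draining the queue table.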
I am trying to import some data from CSV files. When I try it using BULK INSERT I get a conversion error. When I use the exact same format file and data file with OPENROWSET it works fine. I would prefer to use BULK INSERT, as I can make some generic stored procedures to handle all my imports and not have to code the column names in the SQL. Any suggestions?
BULK INSERT stuff
FROM 'c:\projects\testdata\list.txt'
WITH
(FORMATFILE='c:\projects\testdata\myformat.xml')
insert into stuff (ExternalId, Description, ScheduledDate, SentDate, Name)
select *
from OPENROWSET (BULK 'c:\projects\testdata\list.txt',
FORMATFILE='c:\projects\testdata\myformat.xml')
as t1
The destination table has more columns than the data file. The field IDs represent the ordinal position of the columns in the destination table. Column 1 in the destination table is an int identity. The conversion failure comes from trying to convert column 5 to int, which makes me think BULK INSERT is ignoring the name attributes in the XML and just trying to insert the columns into the table in order, without skipping any.
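One documented way around this, if it applies here, is to point BULK INSERT at a view that exposes only the columns present in the data file, so the identity column is skipped entirely (a sketch using the names above):

-- the view exposes only the data-file columns, in file order
CREATE VIEW dbo.stuff_import AS
SELECT ExternalId, Description, ScheduledDate, SentDate, Name FROM dbo.stuff;
GO
BULK INSERT dbo.stuff_import
FROM 'c:\projects\testdata\list.txt'
WITH (FORMATFILE = 'c:\projects\testdata\myformat.xml');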
Hi! A larger SP runs OK in the console. When called in VB.NET 1.1, results get truncated and some things don't run at all. The connection is ODBC. Small SPs run OK. Is this default behavior or something common? Are there VB parameters to let a larger SP run without interruption? -Bahman
SSIS is behaving differently in different environments even though the code is the same.
One thing that is not working correctly: I am converting a string column to a float data type in a Data Conversion transformation. In our local environments the package works fine, but in the production environment it does not work correctly; it is unable to convert the data and throws an error:
"The data value cannot be converted for reasons other than sign mismatch or data overflow"
The following code does not function if I use the SQLOLEDB provider; if I omit the provider and default to the ODBC OLE DB provider it works correctly. I assume I am coding something wrong for the SQLOLEDB provider. Any help is greatly appreciated.
VB Code
Public Function SqlExecuteResult(xSQL As String, sServer As String, sDatabase As String, sUserName As String, sPassword As String, sCaller As String, Optional bLog As Boolean = False) As Object
    Dim oDB As Object
    Dim oRS As Object
    Set oDB = CreateObject("adodb.connection")
    Set oRS = CreateObject("adodb.recordset")

Private Sub Form_Load()
    Dim rs As Object
    Set rs = SqlExecuteResult("exec NextEntry 'SentMessages'", "surecomp-bob", "pmsureus33", "sa", "", "")
    MsgBox rs.fields(0)
End Sub
SQL procedure
CREATE PROCEDURE NextEntry @CounterName Varchar(20) AS
begin
    declare @counter int
    select @counter = counter from counters where countername = @counterName
    select @counter = @counter + 1
    update counters set counter = @counter where countername = @countername
    select counter from counters where countername = @counterName
End
GO
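A guess, but a common culprit with SQLOLEDB: the UPDATE inside the procedure emits a "rows affected" count, which ADO surfaces as an empty first recordset, so rs.fields(0) fails; the ODBC provider happens to skip past it. Adding SET NOCOUNT ON at the top of the procedure suppresses those counts:

CREATE PROCEDURE NextEntry @CounterName Varchar(20) AS
begin
    set nocount on  -- suppress rows-affected messages so the final SELECT
                    -- is the first (and only) recordset ADO sees
    declare @counter int
    select @counter = counter from counters where countername = @counterName
    select @counter = @counter + 1
    update counters set counter = @counter where countername = @countername
    select counter from counters where countername = @counterName
End
GO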
The date in SQL appears like this: '07/25/2013 00:00:00', but when I export to Excel the date shows like this: '22-JUL-81 12.00.00.000000000 AM'. When I change the format in Excel nothing happens.
If I run the same FOR XML query in a Developer Edition environment and an Enterprise Edition environment, the results are different. The query is exactly the same.
Here is the query:
DECLARE @MessageBody XML
DECLARE @AuditTable SYSNAME
DECLARE @SendTrans BIT
DECLARE @SendAudit BIT
DECLARE @RecordCount INT
DECLARE @OperationType CHAR(1)

SET @RecordCount = @@ROWCOUNT
SET @OperationType = 'U'
SET @SendTrans = 1
SET @SendAudit = 1
SET @AuditTable = 'States'

SELECT @MessageBody =
(
    SELECT * FROM
    (
        SELECT TOP 10 'INSERTED' AS ActionType, @SendTrans AS SendTrans, @SendAudit AS SendAudit,
               COLUMNS_UPDATED() AS ColumnsUpdated, GETDATE() AS AuditDate, @AuditTable AS AuditTable,
               'test' AS UserName, @RecordCount AS RecordCount, *
        FROM l_states
    ) AuditRecord
    FOR XML AUTO, ROOT('AuditTable'), BINARY BASE64
)
SELECT @MessageBody
In my DEV env (Developer Edition), this result is produced: <AuditTable> <AuditRecord ActionType="INSERTED" SendTrans="1" SendAudit="1" AuditDate="2007-06-22T15:43:12.497" AuditTable="States" UserName="test" RecordCount="1" StateAbbreviation="AK" State="Alaska" /> <AuditRecord ActionType="INSERTED" SendTrans="1" SendAudit="1" AuditDate="2007-06-22T15:43:12.497" AuditTable="States" UserName="test" RecordCount="1" StateAbbreviation="AL" State="Alabama" /> </AuditTable>
In my Enterprise Edition env, this is the result: <AuditTable> <AuditRecord ActionType="INSERTED" SendTrans="1" SendAudit="1" AuditDate="2007-06-22T15:44:48.230" AuditTable="States" UserName="test" RecordCount="1"> <l_states StateAbbreviation="AK" State="Alaska" /> <l_states StateAbbreviation="AL" State="Alabama" /> </AuditRecord> </AuditTable>
Does anyone have any idea what might be wrong? Any help is greatly appreciated. Tim
I am converting old MS Access queries to T-SQL and ran into a problem: the same update queries returned different results. The idea is to subtract each of the amounts in Table2 from Table1:
Source sample tables and content:
Table1
ID  Amount
1   100

Table2
ID  Amount
1   10
1   20
1   30
In Access (Orginal source): UPDATE Table1 INNER JOIN Table2 ON Table1.ID = Table2.ID SET Table1.Amount = Table1.Amount - Table2.Amount
In T-SQL (Converted): UPDATE Table1 SET Table1.Amount = Table1.Amount - Table2.Amount FROM Table1 INNER JOIN Table2 ON Table1.ID = Table2.ID
The syntax for T-SQL is different from Access. When both queries are run on their respective databases, Table1.Amount in Access became 40 (100 - 10 - 20 - 30), but Table1.Amount in SQL became 90 (100 - 10).
It looks as if T-SQL only applied one row? Or it could be that in T-SQL, updates are written to the database in batches, hence Table1.Amount was not updated for all update instances? Any help would be greatly appreciated.
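This matches T-SQL's documented behavior: when an UPDATE ... FROM join matches multiple source rows for one target row, the target row is updated only once, with an arbitrary one of the matches (here the 10). Pre-aggregating the source makes the subtraction deterministic; a sketch:

-- sum Table2 per ID first, then subtract once
UPDATE t1
SET t1.Amount = t1.Amount - t2.Total
FROM Table1 t1
INNER JOIN (SELECT ID, SUM(Amount) AS Total
            FROM Table2
            GROUP BY ID) t2
    ON t1.ID = t2.ID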
I am trying to output data from my SQL table to an Excel spreadsheet and send it by email, which works fine. The problem is he wants the date to be in the format d-mmm-yy, which is easy to format in Excel manually, but he does not want to do this manually. I tried to do this when I select the date from the table to the spreadsheet, "select convert(char,value_date,106) from table", but this doesn't carry over to the Excel spreadsheet; I get my results on the spreadsheet as dd/mm/yy. Can you please help, either to set the date in Excel permanently to the format "d-mmm-yy" or to force this output to Excel?
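On the SQL side, one trick that may get close (a sketch: style 6 gives 'dd mon yy', and replacing the spaces yields 'dd-Mon-yy'; note single-digit days keep a leading zero, so it is dd-mmm-yy rather than strictly d-mmm-yy):

-- '25 Jul 13' -> '25-Jul-13'; the result is a string, so Excel should
-- treat it as text rather than reformat it as a date
select replace(convert(char(9), value_date, 6), ' ', '-') from table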
In my Excel file the date column contains some nulls. In Data Conversion I am converting this Date column to Date [DT_DATE]. When I run the package it gives the error "Cannot convert Date to Copy of Date". This error occurs because of the nulls in the Date column. How do I solve this error?
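One common workaround (a sketch, with [Date] standing in for the real column name): do the cast in a Derived Column transformation with a null guard instead of the Data Conversion component, using an SSIS expression along these lines:

ISNULL([Date]) ? NULL(DT_DATE) : (DT_DATE)[Date]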
I am able to get reports going with tables sized properly. They look fine on the Report Server website, and I adjust the column widths so that the headings and data look nice. When I set up a subscription to be delivered by "Report Server E-Mail," though, the table formatting gets completely distorted.
In particular, I have two tables, with some column headers being two short words (e.g. Max Height). When rendering on the site, I adjust the columns so the full column header is visible on one line. When I receive the email and read it in Outlook, the header row is now about twice as tall and everything is scrunched together. Both the headings and the data in the fields do not format the same as on the website.
The two tables tend to actually have the exact same width in the email version, although occasionally they are a little different (in the web version one is about half as wide as the other). I have tried just making the columns bigger and that has not worked. I've tried making the font sizes smaller, which didn't work. If I do that, leaving the columns the same width, the email version just gets scrunched into a smaller area with the same text-wrapping problems.
If I open the email in a browser (in a web mail interface) the report renders perfectly as on the site.
I have almost all the default settings, and haven't been messing around with page sizes and things like that (except afterwards, to see if that would fix the problem).
Any ideas, similar experiences, or suggestions? If there is a book I should read or a reference you could point me to in order to figure this out, that would be helpful. I haven't been able to understand this using either web searches or the two SQL Reporting Services books I have.
I have a report where I use Globals!ReportName in the header of the report for the report title. In Development and on SSRS stand alone the value for Globals!ReportName is in mixed case and the file extension is omitted. When the report is published to a MOSS server integrated with SSRS the value for Globals!ReportName is all in lower case and the file extension is included.
Is there any reason for this change in behavior and is there a way I can put back the mixed case and omit the file extension?
My product was developed for and works correctly on SQL Server 2000. However, when we upgraded to 2005, we found that certain system stored procedures were different, causing our product to break.
We can easily change our stored procedures to work in 2005, but we have a large client base, some of whom will be using each version. Our current solution is to check the version of SQL Server during installation and choose which script to use at that time in order to have an appropriate stored procedure for that version, but we are concerned about users who install our product with SQL Server 2000 and then upgrade to SQL Server 2005.
How can I make a stored procedure that will run differently depending on the version? I tried something like:
if (select charindex('2000', @@version)) > 0
begin
    -- SQL Server 2000
    SELECT ...
    FROM ...
    WHERE ...
end
else
begin
    -- SQL Server 2005
    SELECT ...
    FROM ...
    WHERE ...
end
Unfortunately, the system tables I'm selecting from have different structures in the different versions (one example is msdb.dbo.sysjobschedules vs. msdb.dbo.sysschedules), and even though the code never gets into the SQL Server 2000 section on 2005, the server parses the whole procedure for errors before allowing it to be saved, and will not allow this.
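One way around the parse-time check is to push each version-specific statement into dynamic SQL, which is only compiled when that branch actually executes (a sketch; the elided column lists are yours to fill in):

if charindex('2000', @@version) > 0
    exec('SELECT ... FROM msdb.dbo.sysjobschedules ...')  -- compiled only if run on 2000
else
    exec('SELECT ... FROM msdb.dbo.sysschedules ...')     -- compiled only if run on 2005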
I export a table to an Excel file by using DTS. The date field shows as ###### when I open the Excel file. If I expand the column I see the date. Is there any way I can export so that this date field will not show up as ######?