Confirmation On Script Task Behavior
May 1, 2007
Hello,
I am looking for some confirmation on a behavior of the SSIS Script Task. I have a custom script task that takes an input file, and archives it after it has been processed into the database.
When I run this package in the Visual Studio GUI, if the destination drive is full, it throws an exception telling me that there is not enough disk space. So, my questions are:
1) If this happens when the package is running through the command line, would this exception still be thrown? (I am thinking it will be)
2) Do I need to explicitly fail the script task in the event handler to ensure that the thrown .NET exception causes the component to fail? (I am fairly certain I do, since this is what I had to do inside the Visual Studio GUI, but does anyone know whether the same behavior occurs when running from the command line?)
Thanks,
Chris
View 1 Replies
Jan 20, 2006
I'm using a For Loop container with an Execute Package Task inside, looping until a folder is empty. I've noticed some strange behaviors:
1. The child package keeps creating new connections. I start with 3 connections to the DB and when the For Loop container is done I've got 364 connections.
2. The Execute Package Task is pulling the wrong version of the package I've specified. I'm using a package saved to the File System and there's only one copy on the drive. I've verified the path is going to the correct location.
Does anyone have a work-around for the 'connection generation' issue?
TIA
Eric
View 1 Replies
Jan 3, 2008
Hello all-
I am currently trying to deploy a production package to our servers and am seeing some very strange behavior. I am attempting to use a File System Task inside a Foreach Loop Container that renames (i.e. moves) a file from a processing location, appends a date, and places it in a different directory. The source file is coming from a connection manager whose file name is generated dynamically. The destination file is coming from a variable that is set in the foreach as a preceding task. I experience two different behaviors for the File System task depending on where/how I run the package...
Inside the VS solution on the server & from the SSIS MSDB store on the server
The package behaves as intended. The file is correctly processed then renamed/moved to the correct location.
From the SSIS MSDB store on my local machine
The file is correctly processed, however strange behavior occurs in the renaming task. The informational message in the execution window says that it deleted the newly renamed file (the same message displayed in the VS solution results). Upon checking the file system, the file is neither in the source directory nor in the destination.
Does anyone have some insight as to why these might be behaving differently? My only guess is that it's some sort of file system permission (even though I have domain credentials for a local admin on the server), but I can't imagine why it would allow me to delete the file and not rename it... Though finding the source of the problem would be nice, my biggest question is whether or not the package will behave correctly if I setup a schedule for it.
View 6 Replies
Dec 20, 2007
Hello Everyone,
I've seen many entries about trailing spaces but have not found one like this.
In the Control Flow I am using an "Execute SQL Task" to populate some SSIS local variables (type string) by: (1) executing a SQL stored proc with output variables (type varchar(100)) to (2) be mapped to the local variable name (the parameter mapping Data Type is VARCHAR).
One of these mapped outputs is used as a path for subsequent operation in the Control Flow. At execution the sproc fires, populating the local variable with the path but with trailing spaces out to 255. Later in the "Script Task" when that path is used I receive an error telling me that the path is too long, and something about 260 or 246 characters.
Here's the oddity. I have two desktop environments running XP and a server environment (server 2003). This package runs just fine on the server - no trailing space issue, no need to trim. But on both my desktops I get the errors. By adding trim statements I can get back the correct path, but varchars should not be including trailing spaces, and the sproc return variable is a varchar (100).
I know this sounds like numerous other posts which indicate the solution is to trim, but I think the question I am asking is why does it work on the server but not on the desktops? Is the SSIS variable type string experiencing a bug on different OSs?
Not to further complicate the issue but it used to work on my laptop, but through a horrible sequence of events I had to reload the studio in which case the error started to happen on that too.
View 6 Replies
Apr 16, 2006
I ran into a variety of problems trying to set a script task breakpoint in a package containing multiple script tasks. The debugger apparently treats the breakpoint as if it were set in ALL tasks in the package, not just the one in which it is actually set.
At best this results in hitting breakpoints in scripts where they are not set and at worst the debugger brings up the "Send error report" dialog and quits (while the package continues to run). The latter seems to happen most often when the script with the breakpoint has more lines than an earlier script and the breakpoint is set at a line number that exceeds the number of lines in the earlier script--it bombs when the earlier, shorter script starts.
To get the debugger to work under these circumstances I had to add some nonsense code like
While False
Dim i as Integer = 0
End While
to every script, at the same line number near the beginning of the script (line 40, for example). I then set a breakpoint on the middle statement in one of the scripts (doesn't matter which) to cause the debugger to open at runtime. It doesn't hit the breakpoint because the line is never executed. If the breakpoint is set on a line that can be executed in any script in the package, bad things tend to happen.
I then add a "stop" statement to the script that I want to debug. This only works if the debugger is already open, hence the dummy breakpoint above.
This workaround is usable, but I am debugging a package that has quite a few scripts and having to insert dummy code in all of them at a fixed line number is rather inconvenient. I would really like to see breakpoints work the way one would expect--only in the scripts where they are set.
Is there some other, easier way around this problem? Is there at least an easier way to get the debugger to open so that "stop" will work?
View 3 Replies
Apr 17, 2007
I have a task to perform a new SQL Server 2005 installation at a client.
They have a system with 10 external SCSI drives, each of 72GB. They only have one database of 80GB in use at this system.
1) Userdata: I think that I will put 5 disks at SCSI channel 1, with RAID 3 or 6, with formatted space of 140GB.
2) Log: I think that I will put 3 disks at SCSI channel 2, with RAID 5, with formatted space of 70GB.
3) TempDB: I think that I will put 2 disks at SCSI channel 3, with RAID 0, with formatted space of 140GB.
System already has two built-in drives (RAID 1) for operating system, where I think I will put system databases.
I am open for suggestions and improvements!
Peter Larsson
Helsingborg, Sweden
View 19 Replies
May 22, 2007
I am working with a client who has all databases, logs, tempdb and user/system databases on a single raid set.
In order to make some speed improvement, we have decided to move log files to a second raid controller card.
To accomplish this, we are thinking about taking these steps:
1) Detach database XYZ.
2) Stop database service.
3) Create a new partition on new controller card.
4) Copy old log files from I: to new K: drive.
5) Remove drive letter I: from system.
6) Change drive letter K: to I:
7) Attach database
My question is, does SQL Server recognize the log files automatically, since they are placed at same logical position (i: drive)?
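For reference, a minimal sketch of the detach/attach pair, assuming the database is named XYZ and using illustrative file paths; naming the files explicitly on attach removes any guesswork about whether SQL Server finds the log on its own:

-- Detach before stopping the service and moving the files
EXEC sp_detach_db @dbname = 'XYZ'

-- ...copy the log files and swap the drive letters...

-- Re-attach, naming every file explicitly
EXEC sp_attach_db
     @dbname    = 'XYZ',
     @filename1 = 'D:\Data\XYZ.mdf',
     @filename2 = 'I:\Logs\XYZ_log.ldf'

If the log files end up at exactly the same path as before (same I: drive and folder), attaching with the original paths should work, but listing them explicitly makes the intent clear either way.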
Peter Larsson
Helsingborg, Sweden
View 5 Replies
Feb 13, 2004
Just a thought.
If an issue is resolved please note that the problem has been resolved. Because there are "many ways to skin a cat" it would be helpful to anyone else with a similar problem (or someone trying to learn) what the solution was.
View 7 Replies
Apr 8, 2008
Hello all,
I have 2 primary key fields, ssn and refnum. If the data in the file is duplicated, it will not import into my table, right, even though I am using DTS to do my import? Or do I need to add an extra validator in there?
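For reference, a sketch of the kind of composite key involved (table and column definitions are illustrative); duplicate (ssn, refnum) pairs are rejected by the key itself, and whether DTS skips the offending row or fails the batch depends on the task's error-count and batch-size settings:

-- Illustrative target table: the composite primary key enforces uniqueness
CREATE TABLE dbo.ImportTarget
(
    ssn    char(9) NOT NULL,
    refnum int     NOT NULL,
    CONSTRAINT PK_ImportTarget PRIMARY KEY (ssn, refnum)
)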
View 2 Replies
Jun 7, 2005
We have a project at work that requires us to run an MSDE database from CD. My supervisor and I disagree on whether this can be done. We currently have an application written in ASP that uses an Access DB. We copy the *.mdb file and all the ASP files to CD and use the app to query the DB. Now we have a client who wants the same capability, only using MSDE. They will not allow us to install the full version of SQL on their machines. No one in the department wants to re-write this app, so we're trying to figure out if we can make it work with MSDE.
So, my first question is: does anyone think it is possible to just copy the *.mdf file to CD (mind you, WITHOUT the *.ldf file - you can't write to a CD)?
My second question is: can a SQL database run without both files (mdf, ldf) in any situation? (I believe this is impossible, but would like to hear it from someone with more experience.)
Any comments are greatly appreciated. Thanks.
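For what it's worth, here is a sketch of the single-file attach syntax (database name and path are illustrative). Note that attaching a lone .mdf makes SQL Server build a brand-new .ldf, and that new log file needs writable media, so the CD alone cannot satisfy that part:

-- Attach an .mdf without its .ldf; a new log file is created, which
-- requires a writable location (name and path are placeholders)
EXEC sp_attach_single_file_db
     @dbname   = 'MyCatalogDb',
     @physname = 'D:\Data\MyCatalogDb.mdf'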
View 1 Replies
May 24, 2007
Hi.
I am about to write my registration confirmation tables. Right now I have my registration form and a table "Users" that receives input from user registration. I want to do email confirmation of a registered account. I was thinking of doing this by creating a table "Verify" with a flag that is set to 0 if not verified and 1 once the user verifies his email address.
Then I started thinking about the original registration information. If a user doesn't verify, then there is all this dormant data in the "Users" table. So maybe it is better to write all the registration data to the "Verify" table and then have SQL move that data to the "Users" table upon verification? This sounds a bit sloppy to me... is this how email verification is usually done or, if not, can somebody suggest a best practice?
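One common pattern, sketched below with illustrative names, is to keep a single Users table, add a verification flag, and hold the emailed code in a small side table so unverified registrations can be purged on a schedule:

-- Sketch only; table and column names are illustrative
ALTER TABLE dbo.Users ADD
    IsVerified bit NOT NULL CONSTRAINT DF_Users_IsVerified DEFAULT (0)

CREATE TABLE dbo.UserVerification
(
    UserID      int              NOT NULL PRIMARY KEY
                REFERENCES dbo.Users (UserID) ON DELETE CASCADE,
    Token       uniqueidentifier NOT NULL DEFAULT NEWID(),  -- value emailed to the user
    RequestedAt datetime         NOT NULL DEFAULT GETDATE()
)

-- A scheduled cleanup can then remove stale, unverified registrations
DELETE U
FROM dbo.Users AS U
JOIN dbo.UserVerification AS V ON V.UserID = U.UserID
WHERE U.IsVerified = 0
  AND V.RequestedAt < DATEADD(day, -7, GETDATE())

This avoids moving rows between tables; verification just flips the flag and deletes the token row.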
I thank you.
View 3 Replies
Mar 12, 2004
I need confirmation from you SQL Server experts out there. Please let me know if the following works. Thanks!
This stored procedure gets a value and increments by 1, but while it does this, I want to lock the table so no other processes can read the same value between the UPDATE and SELECT (of course, this may only happen in a fraction of a second, but I anticipate that we will have thousands of concurrent users). I need to manually increment this column because an identity column is not appropriate in this case.
BEGIN TRANSACTION
UPDATE forum WITH (TABLOCKX)
SET forum_last_used_msg_id = forum_last_used_msg_id + 1
WHERE forum_id = @forum_id
SELECT @new_id = forum_last_used_msg_id
FROM forum
WHERE forum_id = @forum_id
COMMIT TRANSACTION
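For comparison, a variant that folds the read into the UPDATE itself, so the new value is captured in the same statement that takes the lock (a sketch using the same table and variables):

BEGIN TRANSACTION
UPDATE forum WITH (TABLOCKX)
SET @new_id = forum_last_used_msg_id = forum_last_used_msg_id + 1
WHERE forum_id = @forum_id
COMMIT TRANSACTION

Since the WHERE clause touches a single forum_id, a row-level lock (UPDLOCK/ROWLOCK hints) may be enough instead of locking the whole table, but that is a separate tuning question.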
View 10 Replies
Jan 23, 2008
Hi to all of you. I'm working on a project to store car information. When the user who adds a new record has entered all the data, it needs to be printed with a particular number, which must be unique (take a passport number as an example). This generated number takes some info from the filled-in fields, for example:
Cars Brand: BMW (catalog number: 120)
Cars Model: 325 (catalog number: 30)
Year: 2008 (catalog number: 18)
So the unique certificate number is taken from the catalog numbers: for the car's brand, catalog number * 20 / 5;
for the car's model, catalog number * 3 + 15.
CERTIFICATE NUMBER: 480-105
I'm not sure if this is possible.
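If the catalog numbers are stored as columns, a computed column can derive the certificate number automatically; a sketch with illustrative table and column names, using the formulas above:

CREATE TABLE dbo.CarCertificates
(
    CarID          int IDENTITY(1,1) PRIMARY KEY,
    BrandCatalogNo int NOT NULL,   -- e.g. 120 for BMW
    ModelCatalogNo int NOT NULL,   -- e.g. 30 for 325
    YearCatalogNo  int NOT NULL,   -- e.g. 18 for 2008
    CertificateNumber AS
        CAST(BrandCatalogNo * 20 / 5 AS varchar(10)) + '-' +
        CAST(ModelCatalogNo * 3 + 15  AS varchar(10))   -- 120 and 30 give '480-105'
)

Note that the formula by itself does not guarantee uniqueness (two cars with the same brand and model produce the same number), so a separate unique value such as the IDENTITY column is still needed if the certificate must be unique per car.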
I would really appreciate your help with this issue.
Best Regards
Gian
View 2 Replies
Nov 13, 2006
How can we tell whether a stored procedure compiled successfully when we create it on the server as part of a T-SQL script, since SQL Server does not print a message the way Oracle does?
In Oracle, the system lets you know whether the procedure was successfully compiled or not.
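One way to confirm it after the fact, sketched with a hypothetical procedure name, is to check the catalog right after the CREATE batch; if the CREATE raised an error, the object will not exist:

-- Run in the batch after CREATE PROCEDURE; the name is illustrative
IF OBJECT_ID('dbo.usp_MyProc') IS NOT NULL
    PRINT 'dbo.usp_MyProc created successfully.'
ELSE
    PRINT 'dbo.usp_MyProc was not created - check the error output.'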
Thanks/
View 6 Replies
Sep 9, 2007
Hi
How can I check whether "Native Client" is present on my client machine? I'm trying to eliminate all the reasons my client won't run a VC++ app which accesses a SQL Server.
thanks
Z
View 4 Replies
Jun 5, 2007
Hi.
I have set up my confirmation system to work as follows:
1) user registers
2) registration goes into a temp table with a code
3) user gets a MD5 code in his inbox
4) user clicks back
5) click-back page moves registration data from TEMP table to USER table and sends the user to the signin page.
6) ...
Now, my question is: how do I handle this final step in account confirmation? Should the first sign-in act as the final confirmation of the user's account? That makes sense to me. Should the data in the USER table self-delete after a day if there has been no first sign-in? Should I have a column in the USER table to show whether the first login has happened? How should I do this?
I am sure I could mess around with this, but it would be great to get feedback from somebody who has done this multiple times and has a sense of what the best practice is (based on large-volume examples). Your feedback is much appreciated.
Thanks.
View 2 Replies
Jul 23, 2007
I have discovered what looks like a bug in the optimiser. I've posted it at https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=288243 but I wonder if any of you with SQL 2005 RTM, 2005 SP1 or 2008 CTP could confirm when this was introduced and whether it is still an issue?
Code Snippet
-- Bug report
-- 2007/07/19
-- Alasdair Cunningham-Smith
-- alasdair at acs-solutions dot co dot uk
set nocount on
go
-- example date in in British date format
set dateformat dmy
go
use tempdb
go
create table foo( bar varchar( 30 ) not null )
go
insert into foo( bar ) values ( 'fishy' )
insert into foo( bar ) values ( '19/07/2007' )
go
-- this works fine in all versions - only valid dates are passed to the convert function
select
convert( smalldatetime, bar, 103 ) as bardate
from
foo
where
bar like '__/__/____'
go
-- this works on SQL 2000, but fails on SQL 2005 SP2 (I've not tried other SPs of SQL 2005):
-- Msg 295, Level 16, State 3, Line 2
-- Conversion failed when converting character string to smalldatetime data type.
--
-- I believe the query is rewritten as if the derived table query contained
-- "and convert( smalldatetime, bar, 103 ) < getdate()"
-- which would expose the convert to the invalid data
select
*
from
(
select
convert( smalldatetime, bar, 103 ) as bardate
from
foo
where
bar like '__/__/____'
) as derived
where
bardate < getdate()
go
-- Workaround:
-- Use a case statement to protect the convert operator from the invalid data
select
*
from
(
select
case when bar like '__/__/____' then
convert( smalldatetime, bar, 103 )
else
null
end as bardate
from
foo
where
bar like '__/__/____'
) as derived
where
bardate < getdate()
go
drop table foo
go
The workaround I discovered is simple but ugly. I invite your comments...
alasdair.
View 5 Replies
Jul 21, 2015
I have the following UNION ALL query with SELECT INTO @tblData temp table. I would like to confirm if my query is correct.
In my first SELECT statement, I have INSERT INTO @tblData.
Do I need another INSERT INTO @tblData for the second SELECT statement after the UNION ALL?
DECLARE @BeginDate as Datetime
DECLARE @EndDate as Datetime
SET @BeginDate = '7/1/2015'
SET @EndDate = '7/13/2015'
DECLARE @tblData table
[Code] ....
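For reference, a single INSERT can feed every SELECT joined by UNION ALL, so no second INSERT INTO @tblData is needed; a sketch with illustrative table and column names, reusing the date variables declared above:

DECLARE @tblData table (OrderID int, OrderDate datetime)

-- One INSERT covers both halves of the UNION ALL
INSERT INTO @tblData (OrderID, OrderDate)
SELECT OrderID, OrderDate
FROM dbo.CurrentOrders
WHERE OrderDate BETWEEN @BeginDate AND @EndDate
UNION ALL
SELECT OrderID, OrderDate
FROM dbo.ArchivedOrders
WHERE OrderDate BETWEEN @BeginDate AND @EndDate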
View 3 Replies
Jun 21, 2007
OK. I give up and need help. Hopefully it's something minor ...
I have a dataflow which returns email addresses to a recordset.
I pass this recordset into a ForEachLoop configuring the enumerator as (Foreach ADO Enumerator). I also map the email address as a variable with index 0.
I then have a Execute SQL task which receives this email address as a varchar variable (parameter 0) which I then use in my SQL command to limit the rows returned. I have commented out the where clause and returned all rows regardless of email address to try to troubleshoot this problem. In either event, I then use a resultset to store the query result of type object and result name 0.
I then pass this resultset into a script variable to start parsing the sql rows returned as type object. ( I assume this is the correct way to do this from other prior posts ...).
The script appears to throw an exception at the following line. I assume it's because I'm either not passing in the values properly or the query doesn't return anything. However, I am certain the query works as it executes just fine at the command prompt.
Try
ds = CType(Dts.Variables("VP_EMAIL_RESULTS_RS").Value, DataSet)
My intent is to email the query results to each email address with the following type of data by passing the parsed data from the script to a send mail task. Email works fine and sends out messages but the content is empty. I pass the parsed data as string values to the messagesource and define the messagesourcetype as a variable in the mail task.
part number leadtime
x 5
y 9
....
Does anyone have any idea what I might be doing wrong?
thanks
John
View 5 Replies
Feb 6, 2007
This was originally posted on DBForums.com, so here is the link:
http://www.dbforums.com/showthread.php?t=1614086
Since some of the Microsoft staff come around here occasionally, I figured I should at least link to it here. This is the gist of the problem, though. I was asked to come up with a script to create all required data directories in case an emergency was declared, and someone had to rebuild one of our database servers. Most of you are probably thinking of hitting up the sysaltfiles table about now, but this will turn into a cautionary tale. Try it if you dare. The one requirement is that you install the data for SQL Server in a non-standard directory that has a short path (such as C:\MSSQL8, instead of the whole C:\Program Files...).
What I am unclear on is whether this is a problem in the reverse function, the r(l)trim function, or the fixed-width datatype. I have confirmed that transferring the data to a temp table did not eliminate the...oddity.
select filename
from master..sysaltfiles
where dbid = 2
go
select reverse(rtrim(filename)), filename
from sysaltfiles
where dbid = 2
go
select reverse(rtrim(filename))
from sysaltfiles
where dbid = 2
I have also had two independent DBAs confirm this oddity exists, so this should be relatively easy to replicate.
View 10 Replies
Aug 29, 2007
I have three tables I am using: aspnet_Users, Stories, and CustomizedStory. Stories and CustomizedStory are related via a foreign key StoryID. I've set up the tables so that when I delete a Story row it cascade-deletes the corresponding row from CustomizedStory. Each CustomizedStory row has a reference to UserID from aspnet_Users. Since I didn't want to mess with the table definition by adding a cascade delete option on aspnet_Users, I decided to use a trigger that essentially deletes all customized stories and their associated stories if a user is deleted:

ALTER TRIGGER [dbo].[DeleteCustomizedStories]
ON [dbo].[aspnet_Users]
FOR DELETE
AS
BEGIN
    DELETE FROM dbo.Story
    WHERE StoryID = (SELECT StoryID FROM dbo.CustomizedStory
                     WHERE UserID = (SELECT UserID FROM deleted))
END

The problem I am having is that it deletes all of the CustomizedStory rows as specified by the cascading option, but doesn't delete the Story rows. I can't seem to understand why this is happening, especially when I explicitly told it to delete Story rows.
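For comparison, a set-based sketch that joins to the deleted pseudo-table (same table and column names assumed); the "=" subquery form assumes a single row, while a join handles deletes of several users at once. One caveat: if the CustomizedStory rows are already gone by the time an AFTER trigger fires, neither form can still find the StoryIDs.

ALTER TRIGGER [dbo].[DeleteCustomizedStories]
ON [dbo].[aspnet_Users]
FOR DELETE
AS
BEGIN
    -- Delete every Story referenced by a CustomizedStory of a deleted user
    DELETE S
    FROM dbo.Story AS S
    INNER JOIN dbo.CustomizedStory AS CS ON CS.StoryID = S.StoryID
    INNER JOIN deleted AS d ON d.UserID = CS.UserID
END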
View 1 Replies
Feb 1, 2008
I've created a new table that stores the UserId, a uniqueidentifier obtained from Membership.GetUser().ProviderUserKey. If I make the select call through the stored procedure in code-behind, it runs as it should:

Dim GetCustomersCars As CustomerCarByUserId = New CustomerCarByUserId
MyCars.DataSource = GetCustomersCars.CarByUserId(Membership.GetUser().ProviderUserKey)
MyCars.DataBind()

But when I use an ObjectDataSource it fails:

<asp:ObjectDataSource id="ObjectDataSource1" runat="server" selectmethod="CarByUserId" typename="CustomerCarByUserId">
    <SelectParameters>
        <asp:Parameter defaultvalue="Membership.GetUser().ProviderUserKey" name="UserId" type="Object" />
    </SelectParameters>
</asp:ObjectDataSource>

I've tried Membership.GetUser().ProviderUserKey.ToString(), but that doesn't work either. The error message is InvalidCastException. I connect to the same source in both cases. Anyone with an idea?
View 1 Replies
Jan 14, 2005
I get the error "'commandBehavior' not declared". What does this mean? What is CommandBehavior exactly?
'
Sub Page_Load(Sender As Object, e As EventArgs)
' Obtain categoryId from QueryString
Dim connectionString As String = "server=(local); trusted_connection=true; database=SalesSide"
Dim sqlConnection As System.Data.SqlClient.SqlConnection = New System.Data.SqlClient.SqlConnection(connectionString)
Dim queryString As String = "SELECT tblIdent.fldStockNo, tblIdent.fldProgram, tblIdent.fldGenus, " & _
"tblIdent.fldVariety, tblIdent.fldSize, tblAvailability.fldQuantity " & _
"FROM tblIdent INNER JOIN tblAvailability ON tblAvailability.fldStockNo = tblIdent.fldStockNo"
Dim sqlCommand As System.Data.SqlClient.SqlCommand = New System.Data.SqlClient.SqlCommand(queryString, sqlConnection)
sqlConnection.Open
dgAvailability.DataSource = sqlCommand.ExecuteReader(CommandBehavior.CloseConnection)
dgAvailability.DataBind()
sqlConnection.Close
End Sub
View 1 Replies
Dec 17, 1999
I'd like to understand the Percent Log Used behavior...
I monitored the Percent Log Used. For a specific database, it was 50 % used. I backup up the transaction log and the Percent Log Used was still 50 % used. For another database, the log was 60 % used. After the backup, it was 80 % used!!!!
Would you help me to understand this situation?
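For anyone monitoring the same thing, the log usage per database can also be pulled directly, which makes a before/after comparison around a log backup easier (a minimal sketch):

-- Reports log size and percent used for every database
DBCC SQLPERF(LOGSPACE)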
Thank you,
Fabio
View 2 Replies
May 28, 1999
Hi all,
I have a question regarding some very odd NT/SQL behavior that is exhibited when running a shell command through xp_cmdshell. If I do a 'net use z: \\servername\sharename' from xp_cmdshell, it shows up in Explorer with the icon for a local drive! This would be no more than an ordinary M$ bug, except that if I go to a command line, I can't do a 'net use z: /delete' because it says the network drive isn't there! Can someone shed some light on this?
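One possible explanation: the mapping is created in the logon session of the SQL Server service account rather than the interactive user's session, so a command prompt running as the interactive user cannot see or delete it. Removing it would then have to happen from the same context, e.g. via xp_cmdshell again (server and share names are illustrative):

-- Map and later remove the share from the same context
-- (the SQL Server service account's session)
EXEC master..xp_cmdshell 'net use z: \\servername\sharename'
EXEC master..xp_cmdshell 'net use z: /delete'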
Thanks,
Ed Molinari
Emerald Solutions
View 2 Replies
Jun 3, 2004
Hi
I'm using MSSQL 2000 (with SP3) on Win2000.
Now, before installing the service pack I encountered with several strange bugs in the MSSQL mostly in queries that used TOP, gladly they were all fixed when installing the service pack... or so I thought...
So yesterday while trying to optimize a heavy query (7 joins - 2 of them are left joins from different tables crossed to the same table) I encountered yet again and with the latest service pack with an even stranger bug.
First, the returned records are not always the same. For example, when I use TOP 465 in the SELECT statement, the last record (row 465) contains one value, but when using TOP 466 the second-to-last record (again row 465) contains a different value! Of course both runs use the same ORDER BY clause.
Also when I view the execution plan it's also not same in both cases, with TOP 465 it's one way (and much much faster) and with TOP 466 the plan is completely different and much slower...
Does anyone encountered with this phenomenon? Any suggestions?
BTW, don't pay too much attention to the number 465; in my case this is the boundary of the problem. After trying this query on different tables I found that each has its own boundary after which TOP starts to freak out.
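One thing worth checking: if the ORDER BY columns are not unique, TOP is free to break ties differently for different plans, so the rows around the cut-off can legitimately change. Adding the primary key as a trailing tiebreaker makes the ordering deterministic (a sketch with illustrative names):

SELECT TOP 466 t.*
FROM dbo.MyTable AS t
ORDER BY t.SortColumn, t.PrimaryKeyColumn   -- unique trailing column removes ambiguity

If rows around position 465 tie on SortColumn, both of the differing results are technically correct answers to the original query.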
Thanks for the help!
Inon.
View 14 Replies
Jun 30, 2004
I've scheduled a job to run on a certain schedule, but its Last Run Status date comes back very oddly, a couple of years out of sync, while the other scheduled jobs report back just fine.
Anyone seen this behavior?
Edward R Hunter, Data Application Designer
comScore Networks, Inc.
View 1 Replies
Nov 29, 2005
I have a SP that usually works fine (0-16 CPU time, 40 ms Duration), but from time to time the server hangs with apparently no reason. The SP has a lock timeout set to 500, so it should abort if a lock timeout error (1222) occurs but it doesn't. The Profiler reports very long execution time (over 30 sec), and because of that all other SP calls are blocked, 'cause the transaction opened by the first sp execution is not finished yet.
Any other attempts to identify other blocking queries did not show me anything suspect (sp_lock, dbcc opentran) other than the usual blocked chain. I'm starting to think about an IO bottleneck, or IO failure, that could block disk access and cause the delay. The status of the RAID 5 array is healthy.
The server is used as storage system for a website (approx. 2000 concurrent users), and occasionally I noticed an ASP queue, but this strange behavior occurs even during the peak-off hours.
Any thoughts ?
-----
HP Server - 2 CPU @ 3,4 ; 4 GB RAM; SCSI - RAID 5
Windows 2000 Advanced Server - SQL Server 2000 SP4
View 1 Replies
Feb 18, 2008
I would like to ask for help. We had no problems with dynamic queries in SQL 2000, which were very fast. But when we ran the same queries in SQL 2005, they were many times slower, taking several seconds. I guess it has something to do with creating the execution plan, because when I run the query a second time it is suddenly extremely fast. But after just a little change (like adding a space character), the speed is very slow again. If this is caused by execution plan handling in SQL 2005, is it somehow possible to change its settings so that it behaves like SQL 2000?
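If the slowdown comes from compiling a new plan for every textual variation, SQL Server 2005's forced parameterization may help when the statements differ only in literal values (a hedged suggestion; the database name is illustrative and the option should be tested first, since it changes plan reuse behavior database-wide):

-- Structurally identical statements that differ only in literal values
-- then share a single cached plan
ALTER DATABASE MyDatabase SET PARAMETERIZATION FORCED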
Thank you for answers!
View 2 Replies
Jul 23, 2005
Hi folks,
I have a C# app connecting to an MS-ACCESS database with several tables. In a specific situation I have problems with a DateTime type in a table. The problem is that when I want to select records from a table in a specific period, the day and month seem to be swapped in the query, but it only happens when the swap gives a valid date, e.g.
12/10/2005 (12. Oct. 2005) returns records on 10/12/2005 (10. Dec. 2005)
23/05/2005 (23. May 2005) returns records correctly, since 05/23/2005 is not a valid date with Danish regional settings.
The query is:
"SELECT [ID], [Activity], [BeginDate] FROM TimeReg WHERE [BeginDate] >= #" + _start + "# AND [BeginDate] <= #" + _end + "#"
_start and _end are of type DateTime.
My PC is running with Danish regional settings, and if I shift to en-US settings in the control panel this fixes the problem, but that is not a solution for me.
Any suggestions to solve this problem?
Thanks in advance.
Kim W.
View 4 Replies
Apr 22, 2007
SQL Server 2000 SP4. Running the script below prints 'Unexpected':
-----------------------------
DECLARE @String AS varchar(1)
SELECT @String = 'z'
IF @String LIKE '[' + CHAR(32) + '-' + CHAR(255) + ']'
PRINT 'Expected'
ELSE
PRINT 'Unexpected'
-----------------------------
If the @String variable is set to 'y' (or in fact any ANSI character other than 'z'), the result is 'Expected'. The comparison also evaluates as expected if CHAR(255) is replaced with CHAR(254). The server collation, if that matters, is SQL_Latin1_General_CP1_CI_AS.
It would be helpful to find the explanation of this behavior. Thanks.
View 2 Replies
Nov 1, 2007
I have the following code, and on a standalone computer it worked perfectly. Suddenly, without any significant changes to the code, no server instances are found on my local computer. I know there are several server instances on the computer. Why is it acting so unpredictably? The same thing happened when I tried SQL-DMO.
// Get a list of SQL servers available on the networks
DataTable dtSQLServers = SmoApplication.EnumAvailableSqlServers(false);
foreach (DataRow drServer in dtSQLServers.Rows)
{
String ServerName;
ServerName = drServer["Server"].ToString();
if (drServer["Instance"] != null && drServer["Instance"].ToString().Length > 0)
ServerName += "\\" + drServer["Instance"].ToString();
if (cmbServer.Items.IndexOf(ServerName) < 0)
cmbServer.Items.Add(ServerName);
}
View 3 Replies
Jul 20, 2006
I am writing an upsert proc that should detect the change in state for a record. The change in state happens when a particular date field (default null) is populated. However, I can not get a record set that detects the changes properly.
Here is an example
set ANSI_NULLS on
go
create table #t1
(
ID int,
DateField datetime
)
create table #t2
(
ID int,
DateField datetime
)
insert into #t1 (ID, DateField) values (1, '7/20/2006')
insert into #t2 (ID, DateFIeld) values (1, null)
select * from #t1 join #t2 on #t1.ID = #t2.ID where #t1.DateField <> #t2.DateField
drop table #t1
drop table #t2
The select should return a record because NULL does not equal '7/20/2006' but it doesn't.
What am I missing?
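For reference, NULL <> '7/20/2006' evaluates to UNKNOWN rather than TRUE under ANSI_NULLS, so the row is filtered out. A sketch that explicitly counts a NULL on either side as a change:

-- Treat NULL-vs-value as a difference as well (same temp tables as above)
SELECT *
FROM #t1
JOIN #t2 ON #t1.ID = #t2.ID
WHERE #t1.DateField <> #t2.DateField
   OR (#t1.DateField IS NULL AND #t2.DateField IS NOT NULL)
   OR (#t1.DateField IS NOT NULL AND #t2.DateField IS NULL)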
Thanks in advance.
View 4 Replies