I'm seeing some strange behavior from a stored procedure of mine. It essentially grabs a bunch of rows using a fairly simple JOIN... here's the FROM statement:
Code Snippet
FROM Payment PY (NOLOCK)
JOIN (SELECT DISTINCT
          PY.AccountPaymentId,
          ROW_NUMBER() OVER(ORDER BY PY.AccountPaymentId ASC) AS RowNum
      FROM Payment PY (NOLOCK)) AS SQ
    ON (SQ.AccountPaymentId = PY.AccountPaymentId)
INNER JOIN Payee PE ON PE.PayeeId = PY.PayeeId
INNER JOIN Party PT ON PE.PartyId = PT.PartyId
INNER JOIN Distribution DS ON PY.DistributionId = DS.DistributionId
LEFT OUTER JOIN Account AC ON DS.AccountId = AC.AccountId
INNER JOIN clm CM ON PE.clm_no = cm.clm_no
LEFT OUTER JOIN PartyAddress PA ON PY.PartyAddressId = PA.PartyAddressId
    AND PT.PartyId = PA.PartyId
WHERE RowNum BETWEEN (((@Page * @PageSize) - @PageSize) + 1) AND ((@Page * @PageSize) - @PageSize) + @PageSize
  AND ((@PayeeName IS NULL) OR (PT.[Name] LIKE '%' + @PayeeName + '%'))
  AND ((@AccountId IS NULL) OR (AC.AccountId = @AccountId))
  AND ((@DistributionId IS NULL) OR (DS.DistributionId = @DistributionId))
  AND ((@PaymentDate IS NULL) OR (DATEADD(day, DATEDIFF(day, 0, PY.PaymentDate), 0) = DATEADD(day, DATEDIFF(day, 0, @PaymentDate), 0))) -- Ignores the time
  AND ((@PaymentNumber IS NULL) OR (PY.AccountPaymentId = @PaymentNumber))
  AND ((@IsReconciled IS NULL) OR (PY.ReconciledInd = @IsReconciled))
  AND ((@AmountIssued IS NULL) OR (PY.PaymentAmount = @AmountIssued))
  AND ((@AmountPaid IS NULL) OR (PY.AccountPaidAmount = @AmountPaid))
  AND ((@IssueStatus IS NULL) OR (PY.PaymentStatusEnumItemId = @IssueStatus))
  AND ((@AccountStatus IS NULL) OR (PY.AccountStatusEnumItemId = @AccountStatus))
  AND ((@IsReissued IS NULL) OR (PY.ReissuedInd = @IsReissued))
ORDER BY AccountPaymentID ASC
When I pass a 1 for the @IsReconciled parameter, I get the right number of rows back - 9779. But when I pass a 0 (zero), I get no rows back, although there are 222 rows which satisfy the condition.
Is there something I'm overlooking (I don't think I am...)? I don't know why 1 works and 0 doesn't...
FYI - the @IsReconciled parameter is set to NULL at the outset of the procedure.
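For reference, a minimal sketch (simplified from the query above) of why a valid filter value can still come back empty: ROW_NUMBER assigns RowNum over every payment before the outer WHERE runs, so the page window is fixed first and only then filtered. If none of the rows numbered inside the window happen to have ReconciledInd = 0, the page comes back empty even though matching rows exist outside it.

Code Snippet
-- RowNum is computed over ALL rows first...
SELECT *
FROM (SELECT AccountPaymentId,
             ReconciledInd,
             ROW_NUMBER() OVER(ORDER BY AccountPaymentId ASC) AS RowNum
      FROM Payment) AS SQ
WHERE SQ.RowNum BETWEEN 1 AND 1000 -- ...then the page window is sliced out...
  AND SQ.ReconciledInd = 0         -- ...and filtered, possibly down to zero rows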
We have an interesting problem. We are attempting to migrate from SQL 2000 to SQL 2005. The schema we have is exactly the same, and the new 2005 box is more powerful than our 2000 box.
Here is our schema:
tbl_Items: ItemID int PK, ReferenceID int, SessionID varchar(255), StatusID int
tbl_ItemsStatus: StatusID int PK, IsInternalStatus bit
There is an index on (ReferenceID, SessionID, StatusID) and one on (SessionID, StatusID).
This is the query:
DECLARE @referenceid INTEGER
SET @referenceid = 1019

SELECT MAX(i2.itemid)
FROM tbl_Items i2 (NOLOCK)
JOIN tbl_ItemsStatus s (NOLOCK) ON i2.StatusID = s.StatusID
WHERE s.IsInternalStatus = 0
  AND i2.referenceid = @referenceid
  AND i2.sessionid IN
      (SELECT i3.sessionid
       FROM tbl_Items i3 (NOLOCK)
       WHERE i3.referenceid = @referenceid
         AND i3.status <> 7 AND i3.status <> 8 AND i3.status <> 10
         AND i3.itemid IN
             (SELECT MAX(i4.itemid)
              FROM tbl_Items i4 (NOLOCK)
              WHERE i4.referenceid = @referenceid
              GROUP BY i4.sessionid)
         AND i3.itemid NOT IN
             (SELECT MAX(i7.itemid)
              FROM tbl_Items i7 (NOLOCK)
              WHERE i7.referenceid = @referenceid
                AND i7.SessionID IN
                    (SELECT i5.SessionID
                     FROM tbl_Items i5 (NOLOCK)
                     WHERE i5.status <> 11
                       AND i5.referenceid = @referenceid
                       AND i5.itemid IN
                           (SELECT MAX(i6.itemid)
                            FROM tbl_Items i6 (NOLOCK)
                            WHERE i6.referenceid = @referenceid
                              AND i6.status IN (7, 11, 8)
                            GROUP BY i6.sessionid))
              GROUP BY i7.SessionID))
GROUP BY i2.sessionid
We know this query is pretty bad and can be optimized. However, if we run this query as-is on 2005 it takes about 2 hours to run... if we run the exact same query on 2000 it takes 9 seconds.
So this query takes 2 hours on 2005. However, if we omit either the s.IsInternalStatus = 0 or the i2.referenceid = @referenceid line, it takes about 9 seconds.
Why would this be? It makes no sense that omitting one of those WHERE clauses would improve the performance of the query by two hours. We know it's a bad query... but this doesn't make sense.
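One way to narrow this down (a sketch of standard diagnostics, not a definitive diagnosis): 2005's optimizer may simply pick a very different plan once both predicates are present, especially if statistics are stale after the migration. Refreshing statistics and comparing the actual plans of the slow and fast variants would show where the estimates diverge:

Code Snippet
-- Refresh statistics so 2005 isn't estimating from stale data
UPDATE STATISTICS tbl_Items WITH FULLSCAN
UPDATE STATISTICS tbl_ItemsStatus WITH FULLSCAN

-- Then capture the plan and timings for the slow and fast variants
SET STATISTICS PROFILE ON
SET STATISTICS TIME ON
-- ... run the query with and without s.IsInternalStatus = 0 here ...
SET STATISTICS TIME OFF
SET STATISTICS PROFILE OFF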
Hi! I'm studying for my MCSE 70-228 certification and I'm trying some things with backing up transaction logs and shrinking them. Here's what I do (there is no activity in the database, by the way). I have a transaction log of 1792 KB, and I run the following commands:

Code Snippet
BACKUP LOG TestDB TO TestDBBackup
DBCC SHRINKFILE ('TestDB_Log', 0)

The transaction log is now 1280 KB. I run the same commands again and now my transaction log is 1024 KB... Any idea why it didn't shrink to 1024 KB the first time? Thanks! Jeff
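A possible way to see what's going on (my assumption about the mechanism, based on standard virtual-log-file behavior): SHRINKFILE can only cut the file back to a virtual log file (VLF) boundary, and only past VLFs that are inactive at that moment, so it can take more than one backup/shrink cycle before the last active VLF rolls over and the tail can be released.

Code Snippet
USE TestDB
-- Lists the virtual log files in the current database's log;
-- Status = 2 marks VLFs still in use, which SHRINKFILE cannot remove
DBCC LOGINFO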
Guys, I have some data in an Excel sheet. Some of the columns have NULL values for a certain number of rows until it gets data. What makes it so weird is that when previewing this in the wizard, the whole column is filled with NULL values when the number of leading NULLs is quite large. When the NULLs are just a few, the column works fine!! Can anyone explain this? We tried some manual work to cut some of the rows from below and put them at the start, and it worked! It's such strange behavior though. Shiko
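If the wizard is using the Jet/Excel driver, a likely explanation (my assumption, not confirmed from the post) is that the driver samples only the first few rows to guess each column's type, so a long run of leading NULLs makes it give up on the whole column. Two commonly cited knobs, shown here purely for illustration (the file path is hypothetical):

Code Snippet
Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Book1.xls;
Extended Properties="Excel 8.0;HDR=YES;IMEX=1"

IMEX=1 makes mixed or NULL-led columns come through as text, and the registry value HKLM\SOFTWARE\Microsoft\Jet\4.0\Engines\Excel\TypeGuessRows controls how many rows the driver samples.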
I was able to successfully create a database maintenance plan for SQL Server 2000 transaction log shipping for a few databases a few weeks ago. Yesterday, I created a few more, but to my surprise I can no longer do it. I can create a maintenance plan, but the job it creates does not start, even when I force it to start. I did exactly the same thing as before (I document everything I do), but no luck.
I have created 2 packages... one is the parent package and contains a 50-iteration loop running a child package on each iteration... I have reached the following conclusion:
My package takes an average of 5 seconds from the time it ends executing one iteration until it starts another... after about 30 iterations, my average time between end and start increases significantly, to about 12 seconds or even more...
All packages have delay validation set to false and receive several variables from the parent package... As for the logging, it is done to files based on a path coming from a variable in the parent package.
To execute the parent package I am using dtexecui.exe, and I consider this behavior rather strange... has anyone experienced the same? Can anyone test this?
My environment is 4 x64 processors with 8 GB of memory, so I guess it's good enough to get 0 seconds from end to start.
I have a script component in a data flow that is exhibiting some strange behavior. In the PreExecute event of the data flow, I stuff a recordset into a variable that is declared at the data flow scope. Within the data flow, I use a script component to read in the data from the recordset.
Example:
Dim olead As New Data.OleDb.OleDbDataAdapter
Dim dt1 As New System.Data.DataTable
Dim row As System.Data.DataRow
' Fill the DataTable from the ADO recordset stored in the package variable
olead.Fill(dt1, Me.Variables.rsIntRateStrata)
If I display the count of the records in the data table dt1, it shows 42 rows, which is correct. Run the package, everything runs as expected. So far, so good.
Now, I set up another source/destination within the same data flow, as well as a script component between them, same as the first flow described above. Now my data flow has two parallel flows (different source & destinations). I copy the same script logic from the first flow into the second. Run the package- no errors, everything is fine... except when I inspect the data, it looks like the transformation isn't working correctly in the second script.
So I display a messagebox from each script component at run time. The first component displays 42 records, while the second displays 0 records. Same variable, same data flow.
So I delete the first (original) flow from my data flow. Run the package again. Now the messagebox says 42.
What is happening here? Do I have to create two variables to duplicate the same recordset if I need to use it multiple times within the same data flow? Is this a bug?
BEGIN
    DECLARE @datefin_flag datetime, @strip datetime

    SELECT @strip = DATEADD(d, DATEDIFF(d, 0, GETDATE()), 0)

    SELECT @datefin_flag = DATE_FIN_PERIODE_FISCALE
    FROM DM_LKP_CALENDRIER_PERIODE_F
    WHERE DATE_DEBUT_PERIODE_FISCALE < @strip
      AND DATE_FIN_PERIODE_FISCALE = @strip

    --select @datefin_flag
    --select @strip

    IF (@datefin_flag != @strip)
        RAISERROR('You cant run this', 16, 1)
END
This query should raise the error, but it completes successfully, even though today's date is not the same as the date in the database. If you SELECT @datefin_flag it returns NULL, and if you SELECT @strip it brings back today's date. How can NULL be "equal" to today's date?
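A minimal sketch of the standard NULL behavior at play here: NULL != @strip evaluates to UNKNOWN rather than TRUE, so the IF branch never fires. Guarding for NULL explicitly changes that:

Code Snippet
-- NULL != @strip is UNKNOWN, not TRUE, so IF skips the error;
-- test IS NULL explicitly to make the intent unambiguous
IF (@datefin_flag IS NULL OR @datefin_flag != @strip)
    RAISERROR('You cant run this', 16, 1)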
This was originally posted on DBForums.com, so here is the link: http://www.dbforums.com/showthread.php?t=1614086
Since some of the Microsoft staff come around here occasionally, I figured I should at least link to it here. This is the gist of the problem, though. I was asked to come up with a script to create all required data directories in case an emergency was declared and someone had to rebuild one of our database servers. Most of you are probably thinking of hitting up the sysaltfiles table about now, but this will turn into a cautionary tale. Try it if you dare. The one requirement is that you install the data for SQL Server in a non-standard directory that has a short path (such as C:\MSSQL8, instead of the whole C:\Program Files...).
What I am unclear on is whether this is a problem in the reverse function, the r(l)trim function, or the fixed-width datatype. I have confirmed that transferring the data to a temp table did not eliminate the...oddity.
SELECT filename FROM master..sysaltfiles WHERE dbid = 2
GO
SELECT REVERSE(RTRIM(filename)), filename FROM sysaltfiles WHERE dbid = 2
GO
SELECT REVERSE(RTRIM(filename)) FROM sysaltfiles WHERE dbid = 2
I have also had two independent DBAs confirm this oddity exists, so this should be relatively easy to replicate.
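For anyone replicating this, a probe based on my guess at the cause (not confirmed): the fixed-width column may be padded with something other than spaces, which RTRIM would not strip:

Code Snippet
-- Shows the code point of the last character after RTRIM; 32 is a space,
-- anything else (e.g. 0) is padding that RTRIM leaves in place
SELECT UNICODE(RIGHT(RTRIM(filename), 1)) AS LastCharCode,
       DATALENGTH(RTRIM(filename)) AS BytesAfterTrim
FROM master..sysaltfiles
WHERE dbid = 2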
I have three tables I am using: aspnet_Users, Stories, CustomizedStory. Stories and CustomizedStory are related via a foreign key, StoryID. I've set up the tables so that when I delete a Story row it cascade deletes the corresponding row from CustomizedStory. Each CustomizedStory row has a reference to UserID from aspnet_Users. Since I didn't want to mess with the table definition by adding a cascade delete option on aspnet_Users, I decided to use a trigger, essentially deleting all customized stories and associated stories if a user is deleted:

Code Snippet
ALTER TRIGGER [dbo].[DeleteCustomizedStories]
ON [dbo].[aspnet_Users]
FOR DELETE
AS
BEGIN
    DELETE FROM dbo.Story
    WHERE StoryID = (SELECT StoryID
                     FROM dbo.CustomizedStory
                     WHERE UserID = (SELECT UserID FROM deleted))
END
The problem I am having is that it deletes all of the CustomizedStory rows, as specified by the cascading option, but doesn't delete the Story rows. I can't understand why this is happening, especially when I explicitly told it to delete Story rows.
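In case it's useful context, a set-based sketch of the same trigger (my rewrite, not the poster's code): = (SELECT ...) assumes exactly one StoryID and one deleted user, so with multiple rows it either errors out or matches nothing, whereas IN handles any number of rows from the deleted table:

Code Snippet
ALTER TRIGGER [dbo].[DeleteCustomizedStories]
ON [dbo].[aspnet_Users]
FOR DELETE
AS
BEGIN
    -- IN tolerates zero, one, or many values on each side, where
    -- = (SELECT ...) expects exactly one
    DELETE FROM dbo.Story
    WHERE StoryID IN (SELECT StoryID
                      FROM dbo.CustomizedStory
                      WHERE UserID IN (SELECT UserID FROM deleted))
END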
I've made a new table that stores the UserId, which is a uniqueidentifier obtained from Membership.GetUser().ProviderUserKey. If I run a select statement through a stored procedure in code-behind, it works as it should:

Code Snippet
Dim GetCustomersCars As CustomerCarByUserId = New CustomerCarByUserId
MyCars.DataSource = GetCustomersCars.CarByUserId(Membership.GetUser().ProviderUserKey)
MyCars.DataBind()

But when I use an ObjectDataSource it fails:

Code Snippet
<asp:ObjectDataSource id="ObjectDataSource1" runat="server" selectmethod="CarByUserId" typename="CustomerCarByUserId">
    <SelectParameters>
        <asp:Parameter defaultvalue="Membership.GetUser().ProviderUserKey" name="UserId" type="Object" />
    </SelectParameters>
</asp:ObjectDataSource>

I've tried Membership.GetUser().ProviderUserKey.ToString(), but that doesn't work either. Error message: InvalidCastException. I connect to the same source in both cases. Anyone with an idea?
I get the error " 'commandBehavior' not declared". What does this mean? What is CommandBehavior exactly?
Sub Page_Load(Sender As Object, e As EventArgs)

    ' Obtain categoryId from QueryString
    Dim connectionString As String = "server=(local); trusted_connection=true; database=SalesSide"
    Dim sqlConnection As System.Data.SqlClient.SqlConnection = New System.Data.SqlClient.SqlConnection(connectionString)

    ' Note the commas and trailing spaces inside the literals; without them the
    ' concatenated SQL runs column names and the FROM clause together. The second
    ' table in the join must be tblAvailability, not tblIdent a second time.
    Dim queryString As String = "SELECT tblIdent.fldStockNo, tblIdent.fldProgram, tblIdent.fldGenus, " & _
        "tblIdent.fldVariety, tblIdent.fldSize, tblAvailability.fldQuantity " & _
        "FROM tblIdent INNER JOIN tblAvailability ON tblAvailability.fldStockNo = tblIdent.fldStockNo"

    Dim sqlCommand As System.Data.SqlClient.SqlCommand = New System.Data.SqlClient.SqlCommand(queryString, sqlConnection)
I'd like to understand the Percent Log Used behavior... I monitored Percent Log Used. For a specific database, it was 50% used. I backed up the transaction log and Percent Log Used was still 50% used. For another database, the log was 60% used. After the backup, it was 80% used!!!! Would you help me understand this situation? Thank you, Fabio
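A way to watch this from T-SQL rather than the performance counter (a standard command, nothing specific to this setup): a log backup only truncates the inactive part of the log, so the percentage can stay flat, and activity between two readings can push it right back up.

Code Snippet
-- Reports log size and percent used for every database
DBCC SQLPERF(LOGSPACE)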
Hi all, I have a question regarding some very odd NT/SQL behavior that is exhibited when running a shell command through xp_cmdshell. If I do a 'net use z: \\servername\sharename' from xp_cmdshell, it shows up in Explorer with the icon for a local drive! This would not be more than an ordinary M$ bug, except for the fact that if I go to a command line, I can't do a 'net use z: /delete' because it says that the network drive isn't there! Can someone shed some light on this?
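A possible explanation worth checking (an assumption on my part, not a confirmed diagnosis): the mapping was created inside the SQL Server service account's session, so it is only visible there - and has to be removed from there too:

Code Snippet
-- Run the delete through the same session that created the mapping
EXEC master..xp_cmdshell 'net use z: /delete'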
Now, before installing the service pack I encountered several strange bugs in MSSQL, mostly in queries that used TOP. Gladly, they were all fixed when installing the service pack... or so I thought. Yesterday, while trying to optimize a heavy query (7 joins - 2 of them left joins from different tables crossed to the same table), I hit an even stranger bug, even with the latest service pack. First, the returned records are just not always the same: for example, when I use TOP 465 in the SELECT statement, the last record, which is 465, contains some value; when using TOP 466, the record before the last, which is record 465, contains a different value!... of course, both with the same ORDER BY clause. Also, when I view the execution plan, it's not the same in both cases: with TOP 465 it's one way (and much, much faster), and with TOP 466 the plan is completely different and much slower...
Has anyone encountered this phenomenon? Any suggestions? BTW, don't pay too much attention to the number 465; in my case this is the border of the problem. After trying this query on different tables, I found that each has its own border after which the TOP starts to freak out.
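For anyone who wants to dig in, a sketch of a standard explanation (general SQL semantics, not specific to this schema): if the ORDER BY key is not unique, rows that tie can legally come back in any order, and different TOP sizes can produce different plans that slice the ties differently. Adding a unique tiebreaker makes the cutoff deterministic:

Code Snippet
-- Hypothetical names: ties on SomeDate may be returned in any order,
-- so TOP 465 and TOP 466 can disagree about row 465 unless a unique
-- key breaks the tie
SELECT TOP 465 *
FROM SomeTable
ORDER BY SomeDate, SomePrimaryKey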
I've scheduled a job to run on a certain schedule, but the Last Run Status date comes back very oddly - a couple of years out of synch - while the other scheduled jobs report back just fine.
Anyone seen this behavior?
Edward R Hunter, Data Application Designer comScore Networks, Inc.
I have an SP that usually works fine (0-16 CPU time, 40 ms duration), but from time to time the server hangs with apparently no reason. The SP has a lock timeout set to 500, so it should abort if a lock timeout error (1222) occurs, but it doesn't. The Profiler reports very long execution times (over 30 sec), and because of that all other SP calls are blocked, since the transaction opened by the first SP execution is not finished yet. Other attempts to identify blocking queries did not show me anything suspect (sp_lock, DBCC OPENTRAN) other than the usual blocked chain. I'm starting to think about an I/O bottleneck, or an I/O failure that could block disk access and cause the delay. The status of the RAID 5 is healthy.
The server is used as the storage system for a website (approx. 2000 concurrent users), and occasionally I notice an ASP queue, but this strange behavior occurs even during off-peak hours.
Any thoughts? ----- HP server - 2 CPUs @ 3.4 GHz; 4 GB RAM; SCSI - RAID 5; Windows 2000 Advanced Server - SQL Server 2000 SP4
I would like to ask for help. We had no problems with dynamic queries in SQL 2000, which were very fast. But when we run the same queries in SQL 2005, they are many times slower, lasting several seconds. I guess it has something to do with creating the execution plan, because when I run one a second time it is suddenly extremely fast. But after just a little change (like adding a space character), the speed is very slow again. If this is caused by execution plans in SQL 2005, is it somehow possible to change the settings so that it behaves like SQL 2000?
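A sketch of the usual mitigation (general plan-cache behavior; the table and parameter here are hypothetical): ad hoc SQL is cached by its exact text, so any edit - even an added space - forces a fresh compile. Passing the variable parts as parameters via sp_executesql keeps the text constant and lets 2005 reuse one plan:

Code Snippet
-- The statement text never changes, so the compiled plan is reused
EXEC sp_executesql
    N'SELECT * FROM Orders WHERE CustomerId = @cust',
    N'@cust int',
    @cust = 42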
Hi folks. I have a C# app connecting to an MS Access database with several tables. In a specific situation I have a problem with a DateTime type in a table. The problem is that when I want to select records from a table in a specific period, the day and month seem to be swapped in the query, but it only happens when the swapping gives a valid date, e.g. 12/10/2005 (12 Oct 2005) returns records for 10/12/2005 (10 Dec 2005), while 23/05/2005 (23 May 2005) returns records correctly, since 05/23/2005 is not a valid date with Danish regional settings.

The query is:

"SELECT [ID], [Activity], [BeginDate] FROM TimeReg WHERE [BeginDate] >= #" + _start + "# AND [BeginDate] <= #" + _end + "#"

_start and _end are of type DateTime. My PC is running with Danish regional settings, and if I shift to en-US settings in the Control Panel this fixes the problem, but that is not a solution for me. Any suggestions to solve this problem? Thanks in advance. Kim W.
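One commonly suggested workaround (a hedged sketch on my part; parameterized commands are the more robust fix) is to format the date literals in an unambiguous pattern instead of letting regional settings decide how the string is built, for example:

Code Snippet
SELECT [ID], [Activity], [BeginDate]
FROM TimeReg
WHERE [BeginDate] >= #2005-10-12# AND [BeginDate] <= #2005-10-19#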
SQL Server 2000 SP4. Running the script below prints 'Unexpected':

Code Snippet
DECLARE @String AS varchar(1)
SELECT @String = 'z'
IF @String LIKE '[' + CHAR(32) + '-' + CHAR(255) + ']'
    PRINT 'Expected'
ELSE
    PRINT 'Unexpected'

If the @String variable is set to 'y' (or in fact any ANSI character other than 'z'), the result is 'Expected'. The comparison also evaluates as expected if CHAR(255) is replaced with CHAR(254). The server collation, if that matters, is SQL_Latin1_General_CP1_CI_AS. It would be helpful to find the explanation of this behavior. Thanks.
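A small probe that points at the likely mechanism (my inference from collation rules, not a confirmed answer): in a LIKE range the endpoints follow the collation's dictionary order, not code-point order, and in SQL_Latin1_General_CP1_CI_AS CHAR(255) ('ÿ') sorts with 'y' - that is, before 'z' - so the range would simply end before 'z':

Code Snippet
-- If this returns 1, 'z' sorts after CHAR(255) under the current
-- collation, which puts 'z' outside the [ -ÿ] range
SELECT CASE WHEN 'z' > CHAR(255) THEN 1 ELSE 0 END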
I have the following code on a standalone computer and it worked perfectly. Suddenly, without any significant changes to the code, no server instances were found on my local computer. I know there are several server instances on the computer. Why is it acting so unpredictably? The same thing happened when I tried SQL-DMO.
// Get a list of SQL servers available on the network
DataTable dtSQLServers = SmoApplication.EnumAvailableSqlServers(false);
I am writing an upsert proc that should detect a change in state for a record. The change in state happens when a particular date field (default NULL) is populated. However, I cannot get a record set that detects the changes properly.
Here is an example:

SET ANSI_NULLS ON
GO

CREATE TABLE #t1 (ID int, DateField datetime)
CREATE TABLE #t2 (ID int, DateField datetime)

INSERT INTO #t1 (ID, DateField) VALUES (1, '7/20/2006')
INSERT INTO #t2 (ID, DateField) VALUES (1, NULL)

SELECT *
FROM #t1
JOIN #t2 ON #t1.ID = #t2.ID
WHERE #t1.DateField <> #t2.DateField

DROP TABLE #t1
DROP TABLE #t2
The SELECT should return a record, because NULL does not equal '7/20/2006', but it doesn't. What am I missing?
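A minimal sketch of the standard behavior (plain ANSI NULL semantics): with ANSI_NULLS on, NULL <> '7/20/2006' evaluates to UNKNOWN, so the row is filtered out. Treating NULL as a distinct state explicitly makes the change detectable:

Code Snippet
-- Returns the row: a change is two non-NULL values that differ,
-- or exactly one side NULL
SELECT *
FROM #t1
JOIN #t2 ON #t1.ID = #t2.ID
WHERE #t1.DateField <> #t2.DateField
   OR (#t1.DateField IS NULL AND #t2.DateField IS NOT NULL)
   OR (#t1.DateField IS NOT NULL AND #t2.DateField IS NULL)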
joeuser is a domain login. testgroup and grpsysadmins are both NT groups. joeuser is a member of both testgroup and grpsysadmins. grpsysadmins is a member of the administrators group on the SQL Server machine. testrole is a SQL role. testgroup is a login into SQL. testgroup is a member of testrole.
1) when I do select is_member('db_owner') I get 1
2) when I do select is_member('testrole') I get 0 <== this is the weird one
3) when I do select is_member('machinename\testgroup') I get 1
I am expecting all 3 to return 1. Now, if I am logged in as a user that is not sysadmin, then select is_member('testrole') returns 1.
This behavior seems to be by design, but for the life of me I have no idea why.
I'm seeing mixed results when I run a certain JOIN query...
Here's the original query:
Code Snippet
SELECT DISTINCT
    PY.PaymentId, PY.PayeeId, PY.PartyAddressId, PY.DistributionId,
    PY.EntitlementId, PY.DeliveryTypeEnumItemId, PY.AccountPaymentId,
    PY.ParentPaymentId, PY.PaymentAmount, PY.PaymentDate,
    PY.PaymentStatusEnumItemId, PY.PaymentStatusDate, PY.ReleaseRunId,
    PY.ReleaseDate, PY.AccountTransactionLogId, PY.AccountStatusEnumItemId,
    PY.AccountStatusDate, PY.AccountPaidAmount, PY.ReconciledInd,
    PY.ReissuedInd, -- Added 8/15/2007 AMR
    PY.UndeliverableInd, PY.ReissueNote, PY.CreateDate, PY.CreateId,
    PY.ModifiedDate, PY.ModifiedId, DS.Description,
    AC.Description, --Removed 8/14/2007 AMR
    AC.AccountProvider, AC.AccountId, PT.Name, PA.AddressLine1,
    PA.AddressLine2, PA.City, PA.State, PA.Zip5, PA.Zip4,
    PE.clm_no, CM.clmnt_idno
FROM Payment PY (NOLOCK)
LEFT JOIN (SELECT PY.AccountPaymentId,
                  PA.AddressLine1,
                  ROW_NUMBER() OVER(ORDER BY PA.AddressLine1 ASC) AS RowNum
           FROM PartyAddress AS PA
           INNER JOIN Payee AS PE ON PA.PartyId = PE.PartyId
           INNER JOIN Payment AS PY ON PE.PayeeId = PY.PayeeId) AS SQ
    ON (SQ.AccountPaymentId = PY.AccountPaymentId)
INNER JOIN Payee PE ON PE.PayeeId = PY.PayeeId
INNER JOIN Party PT ON PE.PartyId = PT.PartyId
INNER JOIN Distribution DS ON PY.DistributionId = DS.DistributionId
LEFT OUTER JOIN Account AC ON DS.AccountId = AC.AccountId
INNER JOIN clm CM ON PE.clm_no = cm.clm_no
LEFT OUTER JOIN PartyAddress PA ON PY.PartyAddressId = PA.PartyAddressId
    AND PT.PartyId = PA.PartyId
WHERE RowNum BETWEEN (((1*1000) - 1000) + 1) AND ((1*1000) - 1000) + 1000
This query numbers rows based on the PartyAddress. When I keep the WHERE clause in the query, it returns 810 rows. There is a relationship between PARTY and PARTYADDRESS. There are 10001 rows in the PARTY table and 810 corresponding rows in the PARTYADDRESS table. Not every PARTY will have a corresponding PARTYADDRESS.
What the query SHOULD do is return all rows from PARTY and their corresponding PARTYADDRESS entries, if there are any.
When I take the WHERE clause out, I get back all 10001 rows - what I'm expecting.
Why would the "WHERE RowNum" clause affect the query? How can I get around that?
I have 2 packages that for ease I'll call Parent & Child. The Parent package calls the Child package as the 4th step in the process. Once the Child has completed, the Parent has a few more imports that it does.
The Portfolio table is loaded in the Child package which is step 4 in the Parent package. Then in step 5 a few tasks utilize that Portfolio data for lookups.
The strange part is that there are probably 4 or 5 data tasks that do lookups against the Portfolio data in step 5 (step 5 is a container). All but 2 of the data tasks retrieve data from the Portfolio data. The other 2 don't find any data and just move on. Once the package stops, if I simply execute those tasks by themselves, they run and load the data correctly.
It seems to me to be a caching or an isolation problem but I can't find a solution.
I have a DTS package that is mysteriously changing back to a previously saved version. The package is saved and re-opened with the saved changes, then moments later opened again, only to have reverted back to an older version. Has anyone run into this before? The only thing I can think of is that if a version of the package is left open on some other machine (as the older version), it may be auto-saving and reverting back.
I run a website, so all of this is done remotely, I do not have access to the servers directly.
On my prior host, everything on the DB worked just fine. Had it optimized and running pretty good for the site by adding indexes, etc.
Then I moved to my new faster server (new host) and at first everything seemed to be just fine. However, some users started complaining about speed in spots.
Sure enough when I checked a given page -- WHAM... the page took about 14 seconds to load. Not exactly fast. :)
So I checked to make sure the indexes copied over and sure enough they did. But it was still slow, even in Query Analyzer/MSE.
So I decided to rerun my view script... on a lark... just to see if that affected something. And sure enough it did. The page started running at 1 second or less. OK, I thought, the view just got corrupted or something.
All was fine until the next night when it happened again. I had added 12 new rows to the table (I do this nightly) and it seemed that adding the new rows slowed down the system. Until the view was rebuilt. With that done again, it's worked fine.
But each night now I have to rerun that view.
What gives??? Is there a DB setting or something causing views/indexes to not be maintained?
The prior server was 2000 or 2003. The new one is 2005.
Again, I cannot get access to the box itself except through Management Studio. The host's tech support group is less than useless, treating everyone as incompetent before they help you.
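For what it's worth, two standard commands that can be run entirely through Management Studio and that fit the symptoms (a guess at the cause, not a diagnosis; the object names are made up and would need replacing):

Code Snippet
-- Refresh the view's metadata in case the underlying objects drifted
EXEC sp_refreshview 'dbo.MyView'

-- Refresh optimizer statistics after the nightly inserts
UPDATE STATISTICS dbo.MyTable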
I'm trying to CAST an integer to a varchar and I keep getting e+006 in my result set. The reason I need to do this is so that I can combine 2 columns to return a string.
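A sketch of the usual culprit (standard type-conversion behavior; the value is arbitrary): e+006 notation means the value being cast is actually float or real, not int. Converting through an exact numeric first - or using STR() - produces plain digits:

Code Snippet
SELECT CAST(CAST(1234567 AS float) AS varchar(20))
-- 1.23457e+006

SELECT CAST(CAST(CAST(1234567 AS float) AS decimal(18, 0)) AS varchar(20))
-- 1234567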
Scenario: I have a database in SQL Server 2000 with 90 GB of data. It is growing every day by some millions of records. In order to get superior performance I rebuild the indexes every night. The job takes about 1.5 hours and sometimes causes the log file to grow to 30-40 gigs. I do a backup of the log files every 20 minutes, and this causes a log truncation to happen automatically (according to the MS documentation).

Problem: The log files make the disk go full. I cannot insert new disks at this point.

Question: When rebuilding the indexes takes 1.5 hours and I do a log backup (which also does a truncation) every 20 minutes, will the log backup ever get to do anything with the transaction log while it's still working on a transaction (doing the rebuild of an index)?

Observation: It seems like the 20-minute log backup is of no help, because I see the log files grow so big.
CASE CLOSED: removing the FROM clause noted below handles the problem.

I've used "NOT EXISTS" in inserts before and thought I understood how they work, but I'm puzzled why I get three rows from the following sub-query, which I want to use to prevent errors from duplicate keys. Since it gives me three rows, it will actually try to insert three duplicate records and cause a primary-key fault, which is a twin to the very thing I'm trying to avoid. (Of course, there are three records already in the table, none having the key of 20050810.) The sub-query is acting like it's using "WHERE DemoDate <> 20050810" instead of a "NOT EXISTS". Attempts to use this statement cause a termination with no records inserted. DemoDate is the primary key and an INT field in SQL Server. Four other int columns for this table have default values.

<code>
INSERT INTO DemoStats (DemoDate)
SELECT 20050810 AS Expr1
FROM DemoStats  <-- remove this line so the effective table has only one row, when the NOT EXISTS is TRUE
WHERE (NOT EXISTS (SELECT * FROM DemoStats WHERE DemoDate = 20050810))
</code>
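For anyone skimming, the fixed statement (the same INSERT with the FROM clause removed, so the SELECT produces exactly one row when the NOT EXISTS is true):

<code>
INSERT INTO DemoStats (DemoDate)
SELECT 20050810
WHERE NOT EXISTS (SELECT * FROM DemoStats WHERE DemoDate = 20050810)
</code>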
I have a SqlDataSource (code listed below) whose insert parameters are control parameters. My aspx page has a textbox and a submit button. The button onclick runs SqlDataSource1.Insert(). What I get is that every other insert inserts the text in TextBox2, and every other insert enters nothing for the namecust value. I have a required field validator which correctly prevents submission if TextBox2 is empty. How do I fix this?

<code>
<asp:Panel ID="Panel1" runat="server" Height="50px" Width="548px">
    <asp:Button ID="Button1" runat="server" Text="New Prospect" ValidationGroup="insertCust" />
    <asp:RequiredFieldValidator ID="RequiredFieldValidator1" runat="server" ControlToValidate="TextBox2"
        ErrorMessage="Prospect Name can not be blank" ValidationGroup="insertCust"></asp:RequiredFieldValidator>
    <asp:TextBox ID="TextBox2" runat="server" Width="330px" ValidationGroup="insertCust"></asp:TextBox>
</asp:Panel>

<asp:SqlDataSource ID="SqlDataSource2" runat="server" ConnectionString="<%$ ConnectionStrings:AccPac2ConnectionString %>"
    SelectCommand="SELECT DISTINCT CODETERR FROM dbo.F_arcus() AS F_arcus_1 WHERE (DATEINAC = 0) AND (rtrim(CODETERR) <> '')">
</asp:SqlDataSource>

<asp:SqlDataSource ID="SqlDataSource1" runat="server" ConnectionString="<%$ ConnectionStrings:AccPac2ConnectionString %>"
    InsertCommand="INSERT INTO dbo.BudgetProspects(NameCust, CodeTerr) VALUES (@Namecust, @codeterr)"
    SelectCommand="SELECT CustomerID, NameCust FROM dbo.BudgetProspects WHERE (CodeTerr = @codeterr)"
    UpdateCommand="UPDATE dbo.BudgetProspects SET NameCust = @namecust">
    <UpdateParameters>
        <asp:Parameter Name="namecust" />
    </UpdateParameters>
    <SelectParameters>
        <asp:ControlParameter ControlID="RadioButtonList1" Name="codeterr" PropertyName="SelectedValue" />
    </SelectParameters>
    <InsertParameters>
        <asp:ControlParameter ControlID="textbox2" Name="Namecust" PropertyName="text" />
        <asp:ControlParameter ControlID="RadioButtonList1" Name="codeterr" PropertyName="SelectedValue" />
    </InsertParameters>
</asp:SqlDataSource>
</code>

Code-behind button click:

<code>
Protected Sub Button1_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles Button1.Click
    If Not TextBox2.Text Is Nothing Then
        SqlDataSource1.Insert()
        TextBox2.Text = ""
    End If
End Sub
</code>
Hi all, I face a problem as follows: we have an application running on SS2K. We log every delete of documents (from the Archive table) in another table. Now it seems some of the rows have been deleted strangely, without any delete log by our application. We assumed there is somebody who has direct access to the database and deletes them manually (obviously our app does not generate any log in this situation), but there are no such people; we checked that with the admins many times. Does SQL Server itself delete rows for any reason? How can I know what is happening? Do you think our app is flawed somewhere? Thanks a lot for your attention.
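One way to find out who or what is deleting (a minimal sketch; the audit table and column names are made up and would need adapting to the real schema): put a DELETE trigger directly on the table, so every delete is recorded no matter which connection issues it:

Code Snippet
-- Fires for ANY delete on Archive, whether it comes from the app,
-- Query Analyzer, or a scheduled job
CREATE TRIGGER trg_Archive_AuditDelete ON dbo.Archive
FOR DELETE
AS
INSERT INTO dbo.ArchiveDeleteAudit (DocumentId, DeletedAt, DeletedBy)
SELECT d.DocumentId, GETDATE(), SUSER_SNAME()
FROM deleted d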