This was originally posted on DBForums.com, so here is the link:
http://www.dbforums.com/showthread.php?t=1614086
Since some of the Microsoft staff come around here occasionally, I figured I should at least link to it here. This is the gist of the problem, though. I was asked to come up with a script to create all required data directories in case an emergency was declared and someone had to rebuild one of our database servers. Most of you are probably thinking of hitting up the sysaltfiles table about now, but this will turn into a cautionary tale. Try it if you dare. The one requirement is that you install the data for SQL Server in a non-standard directory with a short path (such as C:\MSSQL8, instead of the whole C:\Program Files\...).
What I am unclear on is whether this is a problem in the REVERSE function, the RTRIM/LTRIM functions, or the fixed-width datatype. I have confirmed that transferring the data to a temp table did not eliminate the...oddity.
select filename
from master..sysaltfiles
where dbid = 2
go
select reverse(rtrim(filename)), filename
from sysaltfiles
where dbid = 2
go
select reverse(rtrim(filename))
from sysaltfiles
where dbid = 2
I have also had two independent DBAs confirm this oddity exists, so this should be relatively easy to replicate.
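For anyone who tries this and hits the same wall, here is a sketch of a workaround, assuming (and this is only an assumption) that the fixed-width nchar filename column is padded with CHAR(0) characters, which RTRIM does not strip:

-- Strip embedded null padding before trimming/reversing; if the padding
-- really is CHAR(0) rather than spaces, REVERSE and RTRIM behave
-- normally afterwards.
select reverse(rtrim(replace(filename, char(0), ''))) as rev_name, filename
from master..sysaltfiles
where dbid = 2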
I have three tables I am using: aspnet_Users, Story, and CustomizedStory. Story and CustomizedStory are related via a foreign key StoryID. I've set up the tables so that when I delete a Story row it cascade-deletes the corresponding row from CustomizedStory. Each CustomizedStory row has a reference to UserID from aspnet_Users. Since I didn't want to mess with the table definition by adding a cascade delete option on aspnet_Users, I decided to use a trigger, essentially deleting all customized stories and their associated stories when a user is deleted:

ALTER TRIGGER [dbo].[DeleteCustomizedStories]
ON [dbo].[aspnet_Users]
FOR DELETE
AS
BEGIN
    DELETE FROM dbo.Story
    WHERE StoryID IN (SELECT StoryID
                      FROM dbo.CustomizedStory
                      WHERE UserID IN (SELECT UserID FROM deleted))
END
The problem I am having is that it deletes all of the CustomizedStory rows as specified by the cascading option, but doesn’t delete the Story rows. I can’t seem to understand why this is happening, especially when I explicitly told it to delete story rows.
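In case the cause matters to anyone else reading: if the foreign key from CustomizedStory to aspnet_Users is also set to cascade (which the symptom suggests), then by the time an AFTER trigger fires, the cascade has already emptied CustomizedStory, so the subquery finds no StoryIDs. A minimal, untested sketch of one way around it, doing the work before the rows disappear:

ALTER TRIGGER [dbo].[DeleteCustomizedStories]
ON [dbo].[aspnet_Users]
INSTEAD OF DELETE
AS
BEGIN
    -- Delete the stories first, while CustomizedStory still holds the
    -- user-to-story mapping; the Story -> CustomizedStory cascade then
    -- removes the customized rows.
    DELETE FROM dbo.Story
    WHERE StoryID IN (SELECT StoryID
                      FROM dbo.CustomizedStory
                      WHERE UserID IN (SELECT UserID FROM deleted))

    -- INSTEAD OF suppresses the original delete, so issue it explicitly.
    DELETE FROM dbo.aspnet_Users
    WHERE UserID IN (SELECT UserID FROM deleted)
END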
I've created a new table that stores the UserId, a uniqueidentifier obtained from Membership.GetUser().ProviderUserKey. If I make a select through a stored procedure in code-behind, it runs as it should.

Code-behind:

Dim GetCustomersCars As CustomerCarByUserId = New CustomerCarByUserId
MyCars.DataSource = GetCustomersCars.CarByUserId(Membership.GetUser().ProviderUserKey)
MyCars.DataBind()

But when I use an ObjectDataSource it fails:

<asp:ObjectDataSource id="ObjectDataSource1" runat="server"
    selectmethod="CarByUserId" typename="CustomerCarByUserId">
    <SelectParameters>
        <asp:Parameter defaultvalue="Membership.GetUser().ProviderUserKey" name="UserId" type="Object" />
    </SelectParameters>
</asp:ObjectDataSource>

I've tried with Membership.GetUser().ProviderUserKey.ToString(), but that doesn't work. Error message: InvalidCastException. I connect to the same source in both cases. Anyone with an idea?
I get the error " 'commandBehavior' not declared". What does this mean? What is CommandBehavior exactly?
Sub Page_Load(Sender As Object, e As EventArgs)

    ' Obtain categoryId from QueryString
    Dim connectionString As String = "server=(local); trusted_connection=true; database=SalesSide"
    Dim sqlConnection As System.Data.SqlClient.SqlConnection = New System.Data.SqlClient.SqlConnection(connectionString)

    ' The original concatenation was missing commas/spaces between column
    ' names and joined tblIdent to itself; both are corrected here.
    Dim queryString As String = "SELECT tblIdent.fldStockNo, tblIdent.fldProgram, tblIdent.fldGenus, " & _
        "tblIdent.fldVariety, tblIdent.fldSize, tblAvailability.fldQuantity " & _
        "FROM tblIdent INNER JOIN tblAvailability ON tblAvailability.fldStockNo = tblIdent.fldStockNo"

    Dim sqlCommand As System.Data.SqlClient.SqlCommand = New System.Data.SqlClient.SqlCommand(queryString, sqlConnection)

    ' CommandBehavior is an enum in the System.Data namespace that tells
    ' ExecuteReader how to behave; "'commandBehavior' not declared" usually
    ' means System.Data is not imported. A likely intended call:
    sqlConnection.Open()
    Dim reader As System.Data.SqlClient.SqlDataReader = sqlCommand.ExecuteReader(System.Data.CommandBehavior.CloseConnection)

End Sub
I'd like to understand the Percent Log Used behavior... I monitored Percent Log Used. For one database, it was 50% used; I backed up the transaction log and it was still 50% used. For another database, the log was 60% used, and after the backup it was 80% used! Would you help me understand this situation? Thank you, Fabio
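As a side note on watching this: a log backup truncates only the inactive portion of the log and never shrinks the file, and any still-open transactions keep their part of the log active, so the percentage can stay flat, or even climb if activity continues during the backup. A quick way to monitor it:

-- Reports log size and percent used for every database on the server.
DBCC SQLPERF(LOGSPACE)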
Hi all, I have a question regarding some very odd NT/SQL behavior that is exhibited when running a shell command through xp_cmdshell. If I do a 'net use z: \\servername\sharename' from xp_cmdshell, it shows up in Explorer with the icon for a local drive! This would not be more than an ordinary M$ bug, except for the fact that if I go to a command line, I can't do a 'net use z: /delete' because it says that the network drive isn't there! Can someone shed some light on this?
Now, before installing the service pack I ran into several strange bugs in MSSQL, mostly in queries that used TOP; happily, they were all fixed by the service pack... or so I thought. Yesterday, while trying to optimize a heavy query (7 joins, 2 of them left joins from different tables to the same table), I hit an even stranger bug, this time with the latest service pack. First, the returned records are just not always the same: for example, when I use TOP 465 in the SELECT statement, the last record, which is 465, contains some value; when using TOP 466, the record before the last, which is still record 465, contains a different value! Of course, both have the same ORDER BY clause. Also, the execution plan is not the same in both cases: with TOP 465 it goes one way (and is much, much faster), and with TOP 466 the plan is completely different and much slower...
Has anyone encountered this phenomenon? Any suggestions? BTW, don't pay too much attention to the number 465; in my case this is the border of the problem. After trying this query on different tables, I found that each has its own border, after which TOP starts to freak out.
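One thing worth ruling out, offered as a guess rather than a diagnosis: if the ORDER BY columns are not unique, TOP n is free to break ties differently for different n (and different plans), so the row in position 465 can legitimately change between the two queries. Adding a unique tiebreaker makes the ordering total; a sketch with hypothetical names:

-- Append the primary key so TOP 465 and TOP 466 must agree on the
-- first 465 rows. (Table and column names are placeholders.)
SELECT TOP 465 *
FROM dbo.SomeTable
ORDER BY SortColumn, PrimaryKeyColumn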
I've scheduled a job to run on a certain schedule, but the Last Run Status date comes back very oddly, a couple of years out of sync; the other scheduled jobs report back just fine.
Anyone seen this behavior?
Edward R Hunter, Data Application Designer comScore Networks, Inc.
I have an SP that usually works fine (0-16 CPU time, 40 ms duration), but from time to time the server hangs with apparently no reason. The SP has a lock timeout set to 500, so it should abort if a lock timeout error (1222) occurs, but it doesn't. The Profiler reports very long execution times (over 30 sec), and because of that all other SP calls are blocked, since the transaction opened by the first SP execution is not finished yet. Other attempts to identify blocking queries (sp_lock, DBCC OPENTRAN) did not show me anything suspect other than the usual blocking chain. I'm starting to think about an I/O bottleneck, or an I/O failure that could block disk access and cause the delay. The status of the RAID 5 array is healthy.
The server is used as the storage system for a website (approx. 2000 concurrent users), and occasionally I noticed an ASP queue, but this strange behavior occurs even during off-peak hours.
Any thoughts? ----- HP server - 2 CPUs @ 3.4 GHz; 4 GB RAM; SCSI - RAID 5; Windows 2000 Advanced Server - SQL Server 2000 SP4
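One detail that may be relevant here: SET LOCK_TIMEOUT only fires when a lock request actually waits longer than the limit; it does nothing when a statement is slow for other reasons (I/O stalls, a bad plan), and error 1222 aborts only the statement, not the open transaction. A minimal sketch of the behavior, with a hypothetical table:

SET LOCK_TIMEOUT 500  -- milliseconds

BEGIN TRAN
-- If this waits on a lock for more than 500 ms, error 1222 is raised,
-- but the surrounding transaction stays open unless the caller checks
-- @@ERROR and rolls back explicitly.
UPDATE dbo.SomeTable SET SomeColumn = 1 WHERE ID = 42
IF @@ERROR <> 0
    ROLLBACK TRAN
ELSE
    COMMIT TRAN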
I would like to ask for help. We had no problems with dynamic queries in SQL 2000, which were very fast. But when we run the same queries in SQL 2005, they are many times slower, taking several seconds. I guess it has something to do with creating the execution plan, because when I run a query a second time, it is suddenly extremely fast. But after just a little change (like adding a space character), it is very slow again. If this is caused by the execution plan in SQL 2005, is it somehow possible to change its settings so that it behaves like SQL 2000?
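The symptom described (fast on a second, textually identical run; slow again after any edit) is consistent with plan caching keyed on the exact query text. A common mitigation, sketched here with hypothetical names, is to parameterize the dynamic SQL with sp_executesql so that differing values share one cached plan:

-- Pass values as parameters instead of building them into the string;
-- the plan is compiled once for the parameterized text and reused.
EXEC sp_executesql
    N'SELECT * FROM dbo.Orders WHERE CustomerId = @cust',
    N'@cust int',
    @cust = 42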
Hi folks. I have a C# app connecting to an MS Access database with several tables. In specific situations I have problems with a DateTime type in a table. The problem is that when I select records from a table for a specific period, the day and month seem to be swapped in the query, but it only happens when the swap gives a valid date, e.g. 12/10/2005 (12 Oct 2005) returns records for 10/12/2005 (10 Dec 2005), while 23/05/2005 (23 May 2005) returns records correctly, since 05/23/2005 is not a valid date with Danish regional settings. The query is:

"SELECT [ID], [Activity], [BeginDate] FROM TimeReg WHERE [BeginDate] >= #" + _start + "# AND [BeginDate] <= #" + _end + "#"

_start and _end are of type DateTime. My PC is running with Danish regional settings, and if I shift to en-US settings in the control panel, this fixes the problem, but that is not a solution for me. Any suggestions to solve this problem? Thanks in advance. Kim W.
SQL Server 2000 SP4. Running the script below prints 'Unexpected':

DECLARE @String AS varchar(1)
SELECT @String = 'z'
IF @String LIKE '[' + CHAR(32) + '-' + CHAR(255) + ']'
    PRINT 'Expected'
ELSE
    PRINT 'Unexpected'

If the @String variable is set to 'y' (or in fact any ANSI character other than 'z'), the result is 'Expected'. The comparison also evaluates as expected if CHAR(255) is replaced with CHAR(254). The server collation, if that matters, is SQL_Latin1_General_CP1_CI_AS. It would be helpful to find the explanation of this behavior. Thanks.
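A tentative explanation: in a LIKE range, the endpoints are interpreted through the collation's sort order, not through character codes. Under SQL_Latin1_General_CP1_CI_AS, CHAR(255) is 'ÿ' (y with diaeresis), which sorts with 'y' and therefore before 'z', so the range from space to 'ÿ' simply does not include 'z'. The premise can be checked directly:

-- If this prints 1, CHAR(255) sorts before 'z' in the current collation,
-- which would explain the 'Unexpected' result above.
SELECT CASE WHEN CHAR(255) < 'z' THEN 1 ELSE 0 END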
I have the following code below on a standalone computer, and it worked perfectly. Suddenly, without any significant changes to the code, no server instances are found on my local computer. I know there are several server instances on the computer. Why is it acting so unpredictably? The same thing happened when I tried SQL-DMO.
// Get a list of SQL Server instances available on the network
DataTable dtSQLServers = SmoApplication.EnumAvailableSqlServers(false);
I am writing an upsert proc that should detect a change of state for a record. The change of state happens when a particular date field (default NULL) is populated. However, I cannot get a result set that detects the changes properly.
Here is an example:

set ANSI_NULLS on
go

create table #t1 (
    ID int,
    DateField datetime
)

create table #t2 (
    ID int,
    DateField datetime
)

insert into #t1 (ID, DateField) values (1, '7/20/2006')
insert into #t2 (ID, DateField) values (1, null)

select *
from #t1
join #t2 on #t1.ID = #t2.ID
where #t1.DateField <> #t2.DateField

drop table #t1
drop table #t2
The select should return a record because NULL does not equal '7/20/2006' but it doesn't. What am I missing?
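For the record, the usual answer here: with ANSI_NULLS on, any comparison against NULL (including <>) evaluates to UNKNOWN rather than TRUE, and WHERE filters out UNKNOWN rows. A sketch of a null-safe inequality for the example above:

-- Treat "one side NULL, the other not" as a difference explicitly.
select *
from #t1
join #t2 on #t1.ID = #t2.ID
where #t1.DateField <> #t2.DateField
   or (#t1.DateField is null and #t2.DateField is not null)
   or (#t1.DateField is not null and #t2.DateField is null)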
joeuser is a domain login. testgroup and grpsysadmins are both NT groups. joeuser is a member of both testgroup and grpsysadmins. grpsysadmins is a member of the Administrators group on the SQL Server machine. testrole is a SQL role. testgroup is a login into SQL. testgroup is a member of testrole.
1) when I do select is_member('db_owner') I get 1
2) when I do select is_member('testrole') I get 0 <== this is the weird one
3) when I do select is_member('machinename\testgroup') I get 1
I am expecting all 3 to return 1. Now, if I am logged in as a user that is not sysadmin, then select is_member('testrole') returns 1.

This behavior seems to be by design, but for the life of me I have no idea why.
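One possible reading, offered without certainty: because joeuser arrives via grpsysadmins he is effectively sysadmin, and sysadmin logins map into every database as dbo rather than as themselves, so is_member() answers for dbo. dbo is not a member of testrole (hence the 0) but is trivially db_owner (hence the 1). A quick way to check who the database thinks you are:

-- If this returns 'dbo' when connected as joeuser, the role checks are
-- being evaluated against dbo, not against the login's NT groups.
SELECT USER_NAME()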
I'm seeing some strange behavior from a stored procedure of mine. It essentially grabs a bunch of rows using a fairly simple JOIN....here's the from statement:
FROM Payment PY (NOLOCK)
JOIN (SELECT DISTINCT PY.AccountPaymentId,
             ROW_NUMBER() OVER (ORDER BY PY.AccountPaymentId ASC) AS RowNum
      FROM Payment PY (NOLOCK)) AS SQ
    ON (SQ.AccountPaymentId = PY.AccountPaymentId)
INNER JOIN Payee PE ON PE.PayeeId = PY.PayeeId
INNER JOIN Party PT ON PE.PartyId = PT.PartyId
INNER JOIN Distribution DS ON PY.DistributionId = DS.DistributionId
LEFT OUTER JOIN Account AC ON DS.AccountId = AC.AccountId
INNER JOIN clm CM ON PE.clm_no = CM.clm_no
LEFT OUTER JOIN PartyAddress PA ON PY.PartyAddressId = PA.PartyAddressId
    AND PT.PartyId = PA.PartyId
WHERE RowNum BETWEEN (((@Page * @PageSize) - @PageSize) + 1)
                 AND ((@Page * @PageSize) - @PageSize) + @PageSize
  AND ((@PayeeName IS NULL) OR (PT.[Name] LIKE '%' + @PayeeName + '%'))
  AND ((@AccountId IS NULL) OR (AC.AccountId = @AccountId))
  AND ((@DistributionId IS NULL) OR (DS.DistributionId = @DistributionId))
  AND ((@PaymentDate IS NULL) OR (DATEADD(day, DATEDIFF(day, 0, PY.PaymentDate), 0) = DATEADD(day, DATEDIFF(day, 0, @PaymentDate), 0))) -- Ignores the time
  AND ((@PaymentNumber IS NULL) OR (PY.AccountPaymentId = @PaymentNumber))
  AND ((@IsReconciled IS NULL) OR (PY.ReconciledInd = @IsReconciled))
  AND ((@AmountIssued IS NULL) OR (PY.PaymentAmount = @AmountIssued))
  AND ((@AmountPaid IS NULL) OR (PY.AccountPaidAmount = @AmountPaid))
  AND ((@IssueStatus IS NULL) OR (PY.PaymentStatusEnumItemId = @IssueStatus))
  AND ((@AccountStatus IS NULL) OR (PY.AccountStatusEnumItemId = @AccountStatus))
  AND ((@IsReissued IS NULL) OR (PY.ReissuedInd = @IsReissued))
ORDER BY AccountPaymentID ASC
When I pass a 1 for the @IsReconciled parameter, I get the right number of rows back - 9779. But when I pass a 0 (zero), I get no rows back, although there are 222 rows which satisfy the condition.

Is there something I'm overlooking (I don't think I am...)? I don't know why 1 works and 0 doesn't...
FYI - the @IsReconciled parameter is set to NULL at the outset of the procedure.
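One structural suspect, offered as a hunch: the subquery numbers every payment before any filtering, and the outer WHERE keeps only RowNum values inside the requested page. If the first page's worth of AccountPaymentIds happens to contain no ReconciledInd = 0 rows, the query returns nothing even though 222 matches exist further on. A sketch of filtering inside the numbering subquery instead (only the relevant filter shown):

-- Number only the rows that survive the filter, then page over those.
FROM Payment PY (NOLOCK)
JOIN (SELECT PY.AccountPaymentId,
             ROW_NUMBER() OVER (ORDER BY PY.AccountPaymentId ASC) AS RowNum
      FROM Payment PY (NOLOCK)
      WHERE ((@IsReconciled IS NULL) OR (PY.ReconciledInd = @IsReconciled))
      -- ...the other optional filters belong here as well...
     ) AS SQ
    ON (SQ.AccountPaymentId = PY.AccountPaymentId)
WHERE RowNum BETWEEN (((@Page * @PageSize) - @PageSize) + 1)
                 AND ((@Page * @PageSize) - @PageSize) + @PageSize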
I'm seeing mixed results when I run a certain JOIN query...
Here's the original query:
SELECT DISTINCT
    PY.PaymentId, PY.PayeeId, PY.PartyAddressId, PY.DistributionId,
    PY.EntitlementId, PY.DeliveryTypeEnumItemId, PY.AccountPaymentId,
    PY.ParentPaymentId, PY.PaymentAmount, PY.PaymentDate,
    PY.PaymentStatusEnumItemId, PY.PaymentStatusDate, PY.ReleaseRunId,
    PY.ReleaseDate, PY.AccountTransactionLogId, PY.AccountStatusEnumItemId,
    PY.AccountStatusDate, PY.AccountPaidAmount, PY.ReconciledInd,
    PY.ReissuedInd, -- Added 8/15/2007 AMR
    PY.UndeliverableInd, PY.ReissueNote, PY.CreateDate, PY.CreateId,
    PY.ModifiedDate, PY.ModifiedId,
    DS.Description,
    AC.Description, -- Removed 8/14/2007 AMR
    AC.AccountProvider, AC.AccountId,
    PT.Name,
    PA.AddressLine1, PA.AddressLine2, PA.City, PA.State, PA.Zip5, PA.Zip4,
    PE.clm_no,
    CM.clmnt_idno
FROM Payment PY (NOLOCK)
LEFT JOIN (SELECT PY.AccountPaymentId, PA.AddressLine1,
                  ROW_NUMBER() OVER (ORDER BY PA.AddressLine1 ASC) AS RowNum
           FROM PartyAddress AS PA
           INNER JOIN Payee AS PE ON PA.PartyId = PE.PartyId
           INNER JOIN Payment AS PY ON PE.PayeeId = PY.PayeeId) AS SQ
    ON (SQ.AccountPaymentId = PY.AccountPaymentId)
INNER JOIN Payee PE ON PE.PayeeId = PY.PayeeId
INNER JOIN Party PT ON PE.PartyId = PT.PartyId
INNER JOIN Distribution DS ON PY.DistributionId = DS.DistributionId
LEFT OUTER JOIN Account AC ON DS.AccountId = AC.AccountId
INNER JOIN clm CM ON PE.clm_no = CM.clm_no
LEFT OUTER JOIN PartyAddress PA ON PY.PartyAddressId = PA.PartyAddressId
    AND PT.PartyId = PA.PartyId
WHERE RowNum BETWEEN (((1*1000) - 1000) + 1) AND ((1*1000) - 1000) + 1000
This query numbers rows based on the PartyAddress. When I keep the WHERE clause in the query, it returns 810 rows. There exists a relationship between PARTY and PARTYADDRESS. There are 10001 rows in the PARTY table and 810 corresponding rows in the PARTYADDRESS table. Not every PARTY will have a corresponding PARTYADDRESS.
What the query SHOULD do is return all rows from PARTY and their corresponding PARTYADDRESS entries, if there are any.
When I take the WHERE clause out, I get back all 10001 rows - what I'm expecting.
Why would the "WHERE RowNum" clause affect the wuery? How can I get around that?
I have 2 packages that for ease I'll call Parent & Child. The Parent package calls the Child package as the 4th step in the process. Once the Child has completed, the Parent has a few more imports that it does.
The Portfolio table is loaded in the Child package which is step 4 in the Parent package. Then in step 5 a few tasks utilize that Portfolio data for lookups.
The strange part is that there are probably 4 or 5 data tasks that do lookups against the Portfolio data in step 5 (step 5 is a container). All but 2 of the data tasks retrieve data from the Portfolio data. The other 2 don't find any data and just move on. Once the package stops, if I simply execute those tasks they run and load the data correctly.
It seems to me to be a caching or an isolation problem but I can't find a solution.
Scenario: I have a database in SQL Server 2000 with 90 GB of data. It grows by some millions of records every day. In order to get good performance I rebuild the indexes every night. The job takes about 1.5 hours and sometimes causes the log file to grow to 30-40 GB. I back up the log files every 20 minutes, which causes log truncation to happen automatically (according to the MS documentation). Problem: the log files fill the disk, and I cannot add new disks at this point. Question: when rebuilding the indexes takes 1.5 hours and I do a log backup (which also truncates) every 20 minutes, will the log backup ever get to do anything with the transaction log while it is still working on a transaction (the index rebuild)? Observation: it seems like the 20-minute log backup is of no help, because I see the log files grow so big.
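For context, with the caveat that this should be tested on a copy first: a log backup can only truncate records older than the oldest active transaction, and an index rebuild runs as one large transaction, so backups taken mid-rebuild free little or nothing. One commonly cited mitigation is to switch to the bulk-logged recovery model around the rebuild, since index operations are minimally logged there (at the cost of point-in-time restore across that window). A sketch with hypothetical names:

ALTER DATABASE MyBigDb SET RECOVERY BULK_LOGGED
DBCC DBREINDEX ('MyBigDb.dbo.MyBigTable')
ALTER DATABASE MyBigDb SET RECOVERY FULL
BACKUP LOG MyBigDb TO DISK = 'D:\backup\MyBigDb_log.bak'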
CASE CLOSED: removing the FROM clause noted below handles the problem.

I've used NOT EXISTS in inserts before and thought I understood how they work, but I'm puzzled why I get three rows from the following sub-query, which I want to use to prevent errors from duplicate keys. Since it gives me three rows, it will actually try to insert three duplicate records and cause a primary-key fault, which is a twin of the very thing I'm trying to avoid. (Of course, there are three records already in the table, none having the key 20050810.) The sub-query is acting like it's using "WHERE DemoDate <> 20050810" instead of a NOT EXISTS. Attempts to use this statement cause a termination with no records inserted. DemoDate is the primary key and an INT field in SQL Server. Four other int columns in this table have default values.

<code>
INSERT INTO DemoStats (DemoDate)
SELECT 20050810 AS Expr1
FROM DemoStats  -- <-- remove this line so the effective table has only one row, when the NOT EXISTS is TRUE
WHERE (NOT EXISTS (SELECT * FROM DemoStats WHERE DemoDate = 20050810))
</code>
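To spell out why removing the FROM clause works: with it, the SELECT emits the constant once per row of DemoStats (three rows), because the NOT EXISTS is evaluated once and, being true, filters nothing out. The corrected statement, per the note above:

<code>
-- Without a FROM clause the SELECT produces exactly one row, and only
-- when the key is not already present.
INSERT INTO DemoStats (DemoDate)
SELECT 20050810
WHERE NOT EXISTS (SELECT * FROM DemoStats WHERE DemoDate = 20050810)
</code>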
I have a SqlDataSource (code listed below) whose insert parameters are control parameters. My aspx page has a textbox and a submit button; the button's onclick runs SqlDataSource1.Insert(). What I get is that every other insert stores the text from TextBox2, and every other insert stores nothing for the NameCust value. I have a required field validator which correctly prevents submission if TextBox2 is empty. How do I fix this?

<code>
<asp:Panel ID="Panel1" runat="server" Height="50px" Width="548px">
    <asp:Button ID="Button1" runat="server" Text="New Prospect" ValidationGroup="insertCust" />
    <asp:RequiredFieldValidator ID="RequiredFieldValidator1" runat="server"
        ControlToValidate="TextBox2"
        ErrorMessage="Prospect Name can not be blank"
        ValidationGroup="insertCust"></asp:RequiredFieldValidator>
    <asp:TextBox ID="TextBox2" runat="server" Width="330px" ValidationGroup="insertCust"></asp:TextBox>
</asp:Panel>

<asp:SqlDataSource ID="SqlDataSource2" runat="server"
    ConnectionString="<%$ ConnectionStrings:AccPac2ConnectionString %>"
    SelectCommand="SELECT DISTINCT CODETERR FROM dbo.F_arcus() AS F_arcus_1 WHERE (DATEINAC = 0) AND (rtrim(CODETERR) <> '')">
</asp:SqlDataSource>

<asp:SqlDataSource ID="SqlDataSource1" runat="server"
    ConnectionString="<%$ ConnectionStrings:AccPac2ConnectionString %>"
    InsertCommand="INSERT INTO dbo.BudgetProspects(NameCust, CodeTerr) VALUES (@namecust, @codeterr)"
    SelectCommand="SELECT CustomerID, NameCust FROM dbo.BudgetProspects WHERE (CodeTerr = @codeterr)"
    UpdateCommand="UPDATE dbo.BudgetProspects SET NameCust = @namecust">
    <UpdateParameters>
        <asp:Parameter Name="namecust" />
    </UpdateParameters>
    <SelectParameters>
        <asp:ControlParameter ControlID="RadioButtonList1" Name="codeterr" PropertyName="SelectedValue" />
    </SelectParameters>
    <InsertParameters>
        <asp:ControlParameter ControlID="TextBox2" Name="namecust" PropertyName="Text" />
        <asp:ControlParameter ControlID="RadioButtonList1" Name="codeterr" PropertyName="SelectedValue" />
    </InsertParameters>
</asp:SqlDataSource>
</code>

Code-behind button click:

<code>
Protected Sub Button1_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles Button1.Click
    If Not TextBox2.Text Is Nothing Then
        SqlDataSource1.Insert()
        TextBox2.Text = ""
    End If
End Sub
</code>
Hi all, I face a problem as follows: we have an application running on SQL Server 2000, and we log every delete of documents (from the Archive table) in another table. Now it seems some of the rows have been deleted, strangely, without any delete log from our application. We assumed there was somebody with direct access to the database deleting them manually (obviously our app does not generate any log in this situation), but there is no such person; we checked that with the admins many times. Does SQL Server itself delete rows for any reason? How can I find out what is happening? Do you think our app is flawed somewhere? Thanks a lot for your attention.
I have a stored procedure that does a lookup on a particular field. Sometimes it runs almost instantly and other times it drags. Running the proc through ISQL, it will sometimes come back with fewer than 10 logical reads and other times with over 800,000 logical reads. This only happens on our production box (of course). Has anybody seen anything like this? Thanks -Bob-
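That pattern (the same proc alternating between a handful of reads and hundreds of thousands) is a classic parameter-sniffing symptom: the cached plan was compiled for one parameter value and fits others badly. A cheap first experiment, sketched with hypothetical names:

-- WITH RECOMPILE trades compile cost for a plan fitted to each call;
-- if the read counts stabilize, the cached plan was the culprit.
ALTER PROCEDURE dbo.LookupProc @value int
WITH RECOMPILE
AS
SELECT * FROM dbo.SomeTable WHERE SomeColumn = @value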
Hi, we are noticing some strange behavior with MSSQL. I was hoping somebody could shed some light.
Since the past few days in our production database we have been getting the following error
Could not allocate space for object 'Person' in database 'PROD' because the 'PRIMARY' filegroup is full...
Some data on our system:
The PRIMARY filegroup is 20 GB in size, and 80% of it is free. Also, the PRIMARY filegroup is set up to auto-grow and there is about 20 GB of free space at the OS level. So I don't think it has anything to do with the filegroup.
I started doing some research on the 'Person' object (table), ran sp_spaceused etc. to get some data. On a trial-and-error basis I ran DBCC INDEXDEFRAG on the 'Person' table and the error went away.
Questions
1. Why is the error misleading? Why does it say the 'PRIMARY' filegroup is full?
2. Why am I getting this error, and why does running DBCC INDEXDEFRAG fix the problem?
3. I can understand the index being fragmented and needing a defrag, but can MSSQL Server actually fail with this error if the index is fragmented too much?
4. What data can I look at to prevent this from happening in the future?
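On question 4, a couple of checks that can be run before the error recurs; the commands are standard, the table name is the one from the post:

-- Space reserved and used by the table, versus what the filegroup reports.
sp_spaceused 'Person'

-- Fragmentation and page-density figures for the table's indexes.
DBCC SHOWCONTIG ('Person')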
Running SQL 2000 SP4 on Windows 2000 Server. When a SELECT query is executed in Query Analyzer, results are displayed in the results pane, fine... when an "ORDER BY" clause is added to the SELECT statement, the query runs for approx. 20 seconds and then displays "TempDb log is full [Error 9002, Severity 17]". (The tempdb is set to autogrow/10%/unrestricted, with plenty of storage space.) The next time the query is executed after getting the "tempdb log is full" error, the server reboots upon query execution. As soon as F5 or Ctrl-E is pressed to execute the query, the server does a hard crash - black screen, then reboot... no warnings, no Event Viewer log, no SQL log warnings/errors, no drwtsn log, no hardware log message errors... nothing.
Re-applied Windows 2000 SP3 and SQL SP4 to server, same behavior.
Can someone try to explain to me how a SQL query can run fine and return data, yet when I try to run it in dtswizard, the "preview" view shows the data but actually running it returns zero rows?

This is on SQL 2005 Express, and since I can't get dtsrun or dtexec to work, I'm using AutoIt to simulate my actually stepping through the process. Very kludgy, but "when all you've got is a hammer...."
I have some code that just works. But when I put it into an exec() I get a strange error. First the code:
exec ('
select year, quarter, min(price) as minimum
into #temptable
from (
    select ntile(4) over (partition by year, quarter order by price) as rang,
           year, quarter, price
    from (
        select distinct id, year, quarter, price
        from #tbl1
    ) as a
) as b
group by rang, year, quarter

select year, quarter,
       (SELECT CAST(minimum as varchar(max)) + ","
        FROM #temptable t2
        where t1.year = t2.year AND t1.quarter = t2.quarter
        FOR XML PATH(""))
from #temptable t1
group by year, quarter
')
SQL Server says that INSERT INTO is missing a column name. It points at the line with FOR XML PATH(""). Any idea what's wrong here?
The Output without exec (and correct quotes) looks like:
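One likely culprit, hedged: inside exec() the double-quoted "," and "" are not string literals; with QUOTED_IDENTIFIER in effect they parse as identifiers, which is consistent with the complaint about a missing column name. String literals inside dynamic SQL need doubled single quotes instead, roughly like this for the second statement:

-- Each literal quote inside the outer '...' is doubled: '','' is the
-- string "," and '''' is the empty string for XML PATH.
exec ('
select year, quarter,
       (SELECT CAST(minimum as varchar(max)) + '',''
        FROM #temptable t2
        where t1.year = t2.year AND t1.quarter = t2.quarter
        FOR XML PATH(''''))
from #temptable t1
group by year, quarter
')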
I created a very simple SSIS package (it just updates a single row in a table). When I execute the package from the command line (using dtexec), it takes about a second to finish, as expected. But when I execute it using dtexec via xp_cmdshell, it takes about 91 seconds. When I use a SQL job to execute the package as an operating-system step, it takes 91 seconds. Using a SQL job to execute it as an SSIS package again takes 91 seconds. It appears that something is causing a delay of about 90 seconds before the package actually gets executed. I tried changing the SSIS service account, but that didn't change anything. Why is executing the package through SS2005 different from executing it directly from the command prompt?
I have a trigger on each table in a database which updates a datetime column (lastupdatedon) and a varchar field (enteredby) after update on each individual table. The problem is, when one table is updated at the same instant as another table (by different users), the same varchar data (SYSTEM_USER) is put in both tables, even though the users are different.
Here is an example of the trigger:
CREATE TRIGGER EventUpdate ON jrowley.Event
AFTER UPDATE
AS
UPDATE jrowley.Event
SET LastChangedOn = getdate(),
    EnteredBy = SYSTEM_USER
WHERE EventID IN (SELECT EventID FROM deleted)
Ok maybe someone smarter than me (not difficult) can help me out :)
Two queries:
#1:
select a.load_id, b.attribute_name, a.attribute_loc, b.attribute_loc
from PCI_Template_NR_Map b
left outer join PCI_Master a
    on a.attribute_name = b.attribute_name
    and a.load_id in (select distinct top 53 load_id from PCI_Load)
#2:
select a.load_id, b.attribute_name, a.attribute_loc, b.attribute_loc
from PCI_Template_NR_Map b
left outer join PCI_Master a
    on a.attribute_name = b.attribute_name
    and a.load_id in (select distinct top 54 load_id from PCI_Load)
#1 produces a correct left outer join: any values in PCI_Template_NR_Map that are not in PCI_Master show NULL. This works for any number of load_id values in the subselect up to 53.

#2 is the exact same query, except I am no longer limiting it to 53. When I get to 54 (or if I take away the TOP altogether), it returns rows as if it were a normal inner join instead of a left outer join (i.e. it only shows rows that match between PCI_Master and PCI_Template_NR_Map).
Can anyone explain to me what is happening here, and how to get around this issue? I need to be able to filter this for as many load_ids as I need (usually about 200). Thanks in advance.
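One caution that may or may not be the whole story: TOP without an ORDER BY returns an arbitrary subset, so the top 53 and top 54 load_ids are not guaranteed to be supersets of one another, and the optimizer may also pick a different plan for each. Pinning the subselect down rules that variable out:

-- Make the subselect deterministic before comparing 53 vs. 54.
select a.load_id, b.attribute_name, a.attribute_loc, b.attribute_loc
from PCI_Template_NR_Map b
left outer join PCI_Master a
    on a.attribute_name = b.attribute_name
    and a.load_id in (select distinct top 54 load_id
                      from PCI_Load
                      order by load_id)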
Can't figure out why the following locking scenario works the way it does:

spid 1: start a transaction; select a row from table T1 with UPDLOCK. Result: I see an update lock on the row and index key.

spid 2: query for the same row as in session 1. Result: the query succeeds.

spid 3: do the same as spid 1. Result: it blocks trying to get an update lock on the index key for the row.

Now I run the query again in spid 2 and it blocks trying to get a shared lock on the index key, and it's waiting for spid 3. Why was it able to get a shared lock on an index key that had an update lock at first, but then can't get the same shared lock when somebody else is also trying to get an update lock?
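A sketch of the scenario with the usual explanation attached, hedged: shared (S) and update (U) locks are compatible, which is why spid 2 succeeds at first, but lock requests are granted in queue order to prevent starvation, so once spid 3's U request is waiting, spid 2's new S request queues behind it instead of jumping ahead. (The key column name here is hypothetical.)

-- spid 1
BEGIN TRAN
SELECT * FROM T1 WITH (UPDLOCK) WHERE id = 1   -- holds a U lock on the key

-- spid 2
SELECT * FROM T1 WHERE id = 1                  -- S is compatible with U: succeeds

-- spid 3
BEGIN TRAN
SELECT * FROM T1 WITH (UPDLOCK) WHERE id = 1   -- U conflicts with U: waits

-- spid 2 again
SELECT * FROM T1 WHERE id = 1                  -- queues behind the waiting U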