What is the order in which conditions are processed for a SQL query? For example, in
select * from table1, table2 where cond1 and cond2 and cond3
which condition will be processed first? (i.e. for optimization purposes, should the condition that cuts down the maximum number of rows be placed first or last?)
When I process an order in table Orders_t, I would like to check in storage_t whether we definitely have it in storage. If we have it in storage, I decrease the "amount" by 1 (amount - 1) and process the order. Otherwise it will return nothing.
This is what I've come up with so far:
CREATE TRIGGER checkInStorage ON orders_t FOR INSERT, UPDATE
AS
DECLARE @tOrderId char(3)
BEGIN
    SET @tOrderId = (SELECT orderId FROM INSERTED)

    -- check the amount in storage
    IF EXISTS (SELECT amount FROM storage_t WHERE orderId = @tOrderId AND amount >= 0)
    BEGIN
        -- if it returns true, update the storage by decreasing the amount by one
        UPDATE storage_t SET amount = amount - 1 WHERE orderId = @tOrderId
    END
END
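In case it helps, the same idea can be written set-based so that it also copes with INSERTED containing more than one row. This is only a rough sketch, keeping the table and column names from the snippet above:

CREATE TRIGGER checkInStorage ON orders_t FOR INSERT, UPDATE
AS
BEGIN
    -- INSERTED can contain many rows, so join against it instead of
    -- assigning a single orderId to a variable.
    UPDATE s
    SET s.amount = s.amount - 1
    FROM storage_t AS s
    JOIN INSERTED AS i ON i.orderId = s.orderId
    WHERE s.amount > 0;
END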
I have a query with many (approximately 30) conditions, such as:
select ....... from table1 join table2 on ( (table1.field1 = table2.field1 OR table1.field1 IS NULL) AND (table1.field2 = table2.field2 OR table1.field2 IS NULL) )
My question is:
In C++ or C#, when I write a condition like this, say, in an IF or WHILE, I know that I would be better off specifying the IS NULL check (well, == null, to be precise) first and relying on short-circuit evaluation (||). In that case, the first condition (equality to null) is evaluated first; it's cheap, and when it is satisfied the second, more expensive comparison is skipped entirely.
The question is, is there the same rule in T-SQL?
I mean, if I put the "... IS NULL" first, and then "OR ... = ...", will the query run faster than if I write it the other way around (that is, "... = ... OR ... IS NULL")?
This is very important to me, because most of those fields are VARCHAR, and due to some business rules, I can't change them to numerics etc, which would be compared much faster than text. So, even if I use full text search, I still need to find a way to optimize the query for performance...
By the way, I know that I can put those conditions in the WHERE clause, but as far as I know it won't make much of a difference for performance. So, my question is primarily about the order in which SQL Server evaluates the conditions when it constructs its query plan.
[Edited:] In other words, what runs faster: comparing varchar to null or comparing varchars? And does it make a difference if I switch their places in my sql script?
We are using SQL Server 2000 SP4, Standard Edition. [Dev edition on the dev machine.]
My question is fairly simple. When I join between two tables, I always use the ON syntax. For example:
SELECT *
FROM Users
JOIN UserRoles
    ON (Users.UserRoleId = UserRoles.UserRoleId)
No problems there. However, if I then decide to further filter the selection based on some trait of the UserRole, I have two options: I can add the condition in the WHERE clause, or I can add the condition within the ON block.
--Version 1:
SELECT *
FROM Users
JOIN UserRoles
    ON (Users.UserRoleId = UserRoles.UserRoleId)
WHERE UserRoles.Active = 'TRUE'
-- Version 2
SELECT *
FROM Users
JOIN UserRoles
    ON (Users.UserRoleId = UserRoles.UserRoleId
        AND UserRoles.Active = 'TRUE')
So, the question is, which is faster/better, if either? The Query Analyzer shows the two queries have the exact same execution plan, which makes sense, since they're both joining the same tables. However, I'm wondering if adding the condition in the ON statement results in fewer rows the JOIN statement initially needs to join up, thus reducing the overall initial size of the results table before the WHERE conditions are applied.
So is there a difference, performance-wise? I imagine that if Users had a thousand records and UserRoles had 10 records, the JOIN would create a cartesian product of the two tables, resulting in 10,000 records before the WHERE conditions are applied. However, if only three of the UserRoles are set to Active, would that mean that the resulting table, before applying the WHERE conditions, would only contain 3,000 records?
I have a confusing situation that I CAN resolve with cursors, BUT cursors are far too performance-hungry to be run.
Tables: LogFileIn, LogFileOut.
I want to look in LogFileIn (LFI) for files that have a NULL LogFileOut (the response to the LFI) AND don't have any other value.
A file can have multiple LFO's depending on how many times that file gets processed.
It is confusing but bear with me.
Code:
select * from LogFileIn where IdentityFileChar = 'BPRTFILE' and responseLogFileOutId is null
This does pull out a list of files that haven't been processed, BUT that same file might have been processed again and succeeded. Therefore the results I get aren't accurate.
With the cursor I would have taken the filename from the LFI and used that to check if that filename comes up again in the LFO as processed.
Did you get that? And can you help?? I'm confusing myself!
This table records every time a file is run, so there might be 20 identical LFI's, all with different LFO's... I'm looking for LFI's that have null, and only null.
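If it helps, here is a rough set-based sketch of that cursor logic, assuming LogFileIn has a FileName column and that "processed" means another row for the same file with a non-null responseLogFileOutId (both assumptions, since the real column names aren't shown):

-- Files whose every BPRTFILE run has a NULL response, i.e. no run ever succeeded
SELECT lfi.*
FROM LogFileIn AS lfi
WHERE lfi.IdentityFileChar = 'BPRTFILE'
  AND lfi.responseLogFileOutId IS NULL
  AND NOT EXISTS (
        SELECT 1
        FROM LogFileIn AS other
        WHERE other.FileName = lfi.FileName            -- FileName column is an assumption
          AND other.responseLogFileOutId IS NOT NULL   -- a later run that did get a response
      );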
I'm working with SQL Server Reporting Services (SQL Server 2005). I tried to configure a data-driven subscription, but the run failed with the error: Done: 1 processed of 1 total; 1 errors.
I cannot find anything in the error log ReportServerService__&lt;datetime&gt;.log located at &lt;Program Files&gt;\Microsoft SQL Server\MSSQL.#\Reporting Services\LogFiles.
How can I find useful information about the error message?
I am using HTTP to access my cube (MS Analysis Server) and there are users who are getting "Query could not be processed. Access is forbidden" intermittently. Can someone help out and let me know what is causing this issue? Some users are not getting this error.
Just when it seemed that everything worked, some weird behaviour came out.
I'll try to summarize the problem without posting the complete code.
Service Broker is set to have a dialog between two databases on the same SQL Server instance.
The Initiator queue has retention=on and there is an activation SP to handle errors and Target's end dialog message.
The Target queue has retention=off, MAX_QUEUE_READERS = 1, and there is an activation SP to receive the message (WAITFOR (RECEIVE TOP (1) ...), TIMEOUT 30000) and do something with it (e.g. insert into a DB).
The conversation has a Timeout Dialog to end the dialog after a while.
The problem is that the message is constantly processed. The process doesn't stop even if I end the dialog after the processing.
N.B. the RECEIVE is within a transaction that I commit at the end.
Some other information I found out in the meantime:
This was my complete WAITFOR (RECEIVE ...):
WAITFOR (
    RECEIVE TOP (1)                          -- just handle one message at a time
        @message_type = message_type_id,     -- the type of message received
        @messagetypename = message_type_name,
        @message_body = message_body,        -- the message contents
        @dialog = conversation_handle        -- the identifier of the dialog this message was received on
    FROM [TargetQueue]
), TIMEOUT 1000;

IF (@@ROWCOUNT = 0)
BEGIN
    COMMIT;
    BREAK;
END
If I delete TIMEOUT 1000, everything works as expected... Inside the IF (@@ROWCOUNT = 0) BEGIN..END block I also wrote an INSERT into a table to see whether the end of the queue had been reached, but this insert never occurs (neither with nor without the timeout). I'm happy that it works, but if this is the solution, it makes no sense to me!
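For comparison, the usual shape of an activation procedure is a loop that keeps receiving until RECEIVE times out with no rows. A minimal sketch (the queue name and the processing step are placeholders, not the actual code from the post):

DECLARE @dialog uniqueidentifier,
        @message_type sysname,
        @message_body varbinary(max);

WHILE (1 = 1)
BEGIN
    BEGIN TRANSACTION;

    WAITFOR (
        RECEIVE TOP (1)
            @message_type = message_type_name,
            @message_body = message_body,
            @dialog       = conversation_handle
        FROM [TargetQueue]
    ), TIMEOUT 1000;

    -- Nothing arrived within the timeout: release the transaction and stop.
    IF (@@ROWCOUNT = 0)
    BEGIN
        COMMIT TRANSACTION;
        BREAK;
    END

    -- ... process @message_body here; end the conversation on EndDialog/Error messages ...

    COMMIT TRANSACTION;
END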
I have a scenario where I need to run the scheduled package every 10 minutes. Let me give some examples. Once my ETL process is done, my SS_Batch table gets inserted once. Here I have two columns, SS_Batch_CD and SS_Create_TS. Once the ETL process runs successfully, the SS_Batch_CD column will have the value 'C', which means 'Completed'. Similarly, when the ETL process fails, the SS_Batch_CD column will have the value 'F', which means 'Failed'. And when the ETL process is in progress, the SS_Batch_CD column will have the value 'P', which means it is in 'Progress'. The SS_Create_TS column will have a date like '2008-04-15'.
Note: This SS_Batch table will get inserted only once when the ETL job is over (whether the job ran successfully, failed, or is in progress), along with the date.
Actually, I need to run the package every 10 minutes because I don't know when the last cube was processed. If I get this last processed cube date, then I can check it against the SS_Create_TS column and SS_Batch_CD in the SS_Batch table. Say, if the last processed cube date is greater than SS_Create_TS, then I can refresh or process the cube. I need to do this validation to achieve my goal.
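To make the intent concrete, the check could be expressed roughly like this. The direction of the comparison (process the cube when a completed batch is newer than the last cube process) and the source of @LastProcessedCube are assumptions here:

-- Rough sketch only; column names follow the description above
DECLARE @LastProcessedCube datetime
SET @LastProcessedCube = '2008-04-14'          -- placeholder: the cube's last processed date

IF EXISTS (SELECT 1
           FROM SS_Batch
           WHERE SS_Batch_CD = 'C'                     -- ETL completed successfully
             AND SS_Create_TS > @LastProcessedCube)    -- and finished after the last cube process
    PRINT 'Refresh / process the cube'
ELSE
    PRINT 'Nothing to do yet'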
What are all the control flow tasks we need in SSIS to achieve this scenario? Please explain briefly how to solve this problem.
We've just patched our Dev server, and, of our 3 servers (Dev, Test, Prod), we see major changes in the output of a raw data import process that runs nightly. Each night we import tables from a Remedy helpdesk system running on Oracle and place each ticket into a row on a table, tracking the changes and history of the ticket, etc. This includes tracking when supervisor groups are changed during the course of a ticket (ie Helpdesk to Data Comms to Billing etc). Now, after SP1, the results on Dev are skewed with partial strings showing in the From and To fields, broken in odd places (like the middle of words).
Has anyone noticed any changes in the way post-SP1 SQL Server 2005 processes strings? Does it automatically trim spaces or convert NULLs, etc.?
The data import should be identical between Dev and the other servers.
Hi all, I have some questions about stopping my package once the cube is refreshed or processed.
Below are the steps and data flow transformations in my package:
1. Data Flow Task
2. Script Task 1 - code to get the last processed cube date (a global variable, lastProcessedCube, has been declared for the last processed cube date)
3. Script Task 2 - code that reads the SS_Batch table to get the Create_TS date, which is assigned to another global variable, create_ts
4. Analysis Processing Task.
Between Script Task 2 and the Analysis Services Processing Task I have set @lastProcessedCube > @create_ts as the expression, with "Expression and Constraint" selected, in the Precedence Constraint Editor.
Actually, I need to run the package every 10 minutes, which I can do with a job schedule, and I need to refresh or process the cube daily. Is there any way to stop the package once my cube has been processed for that day, and then start the package again the next day? Is it possible to do this? Please let me know.
I have moved some DB's from a physical server to a virtual server while the physical server is being upgraded. This morning when I checked to make sure all the jobs are running fine I saw that a backup job failed for one of the databases. The error was :
Executed as user: NT AUTHORITY\SYSTEM. BACKUP 'database 1' detected an error on page (1:1400911) in file 'D:\Data\database 1.mdf'. [SQLSTATE 42000] (Error 3043) BACKUP DATABASE is terminating abnormally. [SQLSTATE 42000] (Error 3013). The step failed.
Then I ran DBCC CHECKDB and found the following errors:
Msg 8928, Level 16, State 1, Line 1
Msg 8939, Level 16, State 98, Line 1
Msg 8976, Level 16, State 1, Line 1
Msg 8978, Level 16, State 1, Line 1
Msg 8939, Level 16, State 98, Line 1
Msg 8978, Level 16, State 1, Line 1
The messages vary from "data from one page cannot be referenced on another", to "data page cannot be processed", to "page was not seen in the scan although the parent was", and so on.
Hi;
I have been writing a lot of quick and dirty T-SQL scripts to correct database issues, and I've been saving the output of the scripts for future records. I have been executing them through Query Analyzer. I find the messages Query Analyzer puts out to be obscuring my output, e.g. "6 rows processed". Is there any way to shut this off?
Steve
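For what it's worth, if it is the "n rows affected/processed" messages that are meant here, SET NOCOUNT ON at the top of the script suppresses them for that session. The UPDATE below is only a hypothetical fix-up statement for illustration:

SET NOCOUNT ON    -- suppress the "n rows affected" messages

UPDATE dbo.SomeBrokenTable        -- hypothetical cleanup statement
SET SomeColumn = 'corrected'
WHERE SomeColumn = 'broken'

SET NOCOUNT OFF   -- optional: restore the default behaviour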
I have created a cube with Analysis Services 2005. I then publish the pivot table (generated in Microsoft Excel 2003) as a web page. When I view the web page on the domain LAN, everything works properly. But when I view the web page over the Internet, I get the error:
The query could not be processed:
* An error was encountered in the transport layer
* The peer prematurely closed the connection.
The web page is published on Internet Information Server; I changed the directory security of the website, but the error persists.
The firewall of the machine is disabled.
Ports 2382 and 2883 are allowed.
The components needed to query the cube are installed on the machine from which the cube is being viewed.
The Analysis Services role allows all users.
What's the best way to write key values of records processed in my SSIS 2012 package to the chosen log provider? My SSIS package deactivates widgets as well as thingies. It was just released into production this week and runs daily, and we'd like to keep a close eye on what it's doing for a couple of weeks; by that I mean being able, on any day, to quickly see which thingies and widgets were deactivated that morning. It typically deactivates fewer than 5 widgets and thingies per day.
We could dig through the database to see which were deactivated, but that only works if somebody hasn't manually reactivated one since it was deactivated. We need a log. This is a temporary watch we're doing, so we don't want to write to a table or make any significant package changes, such as adding new tasks. It seems like writing the 5-or-so deactivated thingy and widget key values to the log is the best way to watch this package. What's the most efficient way to do this? I'm hoping to avoid a new loop and script component with "Dts.Log" calls, but I don't know any other way.
I can pull a last processed date for the entire model: URL... That works great for showing the last time anything in the entire model was processed, but it is not table specific: updating/processing table A also changes the timestamp on tables B and C. However, if I want to look at just a specific table in the model (building a column for each fact table and a measure to go with it), I find that doing any process operation updates all the =NOW() columns in all the fact tables. If I have a model with 3 fact tables and I do a process table using ProcessFull on one of them, all three tables' "LastProcessed" =NOW() calculated columns are updated. URL...
<Type>ProcessFull</Type>
<Object>
    <DatabaseID>MyModel</DatabaseID>
    <DimensionID>TableA</DimensionID>
</Object>
[code]...
Is there a way to set up a measure in the model, at a per-table level, to show the last processed date for each individual table? LastDataRefresh := MAX('TableA'[LastProcessed])
For example, our sales table may update three times a day, whereas our warehouse inventory table is only updated nightly. I wanted to let end users see, by adding a measure for each table, when the last process event was for a given table in the model.
I have an Integration Services application that retrieves big strings (over 32K) from an ODBC data source and stores them in a SQL Server 2005 database as ntext. Somehow the strings are truncated to 32K. The app only copies the data from the ODBC data source to the (OLE DB) target. Also, some of the special characters in the input text are not properly translated.
I have a main report and 2 subreports. I would like to pass the total Premium Paid from the 2nd subreport, and the total Bonus received from the 3rd subreport, back to the main report.
My scenario is actually much more complicated than what I have attached below, and I cannot join the queries for the 3 different reports together. But for demo purposes, I created the following sample scenario.
My main report displays the sales summary by person by location.
A person may have more than one account number, and each account number is entitled to bonuses, as illustrated below.
I'm looking at SSIS and SqlBulkCopy as a possible method to quickly insert and process large amounts of data. My current method uses sp_xml_preparedocument and OPENXML to parse an XML document of the data I want to process and insert into the database; however, I'm noticing that the performance of SQL Server parsing the XML document isn't that good.
I found the C# SqlBulkCopy method (new in .NET 2.0) and I was thinking it would be faster to use that to load my data into a staging table and then use SSIS to extract the data from the staging table, process and transform it as necessary and finally load it into the final destination tables. I was able to create an Integration Services Project that selects the data from the staging table, does a bit of processing on one of the fields (by calling a stored procedure), and finally loads the processed data into the final table.
The problem with this is that I need to clean out the rows that were extracted from the staging table, and I'm not sure how to accomplish this (I can't just issue a "delete from staging_table" because there may be new records in the staging table that were not processed). Perhaps I can either delete each record as it is processed, or somehow get the last processed identity id from the staging table and delete all records less than or equal to that id.
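One way to express that last idea in T-SQL, assuming the staging table has an identity column called id (the table and column names here are assumptions), is to capture the boundary before processing and delete only up to it afterwards:

DECLARE @lastProcessedId int

-- Capture the highest identity value that this run will process
SELECT @lastProcessedId = MAX(id) FROM dbo.staging_table

-- ... run the SSIS processing against rows with id <= @lastProcessedId ...

-- Remove only the rows that belonged to this run; newer rows stay for the next run
DELETE FROM dbo.staging_table
WHERE id <= @lastProcessedId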
thanks in advance for any help you can provide, maybe there is an easy way to accomplish what I'm trying to do.
I've created a cube and it processed fine. The CALCULATE command is there. The measure that I'm attempting to run in the SSAS/Visual Studio browser is simply a count-of-rows measure. When I drag the measure to the window, it says no rows available. If I click on the filter that allows nulls, the only change is that it goes from "no rows available" to "NULL".
I have data-driven subscriptions which are working fine, except for one I have newly developed, which is used to send the missing punch report to employees. When the subscription runs, the report server shows that all the emails have been sent, but some employees are not getting any email. I tested it on my own id by sending each person's report to myself, but I also sometimes only get 2 of 15, or 4 of 15, and so on, not all of them. I also checked the mailroot folder under my server's inetpub; nothing is stuck there. How can I check why it is not sending email to all employees?
I tried to use the "How to call a Parameterized Stored Procedure by Using ADO.NET and Visual Basic .NET" article at http://support.microsoft.com/kb/308049 to learn "Use DataReader to Return Rows and Parameters" in my VB 2005 Express. I did the following things:
1) created a stored procedure "pubsTestProc1.sql" in my SQL Server Management Studio Express (SSMSE):
USE pubs
GO
Create Procedure TestProcedure
(
@au_idIN varchar (11),
@numTitlesOUT Integer OUTPUT
)
As
select A.au_fname, A.au_lname, T.title
from authors as A join titleauthor as TA on
A.au_id=TA.au_id
join titles as T
on T.title_id=TA.title_id
where A.au_id=@au_idIN
set @numTitlesOUT = @@Rowcount
return (5)
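Before moving to VB, the procedure can be sanity-checked directly from SSMSE with something like the following (the author id is just a sample value from the pubs database):

USE pubs
GO
DECLARE @numTitles int
DECLARE @retValue int

EXEC @retValue = TestProcedure
     @au_idIN = '213-46-8915',              -- sample author id from pubs
     @numTitlesOUT = @numTitles OUTPUT

SELECT @retValue AS ReturnValue, @numTitles AS NumberOfTitles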
2) created a project "pubsTestProc1.vb" in my VB 2005 Express and copied the following code (the SQL Client Data Provider version) from http://support.microsoft.com/kb/308049, adding the code to the Form_Load event:
Imports System.Data
Imports System.Data.Client
Imports System.Data.SqlType
Imports System.Data.Odbc
Public Class Form1
Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
Dim PubsConn As SqlConnection = New SqlConnection & _
Console.WriteLine("Number of Records: " & (NumTitles.Value))
End Sub
End Class

I compiled the above code and I got the following 15 errors:

Warning 1 Namespace or type specified in the Imports 'System.Data.Client' doesn't contain any public member or cannot be found. Make sure the namespace or the type is defined and contains at least one public member. Make sure the imported element name doesn't use any aliases. C:\Documents and Settings\e1enxshc\My Documents\Visual Studio 2005\Projects\pubsTestProc1\pubsTestProc1\Form1.vb 2 9 pubsTestProc1
Warning 2 Namespace or type specified in the Imports 'System.Data.SqlType' doesn't contain any public member or cannot be found. Make sure the namespace or the type is defined and contains at least one public member. Make sure the imported element name doesn't use any aliases. C:\Documents and Settings\e1enxshc\My Documents\Visual Studio 2005\Projects\pubsTestProc1\pubsTestProc1\Form1.vb 3 9 pubsTestProc1
Error 3 Type 'SqlConnection' is not defined. C:\Documents and Settings\e1enxshc\My Documents\Visual Studio 2005\Projects\pubsTestProc1\pubsTestProc1\Form1.vb 10 25 pubsTestProc1
Error 4 ')' expected. C:\Documents and Settings\e1enxshc\My Documents\Visual Studio 2005\Projects\pubsTestProc1\pubsTestProc1\Form1.vb 15 30 pubsTestProc1
Error 5 Name 'testCMD' is not declared. C:\Documents and Settings\e1enxshc\My Documents\Visual Studio 2005\Projects\pubsTestProc1\pubsTestProc1\Form1.vb 17 9 pubsTestProc1
Error 6 ')' expected. C:\Documents and Settings\e1enxshc\My Documents\Visual Studio 2005\Projects\pubsTestProc1\pubsTestProc1\Form1.vb 20 23 pubsTestProc1
Error 7 Name 'RetValue' is not declared. C:\Documents and Settings\e1enxshc\My Documents\Visual Studio 2005\Projects\pubsTestProc1\pubsTestProc1\Form1.vb 21 9 pubsTestProc1
Error 8 ')' expected. C:\Documents and Settings\e1enxshc\My Documents\Visual Studio 2005\Projects\pubsTestProc1\pubsTestProc1\Form1.vb 23 23 pubsTestProc1
Error 9 Name 'auIDIN' is not declared. C:\Documents and Settings\e1enxshc\My Documents\Visual Studio 2005\Projects\pubsTestProc1\pubsTestProc1\Form1.vb 24 9 pubsTestProc1
Error 10 ')' expected. C:\Documents and Settings\e1enxshc\My Documents\Visual Studio 2005\Projects\pubsTestProc1\pubsTestProc1\Form1.vb 26 28 pubsTestProc1
Error 11 Name 'NumTitles' is not declared. C:\Documents and Settings\e1enxshc\My Documents\Visual Studio 2005\Projects\pubsTestProc1\pubsTestProc1\Form1.vb 27 9 pubsTestProc1
Error 12 Name 'auIDIN' is not declared. C:\Documents and Settings\e1enxshc\My Documents\Visual Studio 2005\Projects\pubsTestProc1\pubsTestProc1\Form1.vb 29 9 pubsTestProc1
Error 13 Type 'SqlDataReader' is not defined. C:\Documents and Settings\e1enxshc\My Documents\Visual Studio 2005\Projects\pubsTestProc1\pubsTestProc1\Form1.vb 32 25 pubsTestProc1
Error 14 Name 'RetValue' is not declared. C:\Documents and Settings\e1enxshc\My Documents\Visual Studio 2005\Projects\pubsTestProc1\pubsTestProc1\Form1.vb 39 47 pubsTestProc1
Error 15 Name 'NumTitles' is not declared. C:\Documents and Settings\e1enxshc\My Documents\Visual Studio 2005\Projects\pubsTestProc1\pubsTestProc1\Form1.vb 40 52 pubsTestProc1

First, I am completely lost here already. Second, I should have the following code from http://support.microsoft.com/kb/308049 too:
OLE DB Data Provider
Dim PubsConn As OleDbConnection = New OleDbConnection _
    ("Provider=sqloledb;Data Source=server;" & _
     "integrated security=sspi;initial Catalog=pubs;")
Dim testCMD As OleDbCommand = New OleDbCommand _
    ("TestProcedure", PubsConn)
testCMD.CommandType = CommandType.StoredProcedure
Dim RetValue As OleDbParameter = testCMD.Parameters.Add _
Console.WriteLine("Number of Records: " & (NumTitles.Value))

Now, I am completely out of touch with these two sets of code from Microsoft KB 308049 and do not know how to proceed to get the output stated in KB 308049; see below:
4. Modify the connection string for the Connection object to point to the server that is running SQL Server.
5. Run the code. Notice that the ExecuteScalar method of the Command object returns the parameters. ExecuteScalar also returns the value of column 1, row 1 of the returned rowset. Thus, the value of intCount is the result of the count function from the stored procedure.

Please help and tell me what I should do to get this project onto the right track.
Finding the "pieces of information" I need to successfully install the SQL Server Express edition is so complex. Uninstalls do "not" really uninstall completely, leading to failure of SQL install. Can you suggest a thorough, one-stop site for directions for the order of app uninstalls and then the order for app installs for the following...
SQL Server Express edition
Visual Studio 2005
Jet 4.0 newest upgrade
.Net Framework 2.0 (or should I use 3.0)
VS2005 Security upgrade
Anything else I need for just creating a database for my VS2005 Visual Basic project?
I was trying to use MS Access as my backend db but would like to try SQL Express
In SQL Server 2008, I have two fields - Department and Employees. I need to sort the result set by employee number in ascending order, with the following exceptions:
1) When department number = 50, the preferred order is Employee # 573, followed by 551-572 (employee #s belonging to Dept 50 = 551-573).
2) When department number = 20, the preferred sort order is Employee # 213-220, followed by Employee # 201-213 (employee #s belonging to Dept 20 = 201-220).
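A rough sketch of that kind of conditional ordering uses a CASE expression in the ORDER BY to push the preferred employees to the front of their department. The table and column names below are assumptions:

SELECT Department, EmployeeNo
FROM dbo.Employees                  -- table and column names are assumptions
ORDER BY
    Department,
    CASE
        WHEN Department = 50 AND EmployeeNo = 573  THEN 0   -- 573 sorts ahead of 551-572
        WHEN Department = 20 AND EmployeeNo >= 213 THEN 0   -- 213-220 sort ahead of the rest
        ELSE 1
    END,
    EmployeeNo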
I never paid much attention to this before but I noticed this today in a new table I was creating.
For tables defined in the tabular model, the table properties have something like SELECT Blah FROM TableName ORDER BY Blah. Then in the tabular model the table's data is in the same order it was ordered by in the data source query for the table.
I have a date table I setup and I noticed it is NOT respecting the sort order.
I have it sorted by DateID, which sorts with the oldest date first and the newest date as the last row. However, the table that is imported and stored in the data model is not in that order.
I can of course manually sort the rows in BIDS/DataTools, but I find this discrepancy odd.
Would this have negative impacts on the EARLIER function for example if the data rows are not in the order specified?
INSERT INTO PurchaseOrder (PurchaseOrderDate, SupplierID) VALUES(@date, @SupplierID)
END
SET @POno = @@IDENTITY
RETURN
However, how do I make it automatically add items under the POno being generated? Can I use a trigger so that whenever an insert for a PO succeeds, it automatically proceeds to adding the items into the table PurchaseOrderItem?
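A trigger is one option, but the same thing can also be done inside the stored procedure by capturing the generated key right after the header insert. A rough sketch, with PurchaseOrderItem's columns and the item parameters assumed:

INSERT INTO PurchaseOrder (PurchaseOrderDate, SupplierID)
VALUES (@date, @SupplierID)

-- SCOPE_IDENTITY() is generally safer than @@IDENTITY because it is not affected
-- by identity values generated in triggers on other tables.
SET @POno = SCOPE_IDENTITY()

-- Add the item rows under the freshly generated PO number
INSERT INTO PurchaseOrderItem (POno, ItemID, Quantity)   -- column names are assumptions
VALUES (@POno, @ItemID, @Quantity)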