Dear All, please help me in optimising the following query.
I want to reduce the repeated reads from the tables via SELECT; the tables do not
have referential integrity constraints or relations.
CREATE proc Rolex136Sync
as
DECLARE @date varchar(50),@ydate varchar(50)
print CONVERT(char(11),(GETDATE()-1),100)
SET @date =
substring(CONVERT(char(11),(GETDATE()),100),5,2)+'-'+substring(CONVERT(char(11),(GETDATE()),100),1,3)+'-'+substring(CONVERT(char(11),(GETDATE()),100),8,4)
SET @ydate =
substring(CONVERT(char(11),(GETDATE()-1),100),5,2)+'-'+substring(CONVERT(char(11),(GETDATE()-1),100),1,3)+'-'+substring(CONVERT(char(11),(GETDATE()-1),100),8,4)
Print @date
Print @ydate
insert into
biiod.dbo.data_trans_currentday_test(MobileNo,UA,MessageID,ContentID,Description,MusicLabel,CPID,CPName,ContentType,Category,SubCategory,TransactionDate,Units,Unitprice,Shortcode,Servicecode,OperatorID,CatID,SubCatID,SpecialPackage,Royalties,
Operator,Circle,OPGPName)
(select mobileno,
(SELECT CASE ua
when 'unknown' then null
else ua
end) as ua,
(select case remarks
when 'unknown' then null
else remarks
end) as remarks,
contentid,
(select case description
when 'unknown' then null
else description
end) as description,
(select musiclabel from datalogs.dbo.cont_master where contentid =
datalogs.dbo.translogs.contentid) as musiclabel,
(select cpid from datalogs.dbo.contentprovider where cpname =
datalogs.dbo.translogs.cpname) as cpid,
cpname,
contenttype,
(select catname from datalogs.dbo.cont_Catg where catid in (select
catid from cont_master where contentid =
datalogs.dbo.translogs.contentid)) as category,
(select subcatname from datalogs.dbo.cont_subCatg where subcatid in
(select subcatid from cont_master where contentid =
datalogs.dbo.translogs.contentid)) as subcategory,
transactiondate,1 as Units, price,
(select case servicename
when 'AIRTELIVE' then remarks
when 'ALCOMBOPACKREG' then remarks
when 'HINDI' then remarks
when 'NOKIAGAL' then remarks
when 'SUDOKU' then remarks
when 'SUDOKU_APP' then remarks
else NULL
end) as SHORTCODE,
servicename,
(select case servicename
when 'TSTTNEWS' THEN 600
when 'TSTTWAP' THEN 600
when 'TSTT_MMS' THEN 600
when 'AKTEL' THEN 300
when 'TELEMOVIL' THEN 700
when 'COMCEL' THEN 701
when 'QATAR2900' THEN 1
ELSE
(select operatorid from datalogs.dbo.operator where phoneseries =
substring(datalogs.dbo.translogs.mobileno,1,len(phoneseries)))
end) as operatorid,
(select catid from datalogs.dbo.cont_master where contentid =
datalogs.dbo.translogs.contentid) as catid,
(select subcatid from datalogs.dbo.cont_master where contentid =
datalogs.dbo.translogs.contentid) as subcatid,
(select specialpackage from datalogs.dbo.cont_master where contentid =
datalogs.dbo.translogs.contentid) as specialpackage,
(select Royalties from datalogs.dbo.cont_master where contentid =
datalogs.dbo.translogs.contentid) as Royalties,
(select case servicename
when 'AKTEL' then 'Aktel'
when 'QATAR2900' then 'STAR MULTIMEDIA 2900'
when 'TELEMOVIL' then 'TeleMovil'
when 'COMCEL' THEN 'COMCEL'
when 'TSTTNEWS' then 'TSTT'
when 'TSTTWAP' then 'TSTT'
when 'TSTT_MMS' then 'TSTT'
when 'ALCLICKWIN6464' then 'Airtel'
when 'ALMMSPORTAL' then 'Airtel'
when 'ALMMSSMSDWN' then 'Airtel'
when 'ALMYALBUM646' then 'Airtel'
when 'HINDU6397' then
substring(remarks,1,PATINDEX('%.6397.%',remarks)-1)
else
(select OPname from datalogs.dbo.operator where phoneseries =
substring(datalogs.dbo.translogs.mobileno,1,len(phoneseries)))
end) as Operator,
(select case servicename
when 'AKTEL' then 'Bangladesh'
when 'QATAR2900' then 'STAR MULTIMEDIA 2900'
when 'TELEMOVIL' then 'El Salvador'
when 'COMCEL' THEN 'Gautemala'
when 'TSTTNEWS' then 'Trinidad'
when 'TSTTWAP' then 'Trinidad'
when 'TSTT_MMS' then 'Trinidad'
when 'HINDU6397' then substring(remarks,PATINDEX('%.6397.%',remarks) +
6,len(remarks)-PATINDEX('%-%',remarks))
else
(select Circlename from datalogs.dbo.operator where phoneseries =
substring(datalogs.dbo.translogs.mobileno,1,len(phoneseries)))
end) as Circle,
(select case servicename
when 'AKTEL' then 'Aktel'
when 'QATAR2900' then 'STAR MULTIMEDIA 2900'
when 'TELEMOVIL' then 'TeleMovil'
when 'COMCEL' THEN 'COMCEL'
when 'TSTTNEWS' then 'TSTT'
when 'TSTTWAP' then 'TSTT'
when 'TSTT_MMS' then 'TSTT MMS'
when 'ALCLICKWIN6464' then 'Airtel Click Win 646'
when 'ALMMSPORTAL' then 'Airtel MMS'
when 'ALMMSSMSDWN' then 'Airtel MMS SMS'
when 'ALMYALBUM646' then 'Airtel My Album'
when 'HINDU6397' then 'Hindu 6397'
else
(select OPname from datalogs.dbo.operator where phoneseries =
substring(datalogs.dbo.translogs.mobileno,1,len(phoneseries)))
end) as OPGPName
from datalogs.dbo.translogs where transactiondate >= @ydate and
transactiondate < @date and servicename in
('AIRTELMMS_SUB','ALMYALBUM646','HINDU6397','MTV', 'QATAR2900','SIFY'))
go
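(Not the poster's code, just a sketch of one direction to try for the query above: the repeated correlated look-ups against cont_master, contentprovider, cont_Catg and cont_subCatg can usually be collapsed into joins so each translogs row is matched once per table. This assumes contentid is unique in cont_master and cpname is unique in contentprovider; otherwise the joins would multiply rows. Only the look-up columns are shown - the CASE expressions from the original would slot into the same SELECT list unchanged.)

SELECT t.mobileno,
       cm.musiclabel,
       cp.cpid,
       cat.catname    AS category,
       sub.subcatname AS subcategory,
       cm.catid,
       cm.subcatid,
       cm.specialpackage,
       cm.Royalties
FROM datalogs.dbo.translogs t
LEFT JOIN datalogs.dbo.cont_master     cm  ON cm.contentid = t.contentid
LEFT JOIN datalogs.dbo.contentprovider cp  ON cp.cpname    = t.cpname
LEFT JOIN datalogs.dbo.cont_Catg       cat ON cat.catid    = cm.catid
LEFT JOIN datalogs.dbo.cont_subCatg    sub ON sub.subcatid = cm.subcatid
WHERE t.transactiondate >= @ydate
  AND t.transactiondate <  @date
  AND t.servicename IN ('AIRTELMMS_SUB','ALMYALBUM646','HINDU6397','MTV','QATAR2900','SIFY')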
Hi, I have a problem I would really appreciate help with. I am generating dynamic SQL and need to optimise it. The specific example I am trying to optimise looks like this:

SELECT DISTINCT DataHeaderID FROM TB_DataDetailText T1
WHERE (EntityFieldID IN (31) AND (Data LIKE '12BORE%'))
AND (DataHeaderID = (SELECT DISTINCT DataHeaderID FROM TB_DataDetailText CT2
     WHERE T1.DataHeaderID = CT2.DataHeaderID AND (EntityFieldID IN (34) AND (Data LIKE 'SIDE BY SIDE%'))))
AND (DataHeaderID = (SELECT DISTINCT DataHeaderID FROM TB_DataDetailText CCT3
     WHERE T1.DataHeaderID = CCT3.DataHeaderID AND ((Data LIKE 'church%'))))

I was OK optimising it with just 2 criteria and changed:

SELECT DISTINCT DataHeaderID FROM TB_DataDetailText T1
WHERE (EntityFieldID IN (31) AND (Data LIKE '12BORE%'))
AND (DataHeaderID = (SELECT DISTINCT DataHeaderID FROM TB_DataDetailText CT2
     WHERE T1.DataHeaderID = CT2.DataHeaderID AND ((Data LIKE 'church%'))))

which took 26 seconds, to a version using a derived table:

SELECT DISTINCT T1.DataHeaderID FROM TB_DataDetailText AS T1
INNER JOIN (SELECT DISTINCT DataHeaderID, Data FROM TB_DataDetailText) CT2
    ON T1.DataHeaderID = CT2.DataHeaderID
WHERE (T1.EntityFieldID IN (31) AND (T1.Data LIKE '12BORE%'))
AND ((CT2.Data LIKE 'church%'))

which took 0.03 seconds on the same data. My problem is I need to write code to generate the SQL for 1 to n criteria and am struggling to write the query for more than 2.

Best regards,
Andrew
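(A sketch of one way the derived-table idea can be generated for 1 to n criteria: emit one self-join per extra criterion, each restricted to its own EntityFieldID/LIKE pair. The aliases C2 and C3 are illustrative; the table and column names are the ones from the post. This is one pattern, not necessarily the fastest plan for every data distribution.)

SELECT DISTINCT T1.DataHeaderID
FROM TB_DataDetailText AS T1
INNER JOIN TB_DataDetailText AS C2
    ON  C2.DataHeaderID = T1.DataHeaderID
    AND C2.EntityFieldID IN (34)
    AND C2.Data LIKE 'SIDE BY SIDE%'
INNER JOIN TB_DataDetailText AS C3
    ON  C3.DataHeaderID = T1.DataHeaderID
    AND C3.Data LIKE 'church%'
WHERE T1.EntityFieldID IN (31)
  AND T1.Data LIKE '12BORE%'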
We are using a stored procedure which processes more than 11 million records. The time the stored procedure takes to execute is around 15 to 20 days, which is bad. We are not using any cursors, only DELETE, INSERT and UPDATE statements. There are also some complex WHERE clauses on the deletes and updates.
Our job is to fine-tune the SP. We run into problems like transaction log fill-ups, tempdb full, etc. You can imagine the problems when you look at the record count.
Indexes do not help.
Can anybody recommend ways to fine-tune the proc?
One more thing: we do cross-database updates, inserts and deletes (I mean 2 databases on the same server).
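(Not a fix for the specific proc, just a sketch of the batching pattern commonly used to keep the transaction log and tempdb under control for huge DELETE/UPDATE passes; the table name and condition below are hypothetical. DELETE TOP needs SQL Server 2005+; on SQL 2000 the same effect is usually achieved with SET ROWCOUNT.)

DECLARE @rows int
SET @rows = 1

WHILE @rows > 0
BEGIN
    -- Work in small transactions so the log can be truncated or backed up between batches
    DELETE TOP (50000) FROM dbo.BigTable       -- hypothetical table
    WHERE SomeComplexCondition = 1             -- hypothetical WHERE clause

    SET @rows = @@ROWCOUNT
END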
Hi, I'm looking for tips, advice, best practice etc. on optimising a DB with over 300,000 user records to be accessed rapidly via a web interface. Any help would be greatly appreciated - specifically I'm looking at the different methods of DB optimisation: indexing, clustering etc.
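(As a starting point only - a sketch with hypothetical table and column names, since the schema isn't shown: cluster on the key the rows are fetched by, and give the column the web interface searches on a nonclustered index that covers the handful of columns the page actually displays.)

CREATE TABLE dbo.Users (
    UserID    int IDENTITY(1,1) NOT NULL,
    UserName  nvarchar(100) NOT NULL,
    Email     nvarchar(200) NOT NULL,
    CreatedOn datetime NOT NULL,
    CONSTRAINT PK_Users PRIMARY KEY CLUSTERED (UserID)   -- clustered on the surrogate key
)

-- Nonclustered index for the common lookup path of the web UI
CREATE NONCLUSTERED INDEX IX_Users_UserName ON dbo.Users (UserName)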
Hi all, I have 20 SQL jobs that are scheduled to run anywhere from every 5 minutes to every hour. Does anyone know the best way to optimise these jobs to run? At the moment, once these jobs are running I cannot browse any tables in the DB; I get a lock request time-out exceeded error.
Do I need to stagger when the jobs run, or make one big job where they all run one after another?
Recently our company purchased a product from ip2location.com; a database containing 2.9million IP address ranges, and their approximate cities/countries of registration.
Naturally, I thought - "Hey, wouldn't it be great if we could cross reference this with our IIS logs so we could see where our visitors are from?".
So, I set about doing just that. Our IIS logs are already in SQL.
The trouble is, the ip2location database is so large that executing a query against it to find which range a particular IP address is within takes me 1 second. Multiply that by 1,000,000 log rows, and Houston - we have a problem.
One of the issues is that each record in the ip2location database comprises a FROM_IP and TO_IP range to describe a range of IPs. So to find which IP range a particular IP resides in, I have to join using a BETWEEN statement (or so, I think anyway!).
Does anyone have any suggestions on how to improve this process, or has anyone done anything similar before?
Ideally, I'd like to write a trigger to grab the IP region data (i.e. City/Country) and update the IISLog with that value when the new row is inserted, saving me having to do it later.
I tried this, and the batch import of IIS logs into SQL took so long I got bored and gave up :)
Any help anyone can offer would be appreciated.
Many thanks
Richard. P.S. Somebody is bound to ask - "Why couldn't you just use Google Analytics?"; my answer is because we want to slice up our log data into chunks, and give it to our customers in semi-real time. Plus the logs report on other services - not just HTTP. ;)
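(A sketch of one approach often used for range tables like this, assuming FROM_IP/TO_IP are stored as numbers and the ranges do not overlap; the country/city column names are illustrative. With an index leading on FROM_IP, the TOP 1 ... ORDER BY FROM_IP DESC form lets the engine seek straight to the single candidate range instead of scanning 2.9 million rows per lookup.)

CREATE INDEX IX_ip2location_FromIP ON dbo.ip2location (FROM_IP)

DECLARE @ipNumber bigint
SET @ipNumber = 3232235777           -- e.g. 192.168.1.1 converted to a number

SELECT TOP 1 CountryName, CityName   -- hypothetical column names
FROM dbo.ip2location
WHERE FROM_IP <= @ipNumber
  AND TO_IP   >= @ipNumber
ORDER BY FROM_IP DESC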
Hi. Maybe I'm just being dim, but I'm struggling to get my head around optimising a query with regard to indexes. If I make a select query, such as a pseudo-example 'select * from bigtable where foo='bar' and (barney>rubble and fred<flintoff)', and the table is indexed on 'foo', how could I make that any better? What indexes could I add, or what could I change in the query? I know it looks simple, but so am I.
Cheers
Chris Weston
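(A sketch only, using the pseudo-names from the question; whether it helps depends on how selective foo is and on what rubble/flintoff actually are. The idea is a composite index that leads on the equality column, so the range predicates are evaluated against a narrow slice of the index rather than the whole table.)

CREATE NONCLUSTERED INDEX IX_bigtable_foo_barney_fred
    ON bigtable (foo, barney, fred)

-- The optimizer can then seek to foo = 'bar' and apply the barney/fred tests
-- within that slice of the index instead of scanning the whole table.
SELECT *
FROM bigtable
WHERE foo = 'bar'
  AND barney > rubble       -- rubble/flintoff kept as in the pseudo-example
  AND fred   < flintoff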
I have been doing some development work in a large VB6 application. I have updated the search capabilities of the application to allow the user to search on partial addresses as the existing search routine only allowed you to search on the whole line of the address.
Simple change to the stored procedure (this is just an example not the real stored proc):
From: Select Top 3000 * from TL_ClientAddresses with(nolock) Where strPostCode = ‘W1 ABC’ To: Select Top 3000 * from TL_ClientAddresses with(nolock) Where strPostCode LIKE ‘W1%’
Now this is when things went a bit crazy. I know the implications of using ‘with(nolock)’. But seeing the code is only using the ID field to get the required row, and the database is a live database with hundreds of users at any one time (some updating), I think a dirty read is ok in this routine, as I don’t want SQL to create a shared lock.
Anyway my problem is this. After the change, the search now created a Shared Lock which sometimes locks out some of the live users updating the system. The Select is also extremely SLOW. It took about 5 minutes to search just over a million records (locking the database during the search, and giving my manager good reason to shout abuse at me). So I checked the indexes. I had an index set on:
So I created an index just for the strPostCode (non clustered).
This made no difference to the ‘Like select’ whatsoever. So I am now stuck.
1) Is there another way to search for part of a text field in SQL?
2) Does ‘Like’ comparison use the index in any way? If so, how do I set this index up?
3) Can I stop a ‘Shared Lock’ being created when I do a ‘like select’?
4) Do you have any good comebacks I could tell the boss after his next outburst of abuse (please not so bad that he sacks me)?
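(A sketch, not a guaranteed fix: a LIKE with no leading wildcard such as 'W1%' can use an index seek, but SELECT * then forces a lookup for every matching row. Since the routine only needs the ID field, narrowing the select list and covering it in the index may help; ID is assumed here to be the key column the code reads, and INCLUDE needs SQL Server 2005+ - on SQL 2000 the extra column would go in the key instead.)

CREATE NONCLUSTERED INDEX IX_ClientAddresses_PostCode
    ON TL_ClientAddresses (strPostCode)
    INCLUDE (ID)

SELECT TOP 3000 ID
FROM TL_ClientAddresses WITH (NOLOCK)
WHERE strPostCode LIKE 'W1%'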
I have an application that reads a monitoring devices that produces 200 digital outputs every second and I would like to store them in a table. This table would get quite big fairly quickly as ultimately I would like to monitor over a hundred of these devices.
I would like to construct queries against each of the individual digital channels or combinations of them.
My first thought is to set up a table with 200 separate columns (plus others for date stamp, device ID etc.). However, I am concerned that a table with 200 boolean (1-bit) fields would be an enormous waste of space if each field takes maybe one to four bytes on the hard disk to store a single bit. On the other hand, this would have the advantage of making the SQL queries more natural.
The other alternative is to create a single 200-bit field and use lots of ANDing and ORing to isolate bits in my queries. This would make my SQL code less readable and may also cause more hassle in the future if the inputs changed, but it would make the file size smaller.
In essence I am asking (hoping) the following : If I create a table with 200 boolean fields, does SQL server express automatically optimise the storage to make it more compact? This means that the server can mess around at the bit level and leave my higher level SQL code looking cleaner and more logical.
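(On the storage question: SQL Server does pack bit columns - up to 8 bit columns in a row share a single byte - so 200 bit columns cost roughly 25 bytes per row rather than 200+. A minimal sketch with illustrative names:)

CREATE TABLE dbo.DeviceReadings (
    ReadingID   bigint IDENTITY(1,1) NOT NULL PRIMARY KEY,
    DeviceID    int      NOT NULL,
    ReadingTime datetime NOT NULL,
    Channel001  bit NOT NULL,
    Channel002  bit NOT NULL,
    -- ... Channel003 to Channel199 declared the same way ...
    Channel200  bit NOT NULL
)

-- Queries stay readable, e.g.:
-- SELECT ReadingTime FROM dbo.DeviceReadings WHERE DeviceID = 42 AND Channel017 = 1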
I need to merge replicate data to two different types of subscribers:
1) Client subscribers, which will have a very small percentage of the data from the central database. The data on these machines will be managed using dynamic filtering on host_name().
2) Server subscribers, which will manage a copy of all the data from the central database.
There will be far fewer server subscribers than client subscribers.
As I see it I have two options for the configuration:
1) Use two separate merge publications - one which is filtered and one which isn't.
2) Use a single merge publication and set up the filtering so that the server subscribers receive all the rows.
Which option is likely to lead to better performance?
With option 1) there would be 2 complete sets of replication metadata which need to be maintained - so I am tending towards option 2. Are there any disadvantages in using a dynamic filter to return a very large number of rows?
-- Get the new Customer Identifier, return as OUTPUT param
SELECT @NoteID = @@IDENTITY

-- Insert new notes for all the users that the note pertains to, in this case
-- this will be by the assigned users.
IF @FK_UserIDList IS NOT NULL
    EXECUTE spInsertNotesByAssignedUsers @NoteID, @FK_UserIDList

-- Insert New Address record
-- Retrieve Address reference into @AddressId
-- EXEC spInsertForUserNote
--     @FK_UserID,
--     @NoteID,
--     @BeenRead,
--     @Fax,
--     @PKId,
--     @AddressId OUTPUT

COMMIT TRANSACTION

--------------------------------------------------
GO
OK, can someone tell me why I get two different answers for the same query (looking for the last day of the month for a given date)?
SELECT DATEADD(ms, -3, DATEADD(mm, DATEDIFF(m, 0, CAST('12/20/2006' AS datetime)) + 1, 0)) AS Expr1
FROM testsupplierSCNCR

I am getting the result of 01/01/2007.
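(For comparison, a variant that avoids the millisecond arithmetic; one possible reason for seeing 01/01/2007 is that the value 23:59:59.997 only survives in datetime - if it lands in a smalldatetime column or expression it rounds up to the next day. A sketch:)

DECLARE @d datetime
SET @d = CAST('12/20/2006' AS datetime)

-- First day of the next month, minus one day = last day of the given month
SELECT DATEADD(day, -1, DATEADD(month, DATEDIFF(month, 0, @d) + 1, 0)) AS LastDayOfMonth
-- returns 2006-12-31 00:00:00.000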
"Error: 8624, Severity: 16, State: 1 Internal Query Processor Error: The query processor could not produce a query plan. For more information, contact Customer Support Services."
I have traced this to an insert statement that executes as part of a stored procedure.
INSERT INTO ledger (journal__id, account__id,account_recv_info__id,amount)
There is also an auto-increment column called id. There are FK constraints on all of the columns ending in "__id". I have found that if I remove the constraint on account__id the procedure will execute without error. None of the other constraints seem to make a difference. Of course I don't want to remove this key because it is important to database integrity and should not be causing problems, but apparently it confuses the optimizer.
Also, the strange thing is that I can get the procedure to execute without error when I run it directly through management studio, but I receive the error when executing from .NET code or anything using ODBC (Access).
Hey, I've written a query to search a database dependent on variables chosen by the user. Opened up a new SqlDataSource, entered the query shown below and went on to the test query page. Entered some test variables, and everything works as it should. Try to get it to show in a datagrid on a webpage - nothing. No data shows.

SELECT dbo.DERIVATIVES.DERIVATIVE_ID, count(*) AS Matches
FROM dbo.MAKES
INNER JOIN dbo.MODELS ON dbo.MAKES.MAKE_ID = dbo.MODELS.MAKE_ID
INNER JOIN dbo.DERIVATIVES ON dbo.MODELS.MODEL_ID = dbo.DERIVATIVES.MODEL_ID
INNER JOIN dbo.[VALUES] ON dbo.DERIVATIVES.DERIVATIVE_ID = dbo.[VALUES].DERIVATIVE_ID
INNER JOIN dbo.ATTRIBUTES ON dbo.[VALUES].ATTRIBUTE_ID = dbo.ATTRIBUTES.ATTRIBUTE_ID
WHERE ((ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID1 and (@VAL1 is null or VALUE = @VAL1)) or
       (ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID2 and (@VAL2 is null or VALUE = @VAL2)) or
       (ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID3 and (@VAL3 is null or VALUE = @VAL3)) or
       (ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID4 and (@VAL4 is null or VALUE = @VAL4)))
GROUP BY dbo.DERIVATIVES.DERIVATIVE_ID
HAVING count(*) >= CASE WHEN @VAL1 IS NOT NULL THEN 1 ELSE 0 END +
                   CASE WHEN @VAL2 IS NOT NULL THEN 1 ELSE 0 END +
                   CASE WHEN @VAL3 IS NOT NULL THEN 1 ELSE 0 END +
                   CASE WHEN @VAL4 IS NOT NULL THEN 1 ELSE 0 END - 2
ORDER BY count(*) DESC
Here is the page source
<%@ Page Language="VB" MasterPageFile="~/MasterPage.master" Title="Untitled Page" %> <asp:Content ID="Content1" ContentPlaceHolderID="ContentPlaceHolder1" Runat="Server"> <asp:SqlDataSource ID="SqlDataSource1" runat="server" ConnectionString="<%$ ConnectionStrings:DevConnectionString1 %>" SelectCommand="	SELECT dbo.DERIVATIVES.DERIVATIVE_ID, count(*) AS Matches 	FROM dbo.MAKES INNER JOIN 				 dbo.MODELS ON dbo.MAKES.MAKE_ID = dbo.MODELS.MAKE_ID INNER JOIN 				 dbo.DERIVATIVES ON dbo.MODELS.MODEL_ID = dbo.DERIVATIVES.MODEL_ID INNER JOIN 				 dbo.[VALUES] ON dbo.DERIVATIVES.DERIVATIVE_ID = dbo.[VALUES].DERIVATIVE_ID INNER JOIN 				 dbo.ATTRIBUTES ON dbo.[VALUES].ATTRIBUTE_ID = dbo.ATTRIBUTES.ATTRIBUTE_ID 	WHERE ((ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID1 and (@VAL1 is null or VALUE = @VAL1)) or 		 (ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID2 and (@VAL2 is null or VALUE = @VAL2)) or 		 (ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID3 and (@VAL3 is null or VALUE = @VAL3)) or 		 (ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID4 and (@VAL4 is null or VALUE = @VAL4)) ) 	GROUP BY dbo.DERIVATIVES.DERIVATIVE_ID 	HAVING count(*) >= CASE WHEN @VAL1 IS NOT NULL THEN 1 ELSE 0 END + 									 CASE WHEN @VAL2 IS NOT NULL THEN 1 ELSE 0 END + 									 CASE WHEN @VAL3 IS NOT NULL THEN 1 ELSE 0 END + 									 CASE WHEN @VAL4 IS NOT NULL THEN 1 ELSE 0 END -2 	ORDER BY count(*) DESC "> <SelectParameters> <asp:ControlParameter ControlID="DropDownList1" Name="ATT_ID1" PropertyName="SelectedValue" /> <asp:ControlParameter ControlID="TextBox1" Name="VAL1" PropertyName="Text" /> <asp:Parameter Name="ATT_ID2" /> <asp:Parameter Name="VAL2" /> <asp:Parameter Name="ATT_ID3" /> <asp:Parameter Name="VAL3" /> <asp:Parameter Name="ATT_ID4" /> <asp:Parameter Name="VAL4" /> </SelectParameters> </asp:SqlDataSource> <asp:SqlDataSource ID="SqlDataSource2" runat="server" ConnectionString="<%$ ConnectionStrings:DevConnectionString1 %>" SelectCommand="SELECT * FROM [ATTRIBUTES]"></asp:SqlDataSource> <br /> <asp:DropDownList ID="DropDownList1" runat="server" DataSourceID="SqlDataSource2" DataTextField="ATTRIBUTE_NAME" DataValueField="ATTRIBUTE_ID"> </asp:DropDownList> <asp:TextBox ID="TextBox1" runat="server" AutoPostBack="True"></asp:TextBox><br /> <br /> <br /> <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False" DataKeyNames="DERIVATIVE_ID" DataSourceID="SqlDataSource1"> <Columns> <asp:BoundField DataField="DERIVATIVE_ID" HeaderText="DERIVATIVE_ID" InsertVisible="False" ReadOnly="True" SortExpression="DERIVATIVE_ID" /> <asp:BoundField DataField="Matches" HeaderText="Matches" ReadOnly="True" SortExpression="Matches" /> </Columns> </asp:GridView> </asp:Content> AFAIK I have configured the source to pick up the dropdownlist value and the textbox value (the text box is autopostback). Am i not submitting the data correctly? (It worked with a simple query...just not with this one). I have tried a stored procedure which works when testing just not when its live on a webpage. Please help!
(Visual Web Devleoper 2005 Express and SQL Server Management Studio Express)
However, as you can see, the original select query is run twice and joined together. What I was hoping for is for this to be done in the original query, without the need to duplicate the original query.
I'm trying to find the command to open up an ODBC connection inside SQL 2005 Express. I only have use of an ODBC connector; we're connecting to Remedy. We will eventually be using stored procedures to extract the data we need from Remedy and doing additional data crunching. I'm a FoxPro programmer, so once I get the correct syntax for making the ODBC connection I should be OK. Also, I need a really good advanced book on SQL 2005 - the type of book that would have my ODBC answer. I've spent all morning trying to find this information and was unable to.
Thanks in advance
Daniel Buchanan.
If this was the wrong forum to post this on, please move this question to the correct one. I need this answer soon.
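(One route that may fit, sketched here under the assumption that a system ODBC DSN for the Remedy source already exists; the DSN name, login and remote object name below are illustrative, not real values. The idea is a linked server over the OLE DB provider for ODBC, queried with OPENQUERY.)

EXEC sp_addlinkedserver
    @server     = 'REMEDY',
    @srvproduct = '',
    @provider   = 'MSDASQL',      -- OLE DB provider for ODBC
    @datasrc    = 'RemedyDSN'     -- hypothetical system DSN name

EXEC sp_addlinkedsrvlogin
    @rmtsrvname  = 'REMEDY',
    @useself     = 'false',
    @rmtuser     = 'remedy_user', -- hypothetical credentials
    @rmtpassword = 'remedy_pwd'

-- Pass-through query; the inner text is sent to Remedy as-is
SELECT * FROM OPENQUERY(REMEDY, 'SELECT * FROM some_remedy_form')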
We have an issue with an MDS server that has been with us for a couple of days. The original error message from the SQL Server engine is "The query processor could not produce a query plan", but the ones we get in the Excel add-in are "Sequence contains no elements" or "The value cannot be null".
• We have been using Microsoft SQL Server 2012 (SP1) - 11.0.3393.0 (X64) for 6 months on this server without issues
• Two weeks ago we started to have 2 errors: "Sequence Contains No Elements" | "The Value Cannot Be Null"
• We are using the last version of Excel Add-in
• We tried to reinstall the MDS feature
• If I backup/restore MDS database to other server it works
• We updated to SQL 2012 SP2 + CU4 but the error persisted ...
Looking at the MDSTraceLog we are routed to this message:
SQL Error Debug Info: Number: 8624, Message: Internal Query Processor Error: The query processor could not produce a query plan. For more information, contact Customer Support Services., Server: bbdvsql03inst01, Proc: udpMetadataEntityGetDetailsXML, Line: 28
At line 28 udpMetadataEntityGetDetailsXML is calling udfMetadataEntityGetDetailsXML … and here is where we stopped
** Error found when trying to get data from an entity using the Excel add-in **
===================================
Sequence contains no elements
------------------------------
Program Location:
   at Microsoft.MasterDataServices.AsyncEssentials.AsyncResultBase.EndInvoke()
   at Microsoft.MasterDataServices.ExcelAddInCore.AsyncProviderBase`1.EndOperation(IAsyncResult ar)
How do I get the variables in the cursor's SET statement to NOT update the temp table with the value of the variable? I want it to pull a date, not the column name stored in the variable...
create table #temptable (
    columname      varchar(150),
    columnheader   varchar(150),
    earliestdate   varchar(120),
    mostrecentdate varchar(120)
)

insert into #temptable
SELECT ColumnName, headername, '', ''
FROM eddsdbo.[ArtifactViewField]
WHERE ItemListType = 'DateTime' AND ArtifactTypeID = 10

--column name
declare @cname varchar(30)
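(If the column name only exists as a string in a variable, the statement that reads that column has to be built and run as dynamic SQL - otherwise the variable is treated as a literal. A sketch of the idea; dbo.SourceTable is a hypothetical stand-in for the table that holds the dates, since the post doesn't show the update step.)

DECLARE @cname varchar(150), @sql nvarchar(4000)

DECLARE c CURSOR FOR SELECT columname FROM #temptable
OPEN c
FETCH NEXT FROM c INTO @cname
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Build the statement so @cname is used as a column name, not as a string value
    SET @sql = N'UPDATE #temptable
                 SET earliestdate   = (SELECT CONVERT(varchar(120), MIN(' + QUOTENAME(@cname) + N')) FROM dbo.SourceTable),
                     mostrecentdate = (SELECT CONVERT(varchar(120), MAX(' + QUOTENAME(@cname) + N')) FROM dbo.SourceTable)
                 WHERE columname = ''' + @cname + N''''
    EXEC sp_executesql @sql

    FETCH NEXT FROM c INTO @cname
END
CLOSE c
DEALLOCATE c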
-- The 3rd query uses an incorrect column name in a sub-query and succeeds, but rows are incorrectly qualified. This is very DANGEROUS!!!
-- The issue exists in 2008 R2, 2012 and 2014 and is "By Design".
set nocount on
go
if object_id('tempdb.dbo.#t1') IS NOT NULL drop table #t1
if object_id('tempdb.dbo
[code]....
This succeeds when the invalid column name is a valid column name in the outer query. So in this situation the sub-query would fail when run by itself, but succeeds with an incorrectly applied filter when run as a sub-query. The danger is that if a SQL Server user runs DML in a production database with such a sub-query, the results are likely not the expected ones, with potentially unintended actions applied against the data. How many SQL Server users have had incorrectly applied DML or incorrect query results and don't even know it?
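(Since the original repro is truncated above, here is a small reconstruction of the behaviour being described - my own illustration, not the poster's script. #t2 deliberately has no column named id, yet the sub-query compiles because id binds to the outer table, so the DELETE qualifies every row.)

-- #t1 has a column id; #t2 does not.
create table #t1 (id int)
create table #t2 (other_id int)

insert #t1 values (1)
insert #t1 values (2)
insert #t2 values (2)

-- Intended: delete only the ids that appear in #t2.
-- Actual: "id" in the sub-query resolves to #t1.id, so the predicate is
-- effectively id = id and BOTH rows are deleted.
delete #t1 where id in (select id from #t2)

select count(*) as rows_left from #t1   -- returns 0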
For each customer, I want to add all of their telephone numbers to a different column. That is, multiple columns (depending on the number of telephone numbers) for each customer/row. How can I achieve that?
I want my output to be
CUSTOMER ID, FIRST NAME, LAST NAME, TEL1, TEL2, TEL3, ... etc
Each 'Tel' will relate to one or more records in the PHONES table that are linked back to the customer.
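(A sketch of the conditional-aggregation approach, assuming hypothetical CUSTOMERS/PHONES column names since the schema isn't shown, and SQL Server 2005+ for ROW_NUMBER; extend the MAX(CASE ...) list for as many Tel columns as you want to allow.)

;WITH NumberedPhones AS (
    SELECT p.CustomerID,
           p.PhoneNumber,
           ROW_NUMBER() OVER (PARTITION BY p.CustomerID ORDER BY p.PhoneID) AS rn
    FROM dbo.PHONES AS p
)
SELECT c.CustomerID,
       c.FirstName,
       c.LastName,
       MAX(CASE WHEN np.rn = 1 THEN np.PhoneNumber END) AS Tel1,
       MAX(CASE WHEN np.rn = 2 THEN np.PhoneNumber END) AS Tel2,
       MAX(CASE WHEN np.rn = 3 THEN np.PhoneNumber END) AS Tel3
FROM dbo.CUSTOMERS AS c
LEFT JOIN NumberedPhones AS np ON np.CustomerID = c.CustomerID
GROUP BY c.CustomerID, c.FirstName, c.LastName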
Do I need to nest a query in RS if I want a calculated column to be compared against a multi-value variable? It looks like coding WHERE calcd name IN (@variable) violates SQL syntax. My select looked like:
SELECT ... ,CASE enddate WHEN null then 1 else 0 END calcd name FROM... WHERE ... and calcd name in (@variable)
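(On the syntax problem: a column alias defined in the SELECT list can't be referenced in that statement's own WHERE clause, so one common workaround is to wrap the calculation in a derived table and filter outside it. The table name below is a hypothetical stand-in for the post's elided FROM clause; note also that CASE enddate WHEN null never matches, so the sketch tests IS NULL explicitly.)

SELECT t.*
FROM (
    SELECT o.*,
           CASE WHEN o.enddate IS NULL THEN 1 ELSE 0 END AS calcd_name
    FROM dbo.SomeTable AS o            -- hypothetical table
) AS t
WHERE t.calcd_name IN (@variable)      -- RS expands the multi-value parameter here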
When viewing an estimated query plan for a stored procedure with multiple query statements, two things stand out to me and I wanted to get confirmation if I'm correct.
1. Under <ParameterList><ColumnReference... does the xml attribute "ParameterCompiledValue" represent the value used when the query plan was generated?
2. Does each query statement that makes up the stored procedure have its own execution plan? Meaning, is the stored procedure made up of multiple query plans that could have been generated at a different time from one another?
FROM [Order Details] OD, Orders O, Products P, Categories C
WHERE OD.OrderID = O.OrderID
AND OD.ProductID = P.ProductID
AND P.CategoryID = C.CategoryID
AND C.CategoryName = @CategoryName
AND SUBSTRING(CONVERT(nvarchar(22), O.OrderDate, 111), 1, 4) = @OrdYear
GROUP BY ProductName
ORDER BY ProductName
From an ADO.NET 2.0 book, I copied the code of ConnectionPoolingForm to my VB 2005 Express. The following is part of the code:
Imports System.Collections.Generic
Imports System.ComponentModel
Imports System.Drawing
Imports System.Text
Imports System.Windows.Forms
Imports System.Data
Imports System.Data.SqlClient
Imports System.Data.Common
Imports System.Diagnostics
Public Class ConnectionPoolingForm
Dim _ProviderFactory As DbProviderFactory = SqlClientFactory.Instance
Public Sub New()
' This call is required by the Windows Form Designer.
InitializeComponent()
' Add any initialization after the InitializeComponent() call.
'Force app to be available for SqlClient perf counting
Using cn As New SqlConnection()
End Using
InitializeMinSize()
InitializePerfCounters()
End Sub
Sub InitializeMinSize()
Me.MinimumSize = Me.Size
End Sub
Dim _SelectedConnection As DbConnection = Nothing
Sub lstConnections_SelectedIndexChanged(ByVal sender As Object, ByVal e As EventArgs) Handles lstConnections.SelectedIndexChanged
End Sub

I executed the code successfully and I got a box which asked me to "Enter the query string". I typed in the following: EXEC dbo.SalesByCategory @Seafood. I got the following box: Query attempt failed. Must declare the scalar variable "@Seafood". I am learning how to enter the string for the SQL query programmed in subQuery_Click(ByVal sender As Object, ByVal e As EventArgs) Handles btnQuery.Click (see the code statements listed above). Please help and tell me what I missed and what I should put into the query string to get the information for the "Seafood" category out.
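(For reference, a parameter value is passed as a literal or named parameter, not as @Name; assuming the Northwind SalesByCategory procedure with its @CategoryName and @OrdYear parameters, the query string would look something like the sketch below.)

EXEC dbo.SalesByCategory @CategoryName = N'Seafood', @OrdYear = N'1998'
-- or positionally:
-- EXEC dbo.SalesByCategory N'Seafood', N'1998'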
I have two tables. In one (containing user data, let's call it u) the important fields are: u.userName, u.userID (uniqueidentifier) and u.workgroupID (uniqueidentifier). The second table (w) has fields w.delegateID (uniqueidentifier) and w.workgroupID (uniqueidentifier).

The SP takes the delegateID and I want to gather all the people from table u where any of the workgroupIDs for that delegate match in w. One delegateID may be tied to multiple workgroupIDs. I know I can create a temporary table (@wgs) and do a:

INSERT INTO @wgs SELECT workgroupID FROM w WHERE delegateID = @delegateID

That creates a result set with all the workgroupIDs; this may be one, none or multiple. I then want to get all u.userName, u.userID FROM u WHERE u.workgroupID matches. This query works on an individual workgroupID (using another temp table, @users, to aggregate the results was my thought, so that's included):

INSERT INTO @users
SELECT u.userName, u.userID
FROM tableU u
LEFT JOIN tableW w ON w.workgroupID = u.workgroupID
WHERE u.workgroupID = @workGroupID

I'm trying to avoid looping or using a CURSOR for the performance hit (had to kick the development server after one of the cursor attempts yesterday). Essentially what I'm after is:

SELECT u.userName, u.userID
FROM tableU u
LEFT JOIN tableW w ON w.workgroupID = u.workgroupID
WHERE u.workgroupID = (SELECT workgroupID FROM w WHERE delegateID = @delegateID)

... but that syntax does not work and I haven't found another work-around yet. TIA!
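(A sketch of the set-based form being described, with no temp tables or cursor, using tableU/tableW as in the post's sample: switch the = to IN, or join directly on the workgroup table, so a delegate tied to many workgroups still resolves in one statement.)

SELECT u.userName, u.userID
FROM tableU AS u
WHERE u.workgroupID IN (SELECT w.workgroupID
                        FROM tableW AS w
                        WHERE w.delegateID = @delegateID)

-- or, equivalently, as a join (DISTINCT guards against duplicate workgroup rows):
SELECT DISTINCT u.userName, u.userID
FROM tableU AS u
INNER JOIN tableW AS w ON w.workgroupID = u.workgroupID
WHERE w.delegateID = @delegateID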
When I run the following query from Query Analyzer in SQL Server 2005, I get a message back that says "Command(s) completed successfully." What I really need it to do is to display the results of the query. Does anyone know how to do this?

declare @SniierId as uniqueidentifier
set @SniierId = '85555560-AD5D-430C-9B97-FB0AC3C7DA1F'
declare @SniierAlias as nvarchar(50)
declare @AlwaysShowEditButton as bit
declare @SniierName as nvarchar(128)

/* Check access for Sniier */
SELECT TOP 1
    @SniierName = Sniiers.SniierName,
    @SniierAlias = Sniiers.SniierAlias,
    @AlwaysShowEditButton = Sniiers.AlwaysShowEditButton
FROM Sniiers
WHERE Sniiers.SniierId = @SniierId
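(Because the SELECT assigns into variables, nothing is returned to the client; adding a plain SELECT of those variables afterwards produces a result set to display. A minimal sketch, appended after the assignment query above:)

-- Output the variables so Query Analyzer has a result set to show
SELECT @SniierName           AS SniierName,
       @SniierAlias          AS SniierAlias,
       @AlwaysShowEditButton AS AlwaysShowEditButton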
I am trying to run queries against any of the user tables in my MS SQL 7.0 database. I get a message the Query Designer encountered a query error. We have tried rebooting the SQL Server and I am still getting these messages. Also, the SQL error logs look fine - all database maintenance are running successfully including the DBCCs which show no errors. Any help would be greatly appreciated as we are to go into production in a few days.
How to optimize the following Stored procedure running on MSSQL server 2000 sp4 :
CREATE PROCEDURE proc1
    @Franchise ObjectId,
    @dtmStart  DATETIME,
    @dtmEnd    DATETIME
AS
BEGIN

SET NOCOUNT ON

SELECT p.Product,
       c.Currency,
       c.Minor,
       a.ACDef,
       e.Event,
       t.Dec,
       count(1) "Count",
       sum(Amount) "Total"
FROM tb_Event t
JOIN tb_Prod  p ON ( t.ProdId  = p.ProdId )
JOIN tb_ACDef a ON ( t.ACDefId = a.ACDefId )
JOIN tb_Curr  c ON ( t.CurrId  = c.CurrId )
JOIN tb_Event e ON ( t.EventId = e.EventId )
JOIN tb_Setl  s ON ( s.BUId = t.BUId and s.SetlD = t.SetlD )
WHERE Fran = @Franchise
  AND t.CDate >= @dtmStart
  AND t.CDate <= @dtmEnd
  AND s.Status = 1
GROUP BY p.Product, c.Currency, c.Minor, a.ACDef, e.Event, t.Dec
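(A sketch only, to be validated against the actual execution plan: indexes that line up with the WHERE clause and with the join to tb_Setl are the usual first things to check for a query shaped like this. It assumes Fran is a column of tb_Event, as the unqualified WHERE clause suggests.)

CREATE INDEX IX_tbEvent_Fran_CDate
    ON tb_Event (Fran, CDate)          -- supports Fran = @Franchise plus the CDate range

CREATE INDEX IX_tbSetl_BUId_SetlD_Status
    ON tb_Setl (BUId, SetlD, Status)   -- supports the join to tb_Setl and Status = 1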
I am able to run a query which runs fast in QA but slow in the application. It takes about 16 ms in QA but 1000 ms in the application. What I wanted to know is why would the query take a long time in the application when it runs fast on SQL Server? How should we try debugging it?
Ajay