I have a query that is taking more than 5 minutes to run. Granted, it involves 7 tables, 4 of which have over 100,000 rows, but there must be a quicker way of executing this.
SELECT
ACP.COMPANY_NAME,
WOD.WO ,
WOH.SCHEDULED_DATE ,
WOH.JOB_ADDRESS_1,
WOH.JOB_ADDRESS_2,
WOH.CUSTOMER_CODE,
ARC.CUSTOMER_NAME,
ARC.BILL_TO_CUSTOMER_CODE,
APS.SUPPLIER_NAME,
APC.INVOICE_NUMBER as AP_INVOICE_NUMBER,
APC.INVOICE_DATE as AP_INVOICE_DATE,
APC.DATE_OF_RECORD as AP_DATE_OF_RECORD,
WOD.AMOUNT,
APC.CHEQUE_NUMBER,
WOH.INVOICE_NUMBER as AR_INVOICE_NUMBER,
ARI.DATE_OF_RECORD as AR_DATE_OF_RECORD
FROM
WO_WODDescription_tbl AS WOD
LEFT OUTER JOIN WO_Headers_tbl AS WOH ON WOD.COMPANY_CODE = WOH.COMPANY_CODE AND WOD.WO = WOH.WORK_ORDER_NUMBER
LEFT OUTER JOIN AP_CurrentDetails_tbl as APC ON WOD.COMPANY_CODE = APC.COMPANY_CODE AND WOD.DRILL_DOWN_NUMBER = APC.DRILL_DOWN AND WOD.AUDIT_NUMBER = APC.AUDIT_NUMBER
LEFT OUTER JOIN AR_CustomerMaster_tbl as ARC ON WOD.COMPANY_CODE = ARC.COMPANY_CODE AND WOH.CUSTOMER_CODE = ARC.CUSTOMER_CODE
LEFT OUTER JOIN AP_Suppliers_tbl as APS ON APC.COMPANY_CODE = APS.COMPANY AND APC.SUPPLIER_CODE = APS.SUPPLIER_CODE
LEFT OUTER JOIN ADM_CompanyProfile_tbl as ACP ON WOD.COMPANY_CODE = ACP.COMPANY_CODE
LEFT OUTER JOIN AR_InvoiceDetailCurrent_tbl as ARI ON WOD.COMPANY_CODE = ARI.COMPANY_CODE AND WOH.INVOICE_NUMBER = ARI.INVOICE_NUMBER
WHERE
(WOD.COMPANY_CODE = '01' OR WOD.COMPANY_CODE = '03')
AND APC.CHEQUE_NUMBER <> 'X%'
AND (APC.DATE_OF_RECORD < '20061101' AND ARI.DATE_OF_RECORD > '20061031')
ORDER BY WOD.COMPANY_CODE, WOD.WO
Can anyone give me any suggestions of how I could speed this up?
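While waiting for suggestions, here is the direction I have been poking at (a sketch against the same tables): the WHERE clause filters on APC and ARI columns, so the LEFT OUTER JOINs to those tables can never actually produce unmatched rows, and declaring them INNER may give the optimizer more freedom. I also suspect the CHEQUE_NUMBER test, which as written compares against the literal string 'X%', was meant to be a wildcard match:

SELECT
    ACP.COMPANY_NAME, WOD.WO, WOH.SCHEDULED_DATE, WOH.JOB_ADDRESS_1, WOH.JOB_ADDRESS_2,
    WOH.CUSTOMER_CODE, ARC.CUSTOMER_NAME, ARC.BILL_TO_CUSTOMER_CODE, APS.SUPPLIER_NAME,
    APC.INVOICE_NUMBER AS AP_INVOICE_NUMBER, APC.INVOICE_DATE AS AP_INVOICE_DATE,
    APC.DATE_OF_RECORD AS AP_DATE_OF_RECORD, WOD.AMOUNT, APC.CHEQUE_NUMBER,
    WOH.INVOICE_NUMBER AS AR_INVOICE_NUMBER, ARI.DATE_OF_RECORD AS AR_DATE_OF_RECORD
FROM WO_WODDescription_tbl AS WOD
INNER JOIN AP_CurrentDetails_tbl AS APC        -- the WHERE filters on APC already exclude unmatched rows
    ON WOD.COMPANY_CODE = APC.COMPANY_CODE
   AND WOD.DRILL_DOWN_NUMBER = APC.DRILL_DOWN
   AND WOD.AUDIT_NUMBER = APC.AUDIT_NUMBER
INNER JOIN WO_Headers_tbl AS WOH               -- must be non-null for the ARI join below to match
    ON WOD.COMPANY_CODE = WOH.COMPANY_CODE AND WOD.WO = WOH.WORK_ORDER_NUMBER
INNER JOIN AR_InvoiceDetailCurrent_tbl AS ARI  -- same reasoning as APC
    ON WOD.COMPANY_CODE = ARI.COMPANY_CODE AND WOH.INVOICE_NUMBER = ARI.INVOICE_NUMBER
LEFT OUTER JOIN AR_CustomerMaster_tbl AS ARC
    ON WOD.COMPANY_CODE = ARC.COMPANY_CODE AND WOH.CUSTOMER_CODE = ARC.CUSTOMER_CODE
LEFT OUTER JOIN AP_Suppliers_tbl AS APS
    ON APC.COMPANY_CODE = APS.COMPANY AND APC.SUPPLIER_CODE = APS.SUPPLIER_CODE
LEFT OUTER JOIN ADM_CompanyProfile_tbl AS ACP
    ON WOD.COMPANY_CODE = ACP.COMPANY_CODE
WHERE WOD.COMPANY_CODE IN ('01', '03')
  AND APC.CHEQUE_NUMBER NOT LIKE 'X%'          -- assuming a wildcard match was intended; <> 'X%' compares the literal
  AND APC.DATE_OF_RECORD < '20061101'
  AND ARI.DATE_OF_RECORD > '20061031'
ORDER BY WOD.COMPANY_CODE, WOD.WO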
Also, I have noticed that sqlservr.exe uses more than 1.5GB of the machine's 2GB while doing conversions from flat files to the database, even though the CPU is under 3% load. Is this behaviour typical of MSSQL 2005?
SELECT
    procs.name AS ProcName,
    params.name AS ParameterName,
    types.name AS ParamType,
    params.max_length,
    params.precision,
    params.scale,
    params.is_output,
    params.has_default_value
FROM sys.procedures procs
LEFT OUTER JOIN sys.all_parameters params ON procs.object_id = params.object_id
LEFT OUTER JOIN sys.types types ON params.system_type_id = types.system_type_id AND params.user_type_id = types.user_type_id
WHERE procs.is_ms_shipped = 0
  AND params.name = '@DISPOSAL_AREA_NAME'
  AND procs.name = 'webservices_BENEFICIAL_USES_DM_SELECT'
ORDER BY ProcName, params.parameter_id
Now, all I need from it is the column params.is_output.
I have modified it down to what I need, but I'm wondering if I can remove some of the joins or anything else for better performance without losing the proper results:
SELECT params.is_output
FROM sys.procedures procs
LEFT OUTER JOIN sys.all_parameters params ON procs.object_id = params.object_id
LEFT OUTER JOIN sys.types types ON params.system_type_id = types.system_type_id AND params.user_type_id = types.user_type_id
WHERE procs.is_ms_shipped = 0
  AND params.name = '@DISPOSAL_AREA_NAME'
  AND procs.name = 'webservices_BENEFICIAL_USES_DM_SELECT'
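One further trim that looks safe (a sketch over the same catalog views): the WHERE filter on params.name already turns the LEFT JOINs into inner joins, and sys.types contributes nothing once only params.is_output is selected, so both can be tidied away:

SELECT params.is_output
FROM sys.procedures AS procs
INNER JOIN sys.all_parameters AS params
    ON procs.object_id = params.object_id
WHERE procs.is_ms_shipped = 0
  AND params.name = '@DISPOSAL_AREA_NAME'
  AND procs.name = 'webservices_BENEFICIAL_USES_DM_SELECT'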
Hello :-) My question is: if I query a partitioned view but don't know the values in the "where x in (<expression>)" clause, i.e.:

select * from viewA where intVal in (select intVal from tbl1)

compared to:

select * from viewA where intVal in (5, 6)

Of course "intVal" is the partitioning column. Will this result in an optimized query that searches only the relevant tables?
We have a situation where queries against a partitioned view ignore a suitable index and perform a table scan (against 200+MB of data), where the same query on the underlying table(s) results in a 4 page index seek. I can't find any mention of the situation, so I'm trying a post here.
We're running SQL Server 2005 Enterprise Edition SP2 on Windows 2003 Enterprise Edition SP1 on a two-node cluster, and it also occurs on a stand-alone development box with Developer Edition. We have four tables, named Options#0, Options#1, Options#2, and Options#3. All are almost identical (script generated by SSMS and edited down a bit):
SET ANSI_NULLS OFF
SET QUOTED_IDENTIFIER ON

CREATE TABLE [dbo].[Options#0](
    [ControlID] [tinyint] NOT NULL CONSTRAINT [DF_Options#0__ControlID] DEFAULT ((0)),
    [ModelCode] [char](8) NOT NULL,
    [EquipmentID] [int] NOT NULL,
    [AdjustmentContextID] [int] NOT NULL,
    [EquipmentCode] [char](2) NOT NULL,
    [EquipmentTypeCode] [char](1) NOT NULL,
    [Description] [varchar](50) NOT NULL,
    [DisplayOrder] [smallint] NOT NULL,
    [IsStandard] [bit] NOT NULL,
    [Priority] [tinyint] NOT NULL,
    [Status] [bit] NOT NULL,
    [Adjustment] [int] NOT NULL,
    CONSTRAINT [PK_Options#0] PRIMARY KEY CLUSTERED (
        [ModelCode] ASC, [EquipmentID] ASC, [AdjustmentContextID] ASC, [ControlID] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
ALTER TABLE [dbo].[Options#0] WITH CHECK ADD CONSTRAINT [CK_Options#0__ControlID] CHECK (([ControlID]=(0)))
ALTER TABLE [dbo].[Options#0] CHECK CONSTRAINT [CK_Options#0__ControlID]
The only differences between the tables are in the names and in the value defaulted to and CHECKed, which matches the table name (to support the partitioned view, of course).
We receive and load data every week and every two months, and use an unusual scheme to load the data and manage its availability by running an ALTER on the view (to maintain the access rights defined for the hosting environment). Scripted out via SSMS, the view looks like:
SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON

CREATE VIEW [dbo].[Options] AS
select * from Options#1
union all
select * from Options#3
The problem is that when we issue a query like
SELECT count(*) from Options where ControlID = 1 and ModelCode = '2004NIC9'
The resulting query (as checked via the query plan and SET STATISTICS IO ON) will get "partitioned", running against the proper table, but it will ignore the index, perform a table scan, and churn through 200+MB of data. A similar query run against the underlying table
SELECT count(*) from Options#1 where ControlID = 1 and ModelCode = '2004NIC9'
(with or without the ControlID = 1 clause) will perform a Clustered Index Seek and read maybe 4 pages.
Analyzing the execution plan shows that the query against the table works like you'd think, but for the query against the view we get a Clustered Index Scan, with predicate:

[DBName].[dbo].[Options#1].[ControlID]=(1) AND CONVERT_IMPLICIT(char(8),[DBName].[dbo].[Options#1].[ModelCode],0)='2004NIC9'
I get the same results when explicitly listing all columns in the view. The code page on the view and tables is the same (as determined by checking properties via SSMS).
Why is the table's data column being implicitly converted to the data type that it already is? Why does this occur when working with the partitioned view but not with the actual table? Can this behavior be controlled or modified without losing the (incredibly useful) data loading management benefits of the partitioned view? I'm guessing (and hoping) it's some subtle quirk or mis-setting; please set me on the right path!
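In case it helps anyone reproduce or diagnose: a quick way to compare the column type the view exposes against the underlying table's (a sketch; nothing assumed beyond the object names above):

SELECT o.name AS object_name,
       c.name AS column_name,
       t.name AS type_name,
       c.max_length,
       c.collation_name
FROM sys.columns AS c
JOIN sys.objects AS o ON o.object_id = c.object_id
JOIN sys.types   AS t ON t.user_type_id = c.user_type_id
WHERE o.name IN ('Options', 'Options#1')
  AND c.name = 'ModelCode'
-- if the type, length, or collation differs between the view and the table,
-- the UNION ALL is widening the column and forcing the CONVERT_IMPLICIT in the plan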
Can the following stored procedure be optimized somehow?

IF @groupID = '812846'
BEGIN
    IF (SELECT count(*) FROM Employee WHERE SSN = @SSN AND groupID = @groupID) > 0
    BEGIN
        UPDATE Employee
        SET NameLast = @LastName, NameFirst = @FirstName, NameMiddle = @MI
        WHERE SSN = @SSN AND GroupId = @GroupId

        SELECT @EmpId = EmpId FROM Employee WHERE SSN = @SSN AND groupID = @groupID
    END
    ELSE
    BEGIN
        INSERT INTO Employee (GroupId, NameLast, NameFirst, NameMiddle, SSN)
        VALUES (@GroupId, @LastName, @FirstName, @MI, @SSN)
        SELECT @EmpId = @@IDENTITY
    END
END
ELSE
BEGIN
    INSERT INTO Employee (GroupId, NameLast, NameFirst, NameMiddle, SSN)
    VALUES (@GroupId, @LastName, @FirstName, @MI, @SSN)
    SELECT @EmpId = @@IDENTITY
END
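One common rewrite (a sketch against the same Employee table): skip the COUNT(*) pre-check by attempting the UPDATE first and testing @@ROWCOUNT, and prefer SCOPE_IDENTITY() over @@IDENTITY so a trigger on Employee can't hand back the wrong identity value:

IF @groupID = '812846'
BEGIN
    UPDATE Employee
    SET NameLast = @LastName, NameFirst = @FirstName, NameMiddle = @MI
    WHERE SSN = @SSN AND GroupId = @GroupId

    IF @@ROWCOUNT > 0
        SELECT @EmpId = EmpId FROM Employee WHERE SSN = @SSN AND GroupId = @GroupId
    ELSE
    BEGIN
        INSERT INTO Employee (GroupId, NameLast, NameFirst, NameMiddle, SSN)
        VALUES (@GroupId, @LastName, @FirstName, @MI, @SSN)
        SELECT @EmpId = SCOPE_IDENTITY()
    END
END
ELSE
BEGIN
    INSERT INTO Employee (GroupId, NameLast, NameFirst, NameMiddle, SSN)
    VALUES (@GroupId, @LastName, @FirstName, @MI, @SSN)
    SELECT @EmpId = SCOPE_IDENTITY()
END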
SELECT phase, stat, subject,
       CASE WHEN phase = 'Initial/Data Collection' THEN '1'
            WHEN phase = 'Screening' THEN '2'
            WHEN phase = 'Assessment and Selection' THEN '3'
            WHEN phase = 'Placement' THEN '4'
       END AS PhaseSort
FROM (SELECT subject, stat,
             CASE WHEN stat = 'Application Received' THEN 'Initial/Data Collection'
                  WHEN stat IN ('Shortlisted', 'For Screening') THEN 'Screening'
                  WHEN stat IN ('For Assessment', 'Passed Initial Evaluation', 'Passed Profiles Exam', 'Passed Technical Exam') THEN 'Assessment and Selection'
                  WHEN stat = 'For Placement' THEN 'Placement'
             END AS phase
      FROM (SELECT subject,
                   CASE WHEN subject IN ('Process Application', 'Application Received') THEN 'Application Received'
                        WHEN subject = 'Screen Application' THEN 'For Screening'
                        WHEN subject = 'Phone interview' THEN 'Shortlisted'
                        WHEN subject = 'Initial Interview' THEN 'For Assessment'
                        WHEN subject = 'Profiles assessment' THEN 'Passed Initial Evaluation'
                        WHEN subject = 'Technical Exam and Interview' THEN 'Passed Profiles exam'
                        WHEN subject = 'background and reference check' THEN 'Passed Technical Exam'
                        WHEN subject IN ('Job Offer', 'Contract Signing') THEN 'For Placement'
                   END AS stat
            FROM dbo.filteredtask
            WHERE subject IN ('application received', 'process application', 'screen application',
                              'initial interview', 'profiles assessment', 'technical exam and interview',
                              'background and reference check', 'phone interview', 'shortlisted',
                              'For Placement', 'job offer', 'contract signing')
           ) Phases
     ) stats
ORDER BY phasesort
For example, populating the SiteName and SLAClass fields with a separate SELECT statement each time may bog down the system.
Also, I'd like to feed the CustID and Subject fields from another table called Profile instead of typing the CustID value each time.
The purpose of this statement is to search for customers in the subject line and, if a customer is found, add the customer's information to the Detail table. The Profile table contains all customer information.
UPDATE [TEST3].[dbo].[Detail]
SET [CustID] = 'Book Fairs', /*fill in with field from the Profile table automatically*/
    [SiteName] = (SELECT DISTINCT [Profile].[SiteName]
                  FROM [TEST3].[dbo].[Profile], [TEST3].[dbo].[Detail]
                  WHERE [Profile].[CustID] = [Detail].[CustID]),
    [SLAClass] = (SELECT DISTINCT [Profile].[SLAClass]
                  FROM [TEST3].[dbo].[Profile], [TEST3].[dbo].[Detail]
                  WHERE [Profile].[CustID] = [Detail].[CustID])
WHERE [Detail].[CallID] IN (SELECT [CallLog].[CallID]
                            FROM [TEST3].[dbo].[CallLog], [TEST3].[dbo].[Subset], [TEST3].[dbo].[Asgnmnt]
                            WHERE [CallLog].[CallType] = 'DREAM'
                              AND [CallLog].[Subject] LIKE '%Book Fairs%') /*fill in with field from the Profile table automatically*/
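A possible rewrite (a sketch; it assumes CustID on Detail is already populated, which is what the original subqueries rely on): UPDATE ... FROM with a join correlates Profile to the exact row being updated (the original subqueries re-join Detail to itself, so DISTINCT can still return more than one row) and reads Profile once instead of once per column. The cross joins to Subset and Asgnmnt in the original IN-subquery don't filter anything, so they're dropped here:

UPDATE d
SET d.SiteName = p.SiteName,
    d.SLAClass = p.SLAClass
FROM [TEST3].[dbo].[Detail] AS d
INNER JOIN [TEST3].[dbo].[Profile] AS p
        ON p.CustID = d.CustID          -- correlate to the row being updated
WHERE d.CallID IN (SELECT c.CallID
                   FROM [TEST3].[dbo].[CallLog] AS c
                   WHERE c.CallType = 'DREAM'
                     AND c.Subject LIKE '%Book Fairs%')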
I have two tables in a SQL 6.5 database with identical fields and indexes. One contains the data of August 2003 and the other July 2003. The August table is larger (about 40,000 more rows) than the July table, but I've noticed that the same queries perform much faster on the August table than on the July table. I've tried this with many different queries, so I'm wondering what the reason behind this is. Is there a way to optimize a table? Remember, I'm using SQL 6.5. Thanks
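If stale distribution statistics are the culprit (a guess, but both commands do exist back in 6.5 as far as I know), rebuilding them on the slow table would be the first thing to try:

UPDATE STATISTICS july_sales      -- hypothetical name for the July table
DBCC DBREINDEX ('july_sales')     -- rebuilds the table's indexes and their statistics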
Hi, I created a page that lists the total hours, lunch time, and expenses for the employees of the company. I am trying to optimize this stored procedure, but it still takes more than 40 seconds for 50 employees.

select @StartDate As DateLigne, TPerson.Name, TPerson.idperson,
       (select sum(coalesce(hours,0) - coalesce(lunch,0))
        FROM Thereport
        WHERE etridperson = TPerson.idperson
          AND etridproject = TUserProject.etridproject
          AND DateDIFF(day, @StartDate, datereport) >= 0
          AND DateDIFF(day, datereport, @endDate) >= 0) As hours,
       (select sum(coalesce(nonbillable,0))
        FROM Thereport
        WHERE etridperson = TPerson.idperson
          AND etridproject = TUserProject.etridproject
          AND DateDIFF(day, @StartDate, datereport) >= 0
          AND DateDIFF(day, datereport, @endDate) >= 0) As nonbillable,
       (select sum((coalesce(miles,0)*@mil) + coalesce(perdiem,0) + coalesce(supplies,0) + coalesce(airfare,0) + coalesce(gas,0) + coalesce(autorental,0) + coalesce(other,0))
        FROM Thereport
        WHERE etridperson = TPerson.idperson
          AND etridproject = TUserProject.etridproject
          AND DateDIFF(day, @StartDate, datereport) >= 0
          AND DateDIFF(day, datereport, @endDate) >= 0) As Expenses
FROM TUserProject, TPerson
WHERE TUserProject.etridperson = TPerson.idperson
  AND etridproject = 89
Do you have any idea of how I could optimize this stored procedure?
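A single-pass version worth trying (a sketch; same tables and columns as above): join to Thereport once and aggregate with GROUP BY, and compare datereport directly against the boundaries instead of wrapping it in DATEDIFF, so an index on datereport stays usable:

SELECT @StartDate AS DateLigne,
       p.Name,
       p.idperson,
       SUM(COALESCE(r.hours,0) - COALESCE(r.lunch,0)) AS hours,
       SUM(COALESCE(r.nonbillable,0)) AS nonbillable,
       SUM((COALESCE(r.miles,0) * @mil) + COALESCE(r.perdiem,0) + COALESCE(r.supplies,0)
           + COALESCE(r.airfare,0) + COALESCE(r.gas,0) + COALESCE(r.autorental,0)
           + COALESCE(r.other,0)) AS Expenses
FROM TUserProject AS up
INNER JOIN TPerson AS p ON up.etridperson = p.idperson
LEFT JOIN Thereport AS r
       ON r.etridperson = p.idperson
      AND r.etridproject = up.etridproject
      AND r.datereport >= @StartDate   -- sargable equivalents of the DATEDIFF tests;
      AND r.datereport <= @endDate     -- assumes datereport carries no time-of-day component
WHERE up.etridproject = 89
GROUP BY p.Name, p.idperson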
Hi, I am used to writing correlated sub-queries inside my main queries. They work fine, but I have read a lot that they carry performance hits, and as our data has grown, a simple SELECT statement with a few sub-queries now tends to run slower, sometimes 10-15 seconds. The following is a simple example of what I mostly do:

SELECT DISTINCT C.CusID, C.Name, C.Age,
       (SELECT SUM(Price) FROM CustomerOrder WHERE CusID_fk = CO.CusID_fk) Total_Order_Price,
       (SELECT SUM(Concession) FROM CustomerOrder WHERE CusID_fk = CO.CusID_fk) Total_Order_Concession,
       (SELECT SUM(Price) - SUM(Concession) FROM CustomerOrder WHERE CusID_fk = CO.CusID_fk) Total_Difference
FROM Customer C
INNER JOIN CustomerOrder CO ON C.CusID = CO.CusID_fk
......
WHERE (conditions...)

My question is: what would be a better way to handle the above query? How can I write a simpler query with optimized performance? I would also mention that in some of my asp.net applications I use inline queries assigned to a SqlCommand object; since these queries live in class files, how would we still accomplish what I have mentioned above? Kindly, could any query guru guide me toward writing better queries? I shall be obliged...
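A common reshaping of this pattern (a sketch against the same example Customer/CustomerOrder tables): aggregate the order table once in a derived table and join to it, instead of re-scanning it in three correlated sub-queries:

SELECT C.CusID, C.Name, C.Age,
       T.Total_Order_Price,
       T.Total_Order_Concession,
       T.Total_Order_Price - T.Total_Order_Concession AS Total_Difference
FROM Customer AS C
INNER JOIN (SELECT CusID_fk,
                   SUM(Price)      AS Total_Order_Price,
                   SUM(Concession) AS Total_Order_Concession
            FROM CustomerOrder
            GROUP BY CusID_fk) AS T
        ON T.CusID_fk = C.CusID
-- the DISTINCT is no longer needed: the GROUP BY yields one row per customer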
I have a question concerning where to put certain database files for the following RAID configurations. The server has 2 RAID configs: 2 HDs in a RAID 1 and 4 HDs in a RAID 10. The server will host 4 database instances: a replicated db, a Reporting Services db (which technically constitutes 2 db instances), and an application db. In order to get the best performance, should I put the OS, SQL binaries, and log files on the RAID 1 config, with the data and tempdb on the RAID 10? If not, please explain the best solution. Thank you!
I'm just beginning to experiment with memory optimised tables.
I have two sets of near identical tables - one set normal, the other set memory optimised with DURABILITY=SCHEMA_ONLY - and am running test queries against these. When I say that the two sets are "near identical", I mean that they are the same except for the primary keys: for the normal tables these are defined as PRIMARY KEY CLUSTERED, whereas for the memory-optimised ones they are defined as PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT=nnnn), as per the requirements for such tables.
I then run a pair of test queries, again identical but one referencing the normal tables and the other referencing the memory optimised ones.
(The query uses an inner join on three tables with row counts of approx 3m rows, 100000 rows and 5000 rows.)
The query against the normal tables runs noticeably faster than the one against the memory-optimised tables. To try to find out why, I examined the execution plans. The plan for the memory-optimised query suggests that I have a missing index: but of course I can't create this against a memory-optimised table. Is this a bug, or am I missing something? Why should the performance of the two differ so much?
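For what it's worth, hash indexes only help point lookups; if the join predicates are range-like or touch only part of the key, a range (NONCLUSTERED) index may be what the plan is asking for. On SQL Server 2014 indexes can't be added to a memory-optimised table after the fact, so the table would need recreating, roughly like this (a hypothetical table; names and bucket count are placeholders, not my actual schema):

CREATE TABLE dbo.Orders_mem (
    OrderID   INT          NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 4194304),  -- point lookups
    CustID    INT          NOT NULL
        INDEX IX_CustID NONCLUSTERED,   -- range index to cover the join the plan wants
    OrderDate DATETIME2(7) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);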
We are planning to upgrade. We are using SQL 2008 R2 now. Which is the better option: migrating to SQL 2012 or migrating to 2014? I am thinking 2014 has memory-optimized tables and an updatable columnstore index, so it is the better option.
CREATE TABLE [Sales].[Test_inmem] (
    [c1] [int] NOT NULL,
    [c2] [nvarchar](20) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [ModifiedDate] [datetime2](7) NOT NULL CONSTRAINT [IMDF_Test_ModifiedDate] DEFAULT (sysdatetime()),
[Code] ....
I have to generate 1,000,000 random records into it. I tried various ways to insert records but, not being a developer, could not get it working. I want c1 to be a serial number, c2 can be anything, and the third column (ModifiedDate) should be the timestamp.
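A sketch that may do it (it only covers the three columns shown; the elided part of the table definition is unknown): use a row-number CTE over a big cross join as a number generator:

;WITH N AS (
    SELECT TOP (1000000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
    FROM sys.all_objects AS a
    CROSS JOIN sys.all_objects AS b    -- cross join only to get >= 1,000,000 rows
)
INSERT INTO [Sales].[Test_inmem] (c1, c2, ModifiedDate)
SELECT n,                                  -- c1: serial number
       N'row ' + CAST(n AS NVARCHAR(10)),  -- c2: arbitrary text
       SYSDATETIME()                       -- ModifiedDate: current timestamp
FROM N;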
How do I find the total allocated space and used space of a memory-optimized filegroup?
use memory_optimized_db
GO
select (SUM(size)*8.0)/1024.0 as Space,
       FILEGROUP_NAME(data_space_id),
       type_desc
from sys.database_files
group by data_space_id, type_desc;
The query above gives the "current used size of the container" of the memory-optimized filegroup, but doesn't give the total space detail.
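One avenue that may give both numbers (a sketch for SQL 2014; the column names of this DMV changed between versions, so verify them against your build):

SELECT file_type_desc,
       SUM(file_size_in_bytes)      / 1048576.0 AS allocated_mb,
       SUM(file_size_used_in_bytes) / 1048576.0 AS used_mb
FROM sys.dm_db_xtp_checkpoint_files
GROUP BY file_type_desc;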
I've been having some trouble getting a single-column "varchar(5)" field to reliably use a table seek instead of a table scan. The production table in this case contains 25 million rows. As impressive as it is to scan 25 million rows in 35 seconds, the query should run much faster.
Typically, this table is accessed with a query that includes:
SELECT ... FROM SummaryTable WHERE ixZIP IN (SELECT ZipCode FROM @ZipCodesForMO)
This query insists on using a table scan. I've tried WITH (FORCESEEK) for example, but that just makes the query fail.
As I've investigated this issue I also tried:
SELECT * FROM Summaries WHERE ZipCode IN ('xxxxx', 'xxxxx', 'xxxxx')
When I run this query with 64 or fewer (actual, valid) ZIP codes, the query uses a table seek. But when I give it 65 or more ZIP codes, it uses a table scan.
To summarize, the production query always uses a table scan, and when I specify 65 or more ZIP codes that query also uses a table scan. I'm wondering if the collation of the indexed column (Latin1_General_100_BIN2) is somehow the problem. I'll likely try converting the ZIP codes to an integer to see what happens.
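The workaround I'm planning to test first (a sketch; object names from the query above, with the collation matched to the indexed column so no implicit conversion creeps in): materialise the ZIP list into an indexed temp table and join to it, rather than using IN against the table variable:

CREATE TABLE #zips (
    ZipCode VARCHAR(5) COLLATE Latin1_General_100_BIN2 NOT NULL PRIMARY KEY
);
INSERT INTO #zips (ZipCode)
SELECT DISTINCT ZipCode FROM @ZipCodesForMO;

SELECT s.*
FROM SummaryTable AS s
INNER JOIN #zips AS z
        ON z.ZipCode = s.ixZIP;   -- equality join on matching type/collation can seek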
- An MSSQL 2014 Standard server that houses multiple small databases (in excess of a hundred).
- These databases are frequently dropped and restored by an application that uses this SQL Server.
- There is a business need for this setup at this time, so I can't get away from it. Therefore answers like "don't have so many small databases that are frequently dropped and restored" would be somewhat unhelpful.
This is the problem I have:
- When I connect SSMS 2014 to the server and expand the "Databases" node, it takes forever to display. In comparison, SSMS 2008 connected to SQL 2008R2 server with the same number of databases displays the Databases tree very quickly.
I ran a trace to see what exactly SSMS 2014 is doing. When the "Databases" node is expanded, it runs a query that checks each database for Memory-Optimized Tables (new and wonderful feature of SQL 2014 for sure, but I'm not using it, at least yet). Naturally, when you have to loop through over a hundred DBs, it takes time. Worse yet, if one of these DBs is in process of being restored, the query sits and waits to time out before proceeding to the next DB. Sometimes this causes outright timeouts. Here is the query:
use [MyDatabase]
SELECT ISNULL((select top 1 1 from sys.filegroups FG where FG.[type] = 'FX'), 0) AS [HasMemoryOptimizedObjects]
To be sure, this is NOT a SQL Server performance issue. This server processes a rather heavy workload and has been doing so for over a month, and the workload completes within expected time limits or better. Even so I've done some basic performance measuring, and the server itself is quite all right.
Moreover, if I connect SSMS 2008 to it, I get an error message (Index out of bounds or somesuch), but SSMS 2008 does connect, and displays the Databases tree much faster than SSMS 2014.
I'd like to turn off the option to check for Memory Optimized Objects altogether, as I'm not using the feature.
I am trying to load data into a memory-optimised table (INSERT INTO ... SELECT ... FROM ...). The source table is about 1 GB in size and has 13 million rows. During this load the LDF file grows to 350 GB, until the disk runs out of space. The server has about 110 GB of memory reserved for SQL Server. tempdb doesn't grow. The bucket count in the CREATE statement is 262144. The hash key has 4 fields (2 fields have datatype int, 1 smallint, 1 varchar(200)). The disk for the data files (incl. the Hekaton files) still has space.
How can I reduce the size of the LDF file during the load of the data?
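A pattern that may keep the log in check (a sketch; the table and column names are hypothetical since the real DDL isn't shown, and it assumes the database is in SIMPLE recovery): load in batches so each transaction stays small and the log can clear between them:

DECLARE @batch INT = 500000, @rows INT = 1;
WHILE @rows > 0
BEGIN
    INSERT INTO dbo.Target_mem (Id, Val)        -- hypothetical memory-optimised target
    SELECT TOP (@batch) s.Id, s.Val
    FROM dbo.SourceTable AS s                   -- hypothetical 13M-row source
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Target_mem AS t WHERE t.Id = s.Id);
    SET @rows = @@ROWCOUNT;
    CHECKPOINT;                                 -- lets the log truncate between batches
END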
I've a database with a memory-optimized filegroup on it. How can I remove it? I have removed the memory-optimized table I had on it, but when I try to remove the filegroup I receive an error.
-- Get the new Customer Identifier, return as OUTPUT param
SELECT @NoteID = @@IDENTITY

-- Insert new notes for all the users that the note pertains to, in this case this will be by the assigned users.
IF @FK_UserIDList IS NOT NULL
    EXECUTE spInsertNotesByAssignedUsers @NoteID, @FK_UserIDList

-- Insert New Address record
-- Retrieve Address reference into @AddressId
-- EXEC spInsertForUserNote
--     @FK_UserID,
--     @NoteID,
--     @BeenRead,
--     @Fax,
--     @PKId,
--     @AddressId OUTPUT
COMMIT TRANSACTION
-------------------------------------------------- GO
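A side note on the fragment above (an observation, not specific to this procedure): @@IDENTITY also picks up identity values generated by triggers on the inserted table, so SCOPE_IDENTITY() is usually the safer way to capture the new key:

-- sees only identity values created in the current scope
SELECT @NoteID = SCOPE_IDENTITY()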
OK, can someone tell me why I get two different answers for the same query? (I'm looking for the last day of the month for a given date.)

SELECT DATEADD(ms, - 3, DATEADD(mm, DATEDIFF(m, 0, CAST('12/20/2006' AS datetime)) + 1, 0)) AS Expr1 FROM testsupplierSCNCR

I am getting the result 01/01/2007.
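A guess at what's going on (worth checking against the datatype of the column or variable receiving the value): the formula itself yields 2006-12-31 23:59:59.997, but if that value passes through a smalldatetime anywhere it rounds up to the next minute, which lands on 2007-01-01:

SELECT CAST(DATEADD(ms, -3, DATEADD(mm, DATEDIFF(m, 0, '20061220') + 1, 0)) AS datetime)      AS as_datetime,       -- 2006-12-31 23:59:59.997
       CAST(DATEADD(ms, -3, DATEADD(mm, DATEDIFF(m, 0, '20061220') + 1, 0)) AS smalldatetime) AS as_smalldatetime;  -- 2007-01-01 00:00:00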
"Error: 8624, Severity: 16, State: 1 Internal Query Processor Error: The query processor could not produce a query plan. For more information, contact Customer Support Services."
I have traced this to an insert statement that executes as part of a stored procedure.
INSERT INTO ledger (journal__id, account__id,account_recv_info__id,amount)
There is also an auto-increment column called id. There are FK constraints on all of the columns ending in "__id". I have found that if I remove the constraint on account__id the procedure will execute without error. None of the other constraints seem to make a difference. Of course I don't want to remove this key, because it is important to the database integrity and should not be causing problems, but apparently it confuses the optimizer.
Also, the strange thing is that I can get the procedure to execute without error when I run it directly through management studio, but I receive the error when executing from .NET code or anything using ODBC (Access).
Hey, I've written a query to search a database dependent on variables chosen by the user, etc. Opened up a new SqlDataSource, entered the query shown below, and went on to the test query page. Entered some test variables; everything works as it should. Try to get it to show in a datagrid on a webpage - nothing. No data shows.

SELECT dbo.DERIVATIVES.DERIVATIVE_ID, count(*) AS Matches
FROM dbo.MAKES
INNER JOIN dbo.MODELS ON dbo.MAKES.MAKE_ID = dbo.MODELS.MAKE_ID
INNER JOIN dbo.DERIVATIVES ON dbo.MODELS.MODEL_ID = dbo.DERIVATIVES.MODEL_ID
INNER JOIN dbo.[VALUES] ON dbo.DERIVATIVES.DERIVATIVE_ID = dbo.[VALUES].DERIVATIVE_ID
INNER JOIN dbo.ATTRIBUTES ON dbo.[VALUES].ATTRIBUTE_ID = dbo.ATTRIBUTES.ATTRIBUTE_ID
WHERE ((ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID1 and (@VAL1 is null or VALUE = @VAL1)) or
       (ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID2 and (@VAL2 is null or VALUE = @VAL2)) or
       (ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID3 and (@VAL3 is null or VALUE = @VAL3)) or
       (ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID4 and (@VAL4 is null or VALUE = @VAL4)))
GROUP BY dbo.DERIVATIVES.DERIVATIVE_ID
HAVING count(*) >= CASE WHEN @VAL1 IS NOT NULL THEN 1 ELSE 0 END +
                   CASE WHEN @VAL2 IS NOT NULL THEN 1 ELSE 0 END +
                   CASE WHEN @VAL3 IS NOT NULL THEN 1 ELSE 0 END +
                   CASE WHEN @VAL4 IS NOT NULL THEN 1 ELSE 0 END - 2
ORDER BY count(*) DESC
Here is the page source
<%@ Page Language="VB" MasterPageFile="~/MasterPage.master" Title="Untitled Page" %> <asp:Content ID="Content1" ContentPlaceHolderID="ContentPlaceHolder1" Runat="Server"> <asp:SqlDataSource ID="SqlDataSource1" runat="server" ConnectionString="<%$ ConnectionStrings:DevConnectionString1 %>" SelectCommand="	SELECT dbo.DERIVATIVES.DERIVATIVE_ID, count(*) AS Matches 	FROM dbo.MAKES INNER JOIN 				 dbo.MODELS ON dbo.MAKES.MAKE_ID = dbo.MODELS.MAKE_ID INNER JOIN 				 dbo.DERIVATIVES ON dbo.MODELS.MODEL_ID = dbo.DERIVATIVES.MODEL_ID INNER JOIN 				 dbo.[VALUES] ON dbo.DERIVATIVES.DERIVATIVE_ID = dbo.[VALUES].DERIVATIVE_ID INNER JOIN 				 dbo.ATTRIBUTES ON dbo.[VALUES].ATTRIBUTE_ID = dbo.ATTRIBUTES.ATTRIBUTE_ID 	WHERE ((ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID1 and (@VAL1 is null or VALUE = @VAL1)) or 		 (ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID2 and (@VAL2 is null or VALUE = @VAL2)) or 		 (ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID3 and (@VAL3 is null or VALUE = @VAL3)) or 		 (ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID4 and (@VAL4 is null or VALUE = @VAL4)) ) 	GROUP BY dbo.DERIVATIVES.DERIVATIVE_ID 	HAVING count(*) >= CASE WHEN @VAL1 IS NOT NULL THEN 1 ELSE 0 END + 									 CASE WHEN @VAL2 IS NOT NULL THEN 1 ELSE 0 END + 									 CASE WHEN @VAL3 IS NOT NULL THEN 1 ELSE 0 END + 									 CASE WHEN @VAL4 IS NOT NULL THEN 1 ELSE 0 END -2 	ORDER BY count(*) DESC "> <SelectParameters> <asp:ControlParameter ControlID="DropDownList1" Name="ATT_ID1" PropertyName="SelectedValue" /> <asp:ControlParameter ControlID="TextBox1" Name="VAL1" PropertyName="Text" /> <asp:Parameter Name="ATT_ID2" /> <asp:Parameter Name="VAL2" /> <asp:Parameter Name="ATT_ID3" /> <asp:Parameter Name="VAL3" /> <asp:Parameter Name="ATT_ID4" /> <asp:Parameter Name="VAL4" /> </SelectParameters> </asp:SqlDataSource> <asp:SqlDataSource ID="SqlDataSource2" runat="server" ConnectionString="<%$ ConnectionStrings:DevConnectionString1 %>" SelectCommand="SELECT * FROM [ATTRIBUTES]"></asp:SqlDataSource> <br /> <asp:DropDownList ID="DropDownList1" runat="server" DataSourceID="SqlDataSource2" DataTextField="ATTRIBUTE_NAME" DataValueField="ATTRIBUTE_ID"> </asp:DropDownList> <asp:TextBox ID="TextBox1" runat="server" AutoPostBack="True"></asp:TextBox><br /> <br /> <br /> <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False" DataKeyNames="DERIVATIVE_ID" DataSourceID="SqlDataSource1"> <Columns> <asp:BoundField DataField="DERIVATIVE_ID" HeaderText="DERIVATIVE_ID" InsertVisible="False" ReadOnly="True" SortExpression="DERIVATIVE_ID" /> <asp:BoundField DataField="Matches" HeaderText="Matches" ReadOnly="True" SortExpression="Matches" /> </Columns> </asp:GridView> </asp:Content> AFAIK I have configured the source to pick up the dropdownlist value and the textbox value (the text box is autopostback). Am i not submitting the data correctly? (It worked with a simple query...just not with this one). I have tried a stored procedure which works when testing just not when its live on a webpage. Please help!
(Visual Web Developer 2005 Express and SQL Server Management Studio Express)
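One thing worth checking (an educated guess, not a confirmed diagnosis): by default a SqlDataSource cancels the whole SELECT whenever any select parameter is null, and ATT_ID2-4/VAL2-4 are left unbound here. Switching that behaviour off on SqlDataSource1 would tell it to run anyway:

<!-- SqlDataSource1 exactly as in the page source above, plus this one attribute -->
<asp:SqlDataSource ID="SqlDataSource1" runat="server" CancelSelectOnNullParameter="false">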
However, as you can see, the original select query is run twice and joined together. What I was hoping for is for this to be done in the original query, without the need to duplicate the original query.
I'm trying to find the command to open an ODBC connection inside SQL 2005 Express. I only have use of an ODBC connector; we're connecting to Remedy. We will eventually be using stored procedures to extract the data we need from Remedy and do additional data crunching. I'm a FoxPro programmer, so once I get the correct syntax for making the ODBC connection I should be OK. Also, I need a really good advanced book on SQL 2005 - the type of book that would have my ODBC answer. I've spent all morning trying to find this information and was unable to.
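From what I've pieced together so far, the shape of it may be an ad hoc OPENROWSET call through the MSDASQL (OLE DB over ODBC) provider - a sketch only, since the DSN name, credentials, and Remedy table below are placeholders:

-- one-time server setting to allow ad hoc distributed queries
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1; RECONFIGURE;

-- query through an ODBC DSN via the MSDASQL provider
SELECT *
FROM OPENROWSET('MSDASQL',
                'DSN=RemedyDSN;UID=someuser;PWD=somepassword',   -- hypothetical DSN
                'SELECT * FROM some_remedy_table');              -- hypothetical table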
Thanks in advance
Daniel Buchanan.
If this was the wrong forum to post this on, please move this question to the correct one. I need this answer soon.
We have an issue with an MDS server that has been hanging over us for a couple of days. The original error msg from the SQL Server engine is "The query processor could not produce a query plan", but the ones we get in the Excel Add-in are "Sequence contains no elements" or "The value cannot be null".
• Using Microsoft SQL Server 2012 (SP1) - 11.0.3393.0 (X64) for 6 months on this server without issues
• Two weeks ago we started to have 2 errors: "Sequence Contains No Elements" | "The Value Cannot Be Null"
• We are using the latest version of the Excel Add-in
• We tried to reinstall the MDS feature
• If I backup/restore the MDS database to another server, it works
• We updated to SQL 2012 SP2 + CU4 but the error persisted ...
Looking at the MDSTraceLog, we are routed to this msg:
SQL Error Debug Info: Number: 8624, Message: Internal Query Processor Error: The query processor could not produce a query plan. For more information, contact Customer Support Services., Server: bbdvsql03inst01, Proc: udpMetadataEntityGetDetailsXML, Line: 28
At line 28 udpMetadataEntityGetDetailsXML is calling udfMetadataEntityGetDetailsXML … and here is where we stopped
** Error found when trying to get data from an entity using the Excel add-in **
===================================
Sequence contains no elements
------------------------------
Program Location:
   at Microsoft.MasterDataServices.AsyncEssentials.AsyncResultBase.EndInvoke()
   at Microsoft.MasterDataServices.ExcelAddInCore.AsyncProviderBase`1.EndOperation(IAsyncResult ar)
How do I get the variable in the cursor's SET statement to NOT update the temp table with the value of the variable? I want it to pull a date, not the column name stored in the variable...
create table #temptable (
    columname varchar(150),
    columnheader varchar(150),
    earliestdate varchar(120),
    mostrecentdate varchar(120)
)
insert into #temptable
SELECT ColumnName, headername, '', ''
FROM eddsdbo.[ArtifactViewField]
WHERE ItemListType = 'DateTime' AND ArtifactTypeID = 10
--column name
declare @cname varchar(30)
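The usual way out of this (a sketch; the posted code is cut off, so the data table eddsdbo.Document and its use here are assumptions): the column name held in the variable has to be spliced into a dynamic SQL string, because a plain SET will only ever treat it as a literal:

DECLARE @sql NVARCHAR(MAX);
-- inside the cursor loop, after fetching the column name into @cname:
SET @sql = N'UPDATE #temptable
             SET earliestdate = (SELECT CONVERT(varchar(120), MIN(' + QUOTENAME(@cname) + N'))
                                 FROM eddsdbo.Document)   -- hypothetical data table
             WHERE columname = @col;';
EXEC sp_executesql @sql, N'@col varchar(150)', @col = @cname;
-- #temptable stays visible inside sp_executesql because it runs on the same session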