I have a question concerning where to put certain database files for the following RAID configurations. The server has two RAID configs: 2 drives in a RAID 1 and 4 drives in a RAID 10. The server will host 4 database instances: a replicated db, a Reporting Services db (which technically constitutes 2 db instances) and an application db.
In order to get the best performance, should I put the OS, SQL binaries and log files on the RAID 1 config, with the data and tempdb on the RAID 10? If not, please explain the best solution. Thank you!
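As a side note, once the drive letters exist, the placement itself is a one-liner per file. A minimal sketch, assuming the RAID 10 volume is presented as a hypothetical T: drive (tempdb file moves take effect at the next service restart):

-- Point tempdb's default logical files at the hypothetical RAID 10 volume
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\SQLData\tempdb.mdf')
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\SQLData\templog.ldf')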
I have been tasked with moving our SQL Server estate onto new 64-bit SQL 2008 virtual servers on a VM base. Each virtual server will be attached to our SAN, which I will have no control over. Do I ask for multiple LUNs, presenting a C: (OS), E: (temp), F: (data) and G: (log) disk structure, or do I just present one very big space as a single C: drive and let it go? We are consolidating lots of old physical servers onto fewer (more powerful) virtual servers (according to the VM and SAN administrators).
Please excuse the following question from a newbie to SQL.
I have been asked by our DBA to set up a new SAN tray to cope with our company databases.
We have an MSA1000 fibre channel unit that the database servers connect to via a fibre link.
We have a second SAN tray attached that currently has no disks in it; this is the one we are going to use for the new database setup.
We will be purchasing 300 GB 15k disks, but the question I have is how many disks, and in what configuration, will I need for the following scenario.
100 GB PRIMARY DATA
50 GB DATABASE LOG
100 GB SECONDARY DATA
300 GB ARCHIVE DATA
50 GB TEMP DATABASE
I have been reading as much as I can about SQL and disk I/O, and on the face of it, it appears that RAID 10 would be the fastest (and most fault-tolerant) for this setup.
If anybody has any ideas they would be gratefully received.
We are going to implement either a replicated database or log shipping to another database to support our reporting needs. Basically, we are moving reporting off our transactional database.
With this in mind, I have several questions: Are there any special configurations in terms of disks for a database in this situation? What are the recommended RAID types? Also, we currently store our data in one filegroup and indexes in another, so obviously I'll want those on two separate physical disks other than the C:. What about the log files? Do I care where they go (on C:, or on another physical drive separate from C:, the data filegroup drive and the index filegroup drive)?
Hello all, I am new to SQL 2000. I installed a SQL 2000 database on the C: drive, but now I find my C: drive free space is smaller than before, so I want to move my database (including data and structure) from the C: drive to the D: drive (which has lots of space). Is it possible to do this? If it can be done, do I need to change my ASP.NET program source code (e.g. change my Crystal Report connection string)? Thanks in advance!
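A hedged sketch for SQL 2000, with hypothetical database and file names: detach, move the files, re-attach. Because the server and database names do not change, connection strings (e.g. for Crystal Reports) should not need edits:

EXEC sp_detach_db 'MyDatabase'
-- move MyDatabase.mdf and MyDatabase_log.ldf from C:\ to D:\ in Windows
EXEC sp_attach_db 'MyDatabase',
     'D:\SQLData\MyDatabase.mdf',
     'D:\SQLData\MyDatabase_log.ldf'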
I have a three-tier system using SQL Server 2000. We are currently experiencing I/O bottlenecks on our SCSI RAID 10 array, which holds the data and the logs in separate partitions.
So my options as I understand it are:
Get Enterprise edition
or
Get another physical RAID 10 array and separate the logs and data, i.e. data on one array and logs on the other array.
I would like to try the latter but I am totally unsure how much difference this will make or whether it will make any difference at all.
Does anyone know how much performance increase I will get from using two arrays as opposed to one?
Any other advice on this scenario would be greatly appreciated.
I'm just beginning to experiment with memory-optimised tables.
I have two sets of near-identical tables - one set normal, the other set memory-optimised with DURABILITY=SCHEMA_ONLY - and am running test queries against these. When I say that the two sets are "near identical", I mean that they are the same except for the primary keys: for the normal tables these are defined as PRIMARY KEY CLUSTERED, whereas for the memory-optimised ones they are defined as PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT=nnnn), as per the requirements for such tables.
I then run a pair of test queries, again identical but one referencing the normal tables and the other referencing the memory optimised ones.
(The query uses an inner join on three tables with row counts of approx 3m rows, 100000 rows and 5000 rows.)
The query against the normal tables runs noticeably faster than the one against the memory-optimised tables. To try to find out why, I examined the execution plans. The plan for the memory-optimised query suggests that I have a missing index, but of course I can't create this against a memory-optimised table. Is this a bug, or am I missing something? Why should the performance of the two differ so much?
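For what it's worth: in SQL Server 2014, indexes on memory-optimised tables can only be declared at CREATE TABLE time, so acting on a missing-index hint means rebuilding the table. A minimal sketch with hypothetical names, pairing the hash primary key with a range NONCLUSTERED index on the join column:

CREATE TABLE dbo.Example_inmem (
    Id   int NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    FkId int NOT NULL INDEX IX_FkId NONCLUSTERED,  -- range index to support the join
    Val  nvarchar(20) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)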
CREATE TABLE [Sales].[Test_inmem]
(
    [c1] [int] NOT NULL,
    [c2] [nvarchar](20) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [ModifiedDate] [datetime2](7) NOT NULL CONSTRAINT [IMDF_Test_ModifiedDate] DEFAULT (sysdatetime()),
[Code] ....
I have to generate 1,000,000 random records into it. I tried various ways to insert records but, not being a developer, could not do it. I want C1 to be a serial number, C2 can be anything, and C3 should be the timestamp.
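A minimal sketch that may do, assuming the three columns shown above (c1 as the running number, c2 as arbitrary text, ModifiedDate as the timestamp) and using a stacked CROSS JOIN over a system catalog as a cheap numbers source:

;WITH N AS (
    SELECT TOP (1000000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
    FROM sys.all_objects AS a CROSS JOIN sys.all_objects AS b
)
INSERT INTO [Sales].[Test_inmem] (c1, c2, ModifiedDate)
SELECT n,                                        -- c1: a serial number
       LEFT(CAST(NEWID() AS nvarchar(36)), 20),  -- c2: arbitrary text
       SYSDATETIME()                             -- ModifiedDate: the timestamp
FROM N;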
How do I find the total allocated space and the used space of a memory-optimized filegroup?
use memory_optimized_db
GO
SELECT (SUM(size) * 8.0) / 1024.0 AS Space,
       FILEGROUP_NAME(data_space_id),
       type_desc
FROM sys.database_files
GROUP BY data_space_id, type_desc;
The above query gives the current used size of the memory-optimized filegroup's container, but doesn't give the total space.
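A hedged alternative, assuming SQL Server 2014 or later: the checkpoint-file DMV reports both the allocated and the used size of each file in the memory-optimized filegroup, so summing it should give both totals:

USE memory_optimized_db
GO
SELECT SUM(file_size_in_bytes) / 1024.0 / 1024.0      AS AllocatedMB,
       SUM(file_size_used_in_bytes) / 1024.0 / 1024.0 AS UsedMB
FROM sys.dm_db_xtp_checkpoint_files;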
I've been having some trouble getting queries against a single-column "varchar(5)" field to reliably use an index seek instead of a table scan. The production table in this case contains 25 million rows. As impressive as it is to scan 25 million rows in 35 seconds, the query should run much faster.
Typically, this table is accessed with a query that includes:
SELECT ... FROM SummaryTable WHERE ixZIP IN (SELECT ZipCode FROM @ZipCodesForMO)
This query insists on using a table scan. I've tried WITH (FORCESEEK) for example, but that just makes the query fail.
As I've investigated this issue I also tried:
SELECT * FROM Summaries WHERE ZipCode IN ('xxxxx', 'xxxxx', 'xxxxx')
When I run this query with 64 or fewer (actual, valid) ZIP codes, the query uses an index seek. But when I give it 65 or more ZIP codes, it uses a table scan.
To summarize, the production query always uses a table scan, and when I specify 65 or more ZIP codes the literal-list query also uses a table scan. I'm wondering if the collation of the indexed column (Latin1_General_100_BIN2) is somehow the problem. I'll likely try converting the ZIP codes to an integer to see what happens.
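One workaround worth trying (a sketch; the table-variable definition is assumed, with a COLLATE clause matching the column collation quoted above): give the optimizer a keyed, collation-matched row source to join against instead of the IN list:

DECLARE @ZipCodesForMO TABLE (
    ZipCode varchar(5) COLLATE Latin1_General_100_BIN2 NOT NULL PRIMARY KEY
);

SELECT s.*
FROM SummaryTable AS s
INNER JOIN @ZipCodesForMO AS z
    ON s.ixZIP = z.ZipCode;  -- equality join against a keyed source can cost out as a seek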
- An MSSQL 2014 Standard server that houses multiple small databases (in excess of a hundred).
- These databases are frequently dropped and restored by an application that uses this SQL Server.
- There is a business need for this setup at this time, so I can't get away from it. Therefore answers like "don't have so many small databases that are frequently dropped and restored" would be somewhat unhelpful.
This is the problem I have:
- When I connect SSMS 2014 to the server and expand the "Databases" node, it takes forever to display. In comparison, SSMS 2008 connected to a SQL 2008 R2 server with the same number of databases displays the Databases tree very quickly.
I ran a trace to see what exactly SSMS 2014 is doing. When the "Databases" node is expanded, it runs a query that checks each database for memory-optimized tables (a new and wonderful feature of SQL 2014, for sure, but I'm not using it, at least yet). Naturally, when you have to loop through over a hundred DBs, it takes time. Worse yet, if one of these DBs is in the process of being restored, the query sits and waits to time out before proceeding to the next DB. Sometimes this causes outright timeouts. Here is the query:
use [MyDatabase]
SELECT ISNULL((select top 1 1 from sys.filegroups FG where FG.[type] = 'FX'), 0) AS [HasMemoryOptimizedObjects]
To be sure, this is NOT a SQL Server performance issue. This server processes a rather heavy workload and has been doing so for over a month, and the workload completes within expected time limits or better. Even so I've done some basic performance measuring, and the server itself is quite all right.
Moreover, if I connect SSMS 2008 to it, I get an error message (Index out of bounds or somesuch), but SSMS 2008 does connect, and displays the Databases tree much faster than SSMS 2014.
I'd like to turn off the option to check for Memory Optimized Objects altogether, as I'm not using the feature.
I have just finished installing SQL 2005 Ent Edition on Win 2000 Adv Server, SQL2005 SP2, and SP2 hotfix KB934458. After the installation, I could see and configure all services via the SQL Configuration Manager and SQL Server Surface Area Configuration tools. This worked for a couple of days, and now both configuration tools no longer detect SQL2005 components. SQL Server Surface Area Configuration issued an error that said "No SQL Server 2005 components were found on specified computer. Either no components are installed, or you are not an administrator on this computer. (SQLAC)". SQL Configuration Manager did not list any installed services. I don't know what caused this. Does anyone have any idea? Please help! Below is the installation report, which shows the installed components.
===================================
The following components are installed on this server
I am trying to load data into a memory-optimised table (INSERT INTO ... SELECT ... FROM ...). The source table is about 1 GB in size with 13 million rows. During this load the LDF file grows to 350 GB (until the disk runs out of space). The server has about 110 GB of memory reserved for SQL Server. Tempdb doesn't grow. The bucket count in the CREATE statement is 262144. The hash key has 4 fields (2 fields have the datatype int, 1 is smallint, 1 is varchar(200)). The disk for the data files still has space for the data files (incl. the Hekaton files).
How can I reduce the size of the LDF file during the load of the data?
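One way I'd attack it (a sketch, with hypothetical source/target names and an assumed integer key): commit the load in batches so no single transaction has to reserve hundreds of GB of log, letting the log be reused between batches (SIMPLE recovery, or regular log backups, assumed):

DECLARE @BatchSize int = 500000, @MinId int, @MaxId int;
SELECT @MinId = MIN(Id), @MaxId = MAX(Id) FROM dbo.SourceTable;

WHILE @MinId <= @MaxId
BEGIN
    -- each statement commits on its own, keeping the log reservation small
    INSERT INTO dbo.MemOptTable (Id, Col1, Col2, Col3)
    SELECT Id, Col1, Col2, Col3
    FROM dbo.SourceTable
    WHERE Id >= @MinId AND Id < @MinId + @BatchSize;

    SET @MinId += @BatchSize;
END;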
If I return the Average, Minimum, and Maximum values for the counter Physical Disk: Avg. Disk Queue Length, and those values are 10, 0, and 87 respectively, which value do I use to compute the Avg. Disk Queue Length for a 4-disk array (RAID 10): Average, Minimum, or Maximum? The disk (LUN) is on a SAN.
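A worked example under the usual rule of thumb (which I'd treat as an assumption, not gospel): sustained pressure is judged from the Average value divided by the number of spindles, here 10 / 4 = 2.5 per disk, slightly above the commonly quoted threshold of 2 per spindle; the Minimum and Maximum mostly just bound the spikes. On a SAN LUN, though, the number of physical spindles behind the LUN may not be 4, so the divisor itself is uncertain.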
-- Initialize control mechanism
DECLARE @Drive TINYINT, @SQL VARCHAR(100)

SET @Drive = 97

-- Set up staging area
DECLARE @Drives TABLE (
    Drive CHAR(1),
    Info VARCHAR(80)
)

WHILE @Drive <= 122
BEGIN
    SET @SQL = 'EXEC XP_CMDSHELL ''fsutil volume diskfree ' + CHAR(@Drive) + ':'''

    INSERT @Drives (Info)
    EXEC (@SQL)

    UPDATE @Drives
    SET Drive = CHAR(@Drive)
    WHERE Drive IS NULL

    SET @Drive = @Drive + 1
END

-- Show the expected output
SELECT Drive,
       SUM(CASE WHEN Info LIKE 'Total # of bytes : %'
                THEN CAST(REPLACE(SUBSTRING(Info, 32, 48), CHAR(13), '') AS BIGINT)
                ELSE CAST(0 AS BIGINT) END) AS TotalBytes,
       SUM(CASE WHEN Info LIKE 'Total # of free bytes : %'
                THEN CAST(REPLACE(SUBSTRING(Info, 32, 48), CHAR(13), '') AS BIGINT)
                ELSE CAST(0 AS BIGINT) END) AS FreeBytes,
       SUM(CASE WHEN Info LIKE 'Total # of avail free bytes : %'
                THEN CAST(REPLACE(SUBSTRING(Info, 32, 48), CHAR(13), '') AS BIGINT)
                ELSE CAST(0 AS BIGINT) END) AS AvailFreeBytes
FROM (
    SELECT Drive, Info
    FROM @Drives
    WHERE Info LIKE 'Total # of %'
) AS d
GROUP BY Drive
ORDER BY Drive
I am trying to set up a test cluster and am having an issue. When I try to create the resource of a physical disk, it takes both drive E: and drive Q: and doesn't separate them into two physical disks as resources. This means when I try to associate the quorum disk, it links to the physical disk resource of drives E and Q. Then when I try to install SQL2k5 I get the warning about installing SQL on the quorum disk. Am I missing something? Is there a way to separate E and Q onto two physical disk resources so I can specifically associate the quorum to Q and the SQL to E, or should I be setting the quorum disk to a majority node set? Thanks in advance.
Can the following stored procedure somehow be optimized?

IF @groupID = '812846'
BEGIN
    IF (SELECT COUNT(*) FROM Employee WHERE SSN = @SSN AND groupID = @groupID) > 0
    BEGIN
        UPDATE Employee
        SET NameLast = @LastName, NameFirst = @FirstName, NameMiddle = @MI
        WHERE SSN = @SSN AND GroupId = @GroupId

        SELECT @EmpId = EmpId FROM Employee WHERE SSN = @SSN AND groupID = @groupID
    END
    ELSE
    BEGIN
        INSERT INTO Employee (GroupId, NameLast, NameFirst, NameMiddle, SSN)
        VALUES (@GroupId, @LastName, @FirstName, @MI, @SSN)

        SELECT @EmpId = @@IDENTITY
    END
END
ELSE
BEGIN
    INSERT INTO Employee (GroupId, NameLast, NameFirst, NameMiddle, SSN)
    VALUES (@GroupId, @LastName, @FirstName, @MI, @SSN)

    SELECT @EmpId = @@IDENTITY
END
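A hedged rewrite of the matched-row branch (same assumed columns and variables as above): the separate COUNT(*) probe can be dropped in favour of @@ROWCOUNT from the UPDATE itself, and SCOPE_IDENTITY() is safer than @@IDENTITY because it ignores identity values generated by triggers:

UPDATE Employee
SET NameLast = @LastName, NameFirst = @FirstName, NameMiddle = @MI
WHERE SSN = @SSN AND GroupId = @GroupId

IF @@ROWCOUNT > 0
BEGIN
    -- the row existed and was updated; just fetch its id
    SELECT @EmpId = EmpId FROM Employee WHERE SSN = @SSN AND GroupId = @GroupId
END
ELSE
BEGIN
    INSERT INTO Employee (GroupId, NameLast, NameFirst, NameMiddle, SSN)
    VALUES (@GroupId, @LastName, @FirstName, @MI, @SSN)

    SELECT @EmpId = SCOPE_IDENTITY()
END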
SELECT procs.name AS ProcName,
       params.name AS ParameterName,
       types.name AS ParamType,
       params.max_length,
       params.precision,
       params.scale,
       params.is_output,
       params.has_default_value
FROM sys.procedures procs
LEFT OUTER JOIN sys.all_parameters params
    ON procs.object_id = params.object_id
LEFT OUTER JOIN sys.types types
    ON params.system_type_id = types.system_type_id
   AND params.user_type_id = types.user_type_id
WHERE procs.is_ms_shipped = 0
  AND params.name = '@DISPOSAL_AREA_NAME'
  AND procs.name = 'webservices_BENEFICIAL_USES_DM_SELECT'
ORDER BY ProcName, params.parameter_id
Now, all I need from it is the column params.is_output.
I have modified it down to what I need, but I'm wondering if I can remove some of the joins or anything else for better performance without losing the proper results:
SELECT params.is_output
FROM sys.procedures procs
LEFT OUTER JOIN sys.all_parameters params
    ON procs.object_id = params.object_id
LEFT OUTER JOIN sys.types types
    ON params.system_type_id = types.system_type_id
   AND params.user_type_id = types.user_type_id
WHERE procs.is_ms_shipped = 0
  AND params.name = '@DISPOSAL_AREA_NAME'
  AND procs.name = 'webservices_BENEFICIAL_USES_DM_SELECT'
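If it helps, a sketch of a further trim (same object names as above): the join to sys.types only supplied the type name, which is no longer selected, so it can probably go; and since the WHERE clause filters on params.name, the LEFT join to the parameters behaves as an inner join anyway:

SELECT params.is_output
FROM sys.procedures procs
INNER JOIN sys.all_parameters params
    ON procs.object_id = params.object_id
WHERE procs.is_ms_shipped = 0
  AND params.name = '@DISPOSAL_AREA_NAME'
  AND procs.name = 'webservices_BENEFICIAL_USES_DM_SELECT'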
SELECT phase, stat, subject,
       CASE WHEN phase = 'Initial/Data Collection' THEN '1'
            WHEN phase = 'Screening' THEN '2'
            WHEN phase = 'Assessment and Selection' THEN '3'
            WHEN phase = 'Placement' THEN '4'
       END AS PhaseSort
FROM (
    SELECT subject, stat,
           CASE WHEN stat = 'Application Received' THEN 'Initial/Data Collection'
                WHEN stat = 'Shortlisted' OR stat = 'For Screening' THEN 'Screening'
                WHEN stat = 'For Assessment' OR stat = 'Passed Initial Evaluation'
                  OR stat = 'Passed Profiles Exam' OR stat = 'Passed Technical Exam' THEN 'Assessment and Selection'
                WHEN stat = 'For Placement' THEN 'Placement'
           END AS phase
    FROM (
        SELECT subject,
               CASE WHEN subject = 'Process Application' OR subject = 'Application Received' THEN 'Application Received'
                    WHEN subject = 'Screen Application' THEN 'For Screening'
                    WHEN subject = 'Phone interview' THEN 'Shortlisted'
                    WHEN subject = 'Initial Interview' THEN 'For Assessment'
                    WHEN subject = 'Profiles assessment' THEN 'Passed Initial Evaluation'
                    WHEN subject = 'Technical Exam and Interview' THEN 'Passed Profiles Exam'
                    WHEN subject = 'background and reference check' THEN 'Passed Technical Exam'
                    WHEN subject = 'Job Offer' OR subject = 'Contract Signing' THEN 'For Placement'
               END AS stat
        FROM dbo.filteredtask
        WHERE subject IN ('application received', 'process application', 'screen application',
                          'initial interview', 'profiles assessment', 'technical exam and interview',
                          'background and reference check', 'phone interview', 'shortlisted',
                          'For Placement', 'job offer', 'contract signing')
    ) Phases
) stats
ORDER BY PhaseSort
For example, populating the SiteName and SLAClass fields using select statements each time may bog down the system.
Also, I'd like to feed the CustID and Subject fields from another table called Profile, instead of typing the CustID field each time.
The purpose of this statement is to search for customers in the subject line and, if a customer is found, add the customer information into the Detail table. The Profile table contains all customer information.
UPDATE [TEST3].[dbo].[Detail]
SET [CustID] = 'Book Fairs' /* fill in with field from the Profile table automatically */
  , [SiteName] = (SELECT DISTINCT [Profile].[SiteName]
                  FROM [TEST3].[dbo].[Profile], [TEST3].[dbo].[Detail]
                  WHERE [Profile].[CustID] = [Detail].[CustID])
  , [SLAClass] = (SELECT DISTINCT [Profile].[SLAClass]
                  FROM [TEST3].[dbo].[Profile], [TEST3].[dbo].[Detail]
                  WHERE [Profile].[CustID] = [Detail].[CustID])
WHERE [Detail].[CallID] IN (SELECT [CallLog].[CallID]
                            FROM [TEST3].[dbo].[CallLog], [TEST3].[dbo].[Subset], [TEST3].[dbo].[Asgnmnt]
                            WHERE [CallLog].[CallType] = 'DREAM'
                              AND [CallLog].[Subject] LIKE '%Book Fairs%') /* fill in with field from the Profile table automatically */
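A hedged sketch of the join-based shape (tables as above; the Subset and Asgnmnt joins are omitted because their conditions aren't shown): a single UPDATE ... FROM can pull CustID, SiteName and SLAClass from Profile by matching the customer inside the call subject, instead of hard-coding 'Book Fairs':

UPDATE d
SET d.CustID   = p.CustID,
    d.SiteName = p.SiteName,
    d.SLAClass = p.SLAClass
FROM [TEST3].[dbo].[Detail] AS d
INNER JOIN [TEST3].[dbo].[CallLog] AS c
    ON c.CallID = d.CallID
INNER JOIN [TEST3].[dbo].[Profile] AS p
    ON c.Subject LIKE '%' + p.CustID + '%'  -- find the customer named in the subject line
WHERE c.CallType = 'DREAM'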
I have two tables in a SQL 6.5 database with identical fields and indexes. One contains the data of August 2003 and the other July 2003. Now, the August table is larger (about 40,000 more rows) than the July table, but I've noticed that the same queries perform much faster on the August table than the July table. I've tried this with many different queries, so I'm wondering what the reason behind this is. Is there a way to optimize a table? Remember, I'm using SQL 6.5. Thx
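If it were me, the first hedged guess would be stale distribution statistics on the July table skewing the optimizer's choices; refreshing them is cheap to try (table name assumed):

-- refresh distribution statistics on the slow table
UPDATE STATISTICS JulyTable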
I have this query that is taking more than 5 minutes to run. Granted, it involves 7 tables, 4 of which have over 100,000 rows, but there must be a quicker way of executing this.
SELECT ACP.COMPANY_NAME, WOD.WO, WOH.SCHEDULED_DATE, WOH.JOB_ADDRESS_1, WOH.JOB_ADDRESS_2,
       WOH.CUSTOMER_CODE, ARC.CUSTOMER_NAME, ARC.BILL_TO_CUSTOMER_CODE, APS.SUPPLIER_NAME,
       APC.INVOICE_NUMBER AS AP_INVOICE_NUMBER, APC.INVOICE_DATE AS AP_INVOICE_DATE,
       APC.DATE_OF_RECORD AS AP_DATE_OF_RECORD, WOD.AMOUNT, APC.CHEQUE_NUMBER,
       WOH.INVOICE_NUMBER AS AR_INVOICE_NUMBER, ARI.DATE_OF_RECORD AS AR_DATE_OF_RECORD
FROM WO_WODDescription_tbl AS WOD
LEFT OUTER JOIN WO_Headers_tbl AS WOH
    ON WOD.COMPANY_CODE = WOH.COMPANY_CODE AND WOD.WO = WOH.WORK_ORDER_NUMBER
LEFT OUTER JOIN AP_CurrentDetails_tbl AS APC
    ON WOD.COMPANY_CODE = APC.COMPANY_CODE AND WOD.DRILL_DOWN_NUMBER = APC.DRILL_DOWN
   AND WOD.AUDIT_NUMBER = APC.AUDIT_NUMBER
LEFT OUTER JOIN AR_CustomerMaster_tbl AS ARC
    ON WOD.COMPANY_CODE = ARC.COMPANY_CODE AND WOH.CUSTOMER_CODE = ARC.CUSTOMER_CODE
LEFT OUTER JOIN AP_Suppliers_tbl AS APS
    ON APC.COMPANY_CODE = APS.COMPANY AND APC.SUPPLIER_CODE = APS.SUPPLIER_CODE
LEFT OUTER JOIN ADM_CompanyProfile_tbl AS ACP
    ON WOD.COMPANY_CODE = ACP.COMPANY_CODE
LEFT OUTER JOIN AR_InvoiceDetailCurrent_tbl AS ARI
    ON WOD.COMPANY_CODE = ARI.COMPANY_CODE AND WOH.INVOICE_NUMBER = ARI.INVOICE_NUMBER
WHERE (WOD.COMPANY_CODE = '01' OR WOD.COMPANY_CODE = '03')
  AND APC.CHEQUE_NUMBER <> 'X%'
  AND (APC.DATE_OF_RECORD < '20061101' AND ARI.DATE_OF_RECORD > '20061031')
ORDER BY WOD.COMPANY_CODE, WOD.WO
Can anyone give me any suggestions on how I could speed this up? Also, I have noticed that sqlservr.exe is using more than 1.5 GB of the 2 GB in the machine while doing conversions from flat files to the database, while the CPU is under 3% load; is this typical of MSSQL2005?
Hi, I created a page that lists the total hours, lunch time and expenses for the employees of the company. I am trying to optimize this stored procedure, but it still takes more than 40 seconds for 50 employees.

SELECT @StartDate AS DateLigne, TPerson.Name, TPerson.idperson,
       (SELECT SUM(COALESCE(hours, 0) - COALESCE(lunch, 0))
        FROM Thereport
        WHERE etridperson = TPerson.idperson
          AND etridproject = TUserProject.etridproject
          AND DATEDIFF(day, @StartDate, datereport) >= 0
          AND DATEDIFF(day, datereport, @endDate) >= 0) AS hours,
       (SELECT SUM(COALESCE(nonbillable, 0))
        FROM Thereport
        WHERE etridperson = TPerson.idperson
          AND etridproject = TUserProject.etridproject
          AND DATEDIFF(day, @StartDate, datereport) >= 0
          AND DATEDIFF(day, datereport, @endDate) >= 0) AS nonbillable,
       (SELECT SUM((COALESCE(miles, 0) * @mil) + COALESCE(perdiem, 0) + COALESCE(supplies, 0)
                 + COALESCE(airfare, 0) + COALESCE(gas, 0) + COALESCE(autorental, 0) + COALESCE(other, 0))
        FROM Thereport
        WHERE etridperson = TPerson.idperson
          AND etridproject = TUserProject.etridproject
          AND DATEDIFF(day, @StartDate, datereport) >= 0
          AND DATEDIFF(day, datereport, @endDate) >= 0) AS Expenses
FROM TUserProject, TPerson
WHERE TUserProject.etridperson = TPerson.idperson
  AND etridproject = 89
Do you have any idea how I could optimize this stored procedure?
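A hedged single-pass rewrite (columns and variables assumed from the procedure above): one join with conditional aggregation, plus a sargable date range instead of DATEDIFF on the column, so an index on datereport can be used:

SELECT @StartDate AS DateLigne,
       TPerson.Name, TPerson.idperson,
       SUM(COALESCE(r.hours, 0) - COALESCE(r.lunch, 0)) AS hours,
       SUM(COALESCE(r.nonbillable, 0)) AS nonbillable,
       SUM(COALESCE(r.miles, 0) * @mil + COALESCE(r.perdiem, 0) + COALESCE(r.supplies, 0)
         + COALESCE(r.airfare, 0) + COALESCE(r.gas, 0) + COALESCE(r.autorental, 0)
         + COALESCE(r.other, 0)) AS Expenses
FROM TUserProject
INNER JOIN TPerson
    ON TUserProject.etridperson = TPerson.idperson
LEFT JOIN Thereport AS r
    ON r.etridperson = TPerson.idperson
   AND r.etridproject = TUserProject.etridproject
   AND r.datereport >= @StartDate   -- sargable range instead of DATEDIFF()
   AND r.datereport <= @endDate
WHERE TUserProject.etridproject = 89
GROUP BY TPerson.Name, TPerson.idperson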
Hi, I am used to writing correlated subqueries within my main queries. Although they work fine, I have read a lot that they carry performance hits. Also, as our data has grown over time, a simple SELECT statement with a few subqueries tends to run slower, which may be between 10 and 15 seconds. The following is a simple example of what I mostly do:

SELECT DISTINCT C.CusID, C.Name, C.Age,
       (SELECT SUM(Price) FROM CustomerOrder WHERE CusID_fk = CO.CusID_fk) AS Total_Order_Price,
       (SELECT SUM(Concession) FROM CustomerOrder WHERE CusID_fk = CO.CusID_fk) AS Total_Order_Concession,
       (SELECT SUM(Price) - SUM(Concession) FROM CustomerOrder WHERE CusID_fk = CO.CusID_fk) AS Total_Difference
FROM Customer C
INNER JOIN CustomerOrder CO ON C.CusID = CO.CusID_fk
......
WHERE (conditions...)

My question is: what would be a better way to handle the above query? How can I write a better yet simple query with optimized performance? I would also mention that in some of my ASP.NET applications I use inline queries assigned to a SqlCommand object. The reason I mention this is that, since these queries are written in class files, how would we still accomplish what I have mentioned above? Kindly, could any query guru guide me in writing better queries? I shall be obliged...
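A sketch of the usual rewrite for the example above: aggregate the child table once in a derived table and join to it, instead of re-scanning it in three correlated subqueries (names taken from the example; the elided joins and conditions stay as they were):

SELECT C.CusID, C.Name, C.Age,
       CO.Total_Order_Price,
       CO.Total_Order_Concession,
       CO.Total_Order_Price - CO.Total_Order_Concession AS Total_Difference
FROM Customer C
INNER JOIN (
    -- one pass over the child table instead of three per outer row
    SELECT CusID_fk,
           SUM(Price)      AS Total_Order_Price,
           SUM(Concession) AS Total_Order_Concession
    FROM CustomerOrder
    GROUP BY CusID_fk
) CO ON C.CusID = CO.CusID_fk
-- WHERE (conditions...)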
Hello :-) My question is: if I query a partitioned view but don't know the values in the "where x in (<expression>)" clause, i.e. select * from viewA where intVal in (select intVal from tbl1), compared to: select * from viewA where intVal in (5, 6). Of course "intVal" is the partitioning column. Will this result in an optimized query that searches only the relevant tables?
We have a situation where queries against a partitioned view ignore a suitable index and perform a table scan (against 200+ MB of data), where the same query on the underlying table(s) results in a 4-page index seek. I can't find any mention of the situation, so I'm trying a post here.
We're running SQL Server 2005 Enterprise Edition SP2 on Windows 2003 Enterprise Edition SP1 on a two-node cluster, and it also occurs on a stand-alone development box with Developer Edition. We have four tables, named Options#0, Options#1, Options#2, and Options#3. All are almost identical (script generated by SSMS and edited down a bit):
SET ANSI_NULLS OFF
SET QUOTED_IDENTIFIER ON
CREATE TABLE [dbo].[Options#0](
    [ControlID] [tinyint] NOT NULL CONSTRAINT [DF_Options#0__ControlID] DEFAULT ((0)),
    [ModelCode] [char](8) NOT NULL,
    [EquipmentID] [int] NOT NULL,
    [AdjustmentContextID] [int] NOT NULL,
    [EquipmentCode] [char](2) NOT NULL,
    [EquipmentTypeCode] [char](1) NOT NULL,
    [Description] [varchar](50) NOT NULL,
    [DisplayOrder] [smallint] NOT NULL,
    [IsStandard] [bit] NOT NULL,
    [Priority] [tinyint] NOT NULL,
    [Status] [bit] NOT NULL,
    [Adjustment] [int] NOT NULL,
    CONSTRAINT [PK_Options#0] PRIMARY KEY CLUSTERED
    (
        [ModelCode] ASC,
        [EquipmentID] ASC,
        [AdjustmentContextID] ASC,
        [ControlID] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
ALTER TABLE [dbo].[Options#0] WITH CHECK ADD CONSTRAINT [CK_Options#0__ControlID] CHECK (([ControlID]=(0)))
ALTER TABLE [dbo].[Options#0] CHECK CONSTRAINT [CK_Options#0__ControlID]
The only differences between the tables are in the names and in the value defaulted to and CHECKed, which matches the table name (to support the partitioned view, of course).
We receive and load data every week and every two months, and use an unlikely algorithm to load and manage its availability by running an ALTER on the view (to maintain the access rights defined for the hosting environment). Scripted out via SSMS, the view looks like:
SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON
CREATE VIEW [dbo].[Options] AS
select * from Options#1
union all
select * from Options#3
The problem is that when we issue a query like
SELECT count(*) from Options where ControlID = 1 and ModelCode = '2004NIC9'
The resulting query (as checked via the query plan and SET STATISTICS IO ON) will get "partitioned", running against the proper table, but it will ignore the index, perform a table scan, and churn through 200+ MB of data. A similar query run against the underlying table
SELECT count(*) from Options#1 where ControlID = 1 and ModelCode = '2004NIC9'
(with or without the ControlID = 1 clause) will perform a Clustered Index Seek and read maybe 4 pages.
Analyzing the execution plan shows that the query against the table works like you'd think, but for the query against the view we get a Clustered Index Scan, with predicate:
[DBName].[dbo].[Options#1].[ControlID]=(1) AND CONVERT_IMPLICIT(char(8),[DBName].[dbo].[Options#1].[ModelCode],0)='2004NIC9'
I get the same results when explicitly listing all columns in the view. The code page on the view and tables is the same (as determined by checking properties via SSMS).
Why is the table's data column being implicitly converted to the data type that it already is? Why does this occur when working with the partitioned view but not with the actual table? Can this behavior be controlled or modified without losing the (incredibly useful) data-loading management benefits of the partitioned view? I'm guessing (and hoping) it's some subtle quirk or mis-setting; please set me on the right path!
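A hedged diagnostic to start with (system views only; object names from above): compare the resolved type, length, collation, and ANSI_PADDING of ModelCode on the view and each member table; any mismatch would explain a CONVERT_IMPLICIT that disables the seek:

SELECT OBJECT_NAME(c.object_id) AS ObjectName,
       t.name AS TypeName,
       c.max_length,
       c.collation_name,
       c.is_ansi_padded
FROM sys.columns AS c
INNER JOIN sys.types AS t
    ON c.user_type_id = t.user_type_id
WHERE c.name = 'ModelCode'
  AND c.object_id IN (OBJECT_ID('dbo.Options'),
                      OBJECT_ID('dbo.Options#0'), OBJECT_ID('dbo.Options#1'),
                      OBJECT_ID('dbo.Options#2'), OBJECT_ID('dbo.Options#3'))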