Hi,

My company has a scenario where we would like to change the data type of an existing primary key from an integer to a char, but we are concerned about the performance implications of doing so. The script for the two tables that we need to modify is listed below. Table FR_Sessions contains a column named TransmissionID which is currently an integer. This table contains about 1 million rows of data. Table FR_VTracking also contains TransmissionID as part of its primary key, and it contains about 35 million rows of data. These two tables are frequently joined on TransmissionID (FR_Sessions is the parent). The TransmissionID column is used primarily for joins and is not typically displayed.

We would like to change the TransmissionID data type from int to char(7), and I had a few questions:

1) Would this introduce significant performance degradation? I have read that char keys/indexes are slower than int/numeric.
2) Are there collation options (or any other optimizations) that we could use to minimize the performance hit of the char(7)? If so, which ones?

I am a software architect by trade, not a database guru, so please go easy on me if I overlooked something obvious :)

Any suggestions or information would be greatly appreciated.

Thanks,
Tim

-------------------

CREATE TABLE [FR_Sessions] (
	[TransmissionID] [int] IDENTITY (1, 1) NOT NULL ,
	[PTUID] [varchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
	[PortNum] [numeric](6, 0) NOT NULL CONSTRAINT [DF_FR_Sessions_PortNum] DEFAULT (0),
	[CloseStatus] [varchar] (20) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
	[RecvBytes] [int] NULL ,
	[SendBytes] [int] NULL ,
	[EndDT] [datetime] NULL CONSTRAINT [DF_FR_Sessions_EndDT] DEFAULT (getutcdate()),
	[LocalEndDT] [datetime] NULL ,
	[TotalTime] [int] NULL ,
	[OffenderID] [numeric](9, 0) NULL ,
	[UploadStatus] [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL CONSTRAINT [DF_FR_Sessions_UploadStatus] DEFAULT ('N'),
	[SchedBatchID] [numeric](18, 0) NULL ,
	[SWVersion] [varchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
	[DLST] [bit] NULL ,
	[TZO] [smallint] NULL ,
	[Processed] [bit] NOT NULL CONSTRAINT [DF_FR_Sessions_Processed] DEFAULT (0),
	[CallerID] [varchar] (13) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
	[PeerIP] [varchar] (16) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
	[XtraInfo] [varchar] (1024) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
	[IdType] [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
	CONSTRAINT [PK_FR_Sessions] PRIMARY KEY CLUSTERED
	(
		[TransmissionID]
	) WITH FILLFACTOR = 90 ON [PRIMARY]
) ON [PRIMARY]

CREATE TABLE [FR_VTracking] (
	[TransmissionID] [int] NOT NULL ,
	[FrameNum] [int] NOT NULL ,
	[LatDegrees] [float] NOT NULL ,
	[LonDegrees] [float] NOT NULL ,
	[Altitude] [float] NOT NULL ,
	[Velocity] [float] NOT NULL ,
	[NumPositions] [smallint] NOT NULL ,
	[NavMode] [smallint] NOT NULL ,
	[Units] [smallint] NOT NULL ,
	[GPSTrackingID] [numeric](18, 0) IDENTITY (1, 1) NOT NULL ,
	[dtStamp] [datetime] NULL ,
	CONSTRAINT [PK_FR_VTracking] PRIMARY KEY CLUSTERED
	(
		[TransmissionID],
		[FrameNum]
	) WITH FILLFACTOR = 90 ON [PRIMARY]
) ON [PRIMARY]
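One option often mentioned for char keys, offered here as a sketch rather than a tested fix: give the new char(7) column a binary collation, so key comparisons become plain byte compares instead of going through the SQL_Latin1_General_CP1_CI_AS collation rules. Note that the primary key (and any foreign keys referencing it) would have to be dropped and recreated around the conversion, and a binary collation makes the column case- and accent-sensitive. In the rebuilt FR_Sessions definition the key line might read:

	-- Sketch only: binary collation on the converted key column.
	[TransmissionID] [char] (7) COLLATE Latin1_General_BIN NOT NULL ,

with the matching collation repeated on FR_VTracking.TransmissionID so the join does not force collation conversions.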
The following query performs acceptably (2 seconds against 126,000,000 rows in the main table):
SELECT Count(*)
FROM
Message1_2_3 INNER JOIN
VDMVDO ON Message1_2_3.VDMVDO_ID = VDMVDO.VDMVDO_ID INNER JOIN
NMEA ON VDMVDO.NMEA_ID = NMEA.NMEA_ID
WHERE
NMEA.NMEA_ID BETWEEN 14000000 AND 14086000 AND
VDMVDO.RepeatIndicator = 0 AND
NMEA.SentenceFormatterID = 'VDM'
When we change the first condition from an Int column to a DateTime as in:
NMEA.TimeDate BETWEEN CONVERT(DATETIME, '2007-07-09 8:30:00', 102) AND CONVERT(DATETIME, '2007-07-09 9:30:00', 102)
the query time rises to 14 seconds, even though both columns are indexed and a similar number of rows are found. When the select clause changes from a simple Count to a complex Max expression, response time climbs to over a minute!
Any thoughts on optimizing the DateTime search would be greatly appreciated...
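One avenue that might be worth checking, with the index name below being hypothetical: see whether the plan for the DateTime range is doing bookmark lookups, and if so try a nonclustered index that carries both filter columns so the range seek and the formatter test are satisfied from the same index:

CREATE NONCLUSTERED INDEX IX_NMEA_TimeDate_Formatter
ON NMEA (TimeDate, SentenceFormatterID)

If NMEA_ID is the clustered key it rides along in every nonclustered index automatically, which also helps the join.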
I was hoping I wouldn't be another poster with performance issues after migrating to SQL 2005 from SQL 2000, but here I am.
I am in the process of testing out our databases on SQL Server 2005 for migration from SQL Server 2000, and certain portions of code have been affected negatively. I have read through many of the posts here and have tried out most of the recommendations. I will start out with the things I've done and then provide the actual SQL.
1) I have rebuilt all indexes (using DBCC DBREINDEX with the table option).
2) Updated the db engine to the latest hotfix (build 3239) that addresses speed-related fixes.
3) I also ran sp_createstats using the 'fullscan' option to create stats on all columns of all tables (minus indexed columns).
4) Since nothing seemed to work, I even ran UPDATE STATISTICS WITH FULLSCAN on all tables, even though I should not have needed to, as the rebuild would have created the stats. But I was willing to try anything. (Rough command equivalents of these steps are sketched below.)
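For reference, steps 1), 3) and 4) correspond roughly to the following commands (the table name is a placeholder; step 2 is a hotfix install with no T-SQL equivalent):

DBCC DBREINDEX ('dbo.MyTable')                -- 1) rebuild all indexes on a table
EXEC sp_createstats 'fullscan'                -- 3) stats on non-indexed columns, full scan
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN   -- 4) refresh all statistics on the table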
I have confirmed that the execution plans are different even though the data on both SQL 2000 and SQL 2005 is identical (I put a copy on 2005). The plans themselves are huge, as the queries are huge. Here is the query.
SELECT InterimView.*, TestView.*
FROM View_LabDataExport_TestFormData_55 TestView
RIGHT OUTER JOIN (
	SELECT ReqView.*, CDView.*
	FROM View_LabDataExport_FormData_55 ReqView
	LEFT OUTER JOIN View_LabDataExport_FormData_CD_55 CDView
		ON ( CDView.DB_SubjectID_CD = ReqView.DB_SubjectID )
) InterimView
ON ( InterimView.DB_FormID = TestView.DB_FormID_T AND
The above query takes about 8 seconds to run on 2000 and about 1 minute on 2005. This is for a small dataset, and on larger datasets this is only going to be more pronounced (as confirmed by other teams in my company that have already migrated). Another point worth mentioning: if I remove TestView.* from the select list, it runs in 5 to 6 seconds. Is there an issue with SQL 2005 and a large number of columns, or anything of that sort? On 2000 the time remains the same, about 8 seconds, if I remove this from the select list.
I have a cube with 2 many-to-many dimensions where a particular MDX query takes about 5 seconds. When I resolve the many-to-many relationships by multiplying out the data in the fact table, the query takes 21 seconds.
In general, do many-to-many dimensions slow down the query performance of a cube?
Without the many-to-many dimensions the fact table of course has many more rows. Could this be the reason for the performance loss?
How does one tune the query performance of a cube in general?
I want to be able to return the rows from a table that have been updated since a specific time. My query returns results in less than 1 minute if I hard-code the reference timestamp, but it keeps spinning if I load the reference timestamp from a table. See the examples below (the "Reference" table has only one row, with the value 2014-09-30 00:00:00.000):
select * from A where ReceiptTS > '2014-09-30 00:00:00.000'
select * from A where ReceiptTS > (select ReferenceTS from Reference)
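One variant worth testing, on the assumption that the scalar subquery is what defeats the optimizer's row estimate: read the reference value into a variable first, then filter on the variable.

DECLARE @ReferenceTS datetime
SELECT @ReferenceTS = ReferenceTS FROM Reference
SELECT * FROM A WHERE ReceiptTS > @ReferenceTS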
Can anyone comment on the engine performance difference between SQL2005 Enterprise Edition and Standard? I'm talking about generalized performance of the engine, not admin features (parallel index operations) or scaled storage (partitioning).
The marketing literature makes note of two things:
- Enterprise can use more than 4 processors
- Enhanced read-ahead and scan ("super scan") (note: I cannot find anything about this 'feature')

And one un-noted feature:
- Only Enterprise supports 'lock pages in memory'
We are in the process of migrating from SQL2000 to SQL2005 in an OLTP environment. Based on the marketing literature, I would have chosen SQL2005-Standard. But based on our limited testing, we are seeing some strange differences.
Query Performance
With MaxDOP=1 and using a large batch query (select top 1500000), SQL2005-Enterprise is twice as fast as SQL2005-Standard.
(Note: this difference persists regardless of lock-pages-in-memory setting)
CPU Utilization
In addition, taskmgr shows that SQL2005-Enterprise uses a single processor at ~90%, while SQL2005-Standard shows a single processor at ~20%.
Lock Behavior
We are also seeing lock-behavior differences. A single DML statement that attempts to modify ~5000 rows will cause Table-locks on SQL2005-Standard but obtain normal row-locks on SQL2005-Enterprise.
These empirical differences make me wonder: is the engine codebase fundamentally different between the two editions?
In SSIS, I have an Execute SQL Task that runs a direct SQL statement as follows to update an MS/SQL database table:
Update auditTable SET PackageName = @pkgname
In the parameters mapping, @pkgname is mapped to the System variable System::PackageName (as input).
This task runs fine without any error if I use ADO.NET as the connection type.
But if I use OLEDB as the source it fails miserably and requires that the @pkgname variable be defined (as if it were a user variable, when in fact it was intended only as a parameter for the update statement).
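For what it's worth, the OLE DB connection type in an Execute SQL Task does not understand named parameters; it expects positional ? markers, with the parameter mapping keyed by 0-based ordinal. A sketch of the OLE DB form of the same statement:

UPDATE auditTable SET PackageName = ?

with System::PackageName mapped as an Input parameter named 0 on the Parameter Mapping page.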
Hello,

I have a test database with table A containing 10,000 rows and a table B containing 100,000 rows. Rows in B are "children" of rows in A - each row in A has 10 related rows in B (i.e. B has a foreign key to A).

Using ODBC I am executing the following loop 10,000 times, expressed below in pseudo-code:

"select * from A order by a_pk option (fast 1)"
"fetch from A result set"
"select * from B where fk_to_a = 'xxx' order by b_pk option (fast 1)"
"fetch from B result set" repeated 10 times

In the above pseudo-code 'xxx' is the primary key of the current A row. NOTE: it is not a mistake that we are repeatedly doing the A query and retrieving only the first row.

When the queries use fast-forward-only cursors this takes about 2.5 minutes. When the queries use dynamic cursors this takes about 1 hour. Does anyone know why the dynamic cursor is killing performance? Because of the SQL Server ODBC driver it is not possible to have nested/multiple fast-forward-only cursors, hence I need to explore other alternatives.

I can only assume that a different query plan is getting constructed for the dynamic cursor case versus the fast-forward-only cursor, but I have no way of finding out what that query plan is.

All help appreciated.
Kevin
Write a CREATE VIEW statement that defines a view named Invoice Basic that returns three columns: VendorName, InvoiceNumber, and InvoiceTotal. Then, write a SELECT statement that returns all of the columns in the view, sorted by VendorName, where the first letter of the vendor name is N, O, or P.
This is what I have so far,
CREATE VIEW InvoiceBasic AS
SELECT VendorName, InvoiceNumber, InvoiceTotal
FROM Vendors JOIN Invoices ON Vendors.VendorID = Invoices.VendorID
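A sketch of the second half of the exercise, assuming the view above compiles against your Vendors/Invoices tables (a character class in LIKE covers the N, O, or P requirement):

SELECT VendorName, InvoiceNumber, InvoiceTotal
FROM InvoiceBasic
WHERE VendorName LIKE '[NOP]%'
ORDER BY VendorName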
I compared the view's query plan with the plan I get if I run the same statement from the view definition directly, and got different results. The view's plan is more expensive and runs longer. The view contains 4 inner joins; statistics are updated for all tables. Any ideas?
Using SQL Server 2005 SP1, is there a way to deny users access to a specific column in a table and deny that same column in all stored procedures and views that use it? I have a password field in a database on which I do not want anyone to have select permissions (except one user). I denied access on the table itself; however, the views still allow the user to select that password. I know I can set this on a view-by-view basis, but I am looking for something a little more global.
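For reference, column-level DENY on the base table looks like the sketch below (names hypothetical). As described above, though, views with the same owner as the table bypass the check through ownership chaining, which would explain why the view still returns the column; there is no single global switch that also covers every view and procedure:

DENY SELECT ON dbo.Users (Password) TO SomeRole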
I have a select statement (that joins 5 tables) that performs fine when I execute it with an additional 'and' clause that uses the 'like' keyword:
select f1, f2, f3...f10 from table1, table2, .. ,table5 where table1.f3= table4.f4 and ... ... ... and table2.f7 = table1.f4 and table4.f4 like 'MYNAME%'
When I create a view with all of the statement except the last:
create view myview as select f1, f2, f3...f10 from table1, table2, .. ,table5 where table1.f3= table4.f4 and ... ... ... and table2.f7 = table1.f4
and then when I say
select * from myview where f4 like 'MYNAME%'
the performance tanks. The query plan completely changes and the response time slows down 100+ times.
I know views are bad, but I am using PeopleSoft and I need to make this view work. Any ideas how?
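One workaround sometimes used when a view plus an outer 'like' predicate optimizes badly (whether PeopleSoft can be pointed at it is a separate question): an inline table-valued function, which lets the predicate be compiled into the plan. A sketch using the placeholder names from the query above:

CREATE FUNCTION dbo.fnMyView (@namePrefix varchar(50))
RETURNS TABLE
AS
RETURN (
	SELECT f1, f2, f3 -- ...f10
	FROM table1, table2, table3, table4, table5
	WHERE table1.f3 = table4.f4
	-- ... remaining join conditions as in the view ...
	AND table4.f4 LIKE @namePrefix + '%'
)

called as SELECT * FROM dbo.fnMyView('MYNAME').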
I have a view on 5 tables:
- tblOrder: 62,000 records
- tblOrderRegel, joined on the first: 2,140,000 records
- tblAssortimentSamenstelling, joined on the second: 5,800 records
- tblArtikelVerpakking: 1,650 records
- tblAssortiment, also joined on the third: 30 records
When I run the following query:

SELECT * FROM [dbBIBSMonitor].[dbo].[vwOrderRegel_1]
WHERE IDOrder IN (SELECT IDOrder FROM [vwOrder] WHERE IDRelatie = 6)
GO

it takes more than 60 seconds before the result is there.
The sub select 'SELECT IDOrder FROM [vwOrder] WHERE IDRelatie = 6' returns a list of IDs. When I run the same select with these IDs instead of the sub select, the result returns in 1 second.
In the first select I get an index SCAN on the primary index of every table, with a Hash Match/Inner Join to match them.
In the second select I get an index SEEK on the primary index of every table, with a Nested Loop/Inner Join to match them.
For the sub select 'SELECT IDOrder FROM [vwOrder] WHERE IDRelatie = 6', the primary key is not used but another unique index containing three columns: [IDOrder] (also the primary key), [IDStatus] (1, 2 or 3; 90% = 3), and a date field.
The view behind the sub select is a UNION of three selects:

SELECT IDOrder, Date_1 FROM tblOrder WHERE IDStatus = 1
UNION
SELECT IDOrder, Date_2 FROM tblOrder WHERE IDStatus = 2
UNION
SELECT IDOrder, Date_2 FROM tblOrder WHERE IDStatus = 3

That's the reason it uses the unique index.
Is using this index the reason that it does not SEEK on the first table (also tblOrder) of the view in my select? In other words: can SQL Server use only one index on a table for a SEEK?
I hope somebody can give me an answer to this complex story.
Thanks Jaap
By the way, I have already tried selects with an INNER JOIN instead of a sub query, and also a correlated sub query.
Hi all,

SQL Server 2000 Service Pack 3 running on a quad Xeon Windows 2003 server with 6GB of RAM.

A few months ago we split our databases into operational and completed data... So now instead of doing select * from tblTest, we do select * from tblTestBoth, where tblTestBoth is a view like this:

Select *
From Operational.dbo.tblTest
UNION
Select *
From Archive.dbo.tblTest

The tables on both databases are heavily indexed, but it still takes much longer to do the select * from tblTestBoth than it does to do

Select *
From Operational.dbo.tblTest
UNION
Select *
From Archive.dbo.tblTest

Can anyone please tell me why that is? We are starting to get timeouts regularly... Should I use indexed views even though the data is inserted/updated multiple hundreds of times per second?

Thank you very much in advance for your help...

Regards,
Auday
Hi, I created a view on two huge tables. I tried to run a simple SELECT statement on this view and it took several hours to obtain the result. How can I improve the performance of a view? The view should make use of the indexes built on both tables, am I right? Thanks.
I have the same database on two different servers. One for production and one for testing. A view that I use runs in less than 2 seconds on the test system, but takes almost 2 minutes on the production server.
What I have noticed is on the test server the view will use an index. The production server ends up scanning a whole table. All indices are the same on both machines for the tables involved and I have updated the statistics. I even went through the process of creating a new table with its indices for the table that is being scanned. Both machines have had service pack 1 installed on them.
I have a view that has been working fine. The data is returned within a second or 2. But in the last 2 or 3 days it will periodically take so long to execute that it times out.
I am fairly new to SQL Server 2005, so I am not sure what to check. I am opening the view in Access using an ADP and also in Management Studio.
Also, there are only a couple of users and at the time we are experiencing problems, there really isn't anything going on. No one is trying to run a large query or anything like that.
CREATE VIEW VW_Emplacamentos WITH SCHEMABINDING AS
SELECT
	A.CD_Linha, -- This is the unique clustered field
	A.DT_Emplacamento,
	A.NR_CNPJ,
	A.CD_Municipio,
	A.VL_Potencia,
	A.VL_CapacidadeCarga,
	A.NR_QtdPax,
	A.NR_AnoFabricacao,
	A.NR_Chassi,
	A.SG_UF,
	A.CD_MarcaModelo,
	A.CD_Procedencia,
	A.CD_Combustivel,
	B.NM_Municipio,
	C.NM_Combustivel,
	D.NM_MarcaModelo,
	D.CD_Segmento,
	D.CD_Fabricante,
	E.NM_Segmento,
	F.NM_SubSegmento,
	G.NM_Fabricante
FROM dbo.TB_Emplacamentos AS A
INNER JOIN dbo.TB_Municipio AS B ON A.CD_Municipio = B.CD_Municipio
INNER JOIN dbo.TB_Combustivel AS C ON A.CD_Combustivel = C.CD_Combustivel
INNER JOIN dbo.TB_ModeloVeiculo AS D ON A.CD_MarcaModelo = D.CD_MarcaModelo
INNER JOIN dbo.TB_SegmentoVeiculo AS E ON D.CD_Segmento = E.CD_Segmento
INNER JOIN dbo.TB_SubSegmentoVeiculo AS F ON D.CD_SubSegmento = F.CD_SubSegmento
INNER JOIN dbo.TB_Fabricante AS G ON D.CD_Fabricante = G.CD_Fabricante
The following query is not performing well, and I don't know what's going on.
SELECT COUNT(*) FROM VW_Emplacamentos WHERE DT_Emplacamento BETWEEN '20040301' AND '20040316' AND NM_Fabricante = 'VW'
I've created a composite index with the following sequence: DT_Emplacamento, NM_Fabricante, NM_Segmento.
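Two things that might be worth checking, sketched with hypothetical index names. First, for this WHERE clause the usual advice is to lead the index with the equality column (NM_Fabricante) and put the range column (DT_Emplacamento) second, so a seek can use both; an index led by DT_Emplacamento cannot seek on the fabricante test. Second, since the view is schemabound, if the intent is to index the view itself it needs the unique clustered index first, and on non-Enterprise editions the query must say WITH (NOEXPAND) to use the view's indexes:

CREATE UNIQUE CLUSTERED INDEX IX_VW_Emplacamentos
ON dbo.VW_Emplacamentos (CD_Linha)

CREATE NONCLUSTERED INDEX IX_VW_Emplacamentos_Fab_Dt
ON dbo.VW_Emplacamentos (NM_Fabricante, DT_Emplacamento)

SELECT COUNT(*)
FROM VW_Emplacamentos WITH (NOEXPAND)
WHERE NM_Fabricante = 'VW'
AND DT_Emplacamento BETWEEN '20040301' AND '20040316'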
I recently added several indexed views to a high traffic table (high volume of both inserts and selects) because I needed to have some complicated unique constraints involving columns that allow NULL.
While I didn't notice performance problems at first, it appears to be that the CPU on the SQL Server is getting pegged when more than 10 or 15 simultaneous inserts are happening on the table. This is a quad proc 3Ghz Xeon, so the fact the CPU is hitting 90%+ while doing 15 inserts a second doesn't make sense to me.
Very quickly the sproc that is being repeatedly called by some middle tier components is taking 30+ seconds to execute, eventually causing timeouts. This sproc is very simple. It does a few quick select statements (that take well under 100ms), and then does an insert into the table in question. That's it.
The only thing I can think of is that the overhead of 3 separate indexed views on the table, each of which contains a significant subset of the total rows in the table (550,000+ rows), is causing SQL Server to get swamped trying to keep the indexes up to date.
Does this seem like a possibility? I'm planning on temporarily removing those indexed views to see how it performs without them, although this is dangerous because it creates the potential for invalid data.
Hi all, first of all I should mention that I have posted a similar question in the Oracle forum and received enough Oracle-related feedback and solutions, here (http://www.dbforums.com/showthread.php?t=1609238).
Now I want to know if LIKE searches are awful (performance wise) in SQL Server too. Do you recommend any solution to enhance them with some indexing trick? I want to avoid using MS Indexing Service (if I spell it correctly) as far as I can ;)
Note: We have a lot of LIKE '%abc%' queries at this system.
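For the record, a leading-wildcard LIKE '%abc%' cannot seek a B-tree index in SQL Server either, so conventional indexing tricks won't fix it. The in-engine Full-Text Search feature (a different thing from the OS-level Indexing Service) is the usual suggestion, with the caveat that CONTAINS matches words and prefixes, not arbitrary substrings. A sketch with hypothetical names, SQL Server 2005 syntax:

CREATE FULLTEXT CATALOG ftcat
CREATE FULLTEXT INDEX ON dbo.Docs (Body) KEY INDEX PK_Docs ON ftcat
SELECT DocID FROM dbo.Docs WHERE CONTAINS(Body, '"abc*"')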
Greetings,

I have 3 servers all running SQL Server 2000 - 8.00.818. Let's call them parent, child1, and child2.

On parent, I create a view called item as follows:

CREATE VIEW Item AS
SELECT * FROM child1.DBChild1.dbo.Item
UNION ALL
SELECT * FROM child2.DBChild2.dbo.Item

On child1 and child2, I have a table "item" with a column named "id", datatype uniqueidentifier (and many other columns). There is a non-clustered index created over column "id".

When I connect to the parent server and select from the view

SELECT id, col1, col2, ... FROM item WHERE id = '280A33E0-5B61-4194-B242-0E184C46BB59'

the query is distributed to the children "correctly" (meaning it executes entirely, including the where clause, on the children servers and one row is returned to the parent).

However, when I select based on a list of ids

SELECT id, col1, col2, ... FROM item WHERE id IN ('280A33E0-5B61-4194-B242-0E184C46BB59', '376FA839-B48A-4599-BC67-25C6820FE105')

the plan shows that the entire contents of both children's item tables (millions of rows each) are pulled from the children to the parent, and THEN the where criteria is applied.

Oddly enough, if I put the list of id's I want into a temp table

select * from #bv1

id
------------------------------------
280A33E0-5B61-4194-B242-0E184C46BB59
376FA839-B48A-4599-BC67-25C6820FE105

and then

SELECT id, col1, col2, ... FROM item WHERE id IN (select * from #bv1)

the query executes with the where criteria applied on the children databases, saving millions of rows from being copied back to the parent server.

So, I have a hack that works (using the temp table) for this case, but I really don't understand the root cause. After reading online books, in a way I am confused why ANY of the processing is done on the children servers. I quote:

================================================
Remote Query Execution

SQL Server attempts to delegate as much of the evaluation of a distributed query to the SQL Command Provider as possible. An SQL query that accesses only the remote tables stored in the provider's data source is extracted from the original distributed query and executed against the provider. This reduces the number of rows returned from the provider and allows the provider to use its indexes in evaluating the query.

Considerations that affect how much of the original distributed query gets delegated to the SQL Command Provider include:

- The dialect level supported by the SQL Command Provider. SQL Server delegates operations only if they are supported by the specific dialect level. The dialect levels from highest to lowest are: SQL Server, SQL-92 Entry level, ODBC core, and Jet. The higher the dialect level, the more operations SQL Server can delegate to the provider. (Note: The SQL Server dialect level is used when the provider corresponds to a SQL Server linked server.) Each dialect level is a superset of the lower levels; therefore, if an operation is delegated to a particular level, it is also delegated to all higher levels.

Queries involving the following are never delegated to a provider and are always evaluated locally:
- bit
- uniqueidentifier
================================================

This suggests to me that any query having where criteria applied to a datatype uniqueidentifier will have the where criteria applied AFTER data is returned from the linked server.

Any ideas on the root problem, and a better solution to get the query and all the where criteria applied on the remote linked server?

Thanks,
Bernie
I am running a view that INSERTs into a #temp table. On only certain days the INSERT speed into tempdb is very slow. The attached snapshot shows that after one minute only a few records have been inserted - and it doesn't happen every day; some days it's very fast.
Hi,

I have an UPDATE statement which runs very slow (about 1-2 minutes) on the Web as well as in Query Analyzer. Very plain UPDATE statement; uses indexes, etc. I thought the reason might be the table being unorganized because of the row expansions due to updating an existing column value with a larger value. Therefore, I ran a maintenance plan which reorganizes data and rebuilds indexes. But it seems that there is no improvement.

Any thought or advice will be greatly appreciated.

Sincerely,
Pelin Bali.
Hi,

For an unavoidable reason, I have to use row-by-row processing (update) on a temporary table to update a history table every day. I have around 60,000 records in the temporary table and about 2 million in the history table. Could anyone please suggest different methods to improve the runtime of the query?

Would highly appreciate it!
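If the row-by-row requirement can ever be relaxed, the usual alternative is a single set-based UPDATE joining the two tables, which tends to beat 60,000 individual updates by a wide margin. A sketch with hypothetical table and column names:

UPDATE h
SET h.SomeColumn = t.SomeColumn
FROM dbo.HistoryTable AS h
INNER JOIN #TempTable AS t ON t.KeyColumn = h.KeyColumn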
Below is a simple UPDATE that I have to perform on a table that has about 2.5 million rows (about 4 million in production). This query runs for an enormous amount of time (over 1 hour). Both ChangerRoleID and ChangerID are indexed (not unique). Is there any way to performance tune this? Controlling the physical drive of the log file isn't possible at our client sites (we don't have control), and the recovery model needs to be set to "Full".

UPDATE CLIENTSHISTORY SET ChangerRoleID = ChangerID WHERE ChangerRoleID IS NULL

Any help would be greatly appreciated!
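One technique that often helps under the Full recovery model, offered as something to test rather than a guaranteed fix: run the update in capped batches so each transaction stays small, and back up the log between batches. On SQL Server 2000 the cap can be set with SET ROWCOUNT:

SET ROWCOUNT 10000
WHILE 1 = 1
BEGIN
	UPDATE CLIENTSHISTORY
	SET ChangerRoleID = ChangerID
	WHERE ChangerRoleID IS NULL
	IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0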
I have a function and a batch that consist of the same SQL statement, and they get the same result, but the time they take differs, they produce slightly different query plans, and another significant difference is the estimated row counts. The function is always slower than the batch.
Does anyone know why the same SQL would produce query plans with different performance? How can I make the function as fast as the batch?
We are trying to create a view that references lengths in both metres and feet.
What we want to do is to create a baselength column which either holds the value of the metres column or calculates the metric value of the feet column if there is no value in the metric column.
We can do all the maths for the calculations etc; it is just putting the IF logic into the view to test the values that we are struggling with (see the sketch below).
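In T-SQL the conditional goes into the view as a CASE expression, or COALESCE for the plain null-fallback case. A sketch with hypothetical table and column names, using 0.3048 metres per foot:

CREATE VIEW dbo.vwBaseLengths AS
SELECT
	LengthID,
	COALESCE(LengthMetres, LengthFeet * 0.3048) AS BaseLengthMetres
FROM dbo.Lengths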
I am executing the CASE statement listed below:

USE Northwind
SELECT
	MONTH(OrderDate) AS OrderMonth,
	SUM(CASE YEAR(OrderDate) WHEN 1996 THEN 1 ELSE 0 END) AS c1996,
	SUM(CASE YEAR(OrderDate) WHEN 1997 THEN 1 ELSE 0 END) AS c1997,
	SUM(CASE YEAR(OrderDate) WHEN 1998 THEN 1 ELSE 0 END) AS c1998
FROM Orders
GROUP BY MONTH(OrderDate)
ORDER BY MONTH(OrderDate)

According to BOL I should be able to save this query as a view. However, when I try to save the query as a view I get an error message stating

"View definition includes no output columns or includes no items in the FROM clause"

According to what I have read, although the CASE statement is not supported via the enterprise query pane, the query should still run and be saved. In my case, however, I cannot seem to save it no matter what I try.

Can anyone shed any light on the matter?

Thanks in advance,
Bryan
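A guess at the culprit, since the SELECT itself is fine: a view definition cannot contain USE, and cannot contain ORDER BY without TOP. Stripping those two lines usually lets it save; a sketch (view name hypothetical):

CREATE VIEW dbo.OrdersPerMonthByYear AS
SELECT
	MONTH(OrderDate) AS OrderMonth,
	SUM(CASE YEAR(OrderDate) WHEN 1996 THEN 1 ELSE 0 END) AS c1996,
	SUM(CASE YEAR(OrderDate) WHEN 1997 THEN 1 ELSE 0 END) AS c1997,
	SUM(CASE YEAR(OrderDate) WHEN 1998 THEN 1 ELSE 0 END) AS c1998
FROM dbo.Orders
GROUP BY MONTH(OrderDate)

with the ORDER BY applied when selecting from the view.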
I am using dynamic SQL in my report. I cannot see the SQL the report is generating in SQL Profiler. Is there somewhere else I can view it? Thanks, Linda
I've just inherited a system and have some concerns about the speed of connections to a remote server (SQL2000). If I do a simple select statement on the table below, it takes 14 minutes to retrieve 6 million rows across a 2Mb line. Obviously it's a reasonable amount of data to retrieve, but I would have thought this would be quicker, if I'm honest. Run locally, this is 50 seconds.

My thoughts are that there may be some issues with our connection (we get general network errors sporadically, which are being looked at), but I wanted some thoughts on whether the performance is acceptable for what it is doing with what is available. I don't think there is a SQL issue, but want to check if this sounds about right.

It's early days, so I'm after a general impression of the speed of retrieval for the amount of data on the available bandwidth. Assuming a best performance scenario, what is the minimum time it should take as a best guess?

Thanks
Ryan

CREATE TABLE [FIELD_VALUES] (
	[DEALER_DATA_ID] [int] NOT NULL ,
	[FIELD_CODE] [varchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
	[FIELD_VALUE] [numeric](15, 5) NULL ,
	[CHANGED_TYPE] [int] NULL ,
	CONSTRAINT [PK_FIELD_VALUES] PRIMARY KEY CLUSTERED
	(
		[DEALER_DATA_ID],
		[FIELD_CODE]
	) WITH FILLFACTOR = 90 ON [PRIMARY]
) ON [PRIMARY]
GO
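A rough back-of-envelope check, using assumed averages for the variable-width columns: each row carries roughly int (4) + varchar(10) (~8) + numeric(15,5) (9) + int (4) ≈ 25-30 bytes of data, plus protocol overhead. 6,000,000 rows × ~30 bytes ≈ 180 MB. A 2 Mbit/s line moves at best ~0.25 MB/s, so 180 MB / 0.25 MB/s ≈ 720 seconds, about 12 minutes. On those assumptions, 14 minutes is close to the theoretical floor of the line, which points at bandwidth rather than SQL.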
Have a view where I need to enter a conditional IF-THEN-ELSE statement in a new field (the field has an alias). If this were an Access query I would do the following:
IIf([dbo.AR1_CustomerMaster.EmailAddress] Is Null, "A", "B")
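The T-SQL equivalent of Access's IIf is a CASE expression (the IIF function does not exist in SQL Server until much later versions), which can go straight into the view's column list with the alias:

CASE WHEN dbo.AR1_CustomerMaster.EmailAddress IS NULL THEN 'A' ELSE 'B' END AS EmailFlag

where the alias EmailFlag is a placeholder for whatever the field is called.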