I can't seem to place OPTION (RECOMPILE) in any valid position so that the following procedure executes without a syntax error.
USE [PO]
GO
/****** Object: StoredProcedure [dbo].[npSSUserLoad] Script Date: 4/18/2015 3:57:38 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
[Code] ...
-- Generated code - DO NOT MODIFY
-- From Object Schema: 'C:XXXXXX.NetPOPOModel\_ObjectSchema
-- To regenerate this procedure use the 'Open With' option on file _ObjectSchema and select POCodeGen.exe
Declare @SqlCmd nvarchar(max)
Declare @ParamDefinitions nvarchar(1024)
Set @ParamDefinitions = N'@UserId int,NTUser varchar(30), @XmlResult XML OUTPUT'
Set @SqlCmd = N'Set @XmlResult = (
    Select [UserId] [a],
           [UserName] [b],
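For what it's worth, OPTION (RECOMPILE) is a query hint, so it must come at the very end of the statement it applies to; with sp_executesql that means inside the dynamic SQL string, not after the EXEC call. A SET statement does not accept an OPTION clause, so the assignment has to become a SELECT. A minimal sketch along those lines (the FROM clause and table name are hypothetical, since the real ones are elided above; note also that @ParamDefinitions above is missing the @ in front of NTUser, which will itself fail at run time):

Set @SqlCmd = N'Select @XmlResult = (
    Select [UserId] [a], [UserName] [b]
    From [SSUser]                -- hypothetical table; the real FROM clause is elided above
    Where [UserId] = @UserId
    For Xml Path(''''), Type
) Option (Recompile)'
Exec sp_executesql @SqlCmd, @ParamDefinitions,
     @UserId = @UserId, @NTUser = @NTUser, @XmlResult = @XmlResult Output

Alternatively, if the goal is to recompile the whole procedure every time, CREATE PROCEDURE ... WITH RECOMPILE avoids touching the dynamic SQL at all.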
Hi everyone. In my SQL Server Management Studio Express, the Server Type option is shown on startup but greyed out, so the value is fixed to Database Engine. (I'm trying to work on a SQL Server Compact Edition database through SSMS, which is why I'm trying to get this to change.) Also, after I connect I go to Object Explorer, expand the server node, and go to Replication. When I expand Replication, I get the "Local Subscriptions" option, but nothing for Publications. (I want to work on merge replication, which is why I desperately need Publications to work.) Am I missing something here? I did not install SQL Server separately; I only have what comes bundled with the Visual Studio 2005 setup.
I've been trying to get a range of values out of my SQL Server 2000 db without success. The field in question has the data type char(8) and looks like this:
House_numbr_pub (leading spaces in front of each value)
 140A
 140
 141
 142
 143
 144
 145
 146
 147
 148
 149
 150
 151
 .
 .
 .
 14500
 .
 .
 .
Does anyone know how to write a SQL statement that will return 140-150 (but excluding the ' 14500'), or 100-1000? I tried the following WHERE clause to return a range between 100 and 1000.
WHERE (cook.STREET_PUB LIKE 'lincoln%')
  AND (LEN(LTRIM(cook.HOUSE_NUMBR_PUB)) BETWEEN 3 AND 4)
  AND (    (LTRIM(cook.HOUSE_NUMBR_PUB) >= '100')
       AND (LTRIM(cook.HOUSE_NUMBR_PUB) <= '1000') )
This WHERE clause only returns two records (100 and 1000). I want it to return 100-1000.
I also tried the following where clause:
WHERE LTrim(cook.HOUSE_NUMBR_PUB) LIKE '1[45][0-9]'
   OR LTrim(cook.HOUSE_NUMBR_PUB) LIKE '1[45][0-9]'
However, building this on the fly with .NET will take some effort if someone is trying to search the range 1-10000000.
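The string comparison is the core problem here: char values compare character by character, so '1000' sorts before '999'. One sketch that works on SQL Server 2000 (no TRY_CAST back then), assuming values like ' 140A' should simply fall out of a numeric range search: the CASE guarantees the CAST only runs on digit-only values, and the bounds can be any integers, so nothing has to be built on the fly:

WHERE cook.STREET_PUB LIKE 'lincoln%'
  AND CASE WHEN LTRIM(cook.HOUSE_NUMBR_PUB) NOT LIKE '%[^0-9]%'  -- digits only, safe to cast
           THEN CAST(LTRIM(cook.HOUSE_NUMBR_PUB) AS int)
      END BETWEEN 100 AND 1000

Non-numeric values make the CASE return NULL, and NULL BETWEEN anything is false, so those rows are excluded without an error.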
Is it possible to do the following without cursors or creating an identity column:
I have a table of legacy data with ~1 million records. I need to insert this into the new table, which has a unique varchar(11) key. In the new system this key is generated by calling a stored procedure that returns the next key in the sequence. To put the legacy records in the same table, I want to first create a new column at the end of the legacy table and populate it with set-based SQL, without using a cursor and calling the SP for each and every record to get a unique varchar(11) key.
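This only works set-based if the key format can be reproduced outside the SP. A minimal sketch, assuming the key is just a zero-padded sequence number and that @next is the value the key SP would hand out next (table and column names are hypothetical; ROW_NUMBER requires SQL 2005+):

ALTER TABLE LegacyData ADD NewKey varchar(11) NULL;
GO
DECLARE @next int;
SET @next = 500000;   -- hypothetical: the next value the key SP would return

WITH numbered AS (
    SELECT NewKey,
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
    FROM LegacyData
)
UPDATE numbered
SET NewKey = RIGHT(REPLICATE('0', 11) + CAST(@next + rn - 1 AS varchar(11)), 11);

Afterwards the key SP's seed would need to be advanced past @next plus the legacy row count so the two sequences never collide.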
Hello forum. I have a problem that is killing me. Initial data: a table (Tbl_1) to collect data from users (via a form in VB), a view (View_1) to compute some columns (part of the business logic), and a table (Tbl_2) designated for a trigger. I will try to summarize the contents of the above tables...

create table Tbl_1 (
    id int not null,
    code varchar(10) null,
    TBO int not null,          -- (Time Between Overhauling)..hours (life cycle)
    T_Hrs_AtLastOvh int null,  -- total running hours at last overhauling
    TRH int null,              -- total running hours (the life of the equipment)
)
GO
Create view View_1 as
select code, [TBO]-([TRH]-[T_Hrs_AtLastOvh]) as HrsTo_NextOvh
from Tbl_1
GO
create table Tbl_2 (
    id int not null,
    code varchar(10) null,
    Start_dt smalldatetime,
    Stop_dt smalldatetime,
)
GO
--drop table Tbl_1
--drop view View_1
--drop table Tbl_2
Let's insert some data into Tbl_1:
Insert into Tbl_1
select 1,'cod1',2000,2500,3000 union all
select 2,'cod2',3000,4000,7000 union all
select 3,'cod3',1000,2000,2000 union all
select 4,'cod4',1500,3000,3000
GO
The result of View_1 is shown in Fig.1:
Fig.1
code   HrsTo_NextOvh
cod1   1500
cod2   0
cod3   0
cod4   2500

Fig.2
code   HrsTo_NextOvh
cod1   1500
cod2   3000
cod3   0
cod4   2500
The operator performs the requested job on the equipment and the life cycle starts counting again. Suppose we have:
Update Tbl_1 set T_Hrs_AtLastOvh=7000 where id=2
GO
The result of View_1 is Fig.2. I wish to insert (via a trigger) into table Tbl_2 all codes that have [HrsTo_NextOvh]=0 in View_1, and to automatically record the date, marking the record with a property like 'starting job'. After the operator has executed the job, he will update Tbl_1 (the result is in Fig.2) and the trigger has to record this with a property like 'completed job'. Depending on the time between overhauling and the operating hours of the equipment, this task happens more or less often. My intention is to record the time needed to execute a job and to keep a history of events.
Any suggestion to solve my problem is fully appreciated.
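A sketch of one possible trigger, assuming the Start_dt/Stop_dt pair in Tbl_2 stands in for the 'starting job'/'completed job' properties (Tbl_2 as defined above has no status column), and that the view expression can be computed inline from the inserted rows:

create trigger trg_Tbl_1_Ovh on Tbl_1
after update
as
begin
    -- 'starting job': open a Tbl_2 record for rows that have just reached 0 hours
    insert into Tbl_2 (id, code, Start_dt)
    select i.id, i.code, getdate()
    from inserted i
    where i.TBO - (i.TRH - i.T_Hrs_AtLastOvh) = 0
      and not exists (select 1 from Tbl_2 t where t.id = i.id and t.Stop_dt is null);

    -- 'completed job': close the open record when T_Hrs_AtLastOvh is reset
    update t
    set Stop_dt = getdate()
    from Tbl_2 t
    join inserted i on i.id = t.id
    join deleted d on d.id = i.id
    where i.T_Hrs_AtLastOvh <> d.T_Hrs_AtLastOvh
      and t.Stop_dt is null;
end

The NOT EXISTS guard keeps repeated updates from opening duplicate records while a job is still in progress.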
I tried some tests and I am left with one doubt: why doesn't the optimizer use a Constant Scan on normal tables? For instance:
Code Snippet
--drop table #tmp
create table #tmp (id Int Identity(1,1) Primary key, name VarChar(250))
go
insert into #tmp(name) values(NEWID())
insert into #tmp(name) values(NEWID())
go
set statistics profile on
go
-- Execution plan creates a Constant Scan
select * from #tmp where id = 1 and id = 5
go
set statistics profile off
GO
--drop table tmp
create table tmp (id Int Identity(1,1) Primary key, name VarChar(250))
go
insert into tmp(name) values(NEWID())
insert into tmp(name) values(NEWID())
go
set statistics profile on
-- Why doesn't the execution plan create a Constant Scan in this case?
select * from tmp where id = 1 and id = 5
go
set statistics profile off
Hi, please tell me how I can trace a modification on a table, such as an "insert" of a record, and synchronize a mirror table at the same moment the insert happens, BUT with no indexes, no trace jobs, and no modifications or objects on the master table... ha?
I'm still a database newbie, so I would like to solicit thoughts about the smartest way to do something in SQL Server.

My company has a web application that we customize for each client. We can do this because everything is database driven. We have database tables that contain our HTML, as well as some standard tables for each database. We have an in-house app that lets us tweak both of these things and creates a new web site and database tailored to each project.

Each of these sites has a table that stores a schedule our clients use. The records in this schedule table change when information in other custom-generated tables changes.

My company currently uses a legacy FoxPro app to update the schedule table. The FoxPro app contacts SQL Server, reads a table with a list of tables and scheduling information to check, checks each of those items, and updates the schedule table. I would like to lose the FoxPro app.

At first thought, as a database newbie, putting triggers in each of the tables to update the schedule when something changes seems the way to go. However, since we change part of the schema for each client (we have an app that generates the database tables unique to each client), I would like a scheme that does not involve creating a different trigger for each new table. I would also like something that updates in real time; right now the FoxPro app is executed once a day.

I was thinking of making a large stored procedure and putting an identical call to it in each table. Each table would have the same trigger, fired when a record is altered, that calls the stored procedure with relevant arguments to update the schedule.

Does this sound like a smart way to solve this problem, or am I not thinking "database enough"? Any thoughts are welcome. I would like to build a better solution.

Steve
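For what it's worth, the shared-procedure idea can be sketched roughly like this (all names hypothetical; the generator app would stamp the same short trigger onto each table it creates, so no per-table logic lives in the triggers themselves):

create procedure dbo.UpdateSchedule
    @SourceTable sysname
as
begin
    -- hypothetical: flag the schedule rows that depend on the changed table
    update s
    set s.NeedsRefresh = 1
    from Schedule s
    where s.SourceTable = @SourceTable;
end
go
-- identical trigger body for every generated table, e.g. one named Client_Foo
create trigger trg_Client_Foo_Schedule on Client_Foo
after insert, update, delete
as
    exec dbo.UpdateSchedule @SourceTable = 'Client_Foo';

Since the trigger body is identical apart from the table name, the generator can emit it from a template, which keeps the real logic in one stored procedure.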
I'm a bit stuck with this one... hope someone can help.
I'm trying to develop an application that will run on a Pocket PC with Windows CE 4.2.
I'm using .Net 2003 and the application is in VB.Net.
I can run the application on the Pocket PC fine (i.e. the form paints, buttons work), until I need to connect to the SQL DB on the server.
When I try to create a connection object (Dim dbconnection As New SqlClient.SqlConnection)
I get an error stating .. "This application (test.exe) requires a newer version of .Net Compact Framework than the one installed on the device" .... "could not load System.Data.SqlClient.SqlConnection from assembly System.Data.SqlClient Version=1.0.5000.0"
The version that it is looking for is 1.0.5000.0, which is the version VS2003 is using.
I've downloaded Compact Framework v1 SP3 and ran all the cabs on the Win CE device... it looked like it installed fine... but the problem still exists.
I have a SQL command which I run on two separate servers. Both servers are configured and built the same. On server 1 it takes mere seconds, but on server 2 it takes over 5 minutes.
I have checked the execution plans on both servers and they are completely different. I ran UPDATE STATISTICS WITH FULLSCAN on both servers, but the execution plans were still different.
My question is: why are the execution plans so different, and how do I get them to execute with the same plan?
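When two supposedly identical servers pick different plans, the usual suspects are row counts, statistics, and session SET options. A quick sanity check worth running on both sides (a sketch; the DMV requires SQL 2005+):

-- compare actual row counts per table on each server
SELECT OBJECT_NAME(p.object_id) AS table_name, SUM(p.rows) AS row_count
FROM sys.partitions p
WHERE p.index_id IN (0, 1)      -- heap or clustered index only
GROUP BY p.object_id;

-- capture the estimated plan as XML so the two servers can be diffed
SET SHOWPLAN_XML ON;
GO
-- ... run the slow command here ...
GO
SET SHOWPLAN_XML OFF;
GO

If the row counts differ materially, the plans are expected to differ too; identical data with different plans points at SET options or server-level settings instead.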
I tried this:

use northwind
go
SELECT OrderDate
FROM Orders
WHERE OrderDate > '19950101'

See the query plan? OK. Now:

SELECT OrderDate, EmployeeId
FROM Orders
WHERE OrderDate > '19950101'

See the query plan? What happened? The only way to get an index seek instead of an index scan is to force the index usage (with(index=orderdate)), but I don't like this solution. Also try this:

SELECT *
FROM Orders
WHERE employeeId > 9

and

SELECT *
FROM Orders
WHERE employeeId > 8

Can someone explain why this happens? And how can I overcome the performance loss problem (well, not in the Orders table, but in my table there are 300K records, and doing a scan to retrieve 50 records is not exactly what I want)?

Thanks to all
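The scan typically appears because the extra column makes the narrow index non-covering: a seek would need a bookmark lookup for every qualifying row, and past a certain estimated row count the optimizer judges one scan cheaper than thousands of lookups. A sketch of the usual fix, an index that covers both referenced columns (index name hypothetical):

CREATE INDEX ix_Orders_OrderDate_EmployeeId
ON Orders (OrderDate, EmployeeId)

-- this query can now be answered entirely from the index, so a seek is back on the table:
SELECT OrderDate, EmployeeId
FROM Orders
WHERE OrderDate > '19950101'

The same reasoning explains the employeeId > 9 versus > 8 difference: the estimated row count crosses the tipping point where lookups stop being worth it.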
I have an '05 VB.NET Windows application that will be used as a smart client for our folks in the field. The Windows application includes '05 SQL Server Express. In the Data Sources of my project I included and attached the file, going through the wizard: Microsoft SQL Server Database File (SqlClient) = (myfile.mdf), and then selected all tables, views, stored procedures, and functions... the corresponding myfileDataSet.xsd and the myfile.mdf are now located in the root of the project. I then recompile the project without error and go to the properties section, Publish tab... select the Application Files button; the myfile.mdf Publish Status is set to Include and the Download Group set to Required. With this in place, I right-click the myfile.mdf in the Solution Explorer and, under the properties section, have set the Build Action to Compile and Copy to Output Directory to Copy always.
After the publish is completed on the client machine... installing Windows Installer 3.1, SQL Server Express, and the Windows application... everything works fine, except that the application contains no data.
My problem is that I need to attach the myfile.mdf to the new SQL Server Express instance on the client machine during the installation process, so that when the application fires it is pointed to the above location on the client.
Any ideas... scripts... includes for an ApplicationEvents.vb on how to do this? Thanks a ton... :)
Kind regards,
BillB
Your mind is like a parachute.. It has to be able to open, for it to work.
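On the T-SQL side, the attach itself is small; a sketch of something an ApplicationEvents.vb startup handler could execute against the client's Express instance (database name and path are hypothetical):

-- attach the deployed .mdf to the local Express instance, if not already attached
IF DB_ID(N'MyFileDb') IS NULL
    CREATE DATABASE MyFileDb
    ON (FILENAME = N'C:\Program Files\MyApp\myfile.mdf')
    FOR ATTACH;   -- use FOR ATTACH_REBUILD_LOG if the .ldf was not shipped

Alternatively, SQL Server 2005 Express can attach on first open straight from the connection string, e.g. AttachDbFilename=|DataDirectory|\myfile.mdf with User Instance=True, which avoids install-time scripting entirely.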
First of all, I would like to announce that this is my first time posting here. However, I'm pretty sure I'm in the best place to ask what I want. To cut the story short: I'm querying a SQL database on a remote machine and having the result saved (mapped) to another table in another database on the same remote machine. The destination table was empty before the query was run the first time. I have been searching for some smart way so that when I modify the source tables my query is based on, only the modified rows are affected. In other words: if the row is already there unchanged, do nothing; if it exists but changed, update the existing record; otherwise it's a new record and it gets inserted. I think what I need will involve some coding, yes? I don't know if I'm being clear about the requirement, but I know that you are experts and can direct me. Waiting for your valuable replies.
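What's described is the classic upsert pattern. On SQL Server 2008+ it can be a single MERGE; on older versions an UPDATE followed by an INSERT does the same job. A sketch with hypothetical database, table, and column names:

-- update rows that already exist in the destination but have changed
UPDATE d
SET    d.SomeCol = s.SomeCol
FROM   DestDb.dbo.DestTable d
JOIN   SourceDb.dbo.SourceTable s ON s.KeyCol = d.KeyCol
WHERE  d.SomeCol <> s.SomeCol;   -- touch only rows that actually changed

-- insert rows that are not there yet
INSERT INTO DestDb.dbo.DestTable (KeyCol, SomeCol)
SELECT s.KeyCol, s.SomeCol
FROM   SourceDb.dbo.SourceTable s
WHERE  NOT EXISTS (SELECT 1 FROM DestDb.dbo.DestTable d WHERE d.KeyCol = s.KeyCol);

Run as a scheduled job or wrapped in a stored procedure, this keeps unchanged rows untouched, which is exactly the "do nothing if it's already there" behavior asked for.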
I'm looking for an in-depth book, article, FAQ, whatever, regarding the query optimizer...
I've read the books online pretty thoroughly and have been sql coding for a number of years. The system I work on relies heavily on real time access to data and the number crunching procedures we use are a critical part of the design. For the most part, sometimes through trial and error, I have been able to find ways to achieve the performance we need, but I'm often surprised by the methods that prove most effective.
For example, I have cases where I can only get the performance I'm looking for using table functions, and other cases where indexed temporary tables are the only way. I have statements that run fast as a select statement, but when converted to an update statement limp along, forcing me to resort to cursors, temp tables, or table hints with varying degrees of success.
I'm wondering if anyone has come across material that takes an in-depth look at the various technologies available and how to tweak queries. I want to get away from hours of testing and hacking.
Way back when, and at least in version 7 IIRC, the query optimizer gave up when the WHERE clause in a statement contained more than 4 search conditions. Does anyone know if such a limitation still exists in MS SQL 2005? The BOL seems to be silent on the issue.

Boa
I'm very puzzled by the choice of NC index being made by the optimizer in this example. I don't actually think it should use an NC index at all.

I have:

Table: CustomerStatus_T
Single data page
19 records
Clustered index on CustomerStatusID:

CREATE TABLE [CustomerStatus_T] (
    [CustomerStatusID] [int] NOT NULL,
    [Name] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [Description] [varchar] (200) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [Code] [varchar] (30) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [CodeAlt] [varchar] (30) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [Ordinal] [int] NULL,
    [Default] [int] NULL,
    [Display] [bit] NOT NULL,
    [StatusType] [varchar] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [DateCreated] [smalldatetime] NULL,
    [DateUpdated] [smalldatetime] NULL,
    [DateArchived] [smalldatetime] NULL,
    CONSTRAINT [PK_ROMS_CustomerStatus] PRIMARY KEY CLUSTERED ([CustomerStatusID]) ON [PRIMARY]
) ON [PRIMARY]

If I run the following query, it does exactly what I expect and scans the clustered index:

SELECT customerStatusID, [Name]
FROM CustomerStatus_T
WHERE dateArchived IS NULL
AND Display = 1
AND StatusType = 'Q'

and gives the following QEP and IO statistics:

|--Clustered Index Scan(OBJECT:([Reach_Roms].[dbo].[CustomerStatus_T].[PK_ROMS_CustomerStatus]),
      WHERE:(([CustomerStatus_T].[DateArchived]=NULL AND [CustomerStatus_T].[StatusType]='Q')
      AND Convert([CustomerStatus_T].[Display])=1))

Table 'CustomerStatus_T'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0.

If I now put a NC index on the statustype column:

create index ix_nci_statustype on customerstatus_t(statustype)

the query plan changes to:

SELECT customerStatusID, [Name]
FROM CustomerStatus_T
WHERE dateArchived IS NULL
AND Display = 1
AND StatusType = 'Q'

|--Filter(WHERE:([CustomerStatus_T].[DateArchived]=NULL AND Convert([CustomerStatus_T].[Display])=1))
   |--Bookmark Lookup(BOOKMARK:([Bmk1000]), OBJECT:([Reach_Roms].[dbo].[CustomerStatus_T]))
      |--Index Seek(OBJECT:([Reach_Roms].[dbo].[CustomerStatus_T].[ix_nci_statustype]),
            SEEK:([CustomerStatus_T].[StatusType]='Q') ORDERED FORWARD)

Table 'CustomerStatus_T'. Scan count 1, logical reads 7, physical reads 0, read-ahead reads 0.

For some bizarre reason, the optimizer thinks that a NC index lookup on a single-page table, which ultimately costs 7 IOs, is cheaper than a table (or clustered index) scan of a single page. Why? The showplan cost also shows that it expects the NC index to be cheaper (which is presumably why it goes and uses it), but even after running UPDATE STATISTICS on the table it still chooses the same idiotic query plan.

Any thoughts, or has anyone seen similar behaviour before, and can anyone please explain it to me?

p.s. I don't actually WANT to put a NC index on this table, but I noticed the behaviour by accident, which is why I'm asking the question :-)
I've created a simple application that uses a SQLCE 3.5 database. When I debug it, SQLCE 3.5 is deployed to the emulator. However, I made a "smart device cab project" for my application and copied the cab file to my Windows Mobile 6 device, and it does not deploy SQLCE 3.5. I don't see a way to specify the prerequisites of the "smart device cab project" like you can in a normal setup project. How can I get SQLCE 3.5 to deploy with my application... or even just get it onto my device? I've tried installing it on my desktop with the device connected via ActiveSync, but it doesn't install on the device like the Compact Framework did.
So in a previous thread I discovered that in order to actually subscribe to any publication, the publisher needs to be a well-known network name, requiring DNS resolution. You can't simply point a SQL Express instance at an ip address\instance and have it resolve the communications.
Trying to develop a smart client which will have a data-centric approach for storage of local data. The server is SQL Server 2000 and the footprint database is going to be SQL Server Express 2005. Is merge replication a viable option? Can somebody guide me on this approach?
Is there any other architecture proposed for smart clients where the data transfer will be in a couple of GBs? Can somebody tell me more about SOA as well?
Is there a DMV or similar in SQL 2012, or SQL 2008, that shows when a statistic was last used by the optimizer? I would like to cleanup some of the auto-generated stats, assuming it's possible to do so. In particular I'm looking to drop those statistics that were created by one-off queries, data loads, etc, and are now doing nothing but adding to the execution time of Update Statistics jobs.
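As far as I know there is no DMV that records when the optimizer last *used* a statistic, but the auto-created ones are easy to enumerate, and STATS_DATE shows when each was last updated, which is a rough proxy for spotting one-off leftovers. A sketch:

SELECT  OBJECT_NAME(s.object_id)            AS table_name,
        s.name                              AS stat_name,
        STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM    sys.stats s
WHERE   s.auto_created = 1                           -- the _WA_Sys_* stats created by the optimizer
  AND   OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY last_updated;

-- dropping one, if it turns out to be dead weight (name hypothetical):
-- DROP STATISTICS dbo.SomeTable._WA_Sys_00000003_0EA330E9;

One caveat: if a dropped auto-stat is still needed, the optimizer will simply recreate it on the next relevant query (assuming AUTO_CREATE_STATISTICS is on), so the cleanup is low risk but potentially short-lived.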
Guys, what I need is a tool which gives details on the choice of an execution plan by SQL Server. For example, the cost for a hash join might be 200 and 100 for a nested loop, and therefore a nested loop is used. Same thing for the access paths for each table/view involved. In Oracle, we turn on event 10053 to see this kind of info.

Thanx
Daniel
I am trying to resolve performance issues in a third-party application. I have run the Profiler and found a transaction that performs a table scan against a 6 million row table. This transaction occurs repeatedly, so I thought: just add an index on the columns in the WHERE clause used here. After adding the index, I looked at the estimated execution plan in Query Analyzer and found that it still performs the table scan. If I run the query, it takes over 60 seconds; if I add an index hint, it runs in under a second. I ran DBCC SHOW_STATISTICS to see if the statistics were up to date:
Statistics for INDEX 'IX_Finish_dept'.
Updated             Rows     Rows Sampled  Steps  Density        Average key length
------------------  -------  ------------  -----  -------------  ------------------
Jun 26 2007 5:18PM  6832336  6832336       150    2.1415579E-7   18.0
(1 row(s) affected)
All density   Average Length  Columns
------------  --------------  ------------
2.1875491E-7  8.0             finish
1.9796084E-7  18.0            finish, dept
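For reference, the hint mentioned above would look like the sketch below (table name and literal values hypothetical, matching the column names in the statistics output). The deeper question of why the optimizer refuses the index on its own usually comes down to a cardinality estimate that predicts too many rows, or an implicit conversion on the WHERE-clause columns that makes the index unusable for a seek:

SELECT *
FROM   dbo.BigTable WITH (INDEX(IX_Finish_dept))  -- force the new index
WHERE  finish = 'GLOSS'        -- hypothetical values
  AND  dept   = 'ASSEMBLY';

Comparing the estimated row counts of the hinted and unhinted plans shows whether the optimizer's estimate, rather than the index itself, is the real problem.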