I'm trying to come up with a replacement for @@IDENTITY, because I have SQL code I want to make more portable.
Original:
ID = MyDataLayer.Execute("INSERT INTO X(a,b,c) VALUES(A,B,C); SELECT @@IDENTITY")
Proposed solution:
lock(MyDataLayer)
ID = MyDataLayer.Execute("SELECT MAX(id)+1 FROM X")
if(ID==null) ID=1
MyDataLayer.Execute("INSERT INTO X(id,a,b,c) VALUES(ID,A,B,C)")
unlock(MyDataLayer)
(This is of course pseudocode, for SQL Server I'd need SET IDENTITY_INSERT.)
Do you think the preceding solution is equivalent to the original?
Do you know something better?
"Equivalent" here should mean not necessarily generating the same IDs,
but maintaining functionality and consistency across the database.
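For comparison, a common portable pattern is a separate counters table rather than MAX(id)+1, since an application-level lock does not protect against other clients hitting the database directly. This is a minimal sketch, not from the original post; the table and column names (Counters, TableName, NextId) are illustrative:

```sql
-- Hypothetical portable counter table (names are illustrative).
CREATE TABLE Counters (
    TableName VARCHAR(64) PRIMARY KEY,
    NextId    INT NOT NULL
);

-- Reserve an ID atomically: the UPDATE takes a row lock, so two
-- concurrent callers cannot read back the same value.
BEGIN TRANSACTION;
UPDATE Counters SET NextId = NextId + 1 WHERE TableName = 'X';
SELECT NextId - 1 FROM Counters WHERE TableName = 'X';
COMMIT;
-- Then: INSERT INTO X(id, a, b, c) VALUES (<reserved id>, A, B, C)
```

Unlike MAX(id)+1, this survives concurrent inserts and deleted rows, at the cost of an extra table and a brief lock on the counter row.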
I have an SSIS package that imports an Excel file, using Visual Basic 2005, into my SQL 2005 database. All the fields are the same in the DB and the xls. The package runs with no problem, but I need one of the fields to autoincrement. I tried to set up the properties of one of my fields, "ID", to be an identity. This didn't seem to work at all. There are about 1300 records in the DB so far, with the last "ID" number being 10001415. Before now, the numbers were entered manually. I want the "ID" to be assigned when the SSIS package imports the xls file.
One of the core components of our application(s) is the concept of using surrogate keys (aka system-assigned keys [SAKs]) for significant tables. We chose early on not to use IDENTITY due to limitations of cross-DBMS compatibility and @@IDENTITY retrieval issues relating to trigger value resets.
We are now in the process of reengineering our own "system counter" scheme. Our current scheme runs in the user's transaction space (not using an identity column, but a separate "counters" table). We're running into performance problems in high-concurrency situations because the counters table holds locks for the duration of a user's transaction (e.g. if the insert trigger on a row inserts rows into a log table which must fetch counters first, the counter rows stay locked until the original insert is committed).
Our best-performing solution currently is an extended SP which opens its own connection, gets the counter, then closes the connection (running in its own transaction space this way, it is quick and doesn't cause concurrency blocks from the longer-running outer user transaction). It appears to work very well (with pooled connections it's taking approx. 2 ms to run) and is easy to use (a "black box" solution). However, we're still considering using IDENTITY to avoid the extra DB connections taken up by the middle-tier solution.
Does anyone have experiences working with this topic, that you could share?
Hi, I am using DTC in my code to connect to two different servers on the network through a SQL query, which is unfortunately very slow; can you please suggest an alternative?
SELECT *
FROM organization
WHERE (departmentID = divisionID)
  AND (divisionID = branchID)
  AND (branchID = sectionID)
  AND (sectionID = unitID)

Is there any way I can simplify this query without repeating the same column in the WHERE clause? Thanks, s/RC
Is there a way to get around not being able to use USE in a PROCEDURE?
I need to because I have a main site that inserts information into other databases that I use for various subdomains. But without being able to use USE, I can't select which database is needed.
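One common workaround sketch (the procedure, database and table names below are hypothetical): SQL Server lets you qualify objects with three-part names, database.schema.object, which avoids USE inside a procedure entirely:

```sql
CREATE PROCEDURE dbo.InsertForSubdomain
    @Value VARCHAR(100)
AS
BEGIN
    -- Fully qualified three-part names remove the need for USE:
    INSERT INTO Subdomain1DB.dbo.Info (InfoValue) VALUES (@Value);
    INSERT INTO Subdomain2DB.dbo.Info (InfoValue) VALUES (@Value);
END
```

If the target database name itself has to be chosen at runtime, three-part names are not enough and dynamic SQL via sp_executesql would be needed instead.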
My company develops software that is distributed to thousands of customers. We chose MSDE as the database engine. Over the past 4 months, we have spent countless hours with customers, Microsoft, Installshield and web searches trying to resolve issues with installing MSDE. The issues seem to vary by customer and most take a great deal of support time. We understood MSDE to be a product that requires little support but in hindsight, it appears that it requires a great deal of knowledge just to get installed. We make small steps but no leaps forward.
It has come time to evaluate other products. If there is a magic bullet, I would love to hear about it. In its absence, does anyone have success to share with other products?
Just curious, is there any alternative to SQLXMLBULKLOAD for shredding and loading very large (800 MB) XML files? Due to the nature of the XML data sent to me (which I have no control over), I am having great difficulty loading data into tables. More specifically, I can load parent data but not the child data beneath it, despite using sql:relationships.
I have a situation where my SQL works everywhere else but my COBOL compiler complains wherever I use PARTITION BY. I can't find a workaround for that problem so I would like to remove all the PARTITION BYs. I'm not confident that I can do this accurately and would like some help getting started.
Here is my simplest example:
SELECT FESOR.REGION, FESOR.TYPE,
       COUNT(*) OVER (PARTITION BY FESOR.REGION, FESOR.TYPE)
FROM FESOR, FR
WHERE FESOR.phase = 'Ref'
  AND FESOR.assign IS NULL
  AND FESOR.comp_date IS NULL
  AND FESOR.region = FR.REGION
  AND FESOR.type = FR.TYPE
  AND FR.REP_ROW = 'A'
GROUP BY FESOR.REGION, FESOR.TYPE
What I'm looking for is a modified version of the SQL above which returns the same result set without using PARTITION BY.
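If the intent is a count of matching rows per (REGION, TYPE) pair, then a plain grouped COUNT(*) may do. This is an assumption about intent, not a guaranteed equivalent: window functions evaluate after GROUP BY, so the windowed version as written behaves differently from a pre-grouping count. A hedged rewrite with no PARTITION BY:

```sql
SELECT FESOR.REGION, FESOR.TYPE, COUNT(*)
FROM FESOR, FR
WHERE FESOR.phase = 'Ref'
  AND FESOR.assign IS NULL
  AND FESOR.comp_date IS NULL
  AND FESOR.region = FR.REGION
  AND FESOR.type = FR.TYPE
  AND FR.REP_ROW = 'A'
GROUP BY FESOR.REGION, FESOR.TYPE
```

Comparing both versions' output on a small data sample would confirm whether this matches the result set you expect.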
In a stored procedure I'm processing, via a cursor, a table of potentially 100,000 rows on a daily basis. The only column in each row is a 12-byte transaction control number. I know that using cursors can cause performance issues. Is there an alternative to using a cursor that has less of a performance impact?
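Whether the cursor can be avoided depends on what the per-row processing does, but many row-by-row patterns collapse into a single set-based statement. A hypothetical sketch (table and column names are invented for illustration), flagging matched control numbers in one UPDATE instead of looping:

```sql
-- Instead of a cursor fetching each control number and running
-- per-row statements, one set-based UPDATE handles all rows at once.
UPDATE t
SET    t.Processed = 1
FROM   TranControl  AS t        -- hypothetical source table
JOIN   ProcessedLog AS p        -- hypothetical lookup table
       ON p.ControlNumber = t.ControlNumber;
```

Set-based statements let the optimizer pick join strategies and scan the table once, which is usually far cheaper than 100,000 individual fetches.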
Cameron writes "Thanks for taking a look @ my question....
Basically, is there an alternative to indexing that maintains the fast searching capability (or possibly faster)?
We maintain over 500 databases on a single SQL Server, and currently (the way I am told) the server is limited to indexing 256 databases, so we have to create a new database with ALL the searchable data and use it for searches. While this works, it seems like there should be an alternate method. Any suggestions?
I work as a production DBA and our development team is trying to push a project which involves using triggers. The aim is to transfer information between two databases (on two different servers), because currently users have to type the same info into the two different systems. The triggers will be defined on a couple of tables, checking for inserts, updates, and deletes, and then insert this into staging tables within the same database. However, the trigger does more complex processing than just inserting the same records from the production table into the staging table. Because the schema between the source database and destination database is different, the trigger needs to do some manipulation before it updates the staging tables. It basically does massive selects from a number of different tables to get the desired column list and then puts that into the staging tables. We have asked them to reimplement this solution using other methods (such as timestamping the necessary tables, then putting the trigger logic into a stored proc and scheduling it to run through a job).
However, we've found out the triggers make use of the 'deleted' and 'inserted' special trigger tables to compare new data to old data — i.e. not all inserts/updates/deletes need to be pushed to the staging tables; it depends on certain criteria based on this comparison of old and new data. That throws a spanner in the works. What alternatives could provide this functionality, without making the whole process a headache to maintain — which is why we recommended not using triggers in the first place!
Sorry for the long post — needed to explain the issue properly. Hopefully some of you will be able to provide some feedback; the sooner the better, as I have a meeting with the developers later today and would like to offer some alternatives.
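One hedged sketch of keeping the old-vs-new comparison while moving the heavy work out of the trigger: a very thin trigger captures only before/after images into a change-log table, and the scheduled job applies the filtering criteria and fills the staging tables. All table and column names below are invented for illustration:

```sql
-- Thin trigger: records only keys and old/new values, no cross-table
-- lookups, so locks are held very briefly.
CREATE TRIGGER trg_Orders_Audit ON dbo.Orders
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    INSERT INTO dbo.OrdersChangeLog (OrderId, OldStatus, NewStatus, ChangedAt)
    SELECT COALESCE(i.OrderId, d.OrderId),  -- NULL old = insert, NULL new = delete
           d.Status, i.Status, GETDATE()
    FROM inserted AS i
    FULL OUTER JOIN deleted AS d ON d.OrderId = i.OrderId;
END
-- The scheduled job then reads OrdersChangeLog, applies the old-vs-new
-- criteria that previously lived in the big trigger, and populates the
-- staging tables in its own transaction.
```

This preserves access to the inserted/deleted comparison (which rowversion/timestamp columns alone cannot give you) while decoupling the expensive selects from the users' transactions.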
Hi All, I want to pass XML in, and the data in the XML should be stored in the tables of the database. However, I do not want to use the OPENXML statement. Please let me know. Regards, Shilpa
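On SQL Server 2005 and later, the xml data type's nodes() method is the usual OPENXML alternative. A minimal sketch — the element names, target table, and columns here are assumptions for illustration:

```sql
DECLARE @x XML;
SET @x = '<root><item id="1" name="abc"/><item id="2" name="def"/></root>';

-- Shred the XML straight into a table without OPENXML:
-- nodes() produces one row per matched element, value() extracts fields.
INSERT INTO dbo.Items (Id, Name)
SELECT n.value('@id',   'INT'),
       n.value('@name', 'VARCHAR(50)')
FROM   @x.nodes('/root/item') AS t(n);
```

Unlike OPENXML, this needs no sp_xml_preparedocument/sp_xml_removedocument handle management.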
I have a procedure that takes several parameters and, depending on which values are submitted or not, shall return different numbers of rows. To simplify, my example uses just one parameter, Idnr. If this id is submitted then I will return only the posts with this idnr, but if it is not submitted, I will return all posts in the table. As I see it, I have two options:

1.
IF @lcIdNr IS NOT NULL
    SELECT * FROM table WHERE idnr = @lcIdNr
ELSE
    SELECT * FROM table

2. Use dynamic SQL.

The first example can work with just one parameter, but with several different input parameters this could get difficult; anyway, this is not a good solution. The second example works fine, but as I understand it, dynamic SQL is not good from the optimization point of view. So I don't want to use either of these options; I wonder if there is a way to work around this with, for example, a CASE clause? Regards, Jenny
I have several SCD components in my project. As I have to process millions of records, the SCDs are taking a lot of time. Is there a way to speed them up? Workarounds?
I found it a bit annoying to type "go" after a very simple query, and I wonder: is there a shortcut to execute the query I type right after I press Enter?
1> select * from Table 2> go <enter>
Instead, how do you execute line 1 without entering "go"?
There is another alternative ... if you are using MSCRM 3.0
follow these steps
1. Navigate to your report manager --> //<<Server Name>>/reports
2. Navigate to your datasource --> HOME > <<Datasource>>
3. At the top tab, select Properties
4. Make sure that there is a user role called "NT AUTHORITY\NETWORK SERVICE", then apply the appropriate roles for it
While I have learned a lot from this thread I am still basically confused about the issues involved.
I wanted to INSERT a record into a parent table, get the identity back, and use it in a child table. Seems simple.
To my knowledge, mine would be the only process running that would update these tables. I was told that there is no guarantee, because the OLEDB provider could write the second destination row before the first, that the proper parent-child relationship would be generated as expected. It was recommended that I create my own variable in memory to hold the Identity value and use that in my SSIS package.
1. A simple SSIS example illustrating the approach of using a variable for the identity would be helpful.
2. Suppose I actually had two processes updating these tables, running at the same time. Then it seems the "variable" method will also have its problems. Is there a final solution, other than locking the tables involved prior to updating them or doing something crazy like using a GUID for the primary key?
3. We have done the type of parent-child inserts I originally described from t-sql for years without any apparent problems. (Maybe we were just lucky.) Is the entire issue simply a t-sql one or does SSIS add a layer of complexity beyond t-sql that needs to be addressed?
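On the T-SQL side, the usual safe pattern keeps the parent insert and the identity capture in one scope, using SCOPE_IDENTITY() rather than @@IDENTITY (which triggers can throw off). A sketch with invented table names:

```sql
BEGIN TRANSACTION;

DECLARE @ParentId INT;

INSERT INTO dbo.Parent (Name) VALUES ('example');
SET @ParentId = SCOPE_IDENTITY();   -- identity generated in THIS scope only;
                                    -- unaffected by inserts done by triggers

INSERT INTO dbo.Child (ParentId, Detail)
VALUES (@ParentId, 'example detail');

COMMIT;
```

Within a single batch like this, concurrency is not a problem: each connection gets its own SCOPE_IDENTITY() value. The SSIS-specific risk described above arises when the parent and child inserts travel through separate data-flow destinations instead; mapping the value to a package variable (e.g. via an Execute SQL Task output parameter) is one way to bridge that.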
Hi, I'm trying to create a stored procedure that checks whether the parameters are NULL. If they are NOT NULL, then the parameter should be used in the WHERE clause of the SELECT statement; otherwise all records should be returned. Sample code:

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[GetProjectInfo]
(
    @ProjectTitle varchar(300),
    @ProjectManagerID int,
    @DeptCode varchar(20),
    @ProjID varchar(50),
    @DateRequested datetime,
    @DueDate datetime,
    @ProjectStatusID int
)
AS
BEGIN
    SET NOCOUNT ON

    IF @ProjectTitle IS NOT NULL AND @ProjectManagerID IS NULL AND @DeptCode IS NULL
       AND @ProjID IS NULL AND @DateRequested IS NULL AND @DueDate IS NULL
       AND @ProjectStatusID IS NULL
        SELECT ProjID, ProjectTitle, ProjectDetails, ProjectManagerID, RequestedBy,
               DateRequested, DueDate, ProjectStatusID
        FROM dbo.tbl_Project
        WHERE ProjectTitle = @ProjectTitle;
    ELSE IF @ProjectTitle IS NOT NULL AND @ProjectManagerID IS NOT NULL AND @DeptCode IS NULL
       AND @ProjID IS NULL AND @DateRequested IS NULL AND @DueDate IS NULL
       AND @ProjectStatusID IS NULL
        SELECT ProjID, ProjectTitle, ProjectDetails, ProjectManagerID, RequestedBy,
               DateRequested, DueDate, ProjectStatusID
        FROM dbo.tbl_Project
        WHERE ProjectTitle = @ProjectTitle AND ProjectManagerID = @ProjectManagerID;
    ELSE
        SELECT ProjID, ProjectTitle, ProjectDetails, ProjectManagerID, RequestedBy,
               DateRequested, DueDate, ProjectStatusID
        FROM dbo.tbl_Project;
END

I could do this using IF-ELSE, but that would require a ridiculous number of conditional statements (basically one for each combination of NULLs and NOT NULLs). Is there a way to do this without all the IF-ELSEs? Thanks.
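A common way to collapse all those branches into a single SELECT treats each NULL parameter as "match everything" (this assumes tbl_Project has a column matching every parameter, which the posted code does not fully show):

```sql
SELECT ProjID, ProjectTitle, ProjectDetails, ProjectManagerID, RequestedBy,
       DateRequested, DueDate, ProjectStatusID
FROM dbo.tbl_Project
WHERE (@ProjectTitle     IS NULL OR ProjectTitle     = @ProjectTitle)
  AND (@ProjectManagerID IS NULL OR ProjectManagerID = @ProjectManagerID)
  AND (@DeptCode         IS NULL OR DeptCode         = @DeptCode)
  AND (@ProjID           IS NULL OR ProjID           = @ProjID)
  AND (@DateRequested    IS NULL OR DateRequested    = @DateRequested)
  AND (@DueDate          IS NULL OR DueDate          = @DueDate)
  AND (@ProjectStatusID  IS NULL OR ProjectStatusID  = @ProjectStatusID);
```

One caveat: this "catch-all" shape can produce a single cached plan that is not ideal for every parameter combination, so it is worth checking the execution plan under realistic inputs.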
Hi all, I have an SQL problem I'd like to put to the masses because it's driving me crazy! Before I start, this is a database I inherited, so I can't change the schema. I have a table which holds field information for a form, namely the table name, column name, and some other irrelevant stuff (X/Y coordinates for printing onto a document). Here is some sample data to explain better:

TableName   FieldName    Xpos  Ypos
----------  -----------  ----  ----
FruitTable  FruitName     10    20
VegTable    VegName       10    40
FruitTable  FruitColour   20    10

(That's not the real data, of course.) What I need is a calculated field which returns the value of each field from each table — probably by constructing a dynamic SQL statement(?). It would look something like this:

Select @FieldName From @TableName Where bla bla bla  -- don't worry about the WHERE clause

The completed dataset will hopefully then look like this:

TableName   Xpos  Ypos  FieldValue (calculated field)
----------  ----  ----  -----------------------------
FruitTable   10    20   Oranges   (result of: Select FruitName From FruitTable Where ...)
VegTable     10    40   Parsnips  (result of: Select VegName From VegTable Where ...)
FruitTable   20    10   Green     (result of: Select FruitColour From FruitTable Where ...)
I have tried creating a scalar-valued function which takes TableName and FieldName as parameters and creates a dynamic SQL string, but I cannot seem to execute the SQL once I have built it. Here is a general idea of how I was trying to use the function:

Main query:

Select TableName, FieldName, Xpos, Ypos,
       dbo.GetFieldValue(TableName, FieldName) As FieldValue
From tblFieldAndPosition

Function:

CREATE FUNCTION GetFieldValue (@TableName nvarchar(255), @FieldName nvarchar(255))
RETURNS nvarchar(255)
AS
BEGIN
    Declare @SQL nvarchar(max)
    Set @SQL = 'Select ' + @FieldName + ' From ' + @TableName
    sp_executesql @SQL??
    return ???
END

The alternative to getting this data all out at once is constructing the SQL statement in code and going back to the database once for every row, which I really don't want to do. If anyone has had a situation like this before, or can point me in the right direction, I will be very, very grateful. Hope that's clear. Thanks in advance.
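The underlying blocker is that SQL Server forbids dynamic SQL inside a user-defined function, which is why the sp_executesql call fails there; a stored procedure can do it. A hedged sketch that walks the metadata rows and fetches each value through an OUTPUT parameter (it assumes each lookup, once your real WHERE clause is added, returns a single value):

```sql
CREATE PROCEDURE dbo.GetFieldValues
AS
BEGIN
    DECLARE @TableName NVARCHAR(255), @FieldName NVARCHAR(255),
            @SQL NVARCHAR(MAX), @Value NVARCHAR(255);

    -- One result row per metadata row accumulates here.
    CREATE TABLE #Results (TableName NVARCHAR(255), FieldName NVARCHAR(255),
                           FieldValue NVARCHAR(255));

    DECLARE c CURSOR LOCAL FAST_FORWARD FOR
        SELECT TableName, FieldName FROM tblFieldAndPosition;
    OPEN c;
    FETCH NEXT FROM c INTO @TableName, @FieldName;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- QUOTENAME guards against odd characters in the stored names.
        SET @SQL = N'SELECT @v = ' + QUOTENAME(@FieldName)
                 + N' FROM ' + QUOTENAME(@TableName);
        EXEC sp_executesql @SQL, N'@v NVARCHAR(255) OUTPUT', @v = @Value OUTPUT;
        INSERT INTO #Results VALUES (@TableName, @FieldName, @Value);
        FETCH NEXT FROM c INTO @TableName, @FieldName;
    END
    CLOSE c; DEALLOCATE c;

    SELECT * FROM #Results;   -- the whole dataset in one round trip
end
```

Joining #Results back to tblFieldAndPosition would then give the Xpos/Ypos columns alongside the fetched values, still in a single database call.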
Good day, please help me. I have a data-driven site that displays computed data in a complete GridView. This is my problem: when I run my site, it displays the information and the GridView as I intended, and generally my site works fine, BUT my site has different levels of display, like this:

Site 1: it has 3 dropdown menus; dropdown 2 is dependent on dropdown 1, dropdown 3 is dependent on dropdown 2, and the complete GridView is dependent on dropdown 3. It is a postback process of dependencies; I hope you get what I mean.

Site 2: this is the same as Site 1, but this time it has only 2 dropdown menus, and the complete GridView has a GridView inside of it through a details template.

Site 3: almost the same as Sites 1 and 2, but this time there are no dropdown menus, only the complete GridView. The GridView has 3 levels, the same levels as the dropdown menus in Site 1, where the main GridView has a GridView below it, and a GridView again below that, and so on.

The problem: the site works fine but it runs super SLOW, especially Site 3, where it has to display and compute different GridViews from the complete GridView at the same time; sometimes it causes a TIMEOUT EXPIRED error. What can I do to speed up the process, other than upgrading my server?
Is there an alternative? Please help. For more info, this is the SQL code that I used at each level of the complete GridView.

The highest level (the initial display of the complete GridView):

select rbu, count(distinct dslamname) as NumberOfDslam,
       (select count(secode) from dslamdata as a where a.rbu = r.rbu) as Capacity,
       (select count(secode) from dslamdata as a where a.rbu = r.rbu and dnum > '1') as Used,
       (select count(secode) from dslamdata as a where a.rbu = r.rbu and dnum < '1') as Remaining,
       (select (select count(secode) from dslamdata as a where a.rbu = r.rbu and dnum > '1') * 100
             / (select count(secode) from dslamdata as a where a.rbu = r.rbu)) as Utilization,
       (select sum(dwlink) from dslamdata as a where a.rbu = r.rbu) as Sold_Bandwidth
from dslamdata as r
group by rbu

The second level, shown when you click the +/- button in the complete GridView:

select aco, count(distinct dslamname) as NumberOfDslam,
       (select count(secode) from dslamdata as a where a.aco = r.aco) as Capacity,
       (select count(secode) from dslamdata as a where a.aco = r.aco and dnum > '1') as Used,
       (select count(secode) from dslamdata as a where a.aco = r.aco and dnum < '1') as Remaining,
       (select (select count(secode) from dslamdata as a where a.aco = r.aco and dnum > '1') * 100
             / (select count(secode) from dslamdata as a where a.aco = r.aco)) as Utilization,
       (select sum(dwlink) from dslamdata as a where a.aco = r.aco) as Sold_Bandwidth
from dslamdata as r
where rbu = ?
group by aco

The last level, shown when you click the +/- of the second GridView:

select dslamname,
       (select count(secode) from dslamdata as a where a.dslamname = r.dslamname) as Capacity,
       (select count(secode) from dslamdata as a where a.dslamname = r.dslamname and dnum > '1') as Used,
       (select count(secode) from dslamdata as a where a.dslamname = r.dslamname and dnum < '1') as Remaining,
       (select (select count(secode) from dslamdata as a where a.dslamname = r.dslamname and dnum > '1') * 100
             / (select count(secode) from dslamdata as a where a.dslamname = r.dslamname)) as Utilization,
       (select sum(dwlink) from dslamdata as a where a.dslamname = r.dslamname) as Sold_Bandwidth
from dslamdata as r
where aco
group by dslamname

If you look at it, all the code is the same except for the selected field. I hope someone can help me with this. Thanks and good day. SALAMAT PO.
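Each correlated subquery above rescans dslamdata once per output row per column. A hedged one-pass rewrite of the top-level query using conditional aggregation (this assumes the dnum > '1' / dnum < '1' comparisons are the intended used/free test, as in the original):

```sql
select rbu,
       count(distinct dslamname)                          as NumberOfDslam,
       count(secode)                                      as Capacity,
       sum(case when dnum > '1' then 1 else 0 end)        as Used,
       sum(case when dnum < '1' then 1 else 0 end)        as Remaining,
       sum(case when dnum > '1' then 1 else 0 end) * 100
           / count(secode)                                as Utilization,
       sum(dwlink)                                        as Sold_Bandwidth
from dslamdata
group by rbu
```

The second and third levels follow the same shape, just grouped by aco or dslamname with the corresponding WHERE filter, so one scan of the table replaces dozens of subquery scans per GridView level.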
I have two databases (MS SQL) on two different servers; let's say they are Server1 and Server2. I have some stored procedures (sps) which execute on Server1. The results given by these sps are 6 different tables (these are temporary tables, e.g. #Table1, created in one of the sps on Server1), and I want to use these 6 tables on Server2. The constraint here is that I can create a linked server for Server1 from Server2, but not from Server1 to Server2, so I cannot access Server2 directly from Server1. Even using regular tables here instead of temp tables (#) would lead to a solution, but that is again a constraint; I cannot do that. Is there any alternative to a linked server in this case? I don't want to go for OPENROWSET and OPENDATASOURCE because of performance issues.
What do you do if you need to select item 20 to 40 from a table? Do you just do 1 to 40 and let PHP ignore the first 20, or do you have another equivalent of the MySQL LIMIT Clause for MSSQL?
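SQL Server has no LIMIT clause, but on SQL Server 2005+ the ROW_NUMBER() function gives equivalent paging without fetching rows just for PHP to throw away. A sketch — the table and column names are assumptions:

```sql
-- Rows 20..40 of Items, ordered by Id
-- (ROW_NUMBER requires an explicit ORDER BY to define "row 20").
SELECT Id, Name
FROM (
    SELECT Id, Name,
           ROW_NUMBER() OVER (ORDER BY Id) AS rn
    FROM dbo.Items
) AS numbered
WHERE rn BETWEEN 20 AND 40;
```

On SQL Server 2000, the usual fallback is SELECT TOP with a nested TOP subquery, which is clumsier but works.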
I have 2 views that I can join to give me the recordset I want. Both views are currently filtered on a particular column (ie 'WHERE colName = myValue')
The problem is that I want to use this from a web page with user input in which the user would specify myValue.
Is there any alternative to having two views? Can I combine them into one SQL statement? Access allowed me to do it by specifying an alias for a select statement, then joining this to another. I'm not sure if that makes sense, but I guess what I want to do is specify one view, then another, then join them — all in the same statement!
Something like this:
Code:
SELECT view1.*, view2.*
FROM (SELECT tbl1.*, tbl2.*, tbl3.*
      FROM tbl1
      JOIN tbl2 ON tbl1.id = tbl2.tbl1_id
      JOIN tbl3 ON tbl2.id = tbl3.tbl2_id AND tbl3.id = myValue) AS view1
RIGHT JOIN (SELECT tbl4.*
            FROM tbl4
            WHERE tbl4.id = myValue) AS view2
    ON view2....
Soooo, now I can handle this programmatically by reading my XML data and inserting it into the database as it's read by the XML parser. But what I'm wondering is whether there is a faster way of doing this. Possibly using some sort of bulk insert?
The xml data files will at times be massive. For example during the holiday season, there could be millions of records needing to be inserted. However, there will also be days when the data is small.
I know this is a mess, but they set up this environment before I got there. There is a very minute chance I may get another SQL server, which is why I ask question 1. But I have to move forward with question 2 until I can confirm another box is possible.
1) Is transactional replication possible if the subscriber is in a failover cluster? 1b) Am I correct in thinking that the publisher and the distributor can live on the same SQL Server?
2) Transactional replication aside... do you know an alternative to it?
Here's my situation. I wish they would have consulted with me prior to setting this up, but nonetheless... I have a database cluster with 2 servers in it, with failover set up between them. The staging database and production database live on the same SQL Server. When data is written to, or updated on, the staging database, I need to replicate it to the production database. I obviously can't set up transactional replication when I virtually only have one server to work with.
>> Individual triggers are out because there are 38 tables and I would need a trigger for each action. So that's approx. 114 triggers. Not viable or safe, and just plain wrong.
>> I've considered kicking off a DTS package that will copy the table up but then I would need 38 DTS pkgs. And I don't want to copy the whole database every time for obvious performance issues.
>> I've considered modifying the code in their software so that it reruns their DB update object after I temporarily change the data source... which is a great alternative, except then I have to worry about data integrity; making sure the primary keys match between staging and production.
Thanx for any input or any ideas beyond what I've written. Egotistical slander not welcome.