I want to split the data in a table into two parts, say 30%-70%.
Here are my query statements:
-- 30%
SELECT TOP 30 PERCENT * FROM DataTable ORDER BY Col1
--70%
SELECT TOP 70 PERCENT * FROM DataTable ORDER BY Col1 DESC
Somebody says the result is not guaranteed to be correct, since in SQL Server 2005 the query optimizer may choose a different strategy to execute each statement.
I think the SELECT should execute after the ORDER BY, which has nothing to do with query optimization. Am I correct?
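If Col1 is not unique, the two TOP ... PERCENT queries can overlap or miss rows, because ties may be broken differently between executions. A deterministic alternative on SQL Server 2005 is to number the rows once and split on the row number; a minimal sketch against the same DataTable:

WITH Numbered AS (
    SELECT *,
           ROW_NUMBER() OVER (ORDER BY Col1) AS rn,
           COUNT(*) OVER () AS total
    FROM DataTable
)
SELECT * FROM Numbered WHERE rn <= total * 0.30   -- first 30%
-- the 70% half is the complement: WHERE rn > total * 0.30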
I am trying to select a record from a table where it has the smallest priority. How would you go about doing this? Is there a sort command, or a SELECT syntax, that can do this? Thanks.
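A common approach is TOP 1 with an ORDER BY; a minimal sketch, assuming a hypothetical table MyTable with a Priority column:

SELECT TOP 1 *
FROM MyTable
ORDER BY Priority ASC   -- smallest priority first

If several rows can share the smallest priority and you want all of them, TOP 1 WITH TIES returns the whole tied group.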
Hi, I have a stored procedure: SELECT PictureID, Left(Name, 16) + '...' AS ShortName FROM Pictures. Some pictures have names shorter than 16 characters, and when I show them on the page I get something like "Sunrise...". How can I change the SELECT so it returns the truncated ShortName only if the name is longer than 16 characters, and the full Name if it is shorter? I hope you understand what I mean... thanks
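A CASE expression in the SELECT handles this; a minimal sketch against the same Pictures table:

SELECT PictureID,
       CASE WHEN LEN(Name) > 16
            THEN LEFT(Name, 16) + '...'
            ELSE Name
       END AS ShortName
FROM Pictures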
I have a table with the columns:
- CustomerID
- DateTransactionCompleted
- AmountPaid
Illustrative data follows:
GO
INSERT INTO Cus_Tab_Dev_2 VALUES(30,'12-12-2004','18000.00')
INSERT INTO Cus_Tab_Dev_2 VALUES(30,'12-15-2004','2000.00')
INSERT INTO Cus_Tab_Dev_2 VALUES(30,'1-16-2005','15000.00')
INSERT INTO Cus_Tab_Dev_2 VALUES(42,'2-2-2005','12000.00')
GO
What I want is a report that tells me the total sales per month per customer, in the following manner:
Customer ID   Month/Year   Total Per Month
-----------   ----------   ---------------
30            12-2004      20000.00
30            1-2005       15000.00
42            2-2005       12000.00
This is homework, and I have been trying to get it to work since this morning. Any help is appreciated.
PS: If you do not feel you want to or should help, please don't call me names! I don't usually ask people to help me with my homework, but I am running out of time on the assignment.
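For reference, grouping on the customer plus the year and month of the transaction date produces exactly this shape of report; a minimal sketch against the Cus_Tab_Dev_2 table above:

SELECT CustomerID,
       CONVERT(varchar(2), MONTH(DateTransactionCompleted)) + '-'
           + CONVERT(varchar(4), YEAR(DateTransactionCompleted)) AS [Month/Year],
       SUM(AmountPaid) AS [Total Per Month]
FROM Cus_Tab_Dev_2
GROUP BY CustomerID, YEAR(DateTransactionCompleted), MONTH(DateTransactionCompleted)
ORDER BY CustomerID, YEAR(DateTransactionCompleted), MONTH(DateTransactionCompleted)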
We have a database where the nvarchar columns currently hold English-only data. It stores training information. Currently the default collation is Latin, case-insensitive, accent-insensitive.
Now we are planning to allow multiple language support. When we do, we will load data in different languages. Based on user preference, each user should be able to query the data (one language at a time).
I am worried about how the existing queries will work if I load data for all languages into the same database.
For example, some characters in English are used in Norwegian too, but they have a different sort order. LIKE conditions may fail as well.
So far I have thought of a few solutions. One is to add the collation information to the SELECT and ORDER BY clauses; that means we need to add more procedures (one set per collation). The other option is to create a new database for each language, each with its own collation. The downside is that we need more databases, which may lead to more servers and more maintenance work.
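For the first option, note that COLLATE can be applied per expression at query time, so the per-collation procedures would only differ in one clause; a hedged sketch, assuming a hypothetical Courses table (the exact collation name varies by SQL Server version):

SELECT Title
FROM Courses
WHERE Title LIKE @pattern COLLATE Danish_Norwegian_CI_AS
ORDER BY Title COLLATE Danish_Norwegian_CI_AS

If the collation itself must be a parameter, the statement has to be built with dynamic SQL, since COLLATE does not accept a variable.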
I am trying to set up sorting on a DataGrid in ASP.NET 2.0. I have it working so that when you click on the column header, it sorts by that column; what I would like to do is set it up so that when you click the column header again, it sorts on that field again, but in the opposite direction. I have it working using the following code in the stored procedure:

CASE WHEN @SortColumn = 'Field1' AND @SortOrder = 'DESC' THEN CONVERT(sql_variant, FileName) END DESC,
CASE WHEN @SortColumn = 'Field1' AND @SortOrder = 'ASC' THEN CONVERT(sql_variant, FileName) END ASC,
CASE WHEN @SortColumn = 'Field2' AND @SortOrder = 'DESC' THEN CONVERT(sql_variant, CONVERT(varchar(8000), FileDesc)) END DESC,
CASE WHEN @SortColumn = 'Field2' AND @SortOrder = 'ASC' THEN CONVERT(sql_variant, CONVERT(varchar(8000), FileDesc)) END ASC,
CASE WHEN @SortColumn = 'VersionNotes' AND @SortOrder = 'DESC' THEN CONVERT(sql_variant, CONVERT(varchar(8000), VersionNotes)) END DESC,
CASE WHEN @SortColumn = 'VersionNotes' AND @SortOrder = 'ASC' THEN CONVERT(sql_variant, CONVERT(varchar(8000), VersionNotes)) END ASC,
CASE WHEN @SortColumn = 'FileDataID' AND @SortOrder = 'DESC' THEN CONVERT(sql_variant, FileDataID) END DESC,
CASE WHEN @SortColumn = 'FileDataID' AND @SortOrder = 'ASC' THEN CONVERT(sql_variant, FileDataID) END ASC

And I gotta tell you, that is ugly code, in my opinion. What I am trying to do is something like this:

CASE WHEN @SortColumn = 'Field1' THEN FileName END,
CASE WHEN @SortColumn = 'FileDataID' THEN FileDataID END,
CASE WHEN @SortColumn = 'Field2' THEN FileDesc
     WHEN @SortColumn = 'VersionNotes' THEN VersionNotes END
CASE WHEN @SortOrder = 'DESC' THEN DESC
     WHEN @SortOrder = 'ASC' THEN ASC END

and it's not working at all. I get an error saying: Incorrect syntax near the keyword 'case'. When I put a comma after the END on line 5, I get: Incorrect syntax near the keyword 'DESC'. What am I missing here? Thanks in advance for any help. -Madrak
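ASC and DESC are keywords of the ORDER BY clause, not values a CASE expression can return, so the sort direction itself cannot be computed; that is why the verbose version needs a separate ASC and DESC branch per column. One way to avoid the duplication is dynamic SQL; a hedged sketch, assuming @SortColumn already holds a real column name (if it holds labels like 'Field1', map them to column names first):

DECLARE @sql nvarchar(4000)
SET @sql = N'SELECT FileDataID, FileName, FileDesc, VersionNotes
             FROM dbo.Files   -- hypothetical table name
             ORDER BY ' + QUOTENAME(@SortColumn) + N' '
             + CASE WHEN @SortOrder = 'DESC' THEN N'DESC' ELSE N'ASC' END
EXEC sp_executesql @sql

QUOTENAME guards the column name against SQL injection; the direction is safe because the CASE only ever emits the two literals.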
DB1 contains a log of multiple events for each record, while DB2 contains the latest log for the record. I have a trigger (trigger1) in DB1 that automatically updates/inserts the log in DB2, depending on whether the record already exists in DB2.
I also have another trigger that checks for a specific value in the 'tag' field.
I used to set this trigger (trigger2) up in DB2, but it was executed twice: the first time when trigger1 inserts/updates a DB2 record, and the second time when trigger2 updates DB2. So instead of having a value of 1 in tag, I get a 2.
So what I did was move trigger2 to DB1. My only problem is that I don't know which trigger gets executed first.
I'm not sure whether this is the best way of updating the 'tag' field. The purpose of the tag field is that when it reaches a certain number, it stops all events and inserts a termination log in DB2.
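If both triggers end up on the same table in DB1, SQL Server lets you pin which one fires first; a minimal sketch using sp_settriggerorder, assuming trigger1 and trigger2 are the trigger names:

-- trigger1 fires first, trigger2 fires last, for INSERT statements
EXEC sp_settriggerorder @triggername = 'trigger1', @order = 'First', @stmttype = 'INSERT'
EXEC sp_settriggerorder @triggername = 'trigger2', @order = 'Last', @stmttype = 'INSERT'

Triggers not marked First or Last fire in undefined order, which is why relying on implicit ordering is risky.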
I have three stored procedures that need to run nightly in SQL 7. The three procedures are not related, but to keep them from running at the same time, I placed them as three steps of a single job. The first two steps are set to "Goto next step" on success; the last step is set to "Quit with success."

The job runs every night. However, only the first step/procedure is executed. Also, the first step has a green flag next to its ID in the Steps tab of the job properties.

I suppose I could just call all three SPs from a single stored procedure that is run nightly. But I thought that putting them as three steps in a single job would cause all three to run. What am I doing wrong? Thanks! Neil
I have a developer using SQL 7.0 Enterprise Manager and the DTS Wizard to create and save simple DTS imports on the SQL Server. The DTS job runs only on his workstation using his NT logon ID. Server is set for mixed security. Other users can edit the saved DTS job and resave it, then anyone can run the job. I tried having the developer log in with his NT logon ID on another workstation; he can't run the job from there if he was the last person to edit it from his workstation.
Any ideas, anyone? This is the only person having this problem in my shop.
I am trying to bulk map products to a CategoryID table. Each product has a SKU code and is then mapped to a CategoryID table. The script executes fine, but when I look, the products are not mapped.
When I run the query, it takes less than 5 seconds to return results,
but when I run the view, it takes about a minute.
Can anyone help please? It keeps giving me timeout errors.
Here is the query:
SELECT MAX(B.Code) AS Code,
       MAX(T.Description) AS Type,
       MAX(WH.Description) AS Warehouse,
       MAX(B.Barcode) AS Barcode,
       MAX(B.BatchNo) AS BatchNo,
       MAX(B.CustomField) AS CustomField,
       MAX(B.Colour) AS Colour,
       MAX(Q.Quality) AS Quality,
       MAX(ROUND(M.ConvFactor, 2, 2)) AS ConvFactor,
       MAX(M.Multiply) AS Multiply,
       MAX(CONVERT(varchar(20), ROUND((B.BoughtQty + B.TransferQty + B.IssuedQty + B.ReturnedQty + B.AdjustmentQty), 2))) + ' ' + MAX(M.UoM) AS Available,
       CASE WHEN MAX(M.Multiply) = 'M'
            THEN MAX(CONVERT(varchar(20), ROUND(((B.BoughtQty + B.TransferQty + B.IssuedQty + B.ReturnedQty + B.AdjustmentQty) * M.ConvFactor), 2))) + ' ' + MAX(M.AUoM)
            ELSE MAX(CONVERT(varchar(50), CONVERT(float, ROUND(((B.BoughtQty + B.TransferQty + B.IssuedQty + B.ReturnedQty + B.AdjustmentQty) / M.ConvFactor), 2)))) + ' ' + MAX(M.AUoM)
       END AS AvailableAlternative,
       MAX(BC.Supplier) AS Supplier,
       MAX(OD.SupplierCode) AS SupplierCode
FROM cvrbatches B
LEFT JOIN SciposA.dbo.cvrmaster M ON M.Code = B.Code
LEFT JOIN cvrbatchctrl BC ON B.Code = BC.Code AND B.Type = BC.Type AND B.BatchNo = BC.BatchNo
INNER JOIN SciposA.dbo.cvrwhcontrol WH ON WH.Warehouse = B.Warehouse
INNER JOIN SciposA.dbo.cvrtypes T ON T.CoverType = B.Type
INNER JOIN cvrquality Q ON Q.Code = B.Quality AND B.Type = Q.Type
LEFT JOIN SciposA.dbo.cvrgrndetail GD ON GD.BatchNo = B.BatchNo AND B.Warehouse = GD.Warehouse
LEFT JOIN SciposA.dbo.cvrorderdetails OD ON OD.OrderNo = GD.OrderNo AND OD.LineN = GD.LineN
GROUP BY B.Barcode
Is this a limitation of SQL Server? I am running a quite complex SP that I wrote, which uses EXEC to execute a SQL string. Running the SP produces the desired results, but if I try to use this SP with an INSERT statement, I get an error message that EXEC cannot be nested in an INSERT statement... any help would be appreciated.
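For context, INSERT ... EXEC itself is legal; what SQL Server forbids is nesting it. If the procedure already contains an INSERT ... EXEC internally, wrapping another one around the procedure fails with exactly this error. A minimal sketch of the one level that does work, with hypothetical names:

CREATE TABLE #Results (ID int, Name varchar(50))

-- Works: one level of INSERT ... EXEC
INSERT INTO #Results (ID, Name)
EXEC dbo.MyComplexProc @param = 1

-- Fails with the nesting error if dbo.MyComplexProc itself
-- performs another INSERT ... EXEC internally.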
I have created a rule in Outlook to run this Visual Basic script when I receive a specific email. The code runs a SQL Server DTS package that imports data and runs a couple of stored procs. If the DTS package takes a while to execute, my Outlook will lock up until the entire package is complete. I want the DTS package to run, but I don't want the application to remain locked until it's complete. Anyone have any ideas of how to get around this?
Public Sub RunTVDTSPackage(Item As Outlook.MailItem)
    Dim oPackage As New DTS.Package
    On Error GoTo eh
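One way to keep Outlook responsive is to not run the package in-process at all, but shell out to the dtsrun command-line utility, since Shell returns as soon as the process starts; a hedged sketch, assuming a server-stored package named TVDTSPackage on server MYSERVER using a trusted connection:

Public Sub RunTVDTSPackage(Item As Outlook.MailItem)
    ' Launch dtsrun asynchronously; Outlook is not blocked while it runs.
    ' /S = server, /N = package name, /E = trusted (Windows) connection
    Shell "dtsrun /S MYSERVER /N TVDTSPackage /E", vbHide
End Sub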
I am trying to delete data from a table prior to populating it with new data via a trigger, if certain criteria match, to eliminate duplicates. I have copied the trigger below. The syntax checks OK, but when I try to save I get an error message saying 'Incorrect syntax near 'GO'. 'ALTER TRIGGER' must be the first statement in a query batch'.
Can someone tell me if this is possible, please?
IF EXISTS (SELECT * FROM hold_complete
           WHERE fkey = hold_complete.fkey
             AND actiontext = 'hold'
             AND Subactiontext = 'pending user')
    DELETE FROM hold_complete WHERE hold_complete.fkey = fkey
GO
CREATE TRIGGER tr_hold_complete ON CallsHistory FOR INSERT
AS
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
INSERT hold_complete
SELECT ins.AddedDT, ins.fkey, ins.actiontext, ins.subactiontext, con.emailaddress,
       ca.loggeddt, (con.forename + ' ' + con.surname) AS contactname,
       ca.summary, ca.notes, co.coordinator, co.coordinator, GETDATE(),
       ca.status, ca.lastsubaction, GETDATE(), ca.dateopened,
       ca.companyname, GETDATE(), (NULL), ch.notes
FROM inserted AS ins WITH (NOLOCK)
JOIN calls AS ca WITH (NOLOCK) ON ins.fkey = ca.callid
JOIN contact AS con WITH (NOLOCK) ON ca.contactid = con.contactid
JOIN company AS co WITH (NOLOCK) ON ca.companyid = co.companyid
JOIN callshistory AS ch WITH (NOLOCK) ON ins.historyid = ch.historyid
WHERE ins.actiontext = 'hold' AND ins.Subactiontext IN ('completed', 'pending user')
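The error is a batch-layout problem: when the tool saves the trigger with ALTER TRIGGER, the IF EXISTS/DELETE ends up in the same batch ahead of it, and ALTER TRIGGER must be the first statement in its batch. Moving the delete inside the trigger body, after AS, avoids that entirely; a hedged sketch of just the reshaped skeleton:

CREATE TRIGGER tr_hold_complete ON CallsHistory FOR INSERT
AS
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

-- Delete potential duplicates first, matched against the inserted rows
DELETE hc
FROM hold_complete hc
JOIN inserted ins ON hc.fkey = ins.fkey
WHERE ins.actiontext = 'hold' AND ins.Subactiontext = 'pending user'

-- ...then the INSERT ... SELECT from the trigger body above goes here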
I am trying to understand how to predict what user context a package will execute under, more specifically when it comes to operating system access.
For example, we have a package that creates a flat file; during development it was simply created on the C: drive of the user's PC.
Now it must create the files on a remote server.
How do permissions to folders work in this case?
When you execute it in BIDS debug, does it use the Windows account you logged on with to connect to the remote folder?
If so, what happens when it is deployed to SQL Server and scheduled in a job? Does it use the domain account that the SQL Server Agent service runs under? Or will it use the credentials of the proxy account?
I am unclear on this scenario. Obviously we need to grant the correct permissions on the remote server's shared folder to the correct user, but I am not sure what user account the package will be using when it tries to create files on the remote server.
In a nutshell, I need clarity on what user a package executes as when running in a SQL job.
Anyone have a good link, or the time to clarify this for me, please?
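For the job case: an Agent job step running an SSIS package executes as the Agent service account unless the step is assigned a proxy, in which case it runs as the Windows account stored in the proxy's credential. A hedged sketch of setting one up, with hypothetical names:

USE msdb
GO
-- Server-level credential holding the Windows account that has rights on the share
CREATE CREDENTIAL SSISFileCred
    WITH IDENTITY = 'DOMAIN\ssis_runner', SECRET = 'StrongPasswordHere'
GO
-- Proxy that job steps can run under
EXEC dbo.sp_add_proxy @proxy_name = 'SSISFileProxy', @credential_name = 'SSISFileCred'
-- Allow the proxy to be used for SSIS package execution job steps
EXEC dbo.sp_grant_proxy_to_subsystem @proxy_name = 'SSISFileProxy', @subsystem_id = 11  -- 11 = SSIS

In BIDS, by contrast, the package runs as the interactively logged-on Windows account.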
I'm currently updating one of our web sites and have encountered a strange problem. The page that is giving me the problem is written in ASP and hits a SQL 2K DB. When I click submit, I have 4 procs that need to be executed. I always get a SQL Server timeout error. I opened up Profiler and traced the events. The page hangs on the last proc called, which is basically a select * where id = @id. If I run the same proc that hangs in Query Analyzer (literally copied from Profiler into Query Analyzer), the proc runs no problem, but when I'm hitting it from the web it hangs. The proc takes in 1 argument, which I see when I view the trace in Profiler. The id getting passed to the proc is correct. I don't think this is a web problem, because the COM object that runs the last 2 procs works on other production applications with no problems. I was wondering if anyone has any suggestions. I'm not a DBA but would like to know what the problem is for future reference. I pasted the proc below just for the hell of it. FYI, I changed the name of the proc, table and columns for security purposes.

CREATE PROCEDURE [dbo].[spName]
    @TNum integer
AS
SELECT M.*
FROM tblName M
WHERE (M.[idColumn] = @TNum)
GO

Thanks in advance,
Bob
Hi, I have a stored procedure (MS SQL Server 2000) which operates on around 600,000 rows (SELECT, UPDATE, INSERT) and executes in 5 minutes. When I put it in a SQL transaction, it slows down to more than 5 hours (!!). I have to admit that it is not a problem with data locks (besides that procedure, nothing else is executed on the db). It is not a problem with that exact procedure either; other procedures also slow down heavily when wrapped in a SQL transaction. Very, very seldom does a stored procedure within a transaction execute in a time comparable to its copy without a transaction. I guess it could be an MS SQL Server 2000 configuration/tuning problem. Any ideas? Chris
I have a problem with a report set up to execute a snapshot and subscription on day 1 of each month. On July 1, it ran and was emailed to the user; on 8/1 it created the snapshot but did not send it to the user because of a user error in the subscription, so it was sent manually. The parameter issues were corrected, and on 9/1 the snapshot executed but the subscription did not send. In the subscription window, the last good email of a snapshot was the one sent 7/1. So again, 9/1 was sent manually. After fixing the parameters, shouldn't this have emailed the snapshot correctly on 9/1?
Do I need to delete this subscription and create a new one to get it to send on 10/1 now?
Any suggestions?
The setup is now verified to be identical to other reports which created a snapshot and emailed successfully. The only difference is the original problem described above; the 7/1 subscription is the only one you see when you look at the "subscription tab" for this report in Reporting Services:
Description:Send e-mail to name.one@plife.com, name.two@plife.com
Trigger:SnapshotUpdated
Lastrun: 7/1/2006 7:00 AM
Mail Sent to:Mail sent to name.one@plife.com, name.two@plife.com
Snapshots successful
5/1/2006 7:00:02 AM
6/1/2006 7:00:03 AM
6/1/2006 8:26:31 AM
7/1/2006 7:00:07 AM
8/1/2006 7:00:24 AM
9/1/2006 7:00:19 AM
Logfile for "Ops Specialists/1035 Exchange Issue - Details (200407-R01)" report. Portion of log file:

ReportingServicesService!dbpolling!f!9/1/2006-07:00:04:: EventPolling processing item e869f577-b189-4e6a-a9be-37b61d737340
ReportingServicesService!dbpolling!13!9/1/2006-07:00:04:: EventPolling processing item 2b275311-0ee4-4cef-972d-86de48f4f360
ReportingServicesService!dbpolling!c!9/1/2006-07:00:04:: EventPolling processing item 3df3960a-ee28-4d5e-a46e-326ab5a0060d
ReportingServicesService!library!c!9/1/2006-07:00:04:: Schedule e11a00d3-56b6-46b3-b2d3-9e3156805be8 executed at 9/1/2006 7:00:04 AM.
ReportingServicesService!library!13!9/1/2006-07:00:04:: Schedule 177e9473-2c81-4152-a89f-137d25a556e1 executed at 9/1/2006 7:00:04 AM.
ReportingServicesService!library!4!9/1/2006-07:00:04:: Schedule 89071f26-435c-4cc7-bd9a-23907ef32b5c executed at 9/1/2006 7:00:04 AM.
ReportingServicesService!library!f!9/1/2006-07:00:04:: Schedule 3720d20c-460e-4549-9b57-c3cc97919f5f executed at 9/1/2006 7:00:04 AM.
ReportingServicesService!schedule!13!9/1/2006-07:00:04:: Updating report execution snapshot for report /Client Services/Ops Specialists/1035 Exchange Issue - Details (200407-R01)
ReportingServicesService!schedule!c!9/1/2006-07:00:04:: Updating report execution snapshot for report /Client Services/Ops Specialists/1035 Exchange Bridged - Details (200407-R01)
ReportingServicesService!schedule!f!9/1/2006-07:00:04:: Updating report execution snapshot for report /Client Services/Ops Specialists/Policy Counts (200401-R01)
ReportingServicesService!schedule!4!9/1/2006-07:00:04:: Updating report execution snapshot for report /Client Services/Inforce Admin/CAAB226 (200501-R03)
ReportingServicesService!dbpolling!d!9/1/2006-07:00:08:: EventPolling processing 4 more items. 8 Total items in internal queue.
I have a SSIS package that contains a DTS 2000 package in it. The DTS 2000 package imports data into several tables from an ODBC data source. When I execute the package through BIDS, no problems; everything works great. I am now trying to execute the SSIS package from my stored procedure, and it gives me the following error:

Error: 2007-01-30 11:54:24.06
Code: 0x00000000
Source: Populate IncrTables
Description: System.Runtime.InteropServices.COMException (0x80040427): Execution was canceled by user.
   at DTS.PackageClass.Execute()
   at Microsoft.SqlServer.Dts.Tasks.Exec80PackageTask.Exec80PackageTask.ExecuteThread()
End Error
I did a search for this and found KB 904796. It had the exact error message, but I don't believe my package uses 2000 metadata services. Just to be safe, I reinstalled the backward compatibility features and the DTS 2000 tools on the server. That still did not fix anything. I found another forum that suggested loading the DTS 2000 package internally, which I did, and it did not fix anything. I am using a password for the protection level, so that is not causing my issue. Does anyone else have any suggestions as to what I might be able to try?
SQL 2005 Dev Ed SP1 & post SP1 hotfixes installed Win 2k3 server Thanks! John
All- Is there a way that I can embed a call to a stored procedure into an existing INSERT section in a TableAdapter? Say my objective is to call a stored procedure called personfill automatically RIGHT AFTER the TableAdapter inserts a row into the person table. One catch is that the stored procedure must be sent the value of the unique identifier field person_id, which was created for the new person record automatically by the db. (If this is not possible to do, I might try using a TRIGGER in the person table.) Below is the INSERT code of the TableAdapter. My guess is that if I could call a procedure, I would want to put the call between lines 12 and 13. Your comments would be most appreciated!!! -Kurt

1 <InsertCommand>
2 <DbCommand CommandType="Text" ModifiedByUser="false">
3 <CommandText>INSERT INTO [person] ([family_id], [circle_id], [person_type_id], [last], [first], [username], [password]) VALUES (@family_id, @circle_id, @person_type_id, @last, @first, @username, @password)</CommandText>
4 <Parameters>
5 <Parameter AllowDbNull="false" AutogeneratedName="" DataSourceName="" DbType="Int16" Direction="Input" ParameterName="@family_id" Precision="0" ProviderType="SmallInt" Scale="0" Size="0" SourceColumn="family_id" SourceColumnNullMapping="false" SourceVersion="Current" />
6 <Parameter AllowDbNull="false" AutogeneratedName="" DataSourceName="" DbType="Int16" Direction="Input" ParameterName="@circle_id" Precision="0" ProviderType="SmallInt" Scale="0" Size="0" SourceColumn="circle_id" SourceColumnNullMapping="false" SourceVersion="Current" />
7 <Parameter AllowDbNull="false" AutogeneratedName="" DataSourceName="" DbType="Int16" Direction="Input" ParameterName="@person_type_id" Precision="0" ProviderType="SmallInt" Scale="0" Size="0" SourceColumn="person_type_id" SourceColumnNullMapping="false" SourceVersion="Current" />
8 <Parameter AllowDbNull="false" AutogeneratedName="" DataSourceName="" DbType="AnsiString" Direction="Input" ParameterName="@last" Precision="0" ProviderType="VarChar" Scale="0" Size="0" SourceColumn="last" SourceColumnNullMapping="false" SourceVersion="Current" />
9 <Parameter AllowDbNull="false" AutogeneratedName="" DataSourceName="" DbType="AnsiString" Direction="Input" ParameterName="@first" Precision="0" ProviderType="VarChar" Scale="0" Size="0" SourceColumn="first" SourceColumnNullMapping="false" SourceVersion="Current" />
10 <Parameter AllowDbNull="true" AutogeneratedName="" DataSourceName="" DbType="AnsiString" Direction="Input" ParameterName="@username" Precision="0" ProviderType="VarChar" Scale="0" Size="0" SourceColumn="username" SourceColumnNullMapping="false" SourceVersion="Current" />
11 <Parameter AllowDbNull="true" AutogeneratedName="" DataSourceName="" DbType="AnsiString" Direction="Input" ParameterName="@password" Precision="0" ProviderType="VarChar" Scale="0" Size="0" SourceColumn="password" SourceColumnNullMapping="false" SourceVersion="Current" />
12 </Parameters>
13 </DbCommand>
14 </InsertCommand>
15 <SelectCommand>
16 <DbCommand CommandType="Text" ModifiedByUser="true">
17
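Since the CommandType is Text, one option is to extend the CommandText itself so the procedure call runs in the same batch, right after the INSERT; a hedged sketch of what the new CommandText could contain, assuming person_id is an IDENTITY column and personfill takes it as a parameter:

INSERT INTO [person] ([family_id], [circle_id], [person_type_id], [last], [first], [username], [password])
VALUES (@family_id, @circle_id, @person_type_id, @last, @first, @username, @password);

DECLARE @person_id int;
SET @person_id = SCOPE_IDENTITY();   -- id generated for the row just inserted
EXEC dbo.personfill @person_id;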
I have found an (encrypted) SP which takes ~20 seconds to run on one of our client DBs.
If I drop and recreate (or alter) the SP, giving it the same contents, it will run in ~1 second.
I need to know:
1. why the SP started running so slowly
2. how to stop it happening again
3. how to fix it without resorting to drop/recreate (I can't exactly write a batch script to recreate all our SPs every so often)
I have tried sp_updatestats and sp_recompile before running the sp to no avail...
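That symptom (slow until the plan is rebuilt) usually points to a cached plan compiled for unrepresentative parameter values, i.e. parameter sniffing. One blunt instrument worth knowing about is evicting cached plans without touching the procedure; a hedged sketch (note this clears plans server-wide, so use with care in production):

-- Discard all cached plans; each procedure recompiles on its next execution
DBCC FREEPROCCACHE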
I have a stored procedure which executes about forty other stored procedures in several different databases. All of these other procedures truncate tables and insert new data into those tables selected from still other tables.
I want to run this top-level procedure using an account which can't do anything else.
Is there a simple way to give it all the permissions it needs without empowering it to do anything else?
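On SQL Server 2005 and later, one option is EXECUTE AS on the top-level procedure, so it runs with its owner's permissions while the restricted account only gets EXECUTE on it. A hedged sketch with hypothetical names; note that TRUNCATE TABLE requires ALTER permission (ownership chaining does not cover it), and that cross-database work needs additional trust configuration:

-- The procedure body runs with the owner's permissions
CREATE PROCEDURE dbo.RunNightlyRefresh
WITH EXECUTE AS OWNER
AS
BEGIN
    EXEC dbo.RefreshTableA   -- hypothetical child procedures
    EXEC dbo.RefreshTableB
END
GO
GRANT EXECUTE ON dbo.RunNightlyRefresh TO NightlyJobUser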
I am running a DTS Package. I have a temp table with 1494 records. I am inserting a 'Y' or 'N' into a temp table #HasClaims. The temp table with the Provider IDs (PRPR_ID) is #TempFACETSNODupes. The @identityID is an identity field counting back from 1494 to 1. I count back from the max value of the identityid (1494) in the WHILE loop until I get through all the records. The idea is to check for the existence of a claims and authorization record and put a 'Y' or 'N' record in the temp table #HasClaims. This is running in the Execute SQL Task object of the DTS Package.

The package runs successfully but only inserts 200 rows into the new temp table. There should be a row for each provider. Each time it runs, the number of rows it returns is different; sometimes it is 205, then 185, then 210, before it completes the DTS package. Has anyone run into WHILE looping problems within an Execute SQL task in a DTS package (SQL 2000) like this?

SELECT @identityID = MAX(IDENTITYID) FROM #TempFACETSNODupes
WHILE @identityID >= 1
BEGIN
    -- @PRPRID is the placeholder for the PRPR_ID (Provider)
    SELECT @PRPRID = PRPR_ID FROM #TempFACETSNODupes WHERE IDENTITYID = @identityID

    IF EXISTS (SELECT CLCL_ID FROM dbo.CMC_CLCL_CLAIM CLCL WHERE CLCL.PRPR_ID = @PRPRID)
    BEGIN
        INSERT INTO #HasClaims (PRPR_ID, HasClaims, IdentityID)
        VALUES (@PRPRID, 'Y', @identityID)
    END
    ELSE
        INSERT INTO #HasClaims (PRPR_ID, HasClaims, IdentityID)
        VALUES (@PRPRID, 'N', @identityID)

    SELECT @identityID = @identityID - 1
END
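For what it's worth, the whole loop can be collapsed into one set-based statement, which sidesteps the looping behavior entirely; a hedged sketch against the same tables:

INSERT INTO #HasClaims (PRPR_ID, HasClaims, IdentityID)
SELECT t.PRPR_ID,
       CASE WHEN EXISTS (SELECT 1 FROM dbo.CMC_CLCL_CLAIM c WHERE c.PRPR_ID = t.PRPR_ID)
            THEN 'Y' ELSE 'N'
       END,
       t.IDENTITYID
FROM #TempFACETSNODupes t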
I have a T-SQL statement that runs in under 4 minutes in Management Studio and freezes forever in Integration Services. The statement ran fine before in both places as well, but now I'm running on an x64 OS with everything 64-bit, including SQL.
The statement is this:
DECLARE @data datetime
SET @data = DATEADD(DAY, -1, DATEADD(YEAR, -1, GETDATE()))
INSERT INTO TempDateModelo (Data) (SELECT Data FROM [tPosting Date] a WHERE a.Data>=@data)
UPDATE fModelo SET [Vendas MM Anterior]= (SELECT SUM(b.[Custo Vendas]) FROM fValues b WHERE (b.[Posting Date] BETWEEN DATEADD(YEAR,-1,DATEADD(DAY,-27,fModelo.[Posting Date])) AND DATEADD(YEAR,-1,fModelo.[Posting Date])) AND b.[Item Department]=fModelo.[Item Department] AND b.[Centro Custo]=fModelo.[Centro Custo])
UPDATE fModelo SET [Vendas MM Posterior]= (SELECT SUM(b.[Custo Vendas]) FROM fValues b WHERE (b.[Posting Date] BETWEEN DATEADD(YEAR,-1,fModelo.[Posting Date]) AND DATEADD(YEAR,-1,DATEADD(DAY,27,fModelo.[Posting Date]))) AND b.[Item Department]=fModelo.[Item Department] AND b.[Centro Custo]=fModelo.[Centro Custo])
UPDATE fModelo SET Dia = DATEPART(DAY, [Posting Date]), Mes = MONTH([Posting Date])
We have a somewhat complex user structure, in the sense that depending on the type of user, the data resides in different tables. Therefore I needed a stored procedure that finds out which table to look in for a certain column. Below is such a stored procedure, and it works like it should, but my problem is that I don't know how to retrieve the result (which should be a string, so I can't use RETURN).
I've tried using an OUTPUT variable, but since I just run EXEC (@statement) at the end, I can't really set an output variable the common way (as in EXEC @outputVariable = PMC_User_GetUserValue(arg1, arg2, ...)), or can I?
I have also tried to use SELECT to catch the result somehow, but no luck, and Google didn't help either, so now I'm hoping for one of you... Note that you don't have to bother with much of the code except the end of it, where I want it to return somehow, or figure out a way to call this stored procedure and retrieve the result.
Thanks in advance ripern
-- Retrieves the value of column @columnName for credential id @credID
ALTER PROCEDURE [dbo].[PMC_User_GetUserValue]
    @credID int,
    @columnName nvarchar(50)
AS
DECLARE @userDataTable nvarchar(50)
DECLARE @userDataID int
DECLARE @statement nvarchar(500)
SET @statement = ' '
SET @userDataID = (SELECT PMC_UserMapping.fk_userDataID FROM PMC_UserMapping INNER JOIN PMC_User ON PMC_UserMapping.fk_user_id = PMC_User.id WHERE PMC_User.fk_credentials_id = @credID)
SET @userDataTable = (SELECT PMC_UserType.userDataTable FROM PMC_UserType INNER JOIN PMC_UserMapping ON PMC_UserType.id = PMC_UserMapping.fk_usertype_id INNER JOIN PMC_User ON PMC_UserMapping.fk_user_id = PMC_User.id WHERE PMC_User.fk_credentials_id = @credID)
SET @statement = 'SELECT ' + @columnName + ' AS columnValue FROM ' + @userDataTable + ' WHERE id=' + convert(nvarchar, @userDataID)
-- Checks whether the given column name exists in the user data table for the given credential id.
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS
           WHERE TABLE_NAME = @userDataTable AND COLUMN_NAME = @columnName)
BEGIN
    EXEC (@statement)
END
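The result can be captured by switching from EXEC() to sp_executesql, which supports OUTPUT parameters inside the dynamic statement; a minimal sketch of the end of the procedure, reusing the variables above:

DECLARE @result nvarchar(4000)
SET @statement = N'SELECT @out = ' + QUOTENAME(@columnName)
               + N' FROM ' + QUOTENAME(@userDataTable)
               + N' WHERE id = @id'

EXEC sp_executesql @statement,
                   N'@out nvarchar(4000) OUTPUT, @id int',
                   @out = @result OUTPUT,
                   @id = @userDataID

SELECT @result AS columnValue   -- or assign it to an OUTPUT parameter of this procedure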
Is there a way (using an included SSIS task rather than coding a Script Task) to detect whether a package has run longer than a specified period of time?
So I can send an email to operators notifying them that a job is taking longer than usual.
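If the package runs from a SQL Agent job, one option that needs no Script Task is a separate watchdog job that polls msdb for long-running executions and sends the mail; a hedged sketch (the 60-minute threshold is an assumption, and in practice you would filter to the most recent session in sysjobactivity):

-- Jobs still running that started more than 60 minutes ago
SELECT j.name,
       a.start_execution_date,
       DATEDIFF(MINUTE, a.start_execution_date, GETDATE()) AS minutes_running
FROM msdb.dbo.sysjobactivity AS a
JOIN msdb.dbo.sysjobs AS j ON j.job_id = a.job_id
WHERE a.start_execution_date IS NOT NULL
  AND a.stop_execution_date IS NULL
  AND DATEDIFF(MINUTE, a.start_execution_date, GETDATE()) > 60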
I apologize for the long post but I am trying to give as much information as I can about the steps I've taken to troubleshoot this.
We have a stored procedure that builds a SQL statement and executes it using the EXECUTE command. When I execute the stored procedure through Query Analyzer, it takes close to 5 seconds to execute. When I print out the exact same statement and execute it directly in Query Analyzer as "raw SQL", it takes 0.5 seconds, meaning it takes 10 times longer for the code to execute in the stored proc. I altered the stored proc to execute the printed SQL instead of building it, but it still takes the full 5 seconds, and there were no changes in the execution plan. This makes me confident that the issue is not caused by the dynamic SQL.

I've used WITH RECOMPILE to make sure that the stored procedure caches the most recent execution plan. When I compare the execution plans, the stored proc uses a nested loop whereas the raw SQL statement uses a hash join. Seeing that, I added the hash hint to the stored proc, and doing so brought the execution time down from 5 secs to 2 secs; but still the raw SQL statement uses a clustered index whereas the stored proc uses a non-clustered index, and that makes the statement 4 times slower. This proves how efficient clustered indexes are over non-clustered ones, but it doesn't help me since, as far as I know, I can't force SQL Server to use the clustered index.
Does anyone know why SQL Server is generating such an inefficient execution plan for the stored proc compared to the plan it generates when executing the raw SQL statement? The only thing I can think of is that some stats are not updated and that somehow throws off the stored proc. But then again, shouldn't that affect the raw SQL statement too?
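As an aside, SQL Server can be told to use the clustered index with a table hint; a hedged sketch with hypothetical names (index id 1 is always the clustered index):

SELECT *
FROM dbo.MyTable WITH (INDEX(1))   -- force the clustered index
WHERE SomeColumn = @value          -- hypothetical predicate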
There is a query which, when executed in grid mode (Ctrl+D), takes approx 0.02 seconds (about 21,000 rows). But when I execute it in text mode, it takes about 0.40 seconds!! Why this difference? Also, when the records from this table are read from a VB application, they are equally slow (as in text mode!). Why is it so slow in text mode and relatively faster in grid mode? Has anyone got any idea on 'firehose'-style cursors (which may speed up access of data in the VB application)?
I am able to run the package successfully in the test database, but not in the production database. It throws up an error saying:
Description: Unable to load the package as XML because the package does not have a valid XML format. A specific XML parser error will be posted.
Description: Failed to open package file "D:\TAHOE\APPS\SSISPackages\Integration Services Packages\ArchiveMain.dtsx" due to error 0x80070015 "The device is not ready.". This happens when loading a package and the file cannot be opened or loaded correctly into the XML document. This can be the result of either providing an incorrect file name when calling LoadPackage, or the XML file specified having an incorrect format.
End Error
Could not load package "D:\TAHOE\APPS\SSISPackages\Integration Services Packages\ArchiveMain.dtsx" because of error 0xC0011002.
Description: Failed to open package file "D:\TAHOE\APPS\SSISPackages\Integration Services Packages\ArchiveMain.dtsx" due to error 0x80070015 "The device is not ready.". This happens when loading a package and the file cannot be opened or loaded correctly into the XML document. This can be the result of either providing an incorrect file name when calling LoadPackage, or the XML file specified having an incorrect format.
We are using a modeling technique called Anchor Modeling in our data warehouses. You can read more about the technique itself at our homepage http://www.intellibis.se, where we have published a fact sheet and a recently held presentation (TDWI European conference). One of the features of this technique is its simple way to historize data. This is done by having a fromDate column which, together with the surrogate key, will yield a unique combination. On the tables that have this kind of historization we add a primary key, which in turn will create a clustered index, with the following specification: (surrogateKey asc, fromDate desc). This will physically order data on the storage media according to the specified columns and ordering.

Now I move on to create a "latest view" of this table, which does a subselect to find the latest version for every surrogateKey using max(fromDate). Should not the optimizer now figure out that data is ordered so that the latest version always comes first for every surrogateKey, hence any sorting would be unnecessary?

If I look at the actual execution plan after running a query that uses the view, there is a sort in the plan, but the cost is always 0%. Does this mean that it did not sort the data, or that it did call a sorting routine, but it actually took very little time to do the sorting? If so, is there a reason that it has to do the sorting, or could it have been left out by an even smarter optimizer?
I would also like to applaud the people behind the optimizer, since it will figure out which tables are in fact necessary to the query and eliminate the others, even if I have left-joined them into the view I am using. This speeds up performance and makes anchor modeling feasible. Unfortunately, optimizers from other vendors seem to have trouble doing this...
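For concreteness, a minimal sketch of the historized table and its latest view as described above, with hypothetical names:

CREATE TABLE dbo.AC_Attribute (
    surrogateKey int           NOT NULL,
    fromDate     datetime      NOT NULL,
    value        nvarchar(100) NOT NULL,
    CONSTRAINT PK_AC_Attribute
        PRIMARY KEY CLUSTERED (surrogateKey ASC, fromDate DESC)
)
GO
-- Latest view: one row per surrogateKey, carrying the most recent version
CREATE VIEW dbo.AC_Attribute_Latest AS
SELECT a.surrogateKey, a.fromDate, a.value
FROM dbo.AC_Attribute a
WHERE a.fromDate = (SELECT MAX(b.fromDate)
                    FROM dbo.AC_Attribute b
                    WHERE b.surrogateKey = a.surrogateKey)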
I've been racking my brain all day, and I finally decided to ask for help. I've got two tables, with rows from the first that need to be sorted by the second. The problem is that the rows don't always exist in the second table. I've tried various forms of INNER, LEFT, RIGHT, OUTER, LEFT OUTER, CROSS, etc., etc., etc. and nothing (oh yeah, UNION too). Every time I get close, I lose the records that don't have matches.
Something close:

SELECT A.IDDoc, B.First
FROM A
LEFT JOIN B ON A.IDDoc = B.IDDoc
WHERE B.Dept = 'A'
ORDER BY B.First
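The WHERE clause is what loses the unmatched rows: for rows of A with no match in B, B.Dept is NULL, so the filter discards them and the LEFT JOIN degenerates into an INNER JOIN. Moving the condition into the ON clause keeps every row from A; a minimal sketch against the same tables:

SELECT A.IDDoc, B.First
FROM A
LEFT JOIN B
    ON A.IDDoc = B.IDDoc
   AND B.Dept = 'A'        -- filter applied during the join, not after it
ORDER BY B.First            -- unmatched rows sort with NULL in First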