Does anyone know why SSIS sometimes just sits in the Pre-Execute phase of a data flow and does nothing? It doesn't seem to matter how elaborate the data flow is or how much data is involved; the Pre-Execute phase can sometimes take 80% of the task's run time.
I have an SSRS report with 6 columns, each containing a count of the total number of applicants meeting certain criteria. Users want to click on each column to see the applicants' basic information, and they also want the ability to export the data to Excel.
I know that I can create 6 drillthrough reports with the applicants' basic information and link each one to the count in the corresponding column, but I was wondering whether it is possible to write a stored procedure with all 6 SELECT queries and execute only the one that matches the column the user clicks on the main report?
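It should be possible to collapse the six queries into one stored procedure that takes the clicked column as a parameter and branches on it, with the drillthrough report passing that parameter. A rough sketch; the procedure name, parameter values, and Applicants columns are all invented for illustration:

CREATE PROCEDURE dbo.GetApplicantDetail
    @Criteria varchar(20)   -- which of the six counts was clicked on the main report
AS
BEGIN
    SET NOCOUNT ON;

    IF @Criteria = 'Approved'
        SELECT ApplicantID, FirstName, LastName, AppliedDate
        FROM dbo.Applicants
        WHERE Status = 'Approved';
    ELSE IF @Criteria = 'Pending'
        SELECT ApplicantID, FirstName, LastName, AppliedDate
        FROM dbo.Applicants
        WHERE Status = 'Pending';
    -- ... four more branches, one per column on the main report
END

A single drillthrough report whose dataset calls this procedure can then serve all six columns, with each column's action passing a different @Criteria value; SSRS's built-in export covers the Excel requirement.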
Does anyone know how to tell how long an auto update statistics run took? I looked at DBCC SHOW_STATISTICS and it shows the time the stats were last updated, but not how long the update took. Thanks.
This works fine, but now I need to calculate the total duration of a request based on the durations of the tasks completed in that request, grouped by Req_ID. I would like to use the CASE statement I already have to determine the SLA_Mins for each task and add them together to get the total SLA_Mins for the request.
Below is the create table schema and data
GO
/****** Object: Table [dbo].[MidrangeOtherSourceControl] Script Date: 06/03/2015 18:13:15 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[MidrangeOtherSourceControl](
    [Req_ID] [float] NULL,
    [Service_Name] [nvarchar](255) NULL,
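The script above is cut off, so the sketch below assumes task start/end columns (Start_Time, End_Time) and a simplified stand-in for the real CASE logic; the point is just the shape: put the per-task CASE inside SUM and group by Req_ID.

SELECT Req_ID,
       SUM(CASE WHEN Service_Name = 'SomeTask'   -- the existing per-task SLA CASE logic goes here
                THEN DATEDIFF(minute, Start_Time, End_Time)
                ELSE 0
           END) AS Total_SLA_Mins
FROM dbo.MidrangeOtherSourceControl
GROUP BY Req_ID;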
I want to update the Flag column in the second table based on the Adder names.
If the application has at least one AIX row and the Adder name is UDB, then the flag should be True. If the application has more than one AIX row and the Adder names are different, then the flag should be NULL.
AppName | OS      | Adder
App1    | Windows | NULL
App1    | Linux   | UDB
App1    | AIX     | UDB
App1    | Linux   | Sql
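One possible shape for the update is to summarise the AIX rows per application and join that summary back to the second table. The table names (dbo.AppInventory for the first table, dbo.AppFlags for the second) and the exact reading of the two rules are assumptions:

WITH summary AS (
    SELECT AppName,
           SUM(CASE WHEN OS = 'AIX' THEN 1 ELSE 0 END)                   AS aix_rows,
           COUNT(DISTINCT CASE WHEN OS = 'AIX' THEN Adder END)           AS distinct_aix_adders,
           SUM(CASE WHEN OS = 'AIX' AND Adder = 'UDB' THEN 1 ELSE 0 END) AS aix_udb_rows
    FROM dbo.AppInventory
    GROUP BY AppName
)
UPDATE f
SET Flag = CASE WHEN s.aix_rows > 1 AND s.distinct_aix_adders > 1 THEN NULL
                WHEN s.aix_rows >= 1 AND s.aix_udb_rows >= 1 THEN 'True'
                ELSE f.Flag
           END
FROM dbo.AppFlags AS f
JOIN summary AS s
    ON s.AppName = f.AppName;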
SELECT h.JobNum,
       (CASE WHEN MONTH(h.JobCompletionDate) = 1
             THEN DATEDIFF(day, MIN(l.ClockInDate), MAX(l.ClockInDate))
             ELSE 0 END) AS JAN,
       (CASE WHEN MONTH(h.JobCompletionDate) = 2
             THEN DATEDIFF(day, MIN(l.ClockInDate), MAX(l.ClockInDate))
             ELSE 0 END) AS FEB,
       ...
FROM JobHead h
INNER JOIN LaborDtl l ON h.JobNum = l.JobNum
WHERE JobCompletionDate >= '20070101'
  AND JobCompletionDate < '20080101'
  AND l.ClockInTime <> 0
GROUP BY h.JobNum, h.JobCompletionDate
The query shows, for each job, the month in which the job completed, and the number of hours it took to complete. I'm calculating the number of days' duration by doing a DATEDIFF between the oldest and newest ClockInDates. I need to ignore adjustment transactions in the LaborDtl table; these rows are easily identified as they have ClockInTime values of 0. So far, so good. Now here's my problem.
There are some jobs which have only one "real" labor transaction; this could happen if the job only took one day to complete. Other labor transactions may exist for that job, but let's say they are adjustments which we can ignore -- the date they were entered should not extend the duration of the job. In this situation, my DATEDIFF between the oldest valid transaction and the newest returns 0. I don't have to count hours between ClockInTime and ClockOutTime. The rule is simply that if there is only one "real" labor transaction, I need to count this as a 1-day job.
I thought a nested CASE statement or expression might be the way to go but I didn't make any real progress.
Any ideas to solve this problem would be appreciated.
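One way to express the rule without a deeply nested CASE per month is to floor the computed span at one day, since a job whose valid transactions all share the same ClockInDate gives a DATEDIFF of 0. A sketch of the pattern for a single month, reusing the join and filter from the query above:

SELECT h.JobNum,
       (CASE WHEN MONTH(h.JobCompletionDate) = 1
             THEN CASE WHEN DATEDIFF(day, MIN(l.ClockInDate), MAX(l.ClockInDate)) = 0
                       THEN 1
                       ELSE DATEDIFF(day, MIN(l.ClockInDate), MAX(l.ClockInDate)) END
             ELSE 0 END) AS JAN
       -- ... repeat for the remaining months
FROM JobHead h
INNER JOIN LaborDtl l ON h.JobNum = l.JobNum
WHERE h.JobCompletionDate >= '20070101'
  AND h.JobCompletionDate < '20080101'
  AND l.ClockInTime <> 0
GROUP BY h.JobNum, h.JobCompletionDate

This treats any zero-day span as a 1-day job, which also covers the single-transaction case described above.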
I got the error below when executing a DELETE SQL query in an SSIS Execute SQL Task:
Error: 0xC002F210 at DelAFKO, Execute SQL Task: Executing the query "DELETE FROM [CQMS_SAP].[dbo].[AFKO]" failed with the following error: "The transaction log for database 'CQMS_SAP' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
But my disk has more than 6 GB of free space, and when I query the log_reuse_wait_desc column in sys.databases it returns the value "NOTHING".
This has me confused; does anyone have any experience with this?
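One thing worth checking is the space inside the log file itself rather than on the disk: the log can be full even when the drive is not, for example because the log file has a fixed MAXSIZE or autogrowth is disabled. A couple of standard diagnostic queries:

-- Percentage of each database's log that is actually in use
DBCC SQLPERF(LOGSPACE);

-- Size and growth settings of the log file for the current database
SELECT name, size, max_size, growth, is_percent_growth
FROM sys.database_files
WHERE type_desc = 'LOG';

-- The reuse wait, checked at the moment the DELETE fails
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'CQMS_SAP';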
I have an 'Execute SQL Task' in my control flow; it returns a value which I am assigning to a variable. Based on the value of the variable, I need to control my other flows. If the variable's value is 1 then I should invoke a data flow, otherwise I should write a failure message to the event viewer. Could someone please provide some input on how this can be done?
'Execute SQL Task' ----->value 1 ------>data flow to be executed
'Execute SQL Task' ----->value !=1 ------> write some error message in the event viewer and no tasks should be executed after that.
I am trying to create a CLR-based stored procedure in C#. When I tried printing a simple "Hello" from it, it worked fine. Now the requirement is to run an exe file from it. For that I use Process.Start, but when I try to execute the procedure I get security exceptions. Can someone please help? Following is the code snippet.
using System.Diagnostics;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void RunProc(string arg)
    {
        // Send a test message back to the client, then launch the external program.
        SqlPipe pipe = SqlContext.Pipe;
        pipe.Send("Hello");
        Process.Start(@"E:\test.exe");
    }
}
CREATE ASSEMBLY [RunProcess]
FROM 'RunProcess.dll'
GO

CREATE PROCEDURE dbo.sqlclr_RunProc
(
    @arg nvarchar(1024)
)
AS EXTERNAL NAME [RunProcess].[StoredProcedures].[RunProc]
GO
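For what it's worth, launching an external process from SQLCLR typically requires the assembly to be catalogued with the UNSAFE permission set, and either the database marked TRUSTWORTHY or the assembly signed with an appropriate key; without that, Process.Start raises exactly the kind of security exceptions described. A sketch, assuming the simple TRUSTWORTHY route and the same assembly file as above:

ALTER DATABASE CurrentDb SET TRUSTWORTHY ON;   -- 'CurrentDb' is a placeholder for the database holding the assembly
GO
CREATE ASSEMBLY [RunProcess]
FROM 'RunProcess.dll'
WITH PERMISSION_SET = UNSAFE;                  -- SAFE/EXTERNAL_ACCESS are generally not enough for Process.Start
GO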
I am writing a number of data migration scripts, like the (simplified version) below
Code:
INSERT INTO newDatabase.dbo.DaDestinationTable (col1, col2, col3)
SELECT col1, col27, col13
FROM OldDatabase.dbo.DaSourceTable
There are no validation rules defined in the database (don't ask why), only in the program code. Because of the absence of validations in the database, the data migration scripts could violate business rules (NOT NULL, FK violations, domain values in a column, ...).
A solution could be that both the program and the data migration call the same stored procedure to insert records in the destination table, so the validation rules must only be defined in one place (the stored procedure).
I have never used stored procedures in a "set-based" situation. Is this even possible? The only solution I see is to call those sp's in a loop (WHILE or CURSOR). Something like
Code:
SET @my_id = -1

SELECT @my_id = MIN(id)
FROM OldDatabase.dbo.DaSourceTable
WHERE id > @my_id

WHILE @my_id IS NOT NULL
[Code] ....
For each record, the table OldDatabase.dbo.DaSourceTable is hit twice: once to get the next id (@my_id), and once to get the column values. I could combine both into a single SELECT by using ROW_NUMBER(), but I doubt that would perform any faster.
I only work set-based; I have barely ever used a WHILE loop, let alone a CURSOR, so I don't know what the consequences will be for performance. There are millions of records that have to be migrated.
I was thinking of CROSS APPLY, but you can't use that in combination with a SP.
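One way to keep the validation rules in a single stored procedure and still work set-based (on SQL Server 2008 or later) is to let the procedure accept a table-valued parameter and do a filtered INSERT ... SELECT. A rough sketch; the table type, procedure name, and rule details are all invented for illustration:

-- Hypothetical names throughout; adapt to the real schema and rules.
CREATE TYPE dbo.DaSourceRows AS TABLE (col1 int, col2 nvarchar(50), col3 int);
GO
CREATE PROCEDURE dbo.InsertDaDestinationRows
    @rows dbo.DaSourceRows READONLY
AS
BEGIN
    SET NOCOUNT ON;

    -- All business rules live here and are applied to the whole set at once.
    INSERT INTO newDatabase.dbo.DaDestinationTable (col1, col2, col3)
    SELECT r.col1, r.col2, r.col3
    FROM @rows AS r
    WHERE r.col1 IS NOT NULL                        -- NOT NULL rule
      AND EXISTS (SELECT 1 FROM dbo.LookupTable l   -- FK-style rule
                  WHERE l.id = r.col3);
END
GO

-- The migration script then makes one call per batch instead of one per row:
DECLARE @batch dbo.DaSourceRows;
INSERT INTO @batch (col1, col2, col3)
SELECT col1, col27, col13 FROM OldDatabase.dbo.DaSourceTable;
EXEC dbo.InsertDaDestinationRows @rows = @batch;

The application code can call the same procedure, so the rules are defined in one place without resorting to a WHILE loop or CURSOR.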
I am trying to programmatically execute a package that contains an Execute SQL Task component bound to a variable for its "SqlStatementSource" property (via an expression). The variable is of type String and contains a simple value of "SELECT 1". The Execute SQL Task contains an expression that sets the SqlStatementSource property to the value of this variable.
The package runs fine when I execute it via dtexec or BIDS, but when I attempt to run it via the object model, I receive the following error message:
The result of the expression ""@[User::Sql]"" on property "SqlStatementSource" cannot be written to the property. The expression was evaluated, but cannot be set on the property.
I did a search on this forum and noticed quite a few threads about this same issue, but no explanation/solution. We have quite a few packages that have dynamically constructed SQL statements for Execute SQL Tasks, and they are all failing to run via the object model. Is there something that I am missing?
My product was developed for and works correctly on SQL Server 2000. However, when we upgraded to 2005, we found that certain system stored procedures were different, causing our product to break.
We can easily change our stored procedures to work in 2005, but we have a large client base, some of whom will be using each version. Our current solution is to check the version of SQL Server during installation and choose which script to use at that time in order to have an appropriate stored procedure for that version, but we are concerned about users who install our product with SQL Server 2000 and then upgrade to SQL Server 2005.
How can I make a stored procedure that will run differently depending on the version? I tried something like:
if (select charindex('2000', @@version)) > 0
begin
    -- SQL Server 2000
    SELECT ...
    FROM ...
    WHERE ...
end
else
begin
    -- SQL Server 2005
    SELECT ...
    FROM ...
    WHERE ...
end
Unfortunately, the system tables I'm selecting from have different structures in the different versions (one example is msdb.dbo.sysjobschedules versus msdb.dbo.sysschedules), and even though the code never reaches the SQL Server 2000 branch on 2005, the whole procedure is parsed for errors before it can be saved, so it will not allow this.
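A common way around the parse-time check is to push each version-specific statement into dynamic SQL, which is only compiled when its branch actually runs. A sketch, with the queries left as the same placeholders used above:

if charindex('2000', @@version) > 0
begin
    -- SQL Server 2000: schedule details live in sysjobschedules
    exec sp_executesql N'SELECT ... FROM msdb.dbo.sysjobschedules WHERE ...';
end
else
begin
    -- SQL Server 2005: schedules moved to sysschedules
    exec sp_executesql N'SELECT ... FROM msdb.dbo.sysschedules WHERE ...';
end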
I am new to this, but have scoured the web and not found an answer to my question...
I have an execute process task that runs a simple batch file. When this batch file completes with an ERRORLEVEL greater than 0, I would like the task to fail. I thought this simply meant setting the "FailTaskIfReturnCodeIsNotSuccessValue" property to true, and setting the SuccessValue to 0. However, this does not appear to work.
Even with a simple batch file forcing the errorcode to 1 as follows, the task still completes "successfully".
We are building a data load application where parameters are stored in a table, and there are multiple packages for each load. There is an IsChecked column; only when it is 1 should the child package execute. I created a master package in which I use an Execute SQL Task to store the result in a variable, and based on the result the child package should execute. In the Execute SQL Task I selected the result set as "full result set". I am getting the error below.
[Execute SQL Task] Error: Executing the query "SELECT isnull(ID ,0) AS ID FROM DataLoadParameter..." failed with the following error: "The type of the value (DBNull) being assigned to variable "User::LoadValue" differs from the current variable type (Int32). Variables may not change type during execution. Variable types are strict, except for variables of type Object.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
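The DBNull complaint usually means either that the query can hand a NULL to the mapped variable (for example when no row matches) or that the variable type doesn't fit the result-set choice: a full result set has to be mapped to a variable of type Object, while a single row can go into an Int32. A sketch of a query shape that always returns exactly one non-NULL integer, with the table and column names assumed from the error message:

-- ISNULL around the aggregate covers the case where no row matches.
SELECT ISNULL(MAX(ID), 0) AS ID
FROM dbo.DataLoadParameter
WHERE IsChecked = 1;

With ResultSet set to "Single row", this maps cleanly to an Int32 variable such as User::LoadValue.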
I need to update the price of one row based on another. In this case it takes the price from a SKU which has a value in the "ShadowOf" field; ShadowOf holds the parent SKU. It takes the parent SKU's price, adds 2%, and applies it to the child SKU. It's all working well except I'm not sure how to handle it when a parent has more than one child: it updates the first and sets the rest to NULL. Enclosed is the sample data + query.
The query below brings up the set of data shown below it. All items are valued at $9.99. If a row has any information in the column "ShadowOf", I want to update that row: it should take the price from the row whose SKU appears in ShadowOf and add 2%. To give an example, 'GreenApples' is a shadow of 'BoxOfApples'. 'BoxOfApples' is valued at $9.99, so I want to make 'GreenApples' $10.19. How would I do this?
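A self-join keyed on the parent SKU should update every child in one pass, however many children a parent has. A sketch; the table and column names (dbo.Products, Sku, ShadowOf, Price) are assumptions:

UPDATE child
SET child.Price = ROUND(parent.Price * 1.02, 2)   -- parent price plus 2%
FROM dbo.Products AS child
JOIN dbo.Products AS parent
    ON parent.Sku = child.ShadowOf
WHERE child.ShadowOf IS NOT NULL
  AND child.ShadowOf <> '';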
We are developing a database in SQL Server and we are trialling some of the typical analysis undertaken on our dataset.
I have a problem with an update function. The table has the following columns:
ID, Direction, Holiday, Lat, Long, Speed
obstime - datestamp
LicenseID - varchar(7)
status - int (0 or 1): 0 = unoccupied, 1 = occupied
Pickup - boolean
Dropoff - boolean
I am trying to update 'Pickup' or 'Dropoff' when the status changes from 0 to 1 or from 1 to 0, provided the difference between the two datestamps is less than 2 minutes. Pickup is when the status goes from 0 to 1; drop-off is when the status goes from 1 to 0.
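On SQL Server 2012 or later, LAG() gives the previous observation per vehicle, which makes the status-change test straightforward. A sketch; the table name (dbo.Observations) and the assumption that ID uniquely identifies a row are mine:

WITH ordered AS (
    SELECT ID,
           status,
           obstime,
           LAG(status)  OVER (PARTITION BY LicenseID ORDER BY obstime) AS prev_status,
           LAG(obstime) OVER (PARTITION BY LicenseID ORDER BY obstime) AS prev_obstime
    FROM dbo.Observations
)
UPDATE o
SET Pickup  = CASE WHEN ord.prev_status = 0 AND ord.status = 1 THEN 1 ELSE o.Pickup  END,
    Dropoff = CASE WHEN ord.prev_status = 1 AND ord.status = 0 THEN 1 ELSE o.Dropoff END
FROM dbo.Observations AS o
JOIN ordered AS ord
    ON ord.ID = o.ID
WHERE DATEDIFF(second, ord.prev_obstime, ord.obstime) < 120;   -- less than 2 minutes apart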
Here is my situation: I have two tables in an MS SQL DB. One table has dollar amounts and service codes. I have a second table that I want to move some information into from the first table. The catch is I want to move one field as-is from the first table to the second, but the rest of the fields in the second table are calculations based on fields in the first table.
The first table is called XFILE. It has fields SVCCODE, PRICE, DWAGES, DMATLS, etc. The second table has the same field names, and I want to move the SVCCODE from XFILE to Cost_Percent with no changes. For DWAGES in the Cost_Percent table I want to do the following calculation: [XFILE.DWAGE] divided by [XFILE.PRICE], and put the result in the Cost_Percent table's DWAGES field. So basically I am putting a percentage in the Cost_Percent table. I can move the data from one table to another OK, but I cannot figure out how to write the query in Query Analyzer to do this.
I am running SQL 2000 Standard on a Win2K3 server. I am using Query Analyzer and SQL Enterprise Manager from an XP Pro workstation. I have looked in Books Online for the answer, but I am somewhat new to SQL and can't find the answer that I am sure is staring me in the face. Thanks in advance for any help.
Mike Charney
m charney at dunlap hospital dot org
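A set-based INSERT ... SELECT should cover this; the sketch below uses the column names from the post (DWAGE in the description versus DWAGES in the field list, so double-check which is real) and guards against a zero PRICE with NULLIF:

INSERT INTO dbo.Cost_Percent (SVCCODE, DWAGES, DMATLS)
SELECT SVCCODE,
       DWAGES / NULLIF(PRICE, 0),   -- wage cost as a fraction of price
       DMATLS / NULLIF(PRICE, 0)    -- materials cost as a fraction of price
FROM dbo.XFILE;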
I need to update the status of a client when they make a payment of a certain amount. My problem is this: the two pieces of information needed to do this come from two tables. For example:

@ClientID Int, @PmtAmt Money

IF @PmtAmt >= tblSettings.TopAmt THEN
    Update tblClients SET ClientStatus='High' WHERE ClientID=@ClientID
ELSE
    Update tblClients SET ClientStatus='Medium' WHERE ClientID=@ClientID
ENDIF

How do I do this in a stored procedure? I need to select the TopAmt from the table tblSettings and then update the table tblClients.
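A sketch of how the stored procedure might look, assuming tblSettings holds a single TopAmt value:

CREATE PROCEDURE dbo.UpdateClientStatus
    @ClientID int,
    @PmtAmt   money
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @TopAmt money;
    SELECT @TopAmt = TopAmt FROM dbo.tblSettings;

    UPDATE dbo.tblClients
    SET ClientStatus = CASE WHEN @PmtAmt >= @TopAmt THEN 'High' ELSE 'Medium' END
    WHERE ClientID = @ClientID;
END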
I want to update the startdate column (for all rows) so that when period is 0 the new value is a hardcoded value (say '01-Dec-2000'), but for all other rows it takes the value in the enddate column of the row for the previous period (with the same freq).
i.e. the startdate column for period 1 takes the enddate value for period 0, and so on, for a particular freq.
create table #periods (period int, startdate datetime, [enddate] datetime, freq int)

insert #periods (period, startdate, enddate, freq)
select 0, '01-Jan-1900', '31-Jan-2001', 1 union all
select 1, '01-Jan-1900', '28-Feb-2001', 1 union all
select 2, '01-Jan-1900', '31-Mar-2001', 1 union all
select 3, '01-Jan-1900', '30-Apr-2001', 1 union all
select 4, '01-Jan-1900', '31-May-2001', 1 union all
select 0, '01-Jan-1900', '31-Jan-2002', 3 union all
select 1, '01-Jan-1900', '28-Feb-2002', 3 union all
select 2, '01-Jan-1900', '31-Mar-2002', 3 union all
select 3, '01-Jan-1900', '30-Apr-2002', 3 union all
select 4, '01-Jan-1900', '31-May-2002', 3
I know I need a CASE statement to test for period 0, and to join the table to itself. I have put something together, but it fails for period 0 and updates it to NULL; I think it must be something to do with the join.
This is what I've got so far:
UPDATE PA1
SET PA1.Startdate = CASE WHEN PA2.period = 0
                         THEN '2000-12-01 00:00:00.000'
                         ELSE PA1.Enddate END
FROM #periods AS PA1
JOIN #periods AS PA2
    ON PA1.Freq = PA2.Freq
   AND PA1.Period = PA2.Period + 1
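For comparison, a sketch of a version that hardcodes period 0 and pulls the previous period's enddate for everything else; the LEFT JOIN keeps the period 0 rows (which have no predecessor) in the update, and the CASE tests PA1 rather than the joined row:

UPDATE PA1
SET PA1.startdate = CASE WHEN PA1.period = 0
                         THEN '01-Dec-2000'
                         ELSE PA2.[enddate] END
FROM #periods AS PA1
LEFT JOIN #periods AS PA2
    ON  PA2.freq   = PA1.freq
    AND PA2.period = PA1.period - 1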
Hi all, I was wondering if it is possible to update a table based on information from Excel. Here is what I thought would have worked:
update MyTest
Set acctNumber = '111'
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'Excel 8.0;Database=C:\MyFile.xls;HDR=YES',
                'SELECT * FROM [Sheet1$]')
where [ProductGroup] = 'Hal Butts'
with MyTest (in 'update MyTest') being the table name. It does happen to have the same name as the Excel file, just to rule that out.
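One thing that may help is to alias the OPENROWSET result and join it explicitly to the table being updated, so SQL Server knows which spreadsheet rows drive which table rows. A sketch, with ProductGroup assumed to be the matching column on both sides:

UPDATE t
SET t.acctNumber = '111'
FROM dbo.MyTest AS t
JOIN OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'Excel 8.0;Database=C:\MyFile.xls;HDR=YES',
                'SELECT * FROM [Sheet1$]') AS x
    ON x.ProductGroup = t.ProductGroup
WHERE x.ProductGroup = 'Hal Butts';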
I have the selection all done but am trying to figure out how to apply the following criteria: if column4 < 0 then add column4 to column3 and move 0 to column4; if column3 < 0 then add column3 to column2 and move 0 to column3; if column2 < 0 then add column2 to column1 and move 0 to column2; add column3 to column4; move column2 to column3; move column1 to column2; if column0 > 0 then move column0 to column1 and move 0 to column0, else move 0 to column1.
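Read literally as a sequence, this can be expressed as one UPDATE per step so that each step sees the previous step's results (a single UPDATE would use the pre-update values for every assignment, which is not what the steps describe). The table and column names below are assumptions:

-- Step 1: roll a negative column4 back into column3
UPDATE dbo.Buckets SET column3 = column3 + column4, column4 = 0 WHERE column4 < 0;
-- Step 2: roll a negative column3 back into column2
UPDATE dbo.Buckets SET column2 = column2 + column3, column3 = 0 WHERE column3 < 0;
-- Step 3: roll a negative column2 back into column1
UPDATE dbo.Buckets SET column1 = column1 + column2, column2 = 0 WHERE column2 < 0;
-- Step 4: shift the buckets (each assignment uses the pre-update value of its source column)
UPDATE dbo.Buckets SET column4 = column4 + column3, column3 = column2, column2 = column1;
-- Step 5: column0 moves into column1 only if positive; otherwise column1 becomes 0
UPDATE dbo.Buckets
SET column1 = CASE WHEN column0 > 0 THEN column0 ELSE 0 END,
    column0 = CASE WHEN column0 > 0 THEN 0 ELSE column0 END;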
I have a large website with over 100,000 images and the location of the images are stored in a column (img_url) as below:
/images/imagename.jpg
Because all these images are stored in the same folder it is hard to manage so we want to store each image under a directory based on the 1st letter of the image name, ie:
/images/a/aimage.jpg /images/b/bimage.jpg
I can automate the physical move of the images into the correct directories, but I need a SQL update query that will update the img_url value in each row based on the 1st letter of the image name.
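A sketch of one way to do it with STUFF, assuming the table is called dbo.Images, every value starts with '/images/', and the folder letter should be the lower-cased first letter of the file name:

UPDATE dbo.Images
SET img_url = STUFF(img_url, 9, 0, LOWER(SUBSTRING(img_url, 9, 1)) + '/')   -- insert 'x/' after '/images/'
WHERE img_url LIKE '/images/%'
  AND img_url NOT LIKE '/images/_/%';   -- skip rows that have already been converted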
I have a view based on two tables. Now I want to update that view in such a way that the columns of both tables get updated. Can you suggest what code I should write to update that view?
I have a table with eight (8) fields, including the primary key (rfpid). Three of the fields are foreign keys which take their values from lookup tables; they are int fields (pmid, sectorid, officeid). One of the fields in this table is built by putting together the descriptive field from the lookup table for sector (tblsector). The two other fields that are part of this string are rfpname and rfpid. This creates the following string:
The words rfp and proposals have to be part of the string;
the slashes also have to appear.
current_year would default to the year part of the current date (2008).
The part that has the last two digits of the current year, then an underscore, then the rfpid, should be connected by an underscore to the rfpname. I am at a loss and would greatly appreciate any help.
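The full target string isn't shown above, but the year/underscore/id piece on its own might look something like this sketch (tblrfp and its column list are assumed names):

SELECT r.rfpname + '_'
       + RIGHT(CONVERT(varchar(4), DATEPART(year, GETDATE())), 2) + '_'
       + CONVERT(varchar(10), r.rfpid) AS rfp_string
FROM dbo.tblrfp AS r;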
I'm attempting to update a variable using the Execute SQL Task. I've read a lot of posts on this and it seems reasonably simple, but obviously not. The first time I ran it the variable was updated correctly; I then manually changed the variable value and since then it doesn't work.
I have a task with the following properties;
Resultset: SingleRow
SQLStatement: SELECT MAX(Player_Daily_Data_Pull_Date) as 'PlayerDaily' From Job_Control
On the result set tab I have the result set name = PlayerDaily and the variable I want to update.
The variable has a type of DateTime and its scope is the container within which I'm running the SQL task.
I am running an update statement in an Execute SQL Task; it will run one time but then fails after that. What's going on?
Here is the query I'm running:
UPDATE encounter
SET mrn = r.mrn,
    resourcecode = r.resourcecode,
    resgroup = r.resgroup,
    apptdate = r.apptdate,
    appttime = r.appttime
FROM SCFDBWH.SigSched.dbo.EncounterNARaw AS r
INNER JOIN encounter
    ON encounter.encounter_number = r.Encounter
What I'm doing is pulling info from a flat file and inserting it into the encounter table; I then run this SQL statement to pull additional info from another table on another server and update the encounter table with the corresponding info. Pretty straightforward. But what is really getting me is that the package will run once, yet if I wait ten minutes and try running it again, it bombs. Any ideas?