Hi All. How do I loop over an XML parameter to extract its values and insert them into a table along with other input?
For example, if the XML is <Id>1</Id> <Id>2</Id> <Id>3</Id>
I want to insert values into the table as (1, data2, data3), (2, data2, data3) and (3, data2, data3).
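A set-based alternative to looping is to shred the XML with the nodes() and value() methods; here is a minimal sketch, assuming the parameter is of type xml and a hypothetical target table dbo.Target(Id, Col2, Col3):

DECLARE @xml xml = N'<Id>1</Id><Id>2</Id><Id>3</Id>';
DECLARE @data2 varchar(50) = 'data2', @data3 varchar(50) = 'data3';

INSERT INTO dbo.Target (Id, Col2, Col3)
SELECT n.value('.', 'int'),  -- the value of each <Id> element
       @data2,               -- the other inputs, repeated on every row
       @data3
FROM @xml.nodes('/Id') AS t(n);

No explicit loop is needed: nodes('/Id') already returns one row per <Id> element.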
Out of one messed-up table I need to create two related tables. I am stuck trying to get some of the columns from the messed-up table into the new table. How can I select only the columns I need and insert them into the new table? Thanks in advance!
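An INSERT ... SELECT with an explicit column list copies just the columns you name; a minimal sketch, with hypothetical table and column names:

INSERT INTO dbo.NewTable (CustomerId, CustomerName)
SELECT CustomerId, CustomerName   -- only the columns you need
FROM dbo.MessedUpTable;

If the new table does not exist yet, SELECT CustomerId, CustomerName INTO dbo.NewTable FROM dbo.MessedUpTable creates it in one step.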
Hi all, I have the following stored procedure:

declare @AreaID int
select @AreaID = 1

select DATEPART(hh, dbo.JobEvent.JobEventDateTime) AS hourbracket, count(ID) as NUMCOUNT
from table1
INNER JOIN table2 ON table1.JobId = table2.JobEvent_JobId
where (table1.Job_AreaId = @AreaID)

There are about 300 AreaIDs, so my question is: do I do a "LOOP", or is that inefficient and does it chew up resources on the server? What would be a nice clean way to do this? Thanks, robby
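Rather than running the query 300 times in a loop, one set-based query grouped by area and hour returns every combination at once; a sketch reusing the (partly anonymized) names from the post:

SELECT Job_AreaId,
       DATEPART(hh, JobEventDateTime) AS hourbracket,
       COUNT(ID) AS NUMCOUNT
FROM table1
INNER JOIN table2 ON table1.JobId = table2.JobEvent_JobId
GROUP BY Job_AreaId, DATEPART(hh, JobEventDateTime)
ORDER BY Job_AreaId, hourbracket;

One scan of the tables instead of 300 is almost always the cheaper option; the caller can then pick out whichever AreaID it needs.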
I am using Report Writer for Ingres II. Is it possible to write a query in a loop? e.g. my table is like this:

time   position
09:01  pos01
09:02  pos03
09:02  pos01
09:04  pos05

Can I loop a query to count the number of times each position occurs in each 30-minute period in a day? I wish to generate a report that goes something like:

       09:00  09:30  10:00
pos01  3      5      3
pos02  0      6      4
pos03  4      3      1
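Counting per half hour usually needs no loop: group on a computed 30-minute bucket. The sketch below is in SQL Server syntax (Ingres date arithmetic differs, so treat it as the shape of the query rather than something to paste in), assuming a hypothetical table readings(time, position) with a datetime column:

SELECT position,
       DATEADD(minute, (DATEDIFF(minute, 0, [time]) / 30) * 30, 0) AS bucket_start,
       COUNT(*) AS occurrences
FROM readings
GROUP BY position, DATEADD(minute, (DATEDIFF(minute, 0, [time]) / 30) * 30, 0)
ORDER BY position, bucket_start;

Each result row is one cell of the report above; laying the buckets out as columns is then a job for the report writer itself.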
I am trying to run a UNION ALL query in SQL Server 2014 on multiple large CSV files, the result of which I want to get into a table in SQL Server. Below is the query, which works in MS Access but not in SQL Server 2014:
SELECT * INTO tbl_ALLCOMBINED
FROM OPENROWSET
(
    'Microsoft.JET.OLEDB.4.0',
    'Text;Database=D:\Downloads\CSV;HDR=YES',
    'SELECT t.*, (substring(t.[week],3,4))*1 as iYEAR, ''SPAIN'' as [sCOUNTRY], ''EURO'' as [sCHAR],
[Code] ....
What i need is:
1] to create the resultant tbl_ALLCOMBINED table
2] transform this table using the PIVOT command, with the transformation shown below:

PAGEFIELD: set on Level = 'Item'
COLUMNFIELD: Sale_Week (showing numbers 1 to 52 as columns)
ROWFIELD: sCOUNTRY, sCHAR, CATEGORY, MANUFACTURER, BRAND, DESCRIPTION, EAN (in this order)
DATAFIELD: 'Sale Value with Innovation'
3] Can the transformed form show more than 255 columns in the columnfield, i.e. if I want to show all KPI values in the datafield?
P.S.: the CSVs contain the same number of columns and the same datatypes, but there are more than 100 columns, so I don't think it is feasible to use a stored proc that creates the table by spelling out that many columns.
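For step 2], the PIVOT would look roughly like the sketch below, shortened here to four week columns (extend both bracketed lists through [52]). SUM as the aggregate and the exact column names are assumptions based on the field list above:

SELECT sCOUNTRY, sCHAR, CATEGORY, MANUFACTURER, BRAND, DESCRIPTION, EAN,
       [1], [2], [3], [4]   -- continue the pattern through [52]
FROM
(
    SELECT sCOUNTRY, sCHAR, CATEGORY, MANUFACTURER, BRAND, DESCRIPTION, EAN,
           Sale_Week, [Sale Value with Innovation]
    FROM tbl_ALLCOMBINED
    WHERE [Level] = 'Item'
) AS src
PIVOT
(
    SUM([Sale Value with Innovation])
    FOR Sale_Week IN ([1], [2], [3], [4])   -- through [52]
) AS p;

On question 3]: the 255-column ceiling is an Access/Excel limit, not a SQL Server one; a SELECT can return up to 4,096 columns (an ordinary table can hold 1,024), so a pivot wider than 255 columns is possible, although client tools may impose their own caps.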
Hi, I'm probably missing something obvious (either that or doing this totally wrong). I'm trying to use a nested loop to generate the following results:

Unit   Day1  Day2  Day3  Day4  Day5
Name1  25    45    89    54    76
Name2  48    54    81    74    98

What I have so far is this:

WHILE @FCount < @TotalFoodUnits
BEGIN
    SELECT (SELECT Unit FROM tbl_acc_FoodVenues WHERE UnitID = (@FCount + 1)) AS Unit
    WHILE @FDCount < @Days
    BEGIN
        SELECT (SELECT FdRevenue_a FROM tbl_acc_aud_SportsAudits WHERE AudDate = DATEADD(day, @FDCount, @pdStartDate)) AS Rev
        SET @FDCount = @FDCount + 1
    END
    SET @FCount = @FCount + 1
END

Any suggestions please?
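Two things stand out in the loop version: every SELECT returns its own one-column result set rather than building up a row, and @FDCount is never reset to 0 between units, so the inner loop only runs for the first unit. A set-based cross-tab sidesteps both; a sketch using the table names from the post, with @pdStartDate in scope as above (the join column between venues and audits is an assumption, since the post does not show how the two tables relate):

SELECT v.Unit,
       SUM(CASE WHEN a.AudDate = @pdStartDate                  THEN a.FdRevenue_a END) AS Day1,
       SUM(CASE WHEN a.AudDate = DATEADD(day, 1, @pdStartDate) THEN a.FdRevenue_a END) AS Day2,
       SUM(CASE WHEN a.AudDate = DATEADD(day, 2, @pdStartDate) THEN a.FdRevenue_a END) AS Day3,
       SUM(CASE WHEN a.AudDate = DATEADD(day, 3, @pdStartDate) THEN a.FdRevenue_a END) AS Day4,
       SUM(CASE WHEN a.AudDate = DATEADD(day, 4, @pdStartDate) THEN a.FdRevenue_a END) AS Day5
FROM tbl_acc_FoodVenues AS v
LEFT JOIN tbl_acc_aud_SportsAudits AS a
       ON a.UnitID = v.UnitID   -- hypothetical link column
      AND a.AudDate BETWEEN @pdStartDate AND DATEADD(day, 4, @pdStartDate)
GROUP BY v.Unit;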
I am quite new to query building, so I would like to know where I can find some useful information about IF, FOR, ... clauses to use in Query Analyzer. I would like to set a datetime variable and use it in a loop to get results for each day in a selected month. For example:
SELECT Uporabnik AS Expr1, MIN(Ura) AS Expr3, MAX(Ura) AS Expr5, COUNT(*) AS Expr4, Datum AS Expr2
FROM BatchIndeksNEW
GROUP BY Uporabnik, Datum
HAVING (Datum = CONVERT(DATETIME, '2001-11-30 00:00:00', 102))
ORDER BY COUNT(*) DESC
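The Books Online section to read is "Control-of-Flow Language" (T-SQL has IF and WHILE, but no FOR). For this particular query, though, no loop is needed: replacing the one-day HAVING filter with a month range returns every day at once. A sketch of both approaches, using the names from the query above:

-- Option 1: one pass, all days of November 2001
SELECT Uporabnik, Datum, MIN(Ura), MAX(Ura), COUNT(*)
FROM BatchIndeksNEW
WHERE Datum >= '20011101' AND Datum < '20011201'
GROUP BY Uporabnik, Datum
ORDER BY COUNT(*) DESC

-- Option 2: an explicit day-by-day WHILE loop
DECLARE @d datetime
SET @d = '20011101'
WHILE @d < '20011201'
BEGIN
    SELECT Uporabnik, MIN(Ura), MAX(Ura), COUNT(*)
    FROM BatchIndeksNEW
    WHERE Datum = @d
    GROUP BY Uporabnik
    SET @d = DATEADD(day, 1, @d)
END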
I'm a relative novice at using SQL and I'm struggling with a particular problem. I'm not sure if this is the appropriate place to post this sort of thing, but after much fruitless research I need to try and enlist someone's help.
I have a table of pay dates with a paid amount. I want to write a statement that creates a table breaking those out into the daily amount paid: a row for each date in the pay period and the amount paid that day.

For example: my existing table has an employee, Bob, who was paid $1000.00 on January 14. I then want to create a table that has records for January 1-14 with $71.43 for each date. I think I need some kind of loop for the whole thing to run through, but I can't get the syntax right. I've thought about creating a temp table with all of the dates in the pay period, but again, to create that temp table I feel like I need some sort of WHILE loop that creates the rows for each date.

Any help or suggestions would be greatly appreciated. I'm at wit's end.
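A recursive CTE can generate the dates of each pay period without a WHILE loop, and the same query spreads the payment across its days; a minimal sketch, assuming a hypothetical source table Pay(Employee, PayDate, Amount) and 14-day periods ending on the pay date:

;WITH Days AS
(
    SELECT Employee, Amount, DATEADD(day, -13, PayDate) AS DayDate, PayDate
    FROM Pay
    UNION ALL
    SELECT Employee, Amount, DATEADD(day, 1, DayDate), PayDate
    FROM Days
    WHERE DayDate < PayDate
)
SELECT Employee,
       DayDate,
       CAST(Amount / 14.0 AS decimal(10,2)) AS DailyAmount   -- $1000.00 / 14 days = $71.43
INTO dbo.DailyPay
FROM Days
OPTION (MAXRECURSION 0);

Each Pay row expands to 14 rows, January 1 through January 14 in the Bob example. If the period length varies, the divisor becomes DATEDIFF(day, ...) + 1 instead of the constant 14.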
@strSql is a dynamic query in a WHILE loop which can return a single value, a single row, or multiple rows.
So only if it returns a single value do I store it as is; otherwise I convert to XML before storing.
1) If it returns one record (1 row, 1 column), save it as is.
2) If it returns one row with more than one column, convert it to XML before saving.
3) If it returns multiple rows, convert them to XML before saving.
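One way to implement this without knowing the shape of the result in advance is to always capture it as XML and then inspect it; a sketch, assuming @strSql holds a single SELECT statement:

DECLARE @x xml, @wrapped nvarchar(max);

-- wrap the dynamic query so its whole result set lands in one xml value
SET @wrapped = N'SELECT @x = (SELECT * FROM (' + @strSql + N') AS q FOR XML PATH(''row''), TYPE)';
EXEC sp_executesql @wrapped, N'@x xml OUTPUT', @x = @x OUTPUT;

IF @x.value('count(/row)', 'int') = 1 AND @x.value('count(/row/*)', 'int') = 1
    SELECT @x.value('(/row/*)[1]', 'nvarchar(max)') AS SingleValue;  -- case 1: store the bare value
ELSE
    SELECT @x AS XmlValue;                                           -- cases 2 and 3: store the XML

FOR XML requires every column to be named, which is worth enforcing in the dynamic queries anyway.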
I have to pull values from a MySQL table, then loop through the result set, using each value in an MSSQL query as shown below. I also have an array ($all_lobs[]) of about 100 values that must be looped through for each value pulled from the MySQL table:
$tod = date("n/j/Y", time());

// this is pulling the data from the mysql table
$query2 = "select CustId from prospect_CustId";
$result2 = mysql_query($query2, $link_id_mysql);
while ($custs = mysql_fetch_row($result2))
[Code] .....
If I pull only 100 records from the mysql table, this takes about 1 second to run. However, if I pull 200 records, it takes about 60 secs to run. And, if I pull 300 records, it takes about 200 secs to run. After about 500 records, it takes almost a second per record to run!
Since I have 20,000+ records to pull, this takes hours...
Unfortunately, we are not allowed to modify the mssql tables at all, only query them.
I currently have an ASP script that generates a 12-month rolling report. From ASP I run a for loop with 12 iterations, each one sending the following query:
select count(a.aReportDate) as ttl
from findings f
left outer join audits a on a.aID = f.auditID
where f.findingInvalid <> 1
and month(aReportDate) = " & Mo & "
and year(aReportDate) = " & Yr
where the Mo and Yr variables are incremented accordingly.
I actually have 4 sets of data being pulled back to populate a graph, so this results in 48 queries with each page load! Obviously not ideal, so I'm hoping to reduce it to 4 queries. I was playing with the following in Enterprise Manager:
DECLARE @DT DATETIME
DECLARE @CNT INT
SET @DT = '10/31/07'
SET @CNT = 1
WHILE (@CNT <= 12)   -- <= so all 12 months are covered
BEGIN
    select count(a.aReportDate) as ttl
    from findings f
    left outer join audits a on a.aID = f.auditID
    where f.findingInvalid <> 1
    and month(aReportDate) = month(@DT)
    and year(aReportDate) = year(@DT)

    SET @DT = DATEADD(month, -1, @DT)   -- step back one month per iteration
    SET @CNT = @CNT + 1
END
My concern is that this returns 12 separate result sets. Is there any way to combine it all into one result set that can be passed back to my ASP script? Hopefully this makes sense?
Suggestions on a completely different approach would also be welcome.
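Grouping by year and month collapses the loop into a single result set, so each data set becomes one query (4 per page load instead of 48); a sketch using the tables from the post, with the window dates assumed from the 10/31/07 anchor above:

select year(a.aReportDate) as yr,
       month(a.aReportDate) as mo,
       count(a.aReportDate) as ttl
from findings f
left outer join audits a on a.aID = f.auditID
where f.findingInvalid <> 1
  and a.aReportDate >= '11/01/06' and a.aReportDate < '11/01/07'   -- the rolling 12-month window
group by year(a.aReportDate), month(a.aReportDate)
order by yr, mo

The ASP script then walks 12 rows of one result set instead of issuing 12 queries.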
I want to make an SP to update the Product table with information I get from the Orderdetail table.

Create Procedure UpdateVoorraad @OrderId int
As
Select ProductId, Tal From Orderdetail where OrderId = @OrderId

-- this query gets info from table Orderdetail: ProductId (integer) and Tal (smallint)
-- Tal = Number of Products
-- Here I want to loop through the query above
-- and for each record in the query I want to update
-- table Product.
Update Product Set Product.Voorraad = Product.Voorraad - Tal where ProductId = ProductId
To do this, must I create a temporary table, store the query result in it, and loop through it updating table Product, or can I manage without a temporary table?
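No loop and no temporary table are needed: a single UPDATE joined to Orderdetail applies every line of the order at once. A sketch of the whole procedure, using the names from the post and assuming one Orderdetail row per product per order:

Create Procedure UpdateVoorraad @OrderId int
As
    Update p
    Set p.Voorraad = p.Voorraad - d.Tal
    From Product As p
    Inner Join Orderdetail As d On d.ProductId = p.ProductId
    Where d.OrderId = @OrderId

Note the table aliases: in the draft above, "where ProductId = ProductId" compares the column with itself and is true on every row, so both sides of the join condition have to be qualified explicitly.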
I need to create a query to list all the subfolders within a folder.
I have a database table that lists the usual properties of each folder.
I have another database table that has two columns
1. Parent folder 2. Child folder
But this table maintains the parent child relationship only to one level.
For example, suppose folder X has subfolders Y and Z, Y has subfolders A and B, B has subfolders C and D, and C has subfolders E and F.
The database table will look like:

parentfolder  childfolder
X             Y
X             Z
Y             A
Y             B
B             C
B             D
C             E
C             F
I want to write a query which takes a folder name as input and gives me a list of all the folders and subfolders under it. The query should be based on the parent-child table, and there should be no restriction on how many subfolder levels it can search and report.

I have been banging my head trying to do this but have failed so far. Any help with this will be highly appreciated.
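A recursive CTE walks the hierarchy to any depth; a minimal sketch, assuming the parent-child table is called dbo.FolderTree(parentfolder, childfolder):

DECLARE @root varchar(50)
SET @root = 'X'

;WITH Sub AS
(
    SELECT childfolder, 1 AS lvl
    FROM dbo.FolderTree
    WHERE parentfolder = @root
    UNION ALL
    SELECT f.childfolder, s.lvl + 1
    FROM dbo.FolderTree AS f
    INNER JOIN Sub AS s ON f.parentfolder = s.childfolder
)
SELECT childfolder, lvl
FROM Sub
ORDER BY lvl, childfolder;

For @root = 'X' this returns Y, Z, A, B, C, D, E and F with their depths. Recursive CTEs need SQL Server 2005 or later; the default limit of 100 levels can be lifted with OPTION (MAXRECURSION 0).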
I have created a data flow that uses an OLE DB source with a SQL query. The WHERE clause of this query has a condition on the store code. I want to figure out how to create a loop that cycles through a list of store codes using a variable, which is passed to the data flow and in turn to the OLE DB source's query, so the process runs once per store.

The reason I'm using a loop is that there are about 15 million records being merge-joined with 15 million others, which is causing a huge performance problem. I'm hoping that looping the process one store at a time will be faster. Any ideas would be greatly appreciated.
Hi, I've written a job to export user and database permissions for all d/b's on a server. As you can see below, the T-SQL commands are the same for each d/b. Can anyone assist with regard to rewriting this so that any newly added d/b's do not require amending the job (loop)? Thx, GC.

use master
go
SELECT db_name()
EXEC sp_helpuser
EXEC sp_helprotect NULL, NULL, NULL, 'o s'

use msdb
go
SELECT db_name()
EXEC sp_helpuser
EXEC sp_helprotect NULL, NULL, NULL, 'o s'

use test1
go
SELECT db_name()
EXEC sp_helpuser
EXEC sp_helprotect NULL, NULL, NULL, 'o s'

use test2
go
SELECT db_name()
EXEC sp_helpuser
EXEC sp_helprotect NULL, NULL, NULL, 'o s'
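One loop-free option is sp_MSforeachdb, which runs a command once per database, substituting ? with each database name. It is undocumented (though long-standing), so treat this as a sketch:

EXEC sp_MSforeachdb '
USE [?]
SELECT db_name()
EXEC sp_helpuser
EXEC sp_helprotect NULL, NULL, NULL, ''o s''
'

The same effect can be had with a documented approach: a cursor over the database names in master..sysdatabases, building and EXECing the three statements per name.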
SELECT [Patient Identifier], Date, [Operator Index], Time
FROM (
    SELECT ISNULL(t9.[Patient Identifier], t8.[Patient Identifier]) AS [Patient Identifier],
           ISNULL(t9.Date, t8.Date) AS Date,
           ISNULL(t9.Rows, t8.Rows) AS Rows,
           c.[Operator Index], c.Time,
           ROW_NUMBER() OVER (PARTITION BY ISNULL(t9.[Patient Identifier], t8.[Patient Identifier]), ISNULL(t9.Date, t8.Date)
                              ORDER BY c.Time) AS RowNum
    FROM (
        SELECT [Patient Identifier], Date, 2 AS [Rows]
        FROM [First Step]
        WHERE [Operator Index] >= 90
        GROUP BY [Patient Identifier], Date
        HAVING COUNT(*) >= 2
    ) AS t9
    FULL JOIN (
        SELECT [Patient Identifier], Date, 4 AS [Rows]
        FROM [First Step]
        WHERE [Operator Index] >= 80
        GROUP BY [Patient Identifier], Date
        HAVING COUNT(*) >= 4
    ) AS t8
        ON t8.[Patient Identifier] = t9.[Patient Identifier] AND t8.Date = t9.Date
    INNER JOIN Complete AS c
        ON c.[Patient Identifier] = ISNULL(t9.[Patient Identifier], t8.[Patient Identifier])
       AND c.Date = ISNULL(t9.Date, t8.Date)
) AS d
WHERE d.RowNum <= d.[Rows]
Current Input:

Patient ID   Date       Time   Operator Index
51700003     18OCT2006  11:48  91
51700003     18OCT2006  11:50  100
51700004     17OCT2006  11:41  89
51700004     17OCT2006  11:50  93
51700004     17OCT2006  11:52  91
51700004     17OCT2006  12:00  93

Current Output:

Patient ID   Date       Time   Operator Index
0517_00003   18OCT2006  11:48  91
0517_00003   18OCT2006  11:50  100
0517_00004   17OCT2006  11:41  89
0517_00004   17OCT2006  11:50  93

It should be:

Patient ID   Date       Time   Operator Index
51700003     18OCT2006  11:48  91
51700003     18OCT2006  11:50  100
51700004     17OCT2006  11:50  93
51700004     17OCT2006  11:52  91
The data is organized by patient ID, date, time (ascending). For a given patient ID, on a certain date, testing was performed; a value between 80 and 100 is acceptable data. I need either the first 2 tests with a score above 90, or the first 4 tests above 80. (The tests are sorted by time because the testing is time dependent.) On some occasions there is just too much data. What is wrong with my current query?
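One plausible cause, judging from the expected output: ROW_NUMBER() is computed over every Complete row for the patient/date, so a test below the applicable threshold (the 11:41 score of 89) still consumes one of the numbered slots. If so, filtering the join to Complete by the threshold that was met would fix it; a sketch of just the changed join (the rest of the query stays as is):

INNER JOIN Complete AS c
    ON c.[Patient Identifier] = ISNULL(t9.[Patient Identifier], t8.[Patient Identifier])
   AND c.Date = ISNULL(t9.Date, t8.Date)
   -- only let qualifying tests take part in the numbering:
   AND c.[Operator Index] >= CASE WHEN t9.[Patient Identifier] IS NOT NULL THEN 90 ELSE 80 END

With this, patient 51700004's 11:41/89 row no longer counts toward the first two tests at 90 or above.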
I have a report which gets its parameters from an ASP.NET page. My ASP developer wants to send in simple values, such as the list 1,2,3,4, for a parameter. However, my report needs that list to look like [CD RSRC].[RSRC].&[1], [CD RSRC].[RSRC].&[2], [CD RSRC].[RSRC].&[3], [CD RSRC].[RSRC].&[4].
Is there any way, on the report services side, to capture an incoming report parameter, parse it, loop over the parsed values and format them?
I don't think there is, but I wanted to check before I go back to the developer and tell him he has to send in tuple lists.
For each month of the year 2014, say, I want to create a loop that will enter data into a table.
Right now I have:
Insert into [Receipts 2014/01/01]
Select [Member Number],
       sum(case when [Receipt Date] = '2014/01/01' then Amount else 0 end) as [Rec 2014/01/01]
From [Receipts Table]
Group by [Member Number]
[Code] ....
Instead I would just like to do something like…
Declare @i date
For @i = '2014/01/01' to '2014/12/01'
    Insert into [Receipts + @i]
    Select [Member Number],
           sum(case when [Receipt Date] = @i then Amount else 0 end) as [Rec + @i]
    From [Receipts Table]
    Group by [Member Number]
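T-SQL has no FOR loop, and neither table names nor column aliases can be variables directly, so the month loop needs WHILE plus dynamic SQL; a sketch, assuming the monthly target tables already exist and follow the [Receipts yyyy/mm/dd] naming used above:

DECLARE @i date = '2014-01-01';
DECLARE @s char(10), @sql nvarchar(max);

WHILE @i <= '2014-12-01'
BEGIN
    SET @s = CONVERT(char(4), YEAR(@i)) + '/' + RIGHT('0' + CONVERT(varchar(2), MONTH(@i)), 2) + '/01';

    SET @sql = N'Insert into [Receipts ' + @s + N']
                 Select [Member Number],
                        sum(case when [Receipt Date] = @d then Amount else 0 end) as [Rec ' + @s + N']
                 From [Receipts Table]
                 Group by [Member Number]';

    EXEC sp_executesql @sql, N'@d date', @d = @i;

    SET @i = DATEADD(month, 1, @i);
END

That said, twelve one-column-per-month tables are usually easier to handle as a single table with a month column, or as one PIVOT query.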
In my Control Flow, I execute a data flow that opens a flat file and loads it into a recordset.
Back to my Control Flow, I have a ForEach container that uses a ForEach ADO Enumerator. Inside the ForEach, I execute an "Execute SQL Task" that updates a table.
This is where I'm confused: while in the ForEach, I also want to call a data flow and use the current record (the record in my ForEach) to perform several lookup tasks. Unfortunately, I'm not sure how to use the ForEach record as a source in my data flow... What am I missing?
if the value of a particular cell in the table has changed since the last poll, then initiate the second task.

2. Do a select query that picks about 10,000 new rows off another db table; the 10,000 rows should then be stored in an in-memory dataset. Every time the poll initiates a new select query, it should insert the new rows into the existing in-memory dataset. Thus if the select runs 2 times in 2 minutes, the in-memory dataset would contain a maximum of 20,000 rows.

3. Then I want to apply a set of transformations on the dataset and finally update some db tables and push some records to the SSAS database (push-mode incremental processing).
Which of these sub-tasks can be achieved and which cannot? Where they cannot, is there a workaround? Please do provide some specific links that accomplish similar tasks.

I have tested some functionality, like doing a full process of an SSAS database, and reading from a database table and inserting into a flat file.

I tried to use the Execute SQL Task, and I assigned the result to a user variable. The execution completed successfully, but I am not able to see the value of the variable change, so I cannot use the variable to detect a change from the previous value and thus initiate a SQL select, or use the variable to do anything else.
My insert statement for #Data: I only need to process each @EmployeeID one time, which is why I thought a loop would be sufficient, but I let this process run for 2 hours and it still had not completed, so I feel I must have set something up incorrectly!

This is my syntax. I am creating a table of active users, then wanting to get data for each of those active users, but only get the data for each active user one time.
Declare @EmployeeID varchar(50)

CREATE TABLE #ActiveUsers
(
    ID INT IDENTITY NOT NULL,
    EmployeeID varchar(50),
    processed int
)

Create Table #Data
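The usual pitfall with this pattern is a loop whose driving condition never changes, which then runs forever. A sketch of a loop that walks #ActiveUsers by its IDENTITY value and touches each employee exactly once (the #Data and source columns are hypothetical, since the definition is cut off above):

DECLARE @ID int, @MaxID int;
SELECT @ID = 1, @MaxID = MAX(ID) FROM #ActiveUsers;

WHILE @ID <= @MaxID
BEGIN
    SELECT @EmployeeID = EmployeeID FROM #ActiveUsers WHERE ID = @ID;

    INSERT INTO #Data (EmployeeID, SomeValue)   -- hypothetical columns
    SELECT EmployeeID, SomeValue
    FROM dbo.SourceTable                        -- hypothetical source
    WHERE EmployeeID = @EmployeeID;

    UPDATE #ActiveUsers SET processed = 1 WHERE ID = @ID;
    SET @ID = @ID + 1;                          -- without this the loop never ends
END

If the per-employee work is a plain SELECT like this, a single INSERT ... SELECT joined to #ActiveUsers would avoid the loop entirely and usually runs far faster.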
All: I am sure I am missing something really silly, but I am not able to figure out what. The ForEach loop uses an ADO enumerator and passes variable values to a data flow. When I execute the package, the loop runs fine but nothing happens in the data flow. When I move the data flow out of the loop, it runs fine. What is going on?
I am trying to build a package that selects a list of uniqueidentifiers with an Execute SQL Task and then loops through the result set of the query using the ADO.NET enumerator, to do something with each GUID value, namely deleting all entries with that PK in a different database.

The main problem I am facing is that you can't select the type GUID for a user variable in the Execute SQL Task. All the data types are there except GUID!
This leads to the following error:
Error: 0xC001F009 at Package: The type of the value being assigned to variable "User::ImagicID" differs from the current variable type. Variables may not change type during execution. Variable types are strict, except for variables of type Object.
Is this a bug? Or am I doing something terribly wrong?
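A common workaround rather than a fix: have the query hand the GUID back as a string, map it to a String variable, and let the target statement convert it back. A sketch of the T-SQL on both sides (the table and column names are made up, and @Id stands for the mapped package variable):

-- in the Execute SQL Task that builds the result set:
SELECT CAST(ImagicID AS nvarchar(36)) AS ImagicID
FROM dbo.SourceTable;

-- in the per-row delete inside the loop:
DELETE FROM dbo.TargetTable
WHERE ImagicID = CAST(@Id AS uniqueidentifier);

Alternatively, a variable of type Object can hold the whole result set for the ForEach ADO enumerator without any per-value typing.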
Hi, I am attempting to create SSIS packages to extract data from our AS/400 DB2 databases and populate tables for analysis and reporting in SQL Server. We have recently installed a new 2005 SQL Server to replace our existing 2000 databases. I have several DTS packages set up to connect to the 400s and copy the needed data to SQL 2000, and these work great but do not transition to SSIS very well. I can get my SSIS packages to work if I create a separate data flow task for each table, but that does not seem appropriate for a tool like SSIS. I would think that what I am attempting is a very common thing to do. Maybe other enumerators could be used, but I have not been able to get this to work.
I am trying to use a ForEach loop, currently with a ForEach Item enumerator, to first truncate the table in the SQL Server database using an Execute SQL Task, and then to use a Data Flow task to refresh the data using an OLE DB source and destination. I have the truncate portion working, but when I try to set up the OLE DB source using a variable, it tells me I have no input columns. I have a variable from the ForEach loop that loops through the table names; this variable is used in a second variable which is set to EvaluateAsExpression, with the expression set to the following: "select * from " + @table_name

I have set DelayValidation to true and ValidateExternalMetadata to false. What do I do to get around this issue? Also, how should the OLE DB destination be set so that it loads the data into the table specified in the variable?
Inside a ForEach loop (looping through an ADO recordset of objects to import) I have a data flow task (along with many other processes)... If the data flow task succeeds, I log success in a table. If it errors, I want the data flow task to fail (which will fire my event handler for that data flow and log the failure, send email, etc.), BUT I want the loop to continue; I can't seem to figure out how to keep the data flow object from failing the whole loop. If any other object inside the ForEach, other than the data flow, fails, I would like the whole loop to fail. Also, if possible (but not a requirement), I would like a threshold where, if the data flow fails X number of times, it fails the package.

I am having difficulty with how to keep the loop going when the data import fails... just looking for a simple "on error resume next" type of logic for that specific object in the ForEach but not the rest. Thanks in advance for the help/advice.