Transact SQL :: Iterating Through A Schedule With Recursions And Factors?
Jul 30, 2015
I have a scenario in which a schedule is recorded like the top table below. Notice the start and end times, the meeting length, and the fact that you could book more than one meeting (book factor) during the time slot. The second table is the result needed. I have it working using the dreaded cursor, but I know there has to be a more elegant solution.
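For what it's worth, the usual set-based alternative is to cross join the schedule against a numbers (tally) table so that every slot/booking combination becomes its own row. A minimal sketch, assuming a hypothetical dbo.Schedule table with StartTime, EndTime, MeetingLen (minutes), and BookFactor columns:

-- Expand each time slot into (meetings per slot) x (book factor) rows.
-- All table and column names here are assumptions, not the poster's schema.
;WITH Numbers AS (
    SELECT TOP (1000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS n
    FROM sys.all_objects
)
SELECT s.ScheduleID,
       DATEADD(MINUTE, (n.n / s.BookFactor) * s.MeetingLen, s.StartTime) AS MeetingStart,
       (n.n % s.BookFactor) + 1 AS BookSlot
FROM dbo.Schedule AS s
JOIN Numbers AS n
  ON n.n < s.BookFactor * (DATEDIFF(MINUTE, s.StartTime, s.EndTime) / s.MeetingLen)
ORDER BY s.ScheduleID, MeetingStart, BookSlot;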
Below is the query I made, which is working fine and giving me the SQL output. All I need now is to convert the output to Excel and automate the job scheduling so that it runs every day and sends the mail with an attachment.
SELECT DN, cn, displayName, mail, objectClass, sAMAccountName, Company, givenName, sn
FROM (
    SELECT DN, cn, displayName, mail, objectClass, sAMAccountName, Company, givenName, sn, 1 [ordering]
    FROM alpha.dbo.DCADFeed
    WHERE sAMAccountName COLLATE SQL_Latin1_General_CP1_CI_AS IN
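Assuming Database Mail is configured, one common route is to let msdb.dbo.sp_send_dbmail attach the query results as a comma-delimited file that Excel opens directly, and to put that call in a daily SQL Server Agent job. A sketch; the profile name and recipient are placeholders:

-- Email the query results as a CSV attachment via Database Mail.
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'DefaultProfile',      -- hypothetical mail profile
    @recipients   = 'someone@example.com', -- placeholder recipient
    @subject      = 'Daily DCADFeed extract',
    @query        = 'SELECT DN, cn, displayName, mail FROM alpha.dbo.DCADFeed',
    @attach_query_result_as_file = 1,
    @query_attachment_filename   = 'DCADFeed.csv',
    @query_result_separator      = ',';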
I am new to SQL Server 2000. I am eager to learn which factors/parameters are key to obtaining good retrieval performance from SQL Server 2000 (prompt responses to user queries).
I recall someone telling me that a recordset with the adOpenStatic cursor type is faster than a recordset with other cursor types.
Is this true or false? Are there really some key parameters for performance tuning?
I'm currently working on a project at work to test the effects of database compression, trying to obtain measurable data on the impact of the compression on other server resources, and therefore whether the reduction in space used is worth the extra overhead. This has involved taking a trace of a production customer's workload for a period of time and replaying it against a backup using Distributed replay in synchronised mode.
I'm then taking a trace of that replay, as well as using perfmon to record useful data about the server, before and after compression is enabled. Finally, I'm loading the traces into a tool called Qure to analyse the impact of the compression on reads, writes, CPU, overall duration etc.
What I'm finding is that even across two different 'baseline' runs, which are replaying the exact same workload against the exact same database, performance etc. differs to a significant enough degree that it calls into question the validity of the test. I can only put this down to the fact that this server is on a VM, which is affecting available resources, which in turn affects the execution plans the workload generates and causes different replays of the same workload. I'm therefore looking at doing this on a standalone server, but I still can't be sure the differences will go away.
How can I make tests such as this as consistent as possible across multiple runs, when elements outside of SQL Server are effectively out of my control?
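For reference, the compression step itself is the easy part; the measurement is the hard part. A sketch of estimating and then enabling PAGE compression on a hypothetical table:

-- Estimate the savings first, then rebuild with compression enabled.
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'BigTable',   -- hypothetical table name
    @index_id         = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';

ALTER TABLE dbo.BigTable REBUILD WITH (DATA_COMPRESSION = PAGE);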
I need to estimate the effort required in writing some SSIS packages. Could anyone provide some pointers to the various factors (e.g. number of tables, sources, etc.) which influence the effort estimate for SSIS packages?
Hi, I have six different textboxes in my web application, and six different tables in my database, such as tbl1, tbl2, tbl3, etc. When the user clicks the submit button, I have to check whether the values in the textboxes match the values in the database (if the user enters 3 in txt1, I need to go to tbl1 and check whether there is such a value). What is the most efficient way to perform such a check? Will I need to write six SELECT statements, or can I use a loop? If I can use a loop, I would appreciate an example. Thanks
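One round trip can cover all six checks. A sketch, assuming each table exposes a comparable Value column (the names are placeholders for the real schema):

-- Check all six textbox values in a single SELECT; each MatchN flag is
-- 1 when the corresponding table contains the corresponding textbox value.
SELECT
    CASE WHEN EXISTS (SELECT 1 FROM tbl1 WHERE Value = @txt1) THEN 1 ELSE 0 END AS Match1,
    CASE WHEN EXISTS (SELECT 1 FROM tbl2 WHERE Value = @txt2) THEN 1 ELSE 0 END AS Match2,
    CASE WHEN EXISTS (SELECT 1 FROM tbl3 WHERE Value = @txt3) THEN 1 ELSE 0 END AS Match3,
    CASE WHEN EXISTS (SELECT 1 FROM tbl4 WHERE Value = @txt4) THEN 1 ELSE 0 END AS Match4,
    CASE WHEN EXISTS (SELECT 1 FROM tbl5 WHERE Value = @txt5) THEN 1 ELSE 0 END AS Match5,
    CASE WHEN EXISTS (SELECT 1 FROM tbl6 WHERE Value = @txt6) THEN 1 ELSE 0 END AS Match6;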
I have an SQL task which returns a set of dates, and I would like to iterate over this set, re-assigning the date to a global variable each time (User::CurrentDate), so that I can perform a number of tasks based on this date.
I am wanting to continuously monitor a source table throughout the day and as data becomes available, process it and insert it into one of a number of tables.
I have tried achieving this using a FOR LOOP and setting the halt condition such that it is not satisfiable. However, this has a couple of problems:
1) It runs in a tight loop and consequently degrades system performance enormously.
2) I can't get transactions to work. I would like each iteration of the loop to spawn a new transaction under which the tasks in the loop can run. Therefore, if one of the tasks fails during such an iteration, only the updates affected by that iteration are lost.
Ideally, I would like to be able to put a wait statement within the loop container so that it runs every couple of seconds. And would also like to implement transactions as described above.
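On the T-SQL side, the equivalent pattern is a WHILE loop with WAITFOR DELAY, wrapping each pass in its own transaction so a failure only loses that pass. A sketch with hypothetical source/target tables and a Processed flag:

-- Poll every five seconds; each pass claims and copies rows atomically.
WHILE 1 = 1
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        -- Mark rows processed and copy them to the target in one statement.
        UPDATE dbo.SourceTable
        SET    Processed = 1
        OUTPUT inserted.Col1, inserted.Col2
        INTO   dbo.TargetTable (Col1, Col2)
        WHERE  Processed = 0;
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;  -- only this pass is lost
    END CATCH;
    WAITFOR DELAY '00:00:05';  -- sleep between polls instead of spinning
END;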
Hello everyone, I have a table in which I need to iterate a field, possibly across several rows, when I enter a new record with the same item ID number. An example will make this much clearer.
I don't want to delete any old rows, so that I can keep a history of where each item has been. The iterative column is in reverse order, so that 0 is the newest value (location) and higher numbers are older locations. An item could go through a CurrentLocation several times.
Now, if I insert a row with ItemID = A01 and Current Location = Polishing, I want the Iter field of all previous rows to iterate by +1 and this new row to have Iter = 0.
What would be the easiest, best way to do this? Use a stored procedure, do it in my code, or what? I'm pretty new at SQL Server, so if I'm missing a better way to accomplish the same thing, please point me in that direction. Thanks for your help and/or time.
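A stored procedure is a reasonable fit, since both statements belong in one transaction. A sketch, assuming a hypothetical dbo.ItemLocations(ItemID, CurrentLocation, Iter) table:

-- Age the existing history rows, then insert the new location as Iter = 0.
CREATE PROCEDURE dbo.AddItemLocation
    @ItemID          varchar(10),
    @CurrentLocation varchar(50)
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;
        UPDATE dbo.ItemLocations
        SET    Iter = Iter + 1
        WHERE  ItemID = @ItemID;               -- every prior row moves back one

        INSERT INTO dbo.ItemLocations (ItemID, CurrentLocation, Iter)
        VALUES (@ItemID, @CurrentLocation, 0); -- the newest row is always 0
    COMMIT TRANSACTION;
END;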
Situation: in this stored procedure, I have to calculate, in some manner, Font, FontSize, BoldText, ShowBox, and the number of characters to see how many lines it will take on a Crystal Report. I am wondering if you have seen something like this on the Web or have any ideas. Measurements (length, width) and character count seem appropriate. How about a function?
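As a very rough starting point, a scalar function that divides the character count by an assumed characters-per-line figure (which you would derive from font, size, and box width) might be enough. A sketch; the name, parameters, and formula are all assumptions:

-- Crude line estimate: characters divided by characters-per-line, rounded up.
CREATE FUNCTION dbo.EstimateLineCount
(
    @Text         varchar(8000),
    @CharsPerLine int   -- derive from font, font size, and box width elsewhere
)
RETURNS int
AS
BEGIN
    RETURN CEILING(LEN(@Text) * 1.0 / NULLIF(@CharsPerLine, 0));
END;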
I have recently started working on a project which involves using MSSQL to access a simple database. I have worked with Postgres SQL before, so I have a general idea of what SQL can be used for, but I'm having some difficulties applying that knowledge to MSSQL.
Currently, I would like to do the following (in abstract terms):
declare tmp record
select column1 from tableA into tmp
for each entry from above selection do
    insert into tableB values (tmp[column1], 0, 0, 0)
I remember doing something like this fairly easily in postgres. Trying to put that into MSSQL, I have:
CREATE FUNCTION dbo.newDay (@mDate datetime)
RETURNS int
AS
BEGIN
    DECLARE @id int
    DECLARE item_cursor CURSOR FOR
        SELECT id FROM tblKitchenCat

    OPEN item_cursor
    FETCH NEXT FROM item_cursor INTO @id
    WHILE @@FETCH_STATUS = 0
    BEGIN
        INSERT INTO tblKitchenList VALUES (@id, 0, 0, 0, 0, 0, @mDate)
        FETCH NEXT FROM item_cursor INTO @id
    END
    CLOSE item_cursor
    DEALLOCATE item_cursor
    RETURN 0
END
GO
I get a syntax error next to AS... what is it?
Can somebody please help me out here... any articles related to moving to MSSQL from Postgres would also be highly appreciated.
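As an aside, the cursor in the function above can usually be collapsed into a single set-based statement; a sketch against the same tables:

-- Set-based equivalent of the cursor loop: one INSERT...SELECT.
INSERT INTO tblKitchenList
SELECT id, 0, 0, 0, 0, 0, @mDate
FROM tblKitchenCat;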
In addition to that, I would like to schedule a particular function to run once a day, say at 2am. Is there a way to do this using MSSQL?
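SQL Server Agent is the standard answer here, either through the UI or the msdb procedures. A sketch of a job that runs daily at 02:00 (the job, step, and database names are placeholders):

-- Create a job, give it a T-SQL step, schedule it daily at 02:00,
-- and attach it to the local server.
EXEC msdb.dbo.sp_add_job @job_name = 'NewDayJob';

EXEC msdb.dbo.sp_add_jobstep
    @job_name      = 'NewDayJob',
    @step_name     = 'Run newDay',
    @subsystem     = 'TSQL',
    @command       = 'SELECT dbo.newDay(GETDATE());',
    @database_name = 'YourDb';     -- placeholder database

EXEC msdb.dbo.sp_add_jobschedule
    @job_name          = 'NewDayJob',
    @name              = 'Daily2am',
    @freq_type         = 4,        -- daily
    @freq_interval     = 1,
    @active_start_time = 020000;   -- hhmmss

EXEC msdb.dbo.sp_add_jobserver @job_name = 'NewDayJob';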
Hi, I want to log updates to specific fields, storing the new and old values. Is there any way I can iterate the collection of updated fields within a trigger in order to accomplish this? Thanks in advance, Julie Vazquez
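There is no directly iterable collection of updated fields in a T-SQL trigger, but UPDATE() / COLUMNS_UPDATED() combined with the inserted and deleted pseudo-tables gets the same result. A sketch for one column, repeated per field of interest (the table and audit schema are hypothetical):

-- Log old/new values of the Status column whenever it actually changes.
CREATE TRIGGER trg_MyTable_Audit ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF UPDATE(Status)   -- true when Status appears in the UPDATE's SET list
    BEGIN
        INSERT INTO dbo.AuditLog (ID, ColumnName, OldValue, NewValue, ChangedAt)
        SELECT d.ID, 'Status', d.Status, i.Status, GETDATE()
        FROM deleted AS d
        JOIN inserted AS i ON i.ID = d.ID
        WHERE ISNULL(d.Status, '') <> ISNULL(i.Status, '');
    END;
END;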
Can someone please let me know the best way to iterate the output rows of a script component and put those IDs in the WHERE clause of a SELECT query (to retrieve additional info from a database)? Is this possible at all? If not, what is the best way to deal with this situation?
In one of my interfaces, the source is a flat file which has a field called StoreID in the detail record. There can be multiple StoreIDs, and I have to generate a different file for each StoreID present in the source file. To achieve this, I first populate the data from the file into a temp table and use a Foreach ADO enumerator to iterate based on StoreID and produce the different files. This gives a satisfactory result.
But now I have to change the flow so that the temp table is not used, i.e., I have to iterate directly over the flat file. Is there a built-in enumerator to achieve this, or should we do this in a Script Task only? Any other options?
I'm trying write a reusable script component that takes data from rows that were rejected from a SQL Destination operation and put them into a common SQL error table.
This script would basically function to take the input columns selected in the script, build a delimited string (similar to the 'Flat File Source Error Output' that contains redirected rows from reading a flat file), and insert this string into a SQL table called 'SourceData' to store errors.
I'm trying to script the component to iterate through all input columns (as selected in the input columns screen) and build a simple string.
Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
    'Use the incoming error number as a parameter to GetErrorDescription
    Row.ErrorDescription = ComponentMetaData.GetErrorDescription(Row.ErrorCode)
    Try
        Row.ErrorColumnName = ComponentMetaData.InputCollection(0).InputColumnCollection(Row.ErrorColumn).Name
    Catch ex As Exception
        Row.ErrorColumnName = String.Concat("Column Name retrieval failure. Details", ex.Message)
    End Try

    'Build input data
    Dim inData As String
    For Each inputCol As IDTSInputColumn90 In ComponentMetaData.InputCollection(0).InputColumnCollection
        inData = String.Concat(inData, "~", inputCol.Name) 'I don't want the name, but the value.
    Next
    Row.SourceData = inData
End Sub
I've only got as far as iterating the names of the columns in the input buffer, but how do I get the values?
The result I'm trying to achieve:
Selected columns in the 'Input Columns' screen: Name, Address, Phone
Output column 'SourceData' value: Harry~Melbourne~None
I've got an issue with a query in SSIS. From a table in SQL Server I'm getting over 25,000 different identifiers. These identifiers are associated with many values in a table in an Oracle database. This is the schema that I have implemented for doing this.
The problem is that on some days the identifiers can number over 45,000, and at that point performing a loop for every one is not the best solution (it can take too much time to get the result). Previously I took another approach: from the SQL statement I create and send a single row with all the values concatenated, then recover this string from an object and use it to build the query in the ODBC Source that invokes the table in Oracle, something like 'Select * from Oracle_table' + @string_values, with @string_values = 'where value in (........)'. It works well when the number of values is small enough, like 250. But in this case I cannot use that approach, because the number is really big and the Oracle DBA is obviously going to cancel the query.
So I wonder: how can I iterate over the object, taking only a small number of values each time, say 300 or at most 500, to avoid the cancellation of the query while at the same time doing the minimum number of loops?
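One way is to assign batch numbers up front and let the Foreach loop run once per batch. A sketch, assuming the identifiers sit in a hypothetical dbo.Identifiers(id) table:

-- Chunk identifiers into batches of at most 300, then build one
-- comma-separated list per batch to splice into the Oracle query text.
SELECT id,
       (ROW_NUMBER() OVER (ORDER BY id) - 1) / 300 AS batch_no
INTO   #Batches
FROM   dbo.Identifiers;

SELECT b.batch_no,
       STUFF((SELECT ',' + CAST(b2.id AS varchar(20))
              FROM #Batches AS b2
              WHERE b2.batch_no = b.batch_no
              FOR XML PATH('')), 1, 1, '') AS id_list
FROM   #Batches AS b
GROUP BY b.batch_no;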
In several threads there has been discussion regarding adding connection managers to a package's data flow, etc. My challenge is that I have a large solution that contains many packages, and I need to change the connection manager linked to the data flow in all of the packages. When the solution was initially designed, data sources were used, and it has become a tedious maintenance issue to keep those in sync. We want to use a standard OLE DB connection manager, but adding a connection manager to each package and editing the corresponding data flow tasks in each package to use that new connection manager is a daunting task.

I've coded a .NET module to access the packages, remove the old connection manager (data source) and add the new OLE DB data source. However, as I traverse the objects in the package hierarchy, when I come to the data flow object, the InnerObject is not a DTS object, but rather a COM object. I can't seem to find any documentation or examples of how to iterate the tasks within a data flow and change the connection manager.

If you have any information, that would be quite helpful. If you reply with a code sample, and would be so kind as to relate it to one of the sample packages provided with SSIS so I can run it, that would be great.
http://msdn2.microsoft.com/en-us/library/aa177009(SQL.80).aspx I need to schedule ASP code execution using a SQL stored procedure... I'm not allowed to connect using Enterprise Manager. Is it possible from ASP, and if so, how?
Hi all, I want to make a job scheduler in SQL Server 2005. That is, I want to send email to users daily at 12:00 PM. How can I schedule this using SQL Server Agent in SQL Server 2005? I also want to filter the users from a user table. Please help me. Thanks!
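Assuming Database Mail is set up, one sketch is a job step that builds the recipient list from the filtered user table and sends one mail; an Agent schedule with @active_start_time = 120000 then fires it at 12:00 PM daily. The table, column, and profile names below are placeholders:

-- Concatenate the filtered users' addresses and send one daily mail.
DECLARE @recipients varchar(max);

SELECT @recipients = COALESCE(@recipients + ';', '') + Email
FROM dbo.Users              -- hypothetical user table
WHERE IsActive = 1;         -- hypothetical filter

EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'DefaultProfile',   -- hypothetical mail profile
    @recipients   = @recipients,
    @subject      = 'Daily notice',
    @body         = 'Scheduled mail sent at 12:00 PM by SQL Server Agent.';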
I have lots of DTS packages that I am running manually every day, and I am trying to schedule all of them, but all of them are failing. Some of them run across domains, but the owner on both domains has the same user ID and password.
Any idea why this is failing would be really helpful.
I would like to import data via DTS from one database to another, running every half hour. What would be the easiest way to do this? Should I set up a job to run every half hour?
If anyone could help with some suggestions, it would be appreciated.
When I try to schedule a DTS package which runs an EXE, it doesn't run. If I just run the DTS package, it works fine. I have checked the permissions of the scheduled DTS package and made sure it had proper rights. The SQL Server services were started with the proper security rights. When I schedule the DTS package, it is being done on the server, not on a workstation. Any ideas on why it will not run?
Also, how do you schedule an EXE under SQL Server Agent Jobs? Even when I schedule an EXE that is not a DTS package, it doesn't run...
I have almost completed a full project by myself, which is soooo exciting. The last thing I need to do is schedule a job to run the query on a specific day and time. So how do I do this? It asks me for a command, and I am not sure what I am supposed to enter here.
I'm trying to schedule a DTS package that I have created. When I right-click on the package in Enterprise Manager and click "Schedule", I get the options to set up the job to run on a schedule.
I fill out the time and click "OK". But when I immediately right-click on the package again, all my settings are gone and the defaults are back in place.
How do I know if the package will run, or if it did run?
I would actually prefer to run the job from a command line using a Scheduled Task... What would the syntax be? My DTS Local Package is named "IMPORT_DAILY_UPDATE"
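For a Scheduled Task, the dtsrun utility is the usual route. A sketch assuming Windows authentication and a package stored in the local msdb:

rem Run a local DTS package under Windows authentication
dtsrun /S YourServer /E /N "IMPORT_DAILY_UPDATE"

Here /S names the server, /E requests a trusted connection, and /N names the package; swap /E for /U and /P if SQL authentication is required.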