Hi everyone
I want to know if it's possible to use a for/while loop so I can do the INSERTs.
Look:
I have this: int[] test = new int[140];
But I need to insert a number for every one of the 140 values,
so normally it would be:
INSERT ... (case1, case2, case3 ...) VALUES (test[1], test[2], test[3] ...)
But isn't there a way to do it with a loop?
Something like this?
for (int i = 0; i < 140; i++) {
    INSERT case[i] VALUES test[i]
}
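For what it's worth, here is a minimal sketch of the same idea done purely in T-SQL, assuming a hypothetical table dbo.test_values with a slot column and a value column (on the client side, a parameterized INSERT executed inside the language's own for loop achieves the same thing):

DECLARE @i int
SET @i = 0
WHILE @i < 140
BEGIN
    -- one row per slot; the value expression is only a placeholder
    INSERT INTO dbo.test_values (slot, value) VALUES (@i, @i)
    SET @i = @i + 1
END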
I was BCPing 12 million rows into a staging table. I used the '-b' option every 20K, which I thought would do a commit and clear the log in batches. After the process, EM appeared to show the transaction log as empty. Upon inspecting the BCP output file I discovered the message that the BCP did not complete because syslogs was full. I could not do a truncate transaction log or a dump database. I tried to do a truncate transaction with no_log and it appeared to just hang. I stopped the SQL Server thinking I could dump the transaction log, but could not start the SQL Server again. I then stopped the NT Server because 'if all else fails'. The SQL Server started, but the user database is marked as recovering.
What should I have done? Is there anything that can be done other than restoring from backup? How does one know if the database is really recovering, or is EM just joking? I can wait 2 hours before starting the restore.
I have made some stored procedures to check if a user is involved with a certain record. Basically, every stored procedure contains the following logic.
example spCheckClientRelated:

select @res = count(*) from client_role where client_id = @cid and employee_id = @eid
if (@res = 0)
begin
    ... next select
end
if (@res = 0)
begin
    ... next select
end
....
return @res
end
So far so good. But the final check in spCheckClientRelated tests whether a user is related to one of the sales projects for that client.
I already have the spCheckSalesProjectRelated procedure that returns 1 or 0, similar to the example above.
So I want to find an efficient method that selects all the sales_project_id's from the sales_project table where client_id = @cid (at the moment I use, of course, select @sid = sales_project_id from sales_project where client_id = @cid).
And then I have to execute the spCheckSalesProjectRelated procedure for each @sid and @eid. This is of course where my problem is located: I don't know how to do a fast check for every selected @sid until spCheckSalesProjectRelated returns 1.
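A minimal sketch of the kind of loop in question, assuming @cid and @eid are the procedure's parameters and spCheckSalesProjectRelated reports 0 or 1 through its return value (the cursor name and the stop-early condition are only illustrative):

DECLARE @sid int, @ret int
SET @ret = 0
DECLARE crsProjects CURSOR FOR
    SELECT sales_project_id FROM sales_project WHERE client_id = @cid
OPEN crsProjects
FETCH NEXT FROM crsProjects INTO @sid
WHILE @@FETCH_STATUS = 0 AND @ret = 0
BEGIN
    -- stop as soon as one project relates the employee to the client
    EXEC @ret = spCheckSalesProjectRelated @sid, @eid
    FETCH NEXT FROM crsProjects INTO @sid
END
CLOSE crsProjects
DEALLOCATE crsProjects

If the procedure's logic can be expressed as a single query, an EXISTS-based join would avoid the per-row procedure call entirely.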
As you probably can determine from my question, sql is not really my domain, and I'm certainly not an expert, but I don't mind reading or looking up some stuff, so even a clue or a direction to look in would be most appreciated
Thanks very much for the reply. I am using transactional replication for the database. I will try snapshot replication as you suggested.
You mentioned that it will work with transactional replication only if the application uses the 'with log' option for those transactions.
Can you let me know where I can set this option for transactional replication? I am sorry, but I am not well versed in database replication procedures and management.
I need to insert data into a temp table in SQL. I have:
CREATE TABLE TMP_X ( doc_name varchar(200) )
--select * from TMP_X
INSERT into TMP_X values ( '...,
but it's saying there isn't a match, and I know why: it's trying to insert all the data as one row, but I need them as separate rows since I want only 1 column. Is there another INSERT-type function?
If I have a table with one column and I want to insert a few hundred rows of names, I can't use the INSERT statement as that does one row at a time. How can I achieve this?
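On SQL Server 2008 and later, a single INSERT can take multiple row constructors; on older versions, SELECT ... UNION ALL does the same job. A minimal sketch against the TMP_X table above (the file names are placeholders):

-- one statement, one column, several rows (SQL Server 2008+)
INSERT INTO TMP_X (doc_name)
VALUES ('doc1.txt'), ('doc2.txt'), ('doc3.txt')

-- equivalent on older versions
INSERT INTO TMP_X (doc_name)
SELECT 'doc1.txt' UNION ALL
SELECT 'doc2.txt' UNION ALL
SELECT 'doc3.txt'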
I'm looking for a way to insert 50k records into a SQL Server table, and need to get it done faster. Right now BULK INSERT takes 5-10 seconds, but faster would be better, and even better if it took a consistent amount of time.
I've heard of DTS but don't know quite how to use it - would it offer any performance gains? Any clue what the bottleneck is for BULK INSERT? Hard drive speed? Amount of RAM (this was on a 512MB machine)? Parsing the fields?
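For reference, these are the BULK INSERT options usually tried first when tuning a load; the table name, file path, and terminators here are placeholders:

BULK INSERT dbo.TargetTable
FROM 'C:\data\records.txt'
WITH (
    TABLOCK,                 -- take a table lock so the load can be minimally logged
    ROWS_PER_BATCH = 50000,  -- hint the total row count to the optimizer
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
)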
I'm seeing some strange behavior from the OLE DB Destination when using the "fast load" access mode and setting the "Maximum insert commit size".
When I do not set the "Rows per batch" or the "Maximum insert commit size", the package I'm working with inserts 123,070 rows using a single "insert bulk" statement. The data seems to flow through the pipeline until it gets to the OLE DB Destination and then I see a short pause. I'm assuming the pause is from the "insert bulk" statement handling all of the rows at once.
When I set the "Rows per batch" option but leave the "Maximum insert commit size" alone, I generally see the same behavior -- a single "insert bulk" statement that handles all 123,070. In this case, however, the "insert bulk" statement has a "ROWS_PER_BATCH" option appended to the statement that matches the "Rows per batch" setting. This makes sense. I'm assuming the "insert bulk" then "batches" the rows into multiple insert statements (although I'm unsure of how to confirm this). This version of the "insert bulk" statement appears to run in about the same time as the case above.
When I set the "Maximum insert commit size" option and leave the "Rows per batch" statement alone, I see multiple "insert bulk" statements being executed, each handling the lower of either the value I specify for the "Maximum insert commit size" or the number of rows in a single buffer flowing through the pipeline. In my testing, the number of rows in a buffer was 9,681. So, if I set the "Maximum insert commit size" to 5,000, I see two "insert bulk" statements for each buffer that flows into the OLE DB Destination (one handling 5,000 rows and one handling 4,681 rows). If I set the "Maximum insert commit size" to 10,000, I see a single "insert bulk" statement for each buffer that flows into the OLE DB Destination (handling 9,681 rows).
Now the problem. When I set the "Maximum insert commit size" as described in the last case above, I see LONG pauses between buffers being handled by the OLE DB Destination. For example, I might see one buffer of data flow through (and be handled by one or more "insert bulk" statements based on the "Maximum insert commit size" setting), then see a 2-3 minute pause before the next buffer of data is handled (with its one or more "insert bulk" statements being executed). Then I might see a 4-5 minute pause before the next buffer of data is handled. The pause between the buffers being passed through the OLE DB Destination (and handled via the "insert bulk" statements) is sometimes shorter, sometimes longer.
Using Profiler, I don't see any other activity going on within the database or within SQL Server itself that would explain the pauses between the buffers being handled by the OLE DB Destination and the resulting "insert bulk" statements...
Can anyone explain what is going on here? Is setting the "Maximum insert commit size" a bad idea? What are the differences between it and the "Rows per batch" setting and what are the recommended uses of these two options to try to improve the performance of the insert (particularly when handling millions of rows)?
Hello, I have 10 tables (T1Orj, T2Orj, ... T10Orj) and I need to find the modified rows from each table and insert them into T1Bak, T2Bak, ... T10Bak. Although each original and bak table pair have common fields, the original tables have more fields than the bak tables, and the T1Orj, T2Orj, ... T10Orj tables have different structures from one another. I can go ahead and write an INSERT INTO query for each table; I am just wondering, is there any way I can do this in a loop for all tables?
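A minimal sketch of the dynamic-SQL loop in question. Col1/Col2 stand in for whatever columns each Orj/Bak pair shares, and ModifiedFlag is a hypothetical marker for changed rows; since each pair shares a different column list, the real per-table lists would have to come from somewhere (a mapping table, or a CASE on the counter):

DECLARE @i int, @sql nvarchar(4000)
SET @i = 1
WHILE @i <= 10
BEGIN
    -- build and run one INSERT per table pair
    SET @sql = N'INSERT INTO T' + CAST(@i AS nvarchar(2)) + N'Bak (Col1, Col2) '
             + N'SELECT Col1, Col2 FROM T' + CAST(@i AS nvarchar(2)) + N'Orj '
             + N'WHERE ModifiedFlag = 1'
    EXEC sp_executesql @sql
    SET @i = @i + 1
END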
Help with looping in an SSIS package.

Scenario: We have a web app that lets our existing clients insert new locations into a table (Clients) in a SQL Server DB. This table has an Identity column as the PK. It also has another ID field, HBID, that is the PK for a table (HLocation) in another database system (Sybase). The HBID field is given a default value of '1' during an insert operation in our (SQL Server) table, Clients. The Sybase database uses a sequence table for inserting a PK into the table. Whenever our clients insert a new record in Clients (SQL Server), I need to generate a new HBID from the sequence table in Sybase, update the HBID in the Clients table (SQL Server), and then finally insert the record with the new HBID into table HLocation (Sybase).

I have devised an SSIS package for this as follows. Note: all variables are scoped at the package level.

Execute SQL Task #1 that drives a For Each Loop, with the following properties:

General: Connection Type: OLE DB, Connection: SQL Server DB, SQL SourceType: Direct Input, SQLStatement: SELECT COID, HBID, Name, DateCreated FROM Client WHERE HBID = '1'
ResultSet: = Full Resultset
Result Set: Result Name = 0 VariableName = USER::rsClient Variable Type = Object
ForEachLoop with the following properties:
Collection: Foreach ADO Enumerator, ADO Object Source Variable = USER::rsClient, Enumeration Mode = Rows in first table
Variable Mappings (Variable : Index): COID : 0, HBID : 1, Name : 2, DateCreated : 3
Inside of the ForEachLoop I have another Execute SQL Task to generate a new HBID from Sybase, set up as follows.
Execute SQL Task #2
Connection Type: OLE DB Connection: Sybase SQL SourceType: Direct Input SQLStatement: UPDATE Autoinc SET INC_LAST = (INC_LAST+1) WHERE INC_KEY ='HBID'; SELECT INC_LAST AS NewHBID FROM Autoinc WHERE INC_KEY ='HBID'
ResultSet = SingleRow
Result Set: Result Name = NewHBID VariableName = USER::NewHBID, VariableType = Int32
Also inside the ForEachLoop is a Script Task that has all of my variables as ReadWrite (COID, HBID, Name, DateCreated, and NewHBID). I concatenate the values in a string and pass the string into a message box to make sure they are looping correctly, and they are. For example, the results might look like the following:
12698, 1, John Doe Trucking, 10/1/2007, 14550
13974, 1, Joe Smith Trucking, 10/1/2007, 14551
10542, 1, Dave Jones Trucking, 10/1/2007, 14552
Etc.
The values 14550-14552 are the new HBIDs being generated in the loop.
The problem is that when I try to update the HBID in the Client table (SQL Server) with another Execute SQL Task, I keep getting the same NewHBID number. In this case, 14550 would be written to every record instead of the next number in the loop.
I have set up Execute SQL Task #3 as follows:
General: Connection Type: OLE DB Connection: SQL Server DB SQL SourceType: Direct Input SQLStatement: UPDATE Client SET HBID = ? WHERE HBID = '1'(SELECT COID, HBID, Name, DateCreated FROM Client)
ResultSet: = Full ResultSet
Result Set: Result Name = 0 VariableName = USER::rsNewClient, VariableType = Object
Parameter Mapping: VariableName USER::NewHBID, Direction = INPUT, DataType = Long ParameterName = 0
I've tried putting Execute SQL Task #3 inside of the ForEachLoop, and connecting it to the output of the ForEachLoop. I've tried setting up a data flow with a Derived Column using USER::NewHBID as the expression.
I still get the same results, 14550 added to every row.
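In case it helps frame the question: as written, UPDATE Client SET HBID = ? WHERE HBID = '1' touches every row still carrying the default, so each pass overwrites all of them with the current loop value. A sketch of a statement scoped to the current loop row, assuming COID identifies it and is mapped as a second parameter:

UPDATE Client
SET HBID = ?     -- parameter 0: User::NewHBID
WHERE COID = ?   -- parameter 1: User::COID, the row this loop iteration is processing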
Can anyone help or shed some light?
Any and all suggestions will be deeply appreciated!
I will be using an Execute SQL Task to fetch the records from the source; after this I want to use a Foreach Loop to access each record one at a time, perform some transformations, and insert that record into the destination.
Help me access the data stored in the variable (from the SQL Task) in the Data Flow Task of the Foreach Loop.
Here are the steps I should take... 1) Check the log table and find the run status (there is a date field which tells the day of the run). 2) Let's say the last day was 2008-05-15, so I have to check whether an A1.YYYYMMDD file exists in the folder for each day, like A1.20080516, A1.20080517, and A1.20080518 (until today). 3) If the A1.20080516 text file exists, then I have to move it to the table, and the same for the other dates: if A1.20080517 exists I have to load it into the table, and so on.
It looks like a Foreach Loop: first I have to get the last date, then I have to check whether the file exists for each date, and if the date's file exists then I have to load it into the table...
Please tell me how I can do it. It looks like complex looping...
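One way to sketch the date-walking part in T-SQL, assuming the last run date comes from the log table and the undocumented xp_fileexist procedure is acceptable for the existence check; the dbo.RunLog table, RunDate column, and folder path are placeholders:

DECLARE @d datetime, @today datetime, @fname varchar(300), @exists int
SET @today = DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0)   -- today at midnight
SELECT @d = MAX(RunDate) FROM dbo.RunLog                    -- last day already loaded
WHILE @d < @today
BEGIN
    SET @d = DATEADD(day, 1, @d)
    SET @fname = 'C:\feeds\A1.' + CONVERT(varchar(8), @d, 112)   -- builds A1.YYYYMMDD
    EXEC master.dbo.xp_fileexist @fname, @exists OUTPUT          -- undocumented but widely used
    IF @exists = 1
        PRINT 'load ' + @fname   -- the BULK INSERT (or SSIS load) for that file goes here
END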
I have a ForEach Loop using the Foreach File Enumerator. Within this loop I have a SQL Task containing an INSERT statement. When I run the INSERT statement in the query builder, the transaction inserts data into the table as expected.
However, when actually running the process I am getting the error message:
Executing the query "INSERT INTO dbo.TEST_TABLE ..." failed with the following error: "Value does not fall within the expected range.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
I currently have the ResultSet set to "None" and have defined the parameter I am using. Where the process seems to choke is on my file_Name variable, while I am trying to insert only part of the file name.
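For context, one hedged way to take just part of the file name inside the statement itself, assuming the variable is mapped as parameter 0 and the target column is called FileCode (both names are placeholders):

INSERT INTO dbo.TEST_TABLE (FileCode)
VALUES (SUBSTRING(?, 1, 8))   -- parameter 0 = User::file_Name; keep only its first 8 characters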
I was trying to insert some rows from one table into another in a different database. I was using an Execute SQL Task along with a Foreach Loop container.
In my Execute SQL Task I am using this query:
SET IDENTITY_INSERT dbo.Table1 ON
INSERT INTO dbo.Table1
SELECT * FROM DB2.dbo.Table2 WHERE TableKey = ?
When executed I get this error: failed with the following error: "Syntax error, permission violation, or other nonspecific error". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
Meanwhile, the same query executes successfully in Management Studio.
The properties I set:
For Each Loop Editor settings:
1) Collection: a) Enumerator set to Foreach ADO Enumerator; b) ADO Object Source Variable: User::ObjectVariablename; c) checked "Rows in the first table".
2) Variable Mapping: new int Variable2 with Index = 0 to map it to the first column.
3) Expression: left blank.

Execute SQL Task Editor:
1) General: a) TimeOut: 0; b) CodePage: 1252; c) Result Set: None; d) SQLSourceType: Direct input; e) SQL Statement: SET IDENTITY_INSERT dbo.Table1 ON INSERT INTO dbo.Table1 SELECT * FROM DB2.dbo.Table2 WHERE TableKey = ? f) BypassPrepare: False.
2) Parameter Mapping: Variable Name: new integer Variable2 selected; Direction: Input; DataType: Long; ParameterName: 0.
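One detail that may matter: when IDENTITY_INSERT is ON, SQL Server requires an explicit column list on the INSERT, which SELECT * does not provide. A sketch with the list spelled out (the column names are placeholders for Table1's real ones):

SET IDENTITY_INSERT dbo.Table1 ON
INSERT INTO dbo.Table1 (TableKey, Col1, Col2)   -- explicit list is mandatory with IDENTITY_INSERT ON
SELECT TableKey, Col1, Col2
FROM DB2.dbo.Table2
WHERE TableKey = ?
SET IDENTITY_INSERT dbo.Table1 OFF              -- only one table per session can have it ON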
Can somebody help me in this regard?
Reference: a) http://www.whiteknighttechnology.com/cs/blogs/brian_knight/archive/2006/03/03/126.aspx
Hi guys. The problem is... when I try to bulk insert a single file, it's working fine. When I want to loop over a set of files in a folder using a Foreach Loop and a Bulk Insert Task... it's failing.
In the flat file connection, when I specify the usage type as "existing file", it loads the same file n times, where n is the number of files in the folder.
When I select usage type "existing folder" I get the error "Cannot bulk load because the file "....folder" could not be opened." Can someone help me out with this? I have SQL 2005 as a separate instance... beside my 2000, which is the default.
Declare @StartDt date = '2015-03-15'
Declare @EndDt date = DATEADD(M, 1, @StartDt)
Declare @Days int = DATEDIFF(d, @StartDt, @EndDt)
Declare @TBLSales as table (SaleDate date, Value money)
Declare @Today date
Declare @TBLSalesCounts as table (StatusDesc varchar(100), Value money)
[Code] ....
I end up with the following result :
How would I alter my WHILE loop to only insert the sum total of each day, instead of creating duplicates for each day?
E.g.
2015-04-22 1150.00
2015-04-21 785.00
2015-04-20 750.00
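Without seeing the elided loop body, one hedged option is to aggregate per day in a single statement instead of inside the loop, using the table variables declared above:

-- one row per day: sum the sales first, then insert
INSERT INTO @TBLSalesCounts (StatusDesc, Value)
SELECT CONVERT(varchar(100), SaleDate, 120), SUM(Value)
FROM @TBLSales
GROUP BY SaleDate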
Is there a way to insert multiple records into a database table when you're just given a "count" of the number of rows you want? I want to do this in ONE insert statement, so I don't want a solution that loops around doing 100 inserts - that would be too inefficient.
For example, suppose I want to create 100 card records starting at card number '1234000012340000'. Something like this...
declare @card_start dec(16)
set @card_start = '1234000012340000'
declare @card_count int
set @card_count = 100
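Continuing the sketch: a set-based INSERT needs a numbers source, and master..spt_values (type 'P' covers 0-2047) is a common ad hoc one; the dbo.Cards table and its card_number column are placeholders:

-- 100 rows in one statement: card numbers @card_start .. @card_start + 99
INSERT INTO dbo.Cards (card_number)
SELECT @card_start + number
FROM master..spt_values
WHERE type = 'P'
  AND number BETWEEN 0 AND @card_count - 1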
I am running into an issue that seems to be corroborated here: http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=188886&SiteID=1.
To summarize, I have a Foreach Loop that uses a Foreach File Enumerator. The loop writes the file names to a variable which is used in the Expressions property for a flat file's ConnectionString. The flat file is used as the source in a Bulk Insert Task that is inside the Foreach Loop. The flat file's connection string is not picking up the file name, resulting in this error: "The specified connection "test.txt" is either not valid, or points to an invalid object. To continue, specify a valid connection."
It doesn't work for a file connection manager or a flat file connection manager.
It does work if I replace the Bulk Insert Task with a Data Flow (flat file source -> ole db destination). It also works if I set the file's connection string manually in a script task.
It looks to me as though the Bulk Insert isn't calling whatever method in the connection manager that reevaluates the property expression. Am I missing something or is this a bug? I looked at the connect site, but couldn't find this particular scenario.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[PaymentsLog](
[Code] ....
Is there a way to look at the DatePeriod table and use its StartDate and EndDate as the period bounds in the SELECT statement, and then cursor through each date between these two dates and insert the data into the PaymentsLog table?
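A minimal day-by-day loop over that period, assuming DatePeriod holds one row with StartDate/EndDate; the PaymentsLog column list is elided above, so PayDate and the loop body are placeholders:

DECLARE @d datetime, @end datetime
SELECT @d = StartDate, @end = EndDate FROM dbo.DatePeriod

WHILE @d <= @end
BEGIN
    -- insert that day's data; PayDate is a placeholder column
    INSERT INTO dbo.PaymentsLog (PayDate)
    VALUES (@d)
    SET @d = DATEADD(DAY, 1, @d)
END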
In SQL Server 2000, in Query Analyzer: for example, I have an emp table. When I execute a query, I want to get the emp table's column names with their data types, for example:

empno -- int
empname -- varchar

...like this. What is the query to get output like this?
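For reference, a query that works on SQL Server 2000 using the standard INFORMATION_SCHEMA views (substitute the real table name):

SELECT COLUMN_NAME, DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'emp'
ORDER BY ORDINAL_POSITION

Running sp_help 'emp' in Query Analyzer returns much the same information.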
I am not using the SA account. When I log in using Windows authentication, it seems to pick up my domain and Active Directory ID, which is "usapp1dxd"; it appears greyed out, as well as the password on the login screen, so you can't change it.
If you go into Management Studio, right-click on the server name in the left-hand pane, and go to Properties > Security, the radio button for "SQL Server and Windows Authentication mode" is selected.
To begin with, thanks for your reply. I started a new job and I was hopeless when all the normally expected things couldn't be done. I asked a lot of people who work on MSSQL (including SWYNK) about my problems (I know that I'm a beginner on MSSQL), but they didn't know any solution. And after all that, I wrote this. For your illustration: we develop an information system for an investment company. The back end is on MSSQL and the client is in ASP.
>So... if you need a database you need to look at what needs to be done and >pick your DBMS that meets your requirements.
--I haven't yet decided which DB we will develop on :((
--No named cursors. I mean that, in general, having to use cursor names is useless, and you must deal with the names... In Interbase:

DECLARE VARIABLE var1 INTEGER;
DECLARE VARIABLE var2 INTEGER;
FOR SELECT column1, column2 FROM TABLE WHERE ...
    INTO :var1, :var2
DO BEGIN
    /* some code in loop */
END
In MSSQL:

DECLARE ...
DECLARE crs CURSOR FOR SELECT ...
OPEN crs
FETCH NEXT FROM crs INTO @var1, @var2
WHILE (@@FETCH_STATUS = 0)
BEGIN
    -- some code in loop
    FETCH NEXT FROM crs INTO @var1, @var2
END
CLOSE crs
DEALLOCATE crs

-I think that in MSSQL this isn't elegant.
-problem with recursive procs
-you must deal with names
-If you want to, you may in fact use named cursors in Interbase as well.
>-it don't know create resultset's from >> stored procs asynchronously when in sp is something else then only one >> select(problem if I want check access rigths to sp. for exapmle "...AS >> CheckPrivilege( ... ) SELECT.." This is confusing if user runs large >> query & he must wait until it creates the whole recordset...Armageddon goes >> first... Here is a solution, but it isn't very elegant and in some >> cases don't exists good solution > >I'm not sure if configuring SQL Server's "cursor threshold" parameter would solve this. "When >set to -1, all keysets are generated synchronously. If the cursor threshold is set to 0, all >keysets will be generated asynchronously."
-"cursor threshold" resolves it only in single select's and procedures with only one select's -It's unacceptable when your your user runs large queries
>>-max of nested procs is 16. >That is the limit in V6.5. In V7.0 the limit is raised to 32. -cool:)
>Union operator can't be used in a Create View statement in V6.5 according to the documentation. >Does Oracle support this? It appears that V7.0 of MSSQL will support UNION in a view. -cool:))
>-It don't have good exception > handling.(something like EXCEPTION, WHEN... (oracle, Intebase)) > >There are whole sections on error handling in SQL Books OnLine depending on >how you are >accessing SQL Server. Maybe it's as good as Oracle/Interbase... maybe not. I >don't seem to have a problem with it.
-The Books are very feeble:( -but I think that it can't work with EXCEPTION blocks. Good exception handling is an important tool for rapid development of robust apps.
-I used example(s) from Interbase because I have worked with it more, but Oracle is similar.
For a query like the one below... how do I select only the latest revisions, if I need to filter the last current revision of each document... where the revision could be either alphabetical or numerical?
Presently I get all revisions with the below query... Note: csd_revi is the revisions field of the CSD table.
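The query itself isn't shown, so purely as a hedged sketch: one common shape is a correlated MAX per document, assuming a hypothetical csd_docno column identifies the document within CSD:

SELECT c.*
FROM CSD c
WHERE c.csd_revi = (SELECT MAX(c2.csd_revi)
                    FROM CSD c2
                    WHERE c2.csd_docno = c.csd_docno)

Note that MAX handles both alphabetical and numerical revisions only if the column's sort order matches revision order (e.g. 'A' < 'B', but '10' < '2' as text), which is worth checking first.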
I've gotten everything to work -- almost! Here's the scoop... I'm using SQL Server 7.0 and ColdFusion (you don't need to know anything about ColdFusion). I'm trying to get SQL Server to publish/share a database with the network so ColdFusion (our website management/creation program) can access it and use it in a webpage. So far, ColdFusion can detect and access the database (called "iami"), but it cannot find any tables in the database (particularly one called "phase1"). Can anyone help me -- the sooner the better?
If you reply before 4pm today and offer truly helpful advice/input, I'll send you a reward/incentive from me personally (just give me your mailing address) just for helping out!!! :-)
Can anyone help me with handy scripts/stored procedures to capture blocking info on the server? We have SQL 7.0 w/SP3. Ray replied earlier saying that I can use sp_who2 -- I think everyone knows that. Please reply with some valuable info! Please... Thanks, Sonali.
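For what it's worth, a starting point that goes a bit beyond sp_who2 on 7.0 is querying sysprocesses directly for blocked sessions and their blockers:

-- sessions that are currently blocked, and who is blocking them
SELECT spid, blocked AS blocking_spid, waittime, lastwaittype, cmd, hostname, program_name
FROM master..sysprocesses
WHERE blocked <> 0

DBCC INPUTBUFFER(spid) on the blocking SPID then shows what that session was last running.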
I think this is a bug. I have a table created and populated on its own filegroup. I back up the db (all filegroups) and the transaction log; then I drop the one table. When I try to restore from my backups, it insists that I back up the transaction log again. I do, then restore both the filegroup and the transaction log. The restore finishes, but my table is still not there, and I can never get it back.
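In case it frames the issue: the log backup taken after the DROP contains the DROP, so rolling it forward removes the table again, and a filegroup-only restore must roll the log all the way forward. The usual way out is a point-in-time restore of the whole database, stopping just before the drop. A sketch (names, paths, and the time are placeholders):

RESTORE DATABASE MyDb FROM DISK = 'c:\backups\mydb_full.bak' WITH NORECOVERY
RESTORE LOG MyDb FROM DISK = 'c:\backups\mydb_log.trn'
    WITH RECOVERY, STOPAT = '2000-01-15 10:30:00'   -- a moment before the DROP TABLE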