I am trying to execute a stored procedure using an Execute SQL Task.
I am aware that errors can occur, and I have handled them sufficiently.
All I want is this: whenever an error occurs in the stored procedure, the Execute SQL Task should not fail, and control should pass to the next task in the control flow.
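One option is to trap errors inside the procedure itself, so the Execute SQL Task never sees a failure. A minimal sketch, assuming a hypothetical procedure usp_MyProc and a hypothetical dbo.ErrorLog table (both are placeholders, not names from the question):

CREATE PROCEDURE usp_MyProc
AS
BEGIN
    BEGIN TRY
        -- the real work goes here
        UPDATE SomeTable SET SomeColumn = 1;
    END TRY
    BEGIN CATCH
        -- swallow the error so the caller (the Execute SQL Task) succeeds;
        -- optionally record it for later review
        INSERT INTO dbo.ErrorLog (ErrorNumber, ErrorMessage, LoggedAt)
        VALUES (ERROR_NUMBER(), ERROR_MESSAGE(), GETDATE());
    END CATCH
END
GO

Alternatively, you can raise the task's MaximumErrorCount property or wire up a failure precedence constraint, but trapping the error inside the procedure keeps the control flow simple.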
Hello, I have the following code to iterate through each view in a SQL Server database and call sp_refreshview against it. It works great until it finds a view that is damaged or otherwise cannot be refreshed; then the whole routine stops working. Can someone please help me rewrite this code so that any views that fail sp_refreshview get skipped? I'm sure it's just a matter of putting some basic error trapping into the loop, but I've had a few goes at it and failed. Many thanks.

DECLARE @DatabaseObject varchar(255)

DECLARE ObjectCursor CURSOR FOR
    SELECT table_name FROM information_schema.tables WHERE table_type = 'VIEW'

OPEN ObjectCursor
FETCH NEXT FROM ObjectCursor INTO @DatabaseObject
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_refreshview @DatabaseObject
    PRINT @DatabaseObject + ' was successfully refreshed.'
    FETCH NEXT FROM ObjectCursor INTO @DatabaseObject
END
CLOSE ObjectCursor
DEALLOCATE ObjectCursor
GO
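With TRY...CATCH (available in SQL Server 2005 and later) the loop body can trap the failure and move on to the next view. A minimal sketch of just the WHILE body from the code above:

BEGIN
    BEGIN TRY
        EXEC sp_refreshview @DatabaseObject
        PRINT @DatabaseObject + ' was successfully refreshed.'
    END TRY
    BEGIN CATCH
        -- report the failure and carry on with the next view
        PRINT @DatabaseObject + ' could not be refreshed: ' + ERROR_MESSAGE()
    END CATCH
    FETCH NEXT FROM ObjectCursor INTO @DatabaseObject
END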
Does SQL Server have a way to handle errors in a sproc that would allow one to insert rows while ignoring rows that would create a duplicate key violation? I know that if one loops, one can handle the error on a row-by-row basis. But is there a way to skip the loop and do it as a bulk insert? It's easy to do in Access, but I'm curious to know whether SQL Server proper can handle something like this. I am guessing that a looping operation would be slower to execute?
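One set-based pattern is to filter the would-be duplicates out in the INSERT itself. A minimal sketch, assuming hypothetical tables SourceTable and TargetTable keyed on Id:

INSERT INTO TargetTable (Id, Name)
SELECT s.Id, s.Name
FROM SourceTable s
-- keep only rows whose key does not already exist in the target
WHERE NOT EXISTS (SELECT 1 FROM TargetTable t WHERE t.Id = s.Id);

Alternatively, a unique index created WITH IGNORE_DUP_KEY = ON makes SQL Server discard duplicate rows with a warning instead of an error, which avoids the loop entirely.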
Hi, I'm stuck with a small issue... would be great if somebody could help me out. Here is the scenario:
1. There will be more than one CSV file in the INPUT folder.
2. I'm using a Foreach Loop file enumerator to loop through the files and load the data into the database.
3. If loading succeeds, the file needs to be moved to the ARCHIVE folder and the next file picked up for loading.
4. If loading fails, the file has to be moved to the ERROR folder, the error description logged to an error log text file, and the next file picked up for loading.
I don't think increasing the maximum error count is an option, as I don't know how many input files there will be; it depends on the feed.
I have one table that contains a column named ID Number and a column named Date. I have a Do While loop that runs a SQL SELECT statement several times, based on the number of records with the same ID Number. During the loop the information is copied into another table and deleted from the old table. Looking at the results, I see that on the second pass of the loop no data was selected and the SELECT did not run... so the old value of varValue is used again. Any idea why?
Here is a code snippet of what is going on:

Do While varCount < varRecordCount
    conSqlConnect.Open()
    cmdSelect = New SqlCommand("Select * From temp_records_1 Where [id number]=@idnumber And date<@date", conSqlConnect)
    ' the parameter names added here must match the placeholders in the SQL text;
    ' adding the value under a different name (e.g. "@accountnumber") leaves @idnumber unbound
    cmdSelect.Parameters.AddWithValue("@idnumber", "10000")
    cmdSelect.Parameters.AddWithValue("@date", dtnow)
    dtrdatareader = cmdSelect.ExecuteReader()
    While dtrdatareader.Read()
        If IsDBNull(dtrdatareader("value")) = False Then
            varValue = dtrdatareader("value")
        End If
    End While
    dtrdatareader.Close()
    conSqlConnect.Close()
'#####The information above is copied to another table here '#####The record where the information was received is deleted.
I submitted an update query on a table of 80 million rows over the weekend. When I returned on Monday, the transaction was still running. I thought something had gone wrong and cancelled the transaction. It was taking a long time to roll back, so I recycled SQL Server, assuming it would recover faster. Now I realise it is going to take a long time either way, and SQL Server will not come up until the database is completely recovered. Can anybody suggest a way to speed this process up or skip it? I don't know how long it will take to roll back a transaction that ran for more than 70 hours.
I was attempting to use BCP today via xp_cmdshell. I have never done anything with BCP before, so it was very enlightening. However, I ran across a problem that maybe someone could help explain to me a little more.
I am using the "queryout" option, and when I run it, the error I get is that you "can't skip fields except for on inserts" or something like that.
The reason I was trying to use bcp is the ability to dynamically generate a filename, i.e. filename = 04182004 (the date), because in the file name argument I can use a variable. Make sense?
Since I apparently can't ignore fields, I am thinking of pulling the information I need daily into a separate table; then I can use xp_cmdshell to run a bcp that creates a file with the date as the filename, and I won't be ignoring any fields because the new table holds only what I need. Am I making sense? Does this sound like an appropriate thing to do?
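For reference, a minimal sketch of that approach, assuming a hypothetical staging table MyDb.dbo.DailyExtract and export folder C:\exports (the path, the -T trusted-connection switch, and the query are assumptions to adjust for your environment):

DECLARE @cmd varchar(1000)
-- build a file name like 04182004.txt from today's date (mmddyyyy)
SET @cmd = 'bcp "SELECT * FROM MyDb.dbo.DailyExtract" queryout '
         + '"C:\exports\' + REPLACE(CONVERT(char(10), GETDATE(), 101), '/', '') + '.txt" -c -T'
EXEC master..xp_cmdshell @cmd

CONVERT with style 101 gives mm/dd/yyyy; stripping the slashes yields the mmddyyyy file name from the example above.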
There is an option in SSIS to skip one or more header rows, but there isn't anything to skip one or more footer rows.
Example:
header bla bla
1;"Joe";24;"New York"
2;"John";54;"Washington"
3;"Phil";36;"San Francisco"
footer bla bla
I skip the first record in the source definition, so I have 4 records left. How do I skip the fourth (last) record? The footer contains some statistics, so I can't look for a special value. Is there a way to skip the last record with a script component?
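A script component can do it by buffering rows, but a simpler alternative (a different technique than the script component you ask about) is to land everything in a staging table and delete the footer afterwards. A minimal sketch, assuming a hypothetical staging table StagingImport with an IDENTITY column LoadId that is assigned in file order during the load:

-- the footer is the last row loaded, so it has the highest LoadId
DELETE FROM StagingImport
WHERE LoadId = (SELECT MAX(LoadId) FROM StagingImport);

After the delete, only the real data rows remain and can be moved on to the destination table.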
Hello, I'm using SqlDataReader to read my data, and every time I loop through the reader it starts from the second row. Why is that? Here is my code:

while (reader.Read())
{
    // note: if reader.Read() was already called once before this loop
    // (for example inside an if-check), that earlier call consumes the first row
    hinfo.Name = reader["_name"].ToString();
    hi.Add(hinfo);
}

I look at the database and I have two rows, but it's reading only the second row, skipping the first row.
Hi, I have the code below. I need to skip the first row in the datatable as it has the headers. This works now, but my gridview gets the header row inserted as a record.Private Shared Sub InsertData(ByVal sourceTable As System.Data.DataTable, ByVal destConnection As SqlConnection) ' old method: Lots of INSERT statements ' first, create the insert command that we will call over and over: destConnection.Open() Using ins As New SqlCommand("INSERT INTO [tblAppointmentDisposition] ([contactdate], [dnbnumber], [prospectname], [businessofficer], [phonemeeting], [followupcalldate2], [phonemeetingappt], [followupcalldate3], [appointmentdate], [appointmentlocation], [appointmentkept], [applicationgenerated], [applicationgenerated2], [applicationgenerated3], [comments], [newaccount], [futureopportunity]) VALUES (@contactdate, @dnbnumber, @prospectname, @businessofficer, @phonemeeting, @followupcalldate2, @phonemeetingappt, @followupcalldate3, @appointmentdate, @appointmentlocation, @appointmentkept, @applicationgenerated, @applicationgenerated2, @applicationgenerated3, @comments, @newaccount, @futureopportunity)", destConnection) ins.CommandType = CommandType.Text ins.Parameters.Add("@contactdate", SqlDbType.Text) ins.Parameters.Add("@dnbnumber", SqlDbType.Text) ins.Parameters.Add("@prospectname", SqlDbType.Text) ins.Parameters.Add("@businessofficer", SqlDbType.NVarChar) ins.Parameters.Add("@phonemeeting", SqlDbType.Text) ins.Parameters.Add("@followupcalldate2", SqlDbType.Text) ins.Parameters.Add("@phonemeetingappt", SqlDbType.Text) ins.Parameters.Add("@followupcalldate3", SqlDbType.Text) ins.Parameters.Add("@appointmentdate", SqlDbType.Text) ins.Parameters.Add("@appointmentlocation", SqlDbType.Text) ins.Parameters.Add("@appointmentkept", SqlDbType.Text) ins.Parameters.Add("@applicationgenerated", SqlDbType.Text) ins.Parameters.Add("@applicationgenerated2", SqlDbType.Text) ins.Parameters.Add("@applicationgenerated3", SqlDbType.Text) ins.Parameters.Add("@comments", SqlDbType.Text) ins.Parameters.Add("@newaccount", SqlDbType.Text) ins.Parameters.Add("@futureopportunity", SqlDbType.Text) ' and now, do the work: For Each r As DataRow In sourceTable.Rows For i As Integer = 0 To 16 ins.Parameters(i).Value = r(i) Next ins.ExecuteNonQuery() 'If System.Threading.Interlocked.Increment(rowscopied) Mod 10000 = 0 Then 'Console.WriteLine("-- copied {0} rows.", rowscopied) 'End If Next End Using destConnection.Close() End Sub
I have this code. It works, but it inserts the header row into the gridview. I need to skip the first row.

Protected Sub excelimport(ByVal dataSrc As SqlDataSource, ByVal fileName As String)
    Dim intFileNameLength As Integer
    Dim strFileNamePath As String
    Dim strFileNameOnly As String
    Dim strpath As String
    If Not (uploadfile.PostedFile Is Nothing) Then
        strFileNamePath = uploadfile.PostedFile.FileName
        intFileNameLength = InStr(1, StrReverse(strFileNamePath), "")
        strFileNameOnly = Mid(strFileNamePath, (Len(strFileNamePath) - intFileNameLength) + 2)
        'If File.Exists(paths & strFileNameOnly) Then
        '    lblMessage.Text = "Image of Similar name already Exist,Choose other name"
        'Else
        If uploadfile.PostedFile.ContentLength > 40000 Then
            lblmessage.Text = "The Size of file is greater than 4 MB"
        ElseIf strFileNameOnly = "" Then
            Exit Sub
        Else
            'strfilename = uploadfile.FileName.Substring(0, (InStr(uploadfile.FileName, ".") - 1))
            strFileNameOnly = fileName & ".csv"
            strpath = "/sites/marketing/apps/disposition/content/excel/" '& strFileNameOnly
            uploadfile.PostedFile.SaveAs(Server.MapPath(strpath) & strFileNameOnly)
            'lblmessage.Text = "File Upload Success."
            'Session("Img") = strFileNameOnly
            Dim strConn As String = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & (Server.MapPath(strpath)) & ";Extended Properties=""Text;HDR=No;FMT=Delimited"""
            Dim conn As New OleDb.OleDbConnection(strConn)
            Dim myData As New OleDbDataAdapter("SELECT * FROM " & strFileNameOnly, conn)
            Dim myDatatable As New System.Data.DataTable
            Dim mySqlConnection = New SqlConnection(ConfigurationManager.ConnectionStrings("BDOConnectionString").ToString())
            myData.Fill(myDatatable)
            InsertData(myDatatable, mySqlConnection)
            'System.IO.File.Delete(Server.MapPath(strpath))
            GridView1.DataBind()
            upload.Visible = False
        End If
    End If
    ' GridView1.DataSource = myDataset.Tables(0).DefaultView
    ' GridView1.DataBind()
End Sub

Private Shared Sub InsertData(ByVal sourceTable As System.Data.DataTable, ByVal destConnection As SqlConnection)
    ' old method: Lots of INSERT statements
    ' first, create the insert command that we will call over and over:
    destConnection.Open()
    Using ins As New SqlCommand("INSERT INTO [tblAppointmentDisposition] ([contactdate], [dnbnumber], [prospectname], [businessofficer], [phonemeeting], [followupcalldate2], [phonemeetingappt], [followupcalldate3], [appointmentdate], [appointmentlocation], [appointmentkept], [applicationgenerated], [applicationgenerated2], [applicationgenerated3], [comments], [newaccount], [futureopportunity]) VALUES (@contactdate, @dnbnumber, @prospectname, @businessofficer, @phonemeeting, @followupcalldate2, @phonemeetingappt, @followupcalldate3, @appointmentdate, @appointmentlocation, @appointmentkept, @applicationgenerated, @applicationgenerated2, @applicationgenerated3, @comments, @newaccount, @futureopportunity)", destConnection)
        ins.CommandType = CommandType.Text
        ins.Parameters.Add("@contactdate", SqlDbType.Text)
        ins.Parameters.Add("@dnbnumber", SqlDbType.Text)
        ins.Parameters.Add("@prospectname", SqlDbType.Text)
        ins.Parameters.Add("@businessofficer", SqlDbType.NVarChar)
        ins.Parameters.Add("@phonemeeting", SqlDbType.Text)
        ins.Parameters.Add("@followupcalldate2", SqlDbType.Text)
        ins.Parameters.Add("@phonemeetingappt", SqlDbType.Text)
        ins.Parameters.Add("@followupcalldate3", SqlDbType.Text)
        ins.Parameters.Add("@appointmentdate", SqlDbType.Text)
        ins.Parameters.Add("@appointmentlocation", SqlDbType.Text)
        ins.Parameters.Add("@appointmentkept", SqlDbType.Text)
        ins.Parameters.Add("@applicationgenerated", SqlDbType.Text)
        ins.Parameters.Add("@applicationgenerated2", SqlDbType.Text)
        ins.Parameters.Add("@applicationgenerated3", SqlDbType.Text)
        ins.Parameters.Add("@comments", SqlDbType.Text)
        ins.Parameters.Add("@newaccount", SqlDbType.Text)
        ins.Parameters.Add("@futureopportunity", SqlDbType.Text)
        ' and now, do the work:
        For Each r As DataRow In sourceTable.Rows
            ' skip the header row: IndexOf(sourceTable.Rows(0)) is always 0, which
            ' VB treats as False, so nothing was ever skipped; test the current row instead
            If sourceTable.Rows.IndexOf(r) = 0 Then
                ' do nothing: this is the header row
            Else
                For i As Integer = 0 To 16
                    ins.Parameters(i).Value = r(i)
                Next
                ins.ExecuteNonQuery()
                'If System.Threading.Interlocked.Increment(rowscopied) Mod 10000 = 0 Then
                '    Console.WriteLine("-- copied {0} rows.", rowscopied)
                'End If
            End If
        Next
    End Using
    destConnection.Close()
End Sub
I am looking for the best practice when passing a parameter to a stored procedure that is not always needed. For example, sometimes users will want the list filtered to a certain state; other times they want all states. How can I make the stored procedure ignore the WHERE clause when users want all states?
CREATE PROCEDURE usp_Example
    @State nvarchar(2)
AS
SELECT FirstName, LastName, State
FROM SomeTable
WHERE State = @State;
GO
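A common pattern is to give the parameter a NULL default and short-circuit the filter. A minimal sketch based on the procedure above:

CREATE PROCEDURE usp_Example
    @State nvarchar(2) = NULL
AS
-- when @State is NULL the first condition is true for every row,
-- so the State filter is effectively ignored
SELECT FirstName, LastName, State
FROM SomeTable
WHERE @State IS NULL OR State = @State;
GO

Callers pass a state code to filter, or omit the parameter (or pass NULL) to get all states. On large tables, adding OPTION (RECOMPILE) to the SELECT can help the optimizer pick a good plan for each case.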
Hello all, does the BCP utility enable you to selectively import rows from a flat file to a table? For example: the first column in my flat file contains a record type (1, 2, ... 7), and I only need to import types 1, 2, and 3. Can this be specified in the .fmt file? Thanks in advance. hharry
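The format file can only map or skip columns, not filter rows. One workaround (table, column, and file names here are placeholders, not from the question) is to load the whole file into a staging table and filter from there:

-- load everything into a staging table first
BULK INSERT dbo.StagingRecords
FROM 'C:\data\input.dat'
WITH (FORMATFILE = 'C:\data\input.fmt');

-- then keep only the record types you care about
INSERT INTO dbo.TargetRecords (RecordType, Col1, Col2)
SELECT RecordType, Col1, Col2
FROM dbo.StagingRecords
WHERE RecordType IN (1, 2, 3);

BULK INSERT is used here for convenience; the same staging-table idea works with bcp in from the command line.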
Suppose you have a database with huge tables and transactional replication (push subscriptions).
Now my question:
1. If I have to initialize a snapshot but would like to do it without the snapshot agent, what methods are available?
2. Usually the distribution agent will request an initial snapshot. How can I tell it that I would like to use an alternative method, so that the distribution agent does NOT request a snapshot?
3. Any suggestions on good practice for materializing huge tables without using the distribution agent (e.g. "switch off" replication, bcp the table out of the primary site and bcp it into the target site, then "start" the distribution agent so that it doesn't request a snapshot)?
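A sketch of one documented route (hedged; check the parameters against Books Online for your version): create the subscription with a sync type that tells replication the data is already in place, so no snapshot is requested:

EXEC sp_addsubscription
    @publication    = N'MyPublication',     -- placeholder name
    @subscriber     = N'MySubscriber',      -- placeholder name
    @destination_db = N'MySubscriberDb',    -- placeholder name
    @sync_type      = N'replication support only';

With @sync_type = 'replication support only' you are responsible for materializing the tables yourself (for example with bcp out/in, as you describe) before letting the distribution agent run.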
I have an Excel workbook with many sheets. In each sheet the first row has to be skipped; the second row contains the column information, and thereafter are the records.
The Excel Source in SSIS gives just one option: a checkbox for whether the first row has column names.
But the first row for me is junk -- a link to the parent or first sheet -- and has to be skipped; the second row has the column info.
How can this be accomplished? Any suggestions would be of great help!
Sample:

Main
id    desc     price   date
1     apple    1.0     1/1/1900
2     banana   2.0     1/1/2000
Main in the first row is actually a hyperlink; once you click it, it takes you to the first sheet in the workbook, which has all the sheet names as contents.
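One way to make the Excel driver start at the second row is to query an explicit range so the junk row falls outside it. A hedged sketch using OPENROWSET from T-SQL (the file path, sheet name, and range are assumptions; the same range syntax can be used in the Excel Source's SQL command):

SELECT *
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'Excel 8.0;Database=C:\data\book.xls;HDR=YES',
                'SELECT * FROM [Sheet1$A2:D65536]');

Because the range starts at A2, row 1 (the hyperlink) is never read, and with HDR=YES row 2 is treated as the column headers.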
Is there a way to split at this point and put the 12 rows in a different location? The task is twofold: I don't need these control rows in my data, and I need the value of "records" to verify the number of rows loaded.
UPDATED: After some testing I found out that the Flat File source does not see the footer at all. This is good and bad -- I do want to load this metadata into some other tables.
I have a script that creates and populates several tables. However, I only want this to occur if one table has a row count greater than zero. I'm trying to use GOTO to jump to the end of the script, but I get the message "A GOTO statement references the label 'MYLABEL' but the label has not been declared." How can I do this?
I have something similar to the following in my script:

IF (SELECT COUNT(*) FROM MYTABLE) = 0
BEGIN
    PRINT 'NO ROWS FOUND'
    GOTO MYLABEL
END
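The label itself has to be declared in the same batch, as a name followed by a colon. A minimal sketch:

IF (SELECT COUNT(*) FROM MYTABLE) = 0
BEGIN
    PRINT 'NO ROWS FOUND'
    GOTO MYLABEL
END

-- ... statements that create and populate the tables ...

MYLABEL:

Note that GOTO cannot jump across batches, so there must be no GO between the GOTO and the label; if your script uses GO separators, they would need to be removed or the logic restructured.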
Hi all, I have this data file with fixed-length records (see below). I am able to insert it into the database using bcp, but now I want to skip (not insert) the rows which start with the letter 'S'. Is there a way to do it? By the way, I am using the -F2 option to skip the first record. Here is my data:

Record 1 04XXX
2 13106900240120042003040045061 Testing N POLYDOROS TRUSTEEE
2 12621241640280041004040045633 What are they MARTIN &XXXXXS C
1000003200400409850000059611000000500001000000001 9613000000576497500
S X1000003200000209850000059613000000000000000000001 9613000000573497000

Thanks for your help.
Ted Lee
I am reading in a delimited file. In the Script Transformation Editor, if the UPC does not pass the checksum test, I want to throw the row out right then. I am not sure how to do that... but it is probably really simple. Thanks, Linda
Here is my script:
' Microsoft SQL Server Integration Services user script component
' This is your new script component in Microsoft Visual Basic .NET
' ScriptMain is the entrypoint class for script components
'Option Strict Off
Imports System
Imports System.Data
Imports System.Math
Imports Microsoft.SqlServer.Dts.Pipeline.Wrapper
Imports Microsoft.SqlServer.Dts.Runtime.Wrapper
Public Class ScriptMain
Inherits UserComponent
    Private Function DoubleTest(ByVal Value As String) As Boolean
        Dim d As Double
        ' returns False when the value cannot be parsed as a number
        If Not Double.TryParse(Value, d) Then
            'Windows.Forms.MessageBox.Show(Value + " is not numeric")
            Return False
        End If
        Return True
    End Function
In the Flat File Source properties window there's a Preview node; when we check that node, there's an option to skip a given number of data rows. Does it affect the result?
I have a particular problem with the unpivot. Below is my flat file source. The dates can go up to 130 columns, and this count can also vary. SM, SR, SB are values repeating for different instruments; they are the values of the instrument on the particular dates. This is a snapshot of one feed; other feeds may have different dates. How do I read this file?
Problem 1: If I skip the first row and unpivot the 2nd row, then with a new feed with new dates my SSIS package will bomb, as it will not find the column names.
Problem 2: If I uncheck "Use first row as column headers" then problem 1 is solved, but the output will be
20080101
20061102
20061103 1.2
1.3
1.2.
1.5
.....and so on..
Is there any other way to fix this? These are feeds with the spread values of instruments on particular dates. Please help.
RUN 2.01E+11 132238 0 45
INSTRID DATATYPES 20081101 20061102 20061103
Z03369 SM 1.1 1.2 1.3
Z03369 SB 1.3 1.3 1.7
Is it possible to write a SQL statement that skips alphanumeric values? I have a field containing these values: 20, 70, 150, 140, 100, KORT, 90, 180, 160. I'm trying to check whether any value is bigger than 175 (@Limit), but I want to skip the value 'KORT'. So is it possible to check whether a value is numeric or not? ISNULL(CONVERT(int, ProductVariant.Size), 0) > @Limit. Regards, Sigurd
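One way is to gate the conversion behind ISNUMERIC, a minimal sketch against the ProductVariant.Size column from the question (@Limit as in the question):

SELECT *
FROM ProductVariant
WHERE CASE WHEN ISNUMERIC(ProductVariant.Size) = 1
           THEN CONVERT(int, ProductVariant.Size)
           ELSE 0   -- non-numeric values like 'KORT' compare as 0 and are skipped
      END > @Limit;

Caveat: ISNUMERIC accepts a few oddities (currency symbols, scientific notation, lone periods), so a value like '1e3' would slip through; for this particular data set of plain integers versus 'KORT' it is sufficient.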
I'm trying to optimize a few batch import procedures we use in our processes.
It currently works like this:
1) A cursor loop cycles through all data to be imported from the IMPORT table
2) For every record there is an attempted insert into the PROD table inside a TRY-CATCH check, to see whether the record would pass all the primary key and foreign key constraints on the PROD table
3) Only those that pass the TRY-CATCH check get imported into the PROD table
4) Every row gets logged into a separate LOG table, either with a comment like "Import OK" or "Error: foreign key violation in field 'my_id'"
The thing is, the procedure runs fine when I'm importing several thousand records, but when it comes to hundreds of thousands, speed becomes an issue: I currently get 20 records per second, and it keeps slowing down...
There is no other code in that procedure, no queries. Just the Cursor cycle and the try-catch check.
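A set-based alternative (a different technique from the row-by-row TRY-CATCH, sketched here with hypothetical key columns and a hypothetical PARENT table as the foreign key target) classifies all rows in one pass and inserts the good ones in bulk:

-- log every row with its outcome in one statement
INSERT INTO LOG (import_id, comment)
SELECT i.import_id,
       CASE WHEN p.id IS NOT NULL THEN 'Error: primary key violation'
            WHEN f.my_id IS NULL  THEN 'Error: foreign key violation in field ''my_id'''
            ELSE 'Import OK' END
FROM IMPORT i
LEFT JOIN PROD p   ON p.id = i.id         -- would-be PK duplicates
LEFT JOIN PARENT f ON f.my_id = i.my_id;  -- FK target

-- then insert only the rows that pass both checks
INSERT INTO PROD (id, my_id, payload)
SELECT i.id, i.my_id, i.payload
FROM IMPORT i
WHERE NOT EXISTS (SELECT 1 FROM PROD p WHERE p.id = i.id)
  AND EXISTS (SELECT 1 FROM PARENT f WHERE f.my_id = i.my_id);

Two statements replace hundreds of thousands of individual inserts, which is usually orders of magnitude faster than the cursor.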
My replication application needs to be able to skip the logs remaining from its last stop (that is, to skip the logs generated since the last time my replication application stopped). How can I accomplish this? Thanks.
I have a .xlsx file in which the data starts on the 3rd row. Using SSIS I am converting the .xlsx file into a .csv file. I am able to convert it, but in the .csv file the data is populated from the first row. I want the data to start on the 3rd row, as in the source.
I have a text field in my table. My sample data looks like: <<some status>> <<3/9/2008 10:00:45 PM>> <<personname>>. I'm interested in searching for <<some status>> and <<personname>> together, skipping the date in between, so my query's result set should return the status and the same person name I am looking for.
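A minimal sketch using LIKE, where the % wildcard absorbs the date between the two markers (the table and column names are placeholders):

SELECT *
FROM dbo.MyTable
-- the middle % matches the <<date>> marker and anything else between the two tokens
WHERE TextField LIKE '%<<some status>>%<<personname>>%';

LIKE also works on text columns, though for more complex matching on older versions you may need PATINDEX or a CAST to varchar(max).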
I can skip the first line with a Transform Data Task, but it looks like I cannot skip the first line with BULK INSERT. BULK INSERT is faster; can anybody help?
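BULK INSERT can in fact skip leading lines with the FIRSTROW option. A minimal sketch (the path, table name, and delimiters are placeholders):

BULK INSERT dbo.TargetTable
FROM 'C:\data\input.txt'
WITH (
    FIRSTROW = 2,                -- start at the second line, skipping the header
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
);

One caveat: FIRSTROW counts rows as delimited by the row terminator, so a header line whose layout differs from the data rows (for example, fewer field terminators) can still confuse the parse.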